| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,549,259,513 | pytorch | Segmentation fault (core dumped) in `torch._fft_r2c`/`torch._fft_c2c`/`torch._fft_c2r` | ### 🐛 Describe the bug
`torch._fft_r2c`, `torch._fft_c2c`, and `torch._fft_c2r` trigger a crash when the `dim` argument is a special negative number.
Minimal example:
```
https://colab.research.google.com/drive/1zX87Tt6Kr5_qyQCBv90SCXiPJe5bvrW1?usp=sharing
```
output:
```
Segmentation fault (core dumped)
```
### Versions
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Oct 13 2022, 21:15:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6530
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 320 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] numpy 2.0.1 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi | module: crash,triaged,module: edge cases | low | Critical |
2,549,268,180 | pytorch | Segmentation fault (core dumped) in `torch._weight_norm`/`torch._weight_int8pack_mm` | ### 🐛 Describe the bug
Under specific inputs, `torch._weight_norm` and `torch._weight_int8pack_mm` trigger a crash.
Minimal examples:
**torch._weight_norm**
```
import torch
v = torch.full((11, 0, 4, 0, 0, 5, 6, 8, 0, 10, 0, 0, 10, 0, 0, 3, 12, 15, 0, 11,), -1.5e+300, dtype=torch.float64, requires_grad=False)
g = torch.full((11, 0, 4, 0, 0, 5, 6, 8, 0, 10, 0, 0, 10, 0, 0, 3, 12, 15, 0, 11,), -1.5e+300, dtype=torch.float64, requires_grad=False)
dim = 0
torch._weight_norm(v, g, dim)
```
**torch._weight_int8pack_mm**
```
import torch
self = torch.tensor([[0., 0., 0.], [0., 0., -1.]], dtype=torch.bfloat16)
mat2 = torch.full((3, 3, 3, 3), -1, dtype=torch.int8, requires_grad=False)
qScaleAndZeros = torch.tensor([0,0,0], dtype=torch.bfloat16)
C = torch._weight_int8pack_mm(self, mat2, qScaleAndZeros)
```
output:
```
Segmentation fault (core dumped)
```
### Versions
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Oct 13 2022, 21:15:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6530
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 320 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] numpy 2.0.1 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi | module: crash,triaged,module: edge cases | low | Critical |
2,549,283,097 | kubernetes | Ephemeral storage exhausted by users not mounting the emptyDir | ### What happened?
A user requested/limited ephemeral storage but forgot to mount the emptyDir volume. The application wrote its data into the image snapshot in containerd, so the data landed on the separately mounted /var/lib/containerd, not /var/lib/kubelet. The kubelet watches the size of /var/lib/kubelet and therefore never evicted the pod; /var/lib/containerd filled up and the node became unresponsive.
### What did you expect to happen?
Not sure, but the current ephemeral-storage mechanism is not intuitive to users. The data ends up in two different folders, and Kubernetes has no control over the size of the containerd snapshots.
### How can we reproduce it (as minimally and precisely as possible)?
Limit ephemeral storage, do not mount the emptyDir volume, write some data to /tmp, and watch /var/lib/containerd fill up.
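A minimal manifest along these lines (image, names, and sizes here are illustrative guesses, not taken from the original report) would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-demo          # hypothetical name
spec:
  containers:
  - name: writer
    image: ubuntu:20.04
    # Writes to /tmp go to the container's writable layer under
    # /var/lib/containerd, because the emptyDir below is never mounted.
    command: ["sh", "-c", "dd if=/dev/zero of=/tmp/fill bs=1M count=10240; sleep 3600"]
    resources:
      limits:
        ephemeral-storage: "1Gi"
  volumes:
  - name: scratch               # declared but not referenced by any volumeMounts
    emptyDir:
      sizeLimit: 1Gi
```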
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.31.1
Kustomize Version: v5.4.2
Server Version: v1.28.11
```
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.6 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.6 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
$ uname -a
Linux k8s-epyc-01.sdsc.optiputer.net 5.4.0-195-generic #215-Ubuntu SMP Fri Aug 2 18:28:05 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
containerd/now 1.7.2-0ubuntu1~20.04.1 amd64 [installed,upgradable to: 1.7.12-0ubuntu2~20.04.1]
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/storage,sig/node,triage/needs-information,needs-triage | medium | Critical |
2,549,284,817 | pytorch | Segmentation fault (core dumped) in `torch.ao.nn.quantized.dynamic.LSTMCell/GRUCell` | ### 🐛 Describe the bug
Under specific inputs, `torch.ao.nn.quantized.dynamic.LSTMCell` and `torch.ao.nn.quantized.dynamic.GRUCell` trigger a crash.
Minimal example:
```
https://colab.research.google.com/drive/1TZYJTq3r7DoOtGBnc6Pothae8WS7C_tY#scrollTo=ztqRqVZL49Z7
```
output:
```
Segmentation fault (core dumped)
```
### Versions
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Oct 13 2022, 21:15:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6530
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 320 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] numpy 2.0.1 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim | module: crash,oncall: quantization | low | Critical |
2,549,296,633 | transformers | Add support for Molmo | ### Feature request
Hi,
Would it be possible to add support for [Molmo](https://huggingface.co/allenai/Molmo-7B-D-0924) (currently using custom code)?
Thanks!
### Motivation
Molmo is not supported
### Your contribution
N/A | New model,Feature request,Vision,Multimodal | low | Major |
2,549,301,883 | pytorch | Aborted (core dumped) in `torch.linalg.ldl_solve` with double free or corruption (out) | ### 🐛 Describe the bug
If the `pivots` parameter is `[0, 1]`, a crash is triggered, accompanied by the message "double free or corruption (out)".
Minimal example:
```
https://colab.research.google.com/drive/1oIhePrvpXX40xu73dvckBHkXDgjSc5tG?usp=sharing
```
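The notebook is the only repro, but the solution printed in the output below is consistent with a small diagonal system; a hedged reconstruction (the exact tensors are a guess) might look like:

```python
import torch

A = torch.tensor([[4., 0.],
                  [0., 8.]])
B = torch.tensor([[4.],
                  [3.]])

# Factor A = L D L^T; ldl_factor_ex returns (LD, pivots, info).
LD, pivots, info = torch.linalg.ldl_factor_ex(A)
x = torch.linalg.ldl_solve(LD, pivots, B)
print(x)  # -> [[1.0], [0.375]]

# Reportedly, replacing the pivots returned by the factorization with the
# tensor [0, 1] corrupts the heap — left commented out on purpose:
# bad = torch.tensor([0, 1], dtype=pivots.dtype)
# torch.linalg.ldl_solve(LD, bad, B)  # double free or corruption (out)
```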
output:
```
Solution: tensor([[1.0000],
[0.3750]])
double free or corruption (out)
Aborted (core dumped)
```
### Versions
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Oct 13 2022, 21:15:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6530
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 320 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] numpy 2.0.1 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | module: crash,triaged,module: linear algebra | low | Critical |
2,549,322,514 | pytorch | Floating point exception (core dumped) in `torch.ao.nn.intrinsic.quantized.ConvReLU1d/ConvReLU2d/ConvReLU3d` when stride=0 | ### 🐛 Describe the bug
`torch.ao.nn.intrinsic.quantized.ConvReLU1d`/`ConvReLU2d`/`ConvReLU3d` trigger a crash when `stride=0`.
Minimal example:
```
https://colab.research.google.com/drive/1rkXIVz3XMmC9Cka63QLKaSwHjwHsNZRh?usp=sharing
```
output:
```
Floating point exception (core dumped)
```
### Versions
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Oct 13 2022, 21:15:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6530
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 320 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] numpy 2.0.1 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi | triaged,module: edge cases | low | Critical |
2,549,342,425 | pytorch | Segmentation fault (core dumped) in `torch.nn.functional.max_pool1d` | ### 🐛 Describe the bug
`torch.nn.functional.max_pool1d` triggers a crash when `stride` and `kernel_size` are extremely large (here, `2**63 - 1`).
Minimal example:
```
import torch
kernel_size = 9223372036854775807
stride = 9223372036854775807
input_params = torch.randn(2, 10, 4)
output = torch.nn.functional.max_pool1d(input_params, kernel_size, stride)
```
output:
```
Segmentation fault (core dumped)
```
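A speculative explanation (a hypothesis, not taken from the ATen sources): the symptom is consistent with the pooled output length being computed in 64-bit C++ arithmetic as `(L - kernel_size) / stride + 1`, where `/` truncates toward zero, with no check that the result may be non-positive. Emulating that arithmetic in Python:

```python
def c_div(a, b):
    # C integer division truncates toward zero; Python's // floors instead.
    q = abs(a) // abs(b)
    return q if (a >= 0) == (b >= 0) else -q

L = 4                                        # last-dim length from the repro
kernel_size = stride = 9223372036854775807   # 2**63 - 1

c_style_len = c_div(L - kernel_size, stride) + 1
print(c_style_len)  # 1 — one output "window" for which no valid input exists
```

If the kernel then reads a window that does not actually fit in the input, an out-of-bounds access (and hence the segfault) would follow; again, this is a sketch of a plausible mechanism, not a traced root cause.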
### Versions
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Oct 13 2022, 21:15:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6530
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 320 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] numpy 2.0.1 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @albanD | module: crash,triaged,module: python frontend,module: edge cases | low | Critical |
2,549,364,088 | yt-dlp | [bbc] Unable to extract playlist data - iplayer link | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
UK
### Provide a description that is worded well enough to be understood
When I try to download the video on this page: https://www.bbc.co.uk/ideas/videos/were-not-meant-to-be-happy-all-the-time/p05tl7t3 , yt-dlp reports an error. My version is 2024.08.06.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.bbc.co.uk/ideas/videos/were-not-meant-to-be-happy-all-the-time/p05tl7t3']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (pip)
[debug] Python 3.12.6 (CPython arm64 64bit) - macOS-13.7-arm64-arm-64bit (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 6.1.1-tessus (setts), ffprobe 6.1.1-tessus
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.46.1, urllib3-2.2.2, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1830 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.08.06 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.08.06 from yt-dlp/yt-dlp)
[bbc] Extracting URL: https://www.bbc.co.uk/ideas/videos/were-not-meant-to-be-happy-all-the-time/p05tl7t3
[bbc] p05tl7t3: Downloading webpage
ERROR: [bbc] p05tl7t3: Unable to extract playlist data; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "/opt/homebrew/Cellar/yt-dlp/2024.8.6/libexec/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 740, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/yt-dlp/2024.8.6/libexec/lib/python3.12/site-packages/yt_dlp/extractor/bbc.py", line 1469, in _real_extract
self._search_regex(
File "/opt/homebrew/Cellar/yt-dlp/2024.8.6/libexec/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 1333, in _search_regex
raise RegexNotFoundError(f'Unable to extract {_name}')
```
| geo-blocked,site-bug | low | Critical |
2,549,377,129 | ui | [bug]: Resizable, wrong resize cursor on hover over different resize direction after resize | ### Describe the bug
When resizing using the resizable demo at [Resizable Component](https://ui.shadcn.com/docs/components/resizable)
When resizing and then not clicking on anything afterwards, the wrong resize cursor is displayed on hover.
For example, after resizing horizontally or vertically and finishing the resize, if you don't click anything with the mouse and then hover over a handle for the other axis, the cursor from the previous resize is shown.
So if we resize vertically and, without clicking anything, hover over a horizontal resize handle, we get the vertical resize cursor.
### Affected component/components
Resizable, ResizableHandle
### How to reproduce
1. Resize horizontally or vertically (don't click or select anything after resize)
2. Hover over a different axis resize bar.
3. Last resized axis mouse cursor is displayed on hover instead of current resize axis.
### Codesandbox/StackBlitz link
https://ui.shadcn.com/docs/components/resizable
### Logs
_No response_
### System Info
```bash
Google Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,549,412,468 | pytorch | Aborted (core dumped) in `torch.package.package_exporter.PackageExporter`/`torch.package.PackageExporter` | ### ๐ Describe the bug
torch.package.package_exporter.PackageExporter and torch.package.PackageExporter trigger a crash when `f` is an invalid input (e.g. an integer instead of a path or file-like object).
minimal example:
**torch.package.package_exporter.PackageExporter**
```
import torch.package.package_exporter
package_exporter = torch.package.package_exporter.PackageExporter(f=123)
```
**torch.package.PackageExporter**
```
import torch.package
try:
exporter = torch.package.PackageExporter(f=1, importer=1)
exporter.export()
except Exception as e:
print("An error occurred:", e)
```
output:
```
Aborted (core dumped)
```
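Until the constructor validates its argument in Python, a defensive guard on the caller's side avoids the hard crash. The sketch below (pure Python, illustrative names, not PyTorch's actual validation) shows the kind of check the `f` parameter would need: a path or a writable binary file-like object.

```python
import io
import os

def validate_package_target(f):
    # PackageExporter expects a file path or a binary writable object;
    # anything else (e.g. the int from the repro) should fail loudly in
    # Python with a TypeError rather than abort in native code.
    if isinstance(f, (str, os.PathLike)):
        return f
    if hasattr(f, "write") and hasattr(f, "flush"):
        return f
    raise TypeError(
        f"expected a path or writable file object, got {type(f).__name__}")

validate_package_target("model.pt")    # ok: path
validate_package_target(io.BytesIO())  # ok: writable buffer
try:
    validate_package_target(123)       # the crashing input from the repro
except TypeError as e:
    print(e)
```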
### Versions
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Oct 13 2022, 21:15:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6530
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 320 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] numpy 2.0.1 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi | module: crash,oncall: package/deploy | low | Critical |
2,549,460,234 | rust | s390x vector facilities support | This tracks the status of s390x vector facilities support in rustc and standard libraries.
- ABI support
- [x] support z13 vector ABI: done in https://github.com/rust-lang/rust/pull/131586
https://github.com/rust-lang/rust/blob/58420a065b68ecb3eec03b942740c761cdadd5c4/compiler/rustc_target/src/abi/call/s390x.rs#L1-L2
- [x] remove explicit disabling of the vector feature (blocked on the above FIXME): done in https://github.com/rust-lang/rust/pull/131586
https://github.com/rust-lang/rust/blob/58420a065b68ecb3eec03b942740c761cdadd5c4/compiler/rustc_target/src/spec/targets/s390x_unknown_linux_gnu.rs#L9-L11
- target_feature support
- [x] support nightly-only `cfg(target_feature = "vector")` and unstable `target_feature(enable = "vector")` (under `feature(s390x_target_feature)`): done in https://github.com/rust-lang/rust/pull/127506
- [ ] support other vector facility-related target features: vector-enhancements-1, vector-enhancements-2, etc. (see also https://github.com/rust-lang/rust/issues/88937); pending in https://github.com/rust-lang/rust/pull/135630
- [ ] stabilize `target_feature = "vector"` (and other vector-related target features if available)
~~At least blocked until ABI support done~~ UPDATE: ABI support done
More work may be needed given the [precedent of postponed stabilization of vector features in RISC-V](https://github.com/rust-lang/rust/pull/116485#issuecomment-1755499395).
- asm support
- [x] support clobber-only vector registers: done in https://github.com/rust-lang/rust/pull/130630
- [x] stabilize `feature(asm_experimental_arch)` on s390x: done in https://github.com/rust-lang/rust/pull/131258
- [x] support `#[repr(simd)]` types in input/output of asm (i.e., fully support vector registers) under `feature(asm_experimental_reg)`: pending in https://github.com/rust-lang/rust/pull/131664
Neither [LLVM](https://llvm.org/docs/LangRef.html#supported-constraint-code-list) nor [GCC](https://gcc.gnu.org/onlinedocs/gcc/Machine-Constraints.html) documents the `v` constraint, but it actually [seems to be supported](https://github.com/llvm/llvm-project/blob/0dbc85a59f556736133019f655e7110fc7ae8847/clang/lib/Basic/Targets/SystemZ.cpp#L76).
~~Blocked until ABI support done.~~ UPDATE: ABI support done
- [ ] stabilize `feature(asm_experimental_reg)` for s390x vector registers (https://github.com/rust-lang/rust/issues/133416)
- core_arch support (tracking issue: https://github.com/rust-lang/rust/issues/135681)
- [ ] add unstable vector intrinsics to core::arch::s390x
~~Blocked until ABI support done.~~ UPDATE: ABI support done
- std_detect support (tracking issue: https://github.com/rust-lang/rust/issues/135413)
- [x] support unstable `is_s390x_feature_detected!("vector")`: done in https://github.com/rust-lang/stdarch/pull/1699
@rustbot label +O-SystemZ | A-inline-assembly,T-compiler,A-SIMD,O-SystemZ,T-libs,A-ABI | low | Major |
2,549,463,834 | pytorch | Be smart about autograd formulas saving either the input or output, depending on context | ### ๐ The feature, motivation and pitch
See https://github.com/NVIDIA/apex/pull/1715
The general idea is that for some operators, in principle the autograd formula can be written depending on *either* the input or the output.
For example, for the norm, usually `x_hat` is computed by saving the mean and rstd from the forward pass, and then computing `(input - mean) * rstd`. However, we can also compute it by recomputing it from the *output*, such as by doing `(output - bias) / weight`.
Which one is better is actually somewhat tricky to determine. For example, if the subsequent op requires the output of the norm (like a matmul), then it would be better to save the output (i.e. `matmul(norm(x))`). On the other hand, if we already need to save the input of the norm, then saving the output would just require us to save more than necessary.
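To make the two recomputation options concrete, here is a small pure-Python sketch (element-wise 1-D case, no PyTorch dependency; function names are illustrative) showing that for an affine normalization `y = x_hat * weight + bias`, the same `x_hat` can be recovered either from the saved input plus statistics, or from the saved output alone:

```python
import math

def normalize(x, weight, bias, eps=1e-5):
    # Forward pass of a 1-D layer-norm-like op; returns the output plus
    # the statistics a backward pass might save.
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    rstd = 1.0 / math.sqrt(var + eps)
    x_hat = [(v - mean) * rstd for v in x]
    y = [h * weight + bias for h in x_hat]
    return y, mean, rstd

def x_hat_from_input(x, mean, rstd):
    # Variant 1: recompute x_hat from the saved input + statistics.
    return [(v - mean) * rstd for v in x]

def x_hat_from_output(y, weight, bias):
    # Variant 2: recompute x_hat from the saved output instead.
    return [(v - bias) / weight for v in y]

x = [1.0, 2.0, 3.0, 4.0]
y, mean, rstd = normalize(x, weight=2.0, bias=0.5)
a = x_hat_from_input(x, mean, rstd)
b = x_hat_from_output(y, weight=2.0, bias=0.5)
assert all(abs(u - v) < 1e-9 for u, v in zip(a, b))
```

Which variant is cheaper to *save for* is exactly the decision the pattern-matching pass would make.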
The general problem here is quite tricky, as it relies on "mathematical properties" (which autograd formulas can be rewritten?). But, I think if we choose a set of patterns we'd like to potentially rewrite, we could:
1. Do it as a pattern-matching pass on the joint graph.
2. Upon the pattern hitting, check if the input node or the output node already has other users. If so, we use the autograd formula that allows us to "reuse" the existing saved node.
Example:
```
import torch
import torch.nn as nn
import torch.nn.functional as F
torch.set_default_device('cuda')
class CustomLayerNormModule(nn.Module):
def __init__(self, input_dim, hidden_dim, output_dim):
super().__init__()
self.linear1 = nn.Linear(input_dim, input_dim)
self.layer_norm = nn.LayerNorm(input_dim)
self.linear2 = nn.Linear(input_dim, input_dim)
def forward(self, x):
x = self.linear1(x)
x = self.layer_norm(x)
x = self.linear2(x)
return x
# Example usage
batch_size = 1000
hidden_dim = 10000
# Create an instance of the custom module
model = torch.compile(CustomLayerNormModule(hidden_dim, hidden_dim, hidden_dim))
x = torch.randn(batch_size, hidden_dim)
# Forward pass
output = model(x)
```
<img width="1395" alt="image" src="https://github.com/user-attachments/assets/767f6522-ebdf-4c8e-86d4-2c5c88b90afd">
Of the non-inputs it saves, `addmm` and `add_1` are the nontrivial ones. But if we do the rewrite here, we could avoid saving `addmm` and only save `add_1`.
cc @msaroufim @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | module: performance,triaged,actionable,oncall: pt2,module: inductor | low | Major |
2,549,489,923 | deno | Nondeterministic `deno test` output with Deno.exit() | Running the following minimal reproduction code with `deno test` will not output the `console.error` message most of the time:
```ts
console.error('Error: API_KEY not set.');
Deno.exit(1);
```
This is a problem in real-life scenarios when tests should be aborted if setup conditions are not met.
Running it with `deno run` will output the message correctly:

I've looked at the documentation for [Deno.exit()](https://docs.deno.com/api/deno/~/Deno.exit) but don't see anything about leaks, and the [test Sanitizers](https://docs.deno.com/runtime/fundamentals/testing/#sanitizers) don't mention `console`.
```
deno 1.46.3 (stable, release, x86_64-unknown-linux-gnu)
v8 12.9.202.5-rusty
typescript 5.5.2
```
| bug,testing | low | Critical |
2,549,504,169 | PowerToys | [enhance] Permanent "Find My Mouse" | ### Description of the new feature / enhancement
**Presently:**
With Find My Mouse (FMM) enabled and currently active, when you click, it deactivates.
**Proposal:**
FMM's active state persists after LMB or RMB or MMB is pressed.
I have no preference regarding whether this is default behavior, or a toggleable option, but that should be considered.
### Scenario when this would be used?
I believe darkening the irrelevant parts of the screen is more effective for finding the mouse than using some color or animation.
This would be broadly useful for video makers, lecturers, visualization makers, and teachers demonstrating software interfaces with very small, very colorful buttons, such as Blender.
### Additionally
- This behavior should work alongside Mouse Highlighter, which displays the clicks themselves.
- Add transparency support for "Spotlight Color". I don't want to modify the underlying image.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,549,566,296 | pytorch | torch.unravel_index does not check out-of-bounds | ### ๐ Describe the bug
I'm unsure if this is intended behaviour or not, but when using torch.unravel_index and entering indices that are out-of-bounds, the returned indices wrap back around to the start of the tensor.
For example, unravelling indices 0 through 8 for a 2x2 array raises an error in numpy:
`np.unravel_index(np.arange(0,9,1), (2,2))`
`ValueError: index 4 is out of bounds for array with size 4`
but returns two copies of every index in torch:
`torch.unravel_index(torch.arange(0,9,1), (2,2))`
`(tensor([0, 0, 1, 1, 0, 0, 1, 1, 0]), tensor([0, 1, 0, 1, 0, 1, 0, 1, 0]))`
In particular, accessing the 5th element of a 2x2 array would access the [0,0] element.
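As a caller-side workaround until this is addressed, indices can be validated before unravelling. The sketch below (pure Python, illustrative name; not how PyTorch implements it) reproduces NumPy's bounds check together with the row-major divmod arithmetic that unravelling performs:

```python
import math

def checked_unravel_index(indices, shape):
    # Reject out-of-bounds flat indices, mirroring NumPy's behaviour,
    # then unravel the survivors with row-major divmod arithmetic.
    size = math.prod(shape)
    for i in indices:
        if not 0 <= i < size:
            raise ValueError(
                f"index {i} is out of bounds for array with size {size}")
    result = []
    for i in indices:
        coords = []
        for dim in reversed(shape):
            i, r = divmod(i, dim)
            coords.append(r)
        result.append(tuple(reversed(coords)))
    return result

print(checked_unravel_index([0, 1, 2, 3], (2, 2)))
# -> [(0, 0), (0, 1), (1, 0), (1, 1)]
try:
    checked_unravel_index([4], (2, 2))  # now raises instead of wrapping
except ValueError as e:
    print(e)
```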
### Versions
PyTorch version: 2.3.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.6.1 (x86_64)
GCC version: Could not collect
Clang version: 12.0.5 (clang-1205.0.22.9)
CMake version: version 3.24.4
Libc version: N/A
Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 10:14:12) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.3.0
[pip3] torchviz==0.0.2
[conda] numpy 1.26.4 py312h7d6adbd_0
[conda] numpy-base 1.26.4 py312hbb3573c_0
[conda] pytorch 2.3.0 cpu_py312h9d484b6_0
[conda] torchviz 0.0.2 pypi_0 pypi
cc @mruberry @rgommers | triaged,module: numpy,module: advanced indexing | low | Critical |
2,549,569,425 | godot | Visual Profiler displays incorrect CPU times | ### Tested versions
4.3
### System information
Windows 10, Vulkan forward +, Nvidia 3070
### Issue description
I've been doing a lot of profiling lately to try to find performance issues and added in Tracy support. I've noticed the values profiled in the same area are VASTLY different. I'm getting sub-100 fps, but the render time displayed by the built-in Visual Profiler is only showing around 2ms to render a viewport while Tracy shows ~8ms.


I shifted some of the RENDER_TIMESTAMP macros around as they didn't catch all the code in the function, but they still seem to be falling way short.
### Steps to reproduce
Start the Visual Profiler and note that the time reported for rendering does not match the time needed to render the game at the current framerate. Optionally, get the Tracy Godot addon and add profile points around things like `RENDER_TIMESTAMP("> Render Viewports");` to compare values.
### Minimal reproduction project (MRP)
I don't think it's project specific, but you could use https://github.com/user-attachments/files/17140515/test_light_performance.zip | topic:core,topic:rendering,topic:editor,needs testing | low | Major |
2,549,635,677 | tauri | document that changing additional_browser_args require changing data_directory if multiple webviews will be opened | ### Describe the bug
Applying `additional_browser_args` (empty or not) to any window builder during app setup results in:
```
[2024-09-26][06:10:44][tauri_runtime_wry][ERROR] failed to create webview: WebView2 error:
WindowsError(Error { code: HRESULT(0x8007139F), message: "The group or resource is not in the correct state to perform the requested operation." })
```
I create a few windows during app setup (main, popover, etc.) and noticed buggy behaviour when `additional_browser_args` is applied to any of them.
For example, if I apply `additional_browser_args("--enable-features=msWebView2EnableDraggableRegions --disable-features=ElasticOverscroll")`, or even `.additional_browser_args("")`, to the main window builder, I get `HRESULT(0x8007139F)` for any webview created from the JavaScript frontend (TS/React).
But other windows created from the Rust backend work.
If I apply `additional_browser_args` to my secondary popover window, webviews are created from JS successfully, but interacting with the popover webview created from the Rust backend causes a panic.
Applying `additional_browser_args` to both the `main` and `popover` windows causes only `failed to create webview: WebView2 error` from JS, without a panic.
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
[โ] Environment
- OS: Windows 10.0.22631 x86_64 (X64)
โ WebView2: 129.0.2792.52
โ MSVC: Visual Studio Community 2022
โ rustc: 1.83.0-nightly (506f22b46 2024-09-19)
โ cargo: 1.83.0-nightly (a9a418d1a 2024-09-15)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: nightly-x86_64-pc-windows-msvc (default)
- node: 22.8.0
- npm: 10.8.3
[-] Packages
- tauri ๐ฆ: 2.0.0-rc.15
- tauri-build ๐ฆ: 2.0.0-rc.12
- wry ๐ฆ: 0.43.1
- tao ๐ฆ: 0.30.0
- tauri-cli ๐ฆ: 2.0.0-rc.7
- @tauri-apps/api ๎: 2.0.0-rc.4
- @tauri-apps/cli ๎: 2.0.0-rc.10
[-] Plugins
- tauri-plugin-process ๐ฆ: 2.0.0-rc.1
- @tauri-apps/plugin-process ๎: 2.0.0-rc.1
- tauri-plugin-notification ๐ฆ: 2.0.0-rc.5
- @tauri-apps/plugin-notification ๎: not installed!
- tauri-plugin-shell ๐ฆ: 2.0.0-rc.3
- @tauri-apps/plugin-shell ๎: 2.0.0-rc.1
- tauri-plugin-log ๐ฆ: 2.0.0-rc.2
- @tauri-apps/plugin-log ๎: 2.0.0-rc.1
- tauri-plugin-store ๐ฆ: 2.0.0-rc.3
- @tauri-apps/plugin-store ๎: 2.0.0-rc.1
- tauri-plugin-updater ๐ฆ: 2.0.0-rc.3
- @tauri-apps/plugin-updater ๎: 2.0.0-rc.1 (outdated, latest: 2.0.0-rc.2)
- tauri-plugin-autostart ๐ฆ: 2.0.0-rc.1
- @tauri-apps/plugin-autostart ๎: 2.0.0-rc.1
- tauri-plugin-single-instance ๐ฆ: 2.0.0-rc.4
- @tauri-apps/plugin-single-instance ๎: not installed!
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
- bundler: Vite
```
### Stack trace
```text
[2024-09-26][06:20:48][tauri_runtime_wry][ERROR] failed to create webview: WebView2 error: WindowsError(Error { code: HRESULT(0x8007139F), message: "The group or resource is not in the correct state to perform the requested operation." })
[2024-09-26][06:20:57][tao::platform_impl::platform::event_loop::runner][WARN] NewEvents emitted without explicit RedrawEventsCleared
[2024-09-26][06:20:57][tao::platform_impl::platform::event_loop::runner][WARN] RedrawEventsCleared emitted without explicit MainEventsCleared
thread 'main' panicked at src\windows\popover.rs:81:48:
called `Result::unwrap()` on an `Err` value: Runtime(FailedToReceiveMessage)
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'main' panicked at core\src\panicking.rs:221:5:
panic in a function that cannot unwind
stack backtrace:
0: 0x7ff69fd066d1 - std::backtrace_rs::backtrace::dbghelp64::trace
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/std\src\..\..\backtrace\src\backtrace\dbghelp64.rs:91
1: 0x7ff69fd066d1 - std::backtrace_rs::backtrace::trace_unsynchronized
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/std\src\..\..\backtrace\src\backtrace\mod.rs:66
2: 0x7ff69fd066d1 - std::sys::backtrace::_print_fmt
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/std\src\sys\backtrace.rs:66
3: 0x7ff69fd066d1 - std::sys::backtrace::impl$0::print::impl$0::fmt
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/std\src\sys\backtrace.rs:39
4: 0x7ff69fd2cbd9 - core::fmt::rt::Argument::fmt
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/core\src\fmt\rt.rs:177
5: 0x7ff69fd2cbd9 - core::fmt::write
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/core\src\fmt\mod.rs:1186
6: 0x7ff69fd012a7 - std::io::Write::write_fmt<std::sys::pal::windows::stdio::Stderr>
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/std\src\io\mod.rs:1823
7: 0x7ff69fd06515 - std::sys::backtrace::BacktraceLock::print
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/std\src\sys\backtrace.rs:42
8: 0x7ff69fd086d9 - std::panicking::default_hook::closure$1
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/std\src\panicking.rs:268
9: 0x7ff69fd0848f - std::panicking::default_hook
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/std\src\panicking.rs:295
10: 0x7ff69fd08d63 - std::panicking::rust_panic_with_hook
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/std\src\panicking.rs:801
11: 0x7ff69fd08bb2 - std::panicking::begin_panic_handler::closure$0
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/std\src\panicking.rs:667
12: 0x7ff69fd0718f - std::sys::backtrace::__rust_end_short_backtrace<std::panicking::begin_panic_handler::closure_env$0,never$>
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/std\src\sys\backtrace.rs:170
13: 0x7ff69fd087ee - std::panicking::begin_panic_handler
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/std\src\panicking.rs:665
14: 0x7ff69fd4c9d5 - core::panicking::panic_nounwind_fmt::runtime
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/core\src\panicking.rs:112
15: 0x7ff69fd4c9d5 - core::panicking::panic_nounwind_fmt
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/core\src\panicking.rs:122
16: 0x7ff69fd4ca83 - core::panicking::panic_nounwind
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/core\src\panicking.rs:221
17: 0x7ff69fd4cc07 - core::panicking::panic_cannot_unwind
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/core\src\panicking.rs:310
18: 0x7ff69f718e06 - webview2_com_sys::Microsoft::Web::WebView2::Win32::impl$991::new::Invoke<webview2_com::callback::WebResourceRequestedEventHandler_Impl,-1>
at D:\packages\cargo\registry\src\index.crates.io-6f17d22bba15001f\webview2-com-sys-0.33.0\src\Microsoft.rs:39957
19: 0x7ffc8468f540 - _CxxFrameHandler3
20: 0x7ffc846833d8 - is_exception_typeof
21: 0x7ffc94c94a26 - RtlCaptureContext2
22: 0x7ff69f718dbb - webview2_com_sys::Microsoft::Web::WebView2::Win32::impl$991::new::Invoke<webview2_com::callback::WebResourceRequestedEventHandler_Impl,-1>
at D:\packages\cargo\registry\src\index.crates.io-6f17d22bba15001f\webview2-com-sys-0.33.0\src\Microsoft.rs:39970
23: 0x7ffc1a879e98 - DllGetClassObject
24: 0x7ffc1a879cef - DllGetClassObject
25: 0x7ffc1a89189d - DllGetClassObject
26: 0x7ffc1a8e3d40 - DllCanUnloadNow
27: 0x7ffc1a8cf970 - DllCanUnloadNow
28: 0x7ffc1a8d27df - DllCanUnloadNow
29: 0x7ffc1a8d427d - DllCanUnloadNow
30: 0x7ffc1a8d41e7 - DllCanUnloadNow
31: 0x7ffc1a8d40f2 - DllCanUnloadNow
32: 0x7ffc1aac9391 - GetHandleVerifier
33: 0x7ffc1aaf8333 - GetHandleVerifier
34: 0x7ffc1a8d6083 - DllCanUnloadNow
35: 0x7ffc1aad3e5e - GetHandleVerifier
36: 0x7ffc1a94508c - DllCanUnloadNow
37: 0x7ffc1aa82c3a - GetHandleVerifier
38: 0x7ffc1aa83523 - GetHandleVerifier
39: 0x7ffc1aa831e3 - GetHandleVerifier
40: 0x7ffc1aa82de4 - GetHandleVerifier
41: 0x7ffc1aa82c3a - GetHandleVerifier
42: 0x7ffc1a9372f5 - DllCanUnloadNow
43: 0x7ffc1a937770 - DllCanUnloadNow
44: 0x7ffc1a99a9e9 - DllCanUnloadNow
45: 0x7ffc1aa811c3 - GetHandleVerifier
46: 0x7ffc1aa811c3 - GetHandleVerifier
47: 0x7ffc1aa826ff - GetHandleVerifier
48: 0x7ffc1aa788c4 - GetHandleVerifier
49: 0x7ffc1a8c0eae - DllCanUnloadNow
50: 0x7ffc1a8c0d16 - DllCanUnloadNow
51: 0x7ffc1a8c057d - DllCanUnloadNow
52: 0x7ffc1aae500c - GetHandleVerifier
53: 0x7ffc1aae4efc - GetHandleVerifier
54: 0x7ffc1aa0989f - DllCanUnloadNow
55: 0x7ffc933082e1 - DispatchMessageW
56: 0x7ffc93307da1 - DispatchMessageW
57: 0x7ff69e7f633f - windows::Win32::UI::WindowsAndMessaging::DispatchMessageW
at D:\packages\cargo\registry\src\index.crates.io-6f17d22bba15001f\windows-0.58.0\src\Windows\Win32\UI\WindowsAndMessaging\mod.rs:772
58: 0x7ff69e30945b - tao::platform_impl::platform::event_loop::EventLoop<enum2$<tauri_runtime_wry::Message<enum2$<tauri::EventLoopMessage> > > >::run_return<enum2$<tauri_runtime_wry::Message<enum2$<tauri::EventLoopMessage> > >,tauri_runtime_wry::impl$45::run::closure_env$0<enu
at D:\packages\cargo\registry\src\index.crates.io-6f17d22bba15001f\tao-0.30.0\src\platform_impl\windows\event_loop.rs:259
59: 0x7ff69e309adb - tao::platform_impl::platform::event_loop::EventLoop<enum2$<tauri_runtime_wry::Message<enum2$<tauri::EventLoopMessage> > > >::run<enum2$<tauri_runtime_wry::Message<enum2$<tauri::EventLoopMessage> > >,tauri_runtime_wry::impl$45::run::closure_env$0<enum2$<tau
at D:\packages\cargo\registry\src\index.crates.io-6f17d22bba15001f\tao-0.30.0\src\platform_impl\windows\event_loop.rs:221
60: 0x7ff69e473401 - tao::event_loop::EventLoop<enum2$<tauri_runtime_wry::Message<enum2$<tauri::EventLoopMessage> > > >::run<enum2$<tauri_runtime_wry::Message<enum2$<tauri::EventLoopMessage> > >,tauri_runtime_wry::impl$45::run::closure_env$0<enum2$<tauri::EventLoopMessage>,tau
at D:\packages\cargo\registry\src\index.crates.io-6f17d22bba15001f\tao-0.30.0\src\event_loop.rs:211
61: 0x7ff69e80a719 - tauri_runtime_wry::impl$45::run<enum2$<tauri::EventLoopMessage>,tauri::app::impl$16::run::closure_env$0<tauri_runtime_wry::Wry<enum2$<tauri::EventLoopMessage> >,void (*)(ref$<tauri::app::AppHandle<tauri_runtime_wry::Wry<enum2$<tauri::EventLoopMessage> > >
at D:\packages\cargo\registry\src\index.crates.io-6f17d22bba15001f\tauri-runtime-wry-2.0.0-rc.13\src\lib.rs:2633
62: 0x7ff69e37e140 - tauri::app::App<tauri_runtime_wry::Wry<enum2$<tauri::EventLoopMessage> > >::run<tauri_runtime_wry::Wry<enum2$<tauri::EventLoopMessage> >,void (*)(ref$<tauri::app::AppHandle<tauri_runtime_wry::Wry<enum2$<tauri::EventLoopMessage> > > >,enum2$<tauri::app::Run
at D:\packages\cargo\registry\src\index.crates.io-6f17d22bba15001f\tauri-2.0.0-rc.15\src\app.rs:1093
63: 0x7ff69e647f9a - sebn_taskbar_client::core::app::run
at D:\mykyta.nehrych\code\sebn\sebn-taskbar-client-next\src-tauri\src\core\app.rs:136
64: 0x7ff69e6b83ed - sebn_taskbar_client::main
at D:\mykyta.nehrych\code\sebn\sebn-taskbar-client-next\src-tauri\src\main.rs:17
65: 0x7ff69e4bcb8b - core::ops::function::FnOnce::call_once<void (*)(),tuple$<> >
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library\core\src\ops\function.rs:250
66: 0x7ff69e3f72de - core::hint::black_box
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library\core\src\hint.rs:389
67: 0x7ff69e3f72de - std::sys::backtrace::__rust_begin_short_backtrace<void (*)(),tuple$<> >
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library\std\src\sys\backtrace.rs:154
68: 0x7ff69e123b51 - std::rt::lang_start::closure$0<tuple$<> >
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library\std\src\rt.rs:164
69: 0x7ff69fcfad89 - std::rt::lang_start_internal::closure$2
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/std\src\rt.rs:143
70: 0x7ff69fcfad89 - std::panicking::try::do_call
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/std\src\panicking.rs:557
71: 0x7ff69fcfad89 - std::panicking::try
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/std\src\panicking.rs:520
72: 0x7ff69fcfad89 - std::panic::catch_unwind
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/std\src\panic.rs:348
73: 0x7ff69fcfad89 - std::rt::lang_start_internal
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library/std\src\rt.rs:143
74: 0x7ff69e123b2a - std::rt::lang_start<tuple$<> >
at /rustc/506f22b4663f3e756e1e6a4f66c6309fdc00819c\library\std\src\rt.rs:163
75: 0x7ff69e6b8459 - main
76: 0x7ff69fd49cec - invoke_main
at D:\a\_work\1\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:78
77: 0x7ff69fd49cec - __scrt_common_main_seh
at D:\a\_work\1\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:288
78: 0x7ffc937c257d - BaseThreadInitThunk
79: 0x7ffc94c4af28 - RtlUserThreadStart
thread caused non-unwinding panic. aborting.
```
### Additional context
_No response_ | type: documentation | low | Critical |
2,549,655,620 | flutter | [BUG] Clipboard Changes Not Detected in Flutter on Linux (Wayland) Unless App is Focused | ### Steps to reproduce
1. Run a Flutter app on a Linux system using a `Wayland` session.
2. The app listens for clipboard changes on a timer using the built-in `Clipboard` API.
3. Copy new content to the clipboard while the Flutter app is running but not focused.
4. The clipboard change is not detected.
5. Switch focus back to the Flutter app window. The clipboard change is now detected and updated.
6. Run the same app on `X11`, and observe that clipboard changes are detected immediately without needing to switch window focus.
### Expected results
Clipboard changes should be detected in real-time on `Wayland`, just like in `X11`, without requiring the app to regain focus.
### Actual results
When a Flutter app designed to listen to the clipboard in real time and display its contents via a text widget is run on `Wayland` in Linux, it does not detect clipboard changes right after content has been copied unless the app window is focused. The same application works perfectly on X11.
### Code sample
<details open><summary>Code sample</summary>
```dart
void _startListening() async {
Timer.periodic(const Duration(milliseconds: 100), (_) async {
if (_pauseClipboard) return;
ClipboardData? data = await Clipboard.getData(Clipboard.kTextPlain);
final content = data?.text;
if (content != null && content != _previousClipboardText) {
_getClipboardText(content);
}
});
}
```
</details>
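The change-detection logic of the Dart timer loop above can be verified in isolation; the sketch below (pure Python, clipboard read stubbed out, names illustrative) shows that the polling pattern itself reports each distinct value exactly once, which supports the conclusion that the bug lies in the Wayland clipboard read rather than in the polling code:

```python
def poll_clipboard(read_clipboard, on_change, ticks, state):
    # Simulate the Dart timer loop: on each tick, read the clipboard and
    # fire on_change only when the content differs from the last seen value.
    for _ in range(ticks):
        content = read_clipboard()
        if content is not None and content != state["prev"]:
            state["prev"] = content
            on_change(content)

# Stubbed clipboard that changes once midway through the polling window.
values = iter(["a", "a", "b", "b"])
seen = []
poll_clipboard(lambda: next(values), seen.append, ticks=4,
               state={"prev": None})
print(seen)  # -> ['a', 'b']: only the distinct values are reported
```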
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
**Behavior:**
- x11
https://github.com/user-attachments/assets/d16f32b1-3b6b-46bb-a95d-cf3023d9e185
- wayland
https://github.com/user-attachments/assets/dee8b306-816e-4f23-8f5f-731937e9dd2a
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.3, on Pop!_OS 22.04 LTS 6.9.3-76060903-generic, locale en_US.UTF-8)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
[✗] Chrome - develop for the web (Cannot find Chrome executable at google-chrome)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[✓] Linux toolchain - develop for Linux desktop
[✓] Android Studio (version 2024.1)
[✓] VS Code (version 1.93.1)
[✓] Connected device (1 available)
[✓] Network resources
! Doctor found issues in 1 category.
```
</details>
| a: text input,engine,platform-linux,P2,team-linux,triaged-linux | low | Critical |
2,549,741,482 | go | x/tools/gopls: The completion feature of gopls has high latency when dealing with very large packages. | ### gopls version
golang.org/x/tools/gopls v0.16.2
### go env
```shell
GO111MODULE='auto'
GOARCH='amd64'
GOBIN=''
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH=''
GOPRIVATE=''
GOROOT='/usr/lib/go'
GOSUMDB='off'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/lib/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.1'
GODEBUG=''
GOTELEMETRY='local'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build1211819663=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
The completion feature of gopls has high latency when dealing with very large packages. Below are the completion latencies observed in the sql package of the cockroachdb source code.
The code statistics for the sql package in cockroachdb are as follows:
```shell
cloc cockroach/pkg/sql
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
Go 2974 148944 228093 1355540
-------------------------------------------------------------------------------
SUM: 3504 152684 236151 1424410
-------------------------------------------------------------------------------
```
The distribution of the main function latencies in gopls completion is as follows:
```shell
completion.go:54: elapsed 489.202836ms, Completion entry,
completion.go:484: elapsed 477.769576ms, NarrowestPackageForFile,
snapshot.go:87: elapsed 477.709921ms, TypeCheck,
check.go:415: elapsed 37.059356ms, getPackageHandles,
check.go:422: elapsed 51.8002ms, getImportGraph,
check.go:606: elapsed 194.72ยตs, getImportPackage
check.go:629: elapsed 387.29934ms, checkPackage,
check.go:1614: elapsed 385.05347ms, check.Files
```
The most time-consuming part is **check.go:1614: elapsed 385.05347ms, check.Files, 288**, which takes around 385 ms. Adding the other operations, the total latency reaches 489.202836 ms. This results in noticeable lag during use.
### What did you see happen?
During each completion, a TypeCheck is performed on the package containing the currently modified file, and this process is very time-consuming for large packages.
### What did you expect to see?
Is it possible to only re-parse the current file during completion, rather than performing a TypeCheck on the entire package? This would allow completion to be completed in milliseconds even for large packages.
### Editor and settings
nvim
### Logs
_No response_ | gopls,Tools | low | Critical |
2,549,807,198 | ant-design | Table header is misaligned when scroll.x is set to "max-content" and sticky at the same time | ### Reproduction link
[https://codepen.io/vcxldk/pen/xxvbMoR](https://codepen.io/vcxldk/pen/xxvbMoR)
### Steps to reproduce
Set `scroll.x` to `"max-content"` and `sticky` to `true`, fix the two leftmost columns to the left, set a width on only one non-fixed column (leave the other columns without widths), and leave the table with no data.
### What is expected?
The table header renders normally, with no misalignment.
### What is actually happening?
The rightmost fixed column is misaligned, as shown in the attached screenshot.
| Environment | Info |
| --- | --- |
| antd | 5.21.1 |
| React | 18 |
| System | windows10 22H2 |
| Browser | Chrome117.0.5938.132 |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | ๐ Bug,Inactive | low | Minor |
2,549,828,361 | ui | Bug: ERR_MODULE_NOT_FOUND; I followed all the instructions when creating my Vite project but still get this issue | ### Describe the bug
I followed all the instructions provided by the shadcn community for creating a Vite project, but at the last step, when I try to run `npx shadcn@latest add button`, I get an error like:
code: 'ERR_MODULE_NOT_FOUND',
url: 'file:///C:/Users/INP/AppData/Local/npm-cache/_npx/d66c5096c7023bfb/node_modules/zod/lib/index.mjs'..
I don't know what the issue is. I have also faced it many times when trying to create a project with TypeScript: I have used shadcn successfully in my React (JS) projects, but when I try to use it with TSX I get this error.
my current Node version: 22
### Affected component/components
button and every component
### How to reproduce
follow the all creating project setup for vite.
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
MODULE NOT FOUND
```
### System Info
```bash
google chrome , brave
```
### Before submitting
- [ ] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,549,888,035 | ui | [feat]: Request for the Avatar component to accept StaticImageData as a type for its source | ### Feature description
Currently the Avatar component supports only `string` or `undefined` for its source parameter: `<Avatar src="url">`.
Could a feature be added so that it also supports static images, i.e. `StaticImageData`, as a type?
### Affected component/components
Avatar
### Additional Context
Additional details here...
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,549,896,504 | PowerToys | Keyboard-Manager inserts text on wrong shortcut then stops working | ### Microsoft PowerToys version
0.84.1
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
I have a keyboard manager shortcut set to CTRL (Left) + d to write some text.
### โ๏ธ Expected Behavior
I press CTRL (left) + d to write the preset text
### โ Actual Behavior
The shortcut also fired on SHIFT (left) + d and even on d without any additional key. When this behaviour stopped, Keyboard Manager stopped working altogether.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,549,929,094 | electron | [Bug]: CrashReporter still uses the default path along with 'crashDumps' path if it's started before app.onReady() | ### Preflight Checklist
- [X] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [X] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [X] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
31.3.0
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 10 19045.4894
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
On startup, the crash reporter should not create a folder at `%APPDATA%/<app.name>`; it should instead use the path provided via `app.setPath('crashDumps', ...)`.
### Actual Behavior
As the [documentation ](https://www.electronjs.org/docs/latest/api/crash-reporter#crashreporterstartoptions) suggests:
> This method should be called as early as possible in app startup, preferably before app.on('ready'). If the crash reporter is not initialized at the time a renderer process is created, then that renderer process will not be monitored by the crash reporter.
I do the following:
```js
app.name = APPLICATION_NAME;
app.setPath('crashDumps', <path>)
crashReporter.start({uploadToServer: false})
app.on('ready', () => this.onReady());
```
At application startup, this creates a folder at `C:\Users\<username>\AppData\Roaming\APPLICATION_NAME` instead of using the `path` I've set. The crash dump still ends up in the `path` I've set, but I think it also lands in the unexpected folder.
### Testcase Gist URL
_No response_
### Additional Information
If I run the following before starting the crashReporter, then that path is used instead.
```js
app.setPath('userData', <another path>);
```
Which is the right thing to do, but I find it still odd for it to use `userData` instead of `crashDumps`. | platform/windows,bug :beetle:,has-repro-comment,31-x-y | low | Critical |
2,549,941,179 | opencv | testLayerUsingCaffeModels fails with the new DNN engine | ### System Information
Platform: any
Reference: https://github.com/opencv/opencv/pull/26056/
### Detailed description
After unregistering the custom 'Interp' layer, the model uses the standard Resize layer.
According to the graph, the model must produce a 2 x 3 x 18 x 16 tensor with the Resize layer,
but the result is compared against a 2 x 3 x 17 x 15 tensor, i.e. what the custom 'Interp' layer used to produce,
so we get a test failure. It looks like the test needs to be fixed.
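For context, the 18x16 vs 17x15 discrepancy matches the two standard output-size formulas used by upsampling layers. The sketch below assumes a hypothetical 9x8 spatial input with zoom factor 2; the input size is inferred from the reported shapes, not stated above:

```python
def resize_out(in_size, zoom):
    # ONNX Resize with a plain scale factor: out = in * zoom
    return in_size * zoom

def interp_out(in_size, zoom):
    # Caffe-style Interp (align-corners): out = (in - 1) * zoom + 1
    return (in_size - 1) * zoom + 1

# With a 9x8 input and zoom 2, resize_out gives 18x16 while
# interp_out gives 17x15, the two shapes the test disagrees about.
```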
### Steps to reproduce
-
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: dnn | low | Critical |
2,549,957,128 | opencv | Implement concat operation fusion with the new DNN engine | ### System Information
Platform: any
Reference: https://github.com/opencv/opencv/pull/26056/
### Detailed description
Concat optimization is temporarily disabled.
It is not quite compatible with dynamic shapes: we need to make sure that we correctly predicted the shapes of all the concatenated tensors and their offsets inside the result, and also that the concatenated tensor was properly allocated in advance.
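To make the requirement concrete, here is a small plain-Python helper (a hypothetical sketch, not OpenCV API) that computes the output shape and the per-input offsets along the concatenation axis; these are exactly the quantities that must be predicted before the concatenated tensor can be preallocated:

```python
def concat_offsets(shapes, axis):
    """Return (output_shape, offsets) for concatenating tensors of the
    given shapes along `axis`.  offsets[i] is where input i starts
    along that axis inside the preallocated output buffer."""
    out = list(shapes[0])
    offsets = []
    pos = 0
    for s in shapes:
        offsets.append(pos)
        pos += s[axis]
    out[axis] = pos
    return tuple(out), offsets
```

With dynamic shapes these numbers are only known per inference run, which is why the optimization has to re-validate its predictions before writing into the shared buffer.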
### Steps to reproduce
-
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | feature,category: dnn | low | Minor |
2,549,968,492 | opencv | Implement fp16 support in the new DNN engine | ### System Information
Platform: any
Reference: https://github.com/opencv/opencv/pull/26056
### Detailed description
-
### Steps to reproduce
-
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | feature,category: dnn | low | Minor |
2,549,980,410 | ollama | Support model allenai/OLMoE-1B-7B-0924 | I want to check the performance of this model (MoE). | model request | low | Major |
2,550,006,183 | opencv | Flatten layer processes the axis attribute incorrectly | ### System Information
Platform: any
Reference: https://github.com/opencv/opencv/pull/26056
### Detailed description
```cpp
/*
[TODO] this is not quite correct,
in ONNX Flatten valid range is [0, numAxes],
not [0, numAxes-1] which normalize_axis() produces.
But if we fix it, flatten_const.onnx from opencv_extra
is not processed correctly.
libprotobuf-c reads it correctly,
but the current version of libprotobuf does not
*/
```
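For reference, ONNX Flatten reshapes an N-D tensor to 2-D by splitting the axes at `axis`, and `axis == numAxes` is legal (it yields a trailing dimension of 1), which is why the valid range is [0, numAxes]. A plain-Python sketch of the shape rule:

```python
from math import prod

def flatten_output_shape(shape, axis):
    """ONNX Flatten output shape: (prod(shape[:axis]), prod(shape[axis:])).
    `axis` ranges over [0, rank] inclusive; prod of an empty slice is 1."""
    rank = len(shape)
    if axis < 0:          # negative axes count from the end
        axis += rank
    assert 0 <= axis <= rank
    return (prod(shape[:axis]), prod(shape[axis:]))
```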
### Steps to reproduce
-
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: dnn | low | Minor |
2,550,017,633 | opencv | Implement non-CPU back-ends support in the new DNN engine | Reference: https://github.com/opencv/opencv/pull/26056
### Describe the feature and motivation
-
### Additional context
- | feature,category: dnn | low | Minor |
2,550,025,018 | opencv | Implement getFLOPS for the new DNN engine | Reference: https://github.com/opencv/opencv/pull/26056
### Describe the feature and motivation
-
### Additional context
- | feature,category: dnn | low | Minor |
2,550,029,012 | opencv | Implement custom layers support in the new DNN engine | Reference: https://github.com/opencv/opencv/pull/26056
### Describe the feature and motivation
-
### Additional context
- | feature,category: dnn | low | Minor |
2,550,056,886 | ant-design | Select mode="tags": after typing e.g. "22", clicking the input box again clears "22"; similarly, after copy-pasting, clicking the input box again also clears the pasted content | ### Reproduction link
[https://ant-design.antgroup.com/components/select-cn](https://ant-design.antgroup.com/components/select-cn)
### Steps to reproduce
With Select mode="tags", after typing e.g. "22", clicking the input box again clears "22". In another scenario, after copy-pasting content and clicking the input box again, the pasted content is also cleared. This can be reproduced with the official demo.
### What is expected?
The content in the scenarios above should not be cleared when the input box is clicked again.
### What is actually happening?
The typed or pasted content was cleared.
| Environment | Info |
| --- | --- |
| antd | 5.20.1 |
| React | ^18.2.0 |
| System | window |
| Browser | google |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | ๐ Bug,Inactive | low | Major |
2,550,063,539 | flutter | [go_router] ModalBottomSheetRoute & StatefulShellRoute: it is not possible to hide the sheet properly after it has been opened from a branch | ### Steps to reproduce
1. You need to copy the example from go_router using StatefulShellRoute.
2. Add an additional tab in the lower bar as a Menu, by clicking on which the Modal Bottom Sheet should open.
3. When switching between tabs, it is not possible to close the bottom sheet automatically.
### Expected results
When switching between tabs, navigation should work as it currently does, but clicking Menu should open the modal bottom sheet with the previous screen (the one from which Menu was tapped) visible behind it.
After that, when you click on another tab, the bottom sheet should close.
### Actual results
I tried several approaches: using a global key to check for the presence of a context, using canPop from the context, and using local variables as in the example below. Each has problems. For instance, with the code below, if you run it and switch quickly between tabs, at some point a "multiple global key" exception occurs, and sometimes the sheet simply does not close in time, so I added a Future.delayed to let it close. The "multiple global key" problem remains, and I don't like this workaround; is there an official solution? I did not find one in the examples.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
final GlobalKey<NavigatorState> _rootNavigatorKey = GlobalKey<NavigatorState>(debugLabel: 'root');
final GlobalKey<NavigatorState> _sectionANavigatorKey = GlobalKey<NavigatorState>(debugLabel: 'sectionANav');
void main() {
runApp(NestedTabNavigationExampleApp());
}
class NestedTabNavigationExampleApp extends StatelessWidget {
NestedTabNavigationExampleApp({super.key});
final GoRouter _router = GoRouter(
navigatorKey: _rootNavigatorKey,
initialLocation: '/a',
routes: <RouteBase>[
StatefulShellRoute.indexedStack(
builder: (BuildContext context, GoRouterState state, StatefulNavigationShell navigationShell) {
return ScaffoldWithNavBar(
navigationShell: navigationShell,
);
},
branches: <StatefulShellBranch>[
StatefulShellBranch(
navigatorKey: _sectionANavigatorKey,
routes: <RouteBase>[
GoRoute(
path: '/a',
builder: (BuildContext context, GoRouterState state) =>
const RootScreen(label: 'A', detailsPath: '/a/details'),
routes: <RouteBase>[
GoRoute(
path: 'details',
builder: (BuildContext context, GoRouterState state) => const DetailsScreen(label: 'A'),
),
],
),
],
),
StatefulShellBranch(
routes: <RouteBase>[
GoRoute(
path: '/b',
builder: (BuildContext context, GoRouterState state) => const RootScreen(
label: 'B',
detailsPath: '/b/details/1',
secondDetailsPath: '/b/details/2',
),
routes: <RouteBase>[
GoRoute(
path: 'details/:param',
builder: (BuildContext context, GoRouterState state) => DetailsScreen(
label: 'B',
param: state.pathParameters['param'],
),
),
],
),
],
),
StatefulShellBranch(
routes: <RouteBase>[
GoRoute(
path: '/c',
builder: (BuildContext context, GoRouterState state) => const RootScreen(
label: 'C',
detailsPath: '/c/details',
),
routes: <RouteBase>[
GoRoute(
path: 'details',
builder: (BuildContext context, GoRouterState state) => DetailsScreen(
label: 'C',
extra: state.extra,
),
),
],
),
],
),
StatefulShellBranch(
routes: <RouteBase>[
GoRoute(
path: '/menu',
pageBuilder: (context, state) => const BottomSheetPage(
child: _MenuBottomSheetBody(),
),
),
],
),
],
),
],
);
@override
Widget build(BuildContext context) => MaterialApp.router(
title: 'Flutter Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
routerConfig: _router,
);
}
class _MenuBottomSheetBody extends StatelessWidget {
const _MenuBottomSheetBody();
@override
Widget build(BuildContext context) {
return DecoratedBox(
decoration: const BoxDecoration(
color: Colors.white,
borderRadius: BorderRadius.only(
topLeft: Radius.circular(16),
topRight: Radius.circular(16),
),
),
child: Column(
mainAxisSize: MainAxisSize.min,
children: <Widget>[
ListTile(
title: const Text('Option 1'),
onTap: () {},
),
ListTile(
title: const Text('Option 2'),
onTap: () {},
),
],
),
);
}
}
class ScaffoldWithNavBar extends StatefulWidget {
const ScaffoldWithNavBar({
required this.navigationShell,
Key? key,
}) : super(key: key ?? const ValueKey<String>('ScaffoldWithNavBar'));
final StatefulNavigationShell navigationShell;
@override
State<ScaffoldWithNavBar> createState() => _ScaffoldWithNavBarState();
}
class _ScaffoldWithNavBarState extends State<ScaffoldWithNavBar> with TickerProviderStateMixin {
bool _isSheetOpen = false;
late final TabController _tabController = TabController(length: 4, vsync: this);
@override
void dispose() {
super.dispose();
_tabController.dispose();
}
@override
Widget build(BuildContext context) {
return Scaffold(
key: const ValueKey<String>('ScaffoldWithNavBar'),
body: widget.navigationShell,
bottomNavigationBar: TabBar(
controller: _tabController,
indicator: TopIndicator(),
tabs: const <Tab>[
Tab(icon: Icon(Icons.home), text: 'Section A'),
Tab(icon: Icon(Icons.work), text: 'Section B'),
Tab(icon: Icon(Icons.tab), text: 'Section C'),
Tab(icon: Icon(Icons.menu), text: 'Menu'),
],
onTap: (int index) async {
await Future.delayed(const Duration(milliseconds: 200), () {
if (_isSheetOpen) {
setState(() {
_isSheetOpen = false;
context.pop();
});
}
});
if (index == 3) {
setState(() {
_isSheetOpen = true;
GoRouter.of(context).push('/menu');
});
} else {
widget.navigationShell.goBranch(
index,
initialLocation: index == widget.navigationShell.currentIndex,
);
}
},
),
);
}
}
class TopIndicator extends Decoration {
@override
BoxPainter createBoxPainter([VoidCallback? onChanged]) => _TopIndicatorBox();
}
class _TopIndicatorBox extends BoxPainter {
@override
void paint(Canvas canvas, Offset offset, ImageConfiguration cfg) {
final paint = Paint()
..shader = const RadialGradient(
colors: [
Colors.black,
Colors.black,
],
).createShader(
Rect.fromCircle(
center: offset,
radius: 0,
),
)
..strokeWidth = 2
..isAntiAlias = true
..strokeCap = StrokeCap.square;
canvas.drawLine(
Offset(offset.dx, 0.5),
Offset(cfg.size!.width + offset.dx, 0.5),
paint,
);
}
}
class RootScreen extends StatelessWidget {
const RootScreen({
required this.label,
required this.detailsPath,
this.secondDetailsPath,
super.key,
});
final String label;
final String detailsPath;
final String? secondDetailsPath;
@override
Widget build(BuildContext context) => Scaffold(
appBar: AppBar(
title: Text('Root of section $label'),
),
body: Center(
child: Column(
mainAxisSize: MainAxisSize.min,
children: <Widget>[
Text('Screen $label', style: Theme.of(context).textTheme.titleLarge),
const Padding(padding: EdgeInsets.all(4)),
TextButton(
onPressed: () {
GoRouter.of(context).go(detailsPath, extra: '$label-XYZ');
},
child: const Text('View details'),
),
const Padding(padding: EdgeInsets.all(4)),
if (secondDetailsPath != null)
TextButton(
onPressed: () {
GoRouter.of(context).go(secondDetailsPath!);
},
child: const Text('View more details'),
),
],
),
),
);
}
class DetailsScreen extends StatefulWidget {
const DetailsScreen({
required this.label,
this.param,
this.extra,
this.withScaffold = true,
super.key,
});
final String label;
final String? param;
final Object? extra;
final bool withScaffold;
@override
State<StatefulWidget> createState() => DetailsScreenState();
}
class DetailsScreenState extends State<DetailsScreen> {
int _counter = 0;
@override
Widget build(BuildContext context) {
if (widget.withScaffold) {
return Scaffold(
appBar: AppBar(
title: Text('Details Screen - ${widget.label}'),
),
body: _build(context),
);
} else {
return ColoredBox(
color: Theme.of(context).scaffoldBackgroundColor,
child: _build(context),
);
}
}
Widget _build(BuildContext context) => Center(
child: Column(
mainAxisSize: MainAxisSize.min,
children: <Widget>[
Text('Details for ${widget.label} - Counter: $_counter', style: Theme.of(context).textTheme.titleLarge),
const Padding(padding: EdgeInsets.all(4)),
TextButton(
onPressed: () {
setState(() {
_counter++;
});
},
child: const Text('Increment counter'),
),
const Padding(padding: EdgeInsets.all(8)),
if (widget.param != null)
Text('Parameter: ${widget.param!}', style: Theme.of(context).textTheme.titleMedium),
const Padding(padding: EdgeInsets.all(8)),
if (widget.extra != null) Text('Extra: ${widget.extra!}', style: Theme.of(context).textTheme.titleMedium),
if (!widget.withScaffold) ...<Widget>[
const Padding(padding: EdgeInsets.all(16)),
TextButton(
onPressed: () {
GoRouter.of(context).pop();
},
child: const Text('< Back', style: TextStyle(fontWeight: FontWeight.bold, fontSize: 18)),
),
],
],
),
);
}
class BottomSheetPage extends Page {
const BottomSheetPage({
required this.child,
this.showDragHandle = false,
this.useSafeArea = false,
super.key,
});
final Widget child;
final bool showDragHandle;
final bool useSafeArea;
@override
Route createRoute(BuildContext context) => ModalBottomSheetRoute(
settings: this,
isScrollControlled: true,
showDragHandle: showDragHandle,
useSafeArea: useSafeArea,
backgroundColor: Colors.transparent,
builder: (context) => (ModalRoute.of(context)!.settings as BottomSheetPage).child,
);
}
```
</details>
### Screenshots or Video
_No response_
### Logs
<details open><summary>Logs</summary>
```console
E/flutter (14511): [ERROR:flutter/runtime/dart_vm_initializer.cc(41)] Unhandled Exception: 'package:flutter/src/widgets/navigator.dart': Failed assertion: line 5350 pos 12: '!_debugLocked': is not true.
E/flutter (14511): #0 _AssertionError._doThrowNew (dart:core-patch/errors_patch.dart:50:61)
E/flutter (14511): #1 _AssertionError._throwNew (dart:core-patch/errors_patch.dart:40:5)
E/flutter (14511): #2 NavigatorState.pop (package:flutter/src/widgets/navigator.dart:5350:12)
E/flutter (14511): #3 NavigatorState.maybePop (package:flutter/src/widgets/navigator.dart:5316:9)
E/flutter (14511): <asynchronous suspension>
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.1, on macOS 15.0 24A335 darwin-arm64, locale ru-KZ)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.1)
[✓] VS Code (version 1.93.1)
[✓] Connected device (4 available)
[✓] Network resources
• No issues found!
```
</details>
| package,has reproducible steps,P2,p: go_router,team-go_router,triaged-go_router,found in release: 3.24,found in release: 3.26 | low | Critical |
2,550,069,980 | godot | Visual Shader looks messy after changing the display scale factor | ### Tested versions
4.3.stable.mono.official
### System information
Windows 11
### Issue description
I created a visual shader on a display with 1080p resolution at 100% scaling (see the first attached screenshot).
After I switched to another display with 200% scaling, all nodes look enlarged 2x but stay at their original positions,
which makes them messy and hard to edit (see the second screenshot).
### Steps to reproduce
1. Make a Visual Shader in a 1080P, 100% scale display.
2. Close godot project.
3. Set another display (2160p, 200% scaling) as the primary display.
4. Reopen godot project.
### Minimal reproduction project (MRP)
[project.zip](https://github.com/user-attachments/files/17146264/project.zip)
| bug,topic:porting,topic:shaders,topic:gui | low | Minor |
2,550,082,127 | react | [Compiler Bug]: setState in useEffect / react compiler and eslint hooks plugin | ### What kind of issue is this?
- [ ] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [X] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhAgHgBwjALgAgBMEAzAQygBsCSoA7OXASwjvwFkBPAQU0wApg+TGTBgEhAEql8AXwCU+YAB02+OKzAEA2rjIwA5glwAafONwAVfUdwBdfAF58UcQGU9uBPzpVK81VV8fAB6EPwEMEomOlwAWkImMDIAI0oEOLoMeOis-BgEMkY4jQBbTCZ0mBCCoviyiqqg0PDI3PjE5LSMrPQcmIR8wuKACwgIAGswEIwRii0mADcM4kwwZuk6gDpXBABREhIERn5+RUcAPiVm4KYSfH4RMQlpEi24WALY-ABCR2c9IZjIoVGpgsELNYgbhHqJxFJSO9PghYvIANw3OTNBQYujNDR0LRKcy4TjpMByJwucQABQgfAQMH4gNs6MCagKuFgbAAPIlFiSyZFHMAtEKwLILgAJBCUSgQfAAdRwlEIPJC-IuuNk7Iw2DwRFIFGo+FoDGYrA4PD4ACZBMI4S8ZAprmoCUTdDZjGZIV77FTdh4yF4fH4Ani1GbGCw2FBMIRgwgobYzq7wfg7g8nvDXkiYF8CH8AX6QZiIcZk8ZYc8EW8PvmUbg2WCsWodRHgptGDtxAcjidU5c0+C4wmvJWYc3gjj2cF3QQhGLyZTnLs6QymSzgbjmpzufg+UtBeSRUvIpKZXKFcqYKr1ZrtaoQLIgA
### Repro steps
The eslint-plugin-react-hooks rule complains about calling setState in a useEffect:
more discussion here: https://github.com/facebook/react/issues/15329
I'm happy with the eslint error; it is potentially dangerous.
But eslint-plugin-react-compiler also errors if you disable the react-hooks rule; in this particular case that seems over the top.
Will it stop the React Compiler from compiling this file, or is it just an extra eslint warning?
In the second example, I've abstracted the setState into another function and get no errors. That is fine in the context of the eslint plugin, but what about the React Compiler?
I was led to believe an eslint compiler-plugin warning means the file will *not* be compiled and the compiler bails out, but I guess for warnings caused by disabling eslint rules this is not the case?
In my opinion:
* The React Compiler eslint plugin should not error when the react-hooks rule is disabled
* If the React Compiler thinks something is a problem for the compiler, it should generate its own warning, and in this case it would need to be more thorough than the eslint hooks plugin
### How often does this bug happen?
Every time
### What version of React are you using?
0.0.0-experimental-92aaa43-20240924 | Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | medium | Critical |
2,550,167,651 | pytorch | Provide `gather_mm` functionality and/or expand nested tensor support | ### ๐ The feature, motivation and pitch
I am interested in achieving the same functionality as [dgl.ops.gather_mm](https://docs.dgl.ai/generated/dgl.ops.gather_mm.html) without having to rely on DGL as an external dependency (as it is not trivial to get up and running).
`gather_mm` essentially performs a set of dense matrix multiplications using a single invocation while looking up the corresponding RHS matrices (e.g. weights) on demand.
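To pin down the semantics, here is a tiny pure-Python reference (nested lists standing in for tensors; no PyTorch), which can also serve as a correctness oracle for any optimized implementation:

```python
def gather_mm_reference(a, b, idx_b):
    """out[n] = a[n] @ b[idx_b[n]]: row n of `a` is multiplied by the
    RHS matrix selected by idx_b[n].
    Shapes: a is N x D1, b is R x D1 x D2, idx_b has length N."""
    out = []
    for n, row in enumerate(a):
        w = b[idx_b[n]]                      # the D1 x D2 matrix for row n
        d2 = len(w[0])
        out.append([sum(row[k] * w[k][j] for k in range(len(row)))
                    for j in range(d2)])
    return out
```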
### Alternatives
The functionality can be implemented using torch nested tensors. This avoids a Python loop over the matrix multiplications themselves; however, for loops are still needed to reshape / convert the inputs to and from nested tensors. This limits performance and increases the memory requirement.
Below is a code snippet for it:
```python
import torch
print(f'Running PyTorch version: {torch.__version__}')
torchdevice = torch.device('cpu')
if torch.cuda.is_available():
torchdevice = torch.device('cuda')
print('Default GPU is ' + torch.cuda.get_device_name(torch.device('cuda')))
print('Running on ' + str(torchdevice))
NN = 1000
DD1 = 32
DD2 = 7
RR = 10
aa = torch.randn((NN,DD1), device=torchdevice)
bb = torch.randn((RR,DD1,DD2), device=torchdevice)
iidx_b = torch.randint(low=0, high=RR, size=(NN,),device=torchdevice)
def my_gather_mm(a, b, idx_b):
# mimic https://docs.dgl.ai/generated/dgl.ops.gather_mm.html
R,D1,D2 = b.shape
N = idx_b.shape[0]
# Sanity check sizes
assert(a.shape[0]==N and a.shape[1]==D1)
torchdevice = a.device
src_idx = torch.arange(N,device=torchdevice)
# Ideally the conversions below to nested tensors would be handled without for loops and without copies
nested_a = torch.nested.as_nested_tensor([a[idx_b==i,:] for i in range(R)] )
src_idx_reshuffled = torch.cat( [src_idx[idx_b==i] for i in range(R)] )
nested_b = torch.nested.as_nested_tensor(
[b[i,:,:].squeeze() for i in range(R)] )
# The actual gather matmul computation
nested_ab = torch.matmul(nested_a,nested_b)
# Convert back to tensors, again, ideally this would be handled natively with no copy
ab_segmented = torch.cat(nested_ab.unbind(),dim=0)
ab = torch.empty((N,D2),device=torchdevice)
ab[src_idx_reshuffled] = ab_segmented
return ab
aab = my_gather_mm(aa, bb, iidx_b)
print(aab.shape)
```
### Additional context
A relevant paper looking at optimising `gather_mm`:
https://ieeexplore.ieee.org/abstract/document/10196568
PyG has a similar `segment_matmul` op:
https://pyg-lib.readthedocs.io/en/latest/modules/ops.html#pyg_lib.ops.segment_matmul
which is also found in DGL:
https://docs.dgl.ai/generated/dgl.ops.segment_mm.html
Representing nested tensors as concatenated tensors is discussed here:
https://github.com/pytorch/nestedtensor/issues/453
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ | triaged,module: nestedtensor | low | Major |
2,550,198,982 | vscode | Remove duplicate rendering code of Context Attachments | There seems to be 2 code copies of attachment rendering ([List Renderer](https://github.com/microsoft/vscode/blob/0da18f3d72b6d9c72f2d20a7545f4eac8a69b1ee/src/vs/workbench/contrib/chat/browser/chatContentParts/chatAttachmentsContentPart.ts#L50), [Input Part](https://github.com/microsoft/vscode/blob/0da18f3d72b6d9c72f2d20a7545f4eac8a69b1ee/src/vs/workbench/contrib/chat/browser/chatInputPart.ts#L706)). Every time we introduce a new attachment type or change some of the logic we need to make the change in both places which is not ideal and can easily get forgotten. | debt,chat | low | Minor |
2,550,257,527 | opencv | TrackerDaSiamRPN does not work with the new DNN engine | ### System Information
Platform: Any
Reference: https://github.com/opencv/opencv/pull/26056
### Detailed description
-
### Steps to reproduce
-
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: video,category: dnn | low | Minor |
2,550,260,331 | pytorch | Inconsistent behavior of cdist with half-precision inputs | ### 🐛 Describe the bug
By default, `torch.cdist` currently does not accept half-precision datatypes as input, throwing a `RuntimeError: "cdist" not implemented for 'Half'`.
However, if the inputs get large enough, or `compute_mode` is manually set to `"use_mm_for_euclid_dist"`, no error is thrown, but the resulting distances are more numerically inaccurate than expected.
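The extra inaccuracy of the matmul path can be illustrated without torch: it computes squared distances via the expansion ||a||^2 + ||b||^2 - 2*a.b (the standard trick behind matmul-based cdist implementations), which cancels catastrophically when points are close together relative to their magnitudes. A minimal pure-Python scalar sketch (the function names are made up for illustration; this is not the actual kernel):

```python
# Scalar analogue of two ways to compute a squared Euclidean distance.
# The matmul path expands (x - y)**2 into x*x + y*y - 2*x*y so it can be
# phrased as a matrix multiply; the direct path subtracts first.

def dist_sq_direct(x: float, y: float) -> float:
    """Numerically stable: subtract first, then square."""
    return (x - y) ** 2

def dist_sq_expanded(x: float, y: float) -> float:
    """Expansion used by matmul-based distance kernels; cancels badly."""
    return x * x + y * y - 2 * x * y

x, y = 1e8, 1e8 + 1  # two nearby points with large magnitude
print(dist_sq_direct(x, y))    # 1.0 (exact)
print(dist_sq_expanded(x, y))  # 0.0: every significant digit cancelled
```

The same cancellation kicks in much earlier in float16, which carries only about three significant decimal digits, consistent with the larger error of the `use_mm_for_euclid_dist` result in the snippet below.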
```python3
a = torch.rand((1, 4096), dtype=torch.float16)
b = torch.rand((32, 4096), dtype=torch.float16)
naive = torch.sum((a[None] - b) ** 2, dim=-1) ** 0.5
stable = (a[None] - b).norm(p=2, dim=-1)
cdist = torch.cdist(a, b, compute_mode="use_mm_for_euclid_dist")
ref = torch.cdist(a.float(), b.float())
print(
(naive - ref).square().mean().item(),
(stable - ref).square().mean().item(),
(cdist - ref).square().mean().item()
)
```
`>>> 2.727540777414106e-05 2.2074902517488226e-05 6.475927511928603e-05`
Perhaps it would be better to either implement a more numerically stable version of cdist for float16, or to raise the error consistently across input sizes.
### Versions
Collecting environment information...
PyTorch version: 2.4.0
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:12:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-107-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.20
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
GPU 2: NVIDIA A100-PCIE-40GB
GPU 3: NVIDIA A100-PCIE-40GB
GPU 4: NVIDIA A100-PCIE-40GB
GPU 5: NVIDIA A100-PCIE-40GB
GPU 6: NVIDIA A100-PCIE-40GB
GPU 7: NVIDIA A100-PCIE-40GB
Nvidia driver version: 535.161.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7543 32-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3737.8899
CPU min MHz: 1500.0000
BogoMIPS: 5599.92
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] rotary-embedding-torch==0.7.0
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchlaplace==0.0.4
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.8 py312h5eee18b_0
[conda] mkl_random 1.2.4 py312hdb19cb5_0
[conda] numpy 1.26.4 py312hc5e2394_0
[conda] numpy-base 1.26.4 py312h0da6c21_0
[conda] pytorch 2.4.0 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] rotary-embedding-torch 0.7.0 pypi_0 pypi
[conda] torchaudio 2.4.0 py312_cu124 pytorch
[conda] torchlaplace 0.0.4 pypi_0 pypi
[conda] torchtriton 3.0.0 py312 pytorch
[conda] torchvision 0.19.0 py312_cu124 pytorch
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @fritzo @neerajprad @alicanb @nikitaved | module: cpu,triaged,actionable,module: norms and normalization | low | Critical |
2,550,264,308 | storybook | [Documentation]: globalTypes icon deprecated documentation | ### Describe the problem
The documentation for using icons seems to be outdated. I also cannot find any documentation on the available icon names. Some icons from @storybook/icon seem to work when using them all lowercase and without the `Icon` part.
| documentation,needs triage | low | Minor |
2,550,281,488 | angular | Spawning Web Worker from previous AppVersion results in network fallback | ### Which @angular/* package(s) are the source of the bug?
service-worker
### Is this a regression?
Yes
### Description
Consider the following scenario:
- There is an existing Angular app with a Service Worker
- The app has an on-demand Web Worker whose script is cached by the Service Worker
- The Web Worker is updated and a new version of the Angular app is deployed
- An existing client receives the new version of the Angular app but does not refresh an already-open tab
In that scenario, when such a client triggers a code path that creates a new Web Worker, the Web Worker script is fetched from the network instead of from the cache. Because of the nature of the deployment, scripts from the previous version are no longer stored on the server, so a request for the old Web Worker script results in a 404 error and, eventually, in a Web Worker initialization failure.
I'd expect a Service Worker to serve a Web Worker script for an old version if a fetch is initiated from an old Angular app version.
Please use a minimal reproduction repository and follow the steps in the video to simulate the issue.
[ngsw-web-worker.webm](https://github.com/user-attachments/assets/2da4f484-747a-42c4-ac5b-fd56cc412e01)
### Please provide a link to a minimal reproduction of the bug
https://github.com/rozpuszczalny/ngsw-web-worker
### Please provide the exception or error you saw
```true
Worker emitted an Event on 'error' callback; service worker called network, where it should use cache.
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Angular CLI: 18.2.6
Node: 20.12.0
Package Manager: npm 10.5.0
OS: linux x64
Angular: 18.2.6
... animations, cli, common, compiler, compiler-cli, core, forms
... platform-browser, platform-browser-dynamic, router
... service-worker
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1802.6
@angular-devkit/build-angular 18.2.6
@angular-devkit/core 18.2.6
@angular-devkit/schematics 18.2.6
@schematics/angular 18.2.6
rxjs 7.8.1
typescript 5.5.4
zone.js 0.14.10
```
### Anything else?
We are experiencing this issue in our production setup, which makes the update process quite painful for the users. Weirdly, for some users, simply reloading the tab does not solve the issue: we have to instruct them to manually delete the Service Worker via `chrome://serviceworker-internals`. We still haven't identified or reproduced the scenario where a reload does not work.
---
Probably introduced by https://github.com/angular/angular/pull/42607, which was first released with Angular v12.
---
A possible solution is to check if `clientId` and `resultingClientId` are both set. If that's true, `clientId` should be used over `resultingClientId`. My crude testing showed (at least in Chrome browser) that `resultingClientId` is the client ID of a newly created web worker, and `clientId` is the client ID of a tab that spawned a web worker.
Since `resultingClientId` is relevant when opening a new tab, I've tested and noticed that `clientId` is set to an empty string when opening a website in a separate tab. However, opening `window.open('/', '_top')` sets both `clientId` and `resultingClientId` - similar to a web worker scenario. Unfortunately, `location.reload()` also sets both `clientId` and `resultingClientId`, which further complicates a potential solution.
https://github.com/angular/angular/blob/1549afe10eddc92845cd1fde862aaa010c2395af/packages/service-worker/worker/src/driver.ts#L713-L713 | area: service-worker | low | Critical |
2,550,314,135 | angular | Creating custom CVA based components is STILL almost impossible to in most not-super-basic scenarios (regardless the unified event api) | ### Which @angular/* package(s) are relevant/related to the feature request?
forms
### Description
It is still impossible to implement working custom CVA-based components in 50% of scenarios.
The main cause is that there is no way to get ALL information about an AbstractControl.
Why?
- there is no 100% reliable way to get information about important control states (valid/invalid/pristine/touched)
- knowing control states like valid/invalid/pristine/touched is essential to correctly implement components smarter than a hello-world example
Why does the current implementation not work?
- the CVA interface does not provide ANY way to get information about status changes (it can only react to value changes via writeValue and disabled changes via setDisabledState); there is nothing like setValid(), setTouched(), setPristine(), etc.
- you can get NgControl from the injector (because CVA only covers 20% of the domain...). AbstractControl now has the "unified events api" https://angular.dev/api/forms/AbstractControl#events; unfortunately, {emitEvent: false} is honored there as well, so no events are emitted for such method calls
- it is an EVEN BIGGER problem because a disabled control has BOTH valid and invalid SET TO FALSE (lol), so you have NO way to get any information about validity when your control is enabled with {emitEvent: false} (see the source code below)
- it is so sad that the only 100% reliable way to get the control state is to LISTEN for DOM mutations on the host element (reading the ng-invalid, ng-valid, ng-pending, ng-pristine, etc. classes)
- the same problem occurs when the validity of the control changes because a new value is set: you have NO way to get the new valid/invalid status when the user patches/sets a value with emitEvent: false. You cannot even handle this inside writeValue(), because validators run after it... so there is no workaround
FORCING the user not to use {emitEvent: false} (their way to prevent emitting valueChanges "ON THE OUTSIDE", i.e. toward the consumer of the component in a bigger form) is definitely not a solution to a problem where the component creator has no legitimate way to get the state of the control.
```
import { Component } from '@angular/core';
import { CommonModule } from '@angular/common';
import { FormControl, PristineChangeEvent, ReactiveFormsModule, StatusChangeEvent, TouchedChangeEvent} from '@angular/forms';
import { bootstrapApplication } from '@angular/platform-browser';
@Component({
selector: 'app-root',
standalone: true,
imports: [ReactiveFormsModule, CommonModule],
template: `
<input type="number" [formControl]="number"/>
Valid: {{number.valid}}
Invalid: {{number.invalid}}
Errors: {{number.errors | json}}
<button (click)="number.enable({emitEvent: false})">ENABLE</button>
<hr/>
Problems:
<ul>
<li>A disabled input has both VALID and INVALID set to FALSE, even though there is no validator and there are NO ERRORS</li>
<li>When you enable the control it becomes valid, but NO event is fired from the control's events stream</li>
</ul>
<hr/>
Events:
<ul>
@for(event of events; track $index) {
<li>{{event | json}}</li>
}
</ul>
`,
})
export class App {
events: any[] =[];
number = new FormControl();
ngOnInit() {
this.number.disable();
this.number.events.subscribe(e=> {
if (e instanceof StatusChangeEvent) {
this.events.push("Status - " + e.status);
} else if (e instanceof TouchedChangeEvent) {
this.events.push("Touched - "+ e.touched);
} else if (e instanceof PristineChangeEvent) {
this.events.push("Pristine - "+ e.pristine);
}
}
);
}
}
bootstrapApplication(App);
```
### Proposed solution
- more methods in the CVA interface
- an event emitter on AbstractControl that emits every time, regardless of the emitEvent setting
### Alternatives considered
- our only way to solve this is to write our own AbstractControl implementations (inheriting the base ones from Angular) and add additional event emitters...
- in this case, where the main problem is the valid/invalid state of the control, you can "rebind" your internal valid/invalid/pristine/touched state when the CVA's setDisabledState() is called
- in the case where the validity of the control changes when a new value is set with emitEvent: false, there is no way except to "queue" a synchronous check after writeValue()
<img width="452" alt="image" src="https://github.com/user-attachments/assets/89225650-c8b5-45cc-8a07-d198a8fe1b51">
Just sad state. | area: forms | low | Critical |
2,550,328,093 | flutter | [in_app_purchase] A way to listen to StoreKit messages in flutter? | ### Use case
Currently, we can only listen to the purchase stream in Flutter. I want to be able to listen to StoreKit messages and respond to them. This will be useful for handling errors related to billing issues and win-back offers.
### Proposal
Swift Code for sample:
```swift
for await message in StoreKit.Message.messages {
if message.reason != .winBackOffer {
// Ask the system to display messages now.
try? displayStoreMessage(message)
}
}
``` | platform-ios,p: in_app_purchase,package,c: proposal,P2,team-ios,triaged-ios | low | Critical |
2,550,401,287 | pytorch | Poor-quality random numbers generated by torch.poisson on gpus | ### 🐛 Describe the bug
Random numbers sampled from `torch.poisson` on a GPU are of poor quality, which easily leads to wrong statistical predictions. The effect is enhanced when sampling occurs in blocks of size `(512, 512)`.
The following code demonstrates this by sampling two Poisson distributions and looking at their difference (for a single entry of the `(512, 512)`-sized tensor). The result should be Skellam distributed, so the mean should converge to the difference of the two rates and the variance to their sum.
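The Skellam expectation itself can be sanity-checked without torch or numpy. The sketch below uses a simple Knuth-style Poisson sampler (`poisson_knuth` is a made-up helper, illustration only) with the same rates as the tensor entries used below, 21.2 and 1.5 * 21.2:

```python
import math
import random

def poisson_knuth(lam: float, rng: random.Random) -> int:
    """Draw one Poisson(lam) sample via Knuth's multiplication method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

rng = random.Random(7324786)
mu1, mu2, n = 21.2, 1.5 * 21.2, 20_000
diffs = [poisson_knuth(mu2, rng) - poisson_knuth(mu1, rng) for _ in range(n)]
mean = sum(diffs) / n
var = sum((d - mean) ** 2 for d in diffs) / n
print(mean, var)  # close to mu2 - mu1 = 10.6 and mu2 + mu1 = 53.0
```

A correct Poisson implementation should reproduce both moments; the GPU run below instead converges to a variance of about 48 rather than 53.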
```python
import torch as pt
import pandas as pd
import numpy as np
from typing import Tuple
def update_default(n_current: int,
tensor_1: pt.Tensor,
tensor_2: pt.Tensor,
M1_aggregate: pt.Tensor,
M2_aggregate: pt.Tensor,
generator : pt.Generator) -> None:
noisy_tensor_1 = pt.poisson(tensor_1, generator=generator)
noisy_tensor_2 = pt.poisson(tensor_2, generator=generator)
diff_tensor = noisy_tensor_2 - noisy_tensor_1
delta_M1 = diff_tensor - M1_aggregate
M1_aggregate += delta_M1 / n_current
delta_M2 = diff_tensor - M1_aggregate
M2_aggregate += delta_M1 * delta_M2
def run(n_total: int,
tensor_1: pt.Tensor,
tensor_2: pt.Tensor,
generator : pt.Generator) -> Tuple[pt.Tensor, pt.Tensor]:
M1_agg = pt.full_like(tensor_1, 0.)
M2_agg = pt.full_like(tensor_1, 0.)
for i in range(0, n_total):
update_default(n_current=i+1, tensor_1=tensor_1, tensor_2=tensor_2, M1_aggregate=M1_agg, M2_aggregate=M2_agg, generator=generator)
return M1_agg, M2_agg / n_total
def get_estimates_at_index(n: int,
ind: list[int, int],
tensor1: pt.Tensor,
tensor2: pt.Tensor,
generator : pt.Generator) -> Tuple[float, float]:
mean, var = run(n, tensor1, tensor2, generator)
return mean[*ind].item(), var[*ind].item()
def get_numpy_estimate(n:int, val1:float, val2:float) -> Tuple[float, float]:
diff = np.random.poisson(val2, (n,)).astype(float) - np.random.poisson(val1, (n,)).astype(float)
mean = diff.mean()
var = diff.var()
return mean, var
device = 'cuda'
g_cuda = pt.Generator(device=device)
g_cuda.manual_seed(7324786)
np.random.default_rng(9954786)
shape = (512, 512)
center = [256, 256]
ti_1 = pt.full(shape, 21.2, device=device, dtype=float)
ti_2 = ti_1.clone()
ti_2[*center] = 1.5 * ti_2[*center]
ind_results = []
for n in [100, 1000, 10000, 100000, 1000000, 10000000]:
mean_default, var_default = get_estimates_at_index(n, center, ti_1, ti_2, g_cuda)
mean_np, var_np = get_numpy_estimate(n, ti_1[*center].item(), ti_2[*center].item())
ind_results.append(pd.DataFrame({
'n': n,
'mean_pt': mean_default,
'var_pt': var_default,
'mean_np': mean_np,
'var_np': var_np},
index=[0]))
result = pd.concat(ind_results, ignore_index=True).sort_values(by='n')
print(result)
# https://en.wikipedia.org/wiki/Skellam_distribution
print(f"Expectation mean: {ti_2[*center] - ti_1[*center]}")
print(f"Expectation Variance : {ti_2[*center] + ti_1[*center]}")
```
The code runs for a bit (you can reduce the number of samples to obtain a faster indication), and eventually returns:
```
n mean_pt var_pt mean_np var_np
0 100 9.980000 45.059600 10.330000 48.101100
1 1000 10.541000 50.198319 10.705000 51.733975
2 10000 10.661000 48.355879 10.683200 52.867838
3 100000 10.567200 48.061524 10.589090 52.812283
4 1000000 10.604558 48.009594 10.600764 53.057829
5 10000000 10.601304 48.071595 10.599137 52.978110
Expectation mean: 10.599999999999998
Expectation Variance : 53.0
```
Notes:
- `numpy` included for reference
- It works correctly when setting device to `cpu`
- Code verified using normal distributions and device `cpu`
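For reference, the `update_default`/`run` pair above is Welford's online mean/variance algorithm, so the statistics are accumulated in a single pass. A minimal scalar version (illustrative sketch, not part of the reproducer) shows the recurrence it implements:

```python
def welford(values):
    """Single-pass mean and population variance (Welford's algorithm)."""
    mean, m2 = 0.0, 0.0
    for n, x in enumerate(values, start=1):
        delta = x - mean           # corresponds to delta_M1 above
        mean += delta / n
        m2 += delta * (x - mean)   # corresponds to delta_M1 * delta_M2
    return mean, m2 / len(values)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(welford(data))  # approximately (5.0, 4.0), matching the two-pass result
```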
### Versions
Collecting environment information...
PyTorch version: 2.4.1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Enterprise
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.5 | packaged by Anaconda, Inc. | (main, Sep 12 2024, 18:18:29) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-11-10.0.22621-SP0
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2050
Nvidia driver version: 552.41
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=1400
DeviceID=CPU0
Family=1
L2CacheSize=18432
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=1400
Name=Intel(R) Core(TM) Ultra 7 165H
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.1
[pip3] torchaudio==2.4.1
[pip3] torchvision==0.19.1
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h6b88ed4_46358
[conda] mkl-service 2.4.0 py312h2bbff1b_1
[conda] mkl_fft 1.3.10 py312h827c3e9_0
[conda] mkl_random 1.2.7 py312h0158946_0
[conda] numpy 1.26.4 py312hfd52020_0
[conda] numpy-base 1.26.4 py312h4dde369_0
[conda] pytorch 2.4.1 py3.12_cuda12.4_cudnn9_0 pytorch
[conda] pytorch-cuda 12.4 h3fd98bf_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.4.1 pypi_0 pypi
[conda] torchvision 0.19.1 pypi_0 pypi
cc @fritzo @neerajprad @alicanb @nikitaved | module: distributions,triaged | low | Critical |
2,550,494,291 | TypeScript | Incomplete typecheck when assigning to a mapped object type with remapped keys | ### 🔎 Search Terms
typechecker, mapped object type, remapped keys, ts2322
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about mapped types.
- Tried on `nightly` and every version down to 4.1.5 (first version supporting remapped keys)
### ⏯ Playground Link
https://www.typescriptlang.org/play/?#code/C4TwDgpgBAygFgQ0lAvFA3lAZgexwLigGdgAnASwDsBzAbigCMFTDKBXAWwYlPoF9aAKFDJ4SCAFkkqDNjyEA5LhwAmBfSYsoCzWv5CR0KWEgATMcjSYA2gGkoVKAGsIIHFliJkCIp-HG7AF1AwgsIIKgBQUEsNkoAY2ByHEooDiQANQQAGzYIAB57CAAPYAhKU18XNw8wgD4ACkEoFudXQlsAGmbWoi8IUP7u1rSkM0JjMzDh1vSwQf8kboBKDB6W+JSSUZMIU1tXGTnrasChEc3KbYA3HLyZPvET1zP1nbNrObMDkECZW9yECEfCAA
### 💻 Code
```ts
type Shape = { foo: string; bar: number; };
type ShapeMap = { foo: 'foo2'; bar: 'bar2'; };
type MappedShape = { [K in keyof Shape as ShapeMap[K]]: Shape[K] };
function mapValue<K extends keyof Shape>(
key: K,
shape: Shape,
mapped: MappedShape,
map: ShapeMap,
) {
const mappedKey = map[key];
const value = shape[key];
mapped[mappedKey] = value; // <-- Not allowed to assign `value` here
}
```
### 🙁 Actual behavior
The error says
```
Type 'Shape[K]' is not assignable to type 'MappedShape[ShapeMap[K]]'.
Type 'Shape' is missing the following properties from type 'MappedShape': foo2, bar2
```
The second line is correct, `Shape` is different from `MappedShape` since they don't have any property names in common (which is the idea 🙂). That incompatibility is not relevant when it's the value in `Shape[K]` that should be assignable to `MappedShape[ShapeMap[K]]`.
To an uninitiated eye it looks like the typechecking either ends too early or too late, since it compares the wrong types when trying to do the assignment. That's just a guess that might be incorrect, so take it with all the salt you want.
### 🙂 Expected behavior
Assigning `Shape[K]` to `MappedShape[ShapeMap[K]]` should be allowed since they are the same types when resolved.
### Additional information about the issue
If changing the code to always take `foo` as the key ([Playground](https://www.typescriptlang.org/play/?#code/C4TwDgpgBAygFgQ0lAvFA3lAZgexwLigGdgAnASwDsBzAbigCMFTDKBXAWwYlPoF9aAKFDJ4SCAFkkqDNjyEA5LhwAmBfSYsoCzWv5CR0KWEgATMcjSYA2gGkoVKAGsIIHFliJkCIp-HG7AF1AwgsIIKgBQUEsNkoAY2ByHEooDiQANQQAGzYIAApBKGLnV0VlBQAaIpKiLwhQ+uqStKQzQmMzMOaS9LBG-yRqgEoMGuL4lJJWkwhTW1cZPusXEEChFsnKaYA3HLyZOvEV13XxmbNrPrMFtZk93IghPiA)) it works correctly:
```ts
type Shape = { foo: string; bar: number; };
type ShapeMap = { foo: 'foo2'; bar: 'bar2'; };
type MappedShape = { [K in keyof Shape as ShapeMap[K]]: Shape[K] };
function mapValue(
key: 'foo',
shape: Shape,
mapped: MappedShape,
map: ShapeMap,
) {
const mappedKey = map[key];
const value = shape[key];
mapped[mappedKey] = value; // No type error
}
``` | Suggestion,Awaiting More Feedback | low | Critical |
2,550,496,065 | bitcoin | depends: llvm-ranlib (etc): "No such file or directory" on Intel macOS 15.0 | Tried on Intel macOS 15.0 (Xcode 16.0) and 13.7 (Xcode 15.2):
```
$ cd depends
$ make
/bin/sh: command -v llvm-ranlib: No such file or directory
/bin/sh: command -v llvm-strip: No such file or directory
/bin/sh: command -v llvm-nm: No such file or directory
/bin/sh: command -v llvm-objdump: No such file or directory
/bin/sh: command -v dsymutil: No such file or directory
Fetching libevent-2.1.12-stable.tar.gz from ...
```
It does seem to finish building dependencies, but I haven't tested if they actually work.
I do have `/usr/bin/{ranlib,strip,nm,objdump,dsymutil}`. | macOS,Build system | low | Major |
2,550,559,509 | vscode | SCM Graph - Date ordering | <!-- โ ๏ธโ ๏ธ Do Not Delete This! feature_request_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
The new Source Control Graph is cool; I like it a lot. However, could we at least have the option to have commits in strict date order? When I'm working on bringing an older branch up to date with changes that have been merged to the main branch, it feels weird (to me anyway, as someone using a merge-based workflow pattern) for all the older commits on the current branch to be shown ahead of the newer changes on the other branch. I don't mind if this isn't the default view as long as I can get it somehow.
This would also mean that this particular view wouldn't change much when switching branches.
Thanks!
----
I couldn't find anything on the topic in the online documentation, but that might be because extensions that provide history-viewing functionality are making searches difficult. I also couldn't identify any existing issues on this topic. | feature-request,scm | low | Minor |
2,550,590,658 | storybook | [Bug]: In Chrome, scrolling in Preview iframe stops working after resizing iframe by dragging divider | ### Describe the bug
In Chrome, I lose the ability to scroll inside the Preview iframe after dragging the divider between the Preview iframe and the Sidebar on the left.
Below is a video "demonstrating" the problem. At the end, I attempt and fail to scroll vertically inside the Preview iframe, though in the video you can't tell that that's what I'm trying to do.
https://github.com/user-attachments/assets/218c7d6a-ccd9-4ec3-aa3d-c7f662227467
### Reproduction link
https://stackblitz.com/github/storybookjs/sandboxes/tree/next/nextjs/default-ts/after-storybook
### Reproduction steps
1. Open Chrome. I'm using Chrome version 128.0.6613.120 (Official Build) (arm64), and my operating system is macOS 13.5.2.
2. Go to https://stackblitz.com/github/storybookjs/sandboxes/tree/next/nextjs/default-ts/after-storybook
3. In Storybook, navigate to a "Page" story (just so that the content in the Preview iframe is long enough for vertical scrolling).
4. Click inside the Preview iframe; successfully scroll vertically.
5. Resize the Preview iframe by dragging the divider between the Preview iframe and the Sidebar on the left.
6. Attempt to scroll vertically inside the Preview iframe.
### System
```bash
Storybook Environment Info:
System:
OS: Linux 5.0 undefined
CPU: (8) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Shell: 1.0 - /bin/jsh
Binaries:
Node: 18.20.3 - /usr/local/bin/node
Yarn: 1.22.19 - /usr/local/bin/yarn
npm: 10.2.3 - /usr/local/bin/npm <----- active
pnpm: 8.15.6 - /usr/local/bin/pnpm
npmPackages:
@storybook/addon-essentials: ^8.4.0-alpha.1 => 8.4.0-alpha.1
@storybook/addon-interactions: ^8.4.0-alpha.1 => 8.4.0-alpha.1
@storybook/addon-onboarding: ^8.4.0-alpha.1 => 8.4.0-alpha.1
@storybook/blocks: ^8.4.0-alpha.1 => 8.4.0-alpha.1
@storybook/nextjs: ^8.4.0-alpha.1 => 8.4.0-alpha.1
@storybook/react: ^8.4.0-alpha.1 => 8.4.0-alpha.1
@storybook/test: ^8.4.0-alpha.1 => 8.4.0-alpha.1
storybook: ^8.4.0-alpha.1 => 8.4.0-alpha.1
```
### Additional context
This seems very similar to some old issues that were resolved:
- https://github.com/storybookjs/storybook/issues/1779
- https://github.com/storybookjs/storybook/issues/2482
- https://github.com/storybookjs/storybook/issues/17202 | bug,needs triage | low | Critical |
2,550,617,938 | go | x/pkgsite: Source links for git.glasklar.is | I would like to add source links for modules hosted on git.glasklar.is. Following the instructions in https://github.com/golang/go/issues/40477, I created a CL in pkgsite: https://go-review.googlesource.com/c/pkgsite/+/609535
(Sorry for filing a new issue. The instructions say to just add a comment in the original thread, but that GitHub issue is locked…)
| pkgsite | low | Minor |
2,550,648,075 | go | proposal: cmd/go: permit C files when not using cgo or SWIG | ### Proposal Details
## Background
The Go compiler doesn't allow a package to contain C, C++, Objective-C, or Fortran files unless cgo or SWIG is used. This means that if a Go package contains any of these files, then `CGO_ENABLED` can't be 0 and there must be at least one Go file with the `import "C"` statement.
This restriction was added in Go 1.4 in https://github.com/golang/go/commit/a0785a53add4253db84349d58abbe2ba8be130d9. My understanding is that its intention was to prohibit people from using the `6c` compiler and guide them to cgo instead.
## Proposal
Now that nobody even remembers what the `6c` compiler was, I propose lifting the aforementioned restriction.
## Motivation
The main motivation is to allow C files not used by cgo to be copied into the Go vendor directory. Note that only directories with at least one Go file are eligible to be copied to the vendor directory. This conflicts with the requirement to use cgo whenever Go and C files coexist in the same directory, as cgo might not be desired, either to avoid compiling unnecessary files or to avoid having to make dependencies (e.g. C headers) available at build time.
There is at least one real project that needs this: [microsoft/retina](https://github.com/microsoft/retina) has some Go packages that exist only to hold C files that are not used with cgo, but are instead compiled on the fly as `bpf` objects, so that the builder image doesn't require all the BPF machinery installed and to facilitate cross-compilation. These objects are then loaded and attached to the Linux kernel.
Currently microsoft/retina builds with `CGO_ENABLED=0` to work around the Go compiler restrictions, but in the near future cgo will have to be enabled due to other requirements, so this workaround won't work anymore. | Proposal | low | Minor |
2,550,672,670 | flutter | Multiple Pinned headers should work within a SliverMainAxisGroup | ### Steps to reproduce
Put multiple `PinnedHeaderSliver` in a `SliverMainAxisGroup` widget
It looks like this [issue](https://github.com/flutter/flutter/issues/155395) is talking about the same use case, but it has been closed without further investigation.
### Expected results
All the pinned widgets should be pinned to the top of the scroll view.
### Actual results
Only the first pinned widget is pinned to the top of the scroll view.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      home: Scaffold(
        appBar: AppBar(
          title: const Text('SliverMainAxisGroup pinned headers'),
        ),
        body: const CustomScrollView(
          slivers: [
            SliverMainAxisGroup(
              slivers: [
                _Sliver(color: Colors.red, pinned: false),
                _Sliver(color: Colors.green, pinned: true),
                _Sliver(color: Colors.red, pinned: false),
                _Sliver(color: Colors.yellow, pinned: false),
                _Sliver(color: Colors.green, pinned: true),
                _Sliver(color: Colors.red, pinned: false),
                _Sliver(color: Colors.yellow, pinned: false),
                _Sliver(color: Colors.red, pinned: false),
                _Sliver(color: Colors.yellow, pinned: false),
                _Sliver(color: Colors.red, pinned: false),
                _Sliver(color: Colors.yellow, pinned: false),
                _Sliver(color: Colors.red, pinned: false),
                _Sliver(color: Colors.yellow, pinned: false),
                _Sliver(color: Colors.red, pinned: false),
                _Sliver(color: Colors.green, pinned: true),
                _Sliver(color: Colors.yellow, pinned: false),
                _Sliver(color: Colors.red, pinned: false),
                _Sliver(color: Colors.yellow, pinned: false),
                _Sliver(color: Colors.red, pinned: false),
                _Sliver(color: Colors.yellow, pinned: false),
                _Sliver(color: Colors.red, pinned: false),
                _Sliver(color: Colors.yellow, pinned: false),
              ],
            ),
          ],
        ),
      ),
    );
  }
}

class _Sliver extends StatelessWidget {
  const _Sliver({
    required this.color,
    required this.pinned,
  });

  final Color color;
  final bool pinned;

  @override
  Widget build(BuildContext context) {
    final child = Container(
      height: 80,
      color: color,
      child: Padding(
        padding: const EdgeInsets.all(16),
        child: Center(
          child: Text(
            pinned ? 'Pinned' : 'Not pinned',
            style: const TextStyle(fontSize: 30),
          ),
        ),
      ),
    );
    if (pinned) {
      return PinnedHeaderSliver(child: child);
    } else {
      return SliverToBoxAdapter(child: child);
    }
  }
}
```
</details>
### Screenshots or Video
<details open>
<summary>Video demonstration</summary>
https://github.com/user-attachments/assets/3a318000-d298-4f04-9a75-f97fc4190782
</details>
### Logs
<details open><summary>Logs</summary>
```console
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[โ] Flutter (Channel stable, 3.24.3, on macOS 14.1.1 23B81 darwin-arm64, locale fr-FR)
โข Flutter version 3.24.3 on channel stable at /Users/xxxx/.puro/envs/stable/flutter
โข Upstream repository https://github.com/flutter/flutter.git
โข Framework revision 2663184aa7 (2 weeks ago), 2024-09-11 16:27:48 -0500
โข Engine revision 36335019a8
โข Dart version 3.5.3
โข DevTools version 2.37.3
[โ] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
โข Android SDK at /Users/xxxx/Library/Android/sdk
โข Platform android-34, build-tools 34.0.0
โข Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
โข Java version OpenJDK Runtime Environment (build 17.0.7+0-17.0.7b1000.6-10550314)
โข All Android licenses accepted.
[โ] Xcode - develop for iOS and macOS (Xcode 15.4)
โข Xcode at /Applications/Xcode-15.4.0.app/Contents/Developer
โข Build 15F31d
โข CocoaPods version 1.15.2
[โ] Chrome - develop for the web
โข Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[โ] Android Studio (version 2023.1)
โข Android Studio at /Applications/Android Studio.app/Contents
โข Flutter plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/9212-flutter
โข Dart plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/6351-dart
โข Java version OpenJDK Runtime Environment (build 17.0.7+0-17.0.7b1000.6-10550314)
[โ] VS Code (version 1.93.1)
โข VS Code at /Applications/Visual Studio Code.app/Contents
โข Flutter extension version 3.96.0
[โ] Connected device (4 available)
โข IN2023 (mobile) โข 87950fce โข android-arm64 โข Android 13 (API 33)
โข macOS (desktop) โข macos โข darwin-arm64 โข macOS 14.1.1 23B81 darwin-arm64
โข Mac Designed for iPad (desktop) โข mac-designed-for-ipad โข darwin โข macOS 14.1.1 23B81 darwin-arm64
โข Chrome (web) โข chrome โข web-javascript โข Google Chrome 129.0.6668.60
! Error: Browsing on the local area network for iPhone. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[โ] Network resources
โข All expected network resources are available.
โข No issues found!
```
</details>
| framework,f: scrolling,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.24,found in release: 3.26 | low | Critical |
2,550,687,653 | pytorch | Onnx scaled_dot_product_attention does not allow to export model | ### ๐ Describe the bug
The model is converted with dynamo and opset 18, using torch nightly and the most recent onnx and onnxscript.
# PyTorch ONNX Conversion Report
```
โ
Obtain model graph with `torch.export.export`
โช Obtain model graph with `torch.export.export(..., strict=False)`
โช Obtain model graph with `torch.jit.trace`
โ Translate the graph into ONNX
โช Run `onnx.checker` on the ONNX model
โช Execute the model with ONNX Runtime
โช Validate model output accuracy
```
```
RuntimeError: Internal error: pybind11::error_already_set called while Python error indicator not set.
While executing %scaled_dot_product_attention : [num_users=2] = call_function[target=torch.ops.aten.scaled_dot_product_attention.default](args = (%expand, %expand_1, %expand_2), kwargs = {})
Original traceback:
File "src/models/Gatr_pf_e_tau_onnx2.py", line 162, in forward
embedded_outputs, _ = self.gatr(
File "PID_GNN/src/gatr_v111/nets/gatr.py", line 158, in forward
h_mv, h_s = block(
File "PID_GNN/src/gatr_v111/layers/gatr_block.py", line 126, in forward
h_mv, h_s = self.attention(
File "/gatr_v111/layers/attention/self_attention.py", line 135, in forward
h_mv, h_s = self.attention(q_mv, k_mv, v_mv, q_s, k_s, v_s, attention_mask=None)
File "/afs/cern.ch/work/m/mgarciam/private/PID_GNN/src/gatr_v111/layers/attention/attention.py", line 83, in forward
h_mv, h_s = self.geometric_attention(
File "src/gatr_v111/primitives/attention.py", line 522, in forward
v_out = scaled_dot_product_attention(q, k, v) # attn_mask=attn_mask)
```
The markdown conversion report is:
[onnx_export_2024-09-26_15-53-37-082571_conversion.md](https://github.com/user-attachments/files/17149851/onnx_export_2024-09-26_15-53-37-082571_conversion.md)
### Versions
onnxscript 0.1.0.dev20240925
torch-nightly
onnx
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,export-triaged,oncall: export | low | Critical |
2,550,706,844 | next.js | NextResponse.rewrite does not override origin server headers | ### Link to the code that reproduces this issue
https://github.com/Gebov/nextjs-rewrite-headers
### To Reproduce
1. Start the app
2. Request /rewrite
3. Inspect the response headers
4. Verify that the Cache-Control header is not overridden
5. Verify that the custom header is overridden
### Current vs. Expected behavior
Currently the app is configured to proxy requests to nextjs.org with the URL /rewrite.
I expect all the headers specified on the NextResponse to be overridden, in particular the Cache-Control header.
Currently it is not overridden; custom headers work.
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Enterprise
Available memory (MB): 32488
Available CPU cores: 16
Binaries:
Node: 20.15.0
npm: N/A
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 14.2.13 // Latest available version is detected (14.2.13).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: N/A
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Middleware
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local)
### Additional context
_No response_ | bug,Middleware | low | Minor |
2,550,722,098 | transformers | Will Trainer.predict() return data in the same order as the original dataset during multi-machine and multi-gpus inference? | ### Feature request
I want to use accelerate for multi-machine, multi-GPU inference. Since trainer.predict does not return the original inference data, only the inference results, I am not sure whether their order is still maintained in the multi-machine, multi-GPU case.
### Motivation
I want to use accelerate for multi-machine, multi-GPU inference. Since trainer.predict does not return the original inference data, only the inference results, I am not sure whether their order is still maintained in the multi-machine, multi-GPU case.
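To my understanding, `Trainer` shards the evaluation data across ranks round-robin and gathers the rank outputs back in that same interleaved order (truncating any trailing padding samples), which is why `predict` returns results in the original dataset order. The index bookkeeping can be sketched in pure Python, no distributed setup needed (`world_size` and the dataset size below are illustrative):

```python
# Rank r receives every world_size-th example starting at index r;
# the gather step takes one element from each rank in turn, which
# reassembles the original dataset order.
world_size = 4
n = 10
shards = [list(range(rank, n, world_size)) for rank in range(world_size)]
# shards == [[0, 4, 8], [1, 5, 9], [2, 6], [3, 7]]

gathered = []
for step in range(max(len(shard) for shard in shards)):
    for shard in shards:
        if step < len(shard):
            gathered.append(shard[step])

print(gathered)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

If you want to be certain for your own setup, add an explicit id column to the dataset and compare it against the row positions after `predict`.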
### Your contribution
This is my code:
```python
def main(args):
    def tokenize_function(examples):
        return tokenizer(examples['text'], padding=False, truncation=False)

    # Step 1: Load the model
    model = LlamaForSequenceClassification.from_pretrained(args.model_name, num_labels=args.num_class, problem_type='multi_label_classification', trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained(args.model_name, use_fast=True)

    # Step 2: Load the dataset
    eval_dataset = load_dataset(args.eval_dataset_type, data_files=args.eval_dataset_name)
    tokenized_dataset = eval_dataset.map(tokenize_function, batched=True, batch_size=10000)

    # Step 3: Define the training arguments
    training_args = MyTrainingArguments(
        hdfs_path=args.hdfs_path,
        output_dir=args.output_dir,
        per_device_train_batch_size=args.batch_size,
        per_device_eval_batch_size=args.batch_size,
        gradient_accumulation_steps=args.gradient_accumulation_steps,
        learning_rate=args.learning_rate,
        logging_steps=args.logging_steps,
        num_train_epochs=args.num_train_epochs,
        max_steps=args.max_steps,
        report_to=args.report_to,
        save_steps=args.save_steps,
        save_total_limit=args.save_total_limit,
        gradient_checkpointing=args.gradient_checkpointing,
        logging_dir=args.logging_dir,
        bf16=True,
        evaluation_strategy="steps",
        eval_steps=args.eval_steps,
    )

    # Step 4: Define the trainer
    trainer = Trainer(
        model=model,
        args=training_args,
        eval_dataset=tokenized_dataset['train'],
        compute_metrics=compute_metrics,
        data_collator=DataCollatorWithPadding(tokenizer=tokenizer, max_length=args.max_length),
        tokenizer=tokenizer,
    )
    model, trainer = accelerator.prepare(model, trainer)

    thresholds = [0.177, 0.041, 0.299, 0.023, 0.049, 0.012, 0.057, 0.212, 0.187, 0.044, 0.107, 0.035, 0.257, 0.19, 0.258, 0.26, 0.166, 0.097, 0.263, 1.0, 0.549, 0.22, 0.03, 0.294, 0.232, 0.524, 0.113, 0.028, 0.064, 0.135, 0.289, 0.121, 0.016, 0.32, 0.095]
    thresholds = np.array(thresholds)

    with torch.no_grad():
        predictions = trainer.predict(tokenized_dataset['train'])
        score = predictions.predictions
        score = 1 / (1 + np.exp(-score))
        pred = np.where(score >= thresholds, 1, 0)
        pred = pred.tolist()
```
 | trainer,Feature request,Accelerate | low | Minor |
2,550,732,043 | pytorch | torch.export.export fails to trace through a binary operator | ### ๐ Describe the bug
torch.export.export fails to trace through a binary operator.
Here is the failing code:
```python
import torch
class MyTensor:
    def __init__(self, tensor):
        self.tensor = tensor

    def __mul__(self, rhs):
        return self.tensor * rhs


class MyModule(torch.nn.Module):
    def forward(self, t: torch.Tensor):
        my_tensor = MyTensor(torch.ones_like(t))
        return my_tensor * t  # Fails to trace this


my_module = MyModule()
torch.export.export(my_module, args=(torch.tensor([2, 3]),))
```
This results in an error
```
Traceback (most recent call last):
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 645, in proxy_args_kwargs
proxy_args = tuple(arg.as_proxy() for arg in args)
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 645, in <genexpr>
proxy_args = tuple(arg.as_proxy() for arg in args)
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/variables/base.py", line 253, in as_proxy
raise NotImplementedError(str(self))
NotImplementedError: UserDefinedObjectVariable(MyTensor)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/bpetkant/ws/scratchpad/scratchpad.py", line 17, in <module>
torch.export.export(my_module, args=(torch.tensor([2, 3]),))
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/export/__init__.py", line 174, in export
return _export(
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/export/_trace.py", line 945, in wrapper
raise e
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/export/_trace.py", line 928, in wrapper
ep = fn(*args, **kwargs)
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/export/exported_program.py", line 89, in wrapper
return fn(*args, **kwargs)
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1455, in _export
aten_export_artifact = export_func(
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1060, in _strict_export
gm_torch_level = _export_to_torch_ir(
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/export/_trace.py", line 512, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 1379, in inner
result_traced = opt_f(*args, **kwargs)
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 433, in _fn
return fn(*args, **kwargs)
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1116, in __call__
return self._torchdynamo_orig_callable(
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 472, in __call__
return _compile(
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_utils_internal.py", line 84, in wrapper_function
return StrobelightCompileTimeProfiler.profile_compile_time(
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_strobelight/compile_time_profiler.py", line 129, in profile_compile_time
return func(*args, **kwargs)
File "/usr/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 817, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 636, in compile_inner
out_code = transform_code_object(code, transform)
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1185, in transform_code_object
transformations(instructions, code_options)
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 178, in _fn
return fn(*args, **kwargs)
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 582, in transform
tracer.run()
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2451, in run
super().run()
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 893, in run
while self.step():
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 805, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 234, in impl
self.push(fn_var.call_function(self, self.popn(nargs), {}))
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 962, in call_function
return handler(tx, args, kwargs)
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 907, in _handle_insert_op_in_graph
*proxy_args_kwargs(args, kwargs),
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 652, in proxy_args_kwargs
unimplemented(
File "/home/bpetkant/ws/sharktank/.venv/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 220, in unimplemented
raise Unsupported(msg) from from_exc
torch._dynamo.exc.Unsupported: call_function args: UserDefinedObjectVariable(MyTensor) TensorVariable()
from user code:
File "/home/bpetkant/ws/scratchpad/scratchpad.py", line 13, in forward
return my_tensor * t
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
`MyTensor` is internal to the program and not part of the signature, so it should not require registration, but even registration with `torch.utils._pytree.register_pytree_node` does not help.
Note that the below code exports fine.
The difference is that instead of calling `__mul__` on `MyTensor`, it calls `my_mul`, which has the same function body.
```python
import torch
class MyTensor:
    def __init__(self, tensor):
        self.tensor = tensor

    def my_mul(self, rhs):
        return self.tensor * rhs


class MyModule(torch.nn.Module):
    def forward(self, t: torch.Tensor):
        my_tensor = MyTensor(torch.ones_like(t))
        return my_tensor.my_mul(t)


my_module = MyModule()
torch.export.export(my_module, args=(torch.tensor([2, 3]),))
```
### Versions
Collecting environment information...
PyTorch version: 2.4.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Mar 22 2024, 16:50:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-70-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9454 48-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3810.7910
CPU min MHz: 1500.0000
BogoMIPS: 5492.22
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 96 MiB (96 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.8.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.3
[pip3] onnx==1.15.0
[pip3] torch==2.4.1+cpu
[conda] Could not collect
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,export-triaged,oncall: export | low | Critical |
2,550,744,183 | godot | LLVM builds selects different version for tools | ### Tested versions
master
### System information
Debian testing; SCons 4.5.2; Python 3.12.6
### Issue description
First of all, I am not sure whether this is a bug in Godot or rather a bug in SCons, but either way users (me included) are affected.
I have several LLVM versions installed: 15, 16, and 18. If I use LLVM without LTO everything is fine, but if I try to enable LTO I get these strange errors:
```
Ranlib Library modules/libmodule_csg.linuxbsd.template_release.x86_64.llvm.a ...
bfd plugin: LLVM gold plugin has failed to create LTO module: Unknown attribute kind (86) (Producer: 'LLVM16.0.6' Reader: 'LLVM 15.0.7')
bfd plugin: LLVM gold plugin has failed to create LTO module: Unknown attribute kind (86) (Producer: 'LLVM16.0.6' Reader: 'LLVM 15.0.7')
bfd plugin: LLVM gold plugin has failed to create LTO module: Unknown attribute kind (86) (Producer: 'LLVM16.0.6' Reader: 'LLVM 15.0.7')
```
It selects one version for compiling the `*.o` files and uses another version to link a library from them. It is especially weird in my case because the linker uses a lower version, which can't understand the bitcode format of the higher one. If I remove the `llvm-15*` packages, everything starts to work correctly, but emscripten depends on `llvm-15` (at least for now in Debian testing). It is also a bit strange that it picks 16, because I also have 18 installed.
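One workaround that may help until the version mixing itself is fixed is to pin every LLVM tool to a single version explicitly on the SCons command line, so the compiler, archiver, and LTO plugin all come from the same package. A sketch, with illustrative version numbers and variable names (check the SConstruct for the exact spelling it accepts):

```sh
# Force one LLVM version for compiling, archiving, and linking.
scons platform=linuxbsd use_llvm=yes lto=full \
    CC=clang-18 CXX=clang++-18 \
    AR=llvm-ar-18 RANLIB=llvm-ranlib-18
```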
### Steps to reproduce
N/A
### Minimal reproduction project (MRP)
N/A | topic:buildsystem,needs testing | low | Critical |
2,550,835,321 | vscode | AgentCompletions makes false assumptions | The `filterText` tricks there are a bit wonky.
https://github.com/microsoft/vscode/blob/a3c638174671449cfab4b4e7542d4121575e8dfa/src/vs/workbench/contrib/chat/browser/contrib/chatInputCompletions.ts#L215-L223
If a provider wants to demote an item relative to another of its items, it can simply place them in that order in the array of completions it returns. The ranking goes by score, and in case of an equal score the initial, provider-defined order is used as a tiebreaker (assuming `sortText` isn't used, which would also be a way to achieve this). | debt | low | Minor |
2,550,882,393 | tauri | [bug] [macOS] Issue with loading fonts in expo RN app with Tauri on MacOS | ### Describe the bug
We have an Expo React Native app that we run using Tauri. It works fine on Windows, but there's a strange issue on macOS. When running the app on macOS through Tauri, the fonts fail to load correctly (although the app works fine when run in a browser on macOS, both in Safari and Chrome).
The issue is that the fonts don't load as expected. More specifically, they seem to load (everything is displayed correctly), but `useFonts` returns `Error: 6000ms timeout exceeded`.
```ts
const [fontsLoaded, err] = useFonts({
  'Inter-Black': require('@/assets/fonts/inter/Inter-Black.otf'),
  'Inter-Regular': require('@/assets/fonts/inter/Inter-Regular.otf'),
  'Inter-Medium': require('@/assets/fonts/inter/Inter-Medium.otf'),
  'Inter-Bold': require('@/assets/fonts/inter/Inter-Bold.otf'),
  'Inter-SemiBold': require('@/assets/fonts/inter/Inter-SemiBold.otf'),
});
```
Under the hood, `useFonts` (from **expo-font**) uses `document.fonts.load(...)`. When I tried calling this function directly, I encountered the same issue: the callback was only triggered once (it should have been called 5 times). However, when I used `document.fonts.ready`, it indicated that all fonts were loaded.
As a workaround, we now use a combination of `useFonts` and `document.fonts.ready`.
Interestingly, if I only load one font, the error doesnโt occur. I also tried increasing the timeout to 60 seconds, but the same error persisted.
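The workaround can be expressed as a small race between the per-font load promises and the document-wide ready signal; whichever settles first unblocks startup. A minimal framework-agnostic sketch (the helper name is hypothetical; in the browser you would pass the `document.fonts.load(...)` results and `document.fonts.ready`):

```javascript
// Resolve when either all individual font loads settle or the
// document-wide "ready" signal fires, so a stalled load() callback
// (as observed here on macOS) cannot hang font initialization.
async function fontsLoaded(loadPromises, readySignal) {
  const allLoads = Promise.allSettled(loadPromises).then(() => "loads");
  const ready = Promise.resolve(readySignal).then(() => "ready");
  return Promise.race([allLoads, ready]);
}
```

The return value tells you which signal won, which is useful for logging whether the `load()` path or the `ready` fallback fired.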
### Reproduction
_No response_
### Expected behavior
No error while loading fonts.
### Full `tauri info` output
```text
[โ] Environment
- OS: Mac OS 14.6.1 X64
โ Xcode Command Line Tools: installed
โ rustc: 1.80.1 (3f5fd8dd4 2024-08-06)
โ cargo: 1.80.1 (376290515 2024-07-16)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 20.9.0
- yarn: 1.22.19
- npm: 10.1.0
[-] Packages
- tauri [RUST]: 1.8.0
- tauri-build [RUST]: 1.5.5
- wry [RUST]: 0.24.11
- tao [RUST]: 0.16.10
- @tauri-apps/api [NPM]: 1.6.0
- @tauri-apps/cli [NPM]: 1.6.2
[-] App
- build-type: bundle
- CSP: unset
- distDir: ../dist
- devPath: http://localhost:8081/
- framework: React
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,platform: macOS,status: needs triage | low | Critical |
2,550,885,799 | godot | Shaders get corrupted after power outage | ### Tested versions
4.3 stable
### System information
Godot v4.3.stable - Windows 10.0.19045 - GLES3 (Compatibility) - NVIDIA GeForce GT 1030 (NVIDIA; 32.0.15.6109) - Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz (8 Threads)
### Issue description
I was working on my project; I hit save, and 3 milliseconds later the power went off.
Nothing was corrupted except the only .gdshader file I have in the entire project.
The thing is, I was working on scripts and scenes, and that shader has not been touched since the dinosaurs.
I didn't even have it open in the Shader Editor, so there has to be a bug somewhere.
### Steps to reproduce
1. Have at least one .gdshader file in your project
2. Hit save and disconnect your PC from power
### Minimal reproduction project (MRP)
You can use this project:
https://godotengine.org/asset-library/asset/2732 | topic:editor,needs testing | low | Critical |
2,550,892,221 | rust | Implement `Default` for singleton function item types | ## Context
I have recently found myself doing some work that involves storing function-like items in structs along with other data, which requires a field of some generic type `F` that will implement some `Fn` trait. Such a struct might look something like this:
```rust
struct DeferredMappedData<S, T, F> {
    data: S,
    function: F,
    _phantom: PhantomData<T>,
}
```
with the idea that `F: Fn(S) -> T` or similar. Thus, such a type would typically support things like
```rust
fn eval(&self) -> T { /* ... */ }
fn new(data: S, function: impl Fn(S) -> T) -> Self { /* ... */ }
```
and so on.
## The problem
An issue that I've run into is that `F` is very often the type of a function item, in which case one would hope that the actual value of the function could be inferred from the type parameter and nothing else, which is essentially the same thing as implementing `Default`. This would allow me, for example, to write code like this:
```rust
impl<S, T, F: Default> DeferredMappedData<S, T, F> {
    fn from_data(data: S) -> Self {
        Self {
            data,
            function: Default::default(),
            _phantom: PhantomData,
        }
    }
}
```
However, that is not the case, and code like this is, at the moment, effectively worthless, since function item types are not `Default`.
Of course, this shortfall doesn't always arise so explicitly; for instance, the `Default` bound could be introduced by a `#[derive]` macro or similar. In my particular case, this actually has to do with building values through reflection (and hence possibly during things like deserialization).
## Workarounds
As far as I'm aware, the only real way to work around this is to manually monomorphize away `F` yourself (and hence bake the explicit function into the implementations of the methods). If you only care about some small number of values of `F`, maybe that's acceptable (e.g. using macros), but in any kind of general interface, there is effectively no workaround for this at all, to my knowledge.
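For concreteness, here is roughly what that manual monomorphization looks like: a hand-written zero-sized type standing in for the function item, with `Default` derived so the value can be recovered from the type alone (all names here are illustrative):

```rust
use std::marker::PhantomData;

// Hand-rolled stand-in for a function item of type `fn(i32) -> i32`:
// zero-sized, so `Default` can conjure the value from the type alone.
#[derive(Default)]
struct Double;

impl Double {
    fn call(&self, x: i32) -> i32 {
        x * 2
    }
}

struct DeferredMappedData<S, T, F> {
    data: S,
    function: F,
    _phantom: PhantomData<T>,
}

impl<S, T, F: Default> DeferredMappedData<S, T, F> {
    fn from_data(data: S) -> Self {
        Self {
            data,
            function: F::default(),
            _phantom: PhantomData,
        }
    }
}

fn main() {
    let d: DeferredMappedData<i32, i32, Double> = DeferredMappedData::from_data(21);
    assert_eq!(d.function.call(d.data), 42);
}
```

This only scales to values of `F` you are willing to spell out by hand (possibly via a macro), which is exactly the shortfall described above.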
## Aside
Being able to actually *name* these function types would also be valuable: even with the `Default` implementation, you still can't name the type of the monomorphized struct, for instance. My understanding is that there are efforts towards that kind of thing, like being able to alias the names of automatically inferred types, but I'm not really in the loop there. Again, I'm certainly not a compiler person or anything, but I think that, independently, being able to explicitly name the singleton types of function items would also be nice. (Basically, I don't see why the compiler should necessarily have to infer this for me at all times.)
| A-trait-system,T-lang,C-feature-request | low | Critical |
2,550,905,250 | go | x/build/cmd/golangbuild: How to set the timeout for LUCI builder "fetch prebuilt go" | Hi,
The network bandwidth of the Loong64 builder is limited, so "fetch prebuilt go" timeout errors often occur during build testing. How can I avoid this?
Thanks.

| Builders,NeedsInvestigation | low | Critical |
2,550,934,363 | godot | iOS 18 TouchScreenButton not working. | ### Tested versions
Godot v4.2.stable.official [46dc27791]
### System information
iPhone 16 pro (iOS 18), iPad Air (5th Gen) (iOS 18)
### Issue description
I'm using a TouchScreenButton to make a phone game and the click isn't being registered on my iOS 18 devices. I have older phones running older iOS versions that work fine. When I build for my iOS 18 devices (iPad and iPhone 16) the TouchScreenButton "Pressed" signal is not being registered / the callback is not being called.
### Steps to reproduce
1) Create a TouchScreenButton with "Pressed" signal.
2) Add callback for Pressed signal to log something
3) Deploy to iOS 18 device
4) no button press registered
### Minimal reproduction project (MRP)
N/A | bug,platform:ios,needs testing,topic:input,topic:2d | low | Minor |
2,550,935,007 | godot | Pixel font renders impossibly small size on tooltips | ### Tested versions
4.3 stable
### System information
Linux with X11
### Issue description
When using a pixel font like Terminus, and picking the smallest possible size as Main Font in the settings, some GUI elements still try to render at a smaller font size, leading to unreadable text.

(make sure to view the image above at a pixel-perfect zoom)
### Steps to reproduce
* use [terminus ttf](https://files.ax86.net/terminus-ttf/) as Main Font and set size to `12`
* check a signal tooltip with args that tries to render at size 10 or something
### Minimal reproduction project (MRP)
no project necessary | bug,topic:gui | low | Minor |
2,550,943,041 | next.js | "next dev --turbo" fails in WASM with error (`turbo.createProject` is not supported by the wasm bindings) | ### Link to the code that reproduces this issue
https://stackblitz.com/edit/stackblitz-starters-yew3c9
### To Reproduce
1. Start `next` dev server with Turbopack (`next dev --turbo`) in WASM (e.g. using webcontainers.io)
2. The `dev` command fails with error
```
~/projects/stackblitz-starters-yew3c9
โฏ npm install
added 10 packages, and changed 6 packages in 34s
144 packages are looking for funding
run `npm fund` for details
~/projects/stackblitz-starters-yew3c9 34s
โฏ npx next dev --turbo
โฒ Next.js 15.0.0-canary.171 (turbo)
- Local: http://localhost:3000
โ Starting...
Downloading swc package @next/swc-wasm-nodejs... to /home/.cache/next-swc
Error: `turbo.createProject` is not supported by the wasm bindings.
at Object.createProject (/home/projects/stackblitz-starters-yew3c9/node_modules/next/dist/build/swc/index.js:808:31)
at createHotReloaderTurbopack (/home/projects/stackblitz-starters-yew3c9/node_modules/next/dist/server/dev/hot-reloader-turbopack.js:121:42)
at async startWatcher (/home/projects/stackblitz-starters-yew3c9/node_modules/next/dist/server/lib/router-utils/setup-dev-bundler.js:164:38)
at async setupDevBundler (/home/projects/stackblitz-starters-yew3c9/node_modules/next/dist/server/lib/router-utils/setup-dev-bundler.js:814:20)
at async Span.traceAsyncFn (/home/projects/stackblitz-starters-yew3c9/node_modules/next/dist/trace/trace.js:157:20)
at async initialize (/home/projects/stackblitz-starters-yew3c9/node_modules/next/dist/server/lib/router-server.js:87:30)
at async Server.eval (/home/projects/stackblitz-starters-yew3c9/node_modules/next/dist/server/lib/start-server.js:266:36)
```
### Current vs. Expected behavior
I expected `next dev --turbo` to work in WASM mode so I could have fast hot reloading, but it throws this error
### Provide environment information
```bash
❯ npx next info
Operating System:
Platform: linux
Arch: x64
Version: Ubuntu 20.04.0 LTS
Available memory (MB): NaN
Available CPU cores: 8
Binaries:
Node: 18.20.3
npm: 10.2.3
Yarn: 1.22.19
pnpm: 8.15.6
Relevant Packages:
next: 15.0.0-canary.171 // Latest available version is detected (15.0.0-canary.171).
eslint-config-next: 13.5.1
react: 18.2.0
react-dom: 18.2.0
typescript: 5.2.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
I tested my reproduction against "15.0.0-canary.171" (latest canary release), and "14.2.13" (latest stable release). On stable it outputs a different error: `TypeError: bindings.turbo.createProject is not a function`.
I found the code that throws this: https://github.com/vercel/next.js/blob/3ed9f4b3f4d1f8b431ff04d4e6a45a949680a31f/packages/next/src/build/swc/index.ts#L1063
Is wasm bindings support planned for turbopack anytime soon? Was very bummed to see this, since it considerably slows down next.js in browser IDEs. | Turbopack | low | Critical |
2,550,968,062 | next.js | Next.js ResponseCookies function crashes with an unhandled decodeURIComponent exception if a cookie contains a % character | ### Link to the code that reproduces this issue
https://github.com/Sathosk/reponse-cookies-issue-reproduction-app
### To Reproduce
1. Run npm run dev
2. Open localhost:3000
### Current vs. Expected behavior
The application should handle the situation gracefully and not crash.
Instead, the following error occurs:
```bash
⨯ URIError: URI malformed
at decodeURIComponent (<anonymous>)
at Home (./src/app/page.tsx:11:78)
at AsyncLocalStorage.run (node:async_hooks:346:14)
at stringify (<anonymous>)
at AsyncResource.runInAsyncScope (node:async_hooks:206:9)
digest: "2977456002"
```
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 10 Pro
Available memory (MB): 16333
Available CPU cores: 12
Binaries:
Node: 20.14.0
npm: N/A
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.0-canary.171 // Latest available version is detected (15.0.0-canary.171).
eslint-config-next: N/A
react: 19.0.0-rc-778e1ed2-20240926
react-dom: 19.0.0-rc-778e1ed2-20240926
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Middleware, Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local), Other (Deployed)
### Additional context
The issue seems to stem from the ResponseCookies function that Next.js provides for creating a new Set-Cookie header.
Before version [14.2.8](https://github.com/vercel/next.js/commit/55cdf2b2bb279a269f1d4007fb7116f1da9c5ac9), cookies set in middleware could not be synced with RSC due to the request-response cycle. To bypass this issue, I implemented a custom function:
```
function applySetCookie(req: NextRequest, res: NextResponse): void {
// parse the outgoing Set-Cookie header
const setCookieHeader = res.headers.getSetCookie()
const parsedCookies = parseSetCookies(setCookieHeader) // This used to be the ResponseCookies function provided by Next.js
// Build a new Cookie header for the request by adding the setCookies
const newReqHeaders = new Headers(req.headers)
const newReqCookies = new RequestCookies(newReqHeaders)
parsedCookies.forEach((cookie) => {
newReqCookies.set(cookie)
})
// set "request header overrides" on the outgoing response
NextResponse.next({
request: { headers: newReqHeaders },
}).headers.forEach((value, key) => {
if (
key === 'x-middleware-override-headers' ||
key.startsWith('x-middleware-request-')
) {
res.headers.set(key, value)
}
})
}
```
This approach worked, but I faced the same issue whenever a cookie contained a % character. It's not uncommon for cookies to have such characters.
The core issue here is that ResponseCookies is not handling exceptions thrown by the decodeURIComponent function. My workaround was to write a custom parser for handling cookies, and I have not faced any problems since.
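The kind of guard described above can be sketched as follows. This is a hypothetical `safeDecode` helper of my own (not a Next.js API) that falls back to the raw value when decoding fails:

```javascript
// Hypothetical safeDecode helper: wraps decodeURIComponent so a malformed
// percent sequence (e.g. a bare "%") keeps the raw value instead of throwing.
function safeDecode(value) {
  try {
    return decodeURIComponent(value);
  } catch {
    // URIError: URI malformed -> return the cookie value as-is
    return value;
  }
}

console.log(safeDecode('100%25')); // "100%"
console.log(safeDecode('100%'));   // "100%" (raw value preserved, no crash)
```

A framework-level fix could apply the same try/catch around the decode step in the cookie parsing path.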
However, starting with version 14.2.8, the functionality of merging cookies from middleware was added in the source code, essentially doing what I was doing. But the problem persists with the use of ResponseCookies, which crashes the application when decodeURIComponent throws an exception.
While I can implement a fix on my end, I believe this issue should be handled by the framework to prevent similar crashes. | bug,Middleware,Runtime | low | Critical |
2,551,042,248 | ui | [bug]: Copy button does not open package manager choice | ### Describe the bug
All "copy this command" sections throughout the docs have a menu that prompts you to choose whether to copy the command for npm, yarn, pnpm, or bun. But I found one place in the "getting started with Vite" docs where it does not. At step `5` in the guide, where it tells you to copy-paste the dev dependency for `@types/node`, it does not ask for a package manager but just copies the npm version.
Link to page: [https://ui.shadcn.com/docs/installation/vite](https://ui.shadcn.com/docs/installation/vite)
### Affected component/components
No component
### How to reproduce
1. Go to [https://ui.shadcn.com/docs/installation/vite](https://ui.shadcn.com/docs/installation/vite)
2. Go to step 5
3. Click the copy button for the `npm i -D @types/node` command
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Google chrome on a mac
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,551,053,502 | vscode | [VSCode Extension API] Allow extensions to define custom disabled messages when a command is disabled | <!-- โ ๏ธโ ๏ธ Do Not Delete This! feature_request_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
When a command is disabled through the `enablement` property, the editor and explorer menus simply render a disabled button that shows the `command.title` as a tooltip.
It would be great to have the ability to customize the tooltip for the disabled button to show a custom error message instead.
An example of what the API could look like:
```json
{
"contributes": {
"commands": [
{
"command": "extension.sayHello",
"title": "Hello World",
"enablement": "editorLangId == markdown && !isLinux",
"error": {
"editorLangId != markdown": "This command is only available in markdown files",
"isLinux": "This command is only available on Linux"
}
}
]
}
}
```
Another way to do it would be to allow developers to control the tooltip through a custom context variable
```json
{
"contributes": {
"commands": [
{
"command": "extension.sayHello",
"title": "Hello World",
"enablement": "editorLangId == markdown && !isLinux",
"tooltip": "extension.sayHello.tooltip"
}
]
}
}
``` | feature-request,menus | low | Critical |
2,551,064,570 | ui | [feat]: Add tailwind content step for vite guide | ### Feature description
In the guide for setting up shadcnui with vite, [https://ui.shadcn.com/docs/installation/vite](https://ui.shadcn.com/docs/installation/vite), I suggest we add a new section between the current steps 2 and 3 that tells the user to change the `content` field in their `tailwind.config.js` to include `content: ['./index.html', './src/**/*.{js,ts,jsx,tsx}']`.

This is a step that is needed to get it to work. I followed the steps that are currently in the docs and it did not work, as I got the following issue:
```
No Tailwind CSS configuration found at /Users/me/Desktop/project
It is likely you do not have Tailwind CSS installed or have an invalid configuration.
Install Tailwind CSS then try again.
Visit https://tailwindcss.com/docs/guides/vite to get started.
```
So it's a necessary step to get it to work now; after I changed the `content` field it started working. It's also what Tailwind has right after installing the package, so it makes sense to include tbh

### Affected component/components
_No response_
### Additional Context
I could make a quick simple PR for this to get it out.
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,551,080,435 | flutter | Padding inside FilledButton behaves differently on web in a mobile browser vs a desktop browser | ### Steps to reproduce
1. Run the following code with flutter run -d chrome
```dart
import 'package:flutter/material.dart';
void main() {
runApp(
MaterialApp(
home: Scaffold(
body: Center(
child: SizedBox(
height: 40,
child: FilledButton(
style: FilledButton.styleFrom(
padding: const EdgeInsets.all(16),
),
onPressed: () {},
child: const Text("test"),
),
),
),
),
),
);
}
```
2. open a chrome window on the resulting url
3. open a browser window on the resulting url with a iPhone simulator or an Android Emulator
### Expected results
The two test button looks the same
### Actual results
The two button looks really different.
The button text is visible on a chrome browser on macOS.
The button text is not visible on a chrome browser on iPhone/Android.
The mobile version is probably the correct one, with the text being cropped by the height constraint + big padding, but debugging was difficult because on desktop it looked good.
<img width="1723" alt="image" src="https://github.com/user-attachments/assets/c2b20152-7956-4b56-9878-7e9a9651cf94">
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(
MaterialApp(
home: Scaffold(
body: Center(
child: SizedBox(
height: 40,
child: FilledButton(
style: FilledButton.styleFrom(
padding: const EdgeInsets.all(16),
),
onPressed: () {},
child: const Text("test"),
),
),
),
),
),
);
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
<img width="1723" alt="image" src="https://github.com/user-attachments/assets/c2b20152-7956-4b56-9878-7e9a9651cf94">
</details>
### Logs
<details open><summary>Logs</summary>
No interesting logs in the console
```console
Performing hot restart...
Waiting for connection from debug service on Chrome...
Restarted application in 45ms.
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[!] Flutter (Channel [user-branch], 3.22.0, on macOS 14.6.1 23G93 darwin-arm64,
locale en-IT)
! Flutter version 3.22.0 on channel [user-branch] at
/opt/homebrew/Caskroom/flutter/3.19.4/flutter
Currently on an unknown channel. Run `flutter channel` to switch to an
official channel.
If that doesn't fix the issue, reinstall Flutter by following instructions
at https://flutter.dev/docs/get-started/install.
! Upstream repository unknown source is not a standard remote.
Set environment variable "FLUTTER_GIT_URL" to unknown source to dismiss
this error.
• Framework revision 5dcb86f68f (5 months ago), 2024-05-09 07:39:20 -0500
• Engine revision f6344b75dc
• Dart version 3.4.0
• DevTools version 2.34.3
• Pub download mirror
https://pvotaltech.jfrog.io/artifactory/api/pub/pvotal-pub/
• If those were intentional, you can disregard the above warnings; however
it is recommended to use "git" directly to perform update checks and
upgrades.
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.0)
• Android SDK at /Users/clucera/Library/Android/sdk
• Platform android-34, build-tools 33.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.7+0-17.0.7b1000.6-10550314)
• All Android licenses accepted.
[!] Xcode - develop for iOS and macOS (Xcode 16.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16A242d
! iOS 18.0 Simulator not installed; this may be necessary for iOS and macOS
development.
To download and install the platform, open Xcode, select Xcode > Settings
> Platforms,
and click the GET button for the required platform.
For more information, please visit:
https://developer.apple.com/documentation/xcode/installing-additional-simulator-runtimes
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2023.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.7+0-17.0.7b1000.6-10550314)
[✓] IntelliJ IDEA Ultimate Edition (version 2023.1.6)
• IntelliJ at /Applications/IntelliJ IDEA.app
• Flutter plugin version 78.2.1
• Dart plugin version 231.9414.10
[✓] VS Code (version 1.88.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.90.0
[✓] Connected device (4 available)
• iPhone 15 Pro Max (mobile) • 7419A062-CAA2-4A3E-B7C1-4273A165C394 • ios • com.apple.CoreSimulator.SimRuntime.iOS-17-5 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.6.1 23G93 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.6.1 23G93 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.60
[✓] Network resources
• All expected network resources are available.
```
</details>
| framework,f: material design,d: api docs,platform-web,has reproducible steps,P2,c: parity,team-design,triaged-design,found in release: 3.24,found in release: 3.27 | low | Critical |
2,551,082,496 | go | crypto: support ACVP testing | ### Proposal Details
_Note: not a formal proposal since this is internal work without new exposed APIs or observable behaviour. It's primarily surfacing FIPS work for tracking purposes._
## Background
Go's FIPS 140-3 validation (#69536) will require that we demonstrate that we are only using approved cryptographic algorithms. Doing so is a pre-requisite for [cryptographic _module_ verification (CMVP)][cmvp].
The NIST [Cryptographic Algorithm Validation Program (CAVP)][cavp] allows for certification of algorithm implementations via the [Automated Cryptographic Validation Test Program (ACVT)][acvt] using the [Automated Cryptographic Validation Protocol (ACVP)][acvp]. The [protocol specification][acvp-proto] is available online in an IETF RFC-like format.
[cmvp]: https://csrc.nist.gov/Projects/cryptographic-module-validation-program
[cavp]: https://csrc.nist.gov/projects/cryptographic-algorithm-validation-program
[acvt]: https://www.nccoe.nist.gov/automation-nist-cryptographic-module-validation-program
[acvp]: https://github.com/usnistgov/ACVP
[acvp-proto]: https://pages.nist.gov/ACVP/draft-fussell-acvp-spec.html
## BoringSSL acvptool
Thankfully, the BoringSSL project has [already implemented][acvp-tool] and [documented][bssl-acvp] a pure-Go client that can both interact with the demo NIST server and operate in an offline mode suitable for CI. It "lowers" the more complex NIST protocol into a simple request/response protocol, used over stdin/stdout, to speak to forked module wrapper processes. `@agl` briefly discusses its origin [in a blog post][agl-acvp].
[acvp-tool]: https://boringssl.googlesource.com/boringssl/+/master/util/fipstools/acvp/acvptool/
[bssl-acvp]: https://boringssl.googlesource.com/boringssl/+/refs/heads/master/util/fipstools/acvp/ACVP.md
[agl-acvp]: https://www.imperialviolet.org/2020/12/23/acvp.html
## Requirements
To meet the testing requirements, Go should offer an `acvptool`-compatible module wrapper for the Go FIPS module.
It should be implemented so that it's possible to build and test from different operating environments (OEs) and with/without processor algorithm accelerators (PAA) features.
It should be integrated into CI so that there is continual assurance that our algorithms will pass when performing live ACVP testing with the NIST test, or production servers.
Since the [license in BoringSSL for new code][bssl-license] (such as the acvp tooling) is compatible with the [Go repository license][go-license], I believe we have flexibility in terms of whether we vendor the tooling and test data or use both as-is from the BoringSSL repo. The existing Go code in that repo has no external dependencies that would pose a challenge for integration here.
[bssl-license]: https://github.com/google/boringssl/blob/dec0800988062ab0b1d5ea5f3c9575f3392bcd37/LICENSE#L144C1-L158
[go-license]: https://go.dev/LICENSE | Testing,NeedsInvestigation,FeatureRequest | medium | Major |
2,551,084,368 | godot | When building projects, Godot doesn't consistently use the same dotnet executable from C++ and C#, specifically on macOS | ### Tested versions
Looks to be reproducible back to 4.0 based on the commits. I repro'ed it in 4.3
### System information
Godot v4.3.stable.mono - macOS 14.5.0 - Vulkan (Mobile) - integrated Apple M2 Max - Apple M2 Max (12 Threads)
### Issue description
Dotnet isn't generally required to be in a specific location, it has various mechanism to allow other projects/executables to find it, through environment variables, text file redirects and other methods. See https://learn.microsoft.com/en-us/dotnet/core/tools/dotnet-environment-variables#dotnet_root-dotnet_rootx86-dotnet_root_x86-dotnet_root_x64
But Godot has 2 different methods of looking up the dotnet executable, one in C# and one in C++, and they don't match.
C# is only looking for the executable in 2 locations:
["/usr/local/share/dotnet/dotnet";](https://github.com/godotengine/godot/blob/master/modules/mono/editor/GodotTools/GodotTools/Build/DotNetFinder.cs#L29)
and
["/usr/local/share/dotnet/x64/dotnet"](https://github.com/godotengine/godot/blob/master/modules/mono/editor/GodotTools/GodotTools/Build/DotNetFinder.cs#L23)
The c++ is correctly using the common c# environment variables to find the right version, first checking environment variables like DOTNET_ROOT_ARM64 and then DOTNET_ROOT (it could maybe be slightly better by searching the path):
https://github.com/godotengine/godot/blob/master/modules/mono/editor/hostfxr_resolver.cpp#L335
In my tests, I was hoping that [DOTNET_HOST_PATH](https://learn.microsoft.com/en-us/dotnet/core/tools/dotnet-environment-variables#dotnet_host_path) would have been set by the pipeline out of the box, but it's not set at that code location in DotNetFinder.cs.
I think the solution would end up looking maybe like this (but maybe there is a real library function we could be calling):
https://github.com/dotnet/roslyn/blob/main/src/Compilers/Shared/RuntimeHostInfo.cs#L59
### Steps to reproduce
On macOS, install dotnet or the SDKs into a non-standard location (as you might do on a build machine, say).
Then set DOTNET_ROOT to point at that location as the dotnet documentation suggests: https://learn.microsoft.com/en-us/dotnet/core/install/macos#set-environment-variables-system-wide
Then build for Android, and the C# side will not be able to find the right dotnet executable.
### Minimal reproduction project (MRP)
Since the bug comes from the dotnet installation location variations, not the project, any project will show the problem. | discussion,topic:dotnet | low | Critical |
2,551,108,595 | rust | Failed To Run Custom Build Command For `proc-macro-test v0.0.0` During Windows Compile From Source | <!--
Thank you for filing a bug report! ๐ Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
Tried compiling Rust from source on Windows, but I'm getting the error below: ``failed to run custom build command for `proc-macro-test v0.0.0` ``.
Interestingly enough, when I was compiling `1.76.0`, it worked fine with the same setup.
Information:
* Windows 11 23H2 22631.4169
* Visual Studio 2022 17.11.4
* MSVC 143 - VS 2022 C++ x64/x86 build tools (latest)
* Windows 11 SDK (10.0.22621.0)
* C++ ATL for latest v143 build tools (x86 & x64)
* C++ Clang Compiler for Windows (17.0.3)
* MSBuild support for LLVM (clang-cl) toolset
* CMake: 3.30.3
* Ninja: 1.11.1
* Python: 3.12.1
Command:
```bash
# Failed during check
python .\x.py check
# Failed during build
python .\x.py build
```
Error:
```
error: failed to run custom build command for `proc-macro-test v0.0.0`
```
I expected to see this happen: *Build completed successfully after running `python .\x.py check`*
Instead, this happened: *Build completed unsuccessfully after running `python .\x.py check`*
### Meta
N/A
<details><summary>Backtrace</summary>
<p>
```
Checking hir-expand v0.0.0 (C:\Users\Aries\Desktop\Project\Repository\P\rust\src\tools\rust-analyzer\crates\hir-expand)
error: failed to run custom build command for `proc-macro-test v0.0.0 (C:\Users\Aries\Desktop\Project\Repository\P\rust\src\tools\rust-analyzer\crates\proc-macro-srv\proc-macro-test)`
Caused by:
process didn't exit successfully: `C:\Users\Aries\Desktop\Project\Repository\P\rust\build\x86_64-pc-windows-msvc\stage0-tools\release\build\proc-macro-test-ac25f6fb587a2d77\build-script-build` (exit code: 101)
--- stdout
cargo:rerun-if-changed=imp
Creating C:\Users\Aries\Desktop\Project\Repository\P\rust\build\x86_64-pc-windows-msvc\stage0-tools\x86_64-pc-windows-msvc\release\build\proc-macro-test-01ff525158874abc\out\proc-macro-test-imp-staging
Creating C:\Users\Aries\Desktop\Project\Repository\P\rust\build\x86_64-pc-windows-msvc\stage0-tools\x86_64-pc-windows-msvc\release\build\proc-macro-test-01ff525158874abc\out\proc-macro-test-imp-staging\src
Copying C:\Users\Aries\Desktop\Project\Repository\P\rust\src\tools\rust-analyzer\crates\proc-macro-srv\proc-macro-test\imp\Cargo.toml to C:\Users\Aries\Desktop\Project\Repository\P\rust\build\x86_64-pc-windows-msvc\stage0-tools\x86_64-pc-windows-msvc\release\build\proc-macro-test-01ff525158874abc\out\proc-macro-test-imp-staging\Cargo.toml
Copying C:\Users\Aries\Desktop\Project\Repository\P\rust\src\tools\rust-analyzer\crates\proc-macro-srv\proc-macro-test\imp\build.rs to C:\Users\Aries\Desktop\Project\Repository\P\rust\build\x86_64-pc-windows-msvc\stage0-tools\x86_64-pc-windows-msvc\release\build\proc-macro-test-01ff525158874abc\out\proc-macro-test-imp-staging\build.rs
Copying C:\Users\Aries\Desktop\Project\Repository\P\rust\src\tools\rust-analyzer\crates\proc-macro-srv\proc-macro-test\imp\src\lib.rs to C:\Users\Aries\Desktop\Project\Repository\P\rust\build\x86_64-pc-windows-msvc\stage0-tools\x86_64-pc-windows-msvc\release\build\proc-macro-test-01ff525158874abc\out\proc-macro-test-imp-staging\src\lib.rs
Running "\\\\?\\C:\\Users\\Aries\\Desktop\\Project\\Repository\\P\\rust\\build\\x86_64-pc-windows-msvc\\stage0\\bin\\cargo.exe" "build" "-p" "proc-macro-test-impl" "--message-format" "json" "--target-dir" "C:\\Users\\Aries\\Desktop\\Project\\Repository\\P\\rust\\build\\x86_64-pc-windows-msvc\\stage0-tools\\x86_64-pc-windows-msvc\\release\\build\\proc-macro-test-01ff525158874abc\\out\\target" "--target" "x86_64-pc-windows-msvc"
proc-macro-test-impl failed to build
============ stdout ============
{"reason":"compiler-message","package_id":"path+file:///C:/Users/Aries/Desktop/Project/Repository/P/rust/build/x86_64-pc-windows-msvc/stage0-tools/x86_64-pc-windows-msvc/release/build/proc-macro-test-01ff525158874abc/out/proc-macro-test-imp-staging#proc-macro-test-impl@0.0.0","manifest_path":"C:\\Users\\Aries\\Desktop\\Project\\Repository\\P\\rust\\build\\x86_64-pc-windows-msvc\\stage0-tools\\x86_64-pc-windows-msvc\\release\\build\\proc-macro-test-01ff525158874abc\\out\\proc-macro-test-imp-staging\\Cargo.toml","target":{"kind":["custom-build"],"crate_types":["bin"],"name":"build-script-build","src_path":"C:\\Users\\Aries\\Desktop\\Project\\Repository\\P\\rust\\build\\x86_64-pc-windows-msvc\\stage0-tools\\x86_64-pc-windows-msvc\\release\\build\\proc-macro-test-01ff525158874abc\\out\\proc-macro-test-imp-staging\\build.rs","edition":"2021","doc":false,"doctest":false,"test":false},"message":{"rendered":"error: linking with `link.exe` failed: exit code: 1104\n |\n = note: \"C:\\\\Program Files\\\\Microsoft Visual Studio\\\\2022\\\\Enterprise\\\\VC\\\\Tools\\\\MSVC\\\\14.41.34120\\\\bin\\\\HostX64\\\\x64\\\\link.exe\" \"/NOLOGO\" \"C:\\\\Users\\\\Aries\\\\AppData\\\\Local\\\\Temp\\\\rustcdbUXZm\\\\symbols.o\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0-tools\\\\x86_64-pc-windows-msvc\\\\release\\\\build\\\\proc-macro-test-01ff525158874abc\\\\out\\\\target\\\\debug\\\\build\\\\proc-macro-test-impl-653802d6ec5b86af\\\\build_script_build-653802d6ec5b86af.build_script_build.e80398efb4fee8ff-cgu.0.rcgu.o\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0-tools\\\\x86_64-pc-windows-msvc\\\\release\\\\build\\\\proc-macro-test-01ff525158874abc\\\\out\\\\target\\\\debug\\\\build\\\\proc-macro-test-impl-653802d6ec5b86af\\\\build_script_build-653802d6ec5b86af.29g61l5aniiv8i4hmxt9nq7no.rcgu.o\" 
\"/LIBPATH:C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0-tools\\\\x86_64-pc-windows-msvc\\\\release\\\\build\\\\proc-macro-test-01ff525158874abc\\\\out\\\\target\\\\debug\\\\deps\" \"/LIBPATH:C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\libstd-d7a86f21fcd377c7.rlib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\libpanic_unwind-97f6a0482881a03a.rlib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\librustc_demangle-f8c4d6a2240f107f.rlib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\libstd_detect-803b4d5ce4fcd522.rlib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\libhashbrown-5e5ab7fb8d3e9a6b.rlib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\librustc_std_workspace_alloc-7846558dfa99a578.rlib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\libunwind-3adc2db30827f7fe.rlib\" 
\"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\libcfg_if-c91146a1b584a0a7.rlib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\liballoc-c032859c81f4576b.rlib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\librustc_std_workspace_core-628fee62996a202b.rlib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\libcore-dfdcb1635a201156.rlib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\libcompiler_builtins-1f67c2a5a11a0b2e.rlib\" \"kernel32.lib\" \"advapi32.lib\" \"kernel32.lib\" \"ntdll.lib\" \"userenv.lib\" \"ws2_32.lib\" \"kernel32.lib\" \"ws2_32.lib\" \"kernel32.lib\" \"/defaultlib:libcmt\" \"/NXCOMPAT\" \"/LIBPATH:C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\" \"/OUT:C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0-tools\\\\x86_64-pc-windows-msvc\\\\release\\\\build\\\\proc-macro-test-01ff525158874abc\\\\out\\\\target\\\\debug\\\\build\\\\proc-macro-test-impl-653802d6ec5b86af\\\\build_script_build-653802d6ec5b86af.exe\" \"/OPT:REF,NOICF\" \"/DEBUG\" \"/PDBALTPATH:%_PDB%\" \"/NATVIS:C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\etc\\\\intrinsic.natvis\" 
\"/NATVIS:C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\etc\\\\liballoc.natvis\" \"/NATVIS:C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\etc\\\\libcore.natvis\" \"/NATVIS:C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\etc\\\\libstd.natvis\"\n = note: LINK : fatal error LNK1104: cannot open file 'C:\\Users\\Aries\\Desktop\\Project\\Repository\\P\\rust\\build\\x86_64-pc-windows-msvc\\stage0-tools\\x86_64-pc-windows-msvc\\release\\build\\proc-macro-test-01ff525158874abc\\out\\target\\debug\\build\\proc-macro-test-impl-653802d6ec5b86af\\build_script_build-653802d6ec5b86af.exe'\r\n \n\n","$message_type":"diagnostic","children":[{"children":[],"code":null,"level":"note","message":"\"C:\\\\Program Files\\\\Microsoft Visual Studio\\\\2022\\\\Enterprise\\\\VC\\\\Tools\\\\MSVC\\\\14.41.34120\\\\bin\\\\HostX64\\\\x64\\\\link.exe\" \"/NOLOGO\" \"C:\\\\Users\\\\Aries\\\\AppData\\\\Local\\\\Temp\\\\rustcdbUXZm\\\\symbols.o\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0-tools\\\\x86_64-pc-windows-msvc\\\\release\\\\build\\\\proc-macro-test-01ff525158874abc\\\\out\\\\target\\\\debug\\\\build\\\\proc-macro-test-impl-653802d6ec5b86af\\\\build_script_build-653802d6ec5b86af.build_script_build.e80398efb4fee8ff-cgu.0.rcgu.o\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0-tools\\\\x86_64-pc-windows-msvc\\\\release\\\\build\\\\proc-macro-test-01ff525158874abc\\\\out\\\\target\\\\debug\\\\build\\\\proc-macro-test-impl-653802d6ec5b86af\\\\build_script_build-653802d6ec5b86af.29g61l5aniiv8i4hmxt9nq7no.rcgu.o\" 
\"/LIBPATH:C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0-tools\\\\x86_64-pc-windows-msvc\\\\release\\\\build\\\\proc-macro-test-01ff525158874abc\\\\out\\\\target\\\\debug\\\\deps\" \"/LIBPATH:C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\libstd-d7a86f21fcd377c7.rlib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\libpanic_unwind-97f6a0482881a03a.rlib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\librustc_demangle-f8c4d6a2240f107f.rlib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\libstd_detect-803b4d5ce4fcd522.rlib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\libhashbrown-5e5ab7fb8d3e9a6b.rlib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\librustc_std_workspace_alloc-7846558dfa99a578.rlib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\libunwind-3adc2db30827f7fe.rlib\" 
\"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\libcfg_if-c91146a1b584a0a7.rlib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\liballoc-c032859c81f4576b.rlib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\librustc_std_workspace_core-628fee62996a202b.rlib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\libcore-dfdcb1635a201156.rlib\" \"C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\\\\libcompiler_builtins-1f67c2a5a11a0b2e.rlib\" \"kernel32.lib\" \"advapi32.lib\" \"kernel32.lib\" \"ntdll.lib\" \"userenv.lib\" \"ws2_32.lib\" \"kernel32.lib\" \"ws2_32.lib\" \"kernel32.lib\" \"/defaultlib:libcmt\" \"/NXCOMPAT\" \"/LIBPATH:C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\x86_64-pc-windows-msvc\\\\lib\" \"/OUT:C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0-tools\\\\x86_64-pc-windows-msvc\\\\release\\\\build\\\\proc-macro-test-01ff525158874abc\\\\out\\\\target\\\\debug\\\\build\\\\proc-macro-test-impl-653802d6ec5b86af\\\\build_script_build-653802d6ec5b86af.exe\" \"/OPT:REF,NOICF\" \"/DEBUG\" \"/PDBALTPATH:%_PDB%\" \"/NATVIS:C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\etc\\\\intrinsic.natvis\" 
\"/NATVIS:C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\etc\\\\liballoc.natvis\" \"/NATVIS:C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\etc\\\\libcore.natvis\" \"/NATVIS:C:\\\\Users\\\\Aries\\\\Desktop\\\\Project\\\\Repository\\\\P\\\\rust\\\\build\\\\x86_64-pc-windows-msvc\\\\stage0\\\\lib\\\\rustlib\\\\etc\\\\libstd.natvis\"","rendered":null,"spans":[]},{"children":[],"code":null,"level":"note","message":"LINK : fatal error LNK1104: cannot open file 'C:\\Users\\Aries\\Desktop\\Project\\Repository\\P\\rust\\build\\x86_64-pc-windows-msvc\\stage0-tools\\x86_64-pc-windows-msvc\\release\\build\\proc-macro-test-01ff525158874abc\\out\\target\\debug\\build\\proc-macro-test-impl-653802d6ec5b86af\\build_script_build-653802d6ec5b86af.exe'\r\n","rendered":null,"spans":[]}],"code":null,"level":"error","message":"linking with `link.exe` failed: exit code: 1104","spans":[]}}
{"reason":"compiler-message","package_id":"path+file:///C:/Users/Aries/Desktop/Project/Repository/P/rust/build/x86_64-pc-windows-msvc/stage0-tools/x86_64-pc-windows-msvc/release/build/proc-macro-test-01ff525158874abc/out/proc-macro-test-imp-staging#proc-macro-test-impl@0.0.0","manifest_path":"C:\\Users\\Aries\\Desktop\\Project\\Repository\\P\\rust\\build\\x86_64-pc-windows-msvc\\stage0-tools\\x86_64-pc-windows-msvc\\release\\build\\proc-macro-test-01ff525158874abc\\out\\proc-macro-test-imp-staging\\Cargo.toml","target":{"kind":["custom-build"],"crate_types":["bin"],"name":"build-script-build","src_path":"C:\\Users\\Aries\\Desktop\\Project\\Repository\\P\\rust\\build\\x86_64-pc-windows-msvc\\stage0-tools\\x86_64-pc-windows-msvc\\release\\build\\proc-macro-test-01ff525158874abc\\out\\proc-macro-test-imp-staging\\build.rs","edition":"2021","doc":false,"doctest":false,"test":false},"message":{"rendered":"error: aborting due to 1 previous error\n\n","$message_type":"diagnostic","children":[],"code":null,"level":"error","message":"aborting due to 1 previous error","spans":[]}}
{"reason":"build-finished","success":false}
============ stderr ============
Compiling proc-macro-test-impl v0.0.0 (C:\Users\Aries\Desktop\Project\Repository\P\rust\build\x86_64-pc-windows-msvc\stage0-tools\x86_64-pc-windows-msvc\release\build\proc-macro-test-01ff525158874abc\out\proc-macro-test-imp-staging)
error: could not compile `proc-macro-test-impl` (build script) due to 2 previous errors
--- stderr
thread 'main' panicked at crates\proc-macro-srv\proc-macro-test\build.rs:91:9:
proc-macro-test-impl failed to build
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
Build completed unsuccessfully in 0:09:16
```
</p>
</details>
| O-windows,T-bootstrap,C-bug | low | Critical |
2,551,115,819 | vscode | Enterprise Support | # Support for policies would be greatly appreciated.
You see, the other week I managed to meet some of the Intune PG members; they were super nice and showed me how one can import the `vscode.admx` file into Intune.
But we were all surprised to see that there is only a single policy setting available as of today.
```XML
<?xml version="1.0" encoding="utf-8"?>
<policyDefinitions revision="1.1" schemaVersion="1.0">
<policyNamespaces>
<target prefix="VSCode" namespace="Microsoft.Policies.VSCode" />
</policyNamespaces>
<resources minRequiredRevision="1.0" />
<supportedOn>
<definitions>
<definition name="Supported_1_67" displayName="$(string.Supported_1_67)" />
</definitions>
</supportedOn>
<categories>
<category displayName="$(string.Application)" name="Application" />
<category displayName="$(string.Category_updateConfigurationTitle)" name="updateConfigurationTitle"><parentCategory ref="Application" /></category>
</categories>
<policies>
<policy name="UpdateMode" class="Both" displayName="$(string.UpdateMode)" explainText="$(string.UpdateMode_updateMode)" key="Software\Policies\Microsoft\VSCode" presentation="$(presentation.UpdateMode)">
<parentCategory ref="updateConfigurationTitle" />
<supportedOn ref="Supported_1_67" />
<elements>
<enum id="UpdateMode" valueName="UpdateMode">
<item displayName="$(string.UpdateMode_none)"><value><string>none</string></value></item>
<item displayName="$(string.UpdateMode_manual)"><value><string>manual</string></value></item>
<item displayName="$(string.UpdateMode_start)"><value><string>start</string></value></item>
<item displayName="$(string.UpdateMode_default)"><value><string>default</string></value></item>
</enum>
</elements>
</policy>
</policies>
</policyDefinitions>
```
# The following additional policies would be a great start
1. Disable access to Extension Marketplace
2. Allow list for extensions
3. Block list for extensions
4. Mandatory extensions list
5. Extension update control
6. Alternative location for Extension Marketplace
7. Disable settings synchronization
> I believe you'll find most of these already exist as issues
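For illustration, here is a hedged sketch of what one of the requested policies (an extension allow list) might look like, following the same ADMX conventions as the existing `UpdateMode` policy above. The policy name, string IDs, and registry key layout are invented for this example:

```XML
<policy name="AllowedExtensions" class="Both" displayName="$(string.AllowedExtensions)"
        explainText="$(string.AllowedExtensions_explain)" key="Software\Policies\Microsoft\VSCode"
        presentation="$(presentation.AllowedExtensions)">
  <parentCategory ref="Application" />
  <supportedOn ref="Supported_1_67" />
  <elements>
    <!-- One registry value per allowed extension ID, e.g. "ms-python.python" -->
    <list id="AllowedExtensionsList" key="Software\Policies\Microsoft\VSCode\AllowedExtensions" />
  </elements>
</policy>
```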
Thank you so much! | install-update,config,under-discussion | low | Minor |
2,551,129,463 | rust | Unable to match non_exhaustive enum tuple variants with rest pattern | Tuple enum variants `Tuple(i32)` from external crates marked with `#[non_exhaustive]` should be able to be matched by the RestPattern `Tuple(val, ..)`. But it fails to compile.
I tried this code:
```rust
// In an external lib crate
#[non_exhaustive]
pub enum ExtNonExhaustiveVariant {
ExhaustiveUnit,
#[non_exhaustive]
Unit,
#[non_exhaustive]
Tuple(i32),
#[non_exhaustive]
StructNoField {},
#[non_exhaustive]
Struct {
field: i32,
},
}
// In the bin crate
fn main() {
match ExtNonExhaustiveVariant::ExhaustiveUnit {
ExtNonExhaustiveVariant::ExhaustiveUnit => 0, // OK
ExtNonExhaustiveVariant::Unit { .. } => 0, // OK
ExtNonExhaustiveVariant::Tuple(val, ..) => val, // FAIL
ExtNonExhaustiveVariant::StructNoField { .. } => 0, // OK
ExtNonExhaustiveVariant::Struct { field, .. } => field, // OK
_ => 0,
};
}
```
I expected to see this happen: No compile error.
Instead, this happened:
```
error[E0603]: tuple variant `Tuple` is private
--> src/main.rs:9:34
|
9 | ExtNonExhaustiveVariant::Tuple(val, ..) => val,
| ^^^^^ private tuple variant
|
note: the tuple variant `Tuple` is defined here
--> /repro/lib/src/lib.rs:14:5
|
13 | #[non_exhaustive]
| ----------------- cannot be constructed because it is `#[non_exhaustive]`
14 | Tuple(i32),
| ^^^^^
For more information about this error, try `rustc --explain E0603`.
error: could not compile `bin` (bin "bin") due to 1 previous error
```
### Meta
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
``` | A-diagnostics,T-lang,T-compiler,C-bug,D-confusing | low | Critical |
2,551,134,185 | godot | .APK is unable to read the .OBB file | ### Tested versions
4.4dev and 4.3.1
(Both the editor and the templates are built from source)
### System information
Editor: Godot v4.4.dev unknown - Windows 10.0.22631 - Forward Mobile
Device: Redmi Note 8 Pro, Android 11
### Issue description
Using APK Expansion when exporting to Android produces an OBB and an APK file. However, after installing the APK and placing the OBB file, named `main.[code].com.[companyname].[packagename].obb`, in the `com.[companyname].[packagename]` folder under Android/Obb, the app gives the error `Couldn't load project data at path".". Is the .pck file missing?`. I assume the application is somehow unable to see the OBB file. The exact same project with the exact same settings, except with APK Expansion disabled, runs perfectly.
### Steps to reproduce
- Export an android project with APK expansion
- On a mobile device, download the APK and place the OBB file at the correct path
- Run the application
### Minimal reproduction project (MRP)
N/A | bug,platform:android,topic:export | low | Critical |
2,551,169,229 | pytorch | torch.is_grad_enabled() is False when using custom_op decorator | ### ๐ Describe the bug
I am trying to make a C++ operation compilable by defining a custom OP for my Python wrapper of the C++ functions.
Inside the custom op, I would like to know if gradients are currently enabled or not - because I have to create temporary data in the forward pass that is needed in the backward pass.
But when I use the `torch.library.custom_op` decorator, `torch.is_grad_enabled()` always returns `False` inside the decorated function.
It also remains false when I define a backward pass with `register_autograd`.
This is at least unexpected behaviour, and in my opinion likely a bug. Is there a way around this?
Currently using Pytorch 2.5 nightly from 20240731. For a minimal example, you can take the [example code from PyTorch](https://pytorch.org/tutorials/advanced/python_custom_ops.html).
Unrelated problem:
When I return a strided tensor (which is created in the C++ op), the backward pass does not work because it expects a non-strided gradient. Using the same strides in `register_fake` does not fix the problem either. I always have to make the tensor contiguous before returning.
### Versions
```
Collecting environment information...
PyTorch version: 2.5.0.dev20240731
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.29.3
Libc version: glibc-2.31
Python version: 3.10.0 | packaged by conda-forge | (default, Nov 20 2021, 02:24:10) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1062-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.183.01
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.5.0.dev20240731
[pip3] torchao==0.6.0.dev20240916+cu121
[pip3] torchaudio==2.4.0.dev20240731
[pip3] torcheval==0.0.7
[pip3] torchtune==0.2.1
[pip3] torchvision==0.20.0.dev20240731
[pip3] triton==3.0.0
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 h84fe81f_915 https://aws-ml-conda.s3.us-west-2.amazonaws.com
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.26.4 py310hb13e2d6_0 conda-forge
[conda] pytorch 2.5.0.dev20240731 py3.10_cuda12.1_cudnn9.1.0_0 pytorch-nightly
[conda] pytorch-cuda 12.1 ha16c6d3_6 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torchao 0.6.0.dev20240916+cu121 pypi_0 pypi
[conda] torchaudio 2.4.0.dev20240731 py310_cu121 pytorch-nightly
[conda] torcheval 0.0.7 pypi_0 pypi
[conda] torchtriton 3.0.0+dedb7bdf33 py310 pytorch-nightly
[conda] torchtune 0.2.1 pypi_0 pypi
[conda] torchvision 0.20.0.dev20240731 py310_cu121 pytorch-nightly
```
cc @svekars @brycebortree @sekyondaMeta @ezyang @chauhang @penguinwu @zou3519 @bdhirsh @yf225 | module: docs,triaged,module: custom-operators,oncall: pt2 | low | Critical |
2,551,192,366 | ollama | Request to Add Support for Helsinki-NLP Models | I would like to request adding support for the Helsinki-NLP models, specifically the ones available in this dataset: [Helsinki-NLP/opus-100](https://huggingface.co/datasets/Helsinki-NLP/opus-100/tree/main).
Although the current version of the software supports only GGUF files, I believe that making an exception for these models, which come in BIN format, would be beneficial. The Helsinki-NLP models are known for their high-quality translations, and they are very lightweight. This makes them an excellent choice, especially for users with low-end machines that might struggle with heavier models.
Supporting these models could significantly expand the translation capabilities while maintaining good performance, even on less powerful hardware. | model request | low | Major |
2,551,205,318 | go | x/tools/go/analysis/nilness: heuristics for flagging conversions from nil *T to interface | We have forever been discussing ways to make the nilness analyzer report conversions from a definitely-nil pointer of type *T to an interface type. The challenge is how to distinguish the legitimate uses (when a nil *T pointer is valid) from the mistakes. This issue is a place to publicly record some of the raw data we got from running a simple heuristic across the Go module mirror corpus and discuss potential improvements.
The analysis is just this patch to the existing code:
```diff
diff --git a/go/analysis/passes/nilness/nilness.go b/go/analysis/passes/nilness/nilness.go
index faaf12a93..8b049a4ea 100644
--- a/go/analysis/passes/nilness/nilness.go
+++ b/go/analysis/passes/nilness/nilness.go
@@ -121,6 +121,15 @@ func runFunc(pass *analysis.Pass, fn *ssa.Function) {
case *ssa.Send:
// (Not a runtime error, but a likely mistake.)
notNil(stack, instr, instr.Chan, "send to nil channel")
+
+ case *ssa.MakeInterface:
+ switch instr.X.Type().Underlying().(type) {
+ case *types.Slice, *types.Map:
+ // nils of these types are fine
+ default:
+ notNil(stack, instr, instr.X,
+ fmt.Sprintf("converting nil %s to interface %s", instr.X.Type(), instr.Type()))
+ }
}
}
```
https://go-mod-viewer.appspot.com/github.com/cilium/ebpf@v0.16.0/linker.go#L319: converting nil *github.com/cilium/ebpf/btf.Func to interface github.com/cilium/ebpf/btf.Type
https://go-mod-viewer.appspot.com/github.com/cilium/ebpf@v0.16.0/linker.go#L319: converting nil *github.com/cilium/ebpf/btf.Func to interface github.com/cilium/ebpf/btf.Type
https://go-mod-viewer.appspot.com/github.com/ava-labs/avalanchego@v1.11.11/vms/platformvm/txs/add_delegator_test.go#L207: converting nil *github.com/ava-labs/avalanchego/vms/platformvm/txs.AddDelegatorTx to interface any
https://go-mod-viewer.appspot.com/github.com/ava-labs/avalanchego@v1.11.11/vms/platformvm/txs/add_permissionless_delegator_tx_test.go#L1861: converting nil *github.com/ava-labs/avalanchego/vms/platformvm/txs.AddPermissionlessDelegatorTx to interface any
https://go-mod-viewer.appspot.com/github.com/ava-labs/avalanchego@v1.11.11/vms/platformvm/txs/add_permissionless_validator_tx_test.go#L1834: converting nil *github.com/ava-labs/avalanchego/vms/platformvm/txs.AddPermissionlessValidatorTx to interface any
https://go-mod-viewer.appspot.com/github.com/ava-labs/avalanchego@v1.11.11/vms/platformvm/txs/add_subnet_validator_test.go#L215: converting nil *github.com/ava-labs/avalanchego/vms/platformvm/txs.AddSubnetValidatorTx to interface any
https://go-mod-viewer.appspot.com/github.com/ava-labs/avalanchego@v1.11.11/vms/platformvm/txs/add_subnet_validator_test.go#L221: converting nil *github.com/ava-labs/avalanchego/vms/platformvm/txs.AddSubnetValidatorTx to interface any
https://go-mod-viewer.appspot.com/github.com/ava-labs/avalanchego@v1.11.11/vms/platformvm/txs/add_subnet_validator_test.go#L227: converting nil *github.com/ava-labs/avalanchego/vms/platformvm/txs.AddSubnetValidatorTx to interface any
https://go-mod-viewer.appspot.com/github.com/ava-labs/avalanchego@v1.11.11/vms/platformvm/txs/add_validator_test.go#L224: converting nil *github.com/ava-labs/avalanchego/vms/platformvm/txs.AddValidatorTx to interface any
https://go-mod-viewer.appspot.com/github.com/go-spring/spring-base@v1.1.3/log/plugin_filter.go#L28: converting nil *github.com/go-spring/spring-base/log.AcceptAllFilter to interface github.com/go-spring/spring-base/log.Filter
https://go-mod-viewer.appspot.com/github.com/go-spring/spring-base@v1.1.3/log/plugin_filter.go#L29: converting nil *github.com/go-spring/spring-base/log.DenyAllFilter to interface github.com/go-spring/spring-base/log.Filter
https://go-mod-viewer.appspot.com/github.com/go-spring/spring-base@v1.1.3/log/plugin_filter.go#L30: converting nil *github.com/go-spring/spring-base/log.LevelFilter to interface github.com/go-spring/spring-base/log.Filter
https://go-mod-viewer.appspot.com/github.com/go-spring/spring-base@v1.1.3/log/plugin_filter.go#L31: converting nil *github.com/go-spring/spring-base/log.LevelMatchFilter to interface github.com/go-spring/spring-base/log.Filter
https://go-mod-viewer.appspot.com/github.com/go-spring/spring-base@v1.1.3/log/plugin_filter.go#L32: converting nil *github.com/go-spring/spring-base/log.LevelRangeFilter to interface github.com/go-spring/spring-base/log.Filter
https://go-mod-viewer.appspot.com/github.com/go-spring/spring-base@v1.1.3/log/plugin_filter.go#L33: converting nil *github.com/go-spring/spring-base/log.TimeFilter to interface github.com/go-spring/spring-base/log.Filter
https://go-mod-viewer.appspot.com/github.com/go-spring/spring-base@v1.1.3/log/plugin_filter.go#L34: converting nil *github.com/go-spring/spring-base/log.TagFilter to interface github.com/go-spring/spring-base/log.Filter
https://go-mod-viewer.appspot.com/github.com/go-spring/spring-base@v1.1.3/log/plugin_filter.go#L35: converting nil *github.com/go-spring/spring-base/log.CompositeFilter to interface github.com/go-spring/spring-base/log.Filter
https://go-mod-viewer.appspot.com/github.com/andersfylling/snowflake/v5@v5.0.1/snowflake_test.go#L66: converting nil *github.com/andersfylling/snowflake/v5.Snowflake to interface interface{}
https://go-mod-viewer.appspot.com/github.com/andersfylling/snowflake/v5@v5.0.1/snowflake_test.go#L88: converting nil *github.com/andersfylling/snowflake/v5.Snowflake to interface interface{}
https://go-mod-viewer.appspot.com/github.com/andersfylling/snowflake/v5@v5.0.1/snowflake_test.go#L111: converting nil *github.com/andersfylling/snowflake/v5.Snowflake to interface interface{}
https://go-mod-viewer.appspot.com/k8s.io/client-go@v0.31.1/rest/request_test.go#L1968: converting nil *k8s.io/apimachinery/pkg/apis/meta/v1.DeleteOptions to interface interface{}
https://go-mod-viewer.appspot.com/github.com/MetalBlockchain/metalgo@v1.11.9/vms/platformvm/txs/add_delegator_test.go#L207: converting nil *github.com/MetalBlockchain/metalgo/vms/platformvm/txs.AddDelegatorTx to interface any
https://go-mod-viewer.appspot.com/github.com/MetalBlockchain/metalgo@v1.11.9/vms/platformvm/txs/add_permissionless_delegator_tx_test.go#L1860: converting nil *github.com/MetalBlockchain/metalgo/vms/platformvm/txs.AddPermissionlessDelegatorTx to interface any
https://go-mod-viewer.appspot.com/github.com/MetalBlockchain/metalgo@v1.11.9/vms/platformvm/txs/add_permissionless_validator_tx_test.go#L1833: converting nil *github.com/MetalBlockchain/metalgo/vms/platformvm/txs.AddPermissionlessValidatorTx to interface any
https://go-mod-viewer.appspot.com/github.com/MetalBlockchain/metalgo@v1.11.9/vms/platformvm/txs/add_subnet_validator_test.go#L215: converting nil *github.com/MetalBlockchain/metalgo/vms/platformvm/txs.AddSubnetValidatorTx to interface any
https://go-mod-viewer.appspot.com/github.com/MetalBlockchain/metalgo@v1.11.9/vms/platformvm/txs/add_subnet_validator_test.go#L221: converting nil *github.com/MetalBlockchain/metalgo/vms/platformvm/txs.AddSubnetValidatorTx to interface any
https://go-mod-viewer.appspot.com/github.com/MetalBlockchain/metalgo@v1.11.9/vms/platformvm/txs/add_subnet_validator_test.go#L227: converting nil *github.com/MetalBlockchain/metalgo/vms/platformvm/txs.AddSubnetValidatorTx to interface any
https://go-mod-viewer.appspot.com/github.com/MetalBlockchain/metalgo@v1.11.9/vms/platformvm/txs/add_validator_test.go#L224: converting nil *github.com/MetalBlockchain/metalgo/vms/platformvm/txs.AddValidatorTx to interface any
https://go-mod-viewer.appspot.com/github.com/enbility/spine-go@v0.7.0/util/type.go#L11: converting nil *T to interface any
https://go-mod-viewer.appspot.com/github.com/cilium/cilium@v1.16.2/pkg/types/portmap_test.go#L163: converting nil *github.com/cilium/cilium/pkg/types.namedPortMultiMap to interface github.com/cilium/cilium/pkg/types.NamedPortMultiMap
https://go-mod-viewer.appspot.com/github.com/vmware/govmomi@v0.43.0/vim25/types/helpers_test.go#L352: converting nil *T to interface any
https://go-mod-viewer.appspot.com/github.com/timandy/routine@v1.1.4/thread_local_map_entry_test.go#L68: converting nil *github.com/timandy/routine.personCloneable to interface github.com/timandy/routine.entry
https://go-mod-viewer.appspot.com/github.com/timandy/routine@v1.1.4/thread_local_map_entry_test.go#L137: converting nil *github.com/timandy/routine.personCloneable to interface github.com/timandy/routine.entry
https://go-mod-viewer.appspot.com/github.com/openimsdk/tools@v0.0.49/mw/replace_nil_test.go#L42: converting nil *github.com/openimsdk/tools/mw.A to interface any
https://go-mod-viewer.appspot.com/github.com/expr-lang/expr@v1.16.9/test/deref/deref_test.go#L188: converting nil *int32 to interface any
https://go-mod-viewer.appspot.com/goki.dev/laser@v0.1.34/basic_test.go#L26: converting nil *goki.dev/laser.A to interface any
https://go-mod-viewer.appspot.com/github.com/goki/ki@v1.1.17/kit/convert_test.go#L24: converting nil *github.com/goki/ki/kit.A to interface any
https://go-mod-viewer.appspot.com/github.com/davecgh/go-xdr@v0.0.0-20161123171359-e6a2ba005892/xdr2/decode_test.go#L699: converting nil *int32 to interface interface{}
https://go-mod-viewer.appspot.com/k8s.io/apiserver@v0.31.1/pkg/admission/plugin/cel/filter_test.go#L475: converting nil *k8s.io/apimachinery/pkg/apis/meta/v1/unstructured.Unstructured to interface k8s.io/apimachinery/pkg/runtime.Object
https://go-mod-viewer.appspot.com/k8s.io/kube-openapi@v0.0.0-20240826222958-65a50c78dec5/pkg/internal/third_party/go-json-experiment/json/arshal_test.go#L3317: converting nil *k8s.io/kube-openapi/pkg/internal/third_party/go-json-experiment/json.valueStringer to interface any
https://go-mod-viewer.appspot.com/k8s.io/kube-openapi@v0.0.0-20240826222958-65a50c78dec5/pkg/internal/third_party/go-json-experiment/json/arshal_test.go#L3322: converting nil **k8s.io/kube-openapi/pkg/internal/third_party/go-json-experiment/json.valueStringer to interface any
https://go-mod-viewer.appspot.com/k8s.io/kube-openapi@v0.0.0-20240826222958-65a50c78dec5/pkg/internal/third_party/go-json-experiment/json/arshal_test.go#L3333: converting nil *k8s.io/kube-openapi/pkg/internal/third_party/go-json-experiment/json.pointerStringer to interface any
https://go-mod-viewer.appspot.com/k8s.io/kube-openapi@v0.0.0-20240826222958-65a50c78dec5/pkg/internal/third_party/go-json-experiment/json/arshal_test.go#L3338: converting nil **k8s.io/kube-openapi/pkg/internal/third_party/go-json-experiment/json.pointerStringer to interface any
https://go-mod-viewer.appspot.com/k8s.io/kube-openapi@v0.0.0-20240826222958-65a50c78dec5/pkg/internal/third_party/go-json-experiment/json/arshal_test.go#L7131: converting nil *k8s.io/kube-openapi/pkg/internal/third_party/go-json-experiment/json.valueStringer to interface any
https://go-mod-viewer.appspot.com/k8s.io/kube-openapi@v0.0.0-20240826222958-65a50c78dec5/pkg/internal/third_party/go-json-experiment/json/arshal_test.go#L7137: converting nil **k8s.io/kube-openapi/pkg/internal/third_party/go-json-experiment/json.valueStringer to interface any
https://go-mod-viewer.appspot.com/k8s.io/kube-openapi@v0.0.0-20240826222958-65a50c78dec5/pkg/internal/third_party/go-json-experiment/json/arshal_test.go#L7151: converting nil *k8s.io/kube-openapi/pkg/internal/third_party/go-json-experiment/json.pointerStringer to interface any
https://go-mod-viewer.appspot.com/k8s.io/kube-openapi@v0.0.0-20240826222958-65a50c78dec5/pkg/internal/third_party/go-json-experiment/json/arshal_test.go#L7157: converting nil **k8s.io/kube-openapi/pkg/internal/third_party/go-json-experiment/json.pointerStringer to interface any
https://go-mod-viewer.appspot.com/github.com/stellar/go-xdr@v0.0.0-20231122183749-b53fb00bcac2/xdr2/decode_test.go#L630: converting nil *int32 to interface interface{}
https://go-mod-viewer.appspot.com/github.com/stellar/go-xdr@v0.0.0-20231122183749-b53fb00bcac2/xdr3/decode_test.go#L755: converting nil *int32 to interface interface{}
https://go-mod-viewer.appspot.com/open-match.dev/open-match@v1.8.1/examples/demo/updater/updater_test.go#L47: converting nil *int to interface interface{}
https://go-mod-viewer.appspot.com/github.com/wI2L/jettison@v0.7.4/json_1.14_test.go#L96: converting nil *github.com/wI2L/jettison.niljsonm to interface encoding/json.Marshaler
https://go-mod-viewer.appspot.com/github.com/wI2L/jettison@v0.7.4/json_1.14_test.go#L104: converting nil *github.com/wI2L/jettison.niltextm to interface encoding.TextMarshaler
https://go-mod-viewer.appspot.com/github.com/wI2L/jettison@v0.7.4/json_1.14_test.go#L112: converting nil *github.com/wI2L/jettison.niljetim to interface github.com/wI2L/jettison.comboMarshaler
https://go-mod-viewer.appspot.com/github.com/wI2L/jettison@v0.7.4/json_1.14_test.go#L120: converting nil *github.com/wI2L/jettison.nilmjctx to interface github.com/wI2L/jettison.comboMarshalerCtx
https://go-mod-viewer.appspot.com/github.com/hashicorp/go-metrics@v0.5.3/circonus/circonus_test.go#L188: converting nil *github.com/hashicorp/go-metrics/circonus.CirconusSink to interface github.com/hashicorp/go-metrics.MetricSink
https://go-mod-viewer.appspot.com/github.com/hashicorp/go-metrics@v0.5.3/datadog/dogstatsd_test.go#L165: converting nil *github.com/hashicorp/go-metrics/datadog.DogStatsdSink to interface github.com/hashicorp/go-metrics.MetricSink
https://go-mod-viewer.appspot.com/github.com/hashicorp/go-metrics@v0.5.3/prometheus/prometheus_test.go#L395: converting nil *github.com/hashicorp/go-metrics/prometheus.PrometheusSink to interface github.com/hashicorp/go-metrics.MetricSink
https://go-mod-viewer.appspot.com/github.com/hashicorp/go-metrics@v0.5.3/prometheus/prometheus_test.go#L397: converting nil *github.com/hashicorp/go-metrics/prometheus.PrometheusPushSink to interface github.com/hashicorp/go-metrics.MetricSink
https://go-mod-viewer.appspot.com/bitbucket.org/ai69/amoy@v0.2.3/shell_test.go#L106: converting nil *bitbucket.org/ai69/amoy.customStruct to interface bitbucket.org/ai69/amoy.customInterface
https://go-mod-viewer.appspot.com/github.com/blend/go-sdk@v1.20240719.1/ex/ex_test.go#L343: converting nil *github.com/blend/go-sdk/ex.Ex to interface any
https://go-mod-viewer.appspot.com/github.com/blend/go-sdk@v1.20240719.1/ex/ex_test.go#L374: converting nil *github.com/blend/go-sdk/ex.Ex to interface any
| NeedsInvestigation,Analysis | low | Critical |
2,551,206,052 | ui | [feat]: Transfer Component | ### Feature description
I would like to request the addition of a Transfer Component to Shadcn UI. This component is commonly found in UI libraries like MUI and Ant Design, allowing users to transfer items between two lists (left panel to right panel, and vice versa). This feature is especially useful in applications that require multi-select item management in an intuitive and efficient way.
The Transfer Component typically includes the following features:
Dual list panels (source and target)
Ability to move items between lists
Filtering and searching within the lists
Support for selecting all or individual items
Customizable list titles and actions
Adding a Transfer Component to Shadcn UI would enhance its usability for developers building complex UIs, as it simplifies item management across categories or states.
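As a framework-free sketch of the core state transition such a component needs (the `Item` shape and `moveItems` helper are invented for illustration, not an existing shadcn API):

```typescript
// Move the selected items from one list to the other, returning new arrays.
type Item = { key: string; label: string };

function moveItems(
  source: Item[],
  target: Item[],
  selected: Set<string>,
): { source: Item[]; target: Item[] } {
  const moved = source.filter((it) => selected.has(it.key));
  return {
    source: source.filter((it) => !selected.has(it.key)),
    target: [...target, ...moved],
  };
}

const left: Item[] = [
  { key: "a", label: "Alpha" },
  { key: "b", label: "Beta" },
];
const { source, target } = moveItems(left, [], new Set(["a"]));
console.log(source.length, target.length); // 1 1
```
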
### Affected component/components
This would be a new component. However, it could interact with existing components like Checkbox, Button, and possibly List.
### Additional Context
For reference, here are similar implementations in other UI libraries:
[MUI Transfer Component](https://mui.com/material-ui/react-transfer-list/)
[Ant Design Transfer Component](https://ant.design/components/transfer)
The Transfer Component is useful for various scenarios, such as user role management, assignment of items, or categorization workflows.
Thank you for considering this feature request!
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,551,216,321 | go | x/tools/gopls: packages.Load failure with conflicting go.work and go.mod toolchain versions | ### What version of Go, VS Code & VS Code Go extension are you using?
<details><summary>Version Information</summary><br>
* Run `go version` to get version of Go from _the VS Code integrated terminal_.
- go version go1.22.7 darwin/arm64
* Run `gopls -v version` to get version of Gopls from _the VS Code integrated terminal_.
- golang.org/x/tools/gopls v0.16.2
* Run `code -v` or `code-insiders -v` to get version of VS Code or VS Code Insiders.
- 1.93.1
* Check your installed extensions to get the version of the VS Code Go extension
- 0.42.1
* Run Ctrl+Shift+P (Cmd+Shift+P on Mac OS) > `Go: Locate Configured Go Tools` command.
- ```
# Tools Configuration
## Environment
GOBIN: undefined
toolsGopath:
gopath: /Users/remko/.go
GOROOT: /Users/remko/.go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.7.darwin-arm64
PATH: ...
## Tools
go: /opt/homebrew/bin/go: go version go1.22.7 darwin/arm64
gopls: /Users/remko/.go/bin/gopls (version: v0.16.2 built with go: go1.22.7)
gotests: /Users/remko/.go/bin/gotests (version: v1.6.0 built with go: go1.22.7)
gomodifytags: /Users/remko/.go/bin/gomodifytags (version: v1.17.0 built with go: go1.22.7)
impl: /Users/remko/.go/bin/impl (version: v1.4.0 built with go: go1.22.7)
goplay: /Users/remko/.go/bin/goplay (version: v1.0.0 built with go: go1.22.7)
dlv: /Users/remko/.go/bin/dlv (version: v1.23.0 built with go: go1.22.7)
staticcheck: /Users/remko/.go/bin/staticcheck (version: v0.5.1 built with go: go1.22.7)
## Go env
Workspace Folder (bw): /Users/remko/bw/bw
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/remko/Library/Caches/go-build'
GOENV='/Users/remko/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/remko/.go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/remko/.go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/Users/remko/.go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.7.darwin-arm64'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/Users/remko/.go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.7.darwin-arm64/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.22.7'
GCCGO='gccgo'
AR='ar'
CC='clang'
CXX='clang++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK='/Users/remko/bw/bw/go.work'
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/06/t4s5klwd24g25jxxgzbh6gwc0000gn/T/go-build439318135=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
</details>
### Share the Go related settings you have added/edited
```
"go.lintTool": "staticcheck",
"go.lintOnSave": "package",
"go.vetOnSave": "package",
"go.formatTool": "gofmt",
"go.testFlags": ["-short"],
"go.testEnvVars": {
"NO_COLOR": "1"
},
"go.useLanguageServer": true,
```
### Describe the bug
IntelliSense seems to be using the incorrect version of Go. I get the following error:
(screenshot of the error)
I upgraded all my tools, and as you can see from the logs above, there is no mention of 1.22.5 anywhere.
Still, I get the following in my gopls log:
(screenshot of the gopls log)
Note that the version of go installed in the PATH (/opt/homebrew/bin) *is* 1.22.5, but as far as I understand (and as the output of `go version` seems to confirm), it should use the 1.22.7 toolchain if that is what is requested by go.work. | gopls,Tools | low | Critical |
2,551,255,992 | storybook | [Bug]: Network error when starting dev server leads to confusing "circular structure" error output | ### Describe the bug
It's possible for Storybook to encounter a network error when starting its dev server, and when it tries to stringify the error, it encounters a circular object, which then leads to _another_ error. The error output becomes concerned only with this second error, and the original issue is lost entirely.
Here's the full error output:
```
@storybook/core v8.3.3
info => Starting manager..
TypeError: Converting circular structure to JSON
--> starting at object with constructor 'Object'
--- property 'issuerCertificate' closes the circle
at JSON.stringify (<anonymous>)
at renderHTML (file:///.../common/temp/node_modules/.pnpm/@storybook+core@8.3.3/node_modules/@storybook/core/dist/builder-manager/index.js:2836:41)
at async starterGeneratorFn (file:///.../common/temp/node_modules/.pnpm/@storybook+core@8.3.3/node_modules/@storybook/core/dist/builder-manager/index.js:3054:11)
at async Module.start (file:///.../common/temp/node_modules/.pnpm/@storybook+core@8.3.3/node_modules/@storybook/core/dist/builder-manager/index.js:3146:9)
at async storybookDevServer (...\common\temp\node_modules\.pnpm\@storybook+core@8.3.3\node_modules\@storybook\core\dist\core-server\index.cjs:47306:11)
at async buildOrThrow (...\common\temp\node_modules\.pnpm\@storybook+core@8.3.3\node_modules\@storybook\core\dist\core-server\index.cjs:46581:12)
at async buildDevStandalone (...\common\temp\node_modules\.pnpm\@storybook+core@8.3.3\node_modules\@storybook\core\dist\core-server\index.cjs:48518:78)
at async withTelemetry (...\common\temp\node_modules\.pnpm\@storybook+core@8.3.3\node_modules\@storybook\core\dist\core-server\index.cjs:47080:12)
at async dev (...\common\temp\node_modules\.pnpm\@storybook+core@8.3.3\node_modules\@storybook\core\dist\cli\bin\index.cjs:2877:3)
at async r.<anonymous> (...\common\temp\node_modules\.pnpm\@storybook+core@8.3.3\node_modules\@storybook\core\dist\cli\bin\index.cjs:2929:74)
WARN Broken build, fix the error above.
WARN You may need to refresh the browser.
```
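For context, the standard way to avoid this class of failure is a cycle-safe replacer for `JSON.stringify`. The following is only a sketch of the general technique, not the actual Storybook code; the `issuerCertificate` self-reference mimics the TLS certificate object named in the trace above:

```javascript
// Sketch of cycle-safe stringification: a WeakSet records objects already
// visited, and any repeated object is replaced with a marker instead of
// being recursed into, so JSON.stringify never throws
// "Converting circular structure to JSON".
function safeStringify(value) {
  const seen = new WeakSet();
  return JSON.stringify(value, (key, val) => {
    if (typeof val === 'object' && val !== null) {
      if (seen.has(val)) return '[Circular]';
      seen.add(val);
    }
    return val;
  });
}

// Mimics a TLS certificate whose issuerCertificate points back to itself.
const cert = { subject: 'example' };
cert.issuerCertificate = cert;
console.log(safeStringify(cert)); // {"subject":"example","issuerCertificate":"[Circular]"}
```

Note that this marks any repeated object reference, not only true cycles, which is usually acceptable for error reporting. With a guard like this, the original network error would survive into the output instead of being masked by the secondary `TypeError`.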
### Reproduction link
N/A
### Reproduction steps
We're not exactly sure what caused this, but it seemed to be a transient network issue, since the problem went away on its own after 5-10 minutes.
### System
```bash
Storybook Environment Info:
System:
OS: Windows 11 10.0.22631
CPU: (8) x64 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz
Binaries:
Node: 20.11.1 - C:\Program Files\Node.js\node.EXE
npm: 10.8.2 - C:\Program Files\Node.js\npm.CMD
pnpm: 9.7.1 - C:\Program Files\Node.js\pnpm.CMD <----- active
Browsers:
Edge: Chromium (127.0.2651.74)
npmPackages:
@storybook/react: link:apps/geiger/node_modules/@storybook/react => 8.3.3
storybook: link:apps/geiger/node_modules/storybook => 8.3.3
```
### Additional context
_No response_ | bug,sev:S2 | low | Critical |
2,551,272,783 | rust | tests/debuginfo/macro-stepping.rs remains broken with SingleUseConsts | This is the part of @saethlin's #128945 that survived #130329. Reordering the IR in the front end in the SingleUseConsts pass breaks the stepping order. This could be mitigated to some extent (but not completely, unclear if it would be sufficient for this test to pass) with better support in LLVM for front ends to express which IR instructions should be considered for breakpoints.
@rustbot label +A-debuginfo +A-testsuite +A-llvm +A-mir +C-bug +T-compiler +WG-debugging +WG-llvm | A-LLVM,A-testsuite,A-debuginfo,T-compiler,A-MIR,C-bug,WG-llvm,WG-debugging | low | Critical |
2,551,300,523 | ui | [bug]: could not determine executable to run for package shadcn-ui | ### Describe the bug
Subject: Issue adding any component with ```shadcn-ui```
Hi team,
I'm encountering an issue while trying to add the card component using ```shadcn-ui```. I've attached screenshots for reference.
Steps to reproduce:
I run the following command: ```bunx --bun shadcn-ui@latest add card``` (OR) ```bun x shadcn-ui@latest add card```
**Expected behavior:**
The card component should be added to my project using ```shadcn-ui.```
(screenshots of the failing command)
### Affected component/components
All the components
### How to reproduce
**Actual behavior:**
I receive the following error message: "could not determine executable to run for package shadcn-ui"
Additional information:
I've attempted to add the component using both ```npm``` and ```bun```, with the same error occurring.
Operating System: Windows 11
Node.js version: ```v18.18.1```
Bun version: ```v1.1.29```
npm version: ```v10.8.3```
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Windows 11
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [x] I've searched for existing issues | bug | low | Critical |
2,551,333,987 | ollama | Way to stop all running models | `ollama stop all` or `ollama stop *` etc
would be handy | feature request | low | Major |
2,551,375,076 | vscode | Consider to convert `src/vs/workbench/contrib/webview/browser/pre/service-worker.js` to TypeScript | Given we now run with ESM, there is no real reason anymore to have JS files in our sources.
https://github.com/microsoft/vscode/blob/9e23739d0b9f3381d99be9bde649709d4329daa5/src/vs/workbench/contrib/webview/browser/pre/service-worker.js#L1-L8 | debt,webview | low | Minor |
2,551,437,910 | godot | print() Shows inaccurate data after ~15-ish seconds | ### Tested versions
- Reproducible in: Godot 4.3 Stable (Only one I tested).
### System information
Godot v4.3.stable - Windows 10.0.22631 (Windows 11, but it says 10 on Godot for some reason?) - GLES3 (Compatibility) - NVIDIA GeForce RTX 3060 Laptop GPU (NVIDIA; 32.0.15.6081) - 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz (16 Threads)
### Issue description
https://github.com/user-attachments/assets/fd593951-a2f8-44f2-93dc-9861543c2c4a
Essentially, for the first 15 seconds I play the game, using print() to print the player's state, etc. works fine.
After 15 seconds, it just breaks: the editor's Output starts printing incorrect values, while the game itself continues to show correct values. No clue what the reason behind this might be, nor what causes it.
I don't think this is a coding bug but rather an engine bug, because the game itself works just fine and gives accurate data, while the editor's print() output is very inaccurate.
### Steps to reproduce
- Run the project on the 'night' scene
- Wait 15 seconds
- Labels will work fine but the print() statements will not.
### Minimal reproduction project (MRP)
project is over 700mb large, can't attach it. I don't know how to reproduce it outside of this project. | topic:editor,needs testing | low | Critical |
2,551,543,602 | svelte | Assigning to rvalue `svelte(js_parse_error)` | ### Describe the bug
With this code:
```ts
foo! += 1;
```
(which is [valid typescript](https://www.typescriptlang.org/play/?#code/DYUwLgBAZg9jBcEB2BXAtgIxAJwgH2RWGAgF4IAGAbgCgbYYBCCAanIEYqg))
Svelte throws `Assigning to rvalue svelte(js_parse_error)`
### Reproduction
[REPL](https://svelte-5-preview.vercel.app/#H4sIAAAAAAAAAyWMzQrCMBAGX2X9rhbUa_oDPofrodatBNJNSDaC1L67FI8zA7Ni9kEK3G2FjovA4ZoSGtgn7VDeEkzQoMSap910Zco-GYVRXz3DCmNgZQtiNMfoSOvykExf0hoC9XRuWWlPBzr2dGlZu9P_MbCiwRKffvbyhLNcZbtvP5myBrmUAAAA)
### Logs
_No response_
### System Info
```shell
svelte@5.0.0-next.259
```
### Severity
annoyance | bug,blocked by upstream | low | Critical |
2,551,563,846 | ui | [feat]: Support for Vinxi applications | ### Feature description
I would like Shadcn CLI to support Vinxi applications which under the hood uses ViteJS.
Currently, on a Vinxi setup, shadcn's CLI complains that the framework is not supported. I'm assuming it does that because in the Vinxi codebase, the vite.config.ts is usually named app.config.ts.
I was able to solve this by renaming app.config.ts -> vite.config.ts, but I would like the CLI to understand that a project with an app.config.ts is a ViteJS app too.
### Affected component/components
_No response_
### Additional Context
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,551,584,724 | vscode | [Bug] OS-Level Launch Configuration Not Behaving As Expected |
The OS Level Presentation settings do not seem to be working as expected.
In below screenshot, I expected the launch config within the dev container (Linux OS) to be hidden but it is still visible:
(screenshot)
Steps to Reproduce:
1. Setup a project for [`Dev Containers`](https://containers.dev)
1. Create 2 launch configs, one for host and one for within dev container
1. Use `{"windows/osx/linux": {"presentation": {"hidden": true/false}}}` property combinations such that only one launch config is visible in either environment (host / dev container)
_____
Version: 1.93.1 (user setup)
Commit: 38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40
Date: 2024-09-11T17:20:05.685Z
Electron: 30.4.0
ElectronBuildId: 10073054
Chromium: 124.0.6367.243
Node.js: 20.15.1
V8: 12.4.254.20-electron.0
OS: Windows_NT x64 10.0.22631
| bug,debug | low | Critical |
2,551,607,280 | godot | Shader compile error running android godot editor on chromebooks | ### Tested versions
- Reproducible in 4.3 from Google Play Store
- Not Reproducible in 3.6 from Google Play Store
### System information
Reproduces on our 2 chromebooks:
ChromeOS 103.0.5060.53 - Godot v4.3.stable - compatibility renderer - Acer Chromebook R11 (CB5-132T, C738T) - Celeron N3160
ChromeOS 126.0.6478.222 - Godot v4.3.stable - compatibility renderer - Dell Chromebook 5190 - Celeron N3350
### Issue description
Running the Godot editor, no 3D is rendered; for example, there are no grid lines or meshes drawn in a new 3D scene. 2D is fine.
(screenshot of the empty 3D viewport)
There are shader compile errors shown in adb logcat:
```
Creating new Godot fragment instance.
OnCreate: GodotFragment{75026bc} (035fb026-ec39-4ab5-8c63-f137278a4bb2 id=0x7f0800a1)
Initializing Godot plugin registry
OnInitNativeLayer: GodotFragment{75026bc} (035fb026-ec39-4ab5-8c63-f137278a4bb2 id=0x7f0800a1)
Godot native layer initialization completed: true
USER ERROR: Couldn't load file '/project.binary', error code 12.
at: _load_settings_text_or_binary (core/config/project_settings.cpp:803)
Godot native layer setup completed
OnInitRenderView: GodotFragment{75026bc} (035fb026-ec39-4ab5-8c63-f137278a4bb2 id=0x7f0800a1)
OnStart: GodotFragment{75026bc} (035fb026-ec39-4ab5-8c63-f137278a4bb2 id=0x7f0800a1)
OnResume: GodotFragment{75026bc} (035fb026-ec39-4ab5-8c63-f137278a4bb2 id=0x7f0800a1)
Skipped 47 frames! The application may be doing too much work on its main thread.
Loading /vendor/lib/hw/gralloc.cros.so from current namespace instead of sphal namespace.
android::hardware::configstore::V1_0::ISurfaceFlingerConfigs::hasWideColorDisplay retrieved: 0
android::hardware::configstore::V1_0::ISurfaceFlingerConfigs::hasHDRDisplay retrieved: 0
Initialized EGL, version 1.4
Swap behavior 2
Device claims wide gamut support, cannot find matching config, error = EGL_SUCCESS
creating OpenGL ES 3.0 context :
Loading /vendor/lib/hw/android.hardware.graphics.mapper@2.0-impl.so from current namespace instead of sphal namespace.
Loading /vendor/lib/hw/gralloc.cros.so from current namespace instead of sphal namespace.
Godot Engine v4.3.stable.official.77dcf97d8 - https://godotengine.org
1:
USER ERROR: SceneShaderGLES3: Program linking failed:
error: declarations for uniform `world_transform' have mismatching invariant qualifiers
at: _display_error_with_code (drivers/gles3/shader_gles3.cpp:254)
USER ERROR: Method/function failed.
at: _compile_specialization (drivers/gles3/shader_gles3.cpp:456)
USER WARNING: shader failed to compile, unable to bind shader.
at: _version_bind_shader (./drivers/gles3/shader_gles3.h:222)
OpenGL API OpenGL ES 3.1 Mesa 19.0.0-rc5 (git-2e7833ad91) - Compatibility - Using Device: Intel Open Source Technology Center - Mesa DRI Intel(R) HD Graphics 400 (Braswell)
PlayerBase::PlayerBase()
TrackPlayerBase::TrackPlayerBase()
Emulating old channel mask behavior (ignoring positional mask 0x3, using default mask 0x3 based on channel count of 2)
AUDIO_OUTPUT_FLAG_FAST denied by server; frameCount 0 -> 1418
Displayed org.godotengine.editor.v4/org.godotengine.editor.GodotEditor: +2s522ms
Selected screen scale: 1
Selected screen scale: 1
Attempting to set client state on removed layer: Splash Screen org.godotengine.editor.v4#0
Attempting to destroy on removed layer: Splash Screen org.godotengine.editor.v4#0
OnGodotSetupCompleted
Skipping profile installation for org.godotengine.editor.v4
OnGodotMainLoopStarted
```
### Steps to reproduce
1. install Godot 4 from google play store on chromebook
2. Run and create new project in compatibility mode
3. There are no 3d grid lines drawn in 3d scene editor, meshes added to scene do not draw
### Minimal reproduction project (MRP)
Just create a new project | bug,platform:android,topic:rendering,topic:porting | low | Critical |
2,551,618,618 | flutter | Eliminate dependency on DeviceLab bot/account for Apple development signing certificate renewals | The certificate renewal process is described in [How to renew the DeviceLab development certificate](https://g3doc.corp.google.com/company/teams/flutter/infrastructure/devicelab/apple_cert_renewal.md) (Google-internal). In the _Create the new signing certificate_ section, we rely on a DeviceLab bot to perform the renewal, this and the related Apple Developer account are single points of failure.
* [ ] Allow renewals to be performed using any Apple Developer account that is authorised to create a developer signing certificate.
* [ ] Allow the certificate creation process to be performed from any authorised user's machine. | team-infra,P2,triaged-infra | low | Critical |
2,551,654,066 | material-ui | [material-ui][Autocomplete] onInputChange called with undefined event contrary to type definition | ### Steps to reproduce
Link to live example: (required) https://stackblitz.com/edit/react-ouzbqw?file=Demo.tsx
Steps:
1. Set the onInputChange prop
2. Check what is passed in for the event param
### Current behavior
Hello ๐
I'm noticing that the `onInputChange` is [typed as](https://github.com/mui/material-ui/blob/7fd82d58b79cfc90cc0d8c904b7cd753f8020e32/packages/mui-base/src/useAutocomplete/useAutocomplete.d.ts#L262-L266):
```
onInputChange?: (
event: React.SyntheticEvent,
value: string,
reason: AutocompleteInputChangeReason,
) => void;
```
with a non-null `event` param, but `null` seems to be coming through for the param on mount. [See demo](https://stackblitz.com/edit/react-ouzbqw?file=Demo.tsx):
(screenshot of the console output)
### Expected behavior
I'm not sure if `onInputChange` is supposed to be called on mount or not, but if it is and it's expected that `null` is an option for `event`, would it be possible to update the typing of `onInputChange` to reflect that?
Thanks!
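Until the typings (or the mount-time call) change, a simple workaround is to guard the handler against a missing event. The sketch below is illustrative only — the handler name and return values are assumptions, not MUI API:

```javascript
// Guard for the null event that Autocomplete currently passes to
// onInputChange on mount, contrary to the React.SyntheticEvent typing.
function handleInputChange(event, value, reason) {
  if (event == null) {
    // Initial programmatic call during mount -- no user interaction yet.
    return `mount:${value}`;
  }
  return `change:${value}`;
}

console.log(handleInputChange(null, '', 'reset')); // the mount-time call
console.log(handleInputChange({ type: 'change' }, 'ab', 'input')); // user typing
```

In a component, the same guard would simply be the first line of the `onInputChange` prop, with the parameter typed as `React.SyntheticEvent | null` locally.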
### Context
We are trying to use strict null safety
### Your environment
See https://stackblitz.com/edit/react-ouzbqw?file=Demo.tsx - also reproducible in [this docs demo](https://mui.com/material-ui/react-autocomplete/#controlled-states)
**Search keywords**: onInputChange undefined event | typescript,package: material-ui,component: autocomplete | low | Minor |
2,551,654,307 | vscode | With `editor.experimentalEditContextEnabled`, switching to another application during composition leads to a weird state | Version: 1.94.0-insider
Commit: c43f508b732d24b0c4732de9db2b38b4c5b88b8a
Date: 2024-09-26T05:04:07.935Z
Electron: 30.5.1
ElectronBuildId: 10262041
Chromium: 124.0.6367.243
Node.js: 20.16.0
V8: 12.4.254.20-electron.0
OS: Darwin arm64 22.6.0
Steps to Reproduce:
1. Set `"editor.experimentalEditContextEnabled": true`
2. Enable Google IME for Japanese.
3. Type `sennsei`
4. Type `Space`
5. Switch to another application with a mouse click.
6. Go back to VS Code.
7. Type `Space`.
8. We get an unexpected output.
https://github.com/user-attachments/assets/ac535f45-c851-49df-8735-04aa6e793454
| bug,upstream-issue-linked,editor-edit-context | low | Major |
2,551,663,793 | go | proposal: runtime: New runtime.Chain API to emit only a call chain from the callee goroutine | ### Proposal Details
Apologies if a similar proposal already exists, but allow me to describe the requirement.
Background
As far as I am aware today, the Go runtime provides two different ways to obtain a particular stack trace.
- `runtime.Stack(buf, all=true)` and `runtime.Stack(buf, all=false)`. The `debug.Stack` is basically a convenient helper to produce a local goroutine stack with fixed 1K buf size.
- In large-scale applications [we are running enterprise-level applications to support large and multiple services in the cloud], we observed that debugging is a challenge. The local-goroutine-only stack trace is way too limited and often gives us little clue as to how we got here. The `full` goroutine one is also way too costly to deploy into our deeper stack [we recently noticed a performance regression due to a deployed self-implemented full Stack dump]. The full stack dump is fairly useful, but we also need to focus on just the call chain we care about [there are a lot of other goroutines from various different kinds of areas/libs which don't typically add any useful value].
The question is: can the Go runtime provide a balanced view of a particular call chain, starting from the goroutine that invoked the call [for example, `runtime.Chain`] and going all the way up?
This would be a particularly useful signal, helping to concentrate on just the goroutine that emitted such a call; it is typically the one that received some kind of error [either from internal processing or from an external source].
Our home-grown solution so far is to produce a `runtime.Stack(64k, all=true)` dump and then manually walk the goroutine sections one by one via text-based parsing, using a few heuristics, for example '^goroutine ' or ' created by ', to find the call relationships. This is not only very costly but also error-prone.
Thus, we'd like to see whether it is doable for the Go runtime to provide a new API, say `runtime.Chain` so that it could help produce a "current goroutine" based call chain dump.
Given that the current stack dump annotates goroutines with " created by", I assume that the runtime at least has some kind of internal bookkeeping already to reason about the call relationship. Since Go doesn't advocate goroutine-based programming and there aren't many other options out there [are there?], we turn to the Go team to seek help.
Some Proposed semantics:
1. It could be that producing such a chain is still a non-trivial undertaking; could it be possible to design some ABIs where the runtime could emit a list of goroutine IDs [or some cheap metadata, if you still don't want to disclose implementation details] so that at least we could see a somewhat complete call stack? Or provide some sort of object handle so as to allow the application to choose what to dump?
2. The output order follows the call chain upwards. Basically, the goroutine that called `runtime.Chain` would be the first entry in the output, the caller of that second, and so forth. If the emitted metadata can be made programmable with hints about the relationship, e.g. `created by`, then it is fine without any ordering, allowing the application to stitch those together.
3. It is possible that at the time the chain is composed, some of the goroutines have already terminated and been purged from memory. I don't know the runtime details well enough to make a proposal here as to what happens if we see a gap, but I could assume that one potential option would be to end the chain there. Often, with just one or a couple more links in the call chain, debuggability can be greatly improved.
Thank you!
Jim | Proposal | low | Critical |
2,551,688,001 | go | net/http: new protocol registration mechanism | The following is not quite a proposal. It is a declaration of an intent to commit shenanigans, and an apology (in the sense of "a reasoned argument or justification of something") for them.
The net/http package allows an external package (practically: golang.org/x/net/http2) to register an HTTP/2 server and client implementation with it. (This is Server.TLSNextProto and Transport.TLSNextProto, plus Transport.RegisterProtocol.)
The current registration mechanism is cumbersome and can't easily be extended to support some features we want to add to the package. For example, we want to add support for unencrypted HTTP/2 (#67816), but the current extension mechanism assumes all HTTP/2 connections are a *tls.Conn. We have no way to pass an unencrypted net.Conn from net/http to the HTTP/2 implementation.
We have a plan to move x/net/http2 into std (#67810), but this involves a complex sequence of steps in which adding unencrypted HTTP/2 support is supposed to occur before the package move.
Therefore, it would be very convenient in the short term to have a better connection between net/http and golang.org/x/net/http2.
Ideally, this connection would be extensible if/when we discover additional ways the two packages need to communicate. It should also add little to no exported API surface to net/http, since it will have few-to-no users.
I propose, therefore, to add the following two unexported functions to net/http, using //go:linkname to make them visible to x/net/http2 (and only x/net/http2):
```go
//go:linkname serverRegisterProtocolImplementation golang.org/x/net/http2.nethttp_serverRegisterProtocolImplementation
func serverRegisterProtocolImplementation(s *Server, proto string, impl any) error
//go:linkname transportRegisterProtocolImplementation golang.org/x/net/http2.nethttp_transportRegisterProtocolImplementation
func transportRegisterProtocolImplementation(t *Transport, proto string, impl any) error
```
This is implemented in https://go.dev/cl/616097 (net/http) and https://go.dev/cl/616118 (x/net/http2).
The interface passed in the impl parameters is fiddly, low-level, contains no user-serviceable parts, and is subject to change to in the future. (We pass it as an "any" to make it easier to evolve if necessary.) See the above CLs for the details.
It is likely that we will want to expose a user-visible protocol registration mechanism in the future to support HTTP/3, since there exists at least one existing third-party HTTP/3 implementation. We could do this by converting the above unexported functions to exported methods and defining an appropriate interface for HTTP/3 server/client implementations:
```go
func (*Server) RegisterProtocolImplementation(proto string, impl any)
func (*Transport) RegisterProtocolImplementation(proto string, impl any)
```
That's a separate proposal, though.
| NeedsInvestigation | low | Critical |
2,551,697,192 | flutter | Bot `mac-731-h526` is failing for various reasons | This bot is failing to checkout `flutter/flutter` across multiple tests
`Mac_arm64 build_tests_2_4 `
https://ci.chromium.org/ui/p/flutter/builders/try/Mac_arm64%20build_tests_2_4/15092/infra
https://ci.chromium.org/ui/p/flutter/builders/try/Mac_arm64%20build_tests_2_4/15076/infra
https://ci.chromium.org/ui/p/flutter/builders/try/Mac_arm64%20build_tests_2_4/15074/infra
`Mac_arm64 framework_tests_misc`
https://ci.chromium.org/ui/p/flutter/builders/try/Mac_arm64%20framework_tests_misc/14302/overview
etc
Failure [log](https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket/8735691188089322913/+/u/Checkout_flutter_flutter/git_fetch__2_/stdout) is:
```
fatal: unable to access 'https://github.com/flutter/flutter/': SSL: certificate verification failed (result: 5)
```
Noticed on https://github.com/flutter/flutter/pull/155794
| team-infra,P1,triaged-infra | medium | Critical |
2,551,761,806 | rust | Better type inference with `impl_trait_in_assoc_type` (ITIAT) | ```rust
#![feature(impl_trait_in_assoc_type)]
fn coca(_: String) {}
trait Foo<T> {}
impl Foo<()> for String {}
trait Bar {
type T;
type I: Foo<Self::T>;
type F: FnOnce(Self::I);
const M: Self::F;
}
impl Bar for () {
type T = impl Sized; // The compiler could infer this associated type as `()`.
type I = impl Foo<Self::T>;
type F = impl FnOnce(Self::I);
const M: Self::F = coca;
}
```
[playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=ba5e2718db00ff48f015aafe9f511aab)
Errors with:
```rust
error[E0277]: the trait bound `String: Foo<<() as Bar>::T>` is not satisfied
--> src/lib.rs:21:24
|
21 | const M: Self::F = coca;
| ^^^^ the trait `Foo<<() as Bar>::T>` is not implemented for `String`
|
= help: the trait `Foo<()>` is implemented for `String`
```
| requires-nightly,D-confusing,F-impl_trait_in_assoc_type | low | Critical |
2,551,767,982 | pytorch | `CUDNN_BACKEND_OPERATION: cudnnFinalize Failed cudnn_status: CUDNN_STATUS_BAD_PARAM` when using `float16`, works for `bfloat16` / `float32` | ### ๐ Describe the bug
Trying to use `torch.nn.grad.conv1d_{input,weight}` as part of a custom backwards pass. The operation runs when the dtype is `bfloat16` or `float32` but fails with the aforementioned error when using `float16`.
Any ideas why?
Here is a minimal repro:
```python
import torch
from torch.nn.grad import conv1d_input, conv1d_weight
# Shapes
bs = 2
seqlen = 32
d = 64
g = 2
hl = 4
dg = d // g
# NOTE: Changing this to float16 results in `RuntimeError: CUDNN_BACKEND_OPERATION: cudnnFinalize Failed cudnn_status: CUDNN_STATUS_BAD_PARAM`
dtype = torch.bfloat16
# Inputs
x = torch.randn(bs, seqlen, g, dg, device="cuda", dtype=dtype)
x2 = x.reshape(bs, seqlen, -1).permute(0, 2, 1) # bs, d, seqlen
h = torch.randn(g, 1, hl, device="cuda", dtype=dtype)
h_grouped = h.repeat_interleave(dg, dim=0) # (d, 1, hl)
assert h_grouped.shape == torch.Size([d, 1, hl])
padding = hl - 1
groups = d
# depthwise causal conv
y = torch.nn.functional.conv1d(x2, h_grouped, groups=d, padding=padding)[..., :-padding]
assert y.shape == torch.Size([bs, d, seqlen])
dy = torch.randn_like(y)
# These ops will fail if dtype is set to `float16`
dx = conv1d_input(x2.shape, h_grouped, dy, padding=padding, groups=groups)
dh_grouped = conv1d_weight(x2, h_grouped.shape, dy, padding=padding, groups=groups)
```
### Versions
Collecting environment information...
PyTorch version: 2.5.0.dev20240814+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8468
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126,128,130,132,134,136,138,140,142,144,146,148,150,152,154,156,158,160,162,164,166,168,170,172,174,176,178,180,182,184,186,188,190
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127,129,131,133,135,137,139,141,143,145,147,149,151,153,155,157,159,161,163,165,167,169,171,173,175,177,179,181,183,185,187,189,191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.2
[pip3] pytorch-lightning==2.1.3
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.5.0.dev20240814+cu121
[pip3] torchaudio==2.4.0.dev20240814+cu121
[pip3] torchmetrics==1.2.1
[pip3] torchvision==0.20.0.dev20240814+cu121
[pip3] triton==3.0.0
[conda] cudatoolkit 11.8.0 h6a678d5_0
[conda] numpy 1.26.2 pypi_0 pypi
[conda] pytorch-lightning 2.1.3 pypi_0 pypi
[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi
[conda] torch 2.5.0.dev20240814+cu121 pypi_0 pypi
[conda] torchaudio 2.4.0.dev20240814+cu121 pypi_0 pypi
[conda] torchmetrics 1.2.1 pypi_0 pypi
[conda] torchvision 0.20.0.dev20240814+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ptrblck @msaroufim