| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,499,072,741 | pytorch | torch.compile makes model slower | ### 🐛 Describe the bug
NOTE: we are only interested in compiling the decoder; the encoder also appears in the time traces but should be ignored.
I've been trying for a long time to get `torch.compile` to work as advertised, but I've been struggling to get anything close to what others claim. In fact, `torch.compile` makes my model slower.
You can see the model source code and find instructions on how to reproduce my environment at https://github.com/alita-moore/img-to-text and you can see the torch trace logs [here](https://gist.github.com/alita-moore/b55bac79c5b1fdacbdf01e6b21f5a4f5).
See a snippet of the timing graph here (of the decoder):
<img width="974" alt="Screenshot 2024-08-31 at 10 51 24 PM" src="https://github.com/user-attachments/assets/e2ab688a-c00e-4f29-aa80-5b2cf88591ea">
You'll have to generate the timing graph yourself because the file is too large to upload.
Here are the relevant log outputs comparing timings between the compiled and uncompiled runs:
## Decoder timing
The following are the rough timings from the forward pass of the decoder:
### Uncompiled timing (~7ms / forward pass)
```
DEBUG:root:---- Prepared inputs for generation: 0.00019s
DEBUG:root:---- Generated token: 0.00694s
DEBUG:root:---- Selected token: 0.00067s
DEBUG:root:---- Prepared inputs for generation: 0.00019s (2.11% of Next token)
DEBUG:root:---- Generated token: 0.00694s (78.24% of Next token)
DEBUG:root:---- Selected token: 0.00067s (7.55% of Next token)
DEBUG:root:-- Generated token 4: 0.01087s
DEBUG:root:---- Prepared inputs for generation: 0.00013s
DEBUG:root:---- Generated token: 0.00649s
DEBUG:root:---- Selected token: 0.00055s
DEBUG:root:---- Prepared inputs for generation: 0.00013s (1.52% of Next token)
DEBUG:root:---- Generated token: 0.00649s (76.47% of Next token)
DEBUG:root:---- Selected token: 0.00055s (6.44% of Next token)
DEBUG:root:-- Generated token 5: 0.01054s
DEBUG:root:---- Prepared inputs for generation: 0.00012s
DEBUG:root:---- Generated token: 0.00701s
DEBUG:root:---- Selected token: 0.00054s
DEBUG:root:---- Prepared inputs for generation: 0.00012s (1.34% of Next token)
DEBUG:root:---- Generated token: 0.00701s (80.23% of Next token)
DEBUG:root:---- Selected token: 0.00054s (6.18% of Next token)
DEBUG:root:-- Generated token 6: 0.01016s
DEBUG:root:---- Prepared inputs for generation: 0.00012s
DEBUG:root:---- Generated token: 0.00697s
DEBUG:root:---- Selected token: 0.00062s
DEBUG:root:---- Prepared inputs for generation: 0.00012s (1.36% of Next token)
DEBUG:root:---- Generated token: 0.00696s (78.63% of Next token)
```
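For reference, here is a minimal sketch of the kind of timing logger that could produce lines like the above. The actual logging code lives in the linked repo; this `StepTimer` helper is just a simplified stand-in I'm using to illustrate the measurement:

```python
import logging
import time

logging.basicConfig(level=logging.DEBUG)

class StepTimer:
    """Records named step durations and reports each as a share of the total."""

    def __init__(self):
        self.steps = []  # list of (name, seconds)

    def time(self, name, fn, *args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed = time.perf_counter() - start
        self.steps.append((name, elapsed))
        logging.debug("---- %s: %.5fs", name, elapsed)
        return result

    def report(self, label):
        # Re-log each step with its percentage of the whole token's time.
        total = sum(s for _, s in self.steps)
        for name, s in self.steps:
            logging.debug("---- %s: %.5fs (%.2f%% of %s)", name, s, 100 * s / total, label)
        logging.debug("-- %s: %.5fs", label, total)
        self.steps.clear()
```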
### Compiled timing (~8ms / forward pass)
```
DEBUG:root:---- Prepared inputs for generation: 0.00015s
DEBUG:root:---- Generated token: 0.35622s
DEBUG:root:---- Selected token: 0.00086s
DEBUG:root:---- Prepared inputs for generation: 0.00015s (0.04% of Next token)
DEBUG:root:---- Generated token: 0.35621s (99.31% of Next token)
DEBUG:root:---- Selected token: 0.00086s (0.24% of Next token)
DEBUG:root:-- Generated token 2: 0.36042s
DEBUG:root:---- Prepared inputs for generation: 0.00023s
DEBUG:root:---- Generated token: 0.00825s
DEBUG:root:---- Selected token: 0.00071s
DEBUG:root:---- Prepared inputs for generation: 0.00023s (2.15% of Next token)
DEBUG:root:---- Generated token: 0.00825s (77.41% of Next token)
DEBUG:root:---- Selected token: 0.00071s (6.64% of Next token)
DEBUG:root:-- Generated token 3: 0.01238s
DEBUG:root:---- Prepared inputs for generation: 0.00020s
DEBUG:root:---- Generated token: 0.00818s
DEBUG:root:---- Selected token: 0.00067s
DEBUG:root:---- Prepared inputs for generation: 0.00020s (1.90% of Next token)
DEBUG:root:---- Generated token: 0.00818s (77.62% of Next token)
DEBUG:root:---- Selected token: 0.00067s (6.39% of Next token)
```
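One thing visible in the compiled trace is that the very first forward pass (~0.356s) includes compilation, while steady-state passes settle around 8ms. A hedged sketch of how I separate the two when benchmarking (generic Python; in practice `fn` would be the compiled model call):

```python
import time

def benchmark(fn, *, warmup=1, iters=10):
    """Time fn, discarding the first `warmup` calls, which may include
    one-time work such as torch.compile's first-call tracing."""
    warmup_times = []
    for _ in range(warmup):
        start = time.perf_counter()
        fn()
        warmup_times.append(time.perf_counter() - start)
    timed = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        timed.append(time.perf_counter() - start)
    return {
        "warmup_s": warmup_times,         # compile-inclusive calls
        "mean_s": sum(timed) / len(timed),
        "min_s": min(timed),
    }
```

Even with warmup excluded this way, the steady-state compiled passes in my logs (~8ms) are still slower than eager (~7ms).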
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-1022-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L4
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7R13 Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
Stepping: 1
BogoMIPS: 5299.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save vaes vpclmulqdq rdpid
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 64 KiB (2 instances)
L1i cache: 64 KiB (2 instances)
L2 cache: 1 MiB (2 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.0
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] Could not collect
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,oncall: pt2,module: inductor | low | Critical |
2,499,130,568 | godot | Camera3D perspective gizmo does not display correct frustum in "keep_height" mode | ### Tested versions
Reproducible in:
v4.4.dev1.mono.official [28a72fa43]
v4.3.stable.mono.official [77dcf97d8]
v4.2.2.stable.mono.official [15073afe3]
### System information
Godot v4.2.2.stable.mono - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4090 (NVIDIA; 32.0.15.6081) - Intel(R) Core(TM) i9-14900K (32 Threads)
### Issue description
The Camera3D gizmo does not display the correct frustum when switching keep_aspect between "Keep Height" and "Keep Width"; it only displays the correct frustum for "Keep Width". Additionally, the handles do not move from the side to the top as they do in orthographic mode.
Keep Height: ❌
<img src="https://github.com/user-attachments/assets/6d78f605-c639-4162-a3f5-c706c297ed23" width="500">
<img src="https://github.com/user-attachments/assets/85d2a06e-7ba9-4944-b590-e54b4eb69071" width="500">
Keep Width ✔️
<img src="https://github.com/user-attachments/assets/60718a4a-bae6-4e62-838a-2488bef5237a" width="500">
<img src="https://github.com/user-attachments/assets/baee6acb-7e0c-423f-bdf2-c65a436d5b1e" width="500">
### Steps to reproduce
1. Open a new project
2. Add a Camera3D
3. Switch keep_aspect to Keep Width
4. Observe the frustum not changing
5. Switch keep_aspect to Keep Height
6. Observe the frustum not changing
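For context on what the gizmo should be drawing: with a perspective camera, the FOV applies to the vertical axis in "Keep Height" and the horizontal axis in "Keep Width". A rough sketch of the expected half-extents of the view rectangle at a given depth (plain Python, `aspect` is width/height; this is my understanding of the math, not Godot's actual gizmo code):

```python
import math

def frustum_half_extents(fov_deg, aspect, depth, keep="height"):
    """Half-width and half-height of the view rectangle at `depth`.
    With keep="height" the FOV is vertical; with keep="width" it is horizontal."""
    half = depth * math.tan(math.radians(fov_deg) / 2)
    if keep == "height":
        return half * aspect, half   # (half_width, half_height)
    return half, half / aspect       # keep == "width"
```

With a non-square aspect the two modes give visibly different rectangles, which is exactly the difference the gizmo fails to show.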
### Minimal reproduction project (MRP)
Included is the addon [Little Camera Preview](https://godotengine.org/asset-library/asset/2500) by anothonyec to make viewing the change easier.
[frustum_gizmo_with_addon.zip](https://github.com/user-attachments/files/16826648/frustum_gizmo.zip)
[frustum_gizmo_without_addon.zip](https://github.com/user-attachments/files/16826651/frustum__gizmo_no_addon.zip)
| bug,topic:3d | low | Minor |
2,499,133,480 | terminal | Allow user to specify the number of rows or columns to increase or decrease by when resizing pane via keyboard. |
# Description of the new feature/enhancement
When resizing panes via keyboard, it would be nice to specify the resize amount. Right now, when you resize the pane, the amount is different depending on pane width, window width, etc. Can we have the ability to resize by X columns or rows (depending on resize direction)?
# Proposed technical implementation details (optional)
In the keyboard settings, when assigning a keyboard shortcut for pane resizing, allow a second integer value specifying the number of columns or rows to resize by.
If the user specifies the value 2 for the resize pane action, then if they grow or shrink the pane, it will increase or decrease by 2 rows or columns depending on direction.
| Issue-Feature,Area-Settings,Product-Terminal | low | Critical |
2,499,134,039 | svelte | CSS zoom on grid breaks flip | ### Describe the bug
Setting the [CSS zoom property](https://developer.mozilla.org/en-US/docs/Web/CSS/zoom) to a number other than `1` on a grid container with children with `animate:flip` causes the flip animations to originate from incorrect positions.
While I'm personally unfamiliar with Svelte's source code, I'd imagine this is a bug with the flip animation's origin position calculation not accounting for the zoom.
Please let me know if you need any more detail. Additionally, if you can point me to where this calculation is done in the code, I can take a shot at implementing a fix. Thank you!
### Reproduction
REPL: https://svelte.dev/repl/8da44c207d3741e296054ff787111f3a?version=4.2.19
In this example, you can add and remove fruit randomly from a 3x3 grid. Try clicking the "Add" and "Remove" buttons and you'll find that the fruit is animated correctly. However, now try checking the "Zoom" checkbox, which applies the CSS property `zoom: 2` to the grid. After doing so, you'll find that adding and removing fruit is no longer animated correctly. Specifically, the origins of the animations are calculated incorrectly.
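If it helps narrow things down, my guess at the arithmetic: `getBoundingClientRect()` returns coordinates already multiplied by the effective zoom, so a delta computed from two rects comes out `zoom` times larger than the layout-space delta the animation needs. A toy illustration of the suspected correction (plain Python standing in for the JS; the real fix would presumably live in Svelte's flip code, which I haven't read):

```python
def flip_delta(from_x, to_x, zoom=1.0):
    """Translate distance in layout units, given rect coordinates that the
    browser reports pre-multiplied by the CSS zoom factor."""
    return (from_x - to_x) / zoom

# Under zoom: 2, a 100px layout move is reported as a 200px rect delta,
# so the uncorrected animation origin lands twice as far away as it should.
```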
### Logs
_No response_
### System Info
```shell
System:
OS: Linux 5.15 Ubuntu 20.04.6 LTS (Focal Fossa)
CPU: (16) x64 AMD Ryzen 9 5900HS with Radeon Graphics
Memory: 11.25 GB / 13.65 GB
Container: Yes
Shell: 3.3.0 - /usr/bin/fish
Binaries:
Node: 20.10.0 - /usr/local/bin/node
Yarn: 1.22.19 - ~/.yarn/bin/yarn
npm: 10.5.0 - /usr/local/bin/npm
npmPackages:
svelte: ^4.2.1 => 4.2.1
Tested on Chrome Version 128.0.6613.86 (Official Build) (64-bit)
```
### Severity
annoyance | transition/animation | low | Critical |
2,499,134,953 | pytorch | DYNAMIC_TRT_MODEL_CONVERSION 0 INTERNAL ASSERT FAILED or, comfyui made me come here | ### 🐛 Describe the bug
DYNAMIC_TRT_MODEL_CONVERSION
0 INTERNAL ASSERT FAILED at "C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\jit\\ir\\alias_analysis.cpp":621, please report a bug to PyTorch. We don't have an op for aten::view but it isn't a special case. Argument types: Tensor, int, Candidates: aten::view(Tensor(a) self, SymInt[] size) -> Tensor(a) aten::view.dtype(Tensor(a) self, ScalarType dtype) -> Tensor(a)
# ComfyUI Error Report
## Error Details
- **Node Type:** DYNAMIC_TRT_MODEL_CONVERSION
- **Exception Type:** RuntimeError
- **Exception Message:** 0 INTERNAL ASSERT FAILED at "C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\jit\\ir\\alias_analysis.cpp":621, please report a bug to PyTorch. We don't have an op for aten::view but it isn't a special case. Argument types: Tensor, int,
Candidates:
aten::view(Tensor(a) self, SymInt[] size) -> Tensor(a)
aten::view.dtype(Tensor(a) self, ScalarType dtype) -> Tensor(a)
## Stack Trace
```
File "C:\AI\pinokio\api\comfy.git\app\execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "C:\AI\pinokio\api\comfy.git\app\execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "C:\AI\pinokio\api\comfy.git\app\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\AI\pinokio\api\comfy.git\app\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "C:\AI\pinokio\api\comfy.git\app\custom_nodes\ComfyUI_TensorRT\tensorrt_convert.py", line 539, in convert
return super()._convert(
File "C:\AI\pinokio\api\comfy.git\app\custom_nodes\ComfyUI_TensorRT\tensorrt_convert.py", line 282, in _convert
torch.onnx.export(
File "C:\AI\pinokio\api\comfy.git\app\env\lib\site-packages\torch\onnx\utils.py", line 551, in export
_export(
File "C:\AI\pinokio\api\comfy.git\app\env\lib\site-packages\torch\onnx\utils.py", line 1648, in _export
graph, params_dict, torch_out = _model_to_graph(
File "C:\AI\pinokio\api\comfy.git\app\env\lib\site-packages\torch\onnx\utils.py", line 1170, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
File "C:\AI\pinokio\api\comfy.git\app\env\lib\site-packages\torch\onnx\utils.py", line 1046, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "C:\AI\pinokio\api\comfy.git\app\env\lib\site-packages\torch\onnx\utils.py", line 950, in _trace_and_get_graph_from_model
trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
File "C:\AI\pinokio\api\comfy.git\app\env\lib\site-packages\torch\jit\_trace.py", line 1497, in _get_trace_graph
outs = ONNXTracedModule(
File "C:\AI\pinokio\api\comfy.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\AI\pinokio\api\comfy.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "C:\AI\pinokio\api\comfy.git\app\env\lib\site-packages\torch\jit\_trace.py", line 141, in forward
graph, out = torch._C._create_graph_by_tracing(
```
## System Information
- **ComfyUI Version:** v0.1.3-23-gbaa6b4d
- **Arguments:** main.py --cuda-malloc --preview-method taesd
- **OS:** nt
- **Python Version:** 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:40:08) [MSC v.1938 64 bit (AMD64)]
- **Embedded Python:** false
- **PyTorch Version:** 2.4.0+cu121
## Devices
- **Name:** cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 25769279488
- **VRAM Free:** 15346121536
- **Torch VRAM Total:** 10166992896
- **Torch VRAM Free:** 1148402496
## Logs
```
2024-08-31 21:04:32,645 - root - INFO - Total VRAM 24576 MB, total RAM 65458 MB
2024-08-31 21:04:32,645 - root - INFO - pytorch version: 2.4.0+cu121
2024-08-31 21:04:32,648 - root - INFO - Set vram state to: NORMAL_VRAM
2024-08-31 21:04:32,648 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
2024-08-31 21:04:33,800 - root - INFO - Using pytorch cross attention
2024-08-31 21:04:35,216 - root - INFO - [Prompt Server] web root: C:\AI\pinokio\api\comfy.git\app\web
2024-08-31 21:04:36,786 - root - INFO -
Import times for custom nodes:
2024-08-31 21:04:36,786 - root - INFO - 0.0 seconds: C:\AI\pinokio\api\comfy.git\app\custom_nodes\websocket_image_save.py
2024-08-31 21:04:36,786 - root - INFO - 0.0 seconds: C:\AI\pinokio\api\comfy.git\app\custom_nodes\cg-use-everywhere
2024-08-31 21:04:36,786 - root - INFO - 0.0 seconds: C:\AI\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-NAI-styler
2024-08-31 21:04:36,787 - root - INFO - 0.0 seconds: C:\AI\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-Universal-Styler
2024-08-31 21:04:36,787 - root - INFO - 0.0 seconds: C:\AI\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-Custom-Scripts
2024-08-31 21:04:36,787 - root - INFO - 0.0 seconds: C:\AI\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-GGUF
2024-08-31 21:04:36,787 - root - INFO - 0.1 seconds: C:\AI\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-Flowty-LDSR
2024-08-31 21:04:36,787 - root - INFO - 0.2 seconds: C:\AI\pinokio\api\comfy.git\app\custom_nodes\ComfyUI_TensorRT
2024-08-31 21:04:36,787 - root - INFO - 0.3 seconds: C:\AI\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-Manager
2024-08-31 21:04:36,788 - root - INFO - 0.4 seconds: C:\AI\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-Fluxpromptenhancer
2024-08-31 21:04:36,788 - root - INFO -
2024-08-31 21:04:36,800 - root - INFO - Starting server
2024-08-31 21:04:36,800 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2024-08-31 21:11:33,850 - root - INFO - got prompt
2024-08-31 21:11:33,993 - root - INFO - model weight dtype torch.bfloat16, manual cast: None
2024-08-31 21:11:33,995 - root - INFO - model_type FLUX
2024-08-31 21:11:34,049 - root - INFO - Requested to load Flux
2024-08-31 21:11:34,049 - root - INFO - Loading 1 new model
2024-08-31 21:11:41,506 - root - INFO - loaded completely 0.0 8592.597778320312 True
2024-08-31 21:11:50,425 - root - ERROR - !!! Exception during processing !!! 0 INTERNAL ASSERT FAILED at "C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\jit\\ir\\alias_analysis.cpp":621, please report a bug to PyTorch. We don't have an op for aten::view but it isn't a special case. Argument types: Tensor, int,
Candidates:
aten::view(Tensor(a) self, SymInt[] size) -> Tensor(a)
aten::view.dtype(Tensor(a) self, ScalarType dtype) -> Tensor(a)
2024-08-31 21:11:50,463 - root - ERROR - Traceback (most recent call last):
File "C:\AI\pinokio\api\comfy.git\app\execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "C:\AI\pinokio\api\comfy.git\app\execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "C:\AI\pinokio\api\comfy.git\app\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\AI\pinokio\api\comfy.git\app\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "C:\AI\pinokio\api\comfy.git\app\custom_nodes\ComfyUI_TensorRT\tensorrt_convert.py", line 539, in convert
return super()._convert(
File "C:\AI\pinokio\api\comfy.git\app\custom_nodes\ComfyUI_TensorRT\tensorrt_convert.py", line 282, in _convert
torch.onnx.export(
File "C:\AI\pinokio\api\comfy.git\app\env\lib\site-packages\torch\onnx\utils.py", line 551, in export
_export(
File "C:\AI\pinokio\api\comfy.git\app\env\lib\site-packages\torch\onnx\utils.py", line 1648, in _export
graph, params_dict, torch_out = _model_to_graph(
File "C:\AI\pinokio\api\comfy.git\app\env\lib\site-packages\torch\onnx\utils.py", line 1170, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
File "C:\AI\pinokio\api\comfy.git\app\env\lib\site-packages\torch\onnx\utils.py", line 1046, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "C:\AI\pinokio\api\comfy.git\app\env\lib\site-packages\torch\onnx\utils.py", line 950, in _trace_and_get_graph_from_model
trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
File "C:\AI\pinokio\api\comfy.git\app\env\lib\site-packages\torch\jit\_trace.py", line 1497, in _get_trace_graph
outs = ONNXTracedModule(
File "C:\AI\pinokio\api\comfy.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\AI\pinokio\api\comfy.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "C:\AI\pinokio\api\comfy.git\app\env\lib\site-packages\torch\jit\_trace.py", line 141, in forward
graph, out = torch._C._create_graph_by_tracing(
RuntimeError: 0 INTERNAL ASSERT FAILED at "C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\jit\\ir\\alias_analysis.cpp":621, please report a bug to PyTorch. We don't have an op for aten::view but it isn't a special case. Argument types: Tensor, int,
Candidates:
aten::view(Tensor(a) self, SymInt[] size) -> Tensor(a)
aten::view.dtype(Tensor(a) self, ScalarType dtype) -> Tensor(a)
2024-08-31 21:11:50,466 - root - INFO - Prompt executed in 16.61 seconds
```
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
```
{"last_node_id":8,"last_link_id":1,"nodes":[{"id":8,"type":"DYNAMIC_TRT_MODEL_CONVERSION","pos":{"0":831,"1":287,"2":0,"3":0,"4":0,"5":0,"6":0,"7":0,"8":0,"9":0},"size":{"0":352.79998779296875,"1":370},"flags":{},"order":1,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":1}],"outputs":[],"properties":{"Node name for S&R":"DYNAMIC_TRT_MODEL_CONVERSION"},"widgets_values":["tensorrt/TRT-",1,1,2,1024,1216,1600,1024,832,1600,1,1,2,14]},{"id":7,"type":"UnetLoaderGGUF","pos":{"0":485,"1":288,"2":0,"3":0,"4":0,"5":0,"6":0,"7":0,"8":0,"9":0},"size":{"0":315,"1":58},"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[1],"shape":3,"slot_index":0}],"properties":{"Node name for S&R":"UnetLoaderGGUF"},"widgets_values":["flux1-dev-Q5_1.gguf"]}],"links":[[1,7,0,8,0,"MODEL"]],"groups":[],"config":{},"extra":{"ds":{"scale":1,"offset":[0,0]}},"version":0.4}
```
## Additional Context
Trying to run a Flux GGUF model through TensorRT:
hopefully the pinokio/comfyui logs will be helpful
[logs.zip](https://github.com/user-attachments/files/16826753/logs.zip)
### Versions
hopefully this zip has what you need
[logs.zip](https://github.com/user-attachments/files/16826754/logs.zip)
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit | low | Critical |
2,499,149,639 | godot | var_to_str may get stuck in an infinite loop | ### Tested versions
- Reproducible in v4.4.dev1.official.28a72fa43
### System information
Windows 11, Vulkan API 1.3.205 - Forward+ - Using Vulkan Device #0: NVIDIA - NVIDIA GeForce RTX 3060 Ti
### Issue description
In some cases, `var_to_str` can enter an infinite loop, for example:
```
extends Node2D

var a
var b

func _ready() -> void:
    a = A.new()
    a.parent = self
    b = B.new() # Comment out these two lines, and the infinite loop will not occur.
    b.parent = self # Comment out these two lines, and the infinite loop will not occur.
    print(var_to_str(self))

class A:
    var parent

class B:
    var parent
```
### Steps to reproduce
1. Open the MRP and run it
### Minimal reproduction project (MRP)
[test_var_to_str.zip](https://github.com/user-attachments/files/16826859/test_var_to_str.zip)
| bug,topic:core | low | Minor |
2,499,205,810 | terminal | Rows should remember their width (MeasureRight()) | ### Windows Terminal version
1.21.2361.0
### Windows build number
10.0.22631.0
### Other Software
- System Language: `zh_cn`
- Console Font: `Hack Nerd Font Mono`
- Font Size: 12
### Steps to reproduce
1. Print more than two lines ending with space and causing line wrap
```python
# `120` here is my console width
for i in range(2):
    print('x' * 120 + ' ')
print('yya')
input('--')
```
2. Resize the window to change the console width.
### Expected Behavior
Cursor should be after `--` of `input('--')` throughout the process
### Actual Behavior
After resize, cursor is before `a` of `print('yya')`
- Before resize: (width is 120)

- After resize: (width is 121, line wrap here seems wrong too)

| Product-Conpty,Area-Output,Issue-Bug | low | Minor |
2,499,284,598 | rust | rustc consumes > 50GB memory, completes eventually but def. stuck on something | First of all, I am convinced it is more the fault of me doing something wrong. My hope and guess is that this is not a bug in rust, but just me misusing the language. I can however not find why it is happening, but I imagine it is something to do with an accidantal recursive trait bound or other generic related issue...
- Problem is specifically with: <https://github.com/plabayo/rama/blob/09c8bc0fea80560763c12149aceae3d9ddae87f5/examples/http_high_level_client.rs>
- And the problem seems to have been introduced in <https://github.com/plabayo/rama/pull/297/commits/aa4ea25cd2e4d624e8f82d1c48d375f285e3266b> (that's when CI stage timings increased from 12m to 85m...)
Of course, it might be that I was already defining trait bounds or using generics in a "wrong" manner before `aa4ea25cd2e4d624e8f82d1c48d375f285e3266b`.
Logs are attached with the timings I was able to produce on nightly (on stable it took too long for me to wait):
[rustc.logs.zip](https://github.com/user-attachments/files/16827827/rustc.logs.zip)
| A-type-system,I-compiletime,T-compiler,I-compilemem,E-needs-mcve,T-types | low | Critical |
2,499,327,630 | rust | Compiler can't tell that two types are equal in a blanket supertrait impl | I tried this code:
```rust
trait TraitA: TraitB {
type Foo;
const FOO: Self::Foo;
}
trait TraitB {
type Bar;
const BAR: Self::Bar;
}
impl<T: TraitA> TraitB for T {
type Bar = <T as TraitA>::Foo;
const BAR: <T as TraitB>::Bar = <T as TraitA>::FOO;
}
```
I expected the code to compile. Instead, I got the following error:
```
   Compiling playground v0.0.1 (/playground)
error[E0308]: mismatched types
--> src/lib.rs:13:37
|
13 | const BAR: <T as TraitB>::Bar = <T as TraitA>::FOO;
| ^^^^^^^^^^^^^^^^^^ expected `TraitB::Bar`, found `TraitA::Foo`
|
= note: expected associated type `<T as TraitB>::Bar`
found associated type `<T as TraitA>::Foo`
= note: an associated type was expected, but a different one was found
For more information about this error, try `rustc --explain E0308`.
error: could not compile `playground` (lib) due to 1 previous error
```
Note that removing the supertrait requirement causes the code to compile:
<details>
<summary>Code without supertrait requirement</summary>
```rust
trait TraitA {
type Foo;
const FOO: Self::Foo;
}
trait TraitB {
type Bar;
const BAR: Self::Bar;
}
impl<T: TraitA> TraitB for T {
type Bar = <T as TraitA>::Foo;
const BAR: <T as TraitB>::Bar = <T as TraitA>::FOO;
}
```
</details>
Also, adding the associated type requirement to the supertrait requirement causes the code to compile:
<details>
<summary>Code with the associated type requirement in the supertrait requirement</summary>
```rust
trait TraitA: TraitB<Bar = Self::Foo> {
type Foo;
const FOO: Self::Foo;
}
trait TraitB {
type Bar;
const BAR: Self::Bar;
}
impl<T: TraitA> TraitB for T {
type Bar = <T as TraitA>::Foo;
const BAR: <T as TraitB>::Bar = <T as TraitA>::FOO;
}
```
</details>
Discovered by esper8989 on the community discord.
### Meta
The issue reproduces on the playground on "Stable version: 1.80.1" and "Nightly version: 1.82.0-nightly (2024-08-31 a7399ba69d37b019677a)" | T-compiler,C-bug,T-types | low | Critical |
2,499,363,233 | PowerToys | Turn off the yellow frame in Windows 10pro. Yellow border on all monitors - Get Rid of the Yellow Outline - for Windows10pro | ### Description of the new feature / enhancement
Turn off the yellow frame in Windows 10pro.
### Scenario when this would be used?
When using streaming programs, a yellow border is displayed. Windows 11 has an option to turn this off ("Settings -> Privacy & security -> Screenshot borders"); Windows 10 does not have this option.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,499,366,046 | godot | Using the view joystick in the Android editor causes jitter | ### Tested versions
- Reproducable in: Godot 4.3 Stable Android editor
### System information
Android, Godot 4.3 Stable, Compatibility
### Issue description
Using the joystick to move the camera causes this jitter. It's noticeable when moving the camera on the Y axis.
https://github.com/user-attachments/assets/471765ef-0a44-42b2-b7a0-d2135857f649
### Steps to reproduce
Android editor. View joystick
### Minimal reproduction project (MRP)
Project with a base plane mesh for camera jitter reference | bug,platform:android,topic:editor,topic:input | low | Minor |
2,499,376,197 | rust | Missing AsMut<[T;N]> for [T;N] |
I tried this code:
```rust
fn foo<T: AsMut<[u8; 12]>>(mut t: T) {
t.as_mut()[0] = 0;
}
fn bar(x: &mut [u8; 12]) {
foo(x);
}
```
I expected it to compile.
Instead, this happened:
```
error[E0277]: the trait bound `[u8; 12]: AsMut<[u8; 12]>` is not satisfied
--> crate/src/it.rs:5:9
|
5 | foo(x);
| --- ^ the trait `AsMut<[u8; 12]>` is not implemented for `[u8; 12]`, which is required by `&mut [u8; 12]: AsMut<[u8; 12]>`
| |
| required by a bound introduced by this call
|
= help: the following other types implement trait `AsMut<T>`:
[T; N]
[T]
= note: required for `&mut [u8; 12]` to implement `AsMut<[u8; 12]>`
note: required by a bound in `foo`
--> crate/src/it.rs:1:11
|
1 | fn foo<T: AsMut<[u8; 12]>>(mut t: T) {
| ^^^^^^^^^^^^^^^ required by this bound in `foo`
```
```
| T-libs | low | Critical |
2,499,377,097 | ui | [bug]: multi-character alias not working | ### Describe the bug
Due to unavoidable circumstances, I have to use the global import alias `Frontend/*`; changing it to `@/*` or adding `@/*` as a separate alias does not work for me.
With this setup I have run into an issue: the CLI cannot handle aliases that are longer than one character.
The code that reads the tsconfig file, for example, simply returns the first character of the alias instead of removing the `/*` at the end. See [here](https://github.com/shadcn-ui/ui/blob/81c7e44863c4f571a9818d5ac55092effbd348f1/packages/shadcn/src/utils/get-project-info.ts#L161)
That behavior results in files being generated in the wrong folder, e.g. `F/components/ui/` instead of `components/ui/`.
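The truncation described above can be sketched generically (Python here purely for illustration; the actual CLI code is TypeScript, and `removesuffix` stands in for whatever string handling the fix would use):

```python
alias = "Frontend/*"

# What the CLI currently does: keep only the first character of the alias.
buggy = alias[0]                  # "F"

# What it should do: strip only the trailing "/*" wildcard.
fixed = alias.removesuffix("/*")  # "Frontend"

print(buggy, fixed)
```

With the buggy form, every generated path and import gets prefixed with `F/`, which matches the behavior described above.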
After I figured that out, I fixed the import paths in the `components.json` file to look like this:
```json
"aliases": {
"components": "Frontend/components",
"utils": "Frontend/lib/utils",
"ui": "Frontend/components/ui",
"lib": "Frontend/lib",
"hooks": "Frontend/hooks"
}
```
With that done, the files are being generated in the correct directory as expected.
Unfortunately, this isn't the end of the issue. If I add a new component, e.g. the separator via `npx shadcn@latest add separator`, the generated file will, for whatever reason, still use `F/` as the import path, resulting in imports like `import { cn } from "F/lib/utils"`. This is not a valid import; it needs to be `import { cn } from "Frontend/lib/utils"`, as specified in the components.json file.
Another somewhat unrelated but easy-to-fix thing I've noticed: `*` isn't recognized as an import path. I spent at least half an hour before I finally looked at the source code to figure out what the issue was.
### Affected component/components
all of them I assume
### How to reproduce
Use the following as the global import alias:
```json
"paths": {
"Frontend/*": ["*"]
}
```
1. The installation won't work because the path is `*` and not `./*`; the CLI won't recognize it as a global import path
2. The alias itself will be cut off at the first character instead of the slash
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Windows 11
Vite 5.3.3
React 18.3.1
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,499,390,646 | tauri | [bug] DragDropEvent::Drop does not add https:// url into paths vec. | ### Describe the bug
When dragging a file into the Tauri window, a `WindowEvent::DragDrop` event is created. If the file's source was the file system, `paths` will contain a relative path to that file. If the file's source is a browser, such as Firefox, the `paths` Vec will be empty.
### Reproduction
1. Create a new Tauri app.
2. In `tauri.conf.json` set `dragDropEnabled` to `true`
3. Call the following method for the Tauri Builder:
```rust
tauri::Builder::default()
.on_window_event(|_, event| match event {
WindowEvent::DragDrop(drop) => {
dbg!(drop);
}
_ => (),
})
```
4. Drag an [image](https://uploads.dailydot.com/2018/10/olli-the-polite-cat.jpg?auto=compress&fm=pjpg) into the Tauri window.
### Expected behavior
Console outputs:
```
[app/src-tauri/src/lib.rs:14:17] drop = Drop {
paths: [
"https://uploads.dailydot.com/2018/10/olli-the-polite-cat.jpg",
],
position: PhysicalPosition {
x: 407.0,
y: 346.0,
},
}
```
### Full `tauri info` output
```text
WARNING: no lock files found, defaulting to npm
Error `tauri.conf.json` error on `build`: Additional properties are not allowed ('devUrl', 'frontendDist' were unexpected)
Error `tauri.conf.json` error: Additional properties are not allowed ('app', 'bundle', 'identifier', 'productName', 'version' were unexpected)
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,platform: macOS,status: needs triage | low | Critical |
2,499,402,296 | pytorch | [torch.jit.script] An INTERNAL ASSERT error will be raised when permuting the inputs of a jit scripted graph with mismatched indices. | ### 🐛 Describe the bug
An INTERNAL ASSERT error will be raised when permuting the inputs of a jit scripted graph with mismatched indices. The code is as follows:
```python
import numpy as np
import torch
@torch.jit.script
def foo(i, j, k):
pass
g = foo.graph
idxs = [0]
for (i, inp) in enumerate(g.inputs()):
inp.setDebugName(f'inp{i}')
idxs.append(i)
permuted_idxs = list(np.random.permutation(idxs))
g.permuteInputs(permuted_idxs)
```
Error messages:
```
Traceback (most recent call last):
File "/data/code.py", line 17, in <module>
g.permuteInputs(permuted_idxs)
RuntimeError: new_order.size() == outputs_.size() INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/ir.cpp":1586, please report a bug to PyTorch.
```
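For what it's worth, the index list in the repro above is itself malformed: it is seeded with `0` before the loop, so the first input index gets duplicated and the list ends up one element longer than the number of graph inputs. A plain-Python sketch of that construction for the three-input `foo(i, j, k)`:

```python
# Mirror the list construction from the repro for a graph with 3 inputs.
idxs = [0]
for i in range(3):   # stands in for enumerate(g.inputs())
    idxs.append(i)

print(idxs)          # [0, 0, 1, 2] -- four entries for three inputs
```

The failing check (`new_order.size() == outputs_.size()`) is consistent with exactly this length mismatch, though arguably it should surface as a regular error message rather than an INTERNAL ASSERT.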
The error is reproducible with the nightly-build version `2.5.0.dev20240815+cpu`.
### Versions
PyTorch version: 2.5.0.dev20240815+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-107-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz
Stepping: 6
CPU MHz: 900.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1 MiB
L2 cache: 40 MiB
L3 cache: 48 MiB
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.2
[pip3] onnxruntime==1.19.0
[pip3] onnxscript==0.1.0.dev20240816
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.5.0.dev20240815+cpu
[pip3] torch-xla==2.4.0
[pip3] torch_xla_cuda_plugin==2.4.0
[pip3] torchaudio==2.4.0.dev20240815+cu121
[pip3] torchvision==0.20.0.dev20240815+cu121
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi
[conda] torch 2.5.0.dev20240815+cpu pypi_0 pypi
[conda] torch-xla 2.4.0 pypi_0 pypi
[conda] torch-xla-cuda-plugin 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0.dev20240815+cu121 pypi_0 pypi
[conda] torchvision 0.20.0.dev20240815+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit | low | Critical |
2,499,424,122 | pytorch | A crash would be raised when using `._StackString` with empty lists | ### 🐛 Describe the bug
A `Segmentation fault` will be raised when using `torch.classes._TorchScriptTesting._StackString` with empty input lists in a scripted function and calling the `pop()` function. The code is as follows:
```python
import torch
from torch.testing._internal.common_utils import find_library_location
lib_file_path = find_library_location('libtorchbind_test.so')
torch.ops.load_library(str(lib_file_path))
def foo():
ss = torch.classes._TorchScriptTesting._StackString([])
ss2 = torch.classes._TorchScriptTesting._StackString([])
ss.merge(ss2)
return ss
scripted = torch.jit.script(foo)
out = scripted()
out.pop()
```
Error messages:
> Segmentation fault (core dumped)
The error is reproducible with the nightly-build version `2.5.0.dev20240815+cpu`.
### Versions
PyTorch version: 2.5.0.dev20240815+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-107-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz
Stepping: 6
CPU MHz: 800.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1 MiB
L2 cache: 40 MiB
L3 cache: 48 MiB
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.2
[pip3] onnxruntime==1.19.0
[pip3] onnxscript==0.1.0.dev20240816
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.5.0.dev20240815+cpu
[pip3] torch-xla==2.4.0
[pip3] torch_xla_cuda_plugin==2.4.0
[pip3] torchaudio==2.4.0.dev20240815+cu121
[pip3] torchvision==0.20.0.dev20240815+cu121
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi
[conda] torch 2.5.0.dev20240815+cpu pypi_0 pypi
[conda] torch-xla 2.4.0 pypi_0 pypi
[conda] torch-xla-cuda-plugin 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0.dev20240815+cu121 pypi_0 pypi
[conda] torchvision 0.20.0.dev20240815+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit | low | Critical |
2,499,430,039 | pytorch | `TorchScriptTesting._PickleTester` caused a segmentation fault with empty input lists in a jit traced function | ### 🐛 Describe the bug
A `Segmentation fault` will be raised when using `torch.classes._TorchScriptTesting._PickleTester` with an empty input list in a jit traced function. The code is as follows:
```python
import torch
from torch.testing._internal.common_utils import find_library_location
lib_file_path = find_library_location('libtorchbind_test.so')
torch.ops.load_library(str(lib_file_path))
class TryTracing(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.f = torch.classes._TorchScriptTesting._PickleTester([])
def forward(self):
return torch.ops._TorchScriptTesting.take_an_instance(self.f)
traced = torch.jit.trace(TryTracing(), [])
```
Error messages:
> Segmentation fault (core dumped)
The error is reproducible with the nightly-build version `2.5.0.dev20240815+cpu`.
### Versions
Collecting environment information...
PyTorch version: 2.5.0.dev20240815+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-107-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz
Stepping: 6
CPU MHz: 900.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1 MiB
L2 cache: 40 MiB
L3 cache: 48 MiB
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.2
[pip3] onnxruntime==1.19.0
[pip3] onnxscript==0.1.0.dev20240816
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.5.0.dev20240815+cpu
[pip3] torch-xla==2.4.0
[pip3] torch_xla_cuda_plugin==2.4.0
[pip3] torchaudio==2.4.0.dev20240815+cu121
[pip3] torchvision==0.20.0.dev20240815+cu121
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi
[conda] torch 2.5.0.dev20240815+cpu pypi_0 pypi
[conda] torch-xla 2.4.0 pypi_0 pypi
[conda] torch-xla-cuda-plugin 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0.dev20240815+cu121 pypi_0 pypi
[conda] torchvision 0.20.0.dev20240815+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit | low | Critical |
2,499,449,306 | godot | MSBuild failure when exporting Android with NativeAOT turned on | ### Tested versions
Tested in 4.2 and 4.3 stable
### System information
MacOS M3
### Issue description
With AOT enabled, Godot passes the architecture "Android" instead of "linux-bionic" to the MSBuild command, which makes it fail:
```
IlcCompile:
  Generating native code
  "/Users/user/.nuget/packages/runtime.osx-arm64.microsoft.dotnet.ilcompiler/8.0.8/tools/ilc" @"/Users/user/dev/wl/royal-metagame-godot/.godot/mono/temp/obj/ExportDebug/android-arm64/native/RoyalMetagameGodot.ilc.rsp"
Unhandled Exception: System.CommandLine.CommandLineException: Target OS 'android' is not supported
```
The change done in PR #86791 fixes this problem.
### Steps to reproduce
Create a project, enable NativeAOT in the csproj file and export the android apk.
### Minimal reproduction project (MRP)
Really just create a project, set `<EnableAot>true</EnableAot>` in the csproj, with .NET 8.0 as the target framework. | bug,platform:android,topic:export | low | Critical |
2,499,487,244 | ui | [feat]: Adding possibility of removing Icon in Select | ### Feature description
# Select component simplified
I have a Navbar where I'm using the select to choose the app's language. However, as it is a Navbar, space in the mobile version is quite limited, so I am looking for ways to reduce component width.
I finally decided to remove the icon in the Select component to gain some extra space. However, there is no way to do that with props, so I had to modify the component itself. Could we add a prop to simplify the component by removing that icon?
## Default option

## Modified option

### Affected component/components
Select
### Additional Context
Additional details here...
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,499,501,353 | next.js | [Next/Jest] debugger statement is ignored in chrome debugger | ### Verify canary release
- [X] I verified that the issue exists in the latest Next.js canary release
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.5.0: Wed May 1 20:14:38 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020
Available memory (MB): 32768
Available CPU cores: 12
Binaries:
Node: 18.20.2
npm: 10.5.0
Yarn: 3.6.3
pnpm: 8.7.6
Relevant Packages:
next: 14.2.7 // Latest available version is detected (14.2.7).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which example does this report relate to?
https://github.com/vercel/next.js/tree/canary/examples/with-jest
### What browser are you using? (if relevant)
_No response_
### How are you deploying your application? (if relevant)
_No response_
### Describe the Bug
I use next/jest and run a test containing a `debugger` statement. Execution does not stop at that line in the Chrome inspector.
### Expected Behavior
Execution should pause at the `debugger` statement.
### To Reproduce
Follow [the repo to reproduce](https://github.com/tkow/demo-next-jest-debugger-statement-does-not-work).
Run `node --inspect-brk ./node_modules/.bin/jest ./app --watch --no-cache --runInBand` with test files containing `debugger;`, then click the Node.js icon at the top left of the inspector window to connect Jest to the inspector. | examples | low | Critical |
2,499,501,548 | rust | Scalar vector operations can use NEON instructions, which have incorrect subnormal behavior | This is the Rust version of https://github.com/llvm/llvm-project/issues/106909: scalar `f32`/`f64` operations can produce the wrong result for subnormals on ARM if NEON instructions are used.
This just got fixed in upstream LLVM, but we should also have a test covering this.
Cc @nikic | A-LLVM,T-compiler,C-bug,A-floating-point | low | Minor |
2,499,502,336 | godot | C# running in Godot is about 5x slower than running it from a Console app | ### Tested versions
- Reproducible in: v4.4.dev1.mono.official [28a72fa43].
### System information
Windows 11, Godot 4.4-dev, Vulkan, Jetbrains Rider, .Net 9
### Issue description
In one of my tests I ran some benchmarks, because I couldn't find any performance-related issues in my code anymore and profiling did not lead to any significant results.
In that test I executed the exact same logic from a console app, to use it with BenchmarkDotNet, and noticed that the code running in the console is about 5x as fast as running it in Godot.
In my actual environment these calls run in a separate thread pool, so this shouldn't be related to Godot itself but to the .NET runtime used by Godot.
Expected:
Performance of C# code running in Godot is about the same as running it from a Console app
Actual:
Performance of C# code running in Godot is ~5x slower than running it from a Console app
### Steps to reproduce
Create a C# project and create a CPU heavy task.
Run this in a tight loop ~10k times
Execute the same logic in a Console app and compare the results
### Minimal reproduction project (MRP)
[performancetest.zip](https://github.com/user-attachments/files/16829201/performancetest.zip)
This is a small part of the voxel engine I am currently writing in pure C#. It contains quite a bit of code, but with very simple tasks I couldn't produce a meaningful difference (~1ms per iteration).
Please note that this code runs before anything renders at all, so I would assume it should perform the same.
But this project shows a HUGE difference.
The results are:
Console: Took 23s
Godot: Took 1m 30s | bug,topic:dotnet,performance | low | Major |
2,499,508,964 | godot | Godot 4.3 buttons switching to hover style after being pressed on IOS export | ### Tested versions
- Reproduced on version 4.3 stable
Worked correctly on 4.2.1 stable
### System information
MacBook Pro - OS 14.4 (23E214), using Godot 4.3 stable. iPhone 13 Pro using 17.6.1, and iPad mini 17.5.1 and 17.6.1
### Issue description
I installed Godot 4.3 stable and migrated a project I was working on in Godot 4.2.1 stable. When I exported it to my iPhone, the buttons would switch to the hover style after being pressed. This did not happen on 4.2.1. This occurred on both regular buttons and texture buttons.
### Steps to reproduce
I then created a brand new project in 4.3, created just a simple button with default style in the center, made it 400x400, set an empty style for the focus so it didn't have the border. I also added a texture button with a default style, a hover style, and pressed style. I then exported it to my iPhone to test.
When I drag my finger from outside the button into the button, it shows the hover style when in the center, as expected, and when I move out, it goes back to the default like it should. When I actually press the button, it shows the pressed style, but then instead of going back to normal, it switches to hover even though my finger isn't on the button any more. If I press another button, or any area outside the button, it then switches back to the normal style. This happens for both the default button style and the texture button.
I reproduced this in 4.2.1 stable, and the buttons do not respond to the hover/moving finger in and out of the button space, which is fine in this case. I did not design my project with this in mind. However the buttons do switch back to the default normal style after being pressed, which is what I want to happen again on 4.3.
I only have iOS devices to test with, not sure if this happens on Android as well.
Here is a link to a short demo of what I am trying to explain: https://youtu.be/jorcIJNQdwg
### Minimal reproduction project (MRP)
[Button Example.zip](https://github.com/user-attachments/files/16829219/Button.Example.zip)
To test on iOS, you'll need to put in your own Apple Store team ID as well as a bundle identifier.
I created simple Normal, Hover, and Pressed textures to better demonstrate it. You can see the differences on the default button as well, it goes slightly grey for the hover, and darker when pressed. | bug,platform:ios,topic:input,topic:gui | low | Minor |
2,499,509,806 | godot | Issue with unintended possibility for multiple inheritance with Nodes and attached script objects. | ### Tested versions
- Reproducible in Godot v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Mobile) - dedicated NVIDIA GeForce RTX 4060 Ti (NVIDIA; 32.0.15.6081) - AMD Ryzen 7 3800X 8-Core Processor (16 Threads)
### Issue description
It's possible to get an object in Godot into a state where it has two entirely different inheritance hierarchies.
You can still cast between the two different hierarchies, but it's messy. I suspect the whole thing is unintended.
### Steps to reproduce
Start with a node of type CharacterBody2D and attach a script "second_type.gd" (any contents) that extends Node, then add an Area2D. Connect "_on_area_2d_body_exited" or any other signal handler that takes a Node2D argument. (The Area2D is only used because it has a signal with an argument of type Node2D, for demonstrating the issue.)
In the body of the chosen function paste:
```
var step = body;
var x:second_type = step;
print(body)
print(step)
print(x)
#Semantically equivalent. Does not work. Editor hates it.
var y:second_type = body as second_type
```
Observe that the editor complains about the last line, saying "Invalid cast: cannot convert from Node2D to second_type".
Comment that line out.
Run the application.
Trigger your event however you like.
Observe that body, step, and x are all the same object, although x is of type Node and body is of type Node2D, which should not be possible in Godot's type structure.
### Minimal reproduction project (MRP)
[minimal_reproduction.zip](https://github.com/user-attachments/files/16829299/minimal_reproduction.zip)
| topic:gdscript,needs testing | low | Major |
2,499,511,443 | ui | [bug]: Not choosing a value in a select nested in a popover closes the popover. | ### Describe the bug
If a select is placed in a popover, the popover closes when you click out of the select without choosing a value.
### Affected component/components
Popover
### How to reproduce
1. Click on the popover
2. Enter the select
3. Click back into the popover
4. The popover closes

### Codesandbox/StackBlitz link
https://v0.dev/r/TOP7DD1xxY1?share=QI8DlC6dERDKeXgz36iMHglr
### Logs
_No response_
### System Info
```bash
v0
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,499,511,754 | stable-diffusion-webui | [Bug]: Whatever I do its trying to use a 3.12.x version that's not even on my PC | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I'm trying to install this for the first time; I haven't had Python on my PC at all for months.
I installed Python 3.10.6, and it's the only version on my PC:

path:

Windows' Control Panel uninstall list (to show you I literally only have 3.10.6 installed):

but when I try to install this, based on the documentation: https://github.com/AUTOMATIC1111/stable-diffusion-webui?tab=readme-ov-file#automatic-installation-on-windows
then this is the error I get (it claims I have 3.12.4, but I don't have any version other than 3.10.6 installed):

So what the hell can I do with this?
I've tried:
- uninstalling every single thing related to python
- restart pc
- install 3.10.6 version of python
- deleted venv folder and deleted python again
- installed 3.10.6 again
- did a "repair" with the python installer on 3.10.6
and it still fails the same way. Am I missing something?
### Steps to reproduce the problem
above
### What should have happened?
above
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
above
### Console logs
```Shell
X:\1111\stable-diffusion-webui>webui-user.bat
'"X:\1111\stable-diffusion-webui\venv\Scripts\activate.bat"' is not recognized as an internal or external command,
operable program or batch file.
venv "X:\1111\stable-diffusion-webui\venv\Scripts\Python.exe"
=============================================================================================================================
INCOMPATIBLE PYTHON VERSION
This program is tested with 3.10.6 Python, but you have 3.12.4.
If you encounter an error with "RuntimeError: Couldn't install torch." message,
or any other error regarding unsuccessful package (library) installation,
please downgrade (or upgrade) to the latest version of 3.10 Python
and delete current Python and "venv" folder in WebUI's directory.
You can download 3.10 Python from here: https://www.python.org/downloads/release/python-3106/
Alternatively, use a binary release of WebUI: https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre
Use --skip-python-version-check to suppress this warning.
=============================================================================================================================
Python 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:03:56) [MSC v.1929 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
ERROR: Could not find a version that satisfies the requirement torch==2.1.2 (from versions: 2.2.0, 2.2.0+cu121, 2.2.1, 2.2.1+cu121, 2.2.2, 2.2.2+cu121, 2.3.0, 2.3.0+cu121, 2.3.1, 2.3.1+cu121, 2.4.0, 2.4.0+cu121)
ERROR: No matching distribution found for torch==2.1.2
Traceback (most recent call last):
File "X:\1111\stable-diffusion-webui\launch.py", line 48, in <module>
main()
File "X:\1111\stable-diffusion-webui\launch.py", line 39, in main
prepare_environment()
File "X:\1111\stable-diffusion-webui\modules\launch_utils.py", line 381, in prepare_environment
run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
File "X:\1111\stable-diffusion-webui\modules\launch_utils.py", line 116, in run
raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install torch.
Command: "X:\1111\stable-diffusion-webui\venv\Scripts\python.exe" -m pip install torch==2.1.2 torchvision==0.16.2 --extra-index-url https://download.pytorch.org/whl/cu121
Error code: 1
Press any key to continue . . .
```
### Additional information
_No response_ | bug-report | low | Critical |
2,499,519,741 | PowerToys | Unable to do maths on Complex numbers | ### Microsoft PowerToys version
0.83.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
The Mathematics part of PowerToys Run is unable to calculate complex/imaginary numbers
Example:

### ✔️ Expected Behavior
In: `=sqrt(-1)`
Out: `i`
### ❌ Actual Behavior
In: `=sqrt(-1)`
Out: `Failed to calculate input`
`Unable to cast object type 'System.Numerics.Complex' to type 'System.IConvertible'`
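For comparison (my addition, not part of the report), a complex-aware evaluator returns a value here instead of failing; Python's `cmath` shows the expected behaviour:

```python
import cmath

# The square root of a negative real is a well-defined complex number,
# which is what the "Out: i" above expects.
result = cmath.sqrt(-1)
print(result)  # → 1j
```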
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,499,523,759 | ui | [bug]: ui fix issue that is not at | ### Describe the bug
### Affected component/components
website
### How to reproduce
![Uploading Screenshot 2024-09-01 195701.png…]()
### Codesandbox/StackBlitz link
responsive issue
### Logs
_No response_
### System Info
```bash
chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,499,530,550 | rust | ICE: `Invalid 'Const' during codegen: UnevaluatedConst {..}` | <!--
[31mICE[0m: Rustc ./2.rs '-Zincremental-verify-ich=yes -Cincremental=<dir> -Cdebuginfo=2 -Clink-dead-code=true -Zvalidate-mir --edition=2021' 'error: internal compiler error: compiler/rustc_codegen_ssa/src/debuginfo/type_names.rs:739:14: Invalid `Const` during codegen: UnevaluatedConst { def: DefId(0:38 ~ 2[dd6c]::zpk2tf_st::{constant#1}), args: [(Mul: (4_usize: usize), (2_usize: usize))] }', 'error: internal compiler error: compiler/rustc_codegen_ssa/src/debuginfo/type_names.rs:739:14: Invalid `Const` during codegen: UnevaluatedConst { def: DefId(0:38 ~ 2[dd6c]::zpk2tf_st::{constant#1}), args: [(Mul: (4_usize: usize), (2_usize: usize))] }'
File: /tmp/im/2.rs
-->
auto-reduced (treereduce-rust):
````rust
#![feature(generic_const_exprs, generic_arg_infer)]
use core::mem::MaybeUninit;
pub struct Arr<T, const N: usize> {
v: [MaybeUninit<T>; N],
}
impl<T, const N: usize> Arr<T, N> {
const ELEM: MaybeUninit<T> = MaybeUninit::uninit();
const INIT: [MaybeUninit<T>; N] = [Self::ELEM; N];
fn new() -> Self {
Arr { v: Self::INIT }
}
}
pub struct BaFormatFilter<const N: usize> {}
pub enum DigitalFilter<const N: usize>
where
[(); N * 2 + 1]: Sized,
[(); N * 2]: Sized,
{
Ba(BaFormatFilter<{ N * 2 + 1 }>),
}
pub fn iirfilter_st_copy<const N: usize, const M: usize>(_: [f32; M]) -> DigitalFilter<N>
where
[(); N * 2 + 1]: Sized,
[(); N * 2]: Sized,
{
let zpk = zpk2tf_st(&Arr::<f32, { N * 2 }>::new(), &Arr::<f32, { N * 2 }>::new());
DigitalFilter::Ba(zpk)
}
pub fn zpk2tf_st<const N: usize>(_z: &Arr<f32, N>, _p: &Arr<f32, N>) -> BaFormatFilter<{ N + 1 }> {
BaFormatFilter {}
}
fn main() {
iirfilter_st_copy::<4, 2>([10., 50.]);
}
````
<details><summary><strong>original code</strong></summary>
<p>
original:
````rust
// issue: rust-lang/rust#106423
// ICE collection encountered polymorphic constant: UnevaluatedConst {..}
//@ edition:2021
//@ check-pass
#![feature(generic_const_exprs, generic_arg_infer)]
#![allow(incomplete_features)]
#![allow(unused)]
use core::mem::MaybeUninit;
pub struct Arr<T, const N: usize> {
v: [MaybeUninit<T>; N],
}
impl<T, const N: usize> Arr<T, N> {
const ELEM: MaybeUninit<T> = MaybeUninit::uninit();
const INIT: [MaybeUninit<T>; N] = [Self::ELEM; N]; // important for optimization of `new`
fn new() -> Self {
Arr { v: Self::INIT }
}
}
pub struct BaFormatFilter<const N: usize> {}
pub enum DigitalFilter<const N: usize>
where
[(); N * 2 + 1]: Sized,
[(); N * 2]: Sized,
{
Ba(BaFormatFilter<{ N * 2 + 1 }>),
}
pub fn iirfilter_st_copy<const N: usize, const M: usize>(_: [f32; M]) -> DigitalFilter<N>
where
[(); N * 2 + 1]: Sized,
[(); N * 2]: Sized,
{
let zpk = zpk2tf_st(&Arr::<f32, { N * 2 }>::new(), &Arr::<f32, { N * 2 }>::new());
DigitalFilter::Ba(zpk)
}
pub fn zpk2tf_st<const N: usize>(
_z: &Arr<f32, N>,
_p: &Arr<f32, N>,
) -> BaFormatFilter<{ N + 1 }>
where
[(); N + 1]: Sized,
{
BaFormatFilter {}
}
fn main() {
iirfilter_st_copy::<4, 2>([10., 50.,]);
}
````
</p>
</details>
Version information
````
rustc 1.83.0-nightly (1a1cc050d 2024-09-01)
binary: rustc
commit-hash: 1a1cc050d8efc906ede39f444936ade1fdc9c6cb
commit-date: 2024-09-01
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
````
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc -Zincremental-verify-ich=yes -Cincremental=<dir> -Cdebuginfo=2 -Clink-dead-code=true -Zvalidate-mir --edition=2021`
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Program output</strong></summary>
<p>
```
warning: the feature `generic_const_exprs` is incomplete and may not be safe to use and/or cause compiler crashes
--> /tmp/icemaker_global_tempdir.40eYA5W2oIUb/rustc_testrunner_tmpdir_reporting.DqRxcpiOUpwU/mvce.rs:1:12
|
1 | #![feature(generic_const_exprs, generic_arg_infer)]
| ^^^^^^^^^^^^^^^^^^^
|
= note: see issue #76560 <https://github.com/rust-lang/rust/issues/76560> for more information
= note: `#[warn(incomplete_features)]` on by default
warning: field `v` is never read
--> /tmp/icemaker_global_tempdir.40eYA5W2oIUb/rustc_testrunner_tmpdir_reporting.DqRxcpiOUpwU/mvce.rs:6:5
|
5 | pub struct Arr<T, const N: usize> {
| --- field in this struct
6 | v: [MaybeUninit<T>; N],
| ^
|
= note: `#[warn(dead_code)]` on by default
error: internal compiler error: compiler/rustc_codegen_ssa/src/debuginfo/type_names.rs:739:14: Invalid `Const` during codegen: UnevaluatedConst { def: DefId(0:37 ~ mvce[36f5]::zpk2tf_st::{constant#0}), args: [(Mul: (4_usize: usize), (2_usize: usize))] }
thread 'rustc' panicked at compiler/rustc_codegen_ssa/src/debuginfo/type_names.rs:739:14:
Box<dyn Any>
stack backtrace:
0: 0x7ba9a8ffeb2a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::hd80d83efaa4dd6d3
1: 0x7ba9a9803157 - core::fmt::write::h4f2fc0d8a4bdbc07
2: 0x7ba9aaa9d211 - std::io::Write::write_fmt::h5570f2f97ed637ef
3: 0x7ba9a90011fb - std::panicking::default_hook::{{closure}}::hbf64d79aec33952e
4: 0x7ba9a9000e6e - std::panicking::default_hook::h80f8634d4a0460dc
5: 0x7ba9a81618a9 - std[5b205b49935b963f]::panicking::update_hook::<alloc[3fe16f20e15296c8]::boxed::Box<rustc_driver_impl[27278ae619702df]::install_ice_hook::{closure#0}>>::{closure#0}
6: 0x7ba9a9001b17 - std::panicking::rust_panic_with_hook::hac455baedf27b84b
7: 0x7ba9a819bb71 - std[5b205b49935b963f]::panicking::begin_panic::<rustc_errors[b46f225bc36daaa0]::ExplicitBug>::{closure#0}
8: 0x7ba9a818f096 - std[5b205b49935b963f]::sys::backtrace::__rust_end_short_backtrace::<std[5b205b49935b963f]::panicking::begin_panic<rustc_errors[b46f225bc36daaa0]::ExplicitBug>::{closure#0}, !>
9: 0x7ba9a818f046 - std[5b205b49935b963f]::panicking::begin_panic::<rustc_errors[b46f225bc36daaa0]::ExplicitBug>
10: 0x7ba9a81a4df1 - <rustc_errors[b46f225bc36daaa0]::diagnostic::BugAbort as rustc_errors[b46f225bc36daaa0]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
11: 0x7ba9a876d054 - rustc_middle[fdfbfe13c471c215]::util::bug::opt_span_bug_fmt::<rustc_span[682bbc8ac5595dcb]::span_encoding::Span>::{closure#0}
12: 0x7ba9a8752c0a - rustc_middle[fdfbfe13c471c215]::ty::context::tls::with_opt::<rustc_middle[fdfbfe13c471c215]::util::bug::opt_span_bug_fmt<rustc_span[682bbc8ac5595dcb]::span_encoding::Span>::{closure#0}, !>::{closure#0}
13: 0x7ba9a8752abb - rustc_middle[fdfbfe13c471c215]::ty::context::tls::with_context_opt::<rustc_middle[fdfbfe13c471c215]::ty::context::tls::with_opt<rustc_middle[fdfbfe13c471c215]::util::bug::opt_span_bug_fmt<rustc_span[682bbc8ac5595dcb]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
14: 0x7ba9a62e4750 - rustc_middle[fdfbfe13c471c215]::util::bug::bug_fmt
15: 0x7ba9a9fae940 - rustc_codegen_ssa[3363d47dc11c3c03]::debuginfo::type_names::push_generic_params_internal
16: 0x7ba9a9f90c1a - rustc_codegen_llvm[35a5f2b3be35e22b]::debuginfo::metadata::build_struct_type_di_node
17: 0x7ba9a9f9f73d - rustc_codegen_llvm[35a5f2b3be35e22b]::debuginfo::metadata::type_di_node
18: 0x7ba9aa744941 - rustc_codegen_ssa[3363d47dc11c3c03]::mir::codegen_mir::<rustc_codegen_llvm[35a5f2b3be35e22b]::builder::Builder>
19: 0x7ba9aa73fa35 - rustc_codegen_llvm[35a5f2b3be35e22b]::base::compile_codegen_unit::module_codegen
20: 0x7ba9aa7be70d - <rustc_codegen_llvm[35a5f2b3be35e22b]::LlvmCodegenBackend as rustc_codegen_ssa[3363d47dc11c3c03]::traits::backend::ExtraBackendMethods>::compile_codegen_unit
21: 0x7ba9aa7bad84 - <rustc_codegen_llvm[35a5f2b3be35e22b]::LlvmCodegenBackend as rustc_codegen_ssa[3363d47dc11c3c03]::traits::backend::CodegenBackend>::codegen_crate
22: 0x7ba9aa99a070 - <rustc_interface[561bcd00c1a26bfb]::queries::Linker>::codegen_and_build_linker
23: 0x7ba9aa5c9a1b - rustc_interface[561bcd00c1a26bfb]::interface::run_compiler::<core[42bb5247f35782d4]::result::Result<(), rustc_span[682bbc8ac5595dcb]::ErrorGuaranteed>, rustc_driver_impl[27278ae619702df]::run_compiler::{closure#0}>::{closure#1}
24: 0x7ba9aa673dd0 - std[5b205b49935b963f]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[561bcd00c1a26bfb]::util::run_in_thread_with_globals<rustc_interface[561bcd00c1a26bfb]::util::run_in_thread_pool_with_globals<rustc_interface[561bcd00c1a26bfb]::interface::run_compiler<core[42bb5247f35782d4]::result::Result<(), rustc_span[682bbc8ac5595dcb]::ErrorGuaranteed>, rustc_driver_impl[27278ae619702df]::run_compiler::{closure#0}>::{closure#1}, core[42bb5247f35782d4]::result::Result<(), rustc_span[682bbc8ac5595dcb]::ErrorGuaranteed>>::{closure#0}, core[42bb5247f35782d4]::result::Result<(), rustc_span[682bbc8ac5595dcb]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[42bb5247f35782d4]::result::Result<(), rustc_span[682bbc8ac5595dcb]::ErrorGuaranteed>>
25: 0x7ba9aa67443a - <<std[5b205b49935b963f]::thread::Builder>::spawn_unchecked_<rustc_interface[561bcd00c1a26bfb]::util::run_in_thread_with_globals<rustc_interface[561bcd00c1a26bfb]::util::run_in_thread_pool_with_globals<rustc_interface[561bcd00c1a26bfb]::interface::run_compiler<core[42bb5247f35782d4]::result::Result<(), rustc_span[682bbc8ac5595dcb]::ErrorGuaranteed>, rustc_driver_impl[27278ae619702df]::run_compiler::{closure#0}>::{closure#1}, core[42bb5247f35782d4]::result::Result<(), rustc_span[682bbc8ac5595dcb]::ErrorGuaranteed>>::{closure#0}, core[42bb5247f35782d4]::result::Result<(), rustc_span[682bbc8ac5595dcb]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[42bb5247f35782d4]::result::Result<(), rustc_span[682bbc8ac5595dcb]::ErrorGuaranteed>>::{closure#1} as core[42bb5247f35782d4]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
26: 0x7ba9aa6747ab - std::sys::pal::unix::thread::Thread::new::thread_start::h59b1848733925366
27: 0x7ba9abcdf39d - <unknown>
28: 0x7ba9abd6449c - <unknown>
29: 0x0 - <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.83.0-nightly (1a1cc050d 2024-09-01) running on x86_64-unknown-linux-gnu
note: compiler flags: -Z incremental-verify-ich=yes -C incremental=[REDACTED] -C debuginfo=2 -C link-dead-code=true -Z validate-mir
query stack during panic:
end of query stack
error: aborting due to 1 previous error; 2 warnings emitted
```
</p>
</details>
<!--
query stack:
error: internal compiler error: compiler/rustc_codegen_ssa/src/debuginfo/type_names.rs:739:14: Invalid `Const` during codegen: UnevaluatedConst { def: DefId(0:37 ~ mvce[36f5]::zpk2tf_st::{constant#0}), args: [(Mul: (4_usize: usize), (2_usize: usize))] }
note: compiler flags: -Z incremental-verify-ich=yes -C incremental=[REDACTED] -C debuginfo=2 -C link-dead-code=true -Z validate-mir
-->
@rustbot label +F-generic_const_exprs +F-generic_arg_infer | A-debuginfo,I-ICE,T-compiler,C-bug,F-generic_const_exprs,F-generic_arg_infer | low | Critical |
2,499,537,971 | rust | Detect situations where a visitor implementation is skipped by accidentally directly calling the corresponding walk function | For the AST (but also other visitors in the compiler), visitor functions typically come in two flavors:
* `visit_X` - Overrideable part of the trait invocation. If you override it, you should call `walk_X` if you want to recurse or do `visit_Y` calls on the sub-components (for some other `Y`) you want to walk into, skipping the rest.
* `walk_X` - Non-overrideable, free function which actually recurses on the structure by calling `visit_Y` on all of the sub-components (for some other `Y`) of the thing. Sometimes this is called `super_visit_X`.
It is typically a bug to call `walk_Y` on a sub-component within a `visit_X` invocation. For example, see:
https://github.com/rust-lang/rust/pull/129858
Which fixed a bug where we called `walk_expr` on the async closure's body when visiting the async closure in `visit_fn`. This should've been a `visit_expr`, since `visit_expr` had meaningful side-effects (collecting macro invocations).
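The hazard is easy to reproduce in any double-dispatch visitor; a small Python sketch (not the compiler's actual code) of the `visit_X`/`walk_X` split shows how calling `walk` directly on a node silently skips an override's side effects:

```python
class Node:
    def __init__(self, *children):
        self.children = children

def walk_expr(visitor, node):
    """Recurse by dispatching visit_expr on each sub-component."""
    for child in node.children:
        visitor.visit_expr(child)

class Collector:
    def __init__(self):
        self.seen = 0
    def visit_expr(self, node):   # overridden hook with a side effect
        self.seen += 1
        walk_expr(self, node)

tree = Node(Node(), Node(Node()))

good = Collector()
good.visit_expr(tree)   # dispatches through the override: counts all 4 nodes

bad = Collector()
walk_expr(bad, tree)    # bypasses visit_expr on the root: only counts 3
```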
We should have some sort of lint to prevent this pattern outside of regular `visit_X` -> `walk_X` situation, and compiler devs who need to do something special can do `#[allow(whatever_lint_name_for_sketchy_visitor_pattern)]` to justify that what they're doing is okay. We already have some comments that document when this happens (I think, I feel like I've seen them around), but it can be done by accident which causes bugs! | C-cleanup,A-lints,T-compiler | low | Critical |
2,499,542,399 | deno | jsxImportSourceTypes not working for react | Version: Deno 1.46.2
jsxImportSourceTypes was introduced in v1.43.0 by the following PR:
https://github.com/denoland/deno/pull/23419
It doesn't appear to be working as expected: my project isn't getting the React types with this setting.
You can reproduce the issue by cloning this example project and opening it in VS Code. Then open any of the tsx files in the components or routes directory.
https://github.com/udibo/react-app-example
EDIT: You'll need to checkout this commit, https://github.com/udibo/react-app-example/tree/388e2bb7540daf1c88046806808f736a89467e6a because I added a react.d.ts file to workaround this issue for now.
If you try adding useState, autocomplete and import suggestions don't work right for react.
With jsxImportSourceTypes set, it won't give any suggestions from react when you type `use`.

With jsxImportSourceTypes not set, it will suggest importing the types file from my node_modules dir.

Then if I do that, it will get an error and the type obviously won't work.

If I try changing nodeModulesDir to false and repeating the tests above, the only difference is that it doesn't have any react suggestions when I type use. In that case the results are the same whether or not jsxImportSourceTypes is set.

If you change the import to import from react, the errors go away but it never gets the type information.
Hovering useState it just says "import useState":

Hovering x it has the type any:

The only difference I found from having jsxImportSourceTypes set is that it will stop suggesting importing from the node_modules directory if you have one. Without it, I'm not getting the errors from one of the issues it was meant to resolve: https://github.com/denoland/deno/issues/16653 | needs investigation | low | Critical |
2,499,546,031 | ollama | function_name = function_json_output["function"] KeyError: 'function' on MemGPT but Ollama not formating the response that's MemGPT | ### What is the issue?
**Describe the bug**
I am getting a function error when testing the connection to Ollama. From the traceback, it looks like the model's output, as returned through Ollama, isn't in the format MemGPT expects, so MemGPT can't parse it and raises the error below. I have MemGPT configured to use Ollama as the backend, but the responses coming back from Ollama can't be parsed by MemGPT.
I could use some help; I've tried to include as much detail as possible. I tried both the Airoboros and Dolphin Mistral wrappers, but I get the same error.
`~$ memgpt run
[nltk_data] Downloading package punkt_tab to
[nltk_data] /home/vivienne/.local/lib/python3.10/site-
[nltk_data] packages/llama_index/core/_static/nltk_cache...
[nltk_data] Unzipping tokenizers/punkt_tab.zip.
? Would you like to select an existing agent? Yes
? Select agent: TremendousSeashell
🔁 Using existing agent TremendousSeashell
Hit enter to begin (will request first MemGPT message)
An exception occurred when running agent.step():
Traceback (most recent call last):
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/local_llm/llm_chat_completion_wrappers/dolphin.py", line 231, in output_to_chat_completion_response
function_name = function_json_output["function"]
KeyError: 'function'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/local_llm/chat_completion_proxy.py", line 191, in get_chat_completion
chat_completion_result = llm_wrapper.output_to_chat_completion_response(result)
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/local_llm/llm_chat_completion_wrappers/dolphin.py", line 234, in output_to_chat_completion_response
raise LLMJSONParsingError(f"Received valid JSON from LLM, but JSON was missing fields: {str(e)}")
memgpt.errors.LLMJSONParsingError: Received valid JSON from LLM, but JSON was missing fields: 'function'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/main.py", line 459, in run_agent_loop
new_messages, user_message, skip_next_user_input = process_agent_step(user_message, no_verify)
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/main.py", line 427, in process_agent_step
new_messages, heartbeat_request, function_failed, token_warning, tokens_accumulated = memgpt_agent.step(
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/agent.py", line 800, in step
raise e
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/agent.py", line 698, in step
response = self._get_ai_reply(
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/agent.py", line 409, in _get_ai_reply
raise e
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/agent.py", line 378, in _get_ai_reply
response = create(
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/llm_api/llm_api_tools.py", line 223, in wrapper
raise e
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/llm_api/llm_api_tools.py", line 196, in wrapper
return func(*args, **kwargs)
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/llm_api/llm_api_tools.py", line 464, in create
return get_chat_completion(
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/local_llm/chat_completion_proxy.py", line 194, in get_chat_completion
raise LocalLLMError(f"Failed to parse JSON from local LLM response - error: {str(e)}")
memgpt.errors.LocalLLMError: Failed to parse JSON from local LLM response - error: Received valid JSON from LLM, but JSON was missing fields: 'function'
? Retry agent.step()? No
> Enter your message: How are you?
An exception occurred when running agent.step():
Traceback (most recent call last):
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/local_llm/llm_chat_completion_wrappers/dolphin.py", line 231, in output_to_chat_completion_response
function_name = function_json_output["function"]
KeyError: 'function'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/local_llm/chat_completion_proxy.py", line 191, in get_chat_completion
chat_completion_result = llm_wrapper.output_to_chat_completion_response(result)
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/local_llm/llm_chat_completion_wrappers/dolphin.py", line 234, in output_to_chat_completion_response
raise LLMJSONParsingError(f"Received valid JSON from LLM, but JSON was missing fields: {str(e)}")
memgpt.errors.LLMJSONParsingError: Received valid JSON from LLM, but JSON was missing fields: 'function'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/main.py", line 459, in run_agent_loop
new_messages, user_message, skip_next_user_input = process_agent_step(user_message, no_verify)
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/main.py", line 427, in process_agent_step
new_messages, heartbeat_request, function_failed, token_warning, tokens_accumulated = memgpt_agent.step(
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/agent.py", line 800, in step
raise e
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/agent.py", line 698, in step
response = self._get_ai_reply(
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/agent.py", line 409, in _get_ai_reply
raise e
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/agent.py", line 378, in _get_ai_reply
response = create(
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/llm_api/llm_api_tools.py", line 223, in wrapper
raise e
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/llm_api/llm_api_tools.py", line 196, in wrapper
return func(*args, **kwargs)
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/llm_api/llm_api_tools.py", line 464, in create
return get_chat_completion(
File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/local_llm/chat_completion_proxy.py", line 194, in get_chat_completion
raise LocalLLMError(f"Failed to parse JSON from local LLM response - error: {str(e)}")
memgpt.errors.LocalLLMError: Failed to parse JSON from local LLM response - error: Received valid JSON from LLM, but JSON was missing fields: 'function'
?`
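Based on the traceback, the Dolphin wrapper expects the model to emit JSON with a top-level `"function"` key; when the model returns valid JSON without it, the `KeyError` surfaces as `LLMJSONParsingError`. A minimal sketch of that parse path (the field name is taken from the traceback; the rest is assumed, not MemGPT's exact code):

```python
import json

def output_to_chat_completion_response(raw: str) -> str:
    """Parse LLM output; mirrors the failing code path in dolphin.py."""
    obj = json.loads(raw)          # valid JSON gets this far
    try:
        return obj["function"]    # KeyError if the model omitted the field
    except KeyError as e:
        raise ValueError(f"JSON was missing fields: {e}") from None

print(output_to_chat_completion_response('{"function": "send_message"}'))
```

This suggests the model simply isn't following the wrapper's expected schema, which matches the "Received valid JSON from LLM, but JSON was missing fields" message.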
**Please describe your setup**
- [ ] How did you install memgpt?
pip install pymemgpt
- [ ] Describe your setup
- What's your OS (Windows/MacOS/Linux)?
I have Ubuntu 22.04 running with 16 gigs of ram and a Nvidia gtx 1660 super. Currently Ollama and Memgpt is installed. It's not powerful but it can run AI.
- How are you running `memgpt`? (`cmd.exe`/Powershell/Anaconda Shell/Terminal)
I am running from command line
**Screenshots**
I know memgpt can connect to the ollama ai interface with the dolphin mistral model.
**Additional context**
I tried both wrapper for mistral and airboros. Similiar error.
**MemGPT Config**
Please attach your `~/.memgpt/config` file or copy past it below.
[defaults]
preset = memgpt_chat
persona = sam_pov
human = basic
[model]
model = dolphin2.2-mistral:7b-q6_K
model_endpoint = http://localhost:11434
model_endpoint_type = ollama
model_wrapper = airoboros-l2-70b-2.1
context_window = 8192
[embedding]
embedding_endpoint_type = local
embedding_model = BAAI/bge-small-en-v1.5
embedding_dim = 384
embedding_chunk_size = 300
[archival_storage]
type = chroma
path = /home/vivienne/.memgpt/chroma
[recall_storage]
type = sqlite
path = /home/vivienne/.memgpt
[metadata_storage]
type = sqlite
path = /home/vivienne/.memgpt
[version]
memgpt_version = 0.3.25
[client]
anon_clientid = 00000000-0000-0000-0000-000000000000
---
If you're not using OpenAI, please provide additional information on your local LLM setup:
**Local LLM details**
If you are trying to run MemGPT with local LLMs, please provide the following information:
- [ ] The exact model you're trying to use (e.g. `dolphin-2.1-mistral-7b.Q6_K.gguf`)
- [ ] The local LLM backend you are using (web UI? LM Studio?)
I am using Ollama. I have not figured out how to add a web UI that lets me access MemGPT and Ollama chat at the same time.
- [ ] Your hardware for the local LLM backend (local computer? operating system? remote RunPod?)
system Computer
/0 bus Motherboard
/0/0 memory 16GiB System memory
/0/1 processor Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
/0/100 bridge 2nd Generation Core Processor Family DRAM Controller
/0/100/1 bridge Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port
/0/100/1/0 display TU116 [GeForce GTX 1660 SUPER]
/0/100/1/0.1 card1 multimedia TU116 High Definition Audio Controller
### OS
Linux
### GPU
Intel
### CPU
Intel
### Ollama version
0.3.9 | bug | low | Critical |
2,499,550,432 | kubernetes | Add a tracer before calling the device plugin | ### What would you like to be added?
1. Add a trace record before calling the device plugin.
2. Pass trace info (e.g. pod info) to the device-plugin server via gRPC metadata headers.
### Why is this needed?
Without modifying the API directly, I think the best approach is to pass this information to the device plugin through the trace context; there are many similar requirements:
- https://github.com/kubernetes/kubernetes/issues/59109
- https://github.com/kubernetes/kubernetes/issues/107244
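A sketch of point 2 — note the header names, pod fields, and call sites below are my assumptions for illustration, not an existing kubelet API — passing pod info as gRPC metadata so the device-plugin server can read it without any `.proto` change:

```python
# Hypothetical sketch: attach pod info to a device-plugin call as gRPC metadata.
# Header names and the pod dict shape are illustrative only.

def trace_metadata(pod: dict) -> list[tuple[str, str]]:
    """Build gRPC metadata key/value pairs carrying trace context."""
    return [
        ("x-trace-pod-name", pod["name"]),
        ("x-trace-pod-namespace", pod["namespace"]),
        ("x-trace-pod-uid", pod["uid"]),
    ]

md = trace_metadata({"name": "gpu-job", "namespace": "default", "uid": "123"})
# On the client:  stub.Allocate(request, metadata=md)
# On the server:  dict(context.invocation_metadata())["x-trace-pod-name"]
```

Because metadata rides alongside the request, existing device plugins that ignore it keep working unchanged.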
| sig/node,kind/feature,lifecycle/rotten,needs-triage | medium | Major |
2,499,553,759 | PowerToys | Button cannot be released automatically | ### Microsoft PowerToys version
0.83.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
<img width="491" alt="keymanagersettings" src="https://github.com/user-attachments/assets/031d642b-8d95-43cc-92fa-a9a57831f9c8">
[PowerToysReport_2024-09-01-23-26-19.zip](https://github.com/user-attachments/files/16829503/PowerToysReport_2024-09-01-23-26-19.zip)
I remapped the shortcut `Ctrl` + `'` to `Alt(left)` + `Tab`. When I press `Ctrl (right)` + `'`, it switches to another program as expected, but the `Ctrl(Right)` doesn’t release. If you are in MS Word or Edge and roll the mouse wheel, the page zooms instead of scrolling. I must press `Ctrl (right)` (instead of `Ctrl (left)`) again to release it manually. However, when I press `Ctrl (left)` + `'`, the left Ctrl button releases automatically. I tried another keyboard and confirmed it’s not a hardware issue.
### ✔️ Expected Behavior
`ctrl(right)` will release automatically
### ❌ Actual Behavior
I have to release `ctrl(right)` manually
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,499,561,800 | pytorch | register_after_fork fails on python built without multiprocessing support | ### 🐛 Describe the bug
An embeddable python without multiprocessing support (built without fork/exec symbols) cannot import torch:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File ".../torch/__init__.py", line 1862, in <module>
register_after_fork(torch.get_num_threads)
File ".../torch/multiprocessing/_atfork.py", line 35, in register_after_fork
_register(func)
File ".../torch/multiprocessing/_atfork.py", line 20, in _register
os.register_at_fork(after_in_child=func)
^^^^^^^^^^^^^^^^^^^
AttributeError: module 'os' has no attribute 'register_at_fork'
```
But it works well after a simple patch:
```diff
diff --git a/torch/multiprocessing/_atfork.py b/torch/multiprocessing/_atfork.py
--- torch/multiprocessing/_atfork.py
+++ torch/multiprocessing/_atfork.py
@@ -16,9 +16,10 @@
else:
import os
def _register(func):
- os.register_at_fork(after_in_child=func)
+ if hasattr(os, "register_at_fork"):
+ os.register_at_fork(after_in_child=func)
def register_after_fork(func):
"""Register a callable to be executed in the child process after a fork.
```
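The same guard, as a standalone sketch: registration degrades to a silent no-op when `os.register_at_fork` is absent (as on an embeddable build without fork support), instead of raising `AttributeError` at import time.

```python
import os

def register_after_fork(func):
    """Run func in the child after a fork, if the platform supports it."""
    if hasattr(os, "register_at_fork"):
        os.register_at_fork(after_in_child=func)
    # On builds without fork symbols this is a silent no-op, so importing
    # a library that calls it no longer fails.

register_after_fork(lambda: None)   # safe on both kinds of build
print("import-time registration succeeded")
```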
### Versions
```
Collecting environment information...
...
OSError: [Errno 45] xxx does not support processes.
```
cc @VitalyFedyunin @albanD | module: multiprocessing,triaged,module: python frontend | low | Critical |
2,499,562,472 | next.js | Code imported inside instrumentation.ts can't mutate imported modules | ### Link to the code that reproduces this issue
https://github.com/shkreios/next-js-instrumentation-bug-reproduction
### To Reproduce
1. Open the [codesandbox](https://codesandbox.io/p/github/shkreios/next-js-instrumentation-bug-reproduction/main?import=true&layout=%257B%2522sidebarPanel%2522%253A%2522EXPLORER%2522%252C%2522rootPanelGroup%2522%253A%257B%2522direction%2522%253A%2522horizontal%2522%252C%2522contentType%2522%253A%2522UNKNOWN%2522%252C%2522type%2522%253A%2522PANEL_GROUP%2522%252C%2522id%2522%253A%2522ROOT_LAYOUT%2522%252C%2522panels%2522%253A%255B%257B%2522type%2522%253A%2522PANEL_GROUP%2522%252C%2522contentType%2522%253A%2522UNKNOWN%2522%252C%2522direction%2522%253A%2522vertical%2522%252C%2522id%2522%253A%2522cm0jrhq8100053b6ikn0p21gt%2522%252C%2522sizes%2522%253A%255B100%255D%252C%2522panels%2522%253A%255B%257B%2522type%2522%253A%2522PANEL_GROUP%2522%252C%2522contentType%2522%253A%2522EDITOR%2522%252C%2522direction%2522%253A%2522horizontal%2522%252C%2522id%2522%253A%2522EDITOR%2522%252C%2522panels%2522%253A%255B%257B%2522type%2522%253A%2522PANEL%2522%252C%2522contentType%2522%253A%2522EDITOR%2522%252C%2522id%2522%253A%2522cm0jrhq8100023b6ig8egxvdf%2522%257D%255D%257D%252C%257B%2522type%2522%253A%2522PANEL_GROUP%2522%252C%2522contentType%2522%253A%2522SHELLS%2522%252C%2522direction%2522%253A%2522horizontal%2522%252C%2522id%2522%253A%2522SHELLS%2522%252C%2522panels%2522%253A%255B%257B%2522type%2522%253A%2522PANEL%2522%252C%2522contentType%2522%253A%2522SHELLS%2522%252C%2522id%2522%253A%2522cm0jrhq8100033b6iqqmal62q%2522%257D%255D%257D%255D%257D%252C%257B%2522type%2522%253A%2522PANEL_GROUP%2522%252C%2522contentType%2522%253A%2522DEVTOOLS%2522%252C%2522direction%2522%253A%2522vertical%2522%252C%2522id%2522%253A%2522DEVTOOLS%2522%252C%2522panels%2522%253A%255B%257B%2522type%2522%253A%2522PANEL%2522%252C%2522contentType%2522%253A%2522DEVTOOLS%2522%252C%2522id%2522%253A%2522cm0jrhq8100043b6ixl2b7wdp%2522%257D%255D%257D%255D%252C%2522sizes%2522%253A%255B50%252C50%255D%257D%252C%2522tabbedPanels%2522%253A%257B%2522cm0jrhq8100023b6ig8egxvdf%2522%253A%257B%2522tabs%2522%253A%255B%257B%2522id%2
522%253A%2522cm0jrhq8100013b6ixhcnjd6m%2522%252C%2522mode%2522%253A%2522permanent%2522%252C%2522type%2522%253A%2522FILE%2522%252C%2522filepath%2522%253A%2522%252FREADME.md%2522%252C%2522state%2522%253A%2522IDLE%2522%257D%255D%252C%2522id%2522%253A%2522cm0jrhq8100023b6ig8egxvdf%2522%252C%2522activeTabId%2522%253A%2522cm0jrhq8100013b6ixhcnjd6m%2522%257D%252C%2522cm0jrhq8100043b6ixl2b7wdp%2522%253A%257B%2522id%2522%253A%2522cm0jrhq8100043b6ixl2b7wdp%2522%252C%2522activeTabId%2522%253A%2522cm0jrk5a9000g3b6igbd81nr3%2522%252C%2522tabs%2522%253A%255B%257B%2522type%2522%253A%2522SETUP_TASKS%2522%252C%2522id%2522%253A%2522cm0jrhrjn000p3b6i28okzqzk%2522%252C%2522mode%2522%253A%2522permanent%2522%257D%252C%257B%2522type%2522%253A%2522ENV_SETUP%2522%252C%2522id%2522%253A%2522cm0jrk5a9000g3b6igbd81nr3%2522%252C%2522mode%2522%253A%2522permanent%2522%257D%255D%257D%252C%2522cm0jrhq8100033b6iqqmal62q%2522%253A%257B%2522id%2522%253A%2522cm0jrhq8100033b6iqqmal62q%2522%252C%2522activeTabId%2522%253A%2522cm0jri4hx00153b6ivspu4uap%2522%252C%2522tabs%2522%253A%255B%257B%2522type%2522%253A%2522TASK_LOG%2522%252C%2522taskId%2522%253A%2522dev%2522%252C%2522id%2522%253A%2522cm0jri4hx00153b6ivspu4uap%2522%252C%2522mode%2522%253A%2522permanent%2522%257D%255D%257D%257D%252C%2522showDevtools%2522%253Atrue%252C%2522showShells%2522%253Atrue%252C%2522showSidebar%2522%253Afalse%252C%2522sidebarPanelSize%2522%253A15%257D)
2. Start the next dev server
3. Open the app on http://localhost:3000
4. Click the request button
5. Inspect the logs and see that mutating the store object inside `instrumentation.ts` has no effect on the imported store object inside `api/update-store`
### Current vs. Expected behavior
I would expect the code inside `instrumentation.ts` to be able to mutate the store object.
Console output
```log
// log from inside instrumentation.ts successfully setting the onUpdate prop on the store object
Store listener is ready { update: [Function: update], onUpdate: [Function (anonymous)] }
// log from inside api/update-store/route.ts showing a different object which does not have the change
{ update: [Function: update] }
```
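For illustration, the expected module-singleton semantics can be sketched outside Next.js. This is a hedged Python analogy, not the project's actual code; the names below are hypothetical stand-ins for the exported store, `instrumentation.ts`, and the route handler:

```python
# Hedged Python analogy of the expected Node module-singleton behavior;
# these names stand in for store / instrumentation.ts / api/update-store
# and are NOT the actual project code.

store = {"update": lambda: None}  # plays the role of the exported store object

def register_listener():
    # like instrumentation.ts: mutate the shared store once at startup
    store["onUpdate"] = lambda: "listener fired"

def route_handler():
    # like api/update-store/route.ts: read the same imported object
    return sorted(store.keys())

register_listener()
print(route_handler())  # expected: ['onUpdate', 'update']
```

The bug report above is that the real runtime behaves as if the route imported a second, separate copy of the store, so the mutation made during instrumentation is not visible.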
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6000
Available memory (MB): 32768
Available CPU cores: 8
Binaries:
Node: 20.17.0
npm: 10.8.2
Yarn: 1.22.19
pnpm: 9.9.0
Relevant Packages:
next: 15.0.0-canary.137 // Latest available version is detected (15.0.0-canary.137).
eslint-config-next: N/A
react: 19.0.0-rc-7771d3a7-20240827
react-dom: 19.0.0-rc-7771d3a7-20240827
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Instrumentation, Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local), Vercel (Deployed), Other (Deployed)
### Additional context
My main use case for this approach, as demonstrated in the example, is to attach listeners to the data layer once. A more complex scenario would involve attempting to share an `EventEmitter` object. However, it would be less obvious that the objects within instrumentation.ts and the API route are not identical. | bug,Runtime,Instrumentation | low | Critical |
2,499,585,779 | opencv | Unable to build openCV on Windows | ### System Information
Windows version:
```
Edition Windows 11 Home
Version 23H2
Installed on 2/17/2024
OS build 22635.4076
Experience Windows Feature Experience Pack 1000.22700.1036.0
```
VS version:
```
17.12.0 Preview 1.0
```
Clang version:
```
clang version 20.0.0git (https://github.com/llvm/llvm-project.git 4db37a49a72bb9cff7a78e77439008c058383099)
Target: x86_64-pc-windows-msvc
Thread model: posix
InstalledDir: C:\Users\rysza\bin\LLVM\bin
```
Intel OneAPI base toolkit version:
```
2024.2
```
Python version:
```
Python 3.12.5
```
CMake configuration reported after configuring:
```
--
-- General configuration for OpenCV 4.10.0-dev =====================================
-- Version control: 4d66541
--
-- Platform:
-- Timestamp: 2024-09-01T16:56:19Z
-- Host: Windows 10.0.22635 AMD64
-- CMake: 3.29.5-msvc4
-- CMake generator: Ninja
-- CMake build tool: C:/ProgramData/chocolatey/bin/ninja.exe
-- MSVC: 1942
-- Configuration: Release
-- Algorithm Hint: ALGO_HINT_ACCURATE
--
-- CPU/HW features:
-- Baseline: SSE SSE2 SSE3 SSSE3 SSE4_1 POPCNT SSE4_2 AVX FP16 AVX2 FMA3
-- requested: AVX2
-- Dispatched code generation: AVX512_SKX
-- requested: SSE4_1 SSE4_2 AVX FP16 AVX2 AVX512_SKX
-- AVX512_SKX (8 files): + AVX_512F AVX512_COMMON AVX512_SKX
--
-- C/C++:
-- Built as dynamic libs?: YES
-- C++ standard: 11
-- C++ Compiler: C:/Users/rysza/bin/LLVM/bin/clang-cl.exe (ver 20.0.0)
-- C++ flags (Release): /DWIN32 /D_WINDOWS /W4 /GR /D _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _SCL_SECURE_NO_WARNINGS /Gy /bigobj /Oi /fp:precise -W -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winconsistent-missing-override -Wno-delete-non-virtual-dtor -Wno-unnamed-type-template-args -Wno-comment -Wno-deprecated-enum-enum-conversion -Wno-deprecated-anon-enum-enum-conversion -Wno-long-long -Qunused-arguments /FS -msse3 -mssse3 -msse4.1 -mpopcnt -msse4.2 -mavx -mf16c -mavx2 -mfma /EHa /wd4127 /wd4251 /wd4324 /wd4275 /wd4512 /wd4589 /wd4819 /O2 /Ob2 /DNDEBUG -DNDEBUG
-- C++ flags (Debug): /DWIN32 /D_WINDOWS /W4 /GR /D _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _SCL_SECURE_NO_WARNINGS /Gy /bigobj /Oi /fp:precise -W -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winconsistent-missing-override -Wno-delete-non-virtual-dtor -Wno-unnamed-type-template-args -Wno-comment -Wno-deprecated-enum-enum-conversion -Wno-deprecated-anon-enum-enum-conversion -Wno-long-long -Qunused-arguments /FS -msse3 -mssse3 -msse4.1 -mpopcnt -msse4.2 -mavx -mf16c -mavx2 -mfma /EHa /wd4127 /wd4251 /wd4324 /wd4275 /wd4512 /wd4589 /wd4819 /Zi /Ob0 /Od /RTC1 -O0 -DDEBUG -D_DEBUG
-- C Compiler: C:/Users/rysza/bin/LLVM/bin/clang-cl.exe
-- C flags (Release): /DWIN32 /D_WINDOWS /W3 /D _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _SCL_SECURE_NO_WARNINGS /Gy /bigobj /Oi /fp:precise -W -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winconsistent-missing-override -Wno-delete-non-virtual-dtor -Wno-unnamed-type-template-args -Wno-comment -Wno-deprecated-enum-enum-conversion -Wno-deprecated-anon-enum-enum-conversion -Wno-long-long -Qunused-arguments /FS -msse3 -mssse3 -msse4.1 -mpopcnt -msse4.2 -mavx -mf16c -mavx2 -mfma /O2 /Ob2 /DNDEBUG -DNDEBUG
-- C flags (Debug): /DWIN32 /D_WINDOWS /W3 /D _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _SCL_SECURE_NO_WARNINGS /Gy /bigobj /Oi /fp:precise -W -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winconsistent-missing-override -Wno-delete-non-virtual-dtor -Wno-unnamed-type-template-args -Wno-comment -Wno-deprecated-enum-enum-conversion -Wno-deprecated-anon-enum-enum-conversion -Wno-long-long -Qunused-arguments /FS -msse3 -mssse3 -msse4.1 -mpopcnt -msse4.2 -mavx -mf16c -mavx2 -mfma /Zi /Ob0 /Od /RTC1 -O0 -DDEBUG -D_DEBUG
-- Linker flags (Release): /machine:x64 /INCREMENTAL:NO
-- Linker flags (Debug): /machine:x64 /debug /INCREMENTAL
-- ccache: YES
-- Precompiled headers: NO
-- Extra dependencies:
-- 3rdparty dependencies:
--
-- OpenCV modules:
-- To be built: calib3d core dnn features2d flann gapi highgui imgcodecs imgproc java ml objdetect photo python3 stitching ts video videoio
-- Disabled: world
-- Disabled by dependency: -
-- Unavailable: python2
-- Applications: tests perf_tests examples apps
-- Documentation: NO
-- Non-free algorithms: NO
--
-- Windows RT support: NO
--
-- GUI: WIN32UI
-- Win32 UI: YES
-- VTK support: NO
--
-- Media I/O:
-- ZLib: build (ver 1.3.1)
-- JPEG: build-libjpeg-turbo (ver 3.0.3-70)
-- SIMD Support Request: YES
-- SIMD Support: YES
-- WEBP: build (ver encoder: 0x020f)
-- PNG: build (ver 1.6.43)
-- SIMD Support Request: YES
-- SIMD Support: YES (Intel SSE)
-- TIFF: build (ver 42 - 4.6.0)
-- JPEG 2000: build (ver 2.5.0)
-- OpenEXR: build (ver 2.3.0)
-- HDR: YES
-- SUNRASTER: YES
-- PXM: YES
-- PFM: YES
--
-- Video I/O:
-- FFMPEG: YES (prebuilt binaries)
-- avcodec: YES (58.134.100)
-- avformat: YES (58.76.100)
-- avutil: YES (56.70.100)
-- swscale: YES (5.9.100)
-- avresample: YES (4.0.0)
-- GStreamer: NO
-- DirectShow: YES
-- Media Foundation: YES
-- DXVA: YES
--
-- Parallel framework: TBB (ver 2021.13 interface 12130)
--
-- Trace: YES (with Intel ITT)
--
-- Other third-party libraries:
-- Intel IPP: 2021.12.0 [2021.12.0]
-- at: C:/Users/rysza/lib/opencv/build/3rdparty/ippicv/ippicv_win/icv
-- Intel IPP IW: sources (2021.12.0)
-- at: C:/Users/rysza/lib/opencv/build/3rdparty/ippicv/ippicv_win/iw
-- Lapack: YES (C:/Program Files (x86)/Intel/oneAPI/mkl/latest/lib/mkl_intel_lp64.lib C:/Program Files (x86)/Intel/oneAPI/mkl/latest/lib/mkl_sequential.lib C:/Program Files (x86)/Intel/oneAPI/mkl/latest/lib/mkl_core.lib)
-- Eigen: NO
-- Custom HAL: NO
-- Protobuf: build (3.19.1)
-- Flatbuffers: builtin/3rdparty (23.5.9)
--
-- OpenCL: YES (NVD3D11)
-- Include path: C:/Users/rysza/lib/opencv/3rdparty/include/opencl/1.2
-- Link libraries: Dynamic load
--
-- Python 3:
-- Interpreter: C:/Python312/python.exe (ver 3.12.5)
-- Libraries: C:/Python312/libs/python312.lib (ver 3.12.5)
-- Limited API: NO
-- numpy: C:/Python312/Lib/site-packages/numpy/core/include (ver 1.26.4)
-- install path: C:/Python312/Lib/site-packages/cv2/python-3.12
--
-- Python (for build): C:/Python312/python.exe
--
-- Java:
-- ant: NO
-- Java: YES (ver 21.0.4)
-- JNI: C:/Program Files/Eclipse Adoptium/jdk-21.0.4.7-hotspot/include C:/Program Files/Eclipse Adoptium/jdk-21.0.4.7-hotspot/include/win32 C:/Program Files/Eclipse Adoptium/jdk-21.0.4.7-hotspot/include
-- Java wrappers: YES (JAVA)
-- Java tests: NO
--
-- Install to: C:/Users/rysza/local
-- -----------------------------------------------------------------
--
-- Configuring done (152.7s)
-- Generating done (1.8s)
-- Build files have been written to: C:/Users/rysza/lib/opencv/build
```
### Detailed description
When building with CMake + Ninja and clang-cl, at some point the build freezes with broken stdout. If I terminate with Ctrl-C and start again with CMake, it gets stuck linking opencv_core4100.dll. This is not because linking is slow: it stays stuck for multiple hours, with high CPU usage but basically no memory usage, maybe 400 MB. I tried with both ENABLE_LTO and ENABLE_THIN_LTO. Here is a screenshot of it.

### Steps to reproduce
Building with this cmake command:
```
cmake -Bbuild -GNinja -DCMAKE_BUILD_TYPE=Release -DCMAKE_LINKER=lld-link
-DCMAKE_C_COMPILER=clang-cl -DCMAKE_CXX_COMPILER=clang-cl -DCMAKE_INSTALL_PREFIX=C:/Users/rysza/local
-DCPU_BASELINE:STRING=AVX2 -DOPENCV_IP_ENABLE_ALL=ON -DWITH_TBB=ON
-DCMAKE_C_COMPILER_LAUNCHER=sccache -DCMAKE_CXX_COMPILER_LAUNCHER=sccache -DBUILD_EXAMPLES=ON
```
and
```
cmake --build build
```
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,feature,category: build/install | low | Critical |
2,499,598,184 | godot | Saving to .md files in the editor removes trailing spaces (changing markdown syntax unintentionally) | ### Tested versions
- Reproducible in 4.3 editor
- Other versions not tested
### System information
Windows 10 - Godot 4.3 - Forward+
### Issue description
The Godot editor recognizes .md as a file format. I can open such files as text files and create a new file of that type; it's part of the list of supported text formats. I edited my readme.md directly in the Godot editor and saved it. I then noticed that it removed trailing spaces, which are used to denote line breaks in Markdown. Godot shouldn't sanitize these when saving a .md file, as that effectively changes the syntax.
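The syntactic role of those trailing spaces can be shown with a small Python sketch (illustrative only; this is not Godot code):

```python
# In Markdown, two trailing spaces before a newline are a hard line break.
original = "First line  \nSecond line\n"

# What a trailing-whitespace "cleanup" on save effectively does:
cleaned = "\n".join(line.rstrip() for line in original.split("\n"))

print(repr(cleaned))  # the "  " hard-break marker is gone
```

After the cleanup the two lines render as a single paragraph in most Markdown renderers, which is the behavior change reported here.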
### Steps to reproduce
- Make an empty project
- Create a new text file with the .md file ending
- Write lines with double spaces at the end for line breaks
- Save, the double spaces get "cleaned off"
### Minimal reproduction project (MRP)
[markdown-example.zip](https://github.com/user-attachments/files/16829937/markdown-example.zip)
| bug,topic:editor | low | Minor |
2,499,628,463 | tauri | [bug] Can't open link with right click menu on macOS | ### Describe the bug
When right clicking on a link displayed in the webview, and choosing open link in new window it doesn't open.
I don't see any error in the console / terminal.
<img src="https://github.com/user-attachments/assets/09c4591b-4c56-4e56-b456-f6ac631d4a77" width=100>
### Reproduction
Create an `<a/>` element in the webview, right-click it, then choose "Open Link in New Window".
### Expected behavior
_No response_
### Full `tauri info` output
```text
bunx tauri info
[✔] Environment
- OS: Mac OS 14.5.0 X64
✔ Xcode Command Line Tools: installed
✔ rustc: 1.79.0 (129f3b996 2024-06-10)
✔ cargo: 1.79.0 (ffa9cf99a 2024-06-03)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 20.15.1
- npm: 10.7.0
- bun: 1.1.18
[-] Packages
- tauri [RUST]: git+https://github.com/thewh1teagle/tauri?branch=2.0.0.beta.17.patched#0088bad8530a07e2427d30790f9b204f73aadcab (2.0.0-beta.17)
- tauri-build [RUST]: no manifest (2.0.0-beta.13, 2.0.0-beta.13)
- wry [RUST]: 0.39.3
- tao [RUST]: 0.27.0
- @tauri-apps/api : not installed!
- @tauri-apps/cli [NPM]: 2.0.0-beta.22
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../src
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,499,639,711 | PowerToys | Can't disable Mouse Utilities, Refusing to Save | ### Microsoft PowerToys version
0.83.0
### Installation method
Other (please specify in "Steps to Reproduce")
### Running as admin
Yes
### Area(s) with issue?
Mouse Utilities
### Steps to reproduce
Downloaded from Microsoft's own domain.
This is all that's required to break the software.
### ✔️ Expected Behavior
When 'Mouse Utilities' Module 'Find My Mouse' is DISABLED, it should be DISABLED.
### ❌ Actual Behavior
When 'Mouse Utilities' Module 'Find My Mouse' is DISABLED, it is ENABLED ALWAYS.
**Video**
https://www.youtube.com/watch?v=PtSNWLCyhcQ
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,499,639,756 | rust | async code fails to compile with `-Znext-solver` | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
```sh
cd $(mktemp -d)
git clone https://github.com/wvwwvwwv/scalable-concurrent-containers.git
cd scalable-concurrent-containers/
git checkout c014a2c7ce98a13237b842bd03b80206ff8bb66e
```
compiling with normal rustc succeeds:
```
cargo +nightly check --locked
Checking sdd v3.0.2
Checking scc v2.1.16 (/tmp/tmp.RjDtqDRisT/scalable-concurrent-containers)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.72s
```
using next-solver fails
```
RUSTFLAGS='-Znext-solver' cargo +nightly check
Checking sdd v3.0.2
Checking scc v2.1.16 (/tmp/tmp.RjDtqDRisT/scalable-concurrent-containers)
error: concrete type differs from previous defining opaque type use
--> src/hash_map.rs:1439:5
|
1439 | async fn cleanse_old_array_async(&self, current_array: &BucketArray<K, V, (), SEQUENTIAL>) {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `{async fn body of hash_map::HashMap<K, V, H>::cleanse_old_array_async()}`, got `{async fn body of hash_map::HashMap<K, V, H>::cleanse_old_array_async()}`
|
= note: previous use here
error: concrete type differs from previous defining opaque type use
--> src/hash_index.rs:1090:5
|
1090 | async fn cleanse_old_array_async(&self, current_array: &BucketArray<K, V, (), OPTIMISTIC>) {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `{async fn body of HashIndex<K, V, H>::cleanse_old_array_async()}`, got `{async fn body of HashIndex<K, V, H>::cleanse_old_array_async()}`
|
= note: previous use here
error: concrete type differs from previous defining opaque type use
--> src/hash_cache.rs:1064:5
|
1064 | / async fn cleanse_old_array_async(
1065 | | &self,
1066 | | current_array: &BucketArray<K, V, DoublyLinkedList, CACHE>,
1067 | | ) {
| |_____^ expected `{async fn body of HashCache<K, V, H>::cleanse_old_array_async()}`, got `{async fn body of HashCache<K, V, H>::cleanse_old_array_async()}`
|
= note: previous use here
error: could not compile `scc` (lib) due to 3 previous errors
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
cargo 1.82.0-nightly (8f40fc59f 2024-08-21)
release: 1.82.0-nightly
commit-hash: 8f40fc59fb0c8df91c97405785197f3c630304ea
commit-date: 2024-08-21
host: x86_64-unknown-linux-gnu
libgit2: 1.8.1 (sys:0.19.0 vendored)
libcurl: 8.9.0-DEV (sys:0.4.74+curl-8.9.0 vendored ssl:OpenSSL/1.1.1w)
ssl: OpenSSL 1.1.1w 11 Sep 2023
os: Fedora 40.0.0 [64-bit]
```
I'm not even sure if this is an issue, but here's a report anyway :)
| T-compiler,C-bug,A-async-await,AsyncAwait-Triaged,S-has-mcve,T-types,WG-trait-system-refactor | low | Critical |
2,499,662,961 | neovim | lua-cjson crash after opening large log file with LSP and/or Copilot | ### Problem
This is very similar to #29037 and might be the same.
I was opening a 587MB restic log file and after a few seconds neovim would crash with one of these:
1. `realloc(): invalid next size`
2. `malloc(): invalid next size (unsorted)`
3. `corrupted size vs. prev_size`
there might be more.
Disabling the `copilot.lua` plugin does seem to circumvent the issue.
Unfortunately it seems this depends a lot on my full setup and the stacktraces seem to have gaps in them.
A `find / -xdev -ls > large_file` + `nvim large_file` seems to also trigger the issue; maybe it will on other machines too.
Maybe this is just a luajit issue?
What can I provide next, or do, to help?
Details:
neovim from Arch (extra) with astronvim and a bunch of plugins.
```
NVIM v0.10.1
Build type: Release
LuaJIT 2.1.1723675123
```
Backtraces:
1.`realloc(): invalid next size`
```
#0 __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0) at pthread_kill.c:44
#1 0x00007633cdcae463 in __pthread_kill_internal (threadid=<optimized out>, signo=6) at pthread_kill.c:78
#2 0x00007633cdc55120 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#3 0x00007633cdc3c4c3 in __GI_abort () at abort.c:79
#4 0x00007633cdc3d354 in __libc_message_impl (fmt=fmt@entry=0x7633cddcb2f5 "%s\n") at ../sysdeps/posix/libc_fatal.c:132
#5 0x00007633cdcb8765 in malloc_printerr (str=str@entry=0x7633cddc9169 "realloc(): invalid next size") at malloc.c:5772
#6 0x00007633cdcbca04 in _int_realloc (av=av@entry=0x7633cddffac0 <main_arena>, oldp=oldp@entry=0x5a2f33193650, oldsize=oldsize@entry=8192, nb=nb@entry=16384) at malloc.c:4939
#7 0x00007633cdcbda15 in __GI___libc_realloc (oldmem=0x5a2f33193660, bytes=16368) at malloc.c:3508
#8 0x00005a2ede3f3b9c in ?? ()
#9 0x00005a2ede3f4470 in ?? ()
#10 0x00005a2ede3f4470 in ?? ()
#11 0x00005a2ede3f4470 in ?? ()
#12 0x00005a2ede3f4b48 in ?? ()
#13 0x00007633cdf60f06 in lj_BC_FUNCC () at buildvm_x86.dasc:857
#14 0x00007633cdf7512a in lua_pcall (L=0x7633ce096380, nargs=<optimized out>, nresults=0, errfunc=<optimized out>) at /usr/src/debug/luajit/LuaJIT-ae4735f621d89d84758769b76432d2319dda9827/src/lj_api.c:1122
#15 0x00005a2ede1e8e08 in ?? ()
#16 0x00005a2ede35af18 in state_handle_k_event ()
#17 0x00005a2ede270ae1 in ?? ()
#18 0x00005a2ede260b67 in ?? ()
#19 0x00005a2ede35acc3 in state_enter ()
#20 0x00005a2ede25dd2a in normal_enter ()
#21 0x00005a2eddff7b27 in main ()
```
2. `malloc(): invalid next size (unsorted)`
```
#0 __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0) at pthread_kill.c:44
#1 0x00007131068c9463 in __pthread_kill_internal (threadid=<optimized out>, signo=6) at pthread_kill.c:78
#2 0x0000713106870120 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#3 0x00007131068574c3 in __GI_abort () at abort.c:79
#4 0x0000713106858354 in __libc_message_impl (fmt=fmt@entry=0x7131069e62f5 "%s\n") at ../sysdeps/posix/libc_fatal.c:132
#5 0x00007131068d3765 in malloc_printerr (str=str@entry=0x7131069e9908 "malloc(): invalid next size (unsorted)") at malloc.c:5772
#6 0x00007131068d6d2c in _int_malloc (av=av@entry=0x713106a1aac0 <main_arena>, bytes=bytes@entry=16369) at malloc.c:4081
#7 0x00007131068d7902 in _int_realloc (av=av@entry=0x713106a1aac0 <main_arena>, oldp=oldp@entry=0x5b6b7a2a8d80, oldsize=oldsize@entry=8192, nb=nb@entry=16384) at malloc.c:4975
#8 0x00007131068d8a15 in __GI___libc_realloc (oldmem=0x5b6b7a2a8d90, bytes=16368) at malloc.c:3508
#9 0x00005b6b3a2b5b9c in ?? ()
#10 0x00005b6b3a2b6470 in ?? ()
#11 0x00005b6b3a2b6470 in ?? ()
#12 0x00005b6b3a2b6470 in ?? ()
#13 0x00005b6b3a2b6b48 in ?? ()
#14 0x0000713106b7bf06 in lj_BC_FUNCC () at buildvm_x86.dasc:857
#15 0x0000713106b9012a in lua_pcall (L=0x713106cb1380, nargs=<optimized out>, nresults=0, errfunc=<optimized out>)
at /usr/src/debug/luajit/LuaJIT-ae4735f621d89d84758769b76432d2319dda9827/src/lj_api.c:1122
#16 0x00005b6b3a0aae08 in ?? ()
#17 0x00005b6b3a21cf18 in state_handle_k_event ()
#18 0x00005b6b3a132ae1 in ?? ()
#19 0x00005b6b3a122b67 in ?? ()
#20 0x00005b6b3a21ccc3 in state_enter ()
#21 0x00005b6b3a11fd2a in normal_enter ()
#22 0x00005b6b39eb9b27 in main ()
```
3.`corrupted size vs. prev_size`
```
#0 __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0) at pthread_kill.c:44
#1 0x00007d2f37278463 in __pthread_kill_internal (threadid=<optimized out>, signo=6) at pthread_kill.c:78
#2 0x00007d2f3721f120 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#3 0x00007d2f372064c3 in __GI_abort () at abort.c:79
#4 0x00007d2f37207354 in __libc_message_impl (fmt=fmt@entry=0x7d2f373952f5 "%s\n") at ../sysdeps/posix/libc_fatal.c:132
#5 0x00007d2f37282765 in malloc_printerr (str=str@entry=0x7d2f37392ff8 "corrupted size vs. prev_size") at malloc.c:5772
#6 0x00007d2f372833a6 in unlink_chunk (p=p@entry=0x5fcc0a155fd0, av=0x7d2f373c9ac0 <main_arena>) at malloc.c:1611
#7 0x00007d2f372869b8 in _int_realloc (av=av@entry=0x7d2f373c9ac0 <main_arena>, oldp=oldp@entry=0x5fcc0a153fd0, oldsize=oldsize@entry=8192, nb=nb@entry=16384) at malloc.c:4969
#8 0x00007d2f37287a15 in __GI___libc_realloc (oldmem=0x5fcc0a153fe0, bytes=16368) at malloc.c:3508
#9 0x00005fcbc5375b9c in ?? ()
#10 0x00005fcbc5376470 in ?? ()
#11 0x00005fcbc5376470 in ?? ()
#12 0x00005fcbc5376470 in ?? ()
#13 0x00005fcbc5376b48 in ?? ()
#14 0x00007d2f3752af06 in lj_BC_FUNCC () at buildvm_x86.dasc:857
#15 0x00007d2f3753f12a in lua_pcall (L=0x7d2f37660380, nargs=<optimized out>, nresults=0, errfunc=<optimized out>) at /usr/src/debug/luajit/LuaJIT-ae4735f621d89d84758769b76432d2319dda9827/src/lj_api.c:1122
#16 0x00005fcbc516ae08 in ?? ()
#17 0x00005fcbc52dcf18 in state_handle_k_event ()
#18 0x00005fcbc51f2ae1 in ?? ()
#19 0x00005fcbc51e2b67 in ?? ()
#20 0x00005fcbc52dccc3 in state_enter ()
#21 0x00005fcbc51dfd2a in normal_enter ()
#22 0x00005fcbc4f79b27 in main ()
```
### Steps to reproduce
1. Open a large text file, somewhere in the range of > 400MB
2. `:lua a = vim.json.encode(table.concat(vim.api.nvim_buf_get_lines(0, 1, -1, false), "\n"))`
### Expected behavior
no crash.
### Neovim version (nvim -v)
v0.10.1
### Vim (not Nvim) behaves the same?
unknown
### Operating system/version
Arch
### Terminal name/version
wezterm 20240722-080956-7e8fdc11
### $TERM environment variable
xterm-256color
### Installation
Arch (extra) | has:repro,has:backtrace,bug-crash,lsp,dependencies | medium | Critical |
2,499,673,914 | neovim | Support cursorline_hl_group for diagnostic signs | ### Problem
Currently, for `vim.diagnostic` we may configure `numhl` and `linehl` properties. These configuration values are "forwarded" to `nvim_buf_set_extmark()`'s `number_hl_group` and `line_hl_group` options. However, `cursorline_hl_group` is a valid option to `nvim_buf_set_extmark` as well, and is necessary to get a coherent cursorline extending through the signcolumn.
Below is a picture of the problem. (We want the background of the sign to take on the CursorLineSign background as well.)
<img width="342" alt="Screenshot 2024-09-01 at 1 37 58 PM" src="https://github.com/user-attachments/assets/ede5b398-2d77-45ec-b3d2-08a43b89cd24">
### Expected behavior
Proposed solution: we add `culhl` configuration property for `vim.diagnostic.config({})`, mapping diagnostic severity to the highlight group used for the sign when the cursorline is on the line. Just like `numhl` and `linehl`, we forward this value to `cursorline_hl_group` option of `nvim_buf_set_extmark()` here:
https://github.com/neovim/neovim/blob/master/runtime/lua/vim/diagnostic.lua#L1455-L1461
My personal opinion is this is a small, low-cost change that does not add complexity to the codebase. Please consider!
Gitsigns.nvim added support for culhl, so this is the last frontier for me to get that nice coherent cursorline :] | enhancement,highlight,diagnostic | low | Minor |
2,499,682,431 | godot | Newly instantiated PointLight2D nodes set to Mix blend mode only render on sprites present in scene at spawn time. | ### Tested versions
- Reproducible in both versions I tried: 4.2.1 and 4.3stable.
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2080 (NVIDIA; 32.0.15.5599) - Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz (12 Threads)
### Issue description
A newly instantiated PointLight2D node using Mix blend mode will only render on CanvasItem textures that were present in the scene at the time the light spawned. Instantiating a pre-saved scene containing a point light and creating one via PointLight2D.new() both give the same result.
Changing the light blend mode to Add fixes the issue, PointLight2D's will now render on every sprite, but using Add blend mode piles up brightness in an undesirable way for my project.
I created a test project in both Godot 4.2 and 4.3 with the same result. I instantiate moving sprites one at a time with normal-mapped CanvasItem textures. After each sprite a new PointLight2D is created and tied to its position. (I've tried creating a single scene with both sprite and point light inside which causes the same issue.)
The first sprite to spawn will only render lights that have been created after it. It renders all future lights, while the last sprite will render only the single light created just after it.

### Steps to reproduce
- Create a CanvasModulate node set to black.
- Instantiate one or more Sprite2Ds with a normal map set in its CanvasItem texture.
- Instantiate a new PointLight2D, add a texture and set blend mode to Mix.
- Instantiate one or more new Sprite2Ds.
- The second set of Sprite2Ds do not accept light from the PointLight2Ds created before them.
### Minimal reproduction project (MRP)
[LightTest.zip](https://github.com/user-attachments/files/16830492/LightTest.zip)
| bug,topic:rendering,topic:2d | low | Minor |
2,499,704,450 | PowerToys | Allow us to uninstall unused feature and redownload them when we need them | ### Description of the new feature / enhancement
I love all the features, but I don't use them very often, so I usually disable them. The issue is that the software has become quite large (almost 1GB). My ambitious feature request is to allow users to delete unused features. If we want to re-enable them later, we could download a specific package. Thank you, team!
### Scenario when this would be used?
On laptops with limited storage. Mine has soldered internal storage :) Ngl, it is currently the second-largest app on my machine.
### Supporting information

| Needs-Triage | low | Minor |
2,499,732,293 | godot | If a modifier key like ^ ´ or ¨ is active (even if it hit before launching Godot) physical keycode output gets corrupted | ### Tested versions
- Reproducible in 4.3
### System information
Windows 11 - Godot 4.3 stable
### Issue description
Keyboards in some regions have dead keys that, when pressed, do nothing until you press a second key.
As an example, hitting "¨" and then "A" results in "Ä"; ^ + A = Â, etc.
`event.get_physical_keycode()` applies said modifiers to all of its output if one is currently active.
Here's my test script:
```gdscript
extends Node
func _ready() -> void:
var actions: = InputMap.get_actions()
for action in actions:
var events: Array[InputEvent] = InputMap.action_get_events(action)
for event in events:
if (event is InputEventKey):
var keycode : Key = event.get_physical_keycode()
var physical_label: = DisplayServer.keyboard_get_label_from_physical(keycode)
var key_name: = OS.get_keycode_string(physical_label)
print(key_name)
```
It normally prints:
```
key_name: A
key_name: B
key_name: C
key_name: D
key_name: E
key_name: F
key_name: G
```
But if I hit ¨ key before launching the project:
```
key_name: Ä
key_name: ¨
key_name: ¨
key_name: ¨
key_name: Ë
key_name: ¨
key_name: ¨
```
To clarify: That's the entire script. No _input() is involved at all.
### Steps to reproduce
Have keyboard with a modifier key like ¨, ´ or ^.
1. Hit said key once.
2. Launch the project.
### Minimal reproduction project (MRP)
[umlaut-caret-bug-test.zip](https://github.com/user-attachments/files/16830764/umlaut-caret-bug-test.zip)
| bug,topic:input | low | Critical |
2,499,735,770 | go | cmd/go: standard library tests fail with GO111MODULE=auto | ### Go version
go version go1.22.6 darwin/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE='auto'
GOARCH='amd64'
GOBIN=''
GOCACHE='/tmp/.gocache'
GOENV='/Users/rittneje/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/rittneje/go/pkg/mod'
GONOPROXY='[REDACTED]'
GONOSUMDB='[REDACTED]'
GOOS='darwin'
GOPATH='/Users/rittneje/go'
GOPRIVATE='[REDACTED]'
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/Users/rittneje/go1.22.6'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/Users/rittneje/go1.22.6/pkg/tool/darwin_amd64'
GOVCS='[REDACTED]'
GOVERSION='go1.22.6'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='clang'
CXX='clang++'
CGO_ENABLED='1'
GOMOD=''
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch x86_64 -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/kf/kr7_s3xx0l12zbj3jrn082hmzy5gvy/T/go-build1131392119=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
go test net/http -v -run=^TestReadResponseErrors$
### What did you see happen?
```
=== RUN   TestReadResponseErrors
    response_test.go:944: status "200 OK" "Content-Length:\r\nContent-Length:\r\n\r\nGophers\r\n": unexpected success; want error with substring "invalid empty Content-Length"
--- FAIL: TestReadResponseErrors (0.00s)
FAIL
FAIL    net/http    0.636s
FAIL
```
### What did you expect to see?
```
=== RUN   TestReadResponseErrors
--- PASS: TestReadResponseErrors (0.00s)
PASS
ok      net/http    0.278s
```
This is because `GODEBUG` is not getting the expected default value due to `GO111MODULE=auto`.
Explicitly running `GO111MODULE=on go test net/http -v -run=^TestReadResponseErrors$` causes it to pass.
| GoCommand | low | Critical |
2,499,738,535 | vscode | Emmet Wrap with fragments do not work at all | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: **Yes**/No
- VS Code Version: 1.92.2 (Universal)
- OS Version: macOS Sequoia 15.1 Developer Beta
Steps to Reproduce:
1. Create a markup element
2. Make sure you're selecting an element
3. Use the "Emmet: Wrap with Abbreviation" command
https://github.com/user-attachments/assets/52080ea9-c229-407d-a310-30106abc78a0 | bug,emmet,confirmation-pending | low | Critical |
2,499,759,054 | pytorch | On Kaggle : libcusparse.so.12: undefined symbol: __nvJitLinkComplete_12_4, version libnvJitLink.so.12 | ### 🐛 Describe the bug
I have tried everything with no luck.
Awaiting your input so I can try more.
I tried torch 2.4.0 and 2.5-dev with cu118, cu121, and cu124 - all give the same error.
The code below fails; I also got the same error when using the popular ComfyUI via SwarmUI.
```
import os
# Set CUDA_HOME environment variable
os.environ['CUDA_HOME'] = '/opt/conda'
# Add CUDA binary directory to PATH
os.environ['PATH'] = f"/opt/conda/bin:{os.environ['PATH']}"
# Set LD_LIBRARY_PATH to include CUDA libraries
os.environ['LD_LIBRARY_PATH'] = f"/opt/conda/lib:{os.environ.get('LD_LIBRARY_PATH', '')}"
# Verify CUDA version
!nvcc --version
# Optional: Check if CUDA is available in Python
import torch
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
print(f"CUDA version: {torch.version.cuda}")
```
which gives the error below:
```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Mar_28_02:18:24_PDT_2024
Cuda compilation tools, release 12.4, V12.4.131
Build cuda_12.4.r12.4/compiler.34097967_0
---------------------------------------------------------------------------
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[49], line 16
13 get_ipython().system('nvcc --version')
15 # Optional: Check if CUDA is available in Python
---> 16 import torch
17 print(f"CUDA available: {torch.cuda.is_available()}")
18 if torch.cuda.is_available():
File /opt/conda/lib/python3.10/site-packages/torch/__init__.py:368
366 if USE_GLOBAL_DEPS:
367 _load_global_deps()
--> 368 from torch._C import * # noqa: F403
371 class SymInt:
372 """
373 Like an int (including magic methods), but redirects all operations on the
374 wrapped node. This is used in particular to symbolically record operations
375 in the symbolic shape workflow.
376 """
ImportError: /opt/conda/lib/python3.10/site-packages/torch/lib/../../nvidia/cusparse/lib/libcusparse.so.12: undefined symbol: __nvJitLinkComplete_12_4, version libnvJitLink.so.12
```
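Not part of the original report, but a commonly suggested workaround for this exact undefined-symbol error is to make the dynamic loader resolve `libnvJitLink` from the pip-installed `nvidia-nvjitlink-cu12` package before `torch` is imported. A hedged sketch (the site-packages path and the helper name are assumptions taken from the traceback above, not a confirmed fix):

```python
import ctypes
import glob


def preload_nvjitlink(site_packages="/opt/conda/lib/python3.10/site-packages"):
    """Load the pip-installed libnvJitLink with RTLD_GLOBAL so that
    libcusparse.so.12 can resolve __nvJitLinkComplete_12_4 from it.

    Returns the loaded path, or None if no copy was found."""
    matches = sorted(
        glob.glob(f"{site_packages}/nvidia/nvjitlink/lib/libnvJitLink.so*")
    )
    if not matches:
        return None
    ctypes.CDLL(matches[0], mode=ctypes.RTLD_GLOBAL)
    return matches[0]


# Call this *before* `import torch`:
# preload_nvjitlink()
# import torch
```

If the function returns `None`, the `nvidia-nvjitlink-cu12` wheel is not installed in that environment and the path assumption above does not hold.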
### Versions
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.15.154+-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: Tesla T4
GPU 1: Tesla T4
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
Stepping: 3
BogoMIPS: 4000.36
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 64 KiB (2 instances)
L1i cache: 64 KiB (2 instances)
L2 cache: 2 MiB (2 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.16.2
[pip3] optree==0.11.0
[pip3] pytorch-ignite==0.5.1
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.5.0.dev20240901+cu124
[pip3] torchaudio==2.5.0.dev20240901+cu124
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.4.1
[pip3] torchvision==0.20.0.dev20240901+cu124
[pip3] triton==3.0.0
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl 2024.2.1 ha957f24_103 conda-forge
[conda] numpy 1.26.4 py310hb13e2d6_0 conda-forge
[conda] optree 0.11.0 pypi_0 pypi
[conda] pytorch-ignite 0.5.1 pypi_0 pypi
[conda] pytorch-lightning 2.4.0 pypi_0 pypi
[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi
[conda] torch 2.5.0.dev20240901+cu124 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20240901+cu124 pypi_0 pypi
[conda] torchinfo 1.8.0 pypi_0 pypi
[conda] torchmetrics 1.4.1 pypi_0 pypi
[conda] torchvision 0.20.0.dev20240901+cu124 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @seemethere @malfet @osalpekar @atalman @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @ptrblck @eqy | high priority,module: binaries,module: sparse,module: cuda,triaged,module: regression | medium | Critical |
2,499,792,058 | pytorch | DISABLED test_scaled_dot_product_attention_cuda_dynamic_shapes_cuda_wrapper (__main__.DynamicShapesCudaWrapperCudaTests) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_scaled_dot_product_attention_cuda_dynamic_shapes_cuda_wrapper&suite=DynamicShapesCudaWrapperCudaTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29537030945).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 12 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_scaled_dot_product_attention_cuda_dynamic_shapes_cuda_wrapper`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1446, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 129, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 129, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 422, in compile_fx_wrapper
return compile_fx(model_, example_inputs_)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1295, in compile_fx
return compile_fx(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1521, in compile_fx
return aot_autograd(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 72, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1071, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1056, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 522, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 759, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 179, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1350, in fw_compiler_base
return _fw_compiler_base(model, example_inputs, is_inference)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1421, in _fw_compiler_base
return inner_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 475, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 85, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 661, in _compile_fx_inner
compiled_graph = FxGraphCache.load(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 1332, in load
compiled_graph = compile_fx_fn(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 570, in codegen_and_compile
compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 878, in fx_codegen_and_compile
compiled_fn = graph.compile_to_fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1909, in compile_to_fn
return self.compile_to_module().call
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1835, in compile_to_module
return self._compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1841, in _compile_to_module
self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1766, in codegen_with_cpp_wrapper
return self.codegen()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1787, in codegen
result = self.wrapper_code.generate(self.is_inference)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/cpp_wrapper_cuda.py", line 214, in generate
return super().generate(is_inference)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/cpp_wrapper_cpu.py", line 904, in generate
return super().generate(is_inference)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/wrapper.py", line 857, in generate
return self._generate(is_inference)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/wrapper.py", line 924, in _generate
return result.getvaluewithlinemap()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/utils.py", line 871, in getvaluewithlinemap
line = line()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/cpp_wrapper_cuda.py", line 146, in __call__
grid = self.grid()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/cpp_wrapper_cuda.py", line 106, in __call__
return grid_fn(block_cfg)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1926, in seq_grid_fn
[
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1927, in <listcomp>
-x if x <= 0 else get_grid_dim(x, meta.get("XBLOCK", 1))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/sympy/core/relational.py", line 516, in __bool__
raise TypeError("cannot determine truth value of Relational")
TypeError: cannot determine truth value of Relational
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_cuda_cpp_wrapper.py", line 113, in fn
_, code = test_torchinductor.run_and_get_cpp_code(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/utils.py", line 1917, in run_and_get_cpp_code
result = fn(*args, **kwargs)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 11370, in new_test
return value(self)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 10238, in test_scaled_dot_product_attention
self.common(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 615, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 430, in check_model
actual = run(*example_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1251, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 523, in __call__
return _compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 915, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 663, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 696, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1322, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 216, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 631, in transform
tracer.run()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2796, in run
super().run()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2987, in RETURN_VALUE
self._return(inst)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2972, in _return
self.output.compile_subgraph(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1117, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1369, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1416, in call_user_compiler
return self._call_user_compiler(gm)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1465, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: backend='compile_fx_wrapper' raised:
TypeError: cannot determine truth value of Relational
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
python test/inductor/test_cuda_cpp_wrapper.py DynamicShapesCudaWrapperCudaTests.test_scaled_dot_product_attention_cuda_dynamic_shapes_cuda_wrapper
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_cuda_cpp_wrapper.py`
cc @clee2000 @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,499,792,059 | pytorch | DISABLED test_randint_randomness_for_large_range (__main__.TestCuda) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_randint_randomness_for_large_range&suite=TestCuda&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29537092747).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 12 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_randint_randomness_for_large_range`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_cuda.py", line 1027, in test_randint_randomness_for_large_range
assert abs(run(torch.device("cuda")) - run(torch.device("cpu"))) < 10_000
File "/var/lib/jenkins/workspace/test/test_cuda.py", line 1024, in run
return torch.stack([t1, t2]).unique().shape[0]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 991, in unique
return torch.unique(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_jit_internal.py", line 624, in fn
return if_false(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_jit_internal.py", line 624, in fn
return if_false(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/functional.py", line 1075, in _return_output
output, _, _ = _unique_impl(input, sorted, return_inverse, return_counts, dim)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/functional.py", line 968, in _unique_impl
output, inverse_indices, counts = torch._unique2(
RuntimeError: Host and device pointer dont match with cudaHostRegister. Please dont use this feature by setting PYTORCH_CUDA_ALLOC_CONF=use_cuda_host_register:False (default)
To execute this test, run the following from the base repo dir:
python test/test_cuda.py TestCuda.test_randint_randomness_for_large_range
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
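The `RuntimeError` above names a mitigation itself. A minimal sketch of applying it from Python (this must run before CUDA is initialized; whether it actually avoids the flaky failure is not verified here):

```python
import os

# Disable cudaHostRegister-based host allocation, as suggested by the
# RuntimeError above. Set this before the first `import torch` / CUDA init.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "use_cuda_host_register:False"
```

Setting the variable in the shell environment of the test job would have the same effect.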
Test file path: `test_cuda.py`
cc @ptrblck @msaroufim @clee2000 | module: cuda,triaged,module: flaky-tests,skipped | low | Critical |
2,499,792,521 | pytorch | DISABLED test_randint_cuda_dynamic_shapes_cuda_wrapper (__main__.DynamicShapesCudaWrapperCudaTests) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_randint_cuda_dynamic_shapes_cuda_wrapper&suite=DynamicShapesCudaWrapperCudaTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29537030667).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 12 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_randint_cuda_dynamic_shapes_cuda_wrapper`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_cuda_cpp_wrapper.py", line 113, in fn
_, code = test_torchinductor.run_and_get_cpp_code(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/utils.py", line 1917, in run_and_get_cpp_code
result = fn(*args, **kwargs)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 11370, in new_test
return value(self)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 8118, in test_randint
self.assertEqual(c0, c1)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3885, in assertEqual
raise error_metas.pop()[0].to_error(
AssertionError: Tensor-likes are not close!
Mismatched elements: 1598 / 1600 (99.9%)
Greatest absolute difference: 3.37277889251709 at index (28, 17) (up to 1e-05 allowed)
Greatest relative difference: 1.423457129854089e+38 at index (28, 17) (up to 1.3e-06 allowed)
To execute this test, run the following from the base repo dir:
python test/inductor/test_cuda_cpp_wrapper.py DynamicShapesCudaWrapperCudaTests.test_randint_cuda_dynamic_shapes_cuda_wrapper
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_cuda_cpp_wrapper.py`
cc @clee2000 @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,499,813,741 | tensorflow | `np.cumprod` on ndarray with type `tensorflow.python.framework.dtypes.bfloat16.as_numpy_dtype` can cause a segmentation fault | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf-nightly 2.18.0.dev20240828
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I encountered a segmentation fault in TensorFlow when I used the API `np.cumprod` on an ndarray with dtype `tensorflow.python.framework.dtypes.bfloat16.as_numpy_dtype`. I have confirmed that the code crashes on `tf-nightly-2.18.0.dev20240817` and `tf-nightly-2.18.0.dev20240828` (nightly builds).
### Standalone code to reproduce the issue
```python
import numpy as np
from tensorflow.python.framework import dtypes
def numpy_reverse(x, axis):
length = len(x.shape)
if axis < 0:
axis = length + axis
ix = [slice(None, None, -1) if i == axis else slice(None) for i in range(length)]
return x[tuple(ix)]
axis = 0
x = np.zeros([595]).astype(dtypes.bfloat16.as_numpy_dtype) # crash
# x = np.zeros([595]).astype(float) # works well
length = len(x.shape)
x = numpy_reverse(x, axis=0)
ix_head = [slice(0, 1) if i == axis else slice(None) for i in range(length)]
ix_init = [slice(0, -1) if i == axis else slice(None) for i in range(length)]
init = np.ones_like(x[tuple(ix_head)])
np_out = np.concatenate([init, np.cumprod(x[tuple(ix_init)], axis)], axis=axis)
```
### Relevant log output
```shell
Segmentation fault (core dumped)
```
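For reference, the repro above computes an exclusive cumulative product along a reversed axis. A plain-float64 sketch of the same computation (the helper name is mine, not from the report), which runs without crashing and suggests the failure is specific to the bfloat16 dtype:

```python
import numpy as np


def exclusive_reversed_cumprod(x, axis=0):
    # Flip, prepend a slice of ones, then take the cumulative product of all
    # but the last slice along `axis` -- the same result the repro builds.
    x = np.flip(x, axis=axis)
    head = np.ones_like(np.take(x, [0], axis=axis))
    init = np.take(x, range(x.shape[axis] - 1), axis=axis)
    return np.concatenate([head, np.cumprod(init, axis=axis)], axis=axis)
```

With `x = np.array([2.0, 3.0, 4.0])` this yields `[1.0, 4.0, 12.0]`; casting the input to `dtypes.bfloat16.as_numpy_dtype` is what triggers the crash above.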
| stat:awaiting tensorflower,type:support,comp:ops,2.17 | low | Critical |
2,499,817,988 | stable-diffusion-webui | [Feature Request]: Keep original file name for img-img | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
There doesn't seem to be an option to save img-img generations with their original file names. This is pretty annoying when you're working with hundreds of files and need them named just like the originals (e.g. game textures); it takes a lot of extra time to go into the output folder and rename them.
For batch processing this option does exist, which is great. However, sometimes I want to write individual prompts, and I'm less likely to do so because of the tedium of renaming files every time.
### Proposed workflow
checkbox
### Additional information
_No response_ | enhancement | low | Minor |
2,499,843,238 | yt-dlp | Add support for onsen.ag | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
Japan
### Example URLs
- Single Video: https://www.onsen.ag/program/lycoris-recoil?c=MTEyNjU
- Single Video: https://www.onsen.ag/program/lycoris-recoil?c=MTEyNjY
- Playlist: https://www.onsen.ag/program/lycoris-recoil
<br>
- Single Video: https://share.onsen.ag/program/ganbatte?c=MTk0MTA
- Playlist: https://www.onsen.ag/program/ganbatte
### Provide a description that is worded well enough to be understood
The site is not supported and the extractor was not able to find the video.
I'm able to download the first video using
`yt-dlp https://onsen-ma3phlsvod.sslcs.cdngc.net/onsen-ma3pvod/_definst_/202209/lycoris-recoil220905fMB1-01.mp4/playlist.m3u8 --referer "https://www.onsen.ag/program/lycoris-recoil?c=MTEyNjU"`
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.onsen.ag/program/lycoris-recoil']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (debian*)
[debug] Python 3.12.3 (CPython x86_64 64bit) - Linux-6.8.0-41-generic-x86_64-with-glibc2.39 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.39)
[debug] exe versions: ffmpeg 6.1.1 (setts), ffprobe 6.1.1
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2023.11.17, mutagen-1.46.0, requests-2.31.0, sqlite3-3.45.1, urllib3-2.0.7, websockets-10.4
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1830 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.08.06 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.08.06 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://www.onsen.ag/program/lycoris-recoil
[generic] lycoris-recoil: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] lycoris-recoil: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.onsen.ag/program/lycoris-recoil
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 1626, in wrapper
    return func(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 1761, in __extract_info
    ie_result = ie.extract(url)
                ^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/yt_dlp/extractor/common.py", line 740, in extract
    ie_result = self._real_extract(url)
                ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/yt_dlp/extractor/generic.py", line 2526, in _real_extract
    raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.onsen.ag/program/lycoris-recoil
```
| site-request | low | Critical |
2,499,860,029 | pytorch | [torch.jit.script] INTERNAL ASSERT FAILED at "../aten/src/ATen/core/jit_type_base.h":556, please report a bug to PyTorch | ### 🐛 Describe the bug
An INTERNAL ASSERT FAILED error is raised when using `torch.jit.script`. The code to reproduce it is as follows:
```python
import inspect
from typing import Dict, Iterator, List, Optional, Tuple, Any

import torch
import torch.testing._internal.jit_utils
from torch.testing._internal.common_utils import enable_profiling_mode_for_profiling_tests, ProfilingMode
import textwrap


def get_frame_vars(frames_up):
    frame = inspect.currentframe()
    if not frame:
        raise RuntimeError("failed to inspect frame")
    i = 0
    while i < frames_up + 1:
        frame = frame.f_back
        if not frame:
            raise RuntimeError("failed to get frame")
        i += 1
    defined_vars: Dict[str, Any] = {}
    defined_vars.update(frame.f_locals)
    defined_vars.update(frame.f_globals)
    return defined_vars


def execWrapper(code, glob, loc):
    exec(code, glob, loc)


def checkScript(script,
                inputs,
                name='func',
                optimize=True,
                inputs_requires_grad=False,
                capture_output=False,
                frames_up=1,
                profiling=ProfilingMode.PROFILING,
                atol=None,
                rtol=None):
    with torch.jit.optimized_execution(optimize):
        with enable_profiling_mode_for_profiling_tests():
            extra_profile_runs = any(isinstance(x, torch.Tensor) and x.requires_grad for x in inputs)
            if isinstance(script, str):
                cu = torch.jit.CompilationUnit(script, _frames_up=frames_up)
                frame = get_frame_vars(frames_up)
                the_locals: Dict[str, Any] = {}
                execWrapper(script, glob=frame, loc=the_locals)
                frame.update(the_locals)
                scripted_fn = getattr(cu, name)
            else:
                source = textwrap.dedent(inspect.getsource(script))
                checkScript(
                    source,
                    inputs,
                    script.__name__,
                    optimize=optimize,
                    inputs_requires_grad=inputs_requires_grad,
                    capture_output=capture_output,
                    profiling=profiling,
                    frames_up=2)
                # Continue checking the Python frontend
                scripted_fn = torch.jit.script(script, _frames_up=1)
            # profiling run
            script_outputs = scripted_fn(*inputs)
            if inputs_requires_grad or extra_profile_runs:
                opt_script_outputs = scripted_fn(*inputs)
            opt_script_outputs = scripted_fn(*inputs)


def fn():
    a: List[int] = []
    b: torch.Tensor = torch.ones(2, 2)
    c: Optional[torch.Tensor] = None
    d: Optional[torch.Tensor] = torch.ones(3, 4)
    for _ in range(10):
        a.append(4)
        c = torch.ones(2, 2)
        d = None
    return [a, b, c, d]


checkScript(fn, [])
```
Error messages:
> Traceback (most recent call last):
>   File "/data/code.py", line 87, in <module>
>     checkScript(fn, [])
>   File "/data/code.py", line 59, in checkScript
>     checkScript(
>   File "/data/code.py", line 75, in checkScript
>     opt_script_outputs = scripted_fn(*inputs)
>   File "/data/anacondas/envs/torchtest/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 826, in prof_func_call
>     return prof_callable(func_call, *args, **kwargs)
>   File "/data/anacondas/envs/torchtest/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 823, in prof_callable
>     return callable(*args, **kwargs)
> RuntimeError: r INTERNAL ASSERT FAILED at "../aten/src/ATen/core/jit_type_base.h":556, please report a bug to PyTorch.
The error is reproducible with the nightly-build version `2.5.0.dev20240815+cpu`.
### Versions
PyTorch version: 2.5.0.dev20240815+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-107-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz
Stepping: 6
CPU MHz: 900.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1 MiB
L2 cache: 40 MiB
L3 cache: 48 MiB
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.2
[pip3] onnxruntime==1.19.0
[pip3] onnxscript==0.1.0.dev20240816
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.5.0.dev20240815+cpu
[pip3] torch-xla==2.4.0
[pip3] torch_xla_cuda_plugin==2.4.0
[pip3] torchaudio==2.4.0.dev20240815+cu121
[pip3] torchvision==0.20.0.dev20240815+cu121
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi
[conda] torch 2.5.0.dev20240815+cpu pypi_0 pypi
[conda] torch-xla 2.4.0 pypi_0 pypi
[conda] torch-xla-cuda-plugin 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0.dev20240815+cu121 pypi_0 pypi
[conda] torchvision 0.20.0.dev20240815+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit | low | Critical |
2,499,881,037 | kubernetes | Panic when comparing two interface. | ### What happened?
When I used ArgoCD, I found that [this line](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/util/strategicpatch/patch.go#L955) will panic if the two compared values are of type `map[string]interface{}`. The call was initiated from [here](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/util/strategicpatch/patch.go#L2105). If the passed-in entry does not follow the schema, both compared values may end up as `map[string]interface{}`, and comparing them with `==` raises the panic.
### What did you expect to happen?
According to the function `CreateThreeWayMergePatch`'s comments, it should either return an error saying the input does not follow the schema, or perform the comparison; a panic should not be raised. I propose using `reflect.DeepEqual` instead of `==`.
### How can we reproduce it (as minimally and precisely as possible)?
If the struct does not follow the schema (e.g. one field is given as a `map[string]interface{}` instead of a string), calling `CreateThreeWayMergePatch` raises the panic.
### Anything else we need to know?
I could raise a PR if this makes sense.
### Kubernetes version
1.29.5
### Cloud provider
N/A
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/api-machinery,triage/accepted | low | Critical |
2,499,960,910 | flutter | Add preventGestureDelay Option to flutter_webview Plugin | ### Use case
Problem Description: The problem I'm facing is a noticeable delay in touch interactions within the flutter_webview plugin. This delay, which typically occurs between tapping a button and being able to scroll or perform another action, degrades the user experience by making the app feel unresponsive. This issue is particularly problematic in web-based mobile applications where immediate feedback from touch interactions is crucial.
Relation to a Problem: The issue is tied to Flutter’s DelayingGestureRecognizer, which introduces a delay to differentiate between single and double taps. While this may be useful in certain contexts, it hinders the fluidity of interactions in a WebView, where immediate touch response is often more important. This is especially frustrating in applications that require quick, responsive interactions, like scrolling immediately after tapping a button.
Alternative Solutions Considered: I have considered using alternative packages like flutter_inappwebview, which offers a preventGestureDelay option that effectively eliminates this delay. However, this package hasn't been maintained for several months, leading to concerns about long-term support and compatibility. Additionally, the flutter_webview plugin is the official and more stable option, making it the preferred choice for many developers, including myself.
### Proposal
Feature Request: I propose adding a preventGestureDelay option to the flutter_webview plugin. This option should allow developers to toggle the prevention of gesture delays, specifically addressing the delay caused by the DelayingGestureRecognizer. This will enable WebView to respond to touch interactions, such as taps and scrolls, immediately, improving the overall user experience.
Why This Should Be Provided by Flutter: Given that flutter_webview is the official WebView plugin for Flutter, it is crucial for it to support common use cases that demand responsive touch interactions. This feature would enhance the plugin's utility and ensure it meets the needs of modern mobile applications that require quick, fluid touch gestures. While this could potentially be implemented in a third-party package, having it as part of the official plugin ensures better integration, maintenance, and stability across Flutter projects. | p: webview,package,c: proposal,team-ecosystem,P2,triaged-ecosystem | low | Major |
2,499,972,364 | ui | [bug]: Error: You forgot to add 'mini-css-extract-plugin' plugin (i.e. `{ plugins: [new MiniCssExtractPlugin()] }`), | ### Describe the bug
When creating a new project with the CLI (without an existing Next.js project) using pnpm I get this error.
### Affected component/components
-
### How to reproduce
- Empty directory
- pnpm dlx shadcn@latest init
- pnpm dlx shadcn@latest add button
- Render a button on app/page.tsx
- pnpm run dev
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
Error: You forgot to add 'mini-css-extract-plugin' plugin (i.e. `{ plugins: [new MiniCssExtractPlugin()] }`), please read https://github.com/webpack-contrib/mini-css-extract-plugin#getting-started
Import trace for requested module:
./app/globals.css
```
### System Info
```bash
MacOS, Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,500,029,108 | deno | line numbers don't match source code in chrome://inspect performance tab | Version: Deno 1.46.2
Chrome 128.0.6613.114
I'm using the --inspect-wait flag to profile some Deno code. To reproduce:
```
git clone git@github.com:skybrian/repeat-test.git
cd repeat-test
git checkout 389357e1b3e2546dbefc6b9d9984760509b13ee2
deno --inspect-wait performance/profile_take_chars.ts
```
The profile runs automatically after the inspector connects and the flame graph appears after a second or two, but when I look at individual functions in the call graph, the line numbers are off. For example, there are two prune() functions that are reported to be at lines 66 and 263 of pick_tree.ts, but in [the pick_tree source code](https://github.com/skybrian/repeat-test/blob/main/src/pick_tree.ts), these functions are actually at 111-136 and 342-365.
Presumably the code is being transformed in some way before it's run? It would be nice if that transformation preserved newlines so that the line numbers match.
| bug | low | Major |
2,500,059,729 | electron | [Feature Request]: protocol.handle for Websocket | ### Preflight Checklist
- [X] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [X] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [X] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
`protocol.handle()` works for HTTP but not WebSocket: we can call `protocol.handle('someProto')` and then `fetch('someProto://example/file.txt')`, but we cannot do `new WebSocket('someProto://example/path')`.
### Proposed Solution
having protocol.handle() work with both fetch and `new WebSocket()` or an option for protocol.handle() that enables it to handle WebSocket
### Alternatives Considered
having a separate function to register protocol handlers for WebSocket
### Additional Information
_No response_ | enhancement :sparkles: | low | Minor |
2,500,072,007 | rust | 32-bit ARM NEON intrinsics are unsound due to subnormal flushing | This is the ARM NEON version of https://github.com/rust-lang/rust/issues/114479. Example by @beetrees, to be compiled with `--target armv7-unknown-linux-gnueabihf -O -Ctarget-feature=+neon`:
```rust
#![feature(stdarch_arm_neon_intrinsics)]
use std::arch::arm::{float32x2_t, vadd_f32};
use std::mem::transmute;

#[inline(never)]
fn print_vals(x: float32x2_t, i: usize, vals_i: u32) {
    println!("x={x:?} i={i} vals[i]={vals_i}");
}

const ZERO: float32x2_t = unsafe { transmute([0, 0]) };
const INC: float32x2_t = unsafe { transmute([f32::MIN_POSITIVE / 128.0, f32::MIN_POSITIVE / 128.0]) };
const TARGET: [u32; 2] = unsafe { transmute([f32::MIN_POSITIVE, f32::MIN_POSITIVE]) };

#[inline(never)]
pub fn evil(vals: &[u32; 300]) {
    let mut x: float32x2_t = ZERO;
    let mut i: usize = 0;
    while unsafe { transmute::<float32x2_t, [u32; 2]>(x) } != TARGET {
        print_vals(x, i, vals[i]);
        x = unsafe { vadd_f32(x, INC) };
        x = unsafe { vadd_f32(x, INC) };
        i += 2;
    }
}

pub fn main() {
    let mut vals: [u32; 300] = [0; 300];
    for i in 0..300 { vals[i as usize] = i; }
    evil(&vals);
}

#[cfg(not(target_feature = "neon"))]
compile_error!("-Ctarget-feature=+neon required");
```
LLVM's optimizations assume they can calculate what that loop does, and that it follows IEEE semantics. But LLVM's codegen produces code that does not have IEEE semantics, and instead flushes subnormals to zero. :boom:
This almost surely also affects the unstable `std::simd` on ARM. | O-Arm,P-medium,T-compiler,I-unsound,T-libs,A-floating-point,I-miscompile | low | Critical |
2,500,080,885 | godot | Tiny mouse pointer under wayland | ### Tested versions
Godot v4.3.stable
### System information
Godot v4.3.stable - Manjaro Linux #1 SMP PREEMPT_DYNAMIC Mon Aug 19 09:51:26 UTC 2024 - Wayland - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 (nvidia; 550.107.02) - 12th Gen Intel(R) Core(TM) i5-12400F (12 Threads)
### Issue description
[Screencast_20240902_080033.webm](https://github.com/user-attachments/assets/006f0853-1956-44bc-b082-e6356b43b242)
As seen in the video, the mouse is tiny compared with the OS one (the size changes when hovering the system titlebar)
### Steps to reproduce
Just start godot with wayland support (--display-driver wayland) under linux
### Minimal reproduction project (MRP)
Not applicable | bug,platform:linuxbsd | low | Major |
2,500,133,694 | pytorch | DISABLED test_random_no_reused_random_states_float32 (__main__.TestCuda) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_random_no_reused_random_states_float32&suite=TestCuda&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29541618910).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 9 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_random_no_reused_random_states_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
  File "/var/lib/jenkins/workspace/test/test_cuda.py", line 1047, in test_random_no_reused_random_states
    run(func, torch.device("cuda"), dtype)
  File "/var/lib/jenkins/workspace/test/test_cuda.py", line 1042, in run
    return torch.stack([t1, t2]).unique().shape[0]
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 991, in unique
    return torch.unique(
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_jit_internal.py", line 624, in fn
    return if_false(*args, **kwargs)
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_jit_internal.py", line 624, in fn
    return if_false(*args, **kwargs)
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/functional.py", line 1075, in _return_output
    output, _, _ = _unique_impl(input, sorted, return_inverse, return_counts, dim)
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/functional.py", line 968, in _unique_impl
    output, inverse_indices, counts = torch._unique2(
RuntimeError: Host and device pointer dont match with cudaHostRegister. Please dont use this feature by setting PYTORCH_CUDA_ALLOC_CONF=use_cuda_host_register:False (default)
To execute this test, run the following from the base repo dir:
python test/test_cuda.py TestCuda.test_random_no_reused_random_states_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_cuda_expandable_segments.py`
cc @ptrblck @msaroufim @clee2000 | module: cuda,triaged,module: flaky-tests,skipped | low | Critical |
2,500,147,307 | puppeteer | [Bug]: Scale parameter in page.pdf() has no effect when using Firefox as browser engine/backend | ### Minimal, reproducible example
```TypeScript
const puppeteer = require('puppeteer')

const firefoxOptions = {
  browser: 'firefox',
  executablePath: '/opt/firefox/129.0.2/firefox',
  headless: true
};

function delay(time) {
  return new Promise(function(resolve) {
    setTimeout(resolve, time)
  });
}

async function printPDF() {
  console.log('printPDF() called');
  const browser = await puppeteer.launch(firefoxOptions);
  const page = await browser.newPage();
  /* await page.setViewport({
    width: 1600,
    height: 1131,
    deviceScaleFactor: 1.0,
  }); */
  await page.goto('http://foo.bar/site', {waitUntil: 'networkidle0'});
  console.log('Waiting for 20s');
  await delay(20000);
  console.log('Generating PDF...');
  await page.pdf({path: 'page.pdf',
    format: 'A4',
    scale: 0.4,
    timeout: 900000,
    margin: { left: '0.5cm', top: '1.5cm', right: '0.5cm', bottom: '0.5' }});
  await browser.close();
  console.log('Finished');
};

printPDF();
```
### Background
I used to have Puppeteer v19.x and Firefox version nightly 2023-11-27 where PDF printing with scale option worked just fine.
Now I have updated Puppeteer to 23.2.1 (Node.js 20.x LTS) and Firefox to 129.0.2 (which is mentioned here: https://github.com/puppeteer/puppeteer/blob/main/packages/puppeteer-core/src/revisions.ts), and the scale option no longer has any effect. I tried multiple values, and none of them work (it always defaults to 1).
Firefox was downloaded from here: https://download-installer.cdn.mozilla.net/pub/firefox/releases/129.0.2/linux-x86_64/en-US/
### Expectation
https://pptr.dev/api/puppeteer.pdfoptions has the "scale" option listed, and it has worked with Firefox earier, but it's not working anymore. I would like to have this option, because by using that, I'm able to get more information/page view in one PDF sheet ("zoom out" effect).
### Reality
The scale option does not have any effect: PDFs print at the default scale of 1 even when I pass 0.4 or 0.6 as its value.
### Puppeteer configuration file (if used)
_No response_
### Puppeteer version
23.2.1
### Node version
20.8.1
### Package manager
npm
### Package manager version
10.1.0
### Operating system
Linux | bug,upstream,confirmed,bidi,P3,firefox | low | Critical |
2,500,147,655 | react | [React 19] Need suggestion on upgrade. | ## Summary
We are upgrading React in our project, and we see in the documentation that support for `defaultProps` and `propTypes` has been removed.
We are using JSX, and migrating to TypeScript is not possible at the moment as our project is fairly big. Any suggestions on how to do default-prop and prop-type checks? | React 19 | low | Minor |
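Since React 19 removes `defaultProps` for function components, the usual replacement is ES destructuring defaults, which work in plain `.jsx` files without TypeScript. A minimal sketch (the `greetingText` and `assertGreetingProps` names are illustrative, not React APIs):

```typescript
// Before React 19: Greeting.defaultProps = { name: "Guest" };
// After: the default value lives in the destructured parameter list.
type GreetingProps = { name?: string };

function greetingText({ name = "Guest" }: GreetingProps = {}): string {
  return `Hello, ${name}!`;
}

// A plain runtime guard can stand in for a `propTypes` check.
function assertGreetingProps(props: GreetingProps): void {
  if (props.name !== undefined && typeof props.name !== "string") {
    throw new TypeError("Expected `name` to be a string");
  }
}

console.log(greetingText({}));              // "Hello, Guest!"
console.log(greetingText({ name: "Ada" })); // "Hello, Ada!"
```

In a real component the same destructuring-with-defaults pattern goes directly in the component's function signature, so no separate `defaultProps` object is needed.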
2,500,247,696 | pytorch | Broadcasting Parameter destroys hooks in PyTorch 2.4 | ### 🐛 Describe the bug
The following code runs successfully in PyTorch 2.1.1.
However, in PyTorch 2.4, the hook is not called:
```
import os
import torch
import torch.nn as nn
import torch.distributed as dist
os.environ["MASTER_ADDR"] = "127.0.0.1"
os.environ["MASTER_PORT"] = "8881"
os.environ["WORLD_SIZE"] = "1"
os.environ["RANK"] = "0"
dist.init_process_group(backend='gloo', init_method='env://')
weight = nn.Parameter(torch.ones(1))
hook_fired = False
def grad_hook(_):
    global hook_fired
    hook_fired = True
weight.register_hook(grad_hook)
dist.broadcast(weight, src=0)
weight.backward()
dist.destroy_process_group()
assert hook_fired # AssertionError for pytorch 2.4
```
Further, hooks that are registered after `dist.broadcast` are also not called
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04 LTS (x86_64)
GCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-36-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7232P 8-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 90%
CPU max MHz: 3100.0000
CPU min MHz: 1500.0000
BogoMIPS: 6200.10
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Critical |
2,500,253,606 | pytorch | Multi-threading with AOT inductor | ### 🐛 Describe the bug
Executing multi-threaded forward passes with one AOT Inductor runner (https://pytorch.org/docs/main/torch.compiler_aot_inductor.html) shows very poor performance.
I generated a tiny model.so with
```
import os  # added: needed for os.path.join / os.getcwd below
import torch
from torch._inductor import config as inductor_config

inductor_config.cpp_wrapper = True
inductor_config.freezing = True


class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = torch.nn.Linear(512, 512).bfloat16()

    def forward(self, x):
        return self.linear1(x)


model = Model().eval()
inputs = torch.randn(128, 512)

with torch.no_grad(), torch.amp.autocast("cpu", enabled=True, dtype=torch.bfloat16):
    so_path = torch._export.aot_compile(
        model, (inputs, ),
        options={"aot_inductor.output_path": os.path.join(os.getcwd(), "model.so")}
    )
```
After that, I took https://pytorch.org/docs/main/torch.compiler_aot_inductor.html as a reference to create a runner on `model.so`, and invoked `runner.run` from multiple threads. The performance is very poor and CPU usage is only **~3%**.

Is there any `mutex` here? The `JIT` path should not have any mutex and can reach 100% CPU usage in similar multi-threading cases.
My `inference.cpp`
```
#include <iostream>
#include <thread>  // added: required for std::thread
#include <vector>

#include <torch/torch.h>
#include <torch/csrc/inductor/aoti_runner/model_container_runner_cpu.h>

int main() {
    c10::InferenceMode mode;
    torch::inductor::AOTIModelContainerRunnerCpu runner("/localdisk/haozhe/aoti_example/model1.so");
    std::vector<torch::Tensor> inputs = {torch::randn({128, 512})};
    std::vector<torch::Tensor> outputs = runner.run(inputs);
    std::cout << "Result from the first inference:" << std::endl;
    std::cout << outputs[0] << std::endl;

    std::vector<std::vector<torch::Tensor>> minputs;
    for (int i = 0; i < 40; i++) minputs.push_back(inputs);
    // std::vector<torch::inductor::AOTIModelContainerRunnerCpu> runners;
    // for (int i = 0; i < 40; i++) runners.push_back(torch::inductor::AOTIModelContainerRunnerCpu("/localdisk/haozhe/aoti_example/model1.so"));

    std::vector<std::thread> callers;
    for (const auto thread_id : c10::irange(40)) {
        callers.emplace_back([&, thread_id]() {
            int iter = 0;
            while (iter < 100000) {
                iter += 1;
                // std::vector<torch::Tensor> outputs = runners[thread_id].run(minputs[thread_id]);
                std::vector<torch::Tensor> outputs = runner.run(minputs[thread_id]);
            }
        });
    }
    for (auto& t : callers) {
        t.join();
    }
    return 0;
}
```
### Versions
```
PyTorch version: 2.5.0a0+gitde3a641
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.34
Python version: 3.9.19 | packaged by conda-forge | (main, Mar 20 2024, 12:50:21) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.6.0-gnr.bkc.6.6.20.7.30.x86_64-x86_64-with-glibc2.34
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 480
On-line CPU(s) list: 0-479
Vendor ID: GenuineIntel
Model name: GENUINE INTEL(R) XEON(R)
CPU family: 6
Model: 173
Thread(s) per core: 2
Core(s) per socket: 120
Socket(s): 2
Stepping: 0
CPU max MHz: 2500.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 11.3 MiB (240 instances)
L1i cache: 15 MiB (240 instances)
L2 cache: 480 MiB (240 instances)
L3 cache: 1008 MiB (2 instances)
NUMA node(s): 6
NUMA node0 CPU(s): 0-39,240-279
NUMA node1 CPU(s): 40-79,280-319
NUMA node2 CPU(s): 80-119,320-359
NUMA node3 CPU(s): 120-159,360-399
NUMA node4 CPU(s): 160-199,400-439
NUMA node5 CPU(s): 200-239,440-479
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] bert_pytorch==0.0.1a4
[pip3] functorch==1.14.0a0+b71aa0b
[pip3] intel_extension_for_pytorch==2.5.0+gitf86e93e
[pip3] mypy==1.11.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.21.2
[pip3] onnx==1.16.2
[pip3] optree==0.12.1
[pip3] pytorch-labs-segment-anything-fast==0.2
[pip3] torch==2.5.0a0+gitde3a641
[pip3] torch_geometric==2.4.0
[pip3] torchao==0.3.1
[pip3] torchaudio==2.4.0a0+b3f6f51
[pip3] torchdata==0.7.0a0+11bb5b8
[pip3] torchmetrics==0.11.0
[pip3] torchmultimodal==0.1.0b0
[pip3] torchrec==0.3.2
[pip3] torchsnapshot==0.1.0
[pip3] torchtext==0.17.0a0+09e2690
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] functorch 1.14.0a0+b71aa0b pypi_0 pypi
[conda] intel-extension-for-pytorch 2.5.0+gitf86e93e dev_0 <develop>
[conda] mkl-include 2024.2.0 pypi_0 pypi
[conda] mkl-static 2024.2.0 pypi_0 pypi
[conda] numpy 1.21.2 pypi_0 pypi
[conda] optree 0.12.1 pypi_0 pypi
[conda] pytorch-labs-segment-anything-fast 0.2 pypi_0 pypi
[conda] torch 2.5.0a0+gitde3a641 dev_0 <develop>
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torchao 0.3.1 pypi_0 pypi
[conda] torchaudio 2.4.0a0+b3f6f51 dev_0 <develop>
[conda] torchdata 0.7.0a0+11bb5b8 dev_0 <develop>
[conda] torchmetrics 0.11.0 pypi_0 pypi
[conda] torchmultimodal 0.1.0b0 pypi_0 pypi
[conda] torchrec 0.3.2 pypi_0 pypi
[conda] torchsnapshot 0.1.0 pypi_0 pypi
[conda] torchtext 0.17.0a0+09e2690 dev_0 <develop>
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 | oncall: pt2,oncall: export,module: aotinductor | low | Critical |
2,500,266,462 | react-native | StyleSheet.flatten with a falsy value returns undefined | ### Description
On iOS and Android (not web), calling `StyleSheet.flatten` with null/undefined/false returns `undefined`.
The TypeScript signature of `flatten` is `flatten<T>(style?: StyleProp<T>): T`. null/undefined/false are valid `StyleProp`s, so passing them to flatten should be valid, but flatten is supposed to return `T` (some style object), and undefined is usually not a valid `T`.
Widening the return type of `flatten` to `T | undefined` is an option, but would be a public API change. I think changing the function to return `{}` for falsy values is more correct, since you probably want to get a final computed, flattened style object, and don't care what the input was or if it was falsy.
### Steps to reproduce
On iOS or Android:
```
console.log(typeof StyleSheet.flatten(false)) // => 'undefined'
```
### React Native Version
0.75.2
### Affected Platforms
Runtime - Android, Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 14.2
CPU: (8) arm64 Apple M2
Memory: 339.17 MB / 8.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 22.3.0
path: ~/.volta/tools/image/node/22.3.0/bin/node
Yarn:
version: 3.6.4
path: ~/.volta/tools/image/yarn/4.3.1/bin/yarn
npm:
version: 10.8.1
path: ~/.volta/tools/image/node/22.3.0/bin/npm
Watchman:
version: 2024.07.01.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /Users/hazel/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.5
- iOS 17.5
- macOS 14.5
- tvOS 17.5
- visionOS 1.2
- watchOS 10.5
Android SDK: Not Found
IDEs:
Android Studio: Not Found
Xcode:
version: 15.4/15F31d
path: /usr/bin/xcodebuild
Languages:
Java: Not Found
Ruby:
version: 2.7.4
path: /Users/hazel/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.75.2
wanted: 0.75.2
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: false
```
### Stacktrace or Logs
```text
N/A
```
### Reproducer
https://snack.expo.dev/@catboyfan/stylesheet-flatten-undefined
### Screenshots and Videos
_No response_ | Issue: Author Provided Repro,API: StyleSheet | low | Minor |
2,500,295,447 | vscode | Merge Editor Code lens renders incorrectly | Possibly caused by @jrieken changes as it uses the same type of code lens
I opened the merge editor and the code lenses were rendered twice or at the wrong location after scrolling.
<img width="566" alt="image" src="https://github.com/user-attachments/assets/4c8a8da4-4b1f-409a-8718-9af237c66f16">
| bug,merge-editor | low | Minor |
2,500,295,610 | next.js | i18n in next.config.js introduces significant latency in App Router | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/z3jmlt
### To Reproduce
1. Navigate to provided link
2. Fork it
3. Start server by running `pnpm dev`
### Current vs. Expected behavior
Combining App Router with Pages Router seems to introduce significant latency.
We migrated some of our page routes to app routes as suggested by the official Next.js incremental-migration guide, but the process wasn't smooth and we still couldn't go live because of the latency issues on App Router.
We profiled pages routes vs. app routes under load tests and found that P90 and P95 latency are 10x higher on App Router compared to Pages Router.
See below screenshot.

I also created a sandbox environment to confirm the behaviour there; on the dev server I see similar latency results.
The repro has `/test-app-router` in the `app` folder and `/test-pages-router` in the `pages` folder.
Results are below:
```
✓ Starting...
✓ Ready in 2.4s
Browserslist: caniuse-lite is outdated. Please run:
npx update-browserslist-db@latest
Why you should do it regularly: https://github.com/browserslist/update-db#readme
○ Compiling / ...
✓ Compiled / in 4.5s (544 modules)
GET / 200 in 4728ms
GET / 200 in 35ms
○ Compiling /test-app-router ...
✓ Compiled /test-app-router in 512ms (516 modules)
GET /test-app-router 200 in 767ms
⚠ Fast Refresh had to perform a full reload. Read more: https://nextjs.org/docs/messages/fast-refresh-reload
○ Compiling /test-pages-router ...
✓ Compiled /test-pages-router in 1761ms (744 modules)
GET /test-pages-router 200 in 1879ms
⚠ Fast Refresh had to perform a full reload. Read more: https://nextjs.org/docs/messages/fast-refresh-reload
GET /test-pages-router 200 in 14ms
GET /test-pages-router 200 in 10ms
GET /test-app-router 200 in 95ms
GET /test-app-router 200 in 25ms
GET /test-pages-router 200 in 8ms
GET /test-app-router 200 in 24ms
GET /test-pages-router 200 in 7ms
GET /test-app-router 200 in 19ms
GET /test-pages-router 200 in 6ms
GET /test-app-router 200 in 16ms
⚠ Fast Refresh had to perform a full reload. Read more: https://nextjs.org/docs/messages/fast-refresh-reload
GET /test-pages-router 200 in 54ms
GET /test-app-router 200 in 28ms
GET /test-pages-router 200 in 7ms
GET /test-app-router 200 in 25ms
GET /test-pages-router 200 in 7ms
GET /test-app-router 200 in 18ms
GET /test-pages-router 200 in 6ms
```
After caching stabilises in the browser, we can clearly see that App Router is roughly 3x slower than Pages Router.
This behaviour makes gradual migration from Pages Router to App Router impractical.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0
Available memory (MB): 16384
Available CPU cores: 10
Binaries:
Node: 20.11.0
npm: 10.2.4
Yarn: N/A
pnpm: 9.6.0
Relevant Packages:
next: 14.2.4 // There is a newer version (14.2.7) available, upgrade recommended!
eslint-config-next: N/A
react: 18.2.0
react-dom: 18.2.0
typescript: 5.2.2
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), Other (Deployed)
### Additional context
Our production app is running on Linux system.
Same behaviour can be observed both on Linux(prod), Darwin(local) and sandbox environment provided in the link. | bug,Internationalization (i18n) | low | Major |
2,500,340,623 | pytorch | torch.hub.load will raise RuntimeError: operator torchvision::nms does not exist | ### 🐛 Describe the bug
Loading models from the torch hub raises `RuntimeError: operator torchvision::nms does not exist`. I tested on both Linux and macOS.
```python
import torch
torch.hub.load('pytorch/vision', 'resnet18')
```
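This error is often a symptom of a torch/torchvision version mismatch rather than a problem with the hub itself. A quick way to compare the installed versions without importing either package (importing a mismatched torchvision is itself what raises the error) is to read the package metadata; the helper below is a hypothetical sketch using only the standard library:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_versions(pkgs):
    """Return {package: version string or None} without importing the packages."""
    out = {}
    for pkg in pkgs:
        try:
            out[pkg] = version(pkg)
        except PackageNotFoundError:
            out[pkg] = None
    return out

# Compare against the torch/torchvision compatibility matrix for your release
print(installed_versions(["torch", "torchvision"]))
```

If the two versions were not built for each other (e.g. after a partial upgrade), reinstalling a matching pair is the usual fix.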
### Versions
PyTorch version: 2.4.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.6 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.29.0
Libc version: N/A
Python version: 3.11.0 | packaged by conda-forge | (main, Jan 14 2023, 12:26:40) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-14.6-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Max
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
cc @nairbv @NicolasHug @vmoens @jdsgomes | triaged,module: hub | low | Critical |
2,500,364,025 | pytorch | Add Validity Checks for Parameter Combinations and Input Shapes in torch.nn.MaxPool2d to Prevent Unexpected Outputs | ### 🐛 Describe the bug
When using `torch.nn.MaxPool2d`, specific parameter combinations (such as `kernel_size=2`, `stride=7`, `padding=1`, `dilation=(2, 8)`, `ceil_mode=True`) can result in an output of `-inf`. The following code example reproduces this issue:
```python
import torch
import torch.nn as nn
pool = nn.MaxPool2d(kernel_size=2, stride=7, padding=1, dilation=(2, 8), ceil_mode=True)
x = torch.tensor([[[[0.1772, 0.1772, 0.1772],
[0.1772, 0.1772, 0.1772],
[0.1772, 0.1772, 0.1772]],
[[0.1830, 0.1830, 0.1830],
[0.1830, 0.1830, 0.1830],
[0.1830, 0.1830, 0.1830]],
[[0.1841, 0.1841, 0.1841],
[0.1841, 0.1841, 0.1841],
[0.1841, 0.1841, 0.1841]]]])
output = pool(x)
print(output) # Output: tensor([[[[-inf]], [[-inf]], [[-inf]]]])
```
To enhance user experience and model robustness, it is recommended to add validity checks in `torch.nn.MaxPool2d` and other related layers. Specific suggestions include:
1. **Check output dimensions**: Before performing pooling operations, calculate the expected output dimensions and check if they are positive. If the output dimensions are zero or negative, immediately raise an error to prompt the user to adjust the parameters.
2. **Validate parameter combinations**: for example, check whether the combination of `dilation` and `padding` causes the pooling window to extend entirely beyond the input boundaries; if so, prompt the user to adjust these parameters. When `ceil_mode=True`, also warn if the result contains abnormal values such as `-inf`.
3. **Provide user-friendly error messages**: When detecting unreasonable parameter settings, provide detailed error messages and suggestions to help users understand and resolve the issue.
```python
import torch
import torch.nn as nn

def max_pool2d_check(input_shape, kernel_size, stride, padding, dilation):
# Ensure all parameters are tuples for consistent processing
if isinstance(kernel_size, int):
kernel_size = (kernel_size, kernel_size)
if isinstance(stride, int):
stride = (stride, stride)
if isinstance(padding, int):
padding = (padding, padding)
if isinstance(dilation, int):
dilation = (dilation, dilation)
# Calculate output dimensions
output_height = ((input_shape[-2] + 2 * padding[0] - dilation[0] * (kernel_size[0] - 1) - 1) // stride[0] + 1)
output_width = ((input_shape[-1] + 2 * padding[1] - dilation[1] * (kernel_size[1] - 1) - 1) // stride[1] + 1)
# Check if output dimensions are valid
if output_height <= 0 or output_width <= 0:
raise ValueError(f"Invalid pooling parameters with input shape {input_shape}: resulting output size ({output_height}, {output_width}) is invalid.")
pool = nn.MaxPool2d(kernel_size=2, stride=7, padding=1, dilation=(2, 8), ceil_mode=True)
# Test data
x = torch.tensor([[[[0.1772, 0.1772, 0.1772],
[0.1772, 0.1772, 0.1772],
[0.1772, 0.1772, 0.1772]],
[[0.1830, 0.1830, 0.1830],
[0.1830, 0.1830, 0.1830],
[0.1830, 0.1830, 0.1830]],
[[0.1841, 0.1841, 0.1841],
[0.1841, 0.1841, 0.1841],
[0.1841, 0.1841, 0.1841]]]])
input_shape = (1, 3, 3, 3)  # input tensor shape
kernel_size = 2
stride = 7
padding = 1
dilation = (2, 8)
max_pool2d_check(input_shape, kernel_size, stride, padding, dilation)
output = pool(x)
print(output)
```
For example, the check above would raise a `ValueError` for this parameter combination (the computed output width is 0) instead of silently producing `-inf`.
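For reference, the check above uses floor division, which matches `ceil_mode=False`; with `ceil_mode=True` the documented formula switches to ceiling division, with the extra rule that a window added by `ceil_mode` may not start entirely inside the padding. A standard-library sketch of that size computation (my reading of the documented behaviour, not PyTorch source):

```python
import math

def pool_output_size(size, kernel, stride, padding, dilation, ceil_mode):
    # Effective kernel extent once dilation is applied
    eff = dilation * (kernel - 1) + 1
    num = size + 2 * padding - eff
    out = (math.ceil(num / stride) if ceil_mode else num // stride) + 1
    # A window added by ceil_mode may not start entirely in the padding
    if ceil_mode and (out - 1) * stride >= size + padding:
        out -= 1
    return out

# Parameters from the failing example (3x3 input):
h = pool_output_size(3, 2, 7, 1, 2, ceil_mode=True)  # dilation 2 along height
w = pool_output_size(3, 2, 7, 1, 8, ceil_mode=True)  # dilation 8 along width
print(h, w)  # 1 1 -- the single window along the width lies wholly outside the input
```

This shows why the output is `-inf`: with `ceil_mode=True` the output size is 1x1, but the only pooling window along the width covers no real input elements, so the max is taken over padding alone.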
### Versions
pytorch 2.0.0
python 3.9
cuda 11.7
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,triaged | low | Critical |
2,500,388,723 | kubernetes | [Pod Auto Scaler] Why use Pod Request Resource As Base? | ### What would you like to be added?
Why was the Pod's resource request chosen as the basis for calculating utilization metrics? In our scenario, we usually want to set the scaling threshold based on the limit instead.
https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/podautoscaler/replica_calculator.go#L437
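For context, the HPA's documented scaling rule is `desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)`, with per-pod CPU utilization computed against the request. The toy sketch below (illustrative integer arithmetic, not the controller's actual code) shows how the same usage leads to different decisions if the limit were used as the base:

```python
def desired_replicas(current_replicas, usage_milli, base_milli, target_percent):
    # ceil(current * usage% / target%) using exact integer arithmetic
    num = current_replicas * usage_milli * 100
    den = base_milli * target_percent
    return -(-num // den)  # ceiling division

# 4 pods, each using 900m CPU; request=500m, limit=1000m; target 80% utilization
print(desired_replicas(4, 900, 500, 80))   # request as base -> 9 replicas
print(desired_replicas(4, 900, 1000, 80))  # limit as base   -> 5 replicas
```

With the request as the denominator, the same workload is judged at 180% utilization and scaled far more aggressively than it would be against the limit.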
### Why is this needed?
Support using the resource limit as the base for utilization calculations. | sig/autoscaling,kind/feature,lifecycle/rotten,needs-triage | low | Major |
2,500,404,665 | rust | [ICE]: `None` in compiler/rustc_middle/src/ty/sty.rs | ### Code
```rust
// crate `import`
#![feature(generic_const_exprs)]
pub struct Error(());
pub trait FromSlice: Sized {
const SIZE: usize = std::mem::size_of::<Self>();
fn validate_slice(bytes: &[[u8; Self::SIZE]]) -> Result<(), Error>;
}
```
```rust
// crate `compile`
struct Wrapper<const F: usize>(i64);
impl<const F: usize> import::FromSlice for Wrapper<F> {
fn validate_slice(_: &[[u8; Self::SIZE]]) -> Result<(), import::Error> {
Ok(())
}
}
```
Here is a [reproduction repository](https://github.com/jean-airoldie/generic_const_exprs_bug)
### Affected release channels
- [ ] Previous Stable
- [ ] Current Stable
- [ ] Current Beta
- [X] Current Nightly
### Rust Version
```Shell
rustc 1.83.0-nightly (94885bc69 2024-09-01)
binary: rustc
commit-hash: 94885bc699512cfee8560e73c2a01ee6b4b76563
commit-date: 2024-09-01
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
```
### Backtrace
<details>
```Shell
thread 'rustc' panicked at compiler/rustc_middle/src/ty/sty.rs:362:36:
called `Option::unwrap()` on a `None` value
stack backtrace:
0: 0x7f7b49be0fda - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h9e4ea8242375332b
1: 0x7f7b4a403317 - core::fmt::write::h9b917be92761bedd
2: 0x7f7b4b68e511 - std::io::Write::write_fmt::hf10f74df1e5b6fa4
3: 0x7f7b49be36ab - std::panicking::default_hook::{{closure}}::h799453937b6df850
4: 0x7f7b49be331e - std::panicking::default_hook::hc680c13beada54ab
5: 0x7f7b48d465bf - std[ecf59b95e6b43a12]::panicking::update_hook::<alloc[31dbd1cac8f6337c]::boxed::Box<rustc_driver_impl[f58cdf0c667c34ef]::install_ice_hook::{closure#0}>>::{closure#0}
6: 0x7f7b49be3fc7 - std::panicking::rust_panic_with_hook::h2e4a6bdb3bbbd61b
7: 0x7f7b49be3c53 - std::panicking::begin_panic_handler::{{closure}}::hc84e1fbc96f34988
8: 0x7f7b49be1489 - std::sys::backtrace::__rust_end_short_backtrace::h7aeaaa7122aa2277
9: 0x7f7b49be3954 - rust_begin_unwind
10: 0x7f7b46a7b8f3 - core::panicking::panic_fmt::hfbc6547a8c5ab97a
11: 0x7f7b46c56b4c - core::panicking::panic::h4bcfaeeec73b3c98
12: 0x7f7b4705ce29 - core::option::unwrap_failed::h5cf8ee76ee01cf0d
13: 0x7f7b4be5832a - <rustc_middle[71f8a8be38220e7e]::ty::sty::ParamConst>::find_ty_from_env.cold
14: 0x7f7b46a46ce3 - <rustc_trait_selection[ab06c6a7623855cc]::traits::fulfill::FulfillProcessor as rustc_data_structures[b731aafeb3707255]::obligation_forest::ObligationProcessor>::process_obligation
15: 0x7f7b4a6d0fc1 - <rustc_data_structures[b731aafeb3707255]::obligation_forest::ObligationForest<rustc_trait_selection[ab06c6a7623855cc]::traits::fulfill::PendingPredicateObligation>>::process_obligations::<rustc_trait_selection[ab06c6a7623855cc]::traits::fulfill::FulfillProcessor>
16: 0x7f7b46e7099c - rustc_traits[fb2d6fa60ddec9cc]::codegen::codegen_select_candidate
17: 0x7f7b4a6252f7 - rustc_query_impl[244de25550980a16]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[244de25550980a16]::query_impl::codegen_select_candidate::dynamic_query::{closure#2}::{closure#0}, rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 16usize]>>
18: 0x7f7b4a6251e7 - <rustc_query_impl[244de25550980a16]::query_impl::codegen_select_candidate::dynamic_query::{closure#2} as core[5fb974e6fca9f855]::ops::function::FnOnce<(rustc_middle[71f8a8be38220e7e]::ty::context::TyCtxt, (rustc_middle[71f8a8be38220e7e]::ty::ParamEnv, rustc_type_ir[6f674afaeb747360]::predicate::TraitRef<rustc_middle[71f8a8be38220e7e]::ty::context::TyCtxt>))>>::call_once
19: 0x7f7b4a6251a9 - <rustc_query_system[103c552ffa7387b6]::query::plumbing::execute_job_incr<rustc_query_impl[244de25550980a16]::DynamicConfig<rustc_query_system[103c552ffa7387b6]::query::caches::DefaultCache<rustc_middle[71f8a8be38220e7e]::ty::instance::InstanceKind, rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[244de25550980a16]::plumbing::QueryCtxt>::{closure#2}::{closure#2} as core[5fb974e6fca9f855]::ops::function::FnOnce<((rustc_query_impl[244de25550980a16]::plumbing::QueryCtxt, rustc_query_impl[244de25550980a16]::DynamicConfig<rustc_query_system[103c552ffa7387b6]::query::caches::DefaultCache<rustc_middle[71f8a8be38220e7e]::ty::instance::InstanceKind, rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 16usize]>>, false, false, false>), rustc_middle[71f8a8be38220e7e]::ty::instance::InstanceKind)>>::call_once
20: 0x7f7b4a623cf8 - rustc_query_system[103c552ffa7387b6]::query::plumbing::try_execute_query::<rustc_query_impl[244de25550980a16]::DynamicConfig<rustc_query_system[103c552ffa7387b6]::query::caches::DefaultCache<(rustc_middle[71f8a8be38220e7e]::ty::ParamEnv, rustc_type_ir[6f674afaeb747360]::predicate::TraitRef<rustc_middle[71f8a8be38220e7e]::ty::context::TyCtxt>), rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[244de25550980a16]::plumbing::QueryCtxt, true>
21: 0x7f7b4a622e0a - rustc_query_impl[244de25550980a16]::query_impl::codegen_select_candidate::get_query_incr::__rust_end_short_backtrace
22: 0x7f7b47768473 - rustc_ty_utils[6cd42ba9b28f8a5]::instance::resolve_instance_raw
23: 0x7f7b4a9e1469 - rustc_query_impl[244de25550980a16]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[244de25550980a16]::query_impl::resolve_instance_raw::dynamic_query::{closure#2}::{closure#0}, rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 32usize]>>
24: 0x7f7b4a9e46a3 - rustc_query_system[103c552ffa7387b6]::query::plumbing::try_execute_query::<rustc_query_impl[244de25550980a16]::DynamicConfig<rustc_query_system[103c552ffa7387b6]::query::caches::DefaultCache<rustc_middle[71f8a8be38220e7e]::ty::ParamEnvAnd<(rustc_span[610038f17dad7590]::def_id::DefId, &rustc_middle[71f8a8be38220e7e]::ty::list::RawList<(), rustc_middle[71f8a8be38220e7e]::ty::generic_args::GenericArg>)>, rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 32usize]>>, false, false, false>, rustc_query_impl[244de25550980a16]::plumbing::QueryCtxt, true>
25: 0x7f7b4a9e3371 - rustc_query_impl[244de25550980a16]::query_impl::resolve_instance_raw::get_query_incr::__rust_end_short_backtrace
26: 0x7f7b4743d3a6 - <rustc_middle[71f8a8be38220e7e]::ty::context::TyCtxt>::const_eval_resolve
27: 0x7f7b4b045f13 - rustc_const_eval[1913f98e05e774cc]::const_eval::eval_queries::eval_to_allocation_raw_provider
28: 0x7f7b4b043936 - rustc_query_impl[244de25550980a16]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[244de25550980a16]::query_impl::eval_to_allocation_raw::dynamic_query::{closure#2}::{closure#0}, rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 24usize]>>
29: 0x7f7b4b0438ef - <rustc_query_impl[244de25550980a16]::query_impl::eval_to_allocation_raw::dynamic_query::{closure#2} as core[5fb974e6fca9f855]::ops::function::FnOnce<(rustc_middle[71f8a8be38220e7e]::ty::context::TyCtxt, rustc_middle[71f8a8be38220e7e]::ty::ParamEnvAnd<rustc_middle[71f8a8be38220e7e]::mir::interpret::GlobalId>)>>::call_once
30: 0x7f7b4b03f9b1 - <rustc_query_system[103c552ffa7387b6]::query::plumbing::execute_job_incr<rustc_query_impl[244de25550980a16]::DynamicConfig<rustc_query_system[103c552ffa7387b6]::query::caches::DefaultCache<rustc_middle[71f8a8be38220e7e]::ty::ParamEnvAnd<rustc_middle[71f8a8be38220e7e]::mir::interpret::GlobalId>, rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[244de25550980a16]::plumbing::QueryCtxt>::{closure#2}::{closure#2} as core[5fb974e6fca9f855]::ops::function::FnOnce<((rustc_query_impl[244de25550980a16]::plumbing::QueryCtxt, rustc_query_impl[244de25550980a16]::DynamicConfig<rustc_query_system[103c552ffa7387b6]::query::caches::DefaultCache<rustc_middle[71f8a8be38220e7e]::ty::ParamEnvAnd<rustc_middle[71f8a8be38220e7e]::mir::interpret::GlobalId>, rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 24usize]>>, false, false, false>), rustc_middle[71f8a8be38220e7e]::ty::ParamEnvAnd<rustc_middle[71f8a8be38220e7e]::mir::interpret::GlobalId>)>>::call_once
31: 0x7f7b4b02aaae - rustc_query_system[103c552ffa7387b6]::query::plumbing::try_execute_query::<rustc_query_impl[244de25550980a16]::DynamicConfig<rustc_query_system[103c552ffa7387b6]::query::caches::DefaultCache<rustc_middle[71f8a8be38220e7e]::ty::ParamEnvAnd<rustc_middle[71f8a8be38220e7e]::mir::interpret::GlobalId>, rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[244de25550980a16]::plumbing::QueryCtxt, true>
32: 0x7f7b4b029fb6 - rustc_query_impl[244de25550980a16]::query_impl::eval_to_allocation_raw::get_query_incr::__rust_end_short_backtrace
33: 0x7f7b4b0260b3 - rustc_const_eval[1913f98e05e774cc]::const_eval::valtrees::eval_to_valtree
34: 0x7f7b4b025ec3 - <rustc_const_eval[1913f98e05e774cc]::provide::{closure#0} as core[5fb974e6fca9f855]::ops::function::FnOnce<(rustc_middle[71f8a8be38220e7e]::ty::context::TyCtxt, rustc_middle[71f8a8be38220e7e]::ty::ParamEnvAnd<rustc_middle[71f8a8be38220e7e]::mir::interpret::GlobalId>)>>::call_once
35: 0x7f7b4b025e7a - rustc_query_impl[244de25550980a16]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[244de25550980a16]::query_impl::eval_to_valtree::dynamic_query::{closure#2}::{closure#0}, rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 24usize]>>
36: 0x7f7b4b025e3b - <rustc_query_impl[244de25550980a16]::query_impl::eval_to_valtree::dynamic_query::{closure#2} as core[5fb974e6fca9f855]::ops::function::FnOnce<(rustc_middle[71f8a8be38220e7e]::ty::context::TyCtxt, rustc_middle[71f8a8be38220e7e]::ty::ParamEnvAnd<rustc_middle[71f8a8be38220e7e]::mir::interpret::GlobalId>)>>::call_once
37: 0x7f7b4b03f9b1 - <rustc_query_system[103c552ffa7387b6]::query::plumbing::execute_job_incr<rustc_query_impl[244de25550980a16]::DynamicConfig<rustc_query_system[103c552ffa7387b6]::query::caches::DefaultCache<rustc_middle[71f8a8be38220e7e]::ty::ParamEnvAnd<rustc_middle[71f8a8be38220e7e]::mir::interpret::GlobalId>, rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[244de25550980a16]::plumbing::QueryCtxt>::{closure#2}::{closure#2} as core[5fb974e6fca9f855]::ops::function::FnOnce<((rustc_query_impl[244de25550980a16]::plumbing::QueryCtxt, rustc_query_impl[244de25550980a16]::DynamicConfig<rustc_query_system[103c552ffa7387b6]::query::caches::DefaultCache<rustc_middle[71f8a8be38220e7e]::ty::ParamEnvAnd<rustc_middle[71f8a8be38220e7e]::mir::interpret::GlobalId>, rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 24usize]>>, false, false, false>), rustc_middle[71f8a8be38220e7e]::ty::ParamEnvAnd<rustc_middle[71f8a8be38220e7e]::mir::interpret::GlobalId>)>>::call_once
38: 0x7f7b4b02aaae - rustc_query_system[103c552ffa7387b6]::query::plumbing::try_execute_query::<rustc_query_impl[244de25550980a16]::DynamicConfig<rustc_query_system[103c552ffa7387b6]::query::caches::DefaultCache<rustc_middle[71f8a8be38220e7e]::ty::ParamEnvAnd<rustc_middle[71f8a8be38220e7e]::mir::interpret::GlobalId>, rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[244de25550980a16]::plumbing::QueryCtxt, true>
39: 0x7f7b4b5547d2 - rustc_query_impl[244de25550980a16]::query_impl::eval_to_valtree::get_query_incr::__rust_end_short_backtrace
40: 0x7f7b4ad1ed31 - rustc_middle[71f8a8be38220e7e]::query::plumbing::query_get_at::<rustc_query_system[103c552ffa7387b6]::query::caches::DefaultCache<rustc_middle[71f8a8be38220e7e]::ty::ParamEnvAnd<rustc_middle[71f8a8be38220e7e]::mir::interpret::GlobalId>, rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 24usize]>>>
41: 0x7f7b4ad1e76c - <rustc_middle[71f8a8be38220e7e]::ty::context::TyCtxt>::const_eval_global_id_for_typeck
42: 0x7f7b4ad1d70a - <rustc_middle[71f8a8be38220e7e]::ty::context::TyCtxt>::const_eval_resolve_for_typeck
43: 0x7f7b4ad1d3ca - <rustc_middle[71f8a8be38220e7e]::ty::consts::Const>::eval
44: 0x7f7b4ad1d2b1 - <rustc_trait_selection[ab06c6a7623855cc]::traits::normalize_param_env_or_error::{closure#0}::ConstNormalizer as rustc_type_ir[6f674afaeb747360]::fold::TypeFolder<rustc_middle[71f8a8be38220e7e]::ty::context::TyCtxt>>::fold_const
45: 0x7f7b4a6ae8f2 - rustc_trait_selection[ab06c6a7623855cc]::traits::normalize_param_env_or_error
46: 0x7f7b4683ac61 - rustc_hir_analysis[8068b17b6f30bb44]::check::check::check_impl_items_against_trait
47: 0x7f7b47a57802 - rustc_hir_analysis[8068b17b6f30bb44]::check::wfcheck::check_well_formed
48: 0x7f7b4a5fc547 - rustc_query_impl[244de25550980a16]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[244de25550980a16]::query_impl::check_well_formed::dynamic_query::{closure#2}::{closure#0}, rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 1usize]>>
49: 0x7f7b4a5ffc76 - rustc_query_system[103c552ffa7387b6]::query::plumbing::try_execute_query::<rustc_query_impl[244de25550980a16]::DynamicConfig<rustc_query_system[103c552ffa7387b6]::query::caches::VecCache<rustc_hir[701aa597c7450dd7]::hir_id::OwnerId, rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[244de25550980a16]::plumbing::QueryCtxt, true>
50: 0x7f7b4a5ff684 - rustc_query_impl[244de25550980a16]::query_impl::check_well_formed::get_query_incr::__rust_end_short_backtrace
51: 0x7f7b4a5fd2ff - rustc_hir_analysis[8068b17b6f30bb44]::check::wfcheck::check_mod_type_wf
52: 0x7f7b4a5fd11f - rustc_query_impl[244de25550980a16]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[244de25550980a16]::query_impl::check_mod_type_wf::dynamic_query::{closure#2}::{closure#0}, rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 1usize]>>
53: 0x7f7b4b36ab0c - rustc_query_system[103c552ffa7387b6]::query::plumbing::try_execute_query::<rustc_query_impl[244de25550980a16]::DynamicConfig<rustc_query_system[103c552ffa7387b6]::query::caches::DefaultCache<rustc_span[610038f17dad7590]::def_id::LocalModDefId, rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[244de25550980a16]::plumbing::QueryCtxt, true>
54: 0x7f7b4b36b7db - rustc_query_impl[244de25550980a16]::query_impl::check_mod_type_wf::get_query_incr::__rust_end_short_backtrace
55: 0x7f7b4a6027fd - rustc_hir_analysis[8068b17b6f30bb44]::check_crate
56: 0x7f7b4a615391 - rustc_interface[de5c4d81e4829e2d]::passes::run_required_analyses
57: 0x7f7b4b1cc1de - rustc_interface[de5c4d81e4829e2d]::passes::analysis
58: 0x7f7b4b1cc1b1 - rustc_query_impl[244de25550980a16]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[244de25550980a16]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 1usize]>>
59: 0x7f7b4b5aa8cd - rustc_query_system[103c552ffa7387b6]::query::plumbing::try_execute_query::<rustc_query_impl[244de25550980a16]::DynamicConfig<rustc_query_system[103c552ffa7387b6]::query::caches::SingleCache<rustc_middle[71f8a8be38220e7e]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[244de25550980a16]::plumbing::QueryCtxt, true>
60: 0x7f7b4b5aa57a - rustc_query_impl[244de25550980a16]::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
61: 0x7f7b4b1ae72c - rustc_interface[de5c4d81e4829e2d]::interface::run_compiler::<core[5fb974e6fca9f855]::result::Result<(), rustc_span[610038f17dad7590]::ErrorGuaranteed>, rustc_driver_impl[f58cdf0c667c34ef]::run_compiler::{closure#0}>::{closure#1}
62: 0x7f7b4b26e550 - std[ecf59b95e6b43a12]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[de5c4d81e4829e2d]::util::run_in_thread_with_globals<rustc_interface[de5c4d81e4829e2d]::util::run_in_thread_pool_with_globals<rustc_interface[de5c4d81e4829e2d]::interface::run_compiler<core[5fb974e6fca9f855]::result::Result<(), rustc_span[610038f17dad7590]::ErrorGuaranteed>, rustc_driver_impl[f58cdf0c667c34ef]::run_compiler::{closure#0}>::{closure#1}, core[5fb974e6fca9f855]::result::Result<(), rustc_span[610038f17dad7590]::ErrorGuaranteed>>::{closure#0}, core[5fb974e6fca9f855]::result::Result<(), rustc_span[610038f17dad7590]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[5fb974e6fca9f855]::result::Result<(), rustc_span[610038f17dad7590]::ErrorGuaranteed>>
63: 0x7f7b4b26ebba - <<std[ecf59b95e6b43a12]::thread::Builder>::spawn_unchecked_<rustc_interface[de5c4d81e4829e2d]::util::run_in_thread_with_globals<rustc_interface[de5c4d81e4829e2d]::util::run_in_thread_pool_with_globals<rustc_interface[de5c4d81e4829e2d]::interface::run_compiler<core[5fb974e6fca9f855]::result::Result<(), rustc_span[610038f17dad7590]::ErrorGuaranteed>, rustc_driver_impl[f58cdf0c667c34ef]::run_compiler::{closure#0}>::{closure#1}, core[5fb974e6fca9f855]::result::Result<(), rustc_span[610038f17dad7590]::ErrorGuaranteed>>::{closure#0}, core[5fb974e6fca9f855]::result::Result<(), rustc_span[610038f17dad7590]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[5fb974e6fca9f855]::result::Result<(), rustc_span[610038f17dad7590]::ErrorGuaranteed>>::{closure#1} as core[5fb974e6fca9f855]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
64: 0x7f7b4b26ef2b - std::sys::pal::unix::thread::Thread::new::thread_start::hfdfe3dde17dd08a8
65: 0x7f7b456a8144 - start_thread
at ./nptl/pthread_create.c:442:8
66: 0x7f7b457287dc - clone3
at ./misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
67: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: please attach the file at `/home/maxence/bug/generic_const_exprs_bug/rustc-ice-2024-09-02T09_05_03-321153.txt` to your bug report
note: compiler flags: --crate-type lib -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [codegen_select_candidate] computing candidate for `<Wrapper<F> as import::FromSlice>`
#1 [resolve_instance_raw] resolving instance `<Wrapper<F> as import::FromSlice>::SIZE`
#2 [eval_to_allocation_raw] const-evaluating + checking `import::FromSlice::validate_slice::{constant#0}`
#3 [eval_to_valtree] evaluating type-level constant
#4 [check_well_formed] checking that `<impl at compile/src/lib.rs:3:1: 3:54>` is well-formed
#5 [check_mod_type_wf] checking that types are well-formed in top-level module
#6 [analysis] running analysis passes on this crate
end of query stack
error: could not compile `compile` (lib)
```
</details>
### Anything else?
Issue is solved when `generic_const_exprs` is enabled in the `compile` crate. In other words, when a dependency requires `generic_const_exprs` but it is not set in the current library, this ICE occurs.
I'm guessing this is related to #125958 based on the other issues I've seen.
EDIT1: Updated to latest nightly
EDIT2: Added actual code example instead of just repository
EDIT3: Made backtrace collapsible and removed error output | I-ICE,T-compiler,C-bug,F-generic_const_exprs | low | Critical |
2,500,416,107 | ui | [bug]: new shadcn ui incompatible with fontSize: [string, string] | ### Describe the bug
config example:
```javascript
{
'3xs': ['0.5rem', '1rem'],
'2xs': ['0.625rem', '1rem'],
xs: ['0.875rem', '1.25rem'],
sm: ['1rem', '1.25rem'],
lg: ['1.125rem', '1.75rem'],
xl: ['1.25rem', '1.875rem'],
'2xl': ['1.5rem', '2rem'],
'3xl': ['1.75rem', '2.5rem'],
'4xl': ['2rem', '2.5rem'],
'5xl': ['3rem', '3.375rem'],
'6xl': ['4rem', '6.25rem'],
'7xl': ['5rem', '6.25rem'],
'8xl': ['5rem', '5rem'],
'9xl': ['7.5rem', '6.25rem'],
'10xl': ['8.75rem', '6.25rem'],
'11xl': ['12.5rem', '6.25rem'],
'12xl': ['15rem', '10.25rem']
},
```
output:
```
../../Users/poylar/AppData/Local/Temp/shadcn-XXXXXXLDBR8c/shadcn-tailwind.config.ts:35:10 - error TS1005: ';' expected.
35 '6xl': '4rem',
~
../../Users/poylar/AppData/Local/Temp/shadcn-XXXXXXLDBR8c/shadcn-tailwind.config.ts:36:10 - error TS1005: ';' expected.
36 '7xl': '5rem',
~
../../Users/poylar/AppData/Local/Temp/shadcn-XXXXXXLDBR8c/shadcn-tailwind.config.ts:37:10 - error TS1005: ';' expected.
37 '8xl': '5rem',
~
../../Users/poylar/AppData/Local/Temp/shadcn-XXXXXXLDBR8c/shadcn-tailwind.config.ts:38:10 - error TS1005: ';' expected.
38 '9xl': '7.5rem',
~
../../Users/poylar/AppData/Local/Temp/shadcn-XXXXXXLDBR8c/shadcn-tailwind.config.ts:39:11 - error TS1005: ';' expected.
39 '10xl': '8.75rem',
```
and
config:
```javascript
fontFamily: {
sans: ['var(--font-sans)'],
serif: ['var(--font-grotesk)']
},
```
output error:
```
Path: C:/Users/poylar/AppData/Local/Temp/shadcn-XXXXXXLDBR8c/shadcn-tailwind.config.ts
Text: "...s,tsx}',\n './components/**/*.{ts,tsx}',\n './app/**/*.{ts,tsx}',\n './src/**/*.{ts,tsx}'\n ],\n theme: {\n \tcontainer: {\n \t\tcenter: 'true',\n \t\tpadding: '2rem',\n \t\tscreens: {\n \t\t\t'2xl': '1600px',\n \t\t\t'3xl': '1840px'\n \t\t}\n \t},\n \tfontFamily: {\n \t\tsans: [var(--font-sans)],\n \t\tserif: [var(--font-grotesk)]\n \t},\n \tfontSize: {\n \t\t'3xs': '0.5rem',\n \t\t'2xs': '0.625rem',\n \t\txs: '0.875rem',\n \t\tsm: '1rem',\n \t\tlg: '1.125rem',\n \t\txl: '1.25rem',\n \t\t'2xl': '1.5rem',\n \t\t'3xl': '1.75rem',\n \t\t'4xl': '2rem',\n \t\t'5xl': '3rem',\n \t\t'6xl': '4rem',\n \t\t'7xl': '5rem',\n \t\t'8xl': '5rem',\n \t\t'9xl': '7.5rem',\n \t\t'10xl': '8.75rem',\n \t\t'11xl': '12.5rem',\n \t\t'12xl': '15rem'\n \t},\n \textend: {\n \t\tcolors: {\n \t\t\tblue: {\n \t\t\t\t'100': '#DCE5ED',\n \t\t\t\t'200': '#E8EAEE',\n \t\t\t\t'400': '#3B81FF',\n \t\t\t\t'500': '#005CFF',\n \t\t\t\t'900': '#474E54',\n \t\t\t\t'1000': '#242C34'\n \t\t\t}\n \t\t},\n \t\tkeyframes: {\n \t\t\t'accordion-down': {\n \t\t\t\tfrom: {\n \t\t\t\t\theight: '0'\n \t\t\t\t},\n \t\t\t\tto: {\n \t\t\t\t\theight: 'var(--radix-accordion-content-height)'\n \t\t\t\t}\n \t\t\t},\n \t\t\t'accordion-up': {\n \t\t\t\tfrom: {\n \t\t\t\t\theight: 'var(--radix-accordion-content-height)'\n \t\t\t\t},\n \t\t\t\tto: {\n \t\t\t\t\theight: '0'\n \t\t\t\t}\n \t\t\t},\n \t\t\tmarquee: {\n \t\t\t\tfrom: {\n \t\t\t\t\ttransform: 'translateX(0)'\n \t\t\t\t},\n \t\t\t\tto: {\n \t\t\t\t\ttransform: 'translateX(calc(-100% - var(--gap)))'\n \t\t\t\t}\n \t\t\t},\n \t\t\t'marquee-vertical': {\n \t\t\t\tfrom: {\n \t\t\t\t\ttransform: 'translateY(0)'\n \t\t\t\t},\n \t\t\t\tto: {\n \t\t\t\t\ttransform: 'translateY(calc(-100% - var(--gap)))'\n \t\t\t\t}\n \t\t\t}\n \t\t},\n \t\tanimation: {\n \t\t\t'accordion-down': 'accordion-down 0.2s ease-out',\n \t\t\t'accordion-up': 'accordion-up 0.2s ease-out',\n \t\t\tmarquee: 'marquee var(--duration) linear infinite',\n \t\t\t'marquee-vertical': 
'marquee-vertical var(--duration) linear infinite'\n \t\t}\n \t}\n },\n plugins: [require('tailwindcss-animate'), require('@tailwindcss/typography')]\n};\n\nexport default ..."
Stack: Error: Error replacing tree: The children of the old and new trees were expected to have the same count (4:9).
at ParentFinderReplacementNodeHandler.handleChildren (C:\Users\poylar\AppData\Local\npm-cache\_npx\16e1988cfd51310d\node_modules\ts-morph\dist\ts-morph.js:1436:19)
at ParentFinderReplacementNodeHandler.handleNode (C:\Users\poylar\AppData\Local\npm-cache\_npx\16e1988cfd51310d\node_modules\ts-morph\dist\ts-morph.js:1430:18)
at ParentFinderReplacementNodeHandler.handleNode (C:\Users\poylar\AppData\Local\npm-cache\_npx\16e1988cfd51310d\node_modules\ts-morph\dist\ts-morph.js:1570:19)
at doManipulation (C:\Users\poylar\AppData\Local\npm-cache\_npx\16e1988cfd51310d\node_modules\ts-morph\dist\ts-morph.js:2282:21)
at insertIntoParentTextRange (C:\Users\poylar\AppData\Local\npm-cache\_npx\16e1988cfd51310d\node_modules\ts-morph\dist\ts-morph.js:2317:5)
at ObjectLiteralExpression.replaceWithText (C:\Users\poylar\AppData\Local\npm-cache\_npx\16e1988cfd51310d\node_modules\ts-morph\dist\ts-morph.js:3644:9)
at zt (file:///C:/Users/poylar/AppData/Local/npm-cache/_npx/16e1988cfd51310d/node_modules/shadcn/dist/index.js:5:4822)
at async _t (file:///C:/Users/poylar/AppData/Local/npm-cache/_npx/16e1988cfd51310d/node_modules/shadcn/dist/index.js:5:3839)
at async Fe (file:///C:/Users/poylar/AppData/Local/npm-cache/_npx/16e1988cfd51310d/node_modules/shadcn/dist/index.js:5:3433)
at async ee (file:///C:/Users/poylar/AppData/Local/npm-cache/_npx/16e1988cfd51310d/node_modules/shadcn/dist/index.js:14:8868)
```
full config example:
```javascript
import type { Config } from 'tailwindcss';
const config: Config = {
content: [
'./pages/**/*.{ts,tsx}',
'./components/**/*.{ts,tsx}',
'./app/**/*.{ts,tsx}',
'./src/**/*.{ts,tsx}'
],
theme: {
container: {
center: true,
padding: '2rem',
screens: {
'2xl': '1600px',
'3xl': '1840px'
}
},
fontFamily: {
sans: ['var(--font-sans)'],
serif: ['var(--font-grotesk)']
},
fontSize: {
'3xs': ['0.5rem', '1rem'],
'2xs': ['0.625rem', '1rem'],
xs: ['0.875rem', '1.25rem'],
sm: ['1rem', '1.25rem'],
lg: ['1.125rem', '1.75rem'],
xl: ['1.25rem', '1.875rem'],
'2xl': ['1.5rem', '2rem'],
'3xl': ['1.75rem', '2.5rem'],
'4xl': ['2rem', '2.5rem'],
'5xl': ['3rem', '3.375rem'],
'6xl': ['4rem', '6.25rem'],
'7xl': ['5rem', '6.25rem'],
'8xl': ['5rem', '5rem'],
'9xl': ['7.5rem', '6.25rem'],
'10xl': ['8.75rem', '6.25rem'],
'11xl': ['12.5rem', '6.25rem'],
'12xl': ['15rem', '10.25rem']
},
extend: {
colors: {
blue: {
'100': '#DCE5ED',
'200': '#E8EAEE',
'400': '#3B81FF',
'500': '#005CFF',
'900': '#474E54',
'1000': '#242C34'
}
},
keyframes: {
'accordion-down': {
from: {
height: '0'
},
to: {
height: 'var(--radix-accordion-content-height)'
}
},
'accordion-up': {
from: {
height: 'var(--radix-accordion-content-height)'
},
to: {
height: '0'
}
},
marquee: {
from: {
transform: 'translateX(0)'
},
to: {
transform: 'translateX(calc(-100% - var(--gap)))'
}
},
'marquee-vertical': {
from: {
transform: 'translateY(0)'
},
to: {
transform: 'translateY(calc(-100% - var(--gap)))'
}
}
},
animation: {
'accordion-down': 'accordion-down 0.2s ease-out',
'accordion-up': 'accordion-up 0.2s ease-out',
marquee: 'marquee var(--duration) linear infinite',
'marquee-vertical': 'marquee-vertical var(--duration) linear infinite'
}
}
},
plugins: [require('tailwindcss-animate'), require('@tailwindcss/typography')]
};
export default config;
```
### Affected component/components
accordion
### How to reproduce
I was working with a dev project that used the previous shadcn cli on that computer and project
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Operating System:
──────────────────────────────────────────────────────────────────────────────────────────
Platform : Windows
Distro : Microsoft Windows 11 Pro
Release : 10.0.22631
Codename :
Kernel : 10.0.22631
Arch : x64
Hostname : poylar-home
Codepage : 866
Build : 22631
Hypervisor :
RemoteSession :
System:
──────────────────────────────────────────────────────────────────────────────────────────
Manufacturer : Gigabyte Technology Co., Ltd.
Model : B550 AORUS ELITE V2
Version : Default string
Virtual :
CPU:
──────────────────────────────────────────────────────────────────────────────────────────
Manufacturer : AMD
Brand : Ryzen 7 5800X 8-Core Processor
Family : 25
Model : 33
Stepping : 2
Speed : 3.8
Cores : 16
PhysicalCores : 8
PerformanceCores : 16
EfficiencyCores :
Processors : 1
Socket : AM4
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,500,434,916 | deno | Support `import foo from "./bar.txt" with { as: "bytes" }` | Would import the relevant file as a byte buffer (`Uint8Array`). Maybe also `import foo from "./bar.txt" with { as: "text" }`? | suggestion | medium | Major |
2,500,440,578 | flutter | [flutter_markdown] Screenreader: Ignore Bullet Points and Icons | ### Use case
We use the `MarkdownBody` Widget to show some intros for our App. The intro texts are stored in a .json file.
We carried out a professional accessibility test and one of the findings we noticed was that the intro text is read out in an unusual way.
Specifically, it was noted that icons are read aloud and that the bullet point is read aloud for every item in a list.
It would be desirable if the `MarkdownBody` widget offered a configuration option to influence the behavior of the screen reader.
In my opinion, an option to not read out icons when the screen reader is switched on would be useful.
In addition, not to explicitly read out the bullet points for enumerations.
Currently, the behavior of the screen reader cannot be influenced here.
### Proposal
Example Text:
```
🚀 **Improvements**
* Feature A
* Feature B
```
--> The screen reader reads the following:
_Rocket_, Improvements
_Dot_, Feature A
_Dot_, Feature B
The accessibility testers noted that reading out the term “rocket” does not offer any added value here, as it is only a visual element. These are not usually read aloud.
The same applies to reading out “dot”. It is implicitly clear that this is an enumeration and reading this out again is confusing.
| a: accessibility,package,c: proposal,team-ecosystem,P2,p: flutter_markdown,triaged-ecosystem | low | Minor |
2,500,456,291 | go | net/http: IdleConnTimeout is invalid for HTTP/2 | ### Go version
go version go1.22.2 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE='on'
GOARCH='amd64'
GOBIN=''
GOCACHE='/data/home/sakiluzhang/.cache/go-build'
GOENV='/data/home/sakiluzhang/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/data/home/sakiluzhang/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/data/home/sakiluzhang/go'
GOPRIVATE=''
GOPROXY='https://sakiluzhang:lFEajkXQ@goproxy.woa.com'
GOROOT='/data/home/sakiluzhang/go'
GOSUMDB='sum.woa.com+643d7a06+Ac5f5VOC4N8NUXdmhbm8pZSXIWfhek5JSmWdWrq7pLX4'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/data/home/sakiluzhang/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.22.2'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/data/home/sakiluzhang/DServer/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build929430334=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
```go
func (t *Transport) tryPutIdleConn(pconn *persistConn) error {
if t.DisableKeepAlives || t.MaxIdleConnsPerHost < 0 {
return errKeepAlivesDisabled
}
if pconn.isBroken() {
return errConnBroken
}
pconn.markReused()
t.idleMu.Lock()
defer t.idleMu.Unlock()
// HTTP/2 (pconn.alt != nil) connections do not come out of the idle list,
// because multiple goroutines can use them simultaneously.
// If this is an HTTP/2 connection being “returned,” we're done.
if pconn.alt != nil && t.idleLRU.m[pconn] != nil {
		// NOTE: here the function returns without updating pconn.idleAt.
return nil
}
......
```
### What did you see happen?
I expect the IdleConnTimeout check for a connection to start from the last time the connection was used, rather than from the first time.
I also expect the *http.Transport.idleLRU list to evict connections not only by MaxIdleConns (in many cases you cannot estimate a proper number; in China there may be tens of millions of users using our product simultaneously, producing even more HTTP requests) but also by IdleConnTimeout.
### What did you expect to see?
read above | NeedsInvestigation | low | Critical |
2,500,481,155 | next.js | BeforeInteractive script not firing after calling notFound() function | ### Link to the code that reproduces this issue
https://github.com/kacyee/before-interactive-2
### To Reproduce
1. Start the application in production mode (npm build -> npm start)
2. Go to http://localhost:3000
3. Look into the browser console; the following messages should be logged:
First -> "I should be beforeInteractive"
Second -> "Im in root layout!"
4. Go to http://localhost:3000/not-existing-route, console should log as in point 3.
5. Go to http://localhost:3000/pl/newsite; the console logs only "Im in root layout!" but not "I should be beforeInteractive" -
_**which is the problem I am describing**_
### Current vs. Expected behavior
Currently, when the notFound() function is called, scripts injected with the "beforeInteractive" strategy are for some reason not fired.
In that case, the CMP (Consent Management Platform) cookie consent is not initialized and some cookies are already generated.
To avoid that, all scripts with the "beforeInteractive" strategy should be fired on every single page, even on the not-found page.
On other pages it works fine, but not when we call the notFound() function.
Expected behavior:
scripts with strategy "beforeInteractive" are fired on every single page, even after calling notFound() function.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:19:22 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T8112
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 20.5.0
npm: 9.8.0
Yarn: 3.6.1
pnpm: 8.6.6
Relevant Packages:
next: 14.2.7 // Latest available version is detected (14.2.7).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: N/A
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Script (next/script)
### Which stage(s) are affected? (Select all that apply)
next build (local)
### Additional context
I tested few version from 14.2.3 to "15.0.0-canary.135", error occurs on all of them. | bug,Script (next/script),linear: next | low | Critical |
2,500,534,012 | vscode | Issue Reporter should include warning if VSCode is not pure | Some extensions modify our source code. When users with these extensions report a bug, it would be very helpful to know that the users build is corrupted/modified, as they often cause these issues.
You could use the integrity service to check if the installation is pure or not: [isPure](https://github.com/microsoft/vscode/blob/d09064eb4f816690d8ae9199f2d128b5053a54d2/src/vs/workbench/services/integrity/electron-sandbox/integrityService.ts#L65)
Here’s an example of an issue where a user had an extension that caused a bug. It was difficult to identify that the user had a corrupted build. https://github.com/microsoft/vscode/issues/227023 | feature-request,issue-reporter | low | Critical |
2,500,543,072 | three.js | Addons: Support overview in `WebGLRenderer` and `WebGPURenderer`. | ### Description
This issue is intended for tracking which addons are already supported in `WebGPURenderer`.
Not all open tasks must be implemented. Certain addons might not be ported to `WebGPURenderer` because they are outdated (like the old `BloomPass`) or replaced with different APIs (like `MaskPass`). Such addons are marked with a ✖️ sign.
If someone wants to migrate a component, it's best to leave a comment so we can add the name to the "Assignee" column. In this way, we avoid the situation where more than one developer works on the same task.
| Addon | WebGLRenderer | WebGPURenderer | Assignee
|----------|:-------------:|:------:|:------:|
| animation/AnimationClipCreator.js | ✅ | ✅ | -
| animation/CCDIKSolver.js | ✅ | ✅ | -
| animation/MMDAnimationHelper.js | ✅ | ✅ | -
| animation/MMDPhysics.js | ✅ | ✅ | -
| capabilities/WebGL.js | ✅ | ✅ | -
| capabilities/WebGPU.js | ✅ | ✅ | -
| controls/ArcballControls.js | ✅ | ✅ | -
| controls/DragControls.js | ✅ | ✅ | -
| controls/FirstPersonControls.js | ✅ | ✅ | -
| controls/FlyControls.js | ✅ | ✅ | -
| controls/MapControls.js | ✅ | ✅ | -
| controls/OrbitControls.js | ✅ | ✅ | -
| controls/PointerLockControls.js | ✅ | ✅ | -
| controls/TrackballControls.js | ✅ | ✅ | -
| controls/TransformControls.js | ✅ | ✅ | -
| csm/CSM.js | ✅ | ✅| -
| csm/CSMFrustum.js | ✅ | ✅ | -
| csm/CSMHelper.js | ✅ | ✅ | -
| csm/CSMShader.js | ✅ | ✅ | -
| curves/CurveExtras.js | ✅ | ✅ | -
| curves/NURBSCurve.js | ✅ | ✅ | -
| curves/NURBSSurface.js | ✅ | ✅ | -
| curves/NURBSUtils.js | ✅ | ✅ | -
| curves/NURBSVolume.js | ✅ | ✅ | -
| effects/AnaglyphEffect.js | ✅ | ✅ (as AnaglyphPassNode )| -
| effects/AsciiEffect.js | ✅ | ✅ | -
| effects/OutlineEffect.js | ✅ | ✅ (as ToonOutlinePassNode )| -
| effects/ParallaxBarrierEffect.js | ✅ | ✅ (as ParallaxBarrierPassNode )| -
| effects/PeppersGhostEffect.js | ✅ | ✅ | -
| effects/StereoEffect.js | ✅ | ✅ (as StereoPassNode) | -
| environments/DebugEnvironment.js | ✅ | ✅ | -
| environments/RoomEnvironment.js | ✅ | ✅ | -
| exporters/DRACOExporter.js | ✅ | ✅ | -
| exporters/EXRExporter.js | ✅ | ✅| -
| exporters/GLTFExporter.js | ✅ | ✅ | -
| exporters/KTX2Exporter.js | ✅ | ✅ | -
| exporters/MMDExporter.js | ✅ | ✅ | -
| exporters/OBJExporter.js | ✅ | ✅ | -
| exporters/PLYExporter.js | ✅ | ✅ | -
| exporters/STLExporter.js | ✅ | ✅ | -
| exporters/USDZExporter.js | ✅ | ✅ | -
| geometries/BoxLineGeometry.js | ✅ | ✅ | -
| geometries/ConvexGeometry.js | ✅ | ✅ | -
| geometries/DecalGeometry.js | ✅ | ✅ | -
| geometries/InstancedPointsGeometry.js | ✅ | ✅ | -
| geometries/ParametricGeometries.js | ✅ | ✅ | -
| geometries/ParametricGeometry.js | ✅ | ✅ | -
| geometries/RoundedBoxGeometry.js | ✅ | ✅ | -
| geometries/TeapotGeometry.js | ✅ | ✅ | -
| geometries/TextGeometry.js | ✅ | ✅ | -
| helpers/LightProbeHelper.js | ✅ | ✅ (as LightProbeHelperGPU) | -
| helpers/OctreeHelper.js | ✅ | ✅ | -
| helpers/PositionalAudioHelper.js | ✅ | ✅ | -
| helpers/RectAreaLightHelper.js | ✅ | ✅ | -
| helpers/TextureHelper.js | ✅ | ✅ (as TextureHelperGPU) | -
| helpers/VertexNormalsHelper.js | ✅ | ✅ | -
| helpers/VertexTangentsHelper.js | ✅ | ✅ | -
| helpers/ViewHelper.js | ✅ | ✅ | -
| interactive/HTMLMesh.js | ✅ | ✅ | -
| interactive/InteractiveGroup.js | ✅ | ✅ | -
| interactive/SelectionBox.js | ✅ | ✅ | -
| interactive/SelectionHelper.js | ✅ | ✅ | -
| lights/LightProbeGenerator.js | ✅ | ✅ | -
| lights/RectAreaLightTexturesLib.js | ✅ | ✅ | -
| lights/RectAreaLightUniformsLib.js | ✅ | ✅ | -
| lines/Lines.js | ✅ | ✅ (as webgpu/Lines2.js) | -
| lines/LineGeometry.js | ✅ | ✅ | -
| lines/LineMaterial.js | ✅ | ✅ (as Line2NodeMaterial) | -
| lines/LineSegments2.js | ✅ | ✅ (as webgpu/LineSegments2.js) | -
| lines/LineSegmentsGeometry.js | ✅ | ✅ | -
| lines/Wireframe.js | ✅ | ✅ (as webgpu/Wireframe.js) | -
| lines/WireframeGeometry2.js | ✅ | ✅ | -
| loaders/3DMLoader.js | ✅ | ✅ | -
| loaders/3MFLoader.js | ✅ | ✅ | -
| loaders/AMFLoader.js | ✅ | ✅ | -
| loaders/BVHLoader.js | ✅ | ✅ | -
| loaders/ColladaLoader.js | ✅ | ✅ | -
| loaders/DDSLoader.js | ✅ | ✅ | -
| loaders/DRACOLoader.js | ✅ | ✅ | -
| loaders/EXRLoader.js | ✅ | ✅ | -
| loaders/FBXLoader.js | ✅ | ✅ | -
| loaders/FontLoader.js | ✅ | ✅ | -
| loaders/GCodeLoader.js | ✅ | ✅ | -
| loaders/GLTFLoader.js | ✅ | ✅ | -
| loaders/HDRCubeTextureLoader.js | ✅ | ✅ | -
| loaders/IESLoader.js | ✅ | ✅ | -
| loaders/KMZLoader.js | ✅ | ✅ | -
| loaders/KTX2Loader.js | ✅ | ✅ | -
| loaders/KTXLoader.js | ✅ | ✅ | -
| loaders/LDrawLoader.js | ✅ | ✅ | -
| loaders/LottieLoader.js | ✅ | ✅ | -
| loaders/LUT3dlLoader.js | ✅ | ✅ | -
| loaders/LUTCubeLoader.js | ✅ | ✅ | -
| loaders/LUTImageLoader.js | ✅ | ✅ | -
| loaders/LWOLoader.js | ✅ | ✅ | -
| loaders/MaterialXLoader.js | ✅ | ✅ | -
| loaders/MD2Loader.js | ✅ | ✅ | -
| loaders/MDDLoader.js | ✅ | ✅ | -
| loaders/MTLLoader.js | ✅ | ✅ | -
| loaders/NRRDLoader.js | ✅ | ✅ | -
| loaders/OBJLoader.js | ✅ | ✅ | -
| loaders/PCDLoader.js | ✅ | ✅ | -
| loaders/PDBLoader.js | ✅ | ✅ | -
| loaders/PLYLoader.js | ✅ | ✅ | -
| loaders/PVRLoader.js | ✅ | ✅ | -
| loaders/RGBELoader.js | ✅ | ✅ | -
| loaders/RGBMLoader.js | ✅ | ✅ | -
| loaders/STLLoader.js | ✅ | ✅ | -
| loaders/SVGLoader.js | ✅ | ✅ | -
| loaders/TDSLoader.js | ✅ | ✅ | -
| loaders/TGALoader.js | ✅ | ✅ | -
| loaders/TIFFLoader.js | ✅ | ✅ | -
| loaders/TTFLoader.js | ✅ | ✅ | -
| loaders/UltraHDRLoader.js | ✅ | ✅ | -
| loaders/USDZLoader.js | ✅ | ✅ | -
| loaders/VOXLoader.js | ✅ | ✅ | -
| loaders/VRMLLoader.js | ✅ | ✅ | -
| loaders/VTKLoader.js | ✅ | ✅ | -
| loaders/XYZLoader.js | ✅ | ✅ | -
| materials/MeshGouraudMaterial.js | ✅ | ✖️ (won't be ported) | -
| materials/MeshPostProcessingMaterial.js | ✅ | ✖️ (won't be ported) | -
| math/Capsule.js | ✅ | ✅ | -
| math/ColorConverter.js | ✅ | ✅ | -
| math/ConvexHull.js | ✅ | ✅ | -
| math/ImprovedNoise.js | ✅ | ✅ | -
| math/Lut.js | ✅ | ✅ | -
| math/MeshSurfaceSampler.js | ✅ | ✅ | -
| math/OBB.js | ✅ | ✅ | -
| math/Octree.js | ✅ | ✅ | -
| math/SimplexNoise.js | ✅ | ✅ | -
| misc/ConvexObjectBreaker.js | ✅ | ✅ | -
| misc/GPUComputationRenderer.js | ✅ | ✖️ (replaced with compute shaders) | -
| misc/Gyroscope.js | ✅ | ✅ | -
| misc/MD2Character.js | ✅ | ✅ | -
| misc/MD2CharacterComplex.js | ✅ | ✅ | -
| misc/MorphAnimMesh.js | ✅ | ✅ | -
| misc/MorphBlendMesh.js | ✅ | ✅ | -
| misc/ProgressiveLightMap.js | ✅ | ✅ | -
| misc/RollerCoaster.js | ✅ | ✅ | -
| misc/Timer.js | ✅ | ✅ | -
| misc/TubePainter.js | ✅ | ✅ | -
| misc/Volume.js | ✅ | ✅ | -
| misc/VolumeSlice.js | ✅ | ✅ | -
| modifiers/CurveModifier.js | ✅ | ✅ (as CurveModifierGPU) | -
| modifiers/EdgeSplitModifier.js | ✅ | ✅ | -
| modifiers/SimplifyModifier.js | ✅ | ✅ | -
| modifiers/TessellateModifier.js | ✅ | ✅ | -
| objects/GroundedSkybox.js | ✅ | ✅ | -
| objects/Lensflare.js | ✅ | ✅ (as LensflareMesh) | -
| objects/MarchingCubes.js | ✅ | ✅ | -
| objects/Reflector.js | ✅ | ✅ (as ReflectorNode) | -
| objects/ReflectorForSSRPass.js | ✅ | ✖️ (use ReflectorNode instead) | -
| objects/Refractor.js | ✅ | ✅ (as viewportSharedTexture()) | -
| objects/ShadowMesh.js | ✅ | ✅ | -
| objects/Sky.js | ✅ | ✅ (as objects/SkyMesh) | -
| objects/Water.js | ✅ | ✅ (as objects/WaterMesh) | -
| objects/Water2.js | ✅ | ✅ (as objects/Water2Mesh) | -
| physics/AmmoPhysics.js | ✅ | ✅ | -
| physics/JoltPhysics.js | ✅ | ✅ | -
| physics/RapierPhysics.js | ✅ | ✅ | -
| postprocessing/AfterimagePass.js | ✅ | ✅ (as AfterImageNode) | -
| postprocessing/BloomPass.js | ✅ | ✖️ (use BloomNode instead) | -
| postprocessing/BokehPass.js | ✅ | ✅ (as DepthOfFieldNode) | -
| postprocessing/ClearPass.js | ✅ | ✖️ (won't be ported) | -
| postprocessing/CubeTexturePass.js | ✅ | ✖️ (won't be ported) | -
| postprocessing/DotScreenPass.js | ✅ | ✅ (as DotScreenNode) | -
| postprocessing/EffectComposer.js | ✅ | ✅ (as PostProcessing) | -
| postprocessing/FilmPass.js | ✅ | ✅ (as FilmNode) | -
| postprocessing/GlitchPass.js | ✅ | ✖️ (won't be ported) | -
| postprocessing/GTAOPass.js | ✅ | ✅ (as GTAONode) | -
| postprocessing/HalftonePass.js | ✅ | ✖️ (won't be ported) | -
| postprocessing/LUTPass.js | ✅ | ✅ (as Lut3DNode) | -
| postprocessing/MaskPass.js | ✅ | ✖️ (won't be ported) | -
| postprocessing/OutlinePass.js | ✅ | ✅ (as OutlineNode) | -
| postprocessing/OutputPass.js | ✅ | ✅ (as RenderOutputNode) | -
| postprocessing/Pass.js | ✅ | ✅ (as TempNode or TextureNode) | -
| postprocessing/RenderPass.js | ✅ | ✅ (as PassNode) | -
| postprocessing/RenderPixelatedPass.js | ✅ | ✅ (as PixelationNode) | -
| postprocessing/SAOPass.js | ✅ | ✖️ (use GTAONode instead) | -
| postprocessing/SavePass.js | ✅ | ✖️ (won't be ported) | -
| postprocessing/ShaderPass.js | ✅ | ✖️ (won't be ported) | -
| postprocessing/SMAAPass.js | ✅ | ✅ (as SMAANode) | -
| postprocessing/SSAARenderPass.js | ✅ | ✅ (as SSAAPassNode) | -
| postprocessing/SSAOPass.js | ✅ | ✖️ (use GTAONode instead) | -
| postprocessing/SSRPass.js | ✅ | ✅ (as SSRNode) | -
| postprocessing/TAARenderPass.js | ✅ | ✅ (as TRAAPassNode) | -
| postprocessing/TexturePass.js | ✅ | ✅ (as TextureNode) | -
| postprocessing/UnrealBloomPass.js | ✅ | ✅ (as BloomNode) | -
| renderers/CSS2DRenderer.js | ✅ | ✅ | -
| renderers/CSS3DRenderer.js | ✅ | ✅ | -
| renderers/Projector.js | ✅ | ✅ | -
| renderers/SVGRenderer.js | ✅ | ✅ | -
| shaders | ✅ | ✖️ (only dependencies are ported) | -
| textures/FlakesTexture.js | ✅ | ✅ | -
| utils/BufferGeometryUtils.js | ✅ | ✅ | -
| utils/GeometryCompressionUtils.js | ✅ | ✅ | -
| utils/GeometryUtils.js | ✅ | ✅ | -
| utils/LDrawUtils.js | ✅ | ✅ | -
| utils/SceneUtils.js | ✅ | ✅ | -
| utils/ShadowMapViewer.js | ✅ | ✅ (as ShadowMapViewerGPU) | -
| utils/SkeletonUtils.js | ✅ | ✅ | -
| utils/SortUtils.js | ✅ | ✅ | -
| utils/UVsDebug.js | ✅ | ✅ | -
| utils/WebGLTextureUtils.js | ✅ | ✅ (as WebGPUTextureUtils) | -
| utils/WorkerPool.js | ✅ | ✅ | -
| webxr/ARButton.js | ✅ | ✅ | -
| webxr/OculusHandModel.js | ✅ | ✅ | -
| webxr/OculusHandPointerModel.js | ✅ | ✅ | -
| webxr/Text2D.js | ✅ | ✅ | -
| webxr/VRButton.js | ✅ | ✅ | -
| webxr/XRButton.js | ✅ | ✅ | -
| webxr/XRControllerModelFactory.js | ✅ | ✅ | -
| webxr/XREstimatedLight.js | ✅ | ❌ (dependency to WebGLRenderer) | -
| webxr/XRHandMeshModel.js | ✅ | ✅ | -
| webxr/XRHandModelFactory.js | ✅ | ✅ | -
| webxr/XRHandPrimitiveModel.js | ✅ | ✅ | -
| webxr/XRPlanes.js | ✅ | ✅ | -
| WebGPU | low | Critical |
2,500,554,275 | flutter | [web][mobile browser]: jittery behavior upon scrolling slowly. | ### Steps to reproduce
1. Run the code snippet on flutter web
2. Scroll the list very slowly.
### Expected results
List is scrolling smoothly
### Actual results
Widgets are jittering up and down.
### Code sample
<details open><summary>Code sample</summary>
```dart
Scaffold(
appBar: AppBar(
title: const Text('Home'),
),
body: ListView.builder(
itemCount: 100,
itemBuilder: (context, index) {
return Column(
children: List.generate(100, (index){
return Text('Item with some text with number $index');
}),
);
}
),
);
//Or this
Scaffold(
appBar: AppBar(
title: const Text('Home'),
),
body: ListView.builder(
itemCount: 100,
itemBuilder: (context, index) {
return Column(
children: List.generate(100, (index) {
return ListTile(
title: Text('Item with index number $index'),
leading: const Icon(Icons.ac_unit),
);
}),
);
}),
);
```
</details>
### Screenshots or Video
<details open>
<summary>
If you can, play back the video full screen on a vertical monitor; the effect will be more noticeable. SingleChildScrollView has the same effect, indicating something wrong with the web layout. It is as if, when I scroll up by one pixel, not all widgets move up by one pixel: some stay, some scroll. But I am not sure. Even when scrolling faster, this could make widgets jump up and down very quickly, which is not noticeable as such but makes the widgets a little blurry. When the user scrolls and stops but keeps a finger on the screen, it might scroll by a few pixels, causing everything to jitter. Tested on Android web, Chrome and Samsung Internet.
</summary>
https://github.com/user-attachments/assets/863b3658-c7da-4caa-a647-71e68f9473b8
https://github.com/user-attachments/assets/4bad31ca-5230-4c55-8b8f-69442636eb52
</details>
### Logs
No logs or errors.
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.24.1, on Microsoft Windows [Version 10.0.19045.4780], locale lv-LV)
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[√] Chrome - develop for the web
[√] Visual Studio - develop Windows apps (Visual Studio Build Tools 2022 17.5.5)
[√] Android Studio (version 2022.2)
[√] VS Code (version 1.92.2)
[√] Connected device (4 available)
[√] Network resources
```
</details>
| engine,platform-web,has reproducible steps,P2,browser: chrome-android,team-web,triaged-web,found in release: 3.24,found in release: 3.25 | low | Major |
2,500,555,793 | vscode | Task Smoke Tests on Windows time out after build agent update | These started failing after an update to the build agent which shouldn't be related, but likely triggered a timing issue:
```
2) VSCode Smoke Tests (Electron)
Task
Task Quick Pick
Tasks: Run Task
icon - icon only:
Error: Timeout: get element '.single-terminal-tab .codicon' after 20 seconds.
at Code.poll (D:\a\_work\1\s\test\automation\out\code.js:204:23)
at async Code.waitForElement (D:\a\_work\1\s\test\automation\out\code.js:163:16)
at async Terminal.assertTabExpected (D:\a\_work\1\s\test\automation\out\terminal.js:227:17)
at async Terminal.assertSingleTab (D:\a\_work\1\s\test\automation\out\terminal.js:158:9)
at async Task.assertTasks (D:\a\_work\1\s\test\automation\out\task.js:37:17)
at async Context.<anonymous> (out\areas\task\task-quick-pick.test.js:43:17)
3) VSCode Smoke Tests (Electron)
Task
Task Quick Pick
Tasks: Run Task
icon - color only:
Error: Timeout: get element '.single-terminal-tab' after 20 seconds.
at Code.poll (D:\a\_work\1\s\test\automation\out\code.js:204:23)
at async Code.waitForElement (D:\a\_work\1\s\test\automation\out\code.js:163:16)
at async Terminal.assertTabExpected (D:\a\_work\1\s\test\automation\out\terminal.js:223:17)
at async Terminal.assertSingleTab (D:\a\_work\1\s\test\automation\out\terminal.js:158:9)
at async Task.assertTasks (D:\a\_work\1\s\test\automation\out\task.js:37:17)
at async Context.<anonymous> (out\areas\task\task-quick-pick.test.js:48:17)
4) VSCode Smoke Tests (Electron)
Task
Task Quick Pick
Tasks: Run Task
icon - icon & color:
Error: Timeout: get element '.single-terminal-tab' after 20 seconds.
at Code.poll (D:\a\_work\1\s\test\automation\out\code.js:204:23)
at async Code.waitForElement (D:\a\_work\1\s\test\automation\out\code.js:163:16)
at async Terminal.assertTabExpected (D:\a\_work\1\s\test\automation\out\terminal.js:223:17)
at async Terminal.assertSingleTab (D:\a\_work\1\s\test\automation\out\terminal.js:158:9)
at async Task.assertTasks (D:\a\_work\1\s\test\automation\out\task.js:37:17)
at async Context.<anonymous> (out\areas\task\task-quick-pick.test.js:53:17)
```
https://dev.azure.com/monacotools/Monaco/_build/results?buildId=290842&view=logs&j=4801dce2-64f3-53d6-b366-d49a1977c639&t=9ab437b3-ee70-5588-b120-442542e6d570
/cc @lszomoru
Revert: https://github.com/microsoft/vscode/pull/227353 | tasks,debt,smoke-test-failure | low | Critical |
2,500,596,906 | pytorch | Bert model training is not running on cpu (X86/ARM) with torch.compile() mode, generating value error. | ### 🐛 Describe the bug
I was trying to run BERT model training on an Ice Lake CPU in torch.compile mode, and it raises a ValueError; when I run it in eager mode, it runs fine without any error.
This is the error I am getting when running in compile mode:

This is the script I am running:

Similar behavior is also observed on a Graviton machine.
### Versions
Using torch==2.4; this is the requirements file I am using:
[requirement.txt](https://github.com/user-attachments/files/16836429/requirement.txt)
Code:-
```python
import os, time
from transformers import AutoTokenizer
from transformers import DataCollatorWithPadding
import evaluate
import numpy as np
import torch
from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer
from torch.profiler import profile, record_function, ProfilerActivity
from datasets import load_dataset

imdb = load_dataset("imdb", split=['train[:5]', 'test[:5]'])
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def preprocess_function(examples):
    return tokenizer(examples["text"], truncation=True, padding=True)

tokenized_imdb_tr = imdb[0].map(preprocess_function, batched=True)
tokenized_imdb_ts = imdb[1].map(preprocess_function, batched=True)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return accuracy.compute(predictions=predictions, references=labels)

id2label = {0: "NEGATIVE", 1: "POSITIVE"}
label2id = {"NEGATIVE": 0, "POSITIVE": 1}
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2, id2label=id2label, label2id=label2id
)
model = torch.compile(model, backend='inductor')
training_args = TrainingArguments(
    output_dir="my_awesome_model2",
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    num_train_epochs=1,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    # push_to_hub=True,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_imdb_tr,
    eval_dataset=tokenized_imdb_ts,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)
with profile(with_stack=True,
             profile_memory=True, record_shapes=True) as prof:
    print("starting the training")
    start_time = time.time()
    trainer.train()
    end_time = time.time()
print(prof.key_averages().table())
total_time = end_time - start_time
print("Training Time Taken with torch.compile:")
print(total_time)
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames @rec | triaged,oncall: pt2,module: dynamo | low | Critical |
2,500,646,509 | electron | [Bug]: DOMException: Could not start audio source | ### Preflight Checklist
- [X] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [X] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [X] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
31.4.0
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 10 Pro 22H2 19045.4780
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
Run successfully
### Actual Behavior
The following code fails on some of our computers with a **DOMException: Could not start audio source** exception:
```typescript
const checkMicrophone = async (): Promise<void> => {
  try {
    await navigator.mediaDevices.getUserMedia({ audio: true })
    console.log('SUCCESS')
  } catch (ex) {
    console.log(ex)
  }
}
```
### Testcase Gist URL
_No response_
### Additional Information
_No response_ | platform/windows,bug :beetle:,has-repro-comment,31-x-y | low | Critical |
2,500,682,261 | godot | Can't Open Project from Project manager and can't run any scene. | ### Tested versions
Godot_v4.3
### System information
AMD Ryzen 3 5300U with Radeon Graphics, Clock Speed 2.60 GHz, 8GB RAM
### Issue description
Everything was fine until I installed Unreal Engine 5 on my laptop: I was able to edit any project and run any scene. After installing UE5, whenever I try to open a project it closes automatically. After trying multiple times (10-20 times) it finally opened, but I couldn't run any scene. To run a scene, I have to press the Run button multiple times (10-20 times), and I have to do this every time I want to make or edit a game project. I don't know whether the problem is with my laptop or with Godot.
### Steps to reproduce
Click on Godot4.3.exe >>Click any project
### Minimal reproduction project (MRP)
[test_project.zip](https://github.com/user-attachments/files/16836653/test_project.zip)
| needs testing | low | Minor |
2,500,729,733 | ui | [feat]: Improve DX by Setting Default type='button' for Button Component to Prevent Unintended Form Submissions | ### Feature description
Currently, the Button component does not have a default value for the type attribute. This means that if a Button component is placed inside a form element without explicitly setting type='button', it will default to type='submit', following standard HTML behavior. Reference: [MDN - HTML button](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/button#type).
To improve developer experience (DX), it would be beneficial to set the default type of the Button component to 'button'. This change would help prevent accidental form submissions when a Button component is placed within a form element without the explicit type='button' attribute.
### Affected component/components
Button
### Additional Context
Additional details here...
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,500,749,181 | go | proposal: cmd/go: ignore empty functions in coverage report | ### Proposal Details
A common way to implement union types in Go is with an interface, e.g.:
```go
type Record struct {
    Title string
    Value valueType
}

type valueType interface {
    isValue()
}

type ValueFoo struct {
    Foo string
}

func (ValueFoo) isValue() {}

type ValueBar struct {
    Bar string
}

func (ValueBar) isValue() {}
```
This pattern is employed by protobuf, among others. The problem is that these empty functions negatively affect coverage, so in projects where coverage is enforced, it can lead to having to write nonsensical tests like this:
```go
func TestValueType(*testing.T) {
    tests := []valueType{
        ValueFoo{},
        ValueBar{},
    }
    for _, tt := range tests {
        tt.isValue()
    }
}
```
To avoid this, it would be great if empty functions were excluded from the coverage report. | Proposal | low | Minor |
2,500,818,953 | vscode | VS code takes a long time to boot. | ## Info
- I'm using Ubuntu 24.04
- The window is rendered quite quickly but is empty; it then takes approximately 10 seconds to "fill" it with content
- I've tried disabling all extensions and using an empty workspace, and it does not get faster
- I'm using the snap (classic) version, but the .deb version is not faster
## Startup Info
[startup_info.txt](https://github.com/user-attachments/files/16837349/startup_info.txt)
## Profile file
[prof-jIqwh3S8.main.cpuprofile.txt](https://github.com/user-attachments/files/16837291/prof-jIqwh3S8.main.cpuprofile.txt)
| freeze-slow-crash-leak | low | Minor |
2,500,838,264 | godot | [C#] Infinite recursion in exported property won't throw StackOverflowException as it should | ### Tested versions
- Reproducible in v4.3.stable.mono.official [77dcf97d8]
### System information
Godot v4.3.stable.mono - Windows 10.0.22631 - GLES3 (Compatibility) - NVIDIA GeForce RTX 3060 Ti (NVIDIA; 31.0.15.3742) - AMD Ryzen 7 5700X 8-Core Processor (16 Threads)
### Issue description
I accidentally made an infinite recursion in one of my projects because of a typo.
Godot hangs instead of throwing a `StackOverflowException` as it should.
When running the project:
The game hangs and the scene is never loaded. Nothing happens until the project is stopped.
If the script is annotated with `[Tool]`:
The editor hangs and must be killed with the Task Manager.
I didn't test with GDScript (sorry, I have never used that language).
### Steps to reproduce
* Add the below code to any node in a scene:
```C#
public partial class StackOverflow : Node2D
{
    [Export]
    public string ShouldThrowStackOverflow
    {
        set
        {
            // Typo: assigns to the property itself (infinite recursion)
            // instead of the backing field `shouldThrowStackOverflow`.
            ShouldThrowStackOverflow = value;
        }
        get
        {
            return shouldThrowStackOverflow;
        }
    }

    private string shouldThrowStackOverflow;
}
```
* Assign any value to this property in the editor
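For comparison (an added sketch, not from the original report): the same setter typo written in Python is surfaced by the runtime as a `RecursionError` rather than a hang, which is the kind of behavior this report expects from Godot/.NET (a `StackOverflowException`):

```python
class StackOverflowDemo:
    def __init__(self):
        self._value = ""

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, v):
        # Same typo as the C# repro: assigns to the property itself
        # instead of the backing field self._value.
        self.value = v

demo = StackOverflowDemo()
try:
    demo.value = "boom"
except RecursionError:
    print("RecursionError raised")  # the runtime reports the recursion instead of hanging
```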
### Minimal reproduction project (MRP)
Minimal reproduction project (C#)
[StackOverflow.zip](https://github.com/user-attachments/files/16837384/StackOverflow.zip)
| discussion,needs testing,topic:dotnet | low | Minor |
2,500,847,890 | godot | Impossible to use symbols as a comment marker | ### Tested versions
- 4.3.stable
### System information
Godot v4.3.stable - Windows 10.0.22631 - GLES3 (Compatibility) - Intel(R) Iris(R) Xe Graphics (Intel Corporation; 31.0.101.5186) - 12th Gen Intel(R) Core(TM) i7-1255U (12 Threads)
### Issue description
It seems the text editor does not support symbols (such as §, ?, !, !!) as a **_comment marker_**.

The comment keeps the normal comment color instead of using the marker's color, unlike a **_comment marker_** made of letters and numbers.

### Steps to reproduce
1. Open the customisation parameters of the text editor (_Editor Settings_ -> _Text Editor_ -> _Theme_)
2. Set the theme to Custom
3. Add symbols to the list (they are at the end)
4. Write comments using these symbols
### Minimal reproduction project (MRP)
N/A | discussion,topic:gdscript,topic:editor,documentation | low | Minor |
2,500,855,928 | rust | PowerPC `altivec` intrinsics are unsound due to subnormal flushing | I tried this code, compiled on a PowerPC target with `-O -Ctarget-feature=+altivec` (`-O -Ctarget-feature=-vsx` is required instead if the chosen PowerPC target uses a default target CPU of `pwr7` or later):
```rust
#![feature(stdarch_powerpc, powerpc_target_feature)]

#[cfg(target_arch = "powerpc64")]
use std::arch::powerpc64::{vector_float, vec_add};
#[cfg(target_arch = "powerpc")]
use std::arch::powerpc::{vector_float, vec_add};
use std::mem::transmute;

#[inline(never)]
fn print_vals(x: vector_float, i: usize, vals_i: u32) {
    println!("x={x:?} i={i} vals[i]={vals_i}");
}

const ZERO: vector_float = unsafe { transmute([0, 0, 0, 0]) };
const INC: vector_float = unsafe {
    transmute([
        f32::MIN_POSITIVE / 128.0,
        f32::MIN_POSITIVE / 128.0,
        f32::MIN_POSITIVE / 128.0,
        f32::MIN_POSITIVE / 128.0,
    ])
};
const TARGET: u128 = unsafe {
    transmute([
        f32::MIN_POSITIVE,
        f32::MIN_POSITIVE,
        f32::MIN_POSITIVE,
        f32::MIN_POSITIVE,
    ])
};

#[inline(never)]
#[target_feature(enable = "altivec")]
pub unsafe fn evil(vals: &[u32; 300]) {
    let mut x: vector_float = ZERO;
    let mut i: usize = 0;
    while unsafe { transmute::<vector_float, u128>(x) } != TARGET {
        print_vals(x, i, vals[i]);
        x = unsafe { vec_add(x, INC) };
        x = unsafe { vec_add(x, INC) };
        i += 2;
    }
    dbg!(unsafe { transmute::<vector_float, u128>(x) });
}

pub fn main() {
    let mut vals: [u32; 300] = [0; 300];
    for i in 0..300 {
        vals[i as usize] = i;
    }
    unsafe { evil(&vals) };
}

#[cfg(not(target_feature = "altivec"))]
compile_error!("-Ctarget-feature=+altivec required");
#[cfg(target_feature = "vsx")]
compile_error!("-Ctarget-feature=-vsx required");
```
I expected to see this happen: The program does not segfault.
Instead, this happened: The program segfaults as LLVM optimisations presume that subnormals are not flushed to zero, whereas the `altivec` instructions flush subnormals to zero. This only occurs when the `vsx` feature is not enabled; the `vsx` instructions do not flush subnormals and therefore do not have this issue.
This is the PowerPC equivalent of #129880.
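As a language-agnostic sketch of the arithmetic the repro depends on (added here for illustration; not part of the original report): under IEEE-754 gradual underflow, accumulating the subnormal increment `f32::MIN_POSITIVE / 128` reaches `f32::MIN_POSITIVE` after exactly 128 additions, so the loop terminates with `i = 128`, well inside the bounds of `vals`. Under flush-to-zero, each addition of the subnormal is discarded, the sum never leaves 0, and `i` runs past the end of `vals`; that is the out-of-bounds access that LLVM's no-flush assumption allowed it to stop checking.

```python
import struct

def f32(x: float) -> float:
    # Round a Python float (binary64) to the nearest IEEE-754 binary32 value.
    return struct.unpack('f', struct.pack('f', x))[0]

TINY = 2.0 ** -126        # f32::MIN_POSITIVE, the smallest normal binary32
inc = f32(TINY / 128.0)   # 2^-133, a subnormal increment
assert 0.0 < inc < TINY

x = 0.0
steps = 0
while x != TINY and steps < 1000:  # the cap guards against non-terminating FTZ behavior
    x = f32(x + inc)               # with gradual underflow every addition here is exact
    steps += 1

print(steps)  # 128: the sum reaches MIN_POSITIVE exactly; with flush-to-zero it never would
```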
### Meta
`rustc --version --verbose`:
```
rustc 1.82.0-nightly (a7399ba69 2024-08-31)
binary: rustc
commit-hash: a7399ba69d37b019677a9c47fe89ceb8dd82db2d
commit-date: 2024-08-31
host: x86_64-unknown-linux-gnu
release: 1.82.0-nightly
LLVM version: 19.1.0
``` | P-medium,T-compiler,I-unsound,O-PowerPC,C-bug,A-floating-point,I-miscompile | low | Critical |
2,500,873,356 | next.js | Next.JS 15 RC freezes occasionally when forwarding (`rewrites()`) requests to FastAPI endpoint | ### Link to the code that reproduces this issue
https://github.com/reasv/panoptikon-ui/blob/master/next.config.mjs
### To Reproduce
I do not have a minimal or reliable way of reproducing this issue as it seems to happen inconsistently, but this behaviour seems clearly incorrect to me, and perhaps we can isolate the problem causing it.
Above, I linked the configuration that seems to reproduce the issue with my FastAPI server, specifically the endpoint serving thumbnails:
https://github.com/reasv/panoptikon/blob/master/src/panoptikon/api/routers/items.py#L200
^ This is the line of code returning the response that occasionally makes Next.js hang. As you can see, it is just a file response.
When forwarding to this endpoint using the config I linked in the main link, the issue occurs, occasionally, and not consistently with the same files/same requests.
The FastAPI server itself prints this error:
```
RuntimeError: Response content shorter than Content-Length
```
This led me to blame FastAPI at first, but then I realized that FastAPI itself remains responsive throughout, while Next.js seemingly freezes, often not serving *any* requests.
### Current vs. Expected behavior
I expect Next.js to forward my requests to the server's API and return the responses. I am using forwards to avoid cross-origin requests and to allow for *runtime*-configurable API URLs through server-side-only env variables, which otherwise would have to be baked into the build.
This works most of the time using the forwards provided above. Unfortunately, at seemingly random times, the thumbnail endpoint, which returns image files, seems to have issues. The image either does not load on the client, or it half-loads with the remaining part being empty.
Next.js eventually prints ECONNRESET errors, but that happens only after a *while*; in the meantime, Next.js becomes varying degrees of unresponsive, sometimes to the point that I cannot even refresh the page.
Loading the image directly from the FastAPI endpoint (bypassing Next.js) right when this happens yields no hangups, and it has worked flawlessly whenever I have tried it.
Meanwhile, if I try to open a 'broken' image in a new tab using the forwarded Next.js link, it usually just hangs and loads for a while.
I want to stress that Next.js isn't printing errors while frozen either; the ECONNRESET errors only appear after a *while*. I believe something is definitely wrong, because in the frozen state often nothing loads at all, leaving the browser waiting indefinitely with no console feedback.
I am not 100% sure Next.js is fully to blame here, but it probably shouldn't be freezing like this no matter what.
Sometimes it isn't totally frozen and I can keep navigating, getting some broken images. I suspect the number of failed forwards might influence how responsive Next.js remains.
I should note that I am using the React Compiler as well as React 19 and Next.js 15 RC-0.
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 10 Pro
Available memory (MB): 64660
Available CPU cores: 32
Binaries:
Node: 18.17.0
npm: N/A
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.0-rc.0 // Latest available version is detected (15.0.0-rc.0).
eslint-config-next: N/A
react: 19.0.0-rc-f994737d14-20240522
react-dom: 19.0.0-rc-f994737d14-20240522
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Middleware
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
I've only run this in the environment specified, and I will reiterate that I am using React 19 RC and React Compiler. | bug,Middleware | low | Critical |
2,500,892,932 | opencv | Python bindings do not respect optional modules | ### System Information
OpenCV 4.10.0
### Detailed description
When building OpenCV with Python support, it seems that I am forced to compile a bunch of modules that I do not want in order to generate the Python bindings. So far, I have been forced to compile `imgproc`, `features2d`, `gapi`, and `dnn`.
### Steps to reproduce
When running CMake, define the `BUILD_LIST` as `core,python_bindings_generator,python_tests,python3` and watch it fail with `SymbolNotFound` and `TypeResolutionFailed` errors.
### Issue submission checklist
- [X] I report the issue, it's not a question
- [ ] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [ ] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc) | bug,incomplete | low | Critical |
2,500,945,033 | rust | Build failure when using LTO with static linking | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I have been experimenting with building crates as cdylibs/staticlibs out of tree, as a workaround to emulate cargo artifacts on stable Rust. I have created a repository with an example of this [here](https://github.com/burtonageo/rational_test_app). To minimally reproduce the error, run:
```bash
git clone --depth 1 https://github.com/burtonageo/rational_test_app
cd rational_test_app
cargo build --release
```
When building with static linking and LTO, I get the following error:
```
error: failed to get bitcode from object file for LTO (could not find requested section)
```
I have configured the `release` profile to use LTO. When building with dynamic linking and LTO, or static linking without LTO, the build succeeds. This can be reproduced with either `lto = "thin"` or `lto = true`. To reproduce the error, run `cargo build --release`; to disable static linking, use the `--no-default-features` flag.
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
cargo 1.80.1 (376290515 2024-07-16)
release: 1.80.1
commit-hash: 37629051518c3df9ac2c1744589362a02ecafa99
commit-date: 2024-07-16
host: aarch64-apple-darwin
libgit2: 1.7.2 (sys:0.18.3 vendored)
libcurl: 8.7.1 (sys:0.4.72+curl-8.6.0 system ssl:(SecureTransport) LibreSSL/3.3.6)
ssl: OpenSSL 1.1.1w 11 Sep 2023
os: Mac OS 14.6.1 [64-bit]
```
This issue is also seen on nightly:
`rustc +nightly --version --verbose`:
```
cargo 1.82.0-nightly (8f40fc59f 2024-08-21)
release: 1.82.0-nightly
commit-hash: 8f40fc59fb0c8df91c97405785197f3c630304ea
commit-date: 2024-08-21
host: aarch64-apple-darwin
libgit2: 1.8.1 (sys:0.19.0 vendored)
libcurl: 8.7.1 (sys:0.4.74+curl-8.9.0 system ssl:(SecureTransport) LibreSSL/3.3.6)
ssl: OpenSSL 1.1.1w 11 Sep 2023
os: Mac OS 14.6.1 [64-bit]
```
| A-linkage,T-compiler,C-bug,A-LTO | low | Critical |
2,500,953,488 | pytorch | F.one_hot() function on "mps" device crashes on Intel Mac | ### 🐛 Describe the bug
```python
import torch
import torch.nn.functional as F
# Assume num_anchors is the number of anchors, and topk_idxs is your index tensor
num_anchors = 10 # Example number, adjust according to actual situation
topk_idxs = torch.randint(low=0, high=num_anchors, size=(1, 5, 10)) # Randomly generate some index data
# Set the device to MPS
device = torch.device("mps")
topk_idxs = topk_idxs.to(device)
# Perform one-hot encoding and summation operation
try:
    one_hot = F.one_hot(topk_idxs, num_anchors)
    is_in_topk = one_hot.sum(-2)
    print("Operation successful, result shape:", is_in_topk.shape)
except Exception as e:
    print("Operation failed:", str(e))
```
Result:
```
/AppleInternal/Library/BuildRoots/4ff29661-3588-11ef-9513-e2437461156c/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSLibrary.mm:556: failed assertion `MPSKernel MTLComputePipelineStateCache unable to load function reduce_multiple_passes_axes_add.
Compiler encountered an internal error: (null)
'
```
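For reference (an added sketch, not part of the original report): the failing composite `F.one_hot(...).sum(-2)` is just a per-index occurrence count along the reduced axis, which is easy to sanity-check on CPU for a single 1-D row of indices:

```python
def one_hot_sum(topk_row, num_anchors):
    # Equivalent of F.one_hot(row, num_anchors).sum(-2) for one row of indices:
    # count how often each anchor index occurs.
    counts = [0] * num_anchors
    for idx in topk_row:
        counts[idx] += 1
    return counts

print(one_hot_sum([1, 3, 3, 0], 5))  # [1, 1, 0, 2, 0]
```

The failed assertion above names a Metal reduction kernel (`reduce_multiple_passes_axes_add`), which points at the `sum` reduction rather than this logic.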
### Versions
```
(torch_metal) william@wanbinwangdeMacBook-Pro main % pip show torch
Name: torch
Version: 2.2.2
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: packages@pytorch.org
License: BSD-3
Location: /opt/anaconda3/envs/torch_metal/lib/python3.11/site-packages
Requires: filelock, fsspec, jinja2, networkx, sympy, typing-extensions
Required-by: accelerate, torchaudio, torchvision
```
cc @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | triaged,module: intel,module: mps | low | Critical |
2,500,954,142 | deno | Next.js components are not importing types properly | I was excited to try out Deno on Next.js after hearing about it on the Syntax podcast. It runs well, but unfortunately I'm getting a couple of type errors when importing the Next core components like `next/link` and `next/image`.
This only happens in the editor with the LSP, so this might not be the right place to post it; let me know.
For good measure, in the project where I found this, I tried copying and pasting the compiler options into `deno.json`; it had no effect.
> JSX element type 'Link' does not have any construct or call signatures.deno-ts(2604)
> 'Link' cannot be used as a JSX component.
Its type 'typeof import("file:///Users/richardvanbergen/code/deno-nextjs-bug-example/node_modules/.deno/next@14.2.7/node_modules/next/link")' is not a valid JSX element type.deno-ts(2786)
> JSX element type 'Image' does not have any construct or call signatures.deno-ts(2604)
'Image' cannot be used as a JSX component.
> Its type 'typeof import("file:///Users/richardvanbergen/code/deno-nextjs-bug-example/node_modules/.deno/next@14.2.7/node_modules/next/image")' is not a valid JSX element type.deno-ts(2786)
Reproduction Steps:
1. Download the repo from the link below
2. Enable Deno from the menu command bar `Deno: Enable`
3. Ensure Deno Future is checked in the user settings
4. Just for good measure, add `DENO_FUTURE=1` explicitly in the settings.
5. Navigate to `src/app/page.tsx`.
6. Behold! Errors!
Version: Deno 1.46.2
Reproduction repo: [https://github.com/richardvanbergen/deno-nextjs-bug-example](https://github.com/richardvanbergen/deno-nextjs-bug-example)
Side note: I can deal with it, but is there a way to enable wildcard import aliases?
Side side note: there's also an error importing the styles. I personally don't care about that, but if it's easy to answer, it might be worth documenting. | bug,upstream,nextjs | medium | Critical |
2,500,979,743 | rust | `doc_auto_cfg` obfuscates documentation formatting | ### Problem
I generated documentation for my project, first using stable and then nightly (to document features). The documentation generated by the nightly toolchain is formatted very strangely. After removing and re-adding the `doc_auto_cfg` nightly feature, it seems that this feature is the cause:
| Stable & Beta | Nightly (with `doc_auto_cfg`) | Nightly (no `doc_auto_cfg`) |
|---|---|---|
|  |  |  |
While I would expect feature-dependent items to have their formatting altered, it seems that every item is affected here.
### Steps
1. Document an item that is both
- accessible through public modules
- re-exported publicly (e.g. in a prelude)
2. generate the doc using `cargo +nightly doc`, see result
3. enable the `doc_auto_cfg` feature, repeat step 2
### Possible Solution(s)
FWIW I'm guessing there is some hard-coded html/css somewhere to handle the small feature note; maybe the position of the note puts a constraint on text width?
### Notes
I'm guessing the issue occurs whenever there is a public re-export of an item that "has a public path".
I have a custom build script to conditionally enable some gated features depending on the toolchain used to build:
```rust
// build.rs
#[rustversion::nightly]
fn set_rustc_channel_cfg() -> &'static str {
    "nightly"
}

#[rustversion::beta]
fn set_rustc_channel_cfg() -> &'static str {
    "beta"
}

#[rustversion::stable]
fn set_rustc_channel_cfg() -> &'static str {
    "stable"
}

fn main() {
    println!("cargo:rustc-cfg={}", set_rustc_channel_cfg());
}
```
```rust
// lib.rs
// ...
#![allow(unexpected_cfgs)]
#![cfg_attr(nightly, feature(doc_auto_cfg))]
// ...
```
### Version
Nightly
```
cargo 1.83.0-nightly (8f40fc59f 2024-08-21)
release: 1.83.0-nightly
commit-hash: 8f40fc59fb0c8df91c97405785197f3c630304ea
commit-date: 2024-08-21
host: x86_64-unknown-linux-gnu
libgit2: 1.8.1 (sys:0.19.0 vendored)
libcurl: 8.9.0-DEV (sys:0.4.74+curl-8.9.0 vendored ssl:OpenSSL/1.1.1w)
ssl: OpenSSL 1.1.1w 11 Sep 2023
os: AlmaLinux 9.4.0 [64-bit]
```
| C-bug,A-rustdoc-ui,T-rustdoc-frontend | low | Minor |
2,501,008,737 | kubernetes | kubectl get nodes with JSONPath doesn't report node.kubernetes.io/unschedulable taint | ### What happened?
I cordoned a node, then ran the following command to list all nodes that have the `node.kubernetes.io/unschedulable` taint:
```bash
kubectl get nodes -o jsonpath="{.items[?(@.spec.taints[].key=='node.kubernetes.io/unschedulable')].metadata.name}"
```
the cordoned node was not included in the list, even though when I describe the node I can see that it does have the taint and the field `.spec.unschedulable` is set to `true`.
### What did you expect to happen?
I expected to get `metadata.name` of the node that has been marked as unschedulable instead.
### How can we reproduce it (as minimally and precisely as possible)?
### Common
1. Cordon a node.
### Test 1
2. Run the command:
```bash
kubectl get nodes -o jsonpath="{.items[?(@.spec.taints[].key=='node.kubernetes.io/unschedulable')].metadata.name}"
```
3. The node is **not** listed.
### Test 2
4. Run the command:
```bash
kubectl get nodes -o jsonpath="{.items[?(@.spec.unschedulable)].metadata.name}"
```
5. The node's `metadata.name` is printed.
### Test 3
6. Run the command:
```bash
kubectl get nodes -ojsonpath='{}' | jq -r '.items[] | select(try .spec.taints[].key == "node.kubernetes.io/unschedulable") .metadata.name'
```
7. The node's `metadata.name` is printed.
### Anything else we need to know?
The following also works:
```bash
kubectl get nodes -ojson | jq -r '.items[] | select(try .spec.taints[].key == ("node.kubernetes.io/unschedulable")) .metadata.name'
```
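For clarity (an added sketch with hypothetical node data, not taken from the cluster above), the selection logic the jq pipelines implement, and which the kubectl JSONPath filter fails to reproduce for the nested `taints` array, is simply: keep items where any taint key matches.

```python
import json

# Hypothetical miniature of `kubectl get nodes -o json` output.
nodes = json.loads("""
{"items": [
  {"metadata": {"name": "worker1"},
   "spec": {"unschedulable": true,
            "taints": [{"key": "node.kubernetes.io/unschedulable",
                        "effect": "NoSchedule"}]}},
  {"metadata": {"name": "worker2"}, "spec": {}}
]}
""")

def unschedulable_nodes(doc):
    # Same selection as:
    # jq '.items[] | select(try .spec.taints[].key == "node.kubernetes.io/unschedulable") .metadata.name'
    return [
        item["metadata"]["name"]
        for item in doc["items"]
        if any(
            t.get("key") == "node.kubernetes.io/unschedulable"
            for t in item.get("spec", {}).get("taints", [])
        )
    ]

print(unschedulable_nodes(nodes))  # ['worker1']
```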
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.31.0
Kustomize Version: v5.4.2
Server Version: v1.29.3
```
</details>
### Cloud provider
<details>
Kubeadm cluster
</details>
### OS version
<details>
```console
Client running on macos, Kubernetes API server running on:
$ cat /etc/os-release
PRETTY_NAME="Ubuntu 24.04 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04 LTS (Noble Numbat)"
VERSION_CODENAME=noble
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=noble
LOGO=ubuntu-logo
$ uname -a
Linux master1 6.8.0-41-generic #41-Ubuntu SMP PREEMPT_DYNAMIC Fri Aug 2 23:26:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/cli,lifecycle/rotten,needs-triage | low | Critical |