id | repo | title | body | labels | priority | severity
|---|---|---|---|---|---|---|
2,472,828,135 | pytorch | [Bug Report] when using % in torch.jit.script function, the output conforms to the form of C++ % operation, rather than Python % operation. | ### 🐛 Describe the bug
By testing with the following code, `angle_pi` and `angle_pi_jit` output different results
````python
import torch
angle = torch.zeros(10,device='cpu').uniform_(-7, -3)
def wrap_to_pi(angles: torch.Tensor) -> torch.Tensor:
angles = angles.clone()
angles %= 2 * torch.pi
angles -= 2 * torch.pi * (angles > torch.pi)
return angles
angle_pi = wrap_to_pi(angle)
@torch.jit.script
def wrap_to_pi_jit(angles: torch.Tensor) -> torch.Tensor:
angles = angles.clone()
angles %= 2 * torch.pi
angles -= 2 * torch.pi * (angles > torch.pi)
return angles
angle_pi_jit = wrap_to_pi_jit(angle)
print("angle:", angle)
print("angle_pi:", angle_pi)
print("angle_pi_jit:", angle_pi_jit)
````
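For context, the observed gap matches the difference between Python's floor-style modulo and C/C++'s truncation-style modulo (`fmod`) for negative operands. A plain-Python sketch of the two conventions (no torch required):

```python
import math

two_pi = 2 * math.pi

x = -7.0
py_mod = x % two_pi           # Python floor-style: sign follows the divisor -> positive (~5.566)
c_mod = math.fmod(x, two_pi)  # C/C++ truncation-style: sign follows the dividend -> negative (~-0.717)

print(py_mod, c_mod)
# For negative inputs the two results differ by exactly one period (2*pi),
# which is consistent with wrap_to_pi and its jit.script version disagreeing above.
```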

### Versions
Also tested on PyTorch versions 1.13.1+cu117 and 2.2.2+cu118.
Collecting environment information...
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4080
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900K
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5800.0000
CPU min MHz: 800.0000
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.0
[pip3] torch==2.3.1
[pip3] triton==2.3.1
[conda] numpy 2.0.0 pypi_0 pypi
[conda] torch 2.3.1 pypi_0 pypi
[conda] triton 2.3.1 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit,actionable | low | Critical |
2,472,831,700 | flutter | `Page.canPop` is not reflected in app bar | ### Steps to reproduce
Create a `Navigator` with `pages` where the second page in the stack (the currently visible one) has a `canPop=false` property.
See the example in https://dartpad.dev/?id=5adfc97d87097d8f41e59c2225c37852
### Expected results
I would expect the `canPop` flag to influence the app bar's (`AppBar` or `CupertinoNavigationBar`) back button (via `automaticallyImplyLeading=true`).
### Actual results
The back button is always shown and does not update with the default Flutter 3.24.0.
When using a custom `PageRoute` subclass which forwards the `Page.canPop` property (see `MyMaterialPage` in the example), the adaptive back button works with the `CupertinoNavigationBar`. But the Material's `AppBar` example (after commenting this in) does not update accordingly.
EDIT: For the Material `AppBar` to update I would have to override `impliesAppBarDismissal` on the route (forwarding likewise to `_page.canPop`).
### Code sample
<details open><summary>Code sample</summary>
See the example in https://dartpad.dev/?id=5adfc97d87097d8f41e59c2225c37852
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
### Cupertino widgets + stock MaterialPage

Back button shown though `canPop=false`
### Cupertino widgets + fixed MaterialPage / PageRoute subclass

Now the back button reacts to the `Page`'s `onPop`
### Material widgets + fixed MaterialPage as above

The change in `onPop` is not reflected in Material's `AppBar`
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
Tested locally with
```console
Flutter (Channel stable, 3.24.0, on macOS 14.6.1 23G93 darwin-arm64, locale en-DE)
```
but the same issue can be seen in DartPad.
</details>
| framework,f: routes,has reproducible steps,P3,team-framework,triaged-framework,found in release: 3.24,found in release: 3.25 | low | Minor |
2,472,847,243 | go | x/pkgsite: report whether a subpackage is a command | ### What is the URL of the page with the issue?
https://pkg.go.dev/golang.org/x/perf, for example.
### Screenshot
<img width="196" alt="image" src="https://github.com/user-attachments/assets/065fb4a6-661e-45eb-b064-4f68ef5d24a0">
### What did you expect to see?
Subpackages that are commands are reported as such with a `command` chip (similar to search results / the package page).
### What did you see instead?
Only modules are reported with a `module` chip.
---
I'm working on https://github.com/avamsi/gobin and would love this information! :)
I currently don't process the subpackages (well, other than those that are inlined in the search results) but would love to, and if I'm not wrong, the only way to do this today is to hit the package pages for all the subpackages. | FeatureRequest,pkgsite,FixPending | low | Minor |
2,472,890,566 | PowerToys | New update failed! and the system tray keyboard options popup window opens on the left. | ### Microsoft PowerToys version
0.78.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
System tray interaction
### Steps to reproduce
I had been postponing updating the application for a long time, and today I decided to update it. Since it was taking a long time, I got up from my computer. When I got back, I restarted the computer, and the keyboard popup window in the system tray opened on the left as in the screenshot below. Also, the update seems to have failed.
### ✔️ Expected Behavior
I would expect it to revert to the previous update.
### ❌ Actual Behavior
From what I understand, some things were changed and the update was left unfinished.
### Other Software
Windows 11 23H2 Build: 22631.3880
<a href="https://imgur.com/IF11HQJ"><img src="https://i.imgur.com/IF11HQJ.png" title="source: imgur.com" /></a>
<a href="https://imgur.com/NAca5yN"><img src="https://i.imgur.com/NAca5yN.png" title="source: imgur.com" /></a> | Issue-Bug,Needs-Triage | low | Critical |
2,472,910,568 | go | x/build/cmd/relui: make new downloads at go.dev/dl/ appear after "wait to announce" step of a release | The Go release workflows in relui make the new downloads visible on the https://go.dev/dl/ page after "Wait for Release Coordinator Approval" is approved, as part of publishing the release. Then, after "Wait to Announce" is approved, the published release is announced on the golang-announce mailing list and social media sites.
This means there's a window of time when the new files are already visible on https://go.dev/dl/, but the release is not yet announced. It's usually quite short, but nevertheless, if someone sees new files without an announcement, it can be confusing.
This issue tracks making the new files become visible more atomically with the announcement, after "Wait to Announce" is approved. The go.dev/dl/ page has some caching that needs to be taken into account, as the announcement shouldn't be sent while the /dl/ page isn't yet displaying the new files. | Builders,NeedsFix,FeatureRequest | low | Minor |
2,472,940,250 | pytorch | CatArrayBatchedCopy and AllGather don't overlap during FSDP backward | ### 🐛 Describe the bug
I used Hugging Face training code.
I found that during the backward pass of FSDP training, the AllGather kernel does not overlap with the CatArrayBatchedCopy kernel. I don't know why.
```
stream20 AllGather ---------------------------ReduceScatter ---------------------------AllGather
stream24 ----------- CatArrayBatchedCopy---------------------------------------------------------CatArrayBatchedCopy
```
I used this code.
https://github.com/tatsu-lab/stanford_alpaca.git
Training command:
nsys profile --trace=cuda,nvtx torchrun --nproc_per_node=4 --master_port=29505 train.py --model_name_or_path ./Llama-2-13b-hf/ --data_path ./alpaca_data.json --bf16 True --output_dir ./output --num_train_epochs 1 --per_device_train_batch_size 4 --per_device_eval_batch_size 4 --gradient_accumulation_steps 1 --evaluation_strategy "no" --save_strategy "steps" --save_steps 2000 --save_total_limit 1 --learning_rate 1e-5 --weight_decay 0. --warmup_ratio 0.03 --lr_scheduler_type "cosine" --logging_steps 1 --fsdp "full_shard auto_wrap" --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' --tf32 True --report_to none --fsdp_config ./fsdp_config.json
fsdp_config.json:
```json
{
  "backward_prefetch": "backward_pre"
}
```
### Versions
Collecting environment information...
PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux release 8.4 (Ootpa) (x86_64)
GCC version: (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1)
Clang version: Could not collect
CMake version: version 3.27.4
Libc version: glibc-2.28
Python version: 3.9.2 (default, Mar 5 2021, 01:49:45) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)] (64-bit runtime)
Python platform: Linux-4.18.0-305.el8.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7543 32-Core Processor
Stepping: 1
CPU MHz: 2965.950
BogoMIPS: 5589.98
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-7,64-71
NUMA node1 CPU(s): 8-15,72-79
NUMA node2 CPU(s): 16-23,80-87
NUMA node3 CPU(s): 24-31,88-95
NUMA node4 CPU(s): 32-39,96-103
NUMA node5 CPU(s): 40-47,104-111
NUMA node6 CPU(s): 48-55,112-119
NUMA node7 CPU(s): 56-63,120-127
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall sev_es fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] torch==2.2.1
[pip3] torchvision==0.17.1
[pip3] triton==2.2.0
[conda] blas 1.0 mkl
[conda] mkl 2019.0 118
[conda] mkl-service 1.1.2 py37h90e4bf4_5
[conda] mkl_fft 1.0.4 py37h4414c95_1
[conda] mkl_random 1.0.1 py37h4414c95_1
[conda] numpy 1.15.1 py37h1d66e8a_0
[conda] numpy-base 1.15.1 py37h81de0dd_0
[conda] numpydoc 0.8.0 py37_0
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @kwen2501 @chauhang | triaged,module: fsdp | low | Critical |
2,472,961,040 | angular | Some words are hidden in dark version | ### Describe the problem that you experienced
On angular.dev, when switching to the dark theme, some words are hidden due to their color.
### Enter the URL of the topic with the problem
_No response_
### Describe what you were looking for in the documentation
_No response_
### Describe the actions that led you to experience the problem
_No response_
### Describe what you want to experience that would fix the problem
Just change the color of these words. As shown below, these words are red in the light theme, but in the dark theme they are not visible.
### Add a screenshot if that helps illustrate the problem
<img width="946" alt="Screen Shot 2024-08-19 at 16 00 18" src="https://github.com/user-attachments/assets/28883a33-4065-4cb2-bb15-656514894642">
<img width="929" alt="Screen Shot 2024-08-19 at 16 00 39" src="https://github.com/user-attachments/assets/77c95caa-f623-4abd-9a40-8eb38e0b8ba2">
### If this problem caused an exception or error, please paste it here
_No response_
### If the problem is browser-specific, please specify the device, OS, browser, and version
_No response_
### Provide any additional information here in as much as detail as you can
_No response_ | P2,area: docs-infra | low | Critical |
2,473,018,662 | PowerToys | It could be cool if the tool could check the text type | ### Description of the new feature / enhancement
It would be cool if you could code the tool to detect which font (text type) a site or app is using.
### Scenario when this would be used?
I'm training as a web developer, and I dislike it when a photo contains custom text whose font I don't recognize; I would use the tool to check which font it is in that scenario.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,473,077,017 | storybook | [Bug]: I use MUI and Storybook to create the new UI library, here I used light and dark theme to customize 'Button' component color, its theme is working properly in local dev environment. but not working on using the component from 'dist' directory after `npm run build`. | ### Describe the bug
I use MUI and Storybook to create a new UI library. I used light and dark themes to customize the 'Button' component's colors. The theme works properly in the local dev environment, but not when using the component from the 'dist' directory after `npm run build`. Please see 'App.tsx': it works with `import { Button } from './components'` but loses the theme with `import { Button } from '../dist/try-ui-core-component'`.
### Reproduction link
https://stackblitz.com/~/github.com/rick-liyue-huang/try-ui-core-component
### Reproduction steps
_No response_
### System
```bash
window, 'npm create vite@latest my-vue-app -- --template react-ts', 'npx storybook@latest init', 'npm install @mui/material @emotion/react @emotion/styled'
```
### Additional context
_No response_ | bug,needs triage | low | Critical |
2,473,125,970 | pytorch | Inconsistent tensor free behavior on cpu and gpu when calling AOTIModelContainerRunner.update_constant_buffer | ### 🐛 Describe the bug
When USE_CUDA is defined, the user should free the tensor after calling `update_constant_buffer`; otherwise, the user should not free it. This makes user logic inconsistent across CPU and GPU.
https://github.com/pytorch/pytorch/blob/main/torch/csrc/inductor/aoti_runtime/model_container.h#L270
### Versions
Not needed; reviewing the source code reveals this issue.
cc @ezyang @chauhang @penguinwu @desertfire @chenyang78 | triaged,oncall: pt2,module: aotinductor | low | Critical |
2,473,160,205 | ant-design | Carousel component is buggy when using ondemand lazyload option | ### Reproduction link
https://stackblitz.com/edit/antd-reproduce-5x-fssws4?file=demo.tsx
### Steps to reproduce
When setting `lazyLoad='ondemand'` in the Carousel options, the dot selection does not behave correctly.
### What is expected?
Clicking on the 2nd dot should show the second element, not the first
### What is actually happening?
The Carousel, when clicking on the second dot with lazyLoad=ondemand, will display the 1st element
| Environment | Info |
| --- | --- |
| antd | 5.20.1 |
| React | latest |
| System | Macos - latest |
| Browser | Chrome - latest |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | help wanted,Inactive,๐ External Dependency,๐งถ Low Priority | low | Critical |
2,473,200,717 | deno | Support specifying the https: protocol for Request.url when using a SSL-terminating proxy | At Enatom, we run Deno behind NGINX, which terminates SSL. This allows integration with certbot for automatic SSL certificate renewal among some other benefits. Unfortunately, this means that all requests received by Deno are http, and Request.url will then start with `http://`, instead of `https://`. Certain libraries, such as [deno_kv_oauth](/denoland/kv_oauth) inspect the Request.url property to determine if the received request is sent over https, which won't work in this scenario.
Ideally, this would be a command-line flag (e.g. `--assume-all-requests-https`) or an environment variable to specify at startup. An additional header (`X-DENO-HTTPS-REQUEST`) could also work. | public API,suggestion | low | Minor |
2,473,240,777 | godot | Inconsistent behavior when rotating control nodes | ### Tested versions
Reproducible in 4.3 stable, not reproducible in 4.2.2 stable
### System information
Windows 10
### Issue description
Rotating a control node does not rotate around its pivot offset as expected if you change the node's position while rotating.
### Steps to reproduce
1. Open the attached project in Godot 4.3 and run it. Click anywhere to pick up the icon and right-click to rotate it. You will see that it does not rotate around the pivot if you try to rotate while the icon is picked up. However, it does rotate correctly if you right-click without dragging.
2. Open the attached project in Godot 4.2.2 and do the same; you will see that it does rotate around the pivot as expected in both cases.
### Minimal reproduction project (MRP)
[RotationTest.zip](https://github.com/user-attachments/files/16660140/RotationTest.zip)
| discussion,needs testing,regression,topic:gui | low | Major |
2,473,252,034 | godot | Export apk res/values-zh-rCN/strings.xml not found "godot_project_name_string" resource item | ### Tested versions
v4.3.stable.mono.official [77dcf97d8]
### System information
Godot v4.3.stable.mono - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2080 Ti (NVIDIA; 32.0.15.6070) - 12th Gen Intel(R) Core(TM) i5-12490F (12 Threads)
### Issue description
```ini
[application]
config/name="Lang-zh_CN-cmn_CN-example"
config/name_localized={
"cmn_CN": "b",
"zh_CN": "a",
"zh_TW": "c"
}
```
When exporting an APK, `res/values-zh-rCN/strings.xml` does not contain a "godot_project_name_string" item,
but `res/values-zh-rTW/strings.xml` does contain `<string name="godot_project_name_string">c</string>`.
### Steps to reproduce
See MRP.
### Minimal reproduction project (MRP)
Source - [lang-zh_cn-cmn_cn-example.zip](https://github.com/user-attachments/files/16660211/lang-zh_cn-cmn_cn-example.zip)
APK - [Lang-zh_CN-cmn_CN-example.zip](https://github.com/user-attachments/files/16660221/Lang-zh_CN-cmn_CN-example.zip)
| platform:android,topic:editor,needs testing | low | Minor |
2,473,286,268 | PowerToys | CoPilot (F23) Button Remapping Doesn't Work on Surface Laptop 7 | ### Microsoft PowerToys version
0.83.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
Keyboard Manager -> Remap a key ->
`F23 to <any button>`

### ✔️ Expected Behavior
I expect the key remapping to work.
### ❌ Actual Behavior
I tested this with Ctrl(right), Apps/Menu, Win (Right). None of these worked.
One additional observation: when I remap to any of the Ctrl options, save it, and then use it normally (for example, opening a new tab in Google Chrome), every time I hit the remapped key, an error message pops up from Windows Explorer in the "Notifications" section at the bottom right, saying "Error generated while getting the trace".

### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,473,290,381 | svelte | [runes] Add the ability to get a list of dependencies | ### Describe the problem
With automatic dependency tracking, it sometimes becomes difficult to understand what is really going on. Especially if there are classes, functions that call other functions, libraries, conditional dependencies, etc.
It would be nice to be able to get a list of current dependencies with details.
### Describe the proposed solution
- contains the value of the dependency
- contains the path from root variable to dependency (`obj.a.b.c`, `arr[0].a`)
- contains the value of the root variable (`obj`, `arr`)
- contains the call stack (or stacks) for the place in the code where this dependency is added (referenced)
- contains the type of dependency (`$state`, `$derived`, ...)
- for `$derived`, `$effect`, markup
- through a rune or a utility? add to `$inspect`?
- it is possible only through development tools and plugins?
- it is possible only in development mode?
### Importance
nice to have | feature request | low | Minor |
2,473,314,434 | create-react-app | Bad Resource link in https://create-react-app.dev/ |
### Describe the bug
An Image resource is not being loaded in the create-react-app.dev when inspected the resource is returning Bad Signature
### Did you try recovering your dependencies?
Not applicable.
### Which terms did you search for in User Guide?
Not applicable.
### Environment
https://create-react-app.dev/
### Steps to reproduce
1. Go to the website and inspect the element with class `gettingStartedSection_6aRe`.
2. Observe the broken image link.
### Expected behavior
Image should be visible and be able to load
### Actual behavior
The alt text is shown instead, because the resource cannot be fetched.

### Reproducible demo
https://create-react-app.dev/
| needs triage,issue: bug report | low | Critical |
2,473,357,356 | vscode | Symbol Not Highlighted After Navigation |
Does this issue occur when all extensions are disabled?: **Yes**
- VS Code Version: **1.92.2**
- OS Version: Windows 11 Pro Version **10.0.22631** Build **22631**
**Steps to Reproduce:**
1. Create a symbol in your code.
2. Use that symbol in another locations within the code.
3. Place the cursor on an instance of the symbol.
4. Press <kbd>f12</kbd> to 'Go to Definition'.
5. Press <kbd>f7</kbd> to 'Go to Next Symbol Highlight'.
**Observed Behavior:**
The symbol is not highlighted after using <kbd>f12</kbd>, so pressing <kbd>f7</kbd> does nothing.
**Expected Behavior:**
After pressing <kbd>f12</kbd>, the symbol should be highlighted, and pressing <kbd>f7</kbd> should navigate to the next occurrence of the symbol in the code.
| feature-request,editor-highlight,notebook-cell-editor | low | Critical |
2,473,374,489 | deno | [publish] slow types error with implicit string return | ## mcve
Any of the following fail to publish due to slow-types errors:
```ts
export function foo() {
return `hello ${123}`
}
export function bar() {
return `hello`
}
export function baz() {
return 'hello'
}
export const egg = () => `hello ${123}`
```
but compile successfully under `--isolatedDeclarations`
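For comparison, adding the explicit return types that the error hints at (the same code, annotations only) satisfies the slow-types check:

```ts
export function foo(): string {
  return `hello ${123}`;
}

export function bar(): string {
  return `hello`;
}

export function baz(): string {
  return "hello";
}

export const egg = (): string => `hello ${123}`;
```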
## error
```
1 | export function foo() {
| ^^^ this function is missing an explicit return type
|
= hint: add an explicit return type to the function
info: all functions in the public API must have an explicit return type
docs: https://jsr.io/go/slow-type-missing-explicit-return-type
...etc...
```
## version
`deno 1.46.0-rc.1+526f39f`
| feat,publish | low | Critical |
2,473,470,692 | tauri | [bug] If an underscore "_" is used for an argument, the function will not work. | ### Describe the bug
If an underscore is used in the argument's variable name on the React side, the function on the Rust side is not called, and no error message is displayed in dev mode.
### Reproduction
App.tsx is here
```tsx
import { useState } from "react";
import { invoke } from "@tauri-apps/api/tauri";
import "./App.css";
function App() {
const [result, setResult] = useState("");
async function handleProcess() {
setResult(await invoke("greet", {
msg_: "World"
}));
}
return (
<div>
<button onClick={() => handleProcess()}>Print</button>
<p>{result}</p>
</div>
);
}
export default App;
```
main.rs is here
```rust
#![cfg_attr(not(debug_assertions), windows_subsystem = "windows")]
#[tauri::command]
fn greet(msg_: String) -> String {
format!("Hello {} from Tauri!", msg_)
}
fn main() {
tauri::Builder::default()
.invoke_handler(tauri::generate_handler![greet])
.run(tauri::generate_context!())
.expect("error while running tauri application");
}
```
It is worth noting that `App.tsx` calls the Rust-side `greet` function with the argument `msg_`. If the argument name contains an underscore like this, the Rust-side `greet` function is not executed. This is the bug.
### Expected behavior
It works even if underscores are included.
### Full `tauri info` output
```text
npm run tauri info
> find_vanity_address@0.1.0 tauri
> tauri info
[✔] Environment
- OS: Mac OS 13.6.3 X64
✔ Xcode Command Line Tools: installed
✔ rustc: 1.80.1 (3f5fd8dd4 2024-08-06) (Homebrew)
✔ cargo: 1.80.1
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain:
- node: 18.17.0
- yarn: 1.22.22
- npm: 9.6.7
[-] Packages
- tauri [RUST]: 1.7.1
- tauri-build [RUST]: 1.5.3
- wry [RUST]: 0.24.10
- tao [RUST]: 0.16.9
- @tauri-apps/api [NPM]: 1.6.0
- @tauri-apps/cli [NPM]: 1.6.0
[-] App
- build-type: bundle
- CSP: unset
- distDir: ../dist
- devPath: http://localhost:1420/
- framework: React
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,473,482,154 | PowerToys | remaping shorcuts | ### Description of the new feature / enhancement
Hi!
In remapping keyboard feature, the "action" of a chossen accord shortcut could be with more than only two keys.
In one of my specifc situation I need "ctrl+B+0" to perform a "Bold" text.
Thank you!
### Scenario when this would be used?
Many of my "daily life" shortcuts have more than two keys.
:-)
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,473,510,610 | go | proposal: x/exp/xiter: add Nop, Nop2, Value, Value2 | ### Proposal Details
This could probably go in golang.org/x/exp/xiter for now, but it should also be considered for plain package iter.
Nop yields an empty (no op) sequence. Nop2 yields an empty iter.Seq2.
```go
// Value returns an iter.Seq yielding the single value v.
func Value[T any](v T) iter.Seq[T] {
return func(yield func(T) bool) {
_ = yield(v)
}
}
```
`Value2` isn't a great name, but seems the most straightforward.
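For illustration, a self-contained sketch of what `Nop` and `Value` could look like, using a local `Seq` alias so the example compiles without go1.23's `iter` package (the real additions would use `iter.Seq`):

```go
package main

import "fmt"

// Seq stands in for iter.Seq so the sketch compiles without go1.23.
type Seq[T any] func(yield func(T) bool)

// Nop yields an empty (no-op) sequence.
func Nop[T any]() Seq[T] {
	return func(yield func(T) bool) {}
}

// Value yields the single value v.
func Value[T any](v T) Seq[T] {
	return func(yield func(T) bool) {
		_ = yield(v)
	}
}

func main() {
	count := 0
	last := ""
	// Calling the sequence directly is equivalent to range-over-func in go1.23.
	Nop[string]()(func(s string) bool { count++; return true })
	Value("hello")(func(s string) bool { count++; last = s; return true })
	fmt.Println(count, last) // 1 hello
}
```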
See #68931, https://github.com/golang/go/issues/65629 and I forget where iter.Value first came up. | Proposal | low | Minor |
2,473,511,658 | rust | some tier 2 rustc 1.81 hosts cannot bootstrap stable 1.82 |
I tried this code:
Compiling the rust from the source for riscv64
```shell
git clone --branch master --single-branch --depth=1 https://github.com/rust-lang/rust.git
cd rust || exit
./configure --set install.prefix=$PATH_TO_INSTALL
export RUST_MIN_STACK=1000000000
./x.py build && ./x.py install
```
I expected to see this happen: the build and install complete successfully on the machine.
Instead, this happened: the build fails with repeated `error: rustc interrupted by SIGSEGV, printing backtrace` messages, each followed by `help: you can increase rustc's stack size by setting RUST_MIN_STACK=2000000000`.
### Meta
`rustc --version --verbose`:
```
master branch commit: 45fbf41deb24581471e3e56824d9318d3d415cb8
```
<details><summary>Backtrace (adding full CI pipeline log; also available at [this link](https://dash.cloud-v.co/view/Cloud-V%20Builds/job/cloud-v-builds-folder/job/rust-riscv-build/20/console))</summary>
<p>
```
Started by user [cloud-v-admin](https://dash.cloud-v.co/user/cloud-v-admin)
Checking out git https://github.com/alitariq4589/cloud-v-builds into /home/jenkins_user/.jenkins/workspace/cloud-v-builds-folder/rust-riscv-build@script/38df899f100d5dacd0d1397cf2a667bf679162c48c27ba004e48cc05ac0676b5 to read rust
The recommended git tool is: NONE
No credentials specified
> git rev-parse --resolve-git-dir /home/jenkins_user/.jenkins/workspace/cloud-v-builds-folder/rust-riscv-build@script/38df899f100d5dacd0d1397cf2a667bf679162c48c27ba004e48cc05ac0676b5/.git # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url https://github.com/alitariq4589/cloud-v-builds # timeout=10
Fetching upstream changes from https://github.com/alitariq4589/cloud-v-builds
> git --version # timeout=10
> git --version # 'git version 2.34.1'
> git fetch --tags --force --progress -- https://github.com/alitariq4589/cloud-v-builds +refs/heads/*:refs/remotes/origin/* # timeout=10
> git rev-parse origin/main^{commit} # timeout=10
Checking out Revision ad0ce79faadb91afc6af36727d9e9f291931b5ba (origin/main)
> git config core.sparsecheckout # timeout=10
> git checkout -f ad0ce79faadb91afc6af36727d9e9f291931b5ba # timeout=10
Commit message: "Increased stack size"
> git rev-list --no-walk 29ae58dae77a5d5152abbf95e4226857247b5e22 # timeout=10
[Pipeline] Start of Pipeline
[Pipeline] node
Running on [J-BPF3-1-admin](https://dash.cloud-v.co/computer/J%2DBPF3%2D1%2Dadmin/) in /home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Clean Workspace)
[Pipeline] cleanWs
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] done
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Installing Dependencies)
[Pipeline] sh
Hit:1 http://archive.spacemit.com/bianbu-ports mantic/snapshots/v1.0.9 InRelease
Hit:2 http://archive.spacemit.com/bianbu-ports mantic-security/snapshots/v1.0.9 InRelease
Hit:3 http://archive.spacemit.com/bianbu-ports mantic-spacemit/snapshots/v1.0.9 InRelease
Hit:4 http://archive.spacemit.com/bianbu-ports mantic-porting/snapshots/v1.0.9 InRelease
Hit:5 http://archive.spacemit.com/bianbu-ports mantic-customization/snapshots/v1.0.9 InRelease
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
git is already the newest version (1:2.40.1-1ubuntu1).
curl is already the newest version (8.2.1-1ubuntu3.1).
pkg-config is already the newest version (1.8.1-2).
g++ is already the newest version (4:13.2.0-1ubuntu1).
libssl-dev is already the newest version (3.0.10-1ubuntu2.3-bb1).
ninja-build is already the newest version (1.11.1-2).
make is already the newest version (4.3-4.1build1).
cmake is already the newest version (3.27.4-1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Setting Directories and clone)
[Pipeline] sh
Cloning into 'rust'...
Updating files: 100% (49153/49153), done.
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Run system_info)
[Pipeline] sh
=============================================================
CPU INFO START
=============================================================
processor : 0
hart : 0
model name : Spacemit(R) X60
isa : rv64imafdcv_sscofpmf_sstc_svpbmt_zicbom_zicboz_zicbop_zihintpause
mmu : sv39
mvendorid : 0x710
marchid : 0x8000000058000001
mimpid : 0x1000000049772200
processors 1-7 : identical Spacemit(R) X60 entries (harts 1-7)
=============================================================
CPU INFO END
=============================================================
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Run configure)
[Pipeline] sh
configure: processing command line
configure:
configure: build.configure-args := ['--set', 'install.prefix=/home/riscv-builds/r ...
configure: install.prefix := /home/riscv-builds/runner_dir/workspace/cloud- ...
configure: profile := dist
configure:
configure: writing `config.toml` in current directory
configure:
configure: run `python /home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/x.py --help`
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (make)
[Pipeline] sh
downloading https://static.rust-lang.org/dist/2024-07-26/rust-std-beta-riscv64gc-unknown-linux-gnu.tar.xz
######################################################################## 100.0%
downloading https://static.rust-lang.org/dist/2024-07-26/rustc-beta-riscv64gc-unknown-linux-gnu.tar.xz
######################################################################## 100.0%
downloading https://static.rust-lang.org/dist/2024-07-26/cargo-beta-riscv64gc-unknown-linux-gnu.tar.xz
######################################################################## 100.0%
extracting /home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/cache/2024-07-26/rust-std-beta-riscv64gc-unknown-linux-gnu.tar.xz
extracting /home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/cache/2024-07-26/rustc-beta-riscv64gc-unknown-linux-gnu.tar.xz
extracting /home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/cache/2024-07-26/cargo-beta-riscv64gc-unknown-linux-gnu.tar.xz
Building bootstrap
Compiling proc-macro2 v1.0.86
Compiling unicode-ident v1.0.12
Compiling memchr v2.7.4
Compiling typenum v1.17.0
Compiling version_check v0.9.5
Compiling libc v0.2.157
Compiling cc v1.0.97
Compiling serde v1.0.208
error: rustc interrupted by SIGSEGV, printing backtrace
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0xc4b7f2)[0x3f88a5f7f2]
linux-vdso.so.1(__vdso_rt_sigreturn+0x0)[0x3f9088c800]
/lib/riscv64-linux-gnu/libc.so.6(read+0x44)[0x3f87c3528a]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(_ZN3std3sys3pal4unix2fs4File4read17h65bbd351998954fcE+0x20)[0x3f87d76aac]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5bea1fc)[0x3f8d9fe1fc]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beb456)[0x3f8d9ff456]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beacea)[0x3f8d9fecea]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beb142)[0x3f8d9ff142]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beae2e)[0x3f8d9fee2e]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(rust_metadata_std_c0ba54d71f59c23d+0x8683a)[0x3f87d7d83a]
/lib/riscv64-linux-gnu/libc.so.6(+0x6a956)[0x3f87bea956]
/lib/riscv64-linux-gnu/libc.so.6(+0xbc010)[0x3f87c3c010]
note: we would appreciate a report at https://github.com/rust-lang/rust
help: you can increase rustc's stack size by setting RUST_MIN_STACK=2000000000
Compiling crossbeam-utils v0.8.20
error: rustc interrupted by SIGSEGV, printing backtrace (same backtrace as the first, three crashes in a row)
help: you can increase rustc's stack size by setting RUST_MIN_STACK=2000000000
Compiling rustix v0.38.34
Compiling regex-syntax v0.8.4
error: rustc interrupted by SIGSEGV, printing backtrace (same backtrace as the first, two crashes in a row)
help: you can increase rustc's stack size by setting RUST_MIN_STACK=2000000000
Compiling generic-array v0.14.7
error: rustc interrupted by SIGSEGV, printing backtrace (same backtrace as the first, two crashes in a row)
help: you can increase rustc's stack size by setting RUST_MIN_STACK=2000000000
Compiling cfg-if v1.0.0
error: rustc interrupted by SIGSEGV, printing backtrace (same backtrace as the first)
help: you can increase rustc's stack size by setting RUST_MIN_STACK=2000000000
Compiling aho-corasick v1.1.3
error: rustc interrupted by SIGSEGV, printing backtrace (same backtrace as the first)
help: you can increase rustc's stack size by setting RUST_MIN_STACK=2000000000
Compiling linux-raw-sys v0.4.14
error: rustc interrupted by SIGSEGV, printing backtrace (same backtrace as the first)
help: you can increase rustc's stack size by setting RUST_MIN_STACK=2000000000
Compiling pkg-config v0.3.30
error: rustc interrupted by SIGSEGV, printing backtrace (same backtrace as the first)
help: you can increase rustc's stack size by setting RUST_MIN_STACK=2000000000
Compiling quote v1.0.36
error: rustc interrupted by SIGSEGV, printing backtrace
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0xc4b7f2)[0x3fa602f7f2]
linux-vdso.so.1(__vdso_rt_sigreturn+0x0)[0x3fade5c800]
/lib/riscv64-linux-gnu/libc.so.6(read+0x44)[0x3fa520528a]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(_ZN3std3sys3pal4unix2fs4File4read17h65bbd351998954fcE+0x20)[0x3fa5346aac]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5bea1fc)[0x3faafce1fc]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beb456)[0x3faafcf456]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beacea)[0x3faafcecea]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beb142)[0x3faafcf142]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beae2e)[0x3faafcee2e]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(rust_metadata_std_c0ba54d71f59c23d+0x8683a)[0x3fa534d83a]
/lib/riscv64-linux-gnu/libc.so.6(+0x6a956)[0x3fa51ba956]
/lib/riscv64-linux-gnu/libc.so.6(+0xbc010)[0x3fa520c010]
note: we would appreciate a report at https://github.com/rust-lang/rust
help: you can increase rustc's stack size by setting RUST_MIN_STACK=2000000000
error: rustc interrupted by SIGSEGV, printing backtrace
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0xc4b7f2)[0x3fa3ede7f2]
linux-vdso.so.1(__vdso_rt_sigreturn+0x0)[0x3fabd0b800]
/lib/riscv64-linux-gnu/libc.so.6(read+0x44)[0x3fa30b428a]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(_ZN3std3sys3pal4unix2fs4File4read17h65bbd351998954fcE+0x20)[0x3fa31f5aac]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5bea1fc)[0x3fa8e7d1fc]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beb456)[0x3fa8e7e456]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beacea)[0x3fa8e7dcea]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beb142)[0x3fa8e7e142]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beae2e)[0x3fa8e7de2e]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(rust_metadata_std_c0ba54d71f59c23d+0x8683a)[0x3fa31fc83a]
/lib/riscv64-linux-gnu/libc.so.6(+0x6a956)[0x3fa3069956]
/lib/riscv64-linux-gnu/libc.so.6(+0xbc010)[0x3fa30bb010]
note: we would appreciate a report at https://github.com/rust-lang/rust
help: you can increase rustc's stack size by setting RUST_MIN_STACK=2000000000
error: could not compile `typenum` (lib)
Caused by:
process didn't exit successfully: `/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/rustc --crate-name typenum --edition=2018 /home/riscv-builds/.cargo/registry/src/index.crates.io-6f17d22bba15001f/typenum-1.17.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C embed-bitcode=no --check-cfg 'cfg(docsrs)' --check-cfg 'cfg(feature, values("const-generics", "force_unix_path_separator", "i128", "no_std", "scale-info", "scale_info", "strict"))' -C metadata=932679a5a8e9f33d -C extra-filename=-932679a5a8e9f33d --out-dir /home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/bootstrap/debug/deps -C strip=debuginfo -L dependency=/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/bootstrap/debug/deps --cap-lints allow -Zallow-features= -Wrust_2018_idioms -Wunused_lifetimes -Dwarnings` (signal: 11, SIGSEGV: invalid memory reference)
warning: build failed, waiting for other jobs to finish...
error: rustc interrupted by SIGSEGV, printing backtrace
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0xc4b7f2)[0x3f7a1fe7f2]
linux-vdso.so.1(__vdso_rt_sigreturn+0x0)[0x3f8202b800]
/lib/riscv64-linux-gnu/libc.so.6(read+0x44)[0x3f793d428a]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(_ZN3std3sys3pal4unix2fs4File4read17h65bbd351998954fcE+0x20)[0x3f79515aac]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5bea1fc)[0x3f7f19d1fc]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beb456)[0x3f7f19e456]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beacea)[0x3f7f19dcea]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beb142)[0x3f7f19e142]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beae2e)[0x3f7f19de2e]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(rust_metadata_std_c0ba54d71f59c23d+0x8683a)[0x3f7951c83a]
/lib/riscv64-linux-gnu/libc.so.6(+0x6a956)[0x3f79389956]
/lib/riscv64-linux-gnu/libc.so.6(+0xbc010)[0x3f793db010]
note: we would appreciate a report at https://github.com/rust-lang/rust
help: you can increase rustc's stack size by setting RUST_MIN_STACK=2000000000
error: rustc interrupted by SIGSEGV, printing backtrace
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0xc4b7f2)[0x3f8538c7f2]
linux-vdso.so.1(__vdso_rt_sigreturn+0x0)[0x3f8d1b9800]
/lib/riscv64-linux-gnu/libc.so.6(syscall+0x16)[0x3f845677c2]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(_ZN3std3sys4sync7condvar5futex7Condvar4wait17habace9b224f278c5E+0x6e)[0x3f846ae388]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5bea1a4)[0x3f8a32b1a4]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beb456)[0x3f8a32c456]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beacea)[0x3f8a32bcea]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beb142)[0x3f8a32c142]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beae2e)[0x3f8a32be2e]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(rust_metadata_std_c0ba54d71f59c23d+0x8683a)[0x3f846aa83a]
/lib/riscv64-linux-gnu/libc.so.6(+0x6a956)[0x3f84517956]
/lib/riscv64-linux-gnu/libc.so.6(+0xbc010)[0x3f84569010]
note: we would appreciate a report at https://github.com/rust-lang/rust
help: you can increase rustc's stack size by setting RUST_MIN_STACK=2000000000
error: rustc interrupted by SIGSEGV, printing backtrace
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0xc4b7f2)[0x3f8b98d7f2]
linux-vdso.so.1(__vdso_rt_sigreturn+0x0)[0x3f937ba800]
/lib/riscv64-linux-gnu/libc.so.6(syscall+0x16)[0x3f8ab687c2]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(_ZN3std3sys4sync7condvar5futex7Condvar4wait17habace9b224f278c5E+0x6e)[0x3f8acaf388]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5bea1a4)[0x3f9092c1a4]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beb456)[0x3f9092d456]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beacea)[0x3f9092ccea]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beb142)[0x3f9092d142]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beae2e)[0x3f9092ce2e]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(rust_metadata_std_c0ba54d71f59c23d+0x8683a)[0x3f8acab83a]
/lib/riscv64-linux-gnu/libc.so.6(+0x6a956)[0x3f8ab18956]
/lib/riscv64-linux-gnu/libc.so.6(+0xbc010)[0x3f8ab6a010]
note: we would appreciate a report at https://github.com/rust-lang/rust
help: you can increase rustc's stack size by setting RUST_MIN_STACK=2000000000
error: rustc interrupted by SIGSEGV, printing backtrace
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0xc4b7f2)[0x3f94ffb7f2]
linux-vdso.so.1(__vdso_rt_sigreturn+0x0)[0x3f9ce28800]
/lib/riscv64-linux-gnu/libc.so.6(syscall+0x16)[0x3f941d67c2]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(_ZN3std3sys4sync5mutex5futex5Mutex14lock_contended17hf4ec6a194184e2edE+0x98)[0x3f942d961c]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(_ZN3std3sys4sync7condvar5futex7Condvar4wait17habace9b224f278c5E+0xaa)[0x3f9431d3c4]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5bea1a4)[0x3f99f9a1a4]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beb456)[0x3f99f9b456]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beacea)[0x3f99f9acea]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beb142)[0x3f99f9b142]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beae2e)[0x3f99f9ae2e]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(rust_metadata_std_c0ba54d71f59c23d+0x8683a)[0x3f9431983a]
/lib/riscv64-linux-gnu/libc.so.6(+0x6a956)[0x3f94186956]
/lib/riscv64-linux-gnu/libc.so.6(+0xbc010)[0x3f941d8010]
note: we would appreciate a report at https://github.com/rust-lang/rust
help: you can increase rustc's stack size by setting RUST_MIN_STACK=2000000000
error: rustc interrupted by SIGSEGV, printing backtrace
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0xc4b7f2)[0x3f7c6617f2]
linux-vdso.so.1(__vdso_rt_sigreturn+0x0)[0x3f8448e800]
/lib/riscv64-linux-gnu/libc.so.6(syscall+0x16)[0x3f7b83c7c2]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(_ZN3std3sys4sync7condvar5futex7Condvar4wait17habace9b224f278c5E+0x6e)[0x3f7b983388]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5bea1a4)[0x3f816001a4]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beb456)[0x3f81601456]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beacea)[0x3f81600cea]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beb142)[0x3f81601142]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beae2e)[0x3f81600e2e]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(rust_metadata_std_c0ba54d71f59c23d+0x8683a)[0x3f7b97f83a]
/lib/riscv64-linux-gnu/libc.so.6(+0x6a956)[0x3f7b7ec956]
/lib/riscv64-linux-gnu/libc.so.6(+0xbc010)[0x3f7b83e010]
note: we would appreciate a report at https://github.com/rust-lang/rust
help: you can increase rustc's stack size by setting RUST_MIN_STACK=2000000000
error: rustc interrupted by SIGSEGV, printing backtrace
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0xc4b7f2)[0x3f916017f2]
linux-vdso.so.1(__vdso_rt_sigreturn+0x0)[0x3f9942e800]
/lib/riscv64-linux-gnu/libc.so.6(syscall+0x16)[0x3f907dc7c2]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(_ZN3std3sys4sync5mutex5futex5Mutex14lock_contended17hf4ec6a194184e2edE+0x98)[0x3f908df61c]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(_ZN3std3sys4sync7condvar5futex7Condvar4wait17habace9b224f278c5E+0xaa)[0x3f909233c4]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5bea1a4)[0x3f965a01a4]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beb456)[0x3f965a1456]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beacea)[0x3f965a0cea]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beb142)[0x3f965a1142]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beae2e)[0x3f965a0e2e]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(rust_metadata_std_c0ba54d71f59c23d+0x8683a)[0x3f9091f83a]
/lib/riscv64-linux-gnu/libc.so.6(+0x6a956)[0x3f9078c956]
/lib/riscv64-linux-gnu/libc.so.6(+0xbc010)[0x3f907de010]
note: we would appreciate a report at https://github.com/rust-lang/rust
help: you can increase rustc's stack size by setting RUST_MIN_STACK=2000000000
error: rustc interrupted by SIGSEGV, printing backtrace
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0xc4b7f2)[0x3faa6a77f2]
linux-vdso.so.1(__vdso_rt_sigreturn+0x0)[0x3fb24d4800]
/lib/riscv64-linux-gnu/libc.so.6(syscall+0x16)[0x3fa98827c2]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(_ZN3std3sys4sync5mutex5futex5Mutex14lock_contended17hf4ec6a194184e2edE+0x98)[0x3fa998561c]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(_ZN3std3sys4sync7condvar5futex7Condvar4wait17habace9b224f278c5E+0xaa)[0x3fa99c93c4]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5bea1a4)[0x3faf6461a4]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beb456)[0x3faf647456]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beacea)[0x3faf646cea]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beb142)[0x3faf647142]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/librustc_driver-8b140680c8414886.so(+0x5beae2e)[0x3faf646e2e]
/home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/../lib/libstd-6fe59118e7b2d661.so(rust_metadata_std_c0ba54d71f59c23d+0x8683a)[0x3fa99c583a]
/lib/riscv64-linux-gnu/libc.so.6(+0x6a956)[0x3fa9832956]
/lib/riscv64-linux-gnu/libc.so.6(+0xbc010)[0x3fa9884010]
note: we would appreciate a report at https://github.com/rust-lang/rust
help: you can increase rustc's stack size by setting RUST_MIN_STACK=2000000000
failed to run: /home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/build/riscv64gc-unknown-linux-gnu/stage0/bin/cargo build --manifest-path /home/riscv-builds/runner_dir/workspace/cloud-v-builds-folder/rust-riscv-build/rust/src/bootstrap/Cargo.toml
Build completed unsuccessfully in 0:02:52
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
```
</p>
</details>
| I-crash,T-compiler,C-bug,O-riscv | medium | Critical |
2,473,511,825 | godot | the 3D scene isn't rendering | ### Tested versions
- Reproducible in: 4.0 and later 4.3.stable.
- Not reproducible in: 3.5
### System information
Windows 10 - Godot 4.3 - CPU: Intel(R) Core(TM) i5-5200U - GPU: Intel(R) HD Graphics 5500 / AMD Radeon R5 M330
### Issue description
In **Godot 4.3**, when I create a 3D scene it appears as a black screen.
- My **GPU** does **not** support **Vulkan**.
- This happens in **Compatibility** mode.
- I have faced this problem since **Godot 4.0**.

The same problem occurs when I try to edit an **albedo**:

### Steps to reproduce
The problem appears immediately when I open the project.
### Minimal reproduction project (MRP)
[new-game-project.zip](https://github.com/user-attachments/files/16661370/new-game-project.zip)
| bug,topic:rendering,needs testing,topic:3d | low | Major |
2,473,515,404 | next.js | middleware request.cookies.has() fails when route is hit via Link | ### Link to the code that reproduces this issue
https://github.com/janson/nextjs-cookies-middlware/tree/main
### To Reproduce
1. deploy project to Vercel
2. visit /
3. visit /demo
### Current vs. Expected behavior
**Expected**
A _single_ console log message, `***** Setting cookie: experiments *****`, is displayed when the cookie is set upon first visiting `/demo`. No repeated message until the cookie is deleted or expired.
**Current**
Cookie is set several times upon first visiting the `/` page. And again when visiting `/demo`.
<img width="1505" alt="Screenshot 2024-08-19 at 9 23 47โฏAM" src="https://github.com/user-attachments/assets/42039dc2-c200-47bb-a2d4-a0785aa3decf">
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:46 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6031
Available memory (MB): 65536
Available CPU cores: 16
Binaries:
Node: 20.12.0
npm: 10.5.0
Yarn: 1.22.21
pnpm: 8.6.11
Relevant Packages:
next: 14.2.5 // Latest available version is detected (14.2.5).
eslint-config-next: 14.2.5
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: N/A
โจ Done in 1.68s.
```
### Which area(s) are affected? (Select all that apply)
Middleware
### Which stage(s) are affected? (Select all that apply)
Vercel (Deployed), Other (Deployed)
### Additional context
* Verified on next.js v14.2.3 and v14.2.5
* Bug also observed with the [official doc's cookie+middleware example](https://nextjs.org/docs/app/building-your-application/routing/middleware#using-cookies)
* Also tried removing the explicit `has`/`delete`/`has` in L12-14 and wrapping L18-23 in a `has()` conditional. | bug,Middleware | low | Critical |
2,473,557,199 | react-native | [iOS] `accessibilityElementsHidden` does not work if the parent view is `accessible` | ### Description
When a parent view has the `accessible` prop set, VoiceOver reads all the text nodes and accessibility labels of its children even if one of them has a parent with the `accessibilityElementsHidden` / `aria-hidden` prop set.
### Steps to reproduce
1. Open the Expo Snack on an iOS device with Expo Go.
2. Enable VoiceOver (triple press on the power button, if enabled in the settings. Otherwise enable it in Settings > Accessibility > VoiceOver).
3. Verify that VoiceOver reads all three lines in both cases when it should only read "First layout" in the second one, like TalkBack on Android does.
### React Native Version
0.74.5
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 14.6.1
CPU: (10) arm64 Apple M1 Pro
Memory: 536.33 MB / 32.00 GB
Shell:
version: "5.9"
path: /opt/homebrew/bin/zsh
Binaries:
Node:
version: 21.6.1
path: ~/.local/share/mise/installs/node/21.6.1/bin/node
Yarn:
version: 1.22.21
path: /opt/homebrew/bin/yarn
npm:
version: 10.2.4
path: ~/.local/share/mise/installs/node/21.6.1/bin/npm
Watchman:
version: 2024.03.18.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /Users/valou/.local/share/mise/installs/ruby/3.2.3/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.2
- iOS 17.2
- macOS 14.2
- tvOS 17.2
- visionOS 1.0
- watchOS 10.2
Android SDK: Not Found
IDEs:
Android Studio: 2023.1 AI-231.9392.1.2311.11255304
Xcode:
version: 15.2/15C500b
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.10
path: /Users/valou/.local/share/mise/installs/java/adoptopenjdk-17.0.10+7/bin/javac
Ruby:
version: 3.2.3
path: /Users/valou/.local/share/mise/installs/ruby/3.2.3/bin/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.2.0
wanted: 18.2.0
react-native:
installed: 0.74.5
wanted: 0.74.5
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: Not found
newArchEnabled: Not found
iOS:
hermesEnabled: Not found
newArchEnabled: Not found
```
### Stacktrace or Logs
```text
N/A
```
### Reproducer
https://snack.expo.dev/@vbriand/voiceover-accessible-bug
### Screenshots and Videos
https://github.com/user-attachments/assets/50c266e2-7906-4c31-b138-090f0038abf1
| Platform: iOS,Issue: Author Provided Repro | low | Critical |
2,473,557,214 | pytorch | torch::tensor function does not work | ### ๐ Describe the bug
~~~
#include <torch/torch.h>
int main(){
torch::Tensor x = torch::tensor({1,2,3});
return 0;
}
~~~
Standard Exception: tensor.sizes()[0] == (int64_t)init_list_.size() INTERNAL ASSERT FAILED at "../../libtorch/include/torch/csrc/api/include/torch/detail/TensorDataContainer.h":339, please report a bug to PyTorch. Expected a Tensor with size 3 in its first dimension, but got Tensor with size 0 in its first dimension Exception raised from fill_tensor at ../../libtorch/include/torch/csrc/api/include/torch/detail/TensorDataContainer.h:339 (most recent call first)
### Versions
It works in versions before v1.8 and fails in v2.2 and later.
cc @jbschlosser | needs reproduction,module: cpp,triaged | low | Critical |
2,473,571,680 | rust | anyhow Result + collect = missleading error messages | ### Code
```Rust
use std::fmt::Display;
use anyhow::Result;
#[derive(Debug)]
pub struct Error {}
impl std::error::Error for Error {}
impl Display for Error {
fn fmt(&self, _f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
todo!()
}
}
#[derive(Ord, PartialEq, PartialOrd, Eq)]
pub struct Version {}
fn parse(_s: &str) -> std::result::Result<Version, Error> {
todo!()
}
pub fn error1(lines: &[&str]) -> Result<Vec<Version>> {
let mut tags = lines.iter().map(|e| parse(e)).collect()?;
tags.sort();
Ok(tags)
}
pub fn error2(lines: &[&str]) -> Result<Vec<Version>> {
let mut tags: Vec<Version> = lines.iter().map(|e| parse(e)).collect()?;
tags.sort();
Ok(tags)
}
pub fn error3(lines: &[&str]) -> Result<Vec<Version>> {
let mut tags = lines.iter().map(|e| parse(e)).collect::<Vec<_>>()?;
tags.sort();
Ok(tags)
}
pub fn error4(lines: &[&str]) -> Result<Vec<Version>> {
let mut tags = lines
.iter()
.map(|e| parse(e))
.collect::<Result<Vec<Version>>>()?;
tags.sort();
Ok(tags)
}
pub fn correct(lines: &[&str]) -> Result<Vec<Version>> {
let mut tags = lines
.iter()
.map(|e| parse(e))
.collect::<Result<Vec<Version>, _>>()?;
tags.sort();
Ok(tags)
}
```
### Current output
```Shell
error[E0282]: type annotations needed
--> src/lib.rs:23:9
|
23 | let mut tags = lines.iter().map(|e| parse(e)).collect()?;
| ^^^^^^^^
24 |
25 | tags.sort();
| ---- type must be known at this point
|
help: consider giving `tags` an explicit type
|
23 | let mut tags: /* Type */ = lines.iter().map(|e| parse(e)).collect()?;
| ++++++++++++
error[E0283]: type annotations needed
--> src/lib.rs:31:65
|
31 | let mut tags: Vec<Version> = lines.iter().map(|e| parse(e)).collect()?;
| ^^^^^^^ cannot infer type of the type parameter `B` declared on the method `collect`
|
= note: cannot satisfy `_: FromIterator<Result<Version, Error>>`
note: required by a bound in `collect`
--> /home/fedasus/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/traits/iterator.rs:2001:19
|
2001 | fn collect<B: FromIterator<Self::Item>>(self) -> B
| ^^^^^^^^^^^^^^^^^^^^^^^^ required by this bound in `Iterator::collect`
help: consider specifying the generic argument
|
31 | let mut tags: Vec<Version> = lines.iter().map(|e| parse(e)).collect::<Vec<_>>()?;
| ++++++++++
error[E0277]: the `?` operator can only be applied to values that implement `Try`
--> src/lib.rs:38:20
|
38 | let mut tags = lines.iter().map(|e| parse(e)).collect::<Vec<_>>()?;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the `?` operator cannot be applied to type `Vec<Result<Version, Error>>`
|
= help: the trait `Try` is not implemented for `Vec<Result<Version, Error>>`
error[E0277]: a value of type `Result<Vec<Version>, anyhow::Error>` cannot be built from an iterator over elements of type `Result<Version, Error>`
--> src/lib.rs:48:20
|
48 | .collect::<Result<Vec<Version>>>()?;
| ------- ^^^^^^^^^^^^^^^^^^^^ value of type `Result<Vec<Version>, anyhow::Error>` cannot be built from `std::iter::Iterator<Item=Result<Version, Error>>`
| |
| required by a bound introduced by this call
|
= help: the trait `FromIterator<Result<Version, Error>>` is not implemented for `Result<Vec<Version>, anyhow::Error>`
= help: the trait `FromIterator<Result<Version, anyhow::Error>>` is implemented for `Result<Vec<Version>, anyhow::Error>`
= help: for that trait implementation, expected `anyhow::Error`, found `Error`
note: the method call chain might not have had the expected associated types
--> src/lib.rs:47:10
|
45 | let mut tags = lines
| ----- this expression has type `&[&str]`
46 | .iter()
| ------ `Iterator::Item` is `&&str` here
47 | .map(|e| parse(e))
| ^^^^^^^^^^^^^^^^^ `Iterator::Item` changed to `Result<Version, Error>` here
note: required by a bound in `collect`
--> /home/fedasus/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/traits/iterator.rs:2001:19
|
2001 | fn collect<B: FromIterator<Self::Item>>(self) -> B
| ^^^^^^^^^^^^^^^^^^^^^^^^ required by this bound in `Iterator::collect`
Some errors have detailed explanations: E0277, E0282, E0283.
For more information about an error, try `rustc --explain E0277`.
```
### Desired output
1. The suggested `let mut tags: Vec<Version>` fixes nothing
2. The suggested `.collect::<Vec<_>>` is wrong, since the desired type is `Result<...>`
3. I don't quite understand how the `, _` helps rustc infer the type.
I'm sorry if these are the expected error messages. Feel free to close if that is the case
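Regarding point 3, here is a self-contained sketch (with a stand-in `parse` function, not the one from my code) of what the `, _` placeholder does: `Result<Vec<_>, _>` only names the shape of the collection, and inference fills in the item and error types from the iterator's `Result` items:

```rust
use std::num::ParseIntError;

// Stand-in for the real parse function.
fn parse(s: &str) -> Result<u32, ParseIntError> {
    s.parse()
}

fn collect_all(lines: &[&str]) -> Result<Vec<u32>, ParseIntError> {
    // FromIterator<Result<T, E>> for Result<Vec<T>, E> stops at the first
    // Err; the two `_`s are inferred as u32 and ParseIntError.
    let tags = lines.iter().map(|e| parse(e)).collect::<Result<Vec<_>, _>>()?;
    Ok(tags)
}

fn main() {
    assert_eq!(collect_all(&["1", "2"]), Ok(vec![1, 2]));
    assert!(collect_all(&["7", "oops"]).is_err());
    println!("ok");
}
```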
### Rationale and extra context
_No response_
### Other cases
_No response_
### Rust Version
```Shell
rustc 1.80.0 (051478957 2024-07-21)
binary: rustc
commit-hash: 051478957371ee0084a7c0913941d2a8c4757bb9
commit-date: 2024-07-21
host: x86_64-unknown-linux-gnu
release: 1.80.0
LLVM version: 18.1.7
```
### Anything else?
_No response_ | A-diagnostics,T-compiler,D-confusing | low | Critical |
2,473,577,950 | godot | `SubViewport` transparent background makes whole viewport transparent when using physical lighting | ### Tested versions
- Found in 4.2.2.stable.official
### System information
Godot v4.2.2.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 970 (NVIDIA; 32.0.15.5585) - Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz (8 Threads)
### Issue description
I'm trying to render a 3D object in a `SubViewport` to have that object overlaid on the screen with another 3D scene behind it. However when I enable physical lighting units, set the subviewport's `transparent_bg` to true, and use a `CameraAttributesPhysical`, everything in the subviewport is rendered transparent, not just the background. When using `CameraAttributesPractical`, or when having no camera attributes set at all, it works as intended with the subviewport scene being rendered to the parent `SubViewportContainer` and overlaid on the scene. However, when setting camera attributes to an instance of `CameraAttributesPhysical`, whether it's in a `WorldEnvironment` node, the subviewport's world 3D override, or the camera attributes, the whole subviewport becomes transparent when `transparent_bg` is enabled and is not rendered.
### Steps to reproduce
1. Observe when running the example scene or viewing the `SubViewport`'s inspector in the editor that the sphere is not rendered.
2. Clear the camera attributes in the `WorldEnvironment` node or replace them with an instance of `CameraAttributesPractical`
3. Run the scene or view the inspector for the `SubViewport` and observe that the sphere is rendered
### Minimal reproduction project (MRP)
Example scene:
```
[gd_scene load_steps=4 format=3 uid="uid://djjdj454opky6"]
[sub_resource type="Environment" id="Environment_dldea"]
[sub_resource type="CameraAttributesPhysical" id="CameraAttributesPhysical_ukoa2"]
[sub_resource type="SphereMesh" id="SphereMesh_xfy0c"]
[node name="Main" type="Node3D"]
[node name="WorldEnvironment" type="WorldEnvironment" parent="."]
environment = SubResource("Environment_dldea")
camera_attributes = SubResource("CameraAttributesPhysical_ukoa2")
[node name="SubViewportContainer" type="SubViewportContainer" parent="."]
anchors_preset = 15
anchor_right = 1.0
anchor_bottom = 1.0
grow_horizontal = 2
grow_vertical = 2
stretch = true
[node name="SubViewport" type="SubViewport" parent="SubViewportContainer"]
transparent_bg = true
size = Vector2i(1152, 648)
[node name="DirectionalLight3D" type="DirectionalLight3D" parent="SubViewportContainer/SubViewport"]
transform = Transform3D(0.851086, 0.379608, -0.362698, 0, 0.690819, 0.723028, 0.525026, -0.615359, 0.587947, 0, 0, 0)
light_temperature = 5800.0
light_angular_distance = 1.0
shadow_enabled = true
sky_mode = 1
[node name="Camera3D" type="Camera3D" parent="SubViewportContainer/SubViewport"]
fov = 37.8493
[node name="MeshInstance3D" type="MeshInstance3D" parent="SubViewportContainer/SubViewport"]
transform = Transform3D(1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, -2)
mesh = SubResource("SphereMesh_xfy0c")
``` | bug,topic:rendering | low | Minor |
2,473,599,466 | go | x/tools/internal/refactor/inline: a read from a non-address-taken variable should commute with global effects | @lfolger reported a suboptimal inliner result of this form:
```go
var g = foo()
func f(x, y any) { f2(x, y) }
// before
f(complicated(), g)
// after
var ( x = complicated(); y = g )
f2(x, y)
```
In this case, the inliner's effects analysis assumed conservatively that the read from global var g would not commute with the call to complicated(). However, the variable g is not address-taken and is assigned only in its declaration.
The inliner's effects analysis should enumerate all non-address-taken variables (non-exported variables that appear in an lvalue position only in their own declaration) and allow reads of them to commute with global effects such as complicated(), resulting in this more optimal inlining: `f2(g, complicated())`.
| NeedsInvestigation,Tools,Analysis,Refactoring | low | Minor |
2,473,607,295 | transformers | `truncate_dim` on `BertModel` | ### Feature request
I have a pipeline to finetune an instance of `BertModel`, on a `text-classification` task.
I would like to use [this new embedding model](https://huggingface.co/aari1995/German_Semantic_V3#usage) as my base embedding now.
As can be seen in the example they provide, they are able to pass different values for `matryoshka_dim` into the `SentenceTransformer` instance through the `truncate_dim` argument.
However, I was not able to do this on the `BertModel` in the following code snippet that I have in my code:
```python
self.bert_backbone = BertModel.from_pretrained(
pretrained_model_name_or_path=self.config.embedding_model_file.model_name,
cache_dir=Path(self.config.embedding_model_file.cache_dir),
).to(self.device)
```
And I do not want to use a `SentenceTransformer` instance either as in my training loop I would like to be able to get:
```python
bert_outputs: BaseModelOutputWithPoolingAndCrossAttentions = (
self.bert_backbone(
input_ids=input_ids, attention_mask=attention_mask
)
)
bert_logits: Tensor = bert_outputs.last_hidden_state[
:, 0, :
] # Take the [CLS] token output
```
and I am not sure if this code would also work with a simple swap to `SentenceTransformer`. In any case, I think this is a parameter that `BertModel` should support, and maybe it does but I am just missing it.
Thanks in advance!
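In the meantime, if I understand the matryoshka docs correctly, `truncate_dim` boils down to keeping the first `dim` components of the embedding and re-normalizing, so it can be applied manually to the `[CLS]` output (on the real tensor this would be roughly `bert_logits[:, :dim]` followed by normalization). A toy sketch with plain Python lists:

```python
import math

def truncate_embedding(vec, dim):
    """Matryoshka-style truncation: keep the first `dim` components,
    then re-normalize to unit length so cosine similarities stay meaningful."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head)) or 1.0
    return [x / norm for x in head]

# Toy "embedding" standing in for bert_outputs.last_hidden_state[:, 0, :]
cls_embedding = [3.0, 4.0, 0.5, -1.2]
print(truncate_embedding(cls_embedding, 2))  # [0.6, 0.8]
```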
### Motivation
To be able to extend the `BertModel` further
### Your contribution
. | Feature request | low | Minor |
2,473,619,071 | vscode | Cannot move cell when change the window.zoomLevel | ### Applies To
- [X] Notebooks (.ipynb files)
- [ ] Interactive Window and\/or Cell Scripts (.py files with \#%% markers)
### What happened?
## Issues
- Cannot move cells using drag and drop
(It is possible to move using `Alt` + `Arrows` )
- It is only possible to move cells to the TOP and BOTTOM; other positions do not work.
## What I tried
- Reinstall extension
- no effect
- Uninstall other all extensions
- no effect
- Delete settings from settings.json
- It work, and I found that `window.zoomLevel` cause the issue.
- My setting is `window.zoomLevel = -1`
- I tried changing the zoom level below (True -> can move, False -> can't move).
```python
# (window.zoomLevel, can_move) in the order I tested
[
    (-1,   False),
    (-0.1, False),
    (-0,   True),
    (0,    True),
    (-0.1, False),
    (-1,   False),
]
```
## Remarks
I'm Japanese and not good at English.
If you couldn't understand what I want to say, feel free to ask me.
### VS Code Version
Version: 1.92.2 (system setup) Commit: fee1edb8d6d72a0ddff41e5f71a671c23ed924b9 Date: 2024-08-14T17:29:30.058Z Electron: 30.1.2 ElectronBuildId: 9870757 Chromium: 124.0.6367.243 Node.js: 20.14.0 V8: 12.4.254.20-electron.0 OS: Windows_NT x64 10.0.19045
### Jupyter Extension Version
v2024.7.0
### Jupyter logs
```shell
Visual Studio Code (1.92.2, attached-container, desktop)
Jupyter Extension Version: 2024.7.0.
Python Extension Version: 2024.12.3.
Pylance Extension Version: 2024.8.1.
Platform: linux (x64).
Workspace folder /projects, Home = /root
```
### Coding Language and Runtime Version
Python v3.10.14
### Language Extension Version (if applicable)
v2024.12.3
### Anaconda Version (if applicable)
_No response_
### Running Jupyter locally or remotely?
Local | bug,notebook-dnd | low | Minor |
2,473,630,800 | node | Closing WritableStream from TransformStream causes Node.js process to crash | ### Version
v20.16.0
### Platform
```text
Linux laptop 6.1.0-23-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.99-1 (2024-07-15) x86_64 GNU/Linux
```
### Subsystem
_No response_
### What steps will reproduce the bug?
I ran this code and my Node.js process crashed with no output:
This causes a crash:
```js
import { TransformStream } from "node:stream/web";
const { writable } = new TransformStream();
try {
let writer = writable.getWriter();
await writer.write(" ");
writer.releaseLock();
writer = writable.getWriter();
await writer.close();
writer.releaseLock();
console.log("Done!");
} catch (err) {
console.error("Error:", err);
}
```
This doesn't:
```js
const stream = new WritableStream();
try {
let writer = stream.getWriter();
await writer.write(" ");
writer.releaseLock();
writer = stream.getWriter();
await writer.close();
writer.releaseLock();
console.log("Done!");
} catch (err) {
console.error("Error:", err);
}
```
### How often does it reproduce? Is there a required condition?
If I close the `WritableStream` on the `TransformStream` example without awaiting the `close()` promise then the process will proceed to log `Done!`.
### What is the expected behavior? Why is that the expected behavior?
I figured the stream would close or throw some kind of error if it cannot.
### What do you see instead?
The Node.js process crashes without any thrown errors and doesn't trigger the `beforeExit` or `exit` process events.
### Additional information
I've simplified my code into the reproducible example above; that's why I create multiple writers.
_No response_ | web streams | low | Critical |
2,473,671,699 | next.js | Using `revalidateTag()` inside dynamic route handler is unstable | ### Link to the code that reproduces this issue
https://github.com/younes101020/electra-v2/
### To Reproduce
1. Clone the repository
```bash
youyou@youyou:~/electra$ git clone git@github.com:younes101020/Electra-v2.git
```
2. create .env.development file in root directory and past .env.example into it and for TMDB_API_KEY + TMDB_ACCESS_TOKEN env you can retrieve them from https://www.themoviedb.org/settings/api
3. Install dependencies and start dev environment (you need docker compose)
```bash
youyou@youyou:~/electra$ yarn
youyou@youyou:~/electra$ yarn dev
```
4. go to /src/app/api/movie/[sessionid]/[accountid]/[movieid]/rating/route.ts file and replace:
```ts
// Replace this (this code is the only way I've found to fix the initial issue)
await new Promise((resolve) => {
setTimeout(() => {
resolve(revalidateTag(`rated:${params.accountid}`));
}, 3000);
});
// By this
revalidateTag(`rated:${params.accountid}`)
```
5. Go to http://localhost:3000 logged in, click on any movie card image and add/update movies rating
### Current vs. Expected behavior
I expect the cache initiated by the unstable_cache() method to be revalidated each time a movie's score is assigned.
Sometimes the on-demand cache isn't revalidated even though the route that executes revalidateTag() is called. I have the impression that the route handler returns a response before revalidateTag() has properly finished its job, so I've temporarily wrapped the call in a promise with a fairly long delay.
I think it would be good for revalidateTag() to return a promise that resolves once the cache has been revalidated, instead of returning nothing, or to run this function in a worker. That wouldn't bother me personally; I would only have to manage the UI with useOptimistic.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
Available memory (MB): 7908
Available CPU cores: 12
Binaries:
Node: 21.6.0
npm: 10.8.1
Yarn: 1.22.21
pnpm: 9.1.2
Relevant Packages:
next: 14.2.3 // There is a newer version (14.2.5) available, upgrade recommended!
eslint-config-next: 14.2.3
react: 18.3.1
react-dom: 18.3.1
typescript: 5.4.5
Next.js Config:
output: standalone
```
### Which area(s) are affected? (Select all that apply)
Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local)
### Additional context
_No response_ | bug,Runtime | low | Major |
2,473,720,717 | ollama | ollama golang client hides API errors | ### What is the issue?
While testing ollama in combination with k8sgpt I ran into an issue with ollama queries responding with:
```
invalid character 'p' after top-level value
```
After some hunting I found that the documentation for k8sgpt incorrectly adds a suffix to the ollama baseurl (`http://localhost:11434/v1`). The Ollama API was responding with a plaintext HTTP body of `404 Not Found`, but I was unable to see this in the error message without debugging the ollama go client here:
https://github.com/ollama/ollama/blob/main/api/client.go#L166-L189
Ideally we are able to view both the API response and errorResponse (unmarshal error) to aid in quick debugging. I mocked up a rough diff illustrating what I am talking about:
https://github.com/ollama/ollama/compare/main...dcarrier:ollama:unmarshal-fix?expand=1#diff-aa9bfd1a638fbb706f8e8920297902937011160319d9679add5dca56e5ab8277
That code results in this error message:
```
404 Not Found: invalid character 'p' after top-level value
```
We can also adjust the Error() method of StatusError to clean up the formatting, but I am hoping this is enough to get the idea across. Another option is to change the NoRoute handler to respond with a JSON payload containing an Error field. However, that feels like a riskier change than the aforementioned.
If this is acceptable I am happy to work on this and open a PR.
Thanks for considering!
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
v0.3.6 | bug,api | low | Critical |
2,473,721,871 | kubernetes | e2epod.DeletePodWithWait{,ByName} does not handle pods that get restarted | ### What happened?
`e2epod.DeletePodWithWaitByName` does (simplified)
```
err := c.CoreV1().Pods(podNamespace).Delete(ctx, podName, metav1.DeleteOptions{})
err = WaitForPodNotFoundInNamespace(ctx, c, podName, podNamespace, PodDeleteTimeout)
```
If the pod in question is managed by a controller or operator, then it is possible that it will be restarted in between the Delete and the WaitForPodNotFoundInNamespace, in which case WaitForPodNotFoundInNamespace will wait for the _new_ pod to not be present rather than the old one.
eg, some of the ovn-kubernetes e2e tests use this function to restart ovn-k control-plane pods in various circumstances (the e2e test kills a pod which is managed by a StatefulSet, knowing it will be restarted), and sometimes hits the race condition. eg, [here](https://github.com/ovn-org/ovn-kubernetes/actions/runs/10451772693/job/28939840358?pr=4623):
```
2024-08-19T11:40:56.3501692Z I0819 11:40:56.349677 79626 delete.go:62] Deleting pod "ovnkube-db-2" in namespace "ovn-kubernetes"
2024-08-19T11:40:56.3606091Z I0819 11:40:56.360068 79626 delete.go:70] Wait up to 5m0s for pod "ovnkube-db-2" to be fully deleted
...
2024-08-19T11:45:57.4798660Z [FAILED] failed to delete pod ovnkube-db-2: pod "ovnkube-db-2" was not deleted: expected pod to not be found: Timed out after 300.001s.
2024-08-19T11:45:57.4799987Z Expected
2024-08-19T11:45:57.4800476Z <*v1.Pod | 0xc000eba488>:
2024-08-19T11:45:57.4801044Z metadata:
2024-08-19T11:45:57.4801863Z creationTimestamp: "2024-08-19T11:41:00Z"
...
2024-08-19T11:45:57.5299664Z to be nil
```
note that the `creationTimestamp` of the pod in question is _after_ the deletion.
### What did you expect to happen?
Either
1. DeletePodWithWait{,ByName} checks the `resourceVersion` of the pod before deleting it, and waits specifically for _that_ version of the pod to not exist any more, or
2. DeletePodWithWait{,ByName} are documented as not handling this case
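A sketch of option 1, with a fake pod store standing in for the API server (the real fix would presumably capture `pod.UID` before the Delete, or pass `metav1.Preconditions` in the DeleteOptions, and then wait for *that* pod to be gone):

```go
package main

import "fmt"

type pod struct{ name, uid string }

// waitGone polls getPod until the name is absent or is backed by a
// different UID, i.e. the originally deleted pod no longer exists,
// even if a same-name replacement created by a controller has
// already appeared.
func waitGone(getPod func(name string) (pod, bool), name, uid string) bool {
	for i := 0; i < 10; i++ { // bounded poll loop standing in for a timeout
		p, found := getPod(name)
		if !found || p.uid != uid {
			return true
		}
	}
	return false
}

func main() {
	// Simulate the race: the StatefulSet already recreated the pod
	// under the same name but with a fresh UID.
	current := pod{name: "ovnkube-db-2", uid: "uid-new"}
	get := func(name string) (pod, bool) { return current, current.name == name }
	fmt.Println(waitGone(get, "ovnkube-db-2", "uid-old")) // true: the old pod is gone
}
```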
### How can we reproduce it (as minimally and precisely as possible)?
Write an e2e test that deletes a pod that will be recreated automatically (by an `apps` API or by an operator).
### Anything else we need to know?
/sig testing
/area e2e-test-framework
### Kubernetes version
`v1.30.2`
### Cloud provider
N/A
### OS version
_No response_
### Install tools
_No response_
### Container runtime (CRI) and version (if applicable)
_No response_
### Related plugins (CNI, CSI, ...) and versions (if applicable)
_No response_ | kind/bug,sig/testing,lifecycle/rotten,area/e2e-test-framework,needs-triage | low | Critical |
2,473,729,064 | rust | Consider using random keys for incr. comp. hashing | There's been [recent discussion](https://github.com/rust-lang/rust/issues/10389#issuecomment-2250692018) about the problems of using unkeyed SipHash128 in the compiler and if that could be exploited by an attacker.
With respect to incremental compilation, it would be possible to generate random keys and cache them together with the dep-graph. These keys could then affect query result fingerprints and dep-node identifiers. Any new from-scratch compilation session would generate new keys, so finding stable collisions should be impossible.
The only downside is that it would be hard to reproduce an actual collision if we ever found one because the keys have to be known for that. However, reproducing collisions that are due to faulty `HashStable` impls (which is the much more likely case) should be reproducible independent of the keys being used.
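As a small self-contained illustration of the property this relies on (std's hash maps already use randomly keyed SipHash via `RandomState`): a fingerprint is stable as long as the key is fixed, but differs with overwhelming probability once the key changes, which is what would make a crafted collision useless across from-scratch sessions:

```rust
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hash, Hasher};

fn fingerprint(s: &RandomState, v: &str) -> u64 {
    let mut h = s.build_hasher();
    v.hash(&mut h);
    h.finish()
}

fn main() {
    let session_a = RandomState::new(); // fresh random key
    let session_b = RandomState::new(); // another fresh random key
    // Stable within one "session" (one key)...
    assert_eq!(fingerprint(&session_a, "item"), fingerprint(&session_a, "item"));
    // ...but almost surely different across sessions (different keys).
    println!("{}", fingerprint(&session_a, "item") != fingerprint(&session_b, "item"));
}
```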
| C-enhancement,T-compiler,A-incr-comp,A-reproducibility | low | Minor |
2,473,736,214 | ui | [bug]: Orders code block does not show anything | ### Describe the bug
When copying or using the orders code block (or almost all of them but that the one I've used) it returns a blank page !
The reason is simply that the TooltipProvider component is neither imported nor used, and when it is missing React renders a blank page. The issue is fixed by importing TooltipProvider and adding it as a parent of the other tooltip components. I hope you fix it soon so other users won't get confused, and thanks for your wonderful UI library <3
### Affected component/components
Tooltip
### How to reproduce
1 - copy almost any building block's code
2 - paste and run it; it will return a blank page
The issue is fixed by importing TooltipProvider and using it as a parent component.
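For anyone hitting this in the meantime, the fix looks roughly like this (import path per a default shadcn setup; adjust to your project):

```tsx
import {
  Tooltip,
  TooltipContent,
  TooltipProvider,
  TooltipTrigger,
} from "@/components/ui/tooltip"

// Wrapping the Tooltip in TooltipProvider prevents the blank page.
<TooltipProvider>
  <Tooltip>
    <TooltipTrigger>Hover</TooltipTrigger>
    <TooltipContent>Orders</TooltipContent>
  </Tooltip>
</TooltipProvider>
```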
### Codesandbox/StackBlitz link
blank page
### Logs
```bash
blank
```
### System Info
```bash
windows 10
chrome browser
nodejs, react with typescript and vite
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,473,757,784 | bitcoin | Control-flow application capabilities for `x86_64-linux-gnu` release binaries | When building static binaries for `x86_64-linux-gnu`, one can verify that both Control-flow Enforcement Technology (CET) capabilities--indirect branch tracking (IBT) and shadow stack--are enabled by running the following command:
```
$ readelf -n src/bitcoind | grep feature
Properties: x86 feature: IBT, SHSTK
```
However, that is not the case for the Guix binaries:
```
$ readelf -n bin/bitcoind | grep feature
Properties: x86 feature used: x86, x87, XMM, YMM, XSAVE
``` | Linux/Unix,Build system | low | Major |
2,473,770,288 | rust | E0050 emitted unexpectedly on missing `:` | ### Code
```Rust
use std::fmt;
struct Hello;
impl fmt::Display for Hello {
fn fmt(&self, f: &fmt:Formatter) -> fmt::Result {
write!(f, "hello")
}
}
```
### Current output
```Shell
Compiling playground v0.0.1 (/playground)
error: expected parameter name, found `:`
--> src/lib.rs:6:26
|
6 | fn fmt(&self, f: &fmt:Formatter) -> fmt::Result {
| ^ expected parameter name
error: expected one of `!`, `(`, `)`, `,`, `::`, or `<`, found `:`
--> src/lib.rs:6:26
|
6 | fn fmt(&self, f: &fmt:Formatter) -> fmt::Result {
| ^
| |
| expected one of `!`, `(`, `)`, `,`, `::`, or `<`
| help: missing `,`
error[E0573]: expected type, found module `fmt`
--> src/lib.rs:6:23
|
6 | fn fmt(&self, f: &fmt:Formatter) -> fmt::Result {
| ^^^ not a type
error[E0050]: method `fmt` has 3 parameters but the declaration in trait `std::fmt::Display::fmt` has 2
--> src/lib.rs:6:12
|
6 | fn fmt(&self, f: &fmt:Formatter) -> fmt::Result {
| ^^^^^^^^^^^^^^^^^^^^^^^^ expected 2 parameters, found 3
|
= note: `fmt` from trait: `fn(&Self, &mut Formatter<'_>) -> Result<(), std::fmt::Error>`
Some errors have detailed explanations: E0050, E0573.
For more information about an error, try `rustc --explain E0050`.
error: could not compile `playground` (lib) due to 4 previous errors
```
### Desired output
```Shell
Compiling playground v0.0.1 (/playground)
error: expected parameter name, found `:`
--> src/lib.rs:6:26
|
6 | fn fmt(&self, f: &fmt:Formatter) -> fmt::Result {
| ^ expected parameter name
error: expected one of `!`, `(`, `)`, `,`, `::`, or `<`, found `:`
--> src/lib.rs:6:26
|
6 | fn fmt(&self, f: &fmt:Formatter) -> fmt::Result {
| ^
| |
| expected one of `!`, `(`, `)`, `,`, `::`, or `<`
| help: missing `,`
error[E0573]: expected type, found module `fmt`
--> src/lib.rs:6:23
|
6 | fn fmt(&self, f: &fmt:Formatter) -> fmt::Result {
| ^^^ not a type
Some errors have detailed explanations: E0050, E0573.
For more information about an error, try `rustc --explain E0050`.
error: could not compile `playground` (lib) due to 4 previous errors
```
### Rationale and extra context
When looking at rust-analyzer output or when just looking at the last few error messages, it's easy to see only the "too many arguments" error, and not the syntax error that points at the exact problem.
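For context, the intended code differs from the broken one only by the missing `:` (plus the `&mut` the trait requires); once fixed it compiles, which is why the E0050 follow-up is pure noise:

```rust
use std::fmt;

struct Hello;

impl fmt::Display for Hello {
    // `fmt::Formatter` (double colon) and `&mut`, matching the trait.
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "hello")
    }
}

fn main() {
    assert_eq!(format!("{}", Hello), "hello");
    println!("ok");
}
```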
### Other cases
_No response_
### Rust Version
```Shell
1.80.1
```
### Anything else?
_No response_ | A-diagnostics,A-parser,T-compiler,D-confusing,D-papercut,D-verbose | low | Critical |
2,473,772,350 | PowerToys | File & Folder colorize/highlight | ### Description of the new feature / enhancement
Right-click menu with some color options to highlight files and folders. Would be nice if it traveled with the files and folders.
### Scenario when this would be used?
For all projects and files in progress.
### Supporting information
OSX has had this for many years. | Needs-Triage | low | Minor |
2,473,775,078 | ui | [feat]: Need to create phone number input component like "React-Phone-Input-2" | ### Feature description
Feature Description: Phone Number Input Component
Feature Name: PhoneNumberInput
Category: Form Controls
Description:
The PhoneNumberInput component is designed to allow users to input and format phone numbers in a standardized and user-friendly manner. This component is essential for applications that require phone number input with country code selection, automatic formatting, and validation.
Why It's Needed:
Currently, the Shadcn UI library lacks a specialized phone number input component that handles country code selection, formatting, and validation in one package. Adding this component will enhance the form controls offering, providing developers with a ready-to-use solution for phone number inputs, which are common in many web applications.
### Affected component/components
_No response_
### Additional Context
```jsx
<PhoneNumberInput
  label="Phone Number"
  defaultCountry="US"
  required
  onChange={(value) => console.log(value)}
/>
```
Example like: https://www.npmjs.com/package/react-phone-input-2
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues and PRs | area: request | low | Minor |
2,473,884,822 | pytorch | Torch Distributed Elastic has no way to address non-retriable errors | ### ๐ The feature, motivation and pitch
Looking at https://pytorch.org/docs/stable/elastic/errors.html, I don't see any way to avoid restarts when a non-retriable user error has happened. It is often important to distinguish between kinds of user errors as well. For instance, a user's training module may discover an error at runtime that is non-retriable (say a node-level issue that requires an orchestration-level restart). There is currently no way to tell the elastic agent that the exception should not be retried.
### Alternatives
Wait until we run out of retries.
### Additional context
None
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Critical |
2,473,894,996 | godot | AudioStreamPlayer's volume ignored on web when playing it with an AnimationPlayer's audio clips track. | ### Tested versions
Reproducible with Godot 4.3 Stable. Web export. I'm assuming this happens because of the new (4.3) web audio sample system.
### System information
Windows 10, Web export, Chrome, Godot4.3 Stable, wav format.
### Issue description
AudioStreamPlayer's volume property is not applied, it doesn't affect the actual volume of the played audio when playing it using an AnimatioPlayer's animation that uses an audio clips track to play the audio. Only happens on web. Only when playing the audio using an audio clips track.
### Steps to reproduce
In the minimal reproduction project I have two AudioStreamPlayers. Both can reproduce the same "footsteps.wav" audio, and both are set to -80 dB, so they should both emit no sound when played if they are applying the volume properly. However, when pressing the second button that plays an AnimationPlayer's animation that uses an audio clips track to reproduce the audio, the volume is ignored in the web so the audio can be heard at normal volume.
For reference, pressing the first button will play the audio using another AnimationPlayer's animation but with a track that
enables the playing property of the AudioStreamPlayer. Doing it this way, or any other way other than using the audio clips
track, volume (-80) is applied correctly so no sound can be heard.
Also, both apply the -80dB volume correctly when not in the web.
I don't see any workaround to make it work. Please inform me if you know one. Using the playing property track works but it doesn't let us use the clipping features of the audio clips track, and we would have to remake all the animations' audio timing and edit some audios that need to be clipped in an external audio editing program.
### Minimal reproduction project (MRP)
[audio-volume-ignored-on-web.zip](https://github.com/user-attachments/files/16663601/audio-volume-ignored-on-web.zip) | bug,platform:web,needs testing,topic:audio | low | Minor |
2,473,898,822 | pytorch | [dynamo] Dynamo does not support infinite iterators (e.g., `itertools.count()`). | Dynamo's `zip` does not support infinite iterators (e.g., `itertools.count()`).
Dynamo always realizes the iterable into a list of items, which leads to an infinite loop. Also, fetching items from an iterator may have side effects. We should not realize the iterator into the sequence all at once.
_Originally posted by @XuehaiPan in https://github.com/pytorch/pytorch/pull/133876#discussion_r1722156380_
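A minimal eager-Python illustration (no torch involved) of why eagerly realizing the iterator is wrong: plain `zip` is lazy and terminates even with `itertools.count()`, and even the items it does pull are a visible side effect on a shared iterator:

```python
import itertools

xs = ["a", "b"]
counter = itertools.count()

# Lazy zip stops as soon as the shortest iterable is exhausted.
pairs = list(zip(counter, xs))
print(pairs)  # [(0, 'a'), (1, 'b')]

# Visible side effect of fetching: zip consumed 0 and 1, then pulled 2
# before discovering xs was exhausted, so the shared counter resumes at 3.
resumed = next(counter)
print(resumed)  # 3
```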
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo,dynamo-must-fix | low | Major |
2,473,902,517 | go | proposal: runtime/local: new package to provide transparent access to Per-P Local Storage | ### Proposal Details
This proposal introduces a new package, `runtime/local`, providing an API for per-P local storage in Go. This enables efficient and race-free access to shared yet locally scoped data, like random number generator states or per-P Logger, without the overhead of locks or cas loop.
Several packages within the standard library, including `runtime.fastrand` and `sync.Pool`, already utilize per-P or per-M storage internally for performance optimization. This proposal aims to expose a similar mechanism through a public API, enabling developers to leverage this approach for their own concurrency needs.
This proposal introduces the runtime/local package with the following API:
```go
package local
// Key represents a unique identifier for accessing local storage.
type Key uint32
// AllocateKey reserves a new Key for local storage access.
func AllocateKey() Key
// ReleaseKey releases a previously allocated Key.
func ReleaseKey(k Key)
// Storage provides access to the per-P local storage.
type Storage struct {
// pid represents the logical processor ID associated with this Storage.
pid int
// values holds the actual stored data, indexed by Key.
values []any
}
// ID returns the logical processor ID associated with this Storage.
func (s *Storage) ID() int
// Load retrieves the value associated with the given Key.
// The second return value indicates whether a value was found.
func (s *Storage) Load(k Key) (any, bool)
// Store associates the given value with the specified Key.
// It returns true if the operation was successful, false otherwise.
func (s *Storage) Store(k Key, v any) bool
// OpenFunc executes the provided function f with the Storage
// instance associated with the calling goroutine's current P.
// It returns the logical processor ID associated with the Storage.
func OpenFunc(f func(*Storage)) (id int)
```
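To make the intended usage concrete, here is an illustrative sketch against the proposed API (pseudocode, since the package does not exist yet). Presumably only the goroutine currently running on a P can observe that P's Storage inside OpenFunc, so no locking is needed:

```go
var counterKey = local.AllocateKey()

// incr bumps a per-P counter without locks or CAS loops.
func incr() {
	local.OpenFunc(func(s *local.Storage) {
		v, ok := s.Load(counterKey)
		n := 0
		if ok {
			n = v.(int)
		}
		s.Store(counterKey, n+1)
	})
}

// Reading a global total would still require gathering the per-P values,
// e.g. via a periodic flush to global storage (see the open issue on
// finalizers below).
```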
----
The proposed package name candidates are following:
- `runtime/local`
- `runtime/threadlocal`
- `runtime/plocal`
- `runtime/tls`
----
Open Issues:
- Released Key Reuse and Storage Reclamation: The proposal should consider how to handle storage associated with released keys.
- Finalizers to flush locally collected data to global storage. | Proposal | low | Major |
2,473,902,631 | godot | Multiple anonymous RefCounted wrapped in Signal/Callable in function argument scope are freed early | ### Tested versions
- Reproducible in: v4.3.stable.official [77dcf97d8], v4.2.2.stable.official [15073afe3], v4.2.1.stable.official [b09f793f5], v4.1.3.stable.official [f06b6836a] and not checked in other versions.
### System information
Godot v4.3.stable - Windows 10.0.22631
### Issue description
Multiple anonymous RefCounted wrapped in Signal/Callable to be passed as function arguments will be freed early, except for the last one.
### Steps to reproduce
Set the code attached at MRP into the scene node and run it.
Wherever it is executed, the result is the same as following:
```
false
false
true
```
I would expect it all to be `true` (or all to be `false`).
### Minimal reproduction project (MRP)
```gdscript
extends Node
class AnonRef:
func fun() -> void: pass
static func _receiver(fun1: Callable, fun2: Callable, fun3: Callable) -> void:
print(fun1.get_object() is not null)
print(fun2.get_object() is not null)
print(fun3.get_object() is not null)
func _ready() -> void:
_receiver(
AnonRef.new().fun,
AnonRef.new().fun,
AnonRef.new().fun)
``` | bug,topic:core,needs testing | low | Minor |
2,473,961,909 | godot | c# exported enum arrays get given "GD0102: The type of the exported member is not supported" | ### Tested versions
v4.3.stable.mono.official [77dcf97d8]
### System information
Godot v4.3.stable.mono - Arch Linux #1 SMP PREEMPT_DYNAMIC Thu, 15 Aug 2024 00:25:30 +0000 - X11 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 Ti (nvidia; 555.58.02) - AMD Ryzen 5 7600X 6-Core Processor (12 Threads)
### Issue description
Arrays of enums in C# can't be exported, despite the [docs](https://docs.godotengine.org/en/4.3/tutorials/scripting/c_sharp/c_sharp_variant.html#c-sharp-variant-compatible-types) stating "Enums are supported by Godot.Variant since their underlying type is an integer type". So internally the representation should just be an integer array, which is supported.
### Steps to reproduce
Just put this somewhere in a project script:
```cs
enum Test { One, Two, Three };
[Export] Test[] _enums;
```
### Minimal reproduction project (MRP)
N/A | topic:dotnet | low | Minor |
2,473,965,620 | flutter | [Release] Exception Encountered During Flutter 3.25 Beta Startup | When attempting to start the Flutter 3.25 beta, I encountered the following exception:
```
Exception: Tried to commit with message Update Dart SDK to c5264a1bd1d2ddff6c9a32ff7f99a2813f0ae4f9 but no changes were present
```
### Background
This issue appears to stem from a recent adjustment in our branch alignment process. Specifically, when using the Skia autoroller to generate the Dart-to-Engine roll, the autoroller typically selects Dart dev candidates to roll into the engine.
However, during the alignment for Dart 3.6 beta 1 and Flutter 3.25, a commit landed in Dart that caused a break in the engine. To address this, we implemented a workaround by cherry-picking a revert to Dart and using the beta hash instead of the dev hash, as the dev hash was broken.
This workaround resulted in the engine already containing the candidate Dart hash (`c5264a1bd1d2ddff6c9a32ff7f99a2813f0ae4f9`), leading to the exception.
### Current Behavior
When invoking the conductor without the Dart hash, the process completes successfully but does not generate a pull request (PR) to initiate CI for building and signing artifacts.
### Workaround
To resolve this issue, manually create a PR to the engine branch to kick off the CI process.
CC: @christopherfujino | team-release | low | Critical |
2,473,968,419 | go | cmd/go/internal/modload: TestQueryImport/golang.org_x_text failures | ```
#!watchflakes
default <- pkg == "cmd/go/internal/modload" && test == "TestQueryImport/golang.org_x_text"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8739169825673976609)):
=== RUN TestQueryImport/golang.org_x_text
go: finding module for package golang.org/x/text
import_test.go:81: queryImport(_, "golang.org/x/text"): cannot find module providing package golang.org/x/text: unrecognized import path "golang.org/x/text": https fetch: Get "https://golang.org/x/text?go-get=1": read tcp [2001:4860:1002:21:f807:5bac:3303:e7e]:62726->[2607:f8b0:4004:c1f::8d]:443: read: operation timed out
--- FAIL: TestQueryImport/golang.org_x_text (18.09s)
โ [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,473,968,470 | go | x/vulndb: TestLintReports/data/reports/GO-2024-2690.yaml failures | ```
#!watchflakes
default <- pkg == "golang.org/x/vulndb" && test == "TestLintReports/data/reports/GO-2024-2690.yaml"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8739154919140143105)):
=== RUN TestLintReports/data/reports/GO-2024-2690.yaml
all_test.go:116: modules[0] "github.com/hashicorp/vault": version &{1.16.0 fixed} does not exist
--- FAIL: TestLintReports/data/reports/GO-2024-2690.yaml (20.43s)
โ [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,vulncheck or vulndb | low | Critical |
2,473,981,582 | flutter | If a release PR is opened against main then the merge target "fixed", some targets not appropriately cancelled | See the PR: https://github.com/flutter/engine/pull/54585
And the issue: https://github.com/flutter/flutter/issues/153612 | team-infra,P2,triaged-infra | low | Minor |
2,473,995,633 | vscode | Diff Editor: in-file search should reveal only a small fragment of unchanged region |
As discussed in #193834, if we have `hideUnchangedRegions` enabled and a search term occurs inside a currently hidden region, the search tools may reveal a huge fragment of the file in an unexpected way (and it works differently depending on whether we search in the left or right editor). It would be more user-friendly to reveal only `contextLineCount` lines around the term.
See screenshots in https://github.com/microsoft/vscode/issues/193834#issuecomment-2289335872 and [the example in Monaco Playground](https://microsoft.github.io/monaco-editor/playground.html?source=v0.50.0#XQAAAALxEQAAAAAAAABBqQkHQ5NjdMjwa-jY7SIQ9S7DNlzs5W-mwj0fe1ZCDRFc9ws9XQE0SJE1jc2VKxhaLFIw9vEWSxW3yscxahHbk_7RaxvTpxkYn3V-u8LX9aoCz6W93JdE-E7gNRYqv4Nj7ofd3sXvMZS3Q7wGKETF2-D42O7I090YntPxQ__nU7ZppDAgCRPFaAjVbn6Dsgc08REFK388tlMPtBfCLR0x19cPb-syrihplIz8m_ed63MK9FH47qVi9VukSPo6IIPihyEOnUrQFF7Ip5Rj_zevWe5zEfzLa3VdAtBoBuUvHkO7u-bfplmwMpG5fR5X09NRagFPV9BxT1xMA8Jkoio0oKUhHd7xlNBWPjCvqRfY25oSsFjaAAcRNavvB6NEMPjAnGW2Q9iPhxm36qPk4YdlLB07NUjeB_uGrVdrX1-Be2QbietkL68IPya1lJZthHNKndkIQddGCBj5O1g7Uajvxsq5Ft2arze4A__pCMAq).
Note that the expected result on the second screenshot can be achieved manually by changing something in the given line in the source or target text and then reverting the change in the editor, which will reset the diff, but keep the fragment visible -- that's how I took the screenshot. | feature-request,diff-editor | low | Minor |
2,474,004,323 | next.js | Cannot use import alias if route path starts with period | ### Link to the code that reproduces this issue
https://github.com/jdhenry08/next-route-broken-imports
### To Reproduce
1. View src/app/.well-known/route.ts
2. Change "../../constants" import to "~/constants" or "src/constants"
### Current vs. Expected behavior
Importing the constants file from either of those two paths should work, since they're aliased in tsconfig.json, but the red squiggly remains :(
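For context, an alias setup of the kind the reproduction relies on typically looks like this in `tsconfig.json` (this is a generic sketch; the exact mapping in the linked repo may differ):

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "~/*": ["./src/*"]
    }
  }
}
```

With `baseUrl` set to the project root, bare `src/...` specifiers also resolve, which is why both `~/constants` and `src/constants` are expected to work.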
### Provide environment information
```bash
Node: 20.9.0
npm: 9.8.1
Yarn: 1.22.19
pnpm: 8.10.2
Relevant Packages:
next: 14.2.5 // Latest available version is detected (14.2.5).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: N/A
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
Other (Deployed)
### Additional context
_No response_ | bug | low | Critical |
2,474,060,973 | next.js | Dynamic `import()` function usage behaves strangely, prevents server-only code | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/relaxed-gagarin-khcz7s
### To Reproduce
The following code executes fine in Node, but it causes a "Module Not Found" error in Next.js:
```ts
export default function Home() {
return "Hello, world!";
}
// This function is never called
const example = () => {
return false;
// This code is inaccessible
import("any import, but for the sake of the demo, a path that doesn't exist");
};
```
<img width="564" alt="Screenshot 2024-08-19 at 2 14 53โฏPM" src="https://github.com/user-attachments/assets/b6be7d47-62fa-47c0-a610-2817b98f5bdf">
### Current vs. Expected behavior
You can copy the `example` function definition into Node and not only does the function get defined, but you can run it and you will get a `false` value in return. No errors, because we haven't done anything to trigger one. That is what I would expect to happen here.
What actually happens instead is confusing, but I suspect it is related to ensuring dynamically imported code is available *client side*. Unfortunately, this prevents the use of dynamic imports to split code in a way that allows me to keep server-only code server side, because all dynamic imports are resolved and bundled by Next.js, even if there is a conditional guarding the import on the client.
### Provide environment information
```bash
Node 20.x
Next 14.25 or 15 Canary
See reproduction for any other required details.
```
### Which area(s) are affected? (Select all that apply)
Developer Experience, Lazy Loading, Module Resolution, Output (export/standalone), Pages Router
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local)
### Additional context
_No response_ | bug,Output (export/standalone),Lazy Loading,Pages Router,Module Resolution | low | Critical |
2,474,094,596 | godot | TabContainer not set up properly when its children are ready | ### Tested versions
v4.3.stable.official [77dcf97d8]
Works fine in v4.2.2 (and before)
### System information
Windows 10
### Issue description
When instantiating TabContainer children via code, the TabContainer is not set up properly by the time one of the children is ready. Compare these two outputs:
v4.2.2:
![v4.2.2](https://github.com/user-attachments/assets/86e393e5-5dca-4e92-b0f8-0d7d1d796a5f)
v4.3:
![v4.3](https://github.com/user-attachments/assets/e7a89525-2792-48cb-b422-03a7e63df2f0)
This forces users to `call_deferred` any logic that relies on these properties, which is pretty inconvenient.
### Steps to reproduce
1) Instantiate a TabContainer
2) Instantiate a Control node as a child of the TabContainer
3) Connect the `ready` signal of the child
4) Check state of the TabContainer within the connected function
### Minimal reproduction project (MRP)
[TabContainerCurrentTab.zip](https://github.com/user-attachments/files/16664709/TabContainerCurrentTab.zip)
This is a 4.2.2 project which can easily be converted to 4.3 | bug,regression,topic:gui | low | Major |
2,474,097,429 | ollama | optimize numa behavior for large models with GPU and CPU inference - numa_balancing on GPU causes excessively slow load times | ### What is the issue?
My setup is a 4x A100 80GB, 2 TB RAM, dual Intel CPU machine running Ubuntu Server 22.04.
On a previous version of Ollama, the model llama3.1:405b loaded in a reasonable number of seconds; with the latest version this is no longer the case.
After issuing the command
ollama run llama3.1:405b
it just remains stuck on the rotating cursor.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.6 | feature request,linux,performance | low | Major |
2,474,140,907 | flutter | Flutter conductor should keep all its state within current working directory, allowing for concurrent releases | I imagine a workflow like:
```
mkdir stable_release
cd stable_release
conductor start ....
cd ..
mkdir beta_release
cd beta_release
conductor start ...
``` | P1,team-release | medium | Minor |
2,474,183,420 | PowerToys | Request for Double Pinyin Search Support in PowerToys Run | ### Description of the new feature / enhancement
Currently, PowerToys Run supports Pinyin search, which is great. However, many users utilize the Double Pinyin input method rather than pure Pinyin input. Double Pinyin can be faster than full Pinyin in certain cases and enables quicker searches. We propose adding support for Double Pinyin search within PowerToys Run to enhance the usability for users who prefer this input method.
### Scenario when this would be used?
Users who rely on Double Pinyin input will benefit from this feature when typing search queries in PowerToys Run. For example, inputting "wwxb" could be recognized as "ๅพฎไฟก" (WeChat), allowing for faster and more accurate searches. This would improve overall productivity and search efficiency for users who are accustomed to using Double Pinyin.
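The lookup itself is straightforward once each candidate has a precomputed Double Pinyin code. A minimal sketch in Python (the code table, names, and function below are purely illustrative, not from PowerToys or any real Double Pinyin scheme):

```python
# Tiny hand-made table for illustration only; a real implementation would
# generate codes from a full Double Pinyin scheme (e.g. Microsoft or Xiaohe
# layout) for every searchable entry.
DOUBLE_PINYIN_CODES = {
    "微信": "wwxb",   # WeChat, per the example above
    "记事本": "jisub",  # hypothetical code, for illustration
}


def match_double_pinyin(query: str) -> list[str]:
    """Return entries whose precomputed Double Pinyin code starts with `query`."""
    q = query.lower()
    return [name for name, code in DOUBLE_PINYIN_CODES.items()
            if code.startswith(q)]
```

The hard part is not the matching but maintaining the code table, since Double Pinyin layouts differ between input methods.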
### Supporting information
Adding support for Double Pinyin search in PowerToys Run would address the needs of users who prefer this input method. It would enhance the functionality of PowerToys by catering to a wider range of input preferences, potentially improving the user experience for those who find Double Pinyin more efficient than full Pinyin. | Needs-Triage | low | Minor |
2,474,185,862 | vscode | Git - integration displays unhelpful error message when pushing large file | Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.92.2
- OS Version: Windows 10 21H1
Pushing a large file (over Github's maximum of 100MB) results in a pop-up error message prompting the user to try pulling first:
> Can't push refs to remote. Try running "Pull" first to integrate your changes
Reviewing the actual error message by clicking through to the terminal or command output reveals the actual issue, but including the actual error message in the pop-up may be helpful to some users.
Steps to Reproduce:
1. Commit a large file
2. Push or sync
Edit: Issue remains in 1.92.2
| bug,git,github | low | Critical |
2,474,260,889 | TypeScript | Pasting is broken |
Type: <b>Bug</b>
A loading spinner appears after pasting, and then nothing is pasted.
Extension version: 5.6.20240817
VS Code version: Code 1.90.1 (Universal) (611f9bfce64f25108829dd295f54a6894e87339d, 2024-06-11T21:02:41.372Z)
OS version: Darwin x64 23.5.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz (12 x 2600)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled|
|Load (avg)|19, 12, 9|
|Memory (System)|16.00GB (1.55GB free)|
|Process Argv|--crash-reporter-id 04f6e6a6-e358-4bc9-81fd-d6fa047b4be2|
|Screen Reader|no|
|VM|0%|
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492cf:30256860
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythongtdpath:30769146
welcomedialogc:30910334
pythonnoceb:30805159
asynctok:30898717
pythonregdiag2:30936856
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
accentitlementst:30995554
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
0ee40948:31013168
pythoncenvpt:31062603
a69g1124:31058053
dvdeprecation:31068756
dwnewjupytercf:31046870
impr_priority:31102340
refactort:31108082
pythonrstrctxt:31112756
flightc:31119335
wkspc-onlycs-t:31111718
nativeloc2:31118319
wkspc-ranged-c:31118571
```
</details>
<!-- generated by issue reporter --> | Needs More Info | low | Critical |
2,474,262,003 | PowerToys | FancyZones deletes all contents of the zones after short time | ### Microsoft PowerToys version
0.80.1
### Installation method
Other (please specify in "Steps to Reproduce")
### Running as admin
Yes
### Area(s) with issue?
FancyZones
### Steps to reproduce
FancyZones deletes all contents of the zones after a short time
I have set up the zones several times, and unfortunately after a short time I found that the contents had been deleted or emptied. So I linked them again, and they were deleted again a short time later.
I am confused as to what this software is doing.
### โ๏ธ Expected Behavior
I expected all content to be retained until I relinked it
### โ Actual Behavior
Contents were emptied/deleted after a short time.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,474,297,811 | PowerToys | Text Extractor random crashes | ### Microsoft PowerToys version
0.83.0
### Installation method
GitHub, Microsoft Store
### Running as admin
No
### Area(s) with issue?
TextExtractor
### Steps to reproduce
Unfortunately this happens randomly (3 times in total over the last 4-5 weeks), so it isn't easily reproduced. When using the shortcut to bring up the interface, Windows locks up for about 15 sec and then the screen refreshes, often with several apps crashing (not always the same ones).
### โ๏ธ Expected Behavior
Shortcut should bring up the interface every time.
### โ Actual Behavior
Shortcut sometimes leads to a crash as described above.
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response,Product-Text Extractor | low | Critical |
2,474,351,975 | rust | Unnecessary alloca Without Optimization Flags |
The following code, when compiled without optimization flags (or with `-Copt-level=0`) emits LLVM IR for an 8192-byte `alloca`, which can easily cause 100% unnecessary stack overflow errors in a running executable.
```rust
fn test<const SIZE: usize>() {
    if SIZE < 4096 {
        let arr = [0u8; SIZE];
        std::hint::black_box(&arr);
    } else {
        let vec = vec![0u8; SIZE];
        std::hint::black_box(&vec);
    }
}

fn main() {
    test::<8192>();
}
```
The following is the relevant LLVM IR:
```llvm
define internal void @example::test::hb882c86c9d7a582d() unnamed_addr #1 personality ptr @rust_eh_personality !dbg !546 {
start:
  %0 = alloca [16 x i8], align 8
  %vec = alloca [24 x i8], align 8
  %arr = alloca [8192 x i8], align 1
  br label %bb2, !dbg !548
```
`%arr` only ends up being used by `bb1` below, but `bb1` has no predecessors:
```llvm
bb1: ; No predecessors!
  call void @llvm.memset.p0.i64(ptr align 1 %arr, i8 0, i64 8192, i1 false), !dbg !555
  ; call core::hint::black_box
  %_3 = call align 1 ptr @core::hint::black_box::h61dc8dd57d9b7993(ptr align 1 %arr), !dbg !556
  br label %bb5, !dbg !556
}
```
[Here's](https://godbolt.org/z/E9j8TY4nn) a Godbolt Compiler Explorer link with all of the IR. | A-codegen,T-compiler,C-optimization | low | Critical |
2,474,391,525 | pytorch | AOTAutograd not tracing AMP correctly when using PyTorch/XLA. | ### ๐ Describe the bug
Running, for example, `torch.norm` with AMP results in a tensor with the incorrect data-type. In the program below, we expect `out` to be a `float32` tensor due to [its autocast rule][1]. However, turns out the result is actually of type `float16`.
```python
@torch.compile(backend="openxla")
def foo(x):
    with torch.cuda.amp.autocast(dtype=torch.float16):
        return torch.norm(x, p=2, dim=1)


inp = torch.rand((5, 5), dtype=torch.float16, device="xla")
out = foo(inp)
# Expected: float32
# Got: float16
```
[1]: https://github.com/pytorch/pytorch/blob/afb3e5ed6a950fcb60b763b4eeddca9ad8d58283/aten/src/ATen/autocast_mode.h#L870-L879
Even though this looks like a PyTorch/XLA issue, inspecting the FX graph received from AOTAutograd, we can observe that AOT recorded `torch.ops.aten.norm.ScalarOpt_dim` instead of `torch.ops.aten.norm.ScalarOpt_dim_dtype`. Because of that, PyTorch/XLA has no access to the appended `at::kFloat` at the end of the call.
```python
class <lambda>(torch.nn.Module):
    def forward(self, arg0_1: "f16[5, 5]"):
        # File: examples/test.py:21 in foo, code: return torch.norm(x, p=2, dim=1)
        # Should be: torch.ops.aten.norm.ScalarOpt_dim_dtype(arg0_1, 2, [1], torch.float32)
        norm: "f16[5]" = torch.ops.aten.norm.ScalarOpt_dim(arg0_1, 2, [1]); arg0_1 = None
        return (norm,)
```
### Expected Behavior
I would expect AOTAutograd to record the function that is called by the autocast layer.
### Observations
1. I can't replicate this issue with `aot_eager`. The FX graph I get from AOT is using `torch.ops.aten.linalg_vector_norm` instead of `torch.ops.aten.norm.ScalarOpt_dim`. Not sure where this happened, though.
2. A similar behavior also seems to happen with `torch.nn.Conv2d` (and, possibly, its other multi-dimensional variants), where the casting of the arguments doesn't get recorded.
3. Other operations that have autocast rules should also suffer from this, but I haven't tested them all
Because of (1), it feels like it's an XLA-specific issue inside PyTorch. I don't think it's something inside PyTorch/XLA, since this is a matter of which operation is recorded in the AOTAutograd tracing.
cc @bdhirsh @mcarilli @ptrblck @leslie-fang-intel @jgong5 @ezyang @chauhang @penguinwu @zou3519 @miladm @JackCaoG @ajam
### Versions
PyTorch version: 2.5.0a0+git7128504
Is debug build: True
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.31
Python version: 3.10.14 (main, Apr 24 2024, 12:45:30) [GCC 10.2.1 20210110] (64-bit runtime)
Python platform: Linux-6.10.4-200.fc40.x86_64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
PyTorch/XLA version: [ac7fd44f520d586c3d95fc929134c20946e60806][2]
[2]: https://github.com/pytorch/xla/commit/ac7fd44f520d586c3d95fc929134c20946e60806 | triaged,module: xla,module: amp (automated mixed precision),oncall: pt2,module: aotdispatch,module: pt2-dispatcher | low | Critical |
2,474,397,214 | ollama | GLM4 tools support | GML4 support tools - https://github.com/THUDM/GLM-4/blob/main/finetune_demo/README_en.md
How to fix the template in https://ollama.com/library/glm4 to make the ollama-tools mechanism work? | feature request | low | Minor |
2,474,430,848 | godot | Missing typeerror for comparisons to null | ### Tested versions
4.3
### System information
Godot v4.3.stable - macOS 10.14.6 - GLES3 (Compatibility) - AMD Radeon R9 M370X OpenGL Engine - Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz (8 Threads)
### Issue description
```
var foo: String

if foo == 0:
	pass
```
This shows an error, `Invalid operands "String" and "int" for "==" operator.` But change the 0 to null, and the error will go away, despite it being just as invalid to compare a string to null.
### Steps to reproduce
N/A
### Minimal reproduction project (MRP)
N/A | discussion,topic:gdscript,documentation | low | Critical |
2,474,448,356 | deno | suggestion: always have the description come first in JSDocs for unstable symbols | A symbol being unstable is secondary compared to a description of what that symbol does. I suggest we move the description to be first, and any unstable notices second.
CC @thisisjofrank @crowlKats

| docs,suggestion | low | Major |
2,474,517,584 | tauri | [feat] Consider changing tauri.conf.json to tauri.config.json | ### Describe the problem
Hello.
Tauri is doing great; I simply have a suggestion for your next major version.
JavaScript software tends to mark its config files with the word "config" written in the name. This is not a rule, only the most frequent convention; of course there are other possibilities, and all are valid.
I would not care, but to be frank, days went by before I realized that "conf" was an abbreviation for "config" in the file tauri.conf.json. I did not immediately recognize this file as the config file for Tauri. Even now, I can't get my mind to read it as "config" when my eyes pass over it, maybe because where I come from, "conf" is an abbreviation for "comfort". A Tauri project brings many default files and involves terminologies from more than one ecosystem, so it is easy to become overloaded on first contact. In this scenario, it would be welcome to avoid the "conf" abbreviation, especially to be more recognizable for people whose main language is not English.
### Describe the solution you'd like
In order to ease comprehension, tauri.config.json (and its format variations) could be the expected file, instead of tauri.conf.json. It is easier to recognize, since there are more examples (astro.config.js, babel.config.js, capacitor.config.js, docusaurus.config.js, eleventy.config.js, eslint.config.js, farm.config.js, gatsby-config.js, iles.config.js, jest.config.js, lume.config.js, nativescript.config.js, next.config.js, nuxt.config.js, prettier.config.js, remix.config.js, stylelint.config.js, tailwind.config.js, tsconfig.json, vite.config.js, vitest.config.js, vue.config.js, webpack.config.js, ...)
All the best
### Alternatives considered
_No response_
### Additional context
_No response_ | type: feature request | low | Major |
2,474,542,634 | PowerToys | power Rename suggestion | ### Description of the new feature / enhancement
I would like to add text to file names/ file name extensions with Power Rename.
### Scenario when this would be used?
This would be used if you typed the file name as part of the file name extension on multiple files, swapped the file name and file name extension, or want to add text to a bunch of files.
ex. 1 text.txt2024importantfile-->2024 screenshot-->2024importantfile.txt
ex. 2: image1-->image1dogs.png
image2-->image2dogs.png

Note: I have already tried to rename the files by deleting .vsconfig from all files.
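For illustration, the rule requested in ex. 2 amounts to inserting text between a file's name and its extension. A minimal Python sketch of that transformation (the function name is hypothetical, not part of PowerRename):

```python
from pathlib import PurePath


def append_to_stem(filename: str, text: str) -> str:
    """Insert `text` between the file name (stem) and its extension."""
    p = PurePath(filename)
    return p.stem + text + p.suffix
```

Applied to a batch of files, this would turn `image1.png` into `image1dogs.png`, as in the example above.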
### Supporting information

my idea for the feature. | Needs-Triage | low | Minor |
2,474,579,273 | pytorch | Why core aten IR doesnโt contain scalar_tensor overload version of some ops, like bitwise-like ops, remainder and so on? | ### ๐ The doc issue
Many ops have a scalar_tensor overload version in native_functions.yaml, but these are not included in the core aten IR. Is this an omission, or is there some other reason?
### Suggest a potential alternative/fix
Add the scalar_tensor overload versions of these ops to the core aten IR. | triaged,module: core aten | low | Minor |
2,474,582,463 | pytorch | `torch.compile` crash while trying to remove a node | ### ๐ Describe the bug
I am trying to compile a model where part of the computed tensors are discarded. Eager mode runs successfully while the compiled model crashes.
Removing the accuracy calculation line from the `train_step` function avoids the crash; see the workaround below:
```python
def accuracy(out, labels):
    assert out.ndim == 2
    assert out.size(0) == labels.size(0)
    assert labels.ndim == 1 or (labels.ndim == 2 and labels.size(1) == 1)
    labels = labels.flatten()
    predictions = torch.argmax(out, 1)
    return (labels == predictions).sum(dtype=torch.float64) / labels.size(0)


@torch.compile
def train_step(minibatch, optimizer, model, loss_fn):
    category = "paper"
    node_features = {
        ntype: feat.float()
        for (ntype, name), feat in minibatch.node_features.items()
        if name == "feat"
    }
    labels = minibatch.labels[category].long()
    optimizer.zero_grad()
    out = model(minibatch.sampled_subgraphs, node_features)[category]
    loss = loss_fn(out, labels)
    # https://github.com/pytorch/pytorch/issues/133942
    # num_correct = accuracy(out, labels) * labels.size(0)
    num_correct = torch.zeros(1, dtype=torch.float64, device=out.device)
    loss.backward()
    optimizer.step()
    return loss.detach(), num_correct, labels.size(0)
```
### Error logs
```
root@a100cse:/localscratch/dgl-3/examples/graphbolt/pyg/labor# TORCHDYNAMO_VERBOSE=1 python ../hetero/node_classification.py
Training in pinned-pinned-cuda mode.
The dataset is already preprocessed.
Dataset loaded
/localscratch/dgl-3/python/dgl/graphbolt/impl/torch_based_feature_store.py:524: GBWarning: `DiskBasedFeature.pin_memory_()` is not supported. Leaving unmodified.
gb_warning(
Number of model parameters: 2412491
Training: 0it [00:06, ?it/s]
Traceback (most recent call last):
File "/localscratch/dgl-3/examples/graphbolt/pyg/labor/../hetero/node_classification.py", line 430, in <module>
main()
File "/localscratch/dgl-3/examples/graphbolt/pyg/labor/../hetero/node_classification.py", line 373, in main
train(train_dataloader, valid_dataloader, model, args.device)
File "/localscratch/dgl-3/examples/graphbolt/pyg/labor/../hetero/node_classification.py", line 298, in train
train_loss, train_acc, duration = train_helper(
File "/localscratch/dgl-3/examples/graphbolt/pyg/labor/../hetero/node_classification.py", line 281, in train_helper
loss, num_correct, num_samples = train_step(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 421, in _fn
return fn(*args, **kwargs)
File "/localscratch/dgl-3/examples/graphbolt/pyg/labor/../hetero/node_classification.py", line 265, in train_step
optimizer.zero_grad()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1078, in catch_errors
return callback(frame, cache_entry, hooks, frame_state, skip=1)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 919, in _convert_frame
result = inner_convert(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 456, in _convert_frame_assert
return _compile(
File "/usr/local/lib/python3.10/dist-packages/torch/_utils_internal.py", line 83, in wrapper_function
return StrobelightCompileTimeProfiler.profile_compile_time(
File "/usr/local/lib/python3.10/dist-packages/torch/_strobelight/compile_time_profiler.py", line 129, in profile_compile_time
return func(*args, **kwargs)
File "/usr/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 799, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 232, in time_wrapper
r = func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 618, in compile_inner
out_code = transform_code_object(code, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/bytecode_transformation.py", line 1184, in transform_code_object
transformations(instructions, code_options)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 177, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 564, in transform
tracer.run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2450, in run
super().run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 892, in run
while self.step():
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 804, in step
self.dispatch_table[inst.opcode](self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 496, in wrapper
return handle_graph_break(self, inst, speculation.reason)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 565, in handle_graph_break
self.output.compile_subgraph(self, reason=reason)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1122, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "/usr/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1314, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 232, in time_wrapper
r = func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1405, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1386, in call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/repro/after_dynamo.py", line 127, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
File "/usr/local/lib/python3.10/dist-packages/torch/__init__.py", line 1944, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/usr/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/compile_fx.py", line 1474, in compile_fx
return aot_autograd(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/backends/common.py", line 65, in compiler_fn
cg = aot_module_simplified(gm, example_inputs, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 945, in aot_module_simplified
compiled_fn, _ = create_aot_dispatcher_function(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 232, in time_wrapper
r = func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 678, in create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 280, in aot_dispatch_autograd
fw_module, bw_module = aot_config.partition_fn(
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/compile_fx.py", line 1406, in partition_fn
_recursive_joint_graph_passes(graph)
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/compile_fx.py", line 241, in _recursive_joint_graph_passes
joint_graph_passes(gm)
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/fx_passes/joint_graph.py", line 321, in joint_graph_passes
count += patterns.apply(graph.graph) # type: ignore[arg-type]
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/pattern_matcher.py", line 1696, in apply
entry.apply(m, graph, node) # type: ignore[arg-type]
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/pattern_matcher.py", line 1008, in apply
self.handler(match, *match.args, **match.kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/fx_passes/joint_graph.py", line 465, in mul_softmax_pattern
match.replace_by_example(repl, [inp, other])
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/pattern_matcher.py", line 239, in replace_by_example
ReplacementPatternEntry.replace_with_graph(
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/pattern_matcher.py", line 1163, in replace_with_graph
match.erase_nodes(graph)
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/pattern_matcher.py", line 204, in erase_nodes
graph.erase_node(n)
File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph.py", line 1021, in erase_node
raise RuntimeError(f'Tried to erase Node {to_erase} but it still had {len(to_erase.users)} '
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: Tried to erase Node mul_9 but it still had 1 users in the graph: {argmax: None}!
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
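A possible temporary workaround, assuming the failure really originates in inductor's joint-graph pattern matcher (the trace above points at `mul_softmax_pattern`), is to disable that pass before compiling. This is a sketch of a mitigation, not a fix for the underlying bug:

```python
# Workaround sketch: disable inductor's fx pattern-matcher passes, which is
# where the erase_node failure above is raised. This trades the pattern-based
# fusions for avoiding the crash.
import torch._inductor.config

torch._inductor.config.pattern_matcher = False
```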
### Minified repro
Run https://github.com/dmlc/dgl/pull/7722 after installing pytorch geometric and dgl.
```python
from math import inf
import torch
from torch import tensor, device
import torch.fx as fx
import torch._dynamo
from torch._dynamo.testing import rand_strided
from torch._dynamo.debug_utils import run_fwd_maybe_bwd
import torch._dynamo.config
import torch._inductor.config
import torch._functorch.config
import torch.fx.experimental._config
from torch.nn import *
class Repro(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.L__self___L__model___layers_2_dropout = Dropout(p=0.5, inplace=False)
        self.L__self___loss_fn = CrossEntropyLoss()

    def forward(self, L_L_labels_ : torch.Tensor, x_7, labels):
        l_l_labels_ = L_L_labels_
        out_30 = self.L__self___L__model___layers_2_dropout(x_7); x_7 = None
        loss = self.L__self___loss_fn(out_30, l_l_labels_); l_l_labels_ = None
        predictions = torch.argmax(out_30, 1); out_30 = None
        eq = labels == predictions; labels = predictions = None
        sum_1 = eq.sum(dtype = torch.float64); eq = None
        truediv_15 = sum_1 / 1024; sum_1 = None
        num_correct = truediv_15 * 1024; truediv_15 = None
        return [loss, num_correct]

mod = Repro()

def load_args(reader):
    buf0 = reader.storage('a1b9334d88e7a05117e3540e1976a55e42a3f277', 8192, device=device(type='cuda', index=0), dtype_hint=torch.int64)
    reader.tensor(buf0, (1024,), dtype=torch.int64, is_leaf=True)  # L_L_labels_
    buf1 = reader.storage('3a396a0bd2d1e5fc6bb833c38aef3e0dd147a109', 626688, device=device(type='cuda', index=0))
    reader.tensor(buf1, (1024, 153), requires_grad=True)  # x_7
    reader.tensor(buf0, (1024,), dtype=torch.int64, is_leaf=True)  # labels

load_args._version = 0

if __name__ == '__main__':
    from torch._dynamo.repro.after_dynamo import run_repro
    run_repro(mod, load_args, accuracy=False, command='run',
        save_dir='/localscratch/dgl-3/examples/graphbolt/pyg/labor/checkpoints', autocast=False, backend='inductor')
```
### Versions
Collecting environment information...
PyTorch version: 2.4.0a0+3bcc3cddb5.nv24.07
Is debug build: False
CUDA used to build PyTorch: 12.5
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-117-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
Nvidia driver version: 550.54.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7702 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU max MHz: 2183.5930
CPU min MHz: 1500.0000
BogoMIPS: 4000.13
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 64 MiB (128 instances)
L3 cache: 512 MiB (32 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] onnx==1.16.0
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.0.0+989adb9a2
[pip3] torch==2.4.0a0+3bcc3cddb5.nv24.7
[pip3] torch_geometric==2.5.3
[pip3] torch-tensorrt==2.5.0a0
[pip3] torchdata==0.7.1
[pip3] torchmetrics==1.4.1
[pip3] torchvision==0.19.0a0
[conda] Could not collect
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,oncall: pt2,module: inductor | low | Critical |
2,474,608,867 | pytorch | torch.compile "run at compile time" decorator. | ### ๐ The feature, motivation and pitch
Let's say I have something like this
```python
@cached
def create_pi_expensive():
    return torch.ones((), device='cuda', dtype=torch.float16) * 3.14159265358979323846

def f(x):
    y = create_pi_expensive()
    return x + y
```
where we want to cache something at compile time and then reuse it.
Today, this is not so easy to do. If you use `@lru_cache`, we completely ignore the decorator and run the function every time (not sure this is the right behavior, fwiw). OTOH, even if you implement a manual caching mechanism, we will at the very least need to compile your function twice.
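For reference, the manual caching mechanism mentioned above can be sketched like this (a minimal CPU sketch so it runs anywhere; `_cache` and the function names are illustrative, not an existing API):

```python
# Manual compile-time-style cache: the tensor is built once, then reused on
# every subsequent call.
import math

import torch

_cache = {}

def create_pi():
    if "pi" not in _cache:
        _cache["pi"] = torch.ones(()) * math.pi  # built only on the first call
    return _cache["pi"]

def f(x):
    return x + create_pi()
```

Under `torch.compile`, the first call traces through the cache miss and a later call sees the cache hit, which is exactly why the function ends up compiled twice today.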
I think it would make sense to allow for a "comptime" annotation.
cc: @ezyang @yanboliang @anijain2305
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec | triaged,oncall: pt2,module: dynamo | low | Minor |
2,474,661,038 | rust | A different case of "implementation of `FnOnce` is not general enough" | I tried this code:
```rust
async fn new_trace_lde<E: FieldElement<BaseField = Felt>>(
    &self,
    trace_info: &TraceInfo,
    main_trace: &ColMatrix<Felt>,
    domain: &StarkDomain<Felt>,
) -> (Self::TraceLde<E>, TracePolyTable<E>) {
    // Error: implementation of FnOnce is not general enough
    WebGPUTraceLde::new(trace_info, main_trace, domain, self.webgpu_hash_fn).await
}

// WebGPUTraceLde::new
pub async fn new(
    trace_info: &TraceInfo,
    main_trace: &ColMatrix<Felt>,
    domain: &StarkDomain<Felt>,
    webgpu_hash_fn: HashFn,
) -> (Self, TracePolyTable<E>) {
    // extend the main execution trace and build a Merkle tree from the extended trace
    let (main_segment_lde, main_segment_tree, main_segment_polys) =
        build_trace_commitment(main_trace, domain, webgpu_hash_fn).await;

    let trace_poly_table = TracePolyTable::new(main_segment_polys);
    let trace_lde = WebGPUTraceLde {
        main_segment_lde,
        main_segment_tree,
        aux_segment_lde: None,
        aux_segment_tree: None,
        blowup: domain.trace_to_lde_blowup(),
        trace_info: trace_info.clone(),
        webgpu_hash_fn,
    };

    (trace_lde, trace_poly_table)
}
```
I expected to see this happen: *No error*
Instead, this happened:
```
implementation of `FnOnce` is not general enough
closure with signature `fn(&'0 [Felt]) -> Vec<Felt>` must implement `FnOnce<(&'1 [Felt],)>`, for any two lifetimes `'0` and `'1`...
...but it actually implements `FnOnce<(&[Felt],)>` (rustc)*
```
### Meta
`rustc --version --verbose`:
```
rustc 1.80.1 (3f5fd8dd4 2024-08-06)
binary: rustc
commit-hash: 3f5fd8dd41153bc5fdca9427e9e05be2c767ba23
commit-date: 2024-08-06
host: aarch64-apple-darwin
release: 1.80.1
LLVM version: 18.1.7
```
<details><summary>Full compiler error</summary>
<p>
```
error: implementation of `FnOnce` is not general enough
--> prover/src/gpu/webgpu/mod.rs:177:49
|
177 | ) -> (Self::TraceLde<E>, TracePolyTable<E>) {
| _________________________________________________^
178 | | WebGPUTraceLde::new(trace_info, main_trace, domain, self.webgpu_hash_fn).await
179 | | }
| |_____^ implementation of `FnOnce` is not general enough
|
= note: closure with signature `fn(&'0 [Felt]) -> Vec<Felt>` must implement `FnOnce<(&'1 [Felt],)>`, for any two lifetimes `'0` and `'1`...
= note: ...but it actually implements `FnOnce<(&[Felt],)>`
```
</p>
</details>
| C-bug,T-types | low | Critical |
2,474,662,767 | material-ui | 'stylis-plugin-rtl'; No matching version found for @typescript-eslint/parser@5.0.1. | ### Related page
https://mui.com/material-ui/customization/right-to-left/
### Kind of issue
Other
### Issue description
npm install stylis-plugin-rtl
**npm error code ETARGET**
**npm error notarget No matching version found for @typescript-eslint/parser@5.0.1.**
npm error notarget In most cases you or one of your dependencies are requesting
npm error notarget a package version that doesn't exist.
### Context
Implementing RTL in my project; the TextField label stays on the left side while it is shown on the border/frame
<img width="831" alt="image" src="https://github.com/user-attachments/assets/ffc138cc-0eb2-44d8-8407-1d9b32aa1b48">
**Search keywords**: stylis-plugin-rtl | external dependency,customization: theme,customization: css | low | Critical |
2,474,665,951 | godot | Godot 4.3 Steam version fails to integrate with Blender and Visual studio code and sometimes crashes on Ubuntu Linux 24.04 | ### Tested versions
- Reproducible on Godot 4.3 stable Steam version , on Ubuntu Linux OS v 24.04
### System information
Ubuntu 24.04 - Godot 4.3 stable - Steam version
### Issue description
Hello everyone:
I'm testing Godot 4.3 (obtained from Steam) on Ubuntu Linux 24.04 and trying to integrate it with Visual Studio Code and Blender (also the Steam version). Following the configuration steps attached below, I found that:
1. When trying to open a script, Visual Studio Code DOES NOT OPEN even though I configured both the editor and Godot.
2. When passing the path of the Blender executable (located in Steam) I get the following error (see attached image):
Unexpected --version output from Blender executable at: /home/${USER}/snap/steam/common/.local/share/Steam/steamapps/common/Blender/blender

Where ${USER} is the user
3. Godot sometimes seems to freeze and crashes the system
Thank you for your attention and excuse the English
### Steps to reproduce
For integrating with Visual Code:
1. In Visual studio Code
1.1 Install "godot-tools" extension
1.2 In the settings of the estension configure:
1.2.1 Godot Tools > Editor Path:Godot4 :
/home/${USER}/snap/steam/common/.local/share/Steam/steamapps/common/Godot Engine/godot.x11.opt.tools.64
1.2.2 Godot Tools > Lsp:Server Host : 127.0.0.1
1.2.3 Godot Tools > Lsp:Server Port : 6008
1.3 Restart Visual studio Code
2. In Godot
2.1 Go to Editor > Editor Settings > External
2.2 I used this settings:

2.3 Go to Editor > Network > Language Server
2.4 I used this settings:

2.5 Restart Godot
For integrating with Blender:
In Godot
1. Go to Project > Project settings > FileSystem > Import
2. I used this settings:

3. Go to Editor > Editor Settings > FileSystem > Import
4. I used this settings:

5. Restart Godot
### Minimal reproduction project (MRP)
[FPS_Multiplayer_v1.zip](https://github.com/user-attachments/files/16668468/FPS_Multiplayer_v1.zip)
| bug,needs testing,crash | low | Critical |
2,474,674,617 | three.js | OrbitControls: The position being undefined causes an error with position.x. | ### Description
<img width="985" alt="image" src="https://github.com/user-attachments/assets/db842be3-d0ca-4af8-b702-83db7488a9b8">
During dragging, there is an occasional issue where the position is undefined. This has occurred with both the mouse and the touchpad.
### Reproduction steps
No consistent reproduction path has been found, but it has occurred many times.
### Code
```js
if ( position ) {
	onTouchStart( { pointerId, pageX: position.x, pageY: position.y } );
}
```
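As a self-contained illustration of the guard (the `positions` map and `dispatchTouchStart` wrapper here are hypothetical names, mirroring how the controls look up a tracked pointer position before dispatching):

```javascript
// Skip pointer events whose tracked position is missing instead of
// dereferencing `position.x` on undefined.
function dispatchTouchStart(positions, pointerId, onTouchStart) {
	const position = positions.get(pointerId);
	if (!position) return false; // stale/unknown pointer: ignore the event
	onTouchStart({ pointerId, pageX: position.x, pageY: position.y });
	return true;
}

// usage
const positions = new Map([[1, { x: 10, y: 20 }]]);
const seen = [];
dispatchTouchStart(positions, 1, (e) => seen.push(e));  // handled
dispatchTouchStart(positions, 99, (e) => seen.push(e)); // skipped, no crash
```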
### Live example
* [jsfiddle-latest-release WebGLRenderer](https://jsfiddle.net/3mrkqyea/)
* [jsfiddle-dev WebGLRenderer](https://jsfiddle.net/gcqx26jv/)
* [jsfiddle-latest-release WebGPURenderer](https://jsfiddle.net/8L2jkmx7/)
* [jsfiddle-dev WebGPURenderer](https://jsfiddle.net/L3n1w4yh/)
### Screenshots
_No response_
### Version
^0.161.0
### Device
Desktop
### Browser
Chrome
### OS
Windows, MacOS | Addons,Needs Investigation | low | Critical |
2,474,681,598 | godot | Surprising edge cases with export suffix hints | ### Tested versions
4.3
### System information
Windows 10
### Issue description
Noting some edge cases around export suffix/unit hints.
**Test script:**
```gdscript
extends Node
@export_category("Possibly nonsensical")
@export_custom(PROPERTY_HINT_NONE, "suffix:units") var _basis : Basis
@export_custom(PROPERTY_HINT_NONE, "suffix:units") var _quaternion : Quaternion
@export_category("Missing fields (Also possibly nonsensical)")
@export_custom(PROPERTY_HINT_NONE, "suffix:units") var _transform2d : Transform2D
@export_custom(PROPERTY_HINT_NONE, "suffix:units") var _transform3d : Transform3D
@export_custom(PROPERTY_HINT_NONE, "suffix:units") var _plane : Plane
@export_custom(PROPERTY_HINT_NONE, "suffix:units") var _projection : Projection
@export_category("Supported and Sensible")
@export_custom(PROPERTY_HINT_NONE, "suffix:units") var _float : float
@export_custom(PROPERTY_HINT_NONE, "suffix:units") var _int : int
@export_custom(PROPERTY_HINT_NONE, "suffix:units") var _vec2 : Vector2
@export_custom(PROPERTY_HINT_NONE, "suffix:units") var _vec2I : Vector2i
@export_custom(PROPERTY_HINT_NONE, "suffix:units") var _vec3 : Vector3
@export_custom(PROPERTY_HINT_NONE, "suffix:units") var _vec3I : Vector3i
@export_custom(PROPERTY_HINT_NONE, "suffix:units") var _vec4 : Vector4
@export_custom(PROPERTY_HINT_NONE, "suffix:units") var _vec4I : Vector4i
@export_custom(PROPERTY_HINT_NONE, "suffix:units") var _rect2 : Rect2
@export_custom(PROPERTY_HINT_NONE, "suffix:units") var _aabb : AABB
@export_category("Not supported (Not Numeric)")
@export_custom(PROPERTY_HINT_NONE, "suffix:units") var _bool : bool
@export_custom(PROPERTY_HINT_NONE, "suffix:units") var _color : Color
@export_custom(PROPERTY_HINT_NONE, "suffix:units") var _string : String
@export_custom(PROPERTY_HINT_NONE, "suffix:units") var _nodepath : NodePath
```
**Output:**


**Surprising Behavior:**
**Missing field suffixes:**
`Transform2D`, `Transform3D`, `Plane`, and `Projection` only have unit suffixes on the last input field in each row. Other multiline types like `Rect2` and `AABB` do have every field suffixed.
Strings are not able to be suffixed. (I assume this was outside of the original scope of adding unit suffixes to numeric types; but as a user it is something I want anyway)
**Suffixes for unitless or mixed-unit types:**
`Transform2D`, `Transform3D`, `Plane`, `Projection`, `Basis`, and `Quaternion` arguably do not make sense to have units - they have defined meanings for each field, and mostly are unitless (my math may be off for some of these). I don't think it makes sense to restrict the use of the suffix export, though. Types can be reused for different purposes.
**Possible Fix:**
String should arguably be able to be suffixed.
`Transform2D`, `Transform3D`, `Plane`, and `Projection` should have suffixes on each subfield.
### Steps to reproduce
Open the MRP and select the only node in the scene. Look at its inspector.
### Minimal reproduction project (MRP)
[mrp-suffix-hints.zip](https://github.com/user-attachments/files/16668914/mrp-suffix-hints.zip) | enhancement,discussion,topic:gdscript,topic:editor | low | Minor |
2,474,697,070 | pytorch | Unable to compile model (Graph break) | ### ๐ Describe the bug
When trying to compile a model I received the error `torch._dynamo.exc.Unsupported: Graph break due to unsupported Python builtin _abc._abc_instancecheck. Please file an issue on GitHub so the PyTorch team can add support for it.`
The model is a PyTorch Geometric GNN with [GATv2Conv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GATv2Conv.html) layers. More info can be found in [this issue](https://github.com/pyg-team/pytorch_geometric/issues/9603#issuecomment-2297931613).
### Error logs
```
E0819 22:05:31.170000 138952784099136 torch/fx/experimental/recording.py:281] [0/0] failed while running defer_runtime_assert(*(Eq(s1 + u0, s1 + u3), '/home/usevitch/mambaforge/envs/neurosub_debug/lib/pyt
hon3.11/site-packages/torch/__init__.py:1318'), **{'fx_node': None})
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] failed to eagerly compile backwards for dynamic, suppressing in case backwards not needed
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] Traceback (most recent call last):
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_fu
nctorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 547, in aot_dispatch_autograd
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] compiled_bw_func = aot_config.bw_compiler(
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dy
namo/backends/common.py", line 47, in _wrapped_bw_compiler
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return disable(disable(bw_compiler)(*args, **kwargs))
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dy
namo/eval_frame.py", line 600, in _fn
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return fn(*args, **kwargs)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_ut
ils_internal.py", line 84, in wrapper_function
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return StrobelightCompileTimeProfiler.profile_compile_time(
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_st
robelight/compile_time_profiler.py", line 129, in profile_compile_time
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return func(*args, **kwargs)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dy
namo/utils.py", line 231, in time_wrapper
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] r = func(*args, **kwargs)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_in
ductor/compile_fx.py", line 1454, in bw_compiler
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return inner_compile(
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dy
namo/repro/after_aot.py", line 84, in debug_wrapper
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] inner_compiled_fn = compiler_fn(gm, example_inputs)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_in
ductor/debug.py", line 304, in inner
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return fn(*args, **kwargs)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/contextlib.py", line 81
, in inner
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return func(*args, **kwds)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/contextlib.py", line 81
, in inner
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return func(*args, **kwds)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_in
ductor/compile_fx.py", line 527, in compile_fx_inner
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] compiled_graph = fx_codegen_and_compile(
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/contextlib.py", line 81
, in inner
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return func(*args, **kwds)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_in
ductor/compile_fx.py", line 738, in fx_codegen_and_compile
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] fake_mode = fake_tensor_prop(gm, example_inputs)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_in
ductor/compile_fx.py", line 379, in fake_tensor_prop
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] FakeTensorProp(gm, mode=fake_mode).propagate_dont_convert_inputs(
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/fx/
passes/fake_tensor_prop.py", line 69, in propagate_dont_convert_inputs
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return super().run(*args)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/fx/
interpreter.py", line 146, in run
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] self.env[node] = self.run_node(node)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/fx/passes/fake_tensor_prop.py", line 37, in run_node
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] result = super().run_node(n)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/fx/interpreter.py", line 203, in run_node
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return getattr(self, n.op)(n.target, args, kwargs)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/fx/interpreter.py", line 275, in call_function
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return target(*args, **kwargs)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_ops.py", line 667, in __call__
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return self_._op(*args, **kwargs)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/utils/_stats.py", line 21, in wrapper
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return fn(*args, **kwargs)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1061, in __torch_dispatch__
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return self.dispatch(func, types, args, kwargs)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1450, in dispatch
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return self._cached_dispatch_impl(func, types, args, kwargs)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1153, in _cached_dispatch_impl
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] output = self._dispatch_impl(func, types, args, kwargs)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1671, in _dispatch_impl
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return maybe_propagate_real_tensors(fast_impl(self, *args, **kwargs))
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_subclasses/fake_impls.py", line 1062, in fast_binary_impl
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] final_shape = infer_size(final_shape, shape)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_subclasses/fake_impls.py", line 1016, in infer_size
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] torch._check(
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/__init__.py", line 1353, in _check
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] _check_with(RuntimeError, cond, message)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/__init__.py", line 1318, in _check_with
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] if expect_true(cond):
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 946, in expect_true
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return a.node.expect_true(frame.f_code.co_filename, frame.f_lineno)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py", line 435, in expect_true
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return self.shape_env.defer_runtime_assert(
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/fx/experimental/recording.py", line 245, in wrapper
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return fn(*args, **kwargs)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5338, in defer_runtime_assert
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] assert not self.runtime_asserts_frozen, expr
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] AssertionError: Eq(s1 + u0, s1 + u3)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0]
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] While executing %mul_21 : [num_users=1] = call_function[target=torch.ops.aten.mul.Tensor](args = (%gather, %index_19), kwargs = {})
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] Original traceback:
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/code/python/neurosub/compile_error_mwe.py", line 24, in forward
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] x = self.layers[-1](x, edge_index)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return forward_call(*args, **kwargs)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch_geometric/nn/conv/gatv2_conv.py", line 329, in forward
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] out = self.propagate(edge_index, x=(x_l, x_r), alpha=alpha)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/tmp/torch_geometric.nn.conv.gatv2_conv_GATv2Conv_propagate_ch0xd74l.py", line 176, in propagate
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] out = self.message(
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch_geometric/nn/conv/gatv2_conv.py", line 375, in message
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0] return x_j * alpha.unsqueeze(-1)
W0819 22:05:31.171000 138952784099136 torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:551] [0/0]
Traceback (most recent call last):
File "/home/usevitch/code/python/neurosub/compile_error_mwe.py", line 41, in <module>
out = model(data.x, data.edge_index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 433, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/usevitch/code/python/neurosub/compile_error_mwe.py", line 19, in forward
def forward(self, x, edge_index):
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/nn/modules/container.py", line 295, in __getitem__
return self.__class__(list(self._modules.values())[idx])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/nn/modules/container.py", line 281, in __init__
self += modules
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/nn/modules/container.py", line 322, in __iadd__
return self.extend(modules)
^^^^^^^^^^^^^^^^^^^^
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/nn/modules/container.py", line 399, in extend
if not isinstance(modules, container_abcs.Iterable):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1116, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 472, in __call__
return _compile(
^^^^^^^^^
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_utils_internal.py", line 84, in wrapper_function
return StrobelightCompileTimeProfiler.profile_compile_time(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_strobelight/compile_time_profiler.py", line 129, in profile_compile_time
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 817, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 636, in compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1185, in transform_code_object
transformations(instructions, code_options)
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 178, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 582, in transform
tracer.run()
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2451, in run
super().run()
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 893, in run
while self.step():
^^^^^^^^^^^
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 805, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 499, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2059, in CALL
self.call_function(fn, args, kwargs)
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 743, in call_function
self.push(fn.call_function(self, args, kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 665, in call_function
unimplemented(msg)
File "/home/usevitch/mambaforge/envs/neurosub_debug/lib/python3.11/site-packages/torch/_dynamo/exc.py", line 221, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: Graph break due to unsupported Python builtin _abc._abc_instancecheck. Please file an issue on GitHub so the PyTorch team can add support for it.
from user code:
File "<frozen abc>", line 119, in __instancecheck__
```

### Minified repro
```python
import torch
from torch import tensor, device
import torch.fx as fx
from torch._dynamo.testing import rand_strided
from math import inf
import torch._inductor.inductor_prims
import torch._dynamo.config
import torch._inductor.config
import torch._functorch.config
import torch.fx.experimental._config
torch._dynamo.config.assume_static_by_default = False
torch._dynamo.config.capture_dynamic_output_shape_ops = True
torch._functorch.config.unlift_effect_tokens = True
isolate_fails_code_str = None
# torch version: 2.4.0+cu121
# torch cuda version: 12.1
# torch git version: e4ee3be4063b7c430974252fdf7db42273388d86
# CUDA Info:
# nvcc: NVIDIA (R) Cuda compiler driver
# Copyright (c) 2005-2023 NVIDIA Corporation
# Built on Tue_Aug_15_22:02:13_PDT_2023
# Cuda compilation tools, release 12.2, V12.2.140
# Build cuda_12.2.r12.2/compiler.33191640_0
# GPU Hardware Info:
# NVIDIA GeForce RTX 4090 : 2
from torch.nn import *
class Repro(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, primals_26, sym_size_int_1, primals_5, primals_6, primals_11, primals_12, primals_17, primals_18, primals_23, primals_27, addmm, addmm_1, select_2, select_3, full, exp, index_4, div, full_2, scatter_add_1, mul_5, index_7, add_6, exp_1, index_10, div_1, scatter_add_3, mul_11, index_13, add_11, exp_2, index_16, div_2, scatter_add_5, mul_17, index_19, add_16, exp_3, index_22, div_3, full_11, permute_8, permute_12, permute_16, permute_20, permute_24, permute_28, tangents_1):
sum_5 = torch.ops.aten.sum.dim_IntList(tangents_1, [0], True)
view_32 = torch.ops.aten.view.default(sum_5, [2]); sum_5 = None
view_33 = torch.ops.aten.view.default(tangents_1, [primals_26, 1, 2]); tangents_1 = None
view_5 = torch.ops.aten.view.default(select_2, [-1, 1, 1])
expand_11 = torch.ops.aten.expand.default(view_5, [sym_size_int_1, 1, 2])
gather = torch.ops.aten.gather.default(view_33, 0, expand_11); view_33 = expand_11 = None
mul_21 = torch.ops.aten.mul.Tensor(gather, index_19); index_19 = None
unsqueeze_3 = torch.ops.aten.unsqueeze.default(div_3, -1)
mul_22 = torch.ops.aten.mul.Tensor(gather, unsqueeze_3); gather = unsqueeze_3 = None
sum_6 = torch.ops.aten.sum.dim_IntList(mul_21, [2], True); mul_21 = None
squeeze = torch.ops.aten.squeeze.dim(sum_6, -1); sum_6 = None
index_put = torch.ops.aten.index_put.default(full_11, [select_3], mul_22, True); mul_22 = None
div_5 = torch.ops.aten.div.Tensor(div_3, index_22); div_3 = None
neg = torch.ops.aten.neg.default(squeeze)
mul_23 = torch.ops.aten.mul.Tensor(neg, div_5); neg = div_5 = None
div_6 = torch.ops.aten.div.Tensor(squeeze, index_22); squeeze = index_22 = None
index_put_1 = torch.ops.aten.index_put.default(full, [select_2], mul_23, True); mul_23 = None
view_3 = torch.ops.aten.view.default(select_2, [-1, 1])
expand = torch.ops.aten.expand.default(view_3, [sym_size_int_1, 1]); view_3 = None
gather_1 = torch.ops.aten.gather.default(index_put_1, 0, expand); index_put_1 = None
add_19 = torch.ops.aten.add.Tensor(div_6, gather_1); div_6 = gather_1 = None
mul_24 = torch.ops.aten.mul.Tensor(add_19, exp_3); add_19 = exp_3 = None
unsqueeze_4 = torch.ops.aten.unsqueeze.default(mul_24, -1); mul_24 = None
expand_12 = torch.ops.aten.expand.default(unsqueeze_4, [sym_size_int_1, 1, 2]); unsqueeze_4 = None
gt_3 = torch.ops.aten.gt.Scalar(add_16, 0)
mul_18 = torch.ops.aten.mul.Tensor(add_16, 0.2)
where_3 = torch.ops.aten.where.self(gt_3, add_16, mul_18); add_16 = mul_18 = None
mul_25 = torch.ops.aten.mul.Tensor(expand_12, where_3); where_3 = None
mul_26 = torch.ops.aten.mul.Tensor(expand_12, primals_23); expand_12 = primals_23 = None
sum_7 = torch.ops.aten.sum.dim_IntList(mul_25, [0], True); mul_25 = None
mul_27 = torch.ops.aten.mul.Tensor(mul_26, 0.2)
where_4 = torch.ops.aten.where.self(gt_3, mul_26, mul_27); gt_3 = mul_26 = mul_27 = None
index_put_2 = torch.ops.aten.index_put.default(full_11, [select_2], where_4, True)
index_put_3 = torch.ops.aten.index_put.default(full_11, [select_3], where_4, True); full_11 = where_4 = None
add_20 = torch.ops.aten.add.Tensor(index_put, index_put_3); index_put = index_put_3 = None
view_34 = torch.ops.aten.view.default(index_put_2, [primals_26, 2]); index_put_2 = None
mm = torch.ops.aten.mm.default(view_34, permute_8); permute_8 = None
permute_9 = torch.ops.aten.permute.default(view_34, [1, 0])
mm_1 = torch.ops.aten.mm.default(permute_9, mul_17); permute_9 = None
permute_10 = torch.ops.aten.permute.default(mm_1, [1, 0]); mm_1 = None
sum_8 = torch.ops.aten.sum.dim_IntList(view_34, [0], True); view_34 = None
view_35 = torch.ops.aten.view.default(sum_8, [2]); sum_8 = None
permute_11 = torch.ops.aten.permute.default(permute_10, [1, 0]); permute_10 = None
view_36 = torch.ops.aten.view.default(add_20, [primals_26, 2]); add_20 = None
mm_2 = torch.ops.aten.mm.default(view_36, permute_12); permute_12 = None
permute_13 = torch.ops.aten.permute.default(view_36, [1, 0])
mm_3 = torch.ops.aten.mm.default(permute_13, mul_17); permute_13 = mul_17 = None
permute_14 = torch.ops.aten.permute.default(mm_3, [1, 0]); mm_3 = None
sum_9 = torch.ops.aten.sum.dim_IntList(view_36, [0], True); view_36 = None
view_37 = torch.ops.aten.view.default(sum_9, [2]); sum_9 = None
add_21 = torch.ops.aten.add.Tensor(mm, mm_2); mm = mm_2 = None
permute_15 = torch.ops.aten.permute.default(permute_14, [1, 0]); permute_14 = None
view_23 = torch.ops.aten.view.default(scatter_add_5, [-1, 4]); scatter_add_5 = None
add_13 = torch.ops.aten.add.Tensor(view_23, primals_18); view_23 = primals_18 = None
mul_16 = torch.ops.aten.mul.Tensor(add_13, 0.7071067811865476)
erf_2 = torch.ops.aten.erf.default(mul_16); mul_16 = None
add_14 = torch.ops.aten.add.Tensor(erf_2, 1); erf_2 = None
mul_29 = torch.ops.aten.mul.Tensor(add_14, 0.5); add_14 = None
mul_30 = torch.ops.aten.mul.Tensor(add_13, add_13)
mul_31 = torch.ops.aten.mul.Tensor(mul_30, -0.5); mul_30 = None
exp_4 = torch.ops.aten.exp.default(mul_31); mul_31 = None
mul_32 = torch.ops.aten.mul.Tensor(exp_4, 0.3989422804014327); exp_4 = None
mul_33 = torch.ops.aten.mul.Tensor(add_13, mul_32); add_13 = mul_32 = None
add_23 = torch.ops.aten.add.Tensor(mul_29, mul_33); mul_29 = mul_33 = None
mul_34 = torch.ops.aten.mul.Tensor(add_21, add_23); add_21 = add_23 = None
sum_10 = torch.ops.aten.sum.dim_IntList(mul_34, [0], True)
view_38 = torch.ops.aten.view.default(sum_10, [4]); sum_10 = None
view_39 = torch.ops.aten.view.default(mul_34, [primals_26, 1, 4]); mul_34 = None
expand_2 = torch.ops.aten.expand.default(view_5, [sym_size_int_1, 1, 4]); view_5 = None
gather_2 = torch.ops.aten.gather.default(view_39, 0, expand_2); view_39 = None
mul_35 = torch.ops.aten.mul.Tensor(gather_2, index_13); index_13 = None
unsqueeze_2 = torch.ops.aten.unsqueeze.default(div_2, -1)
mul_36 = torch.ops.aten.mul.Tensor(gather_2, unsqueeze_2); gather_2 = unsqueeze_2 = None
sum_11 = torch.ops.aten.sum.dim_IntList(mul_35, [2], True); mul_35 = None
squeeze_1 = torch.ops.aten.squeeze.dim(sum_11, -1); sum_11 = None
index_put_4 = torch.ops.aten.index_put.default(full_2, [select_3], mul_36, True); mul_36 = None
div_8 = torch.ops.aten.div.Tensor(div_2, index_16); div_2 = None
neg_1 = torch.ops.aten.neg.default(squeeze_1)
mul_37 = torch.ops.aten.mul.Tensor(neg_1, div_8); neg_1 = div_8 = None
div_9 = torch.ops.aten.div.Tensor(squeeze_1, index_16); squeeze_1 = index_16 = None
index_put_5 = torch.ops.aten.index_put.default(full, [select_2], mul_37, True); mul_37 = None
gather_3 = torch.ops.aten.gather.default(index_put_5, 0, expand); index_put_5 = None
add_24 = torch.ops.aten.add.Tensor(div_9, gather_3); div_9 = gather_3 = None
mul_38 = torch.ops.aten.mul.Tensor(add_24, exp_2); add_24 = exp_2 = None
unsqueeze_5 = torch.ops.aten.unsqueeze.default(mul_38, -1); mul_38 = None
expand_13 = torch.ops.aten.expand.default(unsqueeze_5, [sym_size_int_1, 1, 4]); unsqueeze_5 = None
gt_2 = torch.ops.aten.gt.Scalar(add_11, 0)
mul_12 = torch.ops.aten.mul.Tensor(add_11, 0.2)
where_2 = torch.ops.aten.where.self(gt_2, add_11, mul_12); add_11 = mul_12 = None
mul_39 = torch.ops.aten.mul.Tensor(expand_13, where_2); where_2 = None
mul_40 = torch.ops.aten.mul.Tensor(expand_13, primals_17); expand_13 = primals_17 = None
sum_12 = torch.ops.aten.sum.dim_IntList(mul_39, [0], True); mul_39 = None
mul_41 = torch.ops.aten.mul.Tensor(mul_40, 0.2)
where_5 = torch.ops.aten.where.self(gt_2, mul_40, mul_41); gt_2 = mul_40 = mul_41 = None
index_put_6 = torch.ops.aten.index_put.default(full_2, [select_2], where_5, True)
index_put_7 = torch.ops.aten.index_put.default(full_2, [select_3], where_5, True); where_5 = None
add_25 = torch.ops.aten.add.Tensor(index_put_4, index_put_7); index_put_4 = index_put_7 = None
view_40 = torch.ops.aten.view.default(index_put_6, [primals_26, 4]); index_put_6 = None
mm_4 = torch.ops.aten.mm.default(view_40, permute_16); permute_16 = None
permute_17 = torch.ops.aten.permute.default(view_40, [1, 0])
mm_5 = torch.ops.aten.mm.default(permute_17, mul_11); permute_17 = None
permute_18 = torch.ops.aten.permute.default(mm_5, [1, 0]); mm_5 = None
sum_13 = torch.ops.aten.sum.dim_IntList(view_40, [0], True); view_40 = None
view_41 = torch.ops.aten.view.default(sum_13, [4]); sum_13 = None
permute_19 = torch.ops.aten.permute.default(permute_18, [1, 0]); permute_18 = None
view_42 = torch.ops.aten.view.default(add_25, [primals_26, 4]); add_25 = None
mm_6 = torch.ops.aten.mm.default(view_42, permute_20); permute_20 = None
permute_21 = torch.ops.aten.permute.default(view_42, [1, 0])
mm_7 = torch.ops.aten.mm.default(permute_21, mul_11); permute_21 = mul_11 = None
permute_22 = torch.ops.aten.permute.default(mm_7, [1, 0]); mm_7 = None
sum_14 = torch.ops.aten.sum.dim_IntList(view_42, [0], True); view_42 = None
view_43 = torch.ops.aten.view.default(sum_14, [4]); sum_14 = None
add_26 = torch.ops.aten.add.Tensor(mm_4, mm_6); mm_4 = mm_6 = None
permute_23 = torch.ops.aten.permute.default(permute_22, [1, 0]); permute_22 = None
view_15 = torch.ops.aten.view.default(scatter_add_3, [-1, 4]); scatter_add_3 = None
add_8 = torch.ops.aten.add.Tensor(view_15, primals_12); view_15 = primals_12 = None
mul_10 = torch.ops.aten.mul.Tensor(add_8, 0.7071067811865476)
erf_1 = torch.ops.aten.erf.default(mul_10); mul_10 = None
add_9 = torch.ops.aten.add.Tensor(erf_1, 1); erf_1 = None
mul_43 = torch.ops.aten.mul.Tensor(add_9, 0.5); add_9 = None
mul_44 = torch.ops.aten.mul.Tensor(add_8, add_8)
mul_45 = torch.ops.aten.mul.Tensor(mul_44, -0.5); mul_44 = None
exp_5 = torch.ops.aten.exp.default(mul_45); mul_45 = None
mul_46 = torch.ops.aten.mul.Tensor(exp_5, 0.3989422804014327); exp_5 = None
mul_47 = torch.ops.aten.mul.Tensor(add_8, mul_46); add_8 = mul_46 = None
add_28 = torch.ops.aten.add.Tensor(mul_43, mul_47); mul_43 = mul_47 = None
mul_48 = torch.ops.aten.mul.Tensor(add_26, add_28); add_26 = add_28 = None
sum_15 = torch.ops.aten.sum.dim_IntList(mul_48, [0], True)
view_44 = torch.ops.aten.view.default(sum_15, [4]); sum_15 = None
view_45 = torch.ops.aten.view.default(mul_48, [primals_26, 1, 4]); mul_48 = None
gather_4 = torch.ops.aten.gather.default(view_45, 0, expand_2); view_45 = None
mul_49 = torch.ops.aten.mul.Tensor(gather_4, index_7); index_7 = None
unsqueeze_1 = torch.ops.aten.unsqueeze.default(div_1, -1)
mul_50 = torch.ops.aten.mul.Tensor(gather_4, unsqueeze_1); gather_4 = unsqueeze_1 = None
sum_16 = torch.ops.aten.sum.dim_IntList(mul_49, [2], True); mul_49 = None
squeeze_2 = torch.ops.aten.squeeze.dim(sum_16, -1); sum_16 = None
index_put_8 = torch.ops.aten.index_put.default(full_2, [select_3], mul_50, True); mul_50 = None
div_11 = torch.ops.aten.div.Tensor(div_1, index_10); div_1 = None
neg_2 = torch.ops.aten.neg.default(squeeze_2)
mul_51 = torch.ops.aten.mul.Tensor(neg_2, div_11); neg_2 = div_11 = None
div_12 = torch.ops.aten.div.Tensor(squeeze_2, index_10); squeeze_2 = index_10 = None
index_put_9 = torch.ops.aten.index_put.default(full, [select_2], mul_51, True); mul_51 = None
gather_5 = torch.ops.aten.gather.default(index_put_9, 0, expand); index_put_9 = None
add_29 = torch.ops.aten.add.Tensor(div_12, gather_5); div_12 = gather_5 = None
mul_52 = torch.ops.aten.mul.Tensor(add_29, exp_1); add_29 = exp_1 = None
unsqueeze_6 = torch.ops.aten.unsqueeze.default(mul_52, -1); mul_52 = None
expand_14 = torch.ops.aten.expand.default(unsqueeze_6, [sym_size_int_1, 1, 4]); unsqueeze_6 = None
gt_1 = torch.ops.aten.gt.Scalar(add_6, 0)
mul_6 = torch.ops.aten.mul.Tensor(add_6, 0.2)
where_1 = torch.ops.aten.where.self(gt_1, add_6, mul_6); add_6 = mul_6 = None
mul_53 = torch.ops.aten.mul.Tensor(expand_14, where_1); where_1 = None
mul_54 = torch.ops.aten.mul.Tensor(expand_14, primals_11); expand_14 = primals_11 = None
sum_17 = torch.ops.aten.sum.dim_IntList(mul_53, [0], True); mul_53 = None
mul_55 = torch.ops.aten.mul.Tensor(mul_54, 0.2)
where_6 = torch.ops.aten.where.self(gt_1, mul_54, mul_55); gt_1 = mul_54 = mul_55 = None
index_put_10 = torch.ops.aten.index_put.default(full_2, [select_2], where_6, True)
index_put_11 = torch.ops.aten.index_put.default(full_2, [select_3], where_6, True); where_6 = None
add_30 = torch.ops.aten.add.Tensor(index_put_8, index_put_11); index_put_8 = index_put_11 = None
view_46 = torch.ops.aten.view.default(index_put_10, [primals_26, 4]); index_put_10 = None
mm_8 = torch.ops.aten.mm.default(view_46, permute_24); permute_24 = None
permute_25 = torch.ops.aten.permute.default(view_46, [1, 0])
mm_9 = torch.ops.aten.mm.default(permute_25, mul_5); permute_25 = None
permute_26 = torch.ops.aten.permute.default(mm_9, [1, 0]); mm_9 = None
sum_18 = torch.ops.aten.sum.dim_IntList(view_46, [0], True); view_46 = None
view_47 = torch.ops.aten.view.default(sum_18, [4]); sum_18 = None
permute_27 = torch.ops.aten.permute.default(permute_26, [1, 0]); permute_26 = None
view_48 = torch.ops.aten.view.default(add_30, [primals_26, 4]); add_30 = None
mm_10 = torch.ops.aten.mm.default(view_48, permute_28); permute_28 = None
permute_29 = torch.ops.aten.permute.default(view_48, [1, 0])
mm_11 = torch.ops.aten.mm.default(permute_29, mul_5); permute_29 = mul_5 = None
permute_30 = torch.ops.aten.permute.default(mm_11, [1, 0]); mm_11 = None
sum_19 = torch.ops.aten.sum.dim_IntList(view_48, [0], True); view_48 = None
view_49 = torch.ops.aten.view.default(sum_19, [4]); sum_19 = None
add_31 = torch.ops.aten.add.Tensor(mm_8, mm_10); mm_8 = mm_10 = None
permute_31 = torch.ops.aten.permute.default(permute_30, [1, 0]); permute_30 = None
view_7 = torch.ops.aten.view.default(scatter_add_1, [-1, 4]); scatter_add_1 = None
add_3 = torch.ops.aten.add.Tensor(view_7, primals_6); view_7 = primals_6 = None
mul_4 = torch.ops.aten.mul.Tensor(add_3, 0.7071067811865476)
erf = torch.ops.aten.erf.default(mul_4); mul_4 = None
add_4 = torch.ops.aten.add.Tensor(erf, 1); erf = None
mul_57 = torch.ops.aten.mul.Tensor(add_4, 0.5); add_4 = None
mul_58 = torch.ops.aten.mul.Tensor(add_3, add_3)
mul_59 = torch.ops.aten.mul.Tensor(mul_58, -0.5); mul_58 = None
exp_6 = torch.ops.aten.exp.default(mul_59); mul_59 = None
mul_60 = torch.ops.aten.mul.Tensor(exp_6, 0.3989422804014327); exp_6 = None
mul_61 = torch.ops.aten.mul.Tensor(add_3, mul_60); add_3 = mul_60 = None
add_33 = torch.ops.aten.add.Tensor(mul_57, mul_61); mul_57 = mul_61 = None
mul_62 = torch.ops.aten.mul.Tensor(add_31, add_33); add_31 = add_33 = None
sum_20 = torch.ops.aten.sum.dim_IntList(mul_62, [0], True)
view_50 = torch.ops.aten.view.default(sum_20, [4]); sum_20 = None
view_51 = torch.ops.aten.view.default(mul_62, [primals_26, 1, 4]); mul_62 = None
gather_6 = torch.ops.aten.gather.default(view_51, 0, expand_2); view_51 = expand_2 = None
view = torch.ops.aten.view.default(addmm, [-1, 1, 4]); addmm = None
index_1 = torch.ops.aten.index.Tensor(view, [select_3]); view = None
mul_63 = torch.ops.aten.mul.Tensor(gather_6, index_1)
unsqueeze = torch.ops.aten.unsqueeze.default(div, -1)
mul_64 = torch.ops.aten.mul.Tensor(gather_6, unsqueeze); gather_6 = unsqueeze = None
sum_21 = torch.ops.aten.sum.dim_IntList(mul_63, [2], True); mul_63 = None
squeeze_3 = torch.ops.aten.squeeze.dim(sum_21, -1); sum_21 = None
index_put_12 = torch.ops.aten.index_put.default(full_2, [select_3], mul_64, True); mul_64 = None
div_14 = torch.ops.aten.div.Tensor(div, index_4); div = None
neg_3 = torch.ops.aten.neg.default(squeeze_3)
mul_65 = torch.ops.aten.mul.Tensor(neg_3, div_14); neg_3 = div_14 = None
div_15 = torch.ops.aten.div.Tensor(squeeze_3, index_4); squeeze_3 = index_4 = None
index_put_13 = torch.ops.aten.index_put.default(full, [select_2], mul_65, True); full = mul_65 = None
gather_7 = torch.ops.aten.gather.default(index_put_13, 0, expand); index_put_13 = expand = None
add_34 = torch.ops.aten.add.Tensor(div_15, gather_7); div_15 = gather_7 = None
mul_66 = torch.ops.aten.mul.Tensor(add_34, exp); add_34 = exp = None
unsqueeze_7 = torch.ops.aten.unsqueeze.default(mul_66, -1); mul_66 = None
expand_15 = torch.ops.aten.expand.default(unsqueeze_7, [sym_size_int_1, 1, 4]); unsqueeze_7 = sym_size_int_1 = None
view_1 = torch.ops.aten.view.default(addmm_1, [-1, 1, 4]); addmm_1 = None
index_2 = torch.ops.aten.index.Tensor(view_1, [select_2]); view_1 = None
add_1 = torch.ops.aten.add.Tensor(index_2, index_1); index_2 = index_1 = None
gt = torch.ops.aten.gt.Scalar(add_1, 0)
mul = torch.ops.aten.mul.Tensor(add_1, 0.2)
where = torch.ops.aten.where.self(gt, add_1, mul); add_1 = mul = None
mul_67 = torch.ops.aten.mul.Tensor(expand_15, where); where = None
mul_68 = torch.ops.aten.mul.Tensor(expand_15, primals_5); expand_15 = primals_5 = None
sum_22 = torch.ops.aten.sum.dim_IntList(mul_67, [0], True); mul_67 = None
mul_69 = torch.ops.aten.mul.Tensor(mul_68, 0.2)
where_7 = torch.ops.aten.where.self(gt, mul_68, mul_69); gt = mul_68 = mul_69 = None
index_put_14 = torch.ops.aten.index_put.default(full_2, [select_2], where_7, True); select_2 = None
index_put_15 = torch.ops.aten.index_put.default(full_2, [select_3], where_7, True); full_2 = select_3 = where_7 = None
add_35 = torch.ops.aten.add.Tensor(index_put_12, index_put_15); index_put_12 = index_put_15 = None
view_52 = torch.ops.aten.view.default(index_put_14, [primals_26, 4]); index_put_14 = None
permute_32 = torch.ops.aten.permute.default(view_52, [1, 0])
mm_12 = torch.ops.aten.mm.default(permute_32, primals_27); permute_32 = None
permute_33 = torch.ops.aten.permute.default(mm_12, [1, 0]); mm_12 = None
sum_23 = torch.ops.aten.sum.dim_IntList(view_52, [0], True); view_52 = None
view_53 = torch.ops.aten.view.default(sum_23, [4]); sum_23 = None
permute_34 = torch.ops.aten.permute.default(permute_33, [1, 0]); permute_33 = None
view_54 = torch.ops.aten.view.default(add_35, [primals_26, 4]); add_35 = primals_26 = None
permute_35 = torch.ops.aten.permute.default(view_54, [1, 0])
mm_13 = torch.ops.aten.mm.default(permute_35, primals_27); permute_35 = primals_27 = None
permute_36 = torch.ops.aten.permute.default(mm_13, [1, 0]); mm_13 = None
sum_24 = torch.ops.aten.sum.dim_IntList(view_54, [0], True); view_54 = None
view_55 = torch.ops.aten.view.default(sum_24, [4]); sum_24 = None
permute_37 = torch.ops.aten.permute.default(permute_36, [1, 0]); permute_36 = None
return [permute_37, view_55, permute_34, view_53, sum_22, view_50, permute_31, view_49, permute_27, view_47, sum_17, view_44, permute_23, view_43, permute_19, view_41, sum_12, view_38, permute_15, view_37, permute_11, view_35, sum_7, view_32, None, None, None, None, None, None, None, None]
def load_args(reader):
reader.symint(900) # primals_26
reader.symint(None) # sym_size_int_1
buf0 = reader.storage(None, 16)
reader.tensor(buf0, (1, 1, 4), is_leaf=True) # primals_5
buf1 = reader.storage(None, 16)
reader.tensor(buf1, (4,), is_leaf=True) # primals_6
buf2 = reader.storage(None, 16)
reader.tensor(buf2, (1, 1, 4), is_leaf=True) # primals_11
buf3 = reader.storage(None, 16)
reader.tensor(buf3, (4,), is_leaf=True) # primals_12
buf4 = reader.storage(None, 16)
reader.tensor(buf4, (1, 1, 4), is_leaf=True) # primals_17
buf5 = reader.storage(None, 16)
reader.tensor(buf5, (4,), is_leaf=True) # primals_18
buf6 = reader.storage(None, 8)
reader.tensor(buf6, (1, 1, 2), is_leaf=True) # primals_23
buf7 = reader.storage(None, 8*s1)
reader.tensor(buf7, (s1, 2), is_leaf=True) # primals_27
buf8 = reader.storage(None, 16*s1)
reader.tensor(buf8, (s1, 4), is_leaf=True) # addmm
buf9 = reader.storage(None, 16*s1)
reader.tensor(buf9, (s1, 4), is_leaf=True) # addmm_1
buf10 = reader.storage(None, 16*s1 + 16*u0, dtype_hint=torch.int64)
reader.tensor(buf10, (s1 + u0,), dtype=torch.int64, storage_offset=Max(1, s1 + u0), is_leaf=True) # select_2
reader.tensor(buf10, (s1 + u0,), dtype=torch.int64, is_leaf=True) # select_3
buf11 = reader.storage(None, 4*s1)
reader.tensor(buf11, (s1, 1), is_leaf=True) # full
buf12 = reader.storage(None, 4*s1 + 4*u0)
reader.tensor(buf12, (s1 + u0, 1), is_leaf=True) # exp
buf13 = reader.storage(None, 4*s1 + 4*u0)
reader.tensor(buf13, (s1 + u0, 1), is_leaf=True) # index_4
buf14 = reader.storage(None, 4*s1 + 4*u0)
reader.tensor(buf14, (s1 + u0, 1), is_leaf=True) # div
buf15 = reader.storage(None, 16*s1)
reader.tensor(buf15, (s1, 1, 4), is_leaf=True) # full_2
buf16 = reader.storage(None, 16*s1)
reader.tensor(buf16, (s1, 1, 4), is_leaf=True) # scatter_add_1
buf17 = reader.storage(None, 16*s1)
reader.tensor(buf17, (s1, 4), is_leaf=True) # mul_5
buf18 = reader.storage(None, 16*s1 + 16*u1)
reader.tensor(buf18, (s1 + u1, 1, 4), is_leaf=True) # index_7
buf19 = reader.storage(None, 16*s1 + 16*u1)
reader.tensor(buf19, (s1 + u1, 1, 4), is_leaf=True) # add_6
buf20 = reader.storage(None, 4*s1 + 4*u1)
reader.tensor(buf20, (s1 + u1, 1), is_leaf=True) # exp_1
buf21 = reader.storage(None, 4*s1 + 4*u1)
reader.tensor(buf21, (s1 + u1, 1), is_leaf=True) # index_10
buf22 = reader.storage(None, 4*s1 + 4*u1)
reader.tensor(buf22, (s1 + u1, 1), is_leaf=True) # div_1
buf23 = reader.storage(None, 16*s1)
reader.tensor(buf23, (s1, 1, 4), is_leaf=True) # scatter_add_3
buf24 = reader.storage(None, 16*s1)
reader.tensor(buf24, (s1, 4), is_leaf=True) # mul_11
buf25 = reader.storage(None, 16*s1 + 16*u2)
reader.tensor(buf25, (s1 + u2, 1, 4), is_leaf=True) # index_13
buf26 = reader.storage(None, 16*s1 + 16*u2)
reader.tensor(buf26, (s1 + u2, 1, 4), is_leaf=True) # add_11
buf27 = reader.storage(None, 4*s1 + 4*u2)
reader.tensor(buf27, (s1 + u2, 1), is_leaf=True) # exp_2
buf28 = reader.storage(None, 4*s1 + 4*u2)
reader.tensor(buf28, (s1 + u2, 1), is_leaf=True) # index_16
buf29 = reader.storage(None, 4*s1 + 4*u2)
reader.tensor(buf29, (s1 + u2, 1), is_leaf=True) # div_2
buf30 = reader.storage(None, 16*s1)
reader.tensor(buf30, (s1, 1, 4), is_leaf=True) # scatter_add_5
buf31 = reader.storage(None, 16*s1)
reader.tensor(buf31, (s1, 4), is_leaf=True) # mul_17
buf32 = reader.storage(None, 8*s1 + 8*u3)
reader.tensor(buf32, (s1 + u3, 1, 2), is_leaf=True) # index_19
buf33 = reader.storage(None, 8*s1 + 8*u3)
reader.tensor(buf33, (s1 + u3, 1, 2), is_leaf=True) # add_16
buf34 = reader.storage(None, 4*s1 + 4*u3)
reader.tensor(buf34, (s1 + u3, 1), is_leaf=True) # exp_3
buf35 = reader.storage(None, 4*s1 + 4*u3)
reader.tensor(buf35, (s1 + u3, 1), is_leaf=True) # index_22
buf36 = reader.storage(None, 4*s1 + 4*u3)
reader.tensor(buf36, (s1 + u3, 1), is_leaf=True) # div_3
buf37 = reader.storage(None, 8*s1)
reader.tensor(buf37, (s1, 1, 2), is_leaf=True) # full_11
buf38 = reader.storage(None, 32)
reader.tensor(buf38, (2, 4), is_leaf=True) # permute_8
buf39 = reader.storage(None, 32)
reader.tensor(buf39, (2, 4), is_leaf=True) # permute_12
buf40 = reader.storage(None, 64)
reader.tensor(buf40, (4, 4), is_leaf=True) # permute_16
buf41 = reader.storage(None, 64)
reader.tensor(buf41, (4, 4), is_leaf=True) # permute_20
buf42 = reader.storage(None, 64)
reader.tensor(buf42, (4, 4), is_leaf=True) # permute_24
buf43 = reader.storage(None, 64)
reader.tensor(buf43, (4, 4), is_leaf=True) # permute_28
buf44 = reader.storage(None, 8*s1)
reader.tensor(buf44, (s1, 2), is_leaf=True) # tangents_1
load_args._version = 0
mod = Repro()
if __name__ == '__main__':
from torch._dynamo.repro.after_aot import run_repro
with torch.no_grad():
run_repro(mod, load_args, accuracy=False, command='minify', save_dir='/home/usevitch/code/python/neurosub/torch_compile_debug/run_2024_08_19_22_23_52_780264-pid_1226344/minifier/checkpoints', tracing_mode='symbolic', check_str=None)
# To run it separately, do
# mod, args = run_repro(mod, load_args, accuracy=False, command='get_args', save_dir='/home/usevitch/code/python/neurosub/torch_compile_debug/run_2024_08_19_22_23_52_780264-pid_1226344/minifier/checkpoints', tracing_mode='symbolic', check_str=None)
# mod(*args)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:36:13) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 5975WX 32-Cores
CPU family: 25
Model: 8
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU max MHz: 7006.6401
CPU min MHz: 1800.0000
BogoMIPS: 7186.86
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.4.0
[pip3] torch_cluster==1.6.3+pt24cu121
[pip3] torch-geometric==2.6.0
[pip3] torch_scatter==2.1.2+pt24cu121
[pip3] torch_sparse==0.6.18+pt24cu121
[pip3] torch_spline_conv==1.2.2+pt24cu121
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] numpy 2.0.1 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torch-cluster 1.6.3+pt24cu121 pypi_0 pypi
[conda] torch-geometric 2.6.0 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt24cu121 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt24cu121 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt24cu121 pypi_0 pypi
[conda] torchaudio 2.4.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
cc @ezyang @chauhang @penguinwu @rec | triaged,oncall: pt2,module: dynamo | low | Critical |
2,474,729,173 | rust | ICE compiling internal project | > [!NOTE]
>
> Clearing out the incremental folder for the `replicatord` crate resolves the issue.
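A minimal sketch of that workaround (the `target/debug/incremental` path is an assumption based on the default cargo layout; adjust it for release profiles or a custom `CARGO_TARGET_DIR`):

```shell
# Clear the stale incremental-compilation cache so rustc rebuilds its
# dep-graph from scratch instead of reusing the corrupted fingerprints.
INCR_DIR=target/debug/incremental
rm -rf "$INCR_DIR"
# A subsequent `cargo build` regenerates the cache; exporting
# CARGO_INCREMENTAL=0 before building avoids incremental state entirely.
```

Disabling incremental compilation with `CARGO_INCREMENTAL=0` is a heavier hammer but rules out this class of ICE at the cost of longer rebuilds.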
### Code
Unfortunately, the code is closed source.
### Meta
`rustc --version --verbose`:
```
rustc 1.80.1 (3f5fd8dd4 2024-08-06)
binary: rustc
commit-hash: 3f5fd8dd41153bc5fdca9427e9e05be2c767ba23
commit-date: 2024-08-06
host: aarch64-apple-darwin
release: 1.80.1
LLVM version: 18.1.7
```
### Error output
```
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.80.1 (3f5fd8dd4 2024-08-06) running on aarch64-apple-darwin
note: compiler flags: -C embed-bitcode=no -C debuginfo=2 -C split-debuginfo=unpacked -C linker=clang -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
```
<details><summary><strong>Backtrace</strong></summary>
<p>
```
thread 'rustc' panicked at compiler/rustc_hir/src/definitions.rs:389:13:
("Failed to extract DefId", def_kind, PackedFingerprint(Fingerprint(5450134108200939692, 10911451897743993807)))
stack backtrace:
0: 0x105544c1c - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h41035ce174e31160
1: 0x10558987c - core::fmt::write::h7e946826fce7616b
2: 0x10553b088 - std::io::Write::write_fmt::he3645adfefb23e4a
3: 0x105544a74 - std::sys_common::backtrace::print::h2efe9ae66fda73dc
4: 0x105547080 - std::panicking::default_hook::{{closure}}::hd27200b4fbd3bf40
5: 0x105546d4c - std::panicking::default_hook::hb8656334461229c8
6: 0x10ebcb380 - <alloc[9bfd1da98798fc47]::boxed::Box<rustc_driver_impl[1ef2360f78401c14]::install_ice_hook::{closure#0}> as core[cec0bd9d2fc86fa9]::ops::function::Fn<(&dyn for<'a, 'b> core[cec0bd9d2fc86fa9]::ops::function::Fn<(&'a core[cec0bd9d2fc86fa9]::panic::panic_info::PanicInfo<'b>,), Output = ()> + core[cec0bd9d2fc86fa9]::marker::Sync + core[cec0bd9d2fc86fa9]::marker::Send, &core[cec0bd9d2fc86fa9]::panic::panic_info::PanicInfo)>>::call
7: 0x105547a6c - std::panicking::rust_panic_with_hook::h10171cf76e1aed15
8: 0x105547480 - std::panicking::begin_panic_handler::{{closure}}::h9344de43a47cae21
9: 0x1055450a0 - std::sys_common::backtrace::__rust_end_short_backtrace::h55013ada3ab9c4e8
10: 0x1055471f0 - _rust_begin_unwind
11: 0x1055a5128 - core::panicking::panic_fmt::h0b16bb09366e1f01
12: 0x112cd06b0 - <rustc_hir[370215242720f464]::definitions::Definitions>::local_def_path_hash_to_def_id::err
13: 0x10f608008 - <rustc_middle[2867706b5eb6f7f9]::ty::context::TyCtxt>::def_path_hash_to_def_id
14: 0x10fe36fb0 - <rustc_query_impl[9b9172b7a4c3de5d]::plumbing::query_callback<rustc_query_impl[9b9172b7a4c3de5d]::query_impl::def_kind::QueryType>::{closure#0} as core[cec0bd9d2fc86fa9]::ops::function::FnOnce<(rustc_middle[2867706b5eb6f7f9]::ty::context::TyCtxt, rustc_query_system[a56a14b00b0e8ff6]::dep_graph::dep_node::DepNode)>>::call_once
15: 0x10ff62ac8 - <rustc_query_system[a56a14b00b0e8ff6]::dep_graph::graph::DepGraphData<rustc_middle[2867706b5eb6f7f9]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt>
16: 0x10ff62b0c - <rustc_query_system[a56a14b00b0e8ff6]::dep_graph::graph::DepGraphData<rustc_middle[2867706b5eb6f7f9]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt>
17: 0x10ff62b0c - <rustc_query_system[a56a14b00b0e8ff6]::dep_graph::graph::DepGraphData<rustc_middle[2867706b5eb6f7f9]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt>
18: 0x10ff62b0c - <rustc_query_system[a56a14b00b0e8ff6]::dep_graph::graph::DepGraphData<rustc_middle[2867706b5eb6f7f9]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt>
19: 0x10ff62b0c - <rustc_query_system[a56a14b00b0e8ff6]::dep_graph::graph::DepGraphData<rustc_middle[2867706b5eb6f7f9]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt>
20: 0x10ff62b0c - <rustc_query_system[a56a14b00b0e8ff6]::dep_graph::graph::DepGraphData<rustc_middle[2867706b5eb6f7f9]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt>
21: 0x10ff62b0c - <rustc_query_system[a56a14b00b0e8ff6]::dep_graph::graph::DepGraphData<rustc_middle[2867706b5eb6f7f9]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt>
22: 0x10ff62b0c - <rustc_query_system[a56a14b00b0e8ff6]::dep_graph::graph::DepGraphData<rustc_middle[2867706b5eb6f7f9]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt>
23: 0x10ff6288c - <rustc_query_system[a56a14b00b0e8ff6]::dep_graph::graph::DepGraphData<rustc_middle[2867706b5eb6f7f9]::dep_graph::DepsType>>::try_mark_green::<rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt>
24: 0x10fdd8774 - rustc_query_system[a56a14b00b0e8ff6]::query::plumbing::try_execute_query::<rustc_query_impl[9b9172b7a4c3de5d]::DynamicConfig<rustc_query_system[a56a14b00b0e8ff6]::query::caches::DefaultCache<rustc_type_ir[24e377033abc0aab]::canonical::Canonical<rustc_middle[2867706b5eb6f7f9]::ty::context::TyCtxt, rustc_middle[2867706b5eb6f7f9]::ty::ParamEnvAnd<rustc_middle[2867706b5eb6f7f9]::ty::predicate::Predicate>>, rustc_middle[2867706b5eb6f7f9]::query::erase::Erased<[u8; 2usize]>>, false, false, false>, rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt, true>
25: 0x10fff4124 - rustc_query_impl[9b9172b7a4c3de5d]::query_impl::evaluate_obligation::get_query_incr::__rust_end_short_backtrace
26: 0x110470030 - <rustc_infer[12dda32e7c8ea680]::infer::InferCtxt as rustc_trait_selection[7e0d0f6886399ce9]::traits::query::evaluate_obligation::InferCtxtExt>::evaluate_obligation
27: 0x110470520 - <rustc_infer[12dda32e7c8ea680]::infer::InferCtxt as rustc_trait_selection[7e0d0f6886399ce9]::traits::query::evaluate_obligation::InferCtxtExt>::evaluate_obligation_no_overflow
28: 0x1103eb66c - <rustc_trait_selection[7e0d0f6886399ce9]::traits::fulfill::FulfillProcessor>::process_trait_obligation
29: 0x1103eac30 - <rustc_trait_selection[7e0d0f6886399ce9]::traits::fulfill::FulfillProcessor as rustc_data_structures[961a954d878499e9]::obligation_forest::ObligationProcessor>::process_obligation
30: 0x1103d4c48 - <rustc_data_structures[961a954d878499e9]::obligation_forest::ObligationForest<rustc_trait_selection[7e0d0f6886399ce9]::traits::fulfill::PendingPredicateObligation>>::process_obligations::<rustc_trait_selection[7e0d0f6886399ce9]::traits::fulfill::FulfillProcessor>
31: 0x1103e10f0 - <rustc_trait_selection[7e0d0f6886399ce9]::traits::fulfill::FulfillmentContext<rustc_infer[12dda32e7c8ea680]::traits::engine::ScrubbedTraitError> as rustc_infer[12dda32e7c8ea680]::traits::engine::TraitEngine<rustc_infer[12dda32e7c8ea680]::traits::engine::ScrubbedTraitError>>::select_where_possible
32: 0x1103dde88 - <rustc_trait_selection[7e0d0f6886399ce9]::traits::fulfill::FulfillmentContext<rustc_infer[12dda32e7c8ea680]::traits::engine::ScrubbedTraitError> as rustc_infer[12dda32e7c8ea680]::traits::engine::TraitEngine<rustc_infer[12dda32e7c8ea680]::traits::engine::ScrubbedTraitError>>::select_all_or_error
33: 0x11055e1cc - rustc_traits[c51852ea9da8a5f1]::codegen::codegen_select_candidate
34: 0x10fe63ba0 - rustc_query_impl[9b9172b7a4c3de5d]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[9b9172b7a4c3de5d]::query_impl::codegen_select_candidate::dynamic_query::{closure#2}::{closure#0}, rustc_middle[2867706b5eb6f7f9]::query::erase::Erased<[u8; 16usize]>>
35: 0x10ff85770 - <rustc_query_impl[9b9172b7a4c3de5d]::query_impl::codegen_select_candidate::dynamic_query::{closure#2} as core[cec0bd9d2fc86fa9]::ops::function::FnOnce<(rustc_middle[2867706b5eb6f7f9]::ty::context::TyCtxt, (rustc_middle[2867706b5eb6f7f9]::ty::ParamEnv, rustc_type_ir[24e377033abc0aab]::predicate::TraitRef<rustc_middle[2867706b5eb6f7f9]::ty::context::TyCtxt>))>>::call_once
36: 0x10fe05f04 - rustc_query_system[a56a14b00b0e8ff6]::query::plumbing::try_execute_query::<rustc_query_impl[9b9172b7a4c3de5d]::DynamicConfig<rustc_query_system[a56a14b00b0e8ff6]::query::caches::DefaultCache<(rustc_middle[2867706b5eb6f7f9]::ty::ParamEnv, rustc_type_ir[24e377033abc0aab]::predicate::TraitRef<rustc_middle[2867706b5eb6f7f9]::ty::context::TyCtxt>), rustc_middle[2867706b5eb6f7f9]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt, true>
37: 0x10ffe2908 - rustc_query_impl[9b9172b7a4c3de5d]::query_impl::codegen_select_candidate::get_query_incr::__rust_end_short_backtrace
38: 0x10fad4308 - rustc_monomorphize[dd171e748c5e8d3]::custom_coerce_unsize_info
39: 0x10faa53c8 - rustc_monomorphize[dd171e748c5e8d3]::collector::find_vtable_types_for_unsizing
40: 0x10faa367c - <rustc_monomorphize[dd171e748c5e8d3]::collector::MirUsedCollector as rustc_middle[2867706b5eb6f7f9]::mir::visit::Visitor>::visit_rvalue
41: 0x10faa764c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_of_instance
42: 0x10faa66fc - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
43: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
44: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
45: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
46: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
47: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
48: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
49: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
50: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
51: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
52: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
53: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
54: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
55: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
56: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
57: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
58: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
59: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
60: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
61: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
62: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
63: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
64: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
65: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
66: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
67: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
68: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
69: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
70: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
71: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
72: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
73: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
74: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
75: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
76: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
77: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
78: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
79: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
80: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
81: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
82: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
83: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
84: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
85: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
86: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
87: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
88: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
89: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
90: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
91: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
92: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
93: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
94: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
95: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
96: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
97: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
98: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
99: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
100: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
101: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
102: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
103: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
104: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
105: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
106: 0x10fac8f24 - std[4ec0ba9e3c6d748b]::panicking::try::<(), core[cec0bd9d2fc86fa9]::panic::unwind_safe::AssertUnwindSafe<rustc_data_structures[961a954d878499e9]::sync::parallel::disabled::par_for_each_in<alloc[9bfd1da98798fc47]::vec::Vec<rustc_middle[2867706b5eb6f7f9]::mir::mono::MonoItem>, rustc_monomorphize[dd171e748c5e8d3]::collector::collect_crate_mono_items::{closure#1}::{closure#0}>::{closure#0}::{closure#0}::{closure#0}>>
107: 0x10fac3dc8 - <rustc_data_structures[961a954d878499e9]::sync::parallel::ParallelGuard>::run::<(), rustc_data_structures[961a954d878499e9]::sync::parallel::disabled::par_for_each_in<alloc[9bfd1da98798fc47]::vec::Vec<rustc_middle[2867706b5eb6f7f9]::mir::mono::MonoItem>, rustc_monomorphize[dd171e748c5e8d3]::collector::collect_crate_mono_items::{closure#1}::{closure#0}>::{closure#0}::{closure#0}::{closure#0}>
108: 0x10fac99ac - rustc_data_structures[961a954d878499e9]::sync::parallel::disabled::par_for_each_in::<alloc[9bfd1da98798fc47]::vec::Vec<rustc_middle[2867706b5eb6f7f9]::mir::mono::MonoItem>, rustc_monomorphize[dd171e748c5e8d3]::collector::collect_crate_mono_items::{closure#1}::{closure#0}>
109: 0x10fac3b58 - <rustc_session[6e72d4e14abb7feb]::session::Session>::time::<(), rustc_monomorphize[dd171e748c5e8d3]::collector::collect_crate_mono_items::{closure#1}>
110: 0x10faa924c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_crate_mono_items
111: 0x10faaeb88 - rustc_monomorphize[dd171e748c5e8d3]::partitioning::collect_and_partition_mono_items
112: 0x10fe645d8 - rustc_query_impl[9b9172b7a4c3de5d]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[9b9172b7a4c3de5d]::query_impl::collect_and_partition_mono_items::dynamic_query::{closure#2}::{closure#0}, rustc_middle[2867706b5eb6f7f9]::query::erase::Erased<[u8; 24usize]>>
113: 0x10ff769b8 - <rustc_query_impl[9b9172b7a4c3de5d]::query_impl::collect_and_partition_mono_items::dynamic_query::{closure#2} as core[cec0bd9d2fc86fa9]::ops::function::FnOnce<(rustc_middle[2867706b5eb6f7f9]::ty::context::TyCtxt, ())>>::call_once
114: 0x10fdca410 - rustc_query_system[a56a14b00b0e8ff6]::query::plumbing::try_execute_query::<rustc_query_impl[9b9172b7a4c3de5d]::DynamicConfig<rustc_query_system[a56a14b00b0e8ff6]::query::caches::SingleCache<rustc_middle[2867706b5eb6f7f9]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt, true>
115: 0x10fff1ce4 - rustc_query_impl[9b9172b7a4c3de5d]::query_impl::collect_and_partition_mono_items::get_query_incr::__rust_end_short_backtrace
116: 0x10e8b8fa0 - rustc_codegen_ssa[b68336f3a887532b]::base::codegen_crate::<rustc_codegen_llvm[df3168b963d74ead]::LlvmCodegenBackend>
117: 0x10e856fac - <rustc_codegen_llvm[df3168b963d74ead]::LlvmCodegenBackend as rustc_codegen_ssa[b68336f3a887532b]::traits::backend::CodegenBackend>::codegen_crate
118: 0x10f35ad4c - <rustc_session[6e72d4e14abb7feb]::session::Session>::time::<alloc[9bfd1da98798fc47]::boxed::Box<dyn core[cec0bd9d2fc86fa9]::any::Any>, rustc_interface[6917c625c882dc9d]::passes::start_codegen::{closure#0}>
119: 0x10f333ac0 - rustc_interface[6917c625c882dc9d]::passes::start_codegen
120: 0x10f35df60 - <rustc_middle[2867706b5eb6f7f9]::ty::context::GlobalCtxt>::enter::<<rustc_interface[6917c625c882dc9d]::queries::Queries>::codegen_and_build_linker::{closure#0}, core[cec0bd9d2fc86fa9]::result::Result<rustc_interface[6917c625c882dc9d]::queries::Linker, rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>>
121: 0x10f39c014 - <rustc_interface[6917c625c882dc9d]::queries::Queries>::codegen_and_build_linker
122: 0x10ebd35d8 - <rustc_interface[6917c625c882dc9d]::interface::Compiler>::enter::<rustc_driver_impl[1ef2360f78401c14]::run_compiler::{closure#0}::{closure#1}, core[cec0bd9d2fc86fa9]::result::Result<core[cec0bd9d2fc86fa9]::option::Option<rustc_interface[6917c625c882dc9d]::queries::Linker>, rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>>
123: 0x10eb956b8 - <scoped_tls[1950327f1bf28942]::ScopedKey<rustc_span[22cad54eabbc67cf]::SessionGlobals>>::set::<rustc_interface[6917c625c882dc9d]::util::run_in_thread_with_globals<rustc_interface[6917c625c882dc9d]::interface::run_compiler<core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>, rustc_driver_impl[1ef2360f78401c14]::run_compiler::{closure#0}>::{closure#1}, core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>>::{closure#0}::{closure#0}::{closure#0}, core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>>
124: 0x10ebd2f18 - rustc_span[22cad54eabbc67cf]::create_session_globals_then::<core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>, rustc_interface[6917c625c882dc9d]::util::run_in_thread_with_globals<rustc_interface[6917c625c882dc9d]::interface::run_compiler<core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>, rustc_driver_impl[1ef2360f78401c14]::run_compiler::{closure#0}>::{closure#1}, core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>>::{closure#0}::{closure#0}::{closure#0}>
125: 0x10ebc9d58 - std[4ec0ba9e3c6d748b]::sys_common::backtrace::__rust_begin_short_backtrace::<rustc_interface[6917c625c882dc9d]::util::run_in_thread_with_globals<rustc_interface[6917c625c882dc9d]::interface::run_compiler<core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>, rustc_driver_impl[1ef2360f78401c14]::run_compiler::{closure#0}>::{closure#1}, core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>>
126: 0x10eba059c - <<std[4ec0ba9e3c6d748b]::thread::Builder>::spawn_unchecked_<rustc_interface[6917c625c882dc9d]::util::run_in_thread_with_globals<rustc_interface[6917c625c882dc9d]::interface::run_compiler<core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>, rustc_driver_impl[1ef2360f78401c14]::run_compiler::{closure#0}>::{closure#1}, core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>>::{closure#2} as core[cec0bd9d2fc86fa9]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
127: 0x105550588 - std::sys::pal::unix::thread::Thread::new::thread_start::hb184f2abd415aef7
128: 0x186cb32e4 - __pthread_deallocate
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.80.1 (3f5fd8dd4 2024-08-06) running on aarch64-apple-darwin
note: compiler flags: -C embed-bitcode=no -C debuginfo=2 -C split-debuginfo=unpacked -C linker=clang -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [evaluate_obligation] evaluating trait selection obligation `{coroutine witness@ractor::actor::ActorRuntime<control::primary::node_manager::node::NodeActor>::start::{closure#0}}: core::marker::Send`
#1 [codegen_select_candidate] computing candidate for `<core::pin::Pin<alloc::boxed::Box<{async block@<control::primary::node_manager::node::NodeActor as ractor::actor::Actor>::spawn_linked<'_>::{closure#0}}>> as core::ops::unsize::CoerceUnsized<core::pin::Pin<alloc::boxed::Box<dyn core::future::future::Future<Output = core::result::Result<(ractor::actor::actor_ref::ActorRef<control::primary::node_manager::node::Message>, tokio::runtime::task::join::JoinHandle<()>), ractor::errors::SpawnErr>> + core::marker::Send>>>>`
#2 [collect_and_partition_mono_items] collect_and_partition_mono_items
end of query stack
there was a panic while trying to force a dep node
try_mark_green dep node stack:
#0 type_of(thread 'rustc' panicked at compiler/rustc_hir/src/definitions.rs:389:13:
("Failed to extract DefId", type_of, PackedFingerprint(Fingerprint(5450134108200939692, 10911451897743993807)))
stack backtrace:
0: 0x105544c1c - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h41035ce174e31160
1: 0x10558987c - core::fmt::write::h7e946826fce7616b
2: 0x10553b088 - std::io::Write::write_fmt::he3645adfefb23e4a
3: 0x105544a74 - std::sys_common::backtrace::print::h2efe9ae66fda73dc
4: 0x105547080 - std::panicking::default_hook::{{closure}}::hd27200b4fbd3bf40
5: 0x105546d4c - std::panicking::default_hook::hb8656334461229c8
6: 0x10ebcb380 - <alloc[9bfd1da98798fc47]::boxed::Box<rustc_driver_impl[1ef2360f78401c14]::install_ice_hook::{closure#0}> as core[cec0bd9d2fc86fa9]::ops::function::Fn<(&dyn for<'a, 'b> core[cec0bd9d2fc86fa9]::ops::function::Fn<(&'a core[cec0bd9d2fc86fa9]::panic::panic_info::PanicInfo<'b>,), Output = ()> + core[cec0bd9d2fc86fa9]::marker::Sync + core[cec0bd9d2fc86fa9]::marker::Send, &core[cec0bd9d2fc86fa9]::panic::panic_info::PanicInfo)>>::call
7: 0x105547a6c - std::panicking::rust_panic_with_hook::h10171cf76e1aed15
8: 0x105547480 - std::panicking::begin_panic_handler::{{closure}}::h9344de43a47cae21
9: 0x1055450a0 - std::sys_common::backtrace::__rust_end_short_backtrace::h55013ada3ab9c4e8
10: 0x1055471f0 - _rust_begin_unwind
11: 0x1055a5128 - core::panicking::panic_fmt::h0b16bb09366e1f01
12: 0x112cd06b0 - <rustc_hir[370215242720f464]::definitions::Definitions>::local_def_path_hash_to_def_id::err
13: 0x10f608008 - <rustc_middle[2867706b5eb6f7f9]::ty::context::TyCtxt>::def_path_hash_to_def_id
14: 0x10f3660ac - rustc_interface[6917c625c882dc9d]::callbacks::dep_node_debug
15: 0x110036228 - <rustc_query_system[a56a14b00b0e8ff6]::dep_graph::dep_node::DepNode as core[cec0bd9d2fc86fa9]::fmt::Debug>::fmt
16: 0x10558987c - core::fmt::write::h7e946826fce7616b
17: 0x1055392d4 - <&std::io::stdio::Stderr as std::io::Write>::write_fmt::h106b890cac40debb
18: 0x105539cd4 - std::io::stdio::_eprint::hf58134ec6abaf37d
19: 0x112df38ac - rustc_query_system[a56a14b00b0e8ff6]::dep_graph::graph::print_markframe_trace::<rustc_middle[2867706b5eb6f7f9]::dep_graph::DepsType>
20: 0x10ff62b7c - <rustc_query_system[a56a14b00b0e8ff6]::dep_graph::graph::DepGraphData<rustc_middle[2867706b5eb6f7f9]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt>
21: 0x10ff62b0c - <rustc_query_system[a56a14b00b0e8ff6]::dep_graph::graph::DepGraphData<rustc_middle[2867706b5eb6f7f9]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt>
22: 0x10ff62b0c - <rustc_query_system[a56a14b00b0e8ff6]::dep_graph::graph::DepGraphData<rustc_middle[2867706b5eb6f7f9]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt>
23: 0x10ff62b0c - <rustc_query_system[a56a14b00b0e8ff6]::dep_graph::graph::DepGraphData<rustc_middle[2867706b5eb6f7f9]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt>
24: 0x10ff62b0c - <rustc_query_system[a56a14b00b0e8ff6]::dep_graph::graph::DepGraphData<rustc_middle[2867706b5eb6f7f9]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt>
25: 0x10ff62b0c - <rustc_query_system[a56a14b00b0e8ff6]::dep_graph::graph::DepGraphData<rustc_middle[2867706b5eb6f7f9]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt>
26: 0x10ff62b0c - <rustc_query_system[a56a14b00b0e8ff6]::dep_graph::graph::DepGraphData<rustc_middle[2867706b5eb6f7f9]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt>
27: 0x10ff62b0c - <rustc_query_system[a56a14b00b0e8ff6]::dep_graph::graph::DepGraphData<rustc_middle[2867706b5eb6f7f9]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt>
28: 0x10ff6288c - <rustc_query_system[a56a14b00b0e8ff6]::dep_graph::graph::DepGraphData<rustc_middle[2867706b5eb6f7f9]::dep_graph::DepsType>>::try_mark_green::<rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt>
29: 0x10fdd8774 - rustc_query_system[a56a14b00b0e8ff6]::query::plumbing::try_execute_query::<rustc_query_impl[9b9172b7a4c3de5d]::DynamicConfig<rustc_query_system[a56a14b00b0e8ff6]::query::caches::DefaultCache<rustc_type_ir[24e377033abc0aab]::canonical::Canonical<rustc_middle[2867706b5eb6f7f9]::ty::context::TyCtxt, rustc_middle[2867706b5eb6f7f9]::ty::ParamEnvAnd<rustc_middle[2867706b5eb6f7f9]::ty::predicate::Predicate>>, rustc_middle[2867706b5eb6f7f9]::query::erase::Erased<[u8; 2usize]>>, false, false, false>, rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt, true>
30: 0x10fff4124 - rustc_query_impl[9b9172b7a4c3de5d]::query_impl::evaluate_obligation::get_query_incr::__rust_end_short_backtrace
31: 0x110470030 - <rustc_infer[12dda32e7c8ea680]::infer::InferCtxt as rustc_trait_selection[7e0d0f6886399ce9]::traits::query::evaluate_obligation::InferCtxtExt>::evaluate_obligation
32: 0x110470520 - <rustc_infer[12dda32e7c8ea680]::infer::InferCtxt as rustc_trait_selection[7e0d0f6886399ce9]::traits::query::evaluate_obligation::InferCtxtExt>::evaluate_obligation_no_overflow
33: 0x1103eb66c - <rustc_trait_selection[7e0d0f6886399ce9]::traits::fulfill::FulfillProcessor>::process_trait_obligation
34: 0x1103eac30 - <rustc_trait_selection[7e0d0f6886399ce9]::traits::fulfill::FulfillProcessor as rustc_data_structures[961a954d878499e9]::obligation_forest::ObligationProcessor>::process_obligation
35: 0x1103d4c48 - <rustc_data_structures[961a954d878499e9]::obligation_forest::ObligationForest<rustc_trait_selection[7e0d0f6886399ce9]::traits::fulfill::PendingPredicateObligation>>::process_obligations::<rustc_trait_selection[7e0d0f6886399ce9]::traits::fulfill::FulfillProcessor>
36: 0x1103e10f0 - <rustc_trait_selection[7e0d0f6886399ce9]::traits::fulfill::FulfillmentContext<rustc_infer[12dda32e7c8ea680]::traits::engine::ScrubbedTraitError> as rustc_infer[12dda32e7c8ea680]::traits::engine::TraitEngine<rustc_infer[12dda32e7c8ea680]::traits::engine::ScrubbedTraitError>>::select_where_possible
37: 0x1103dde88 - <rustc_trait_selection[7e0d0f6886399ce9]::traits::fulfill::FulfillmentContext<rustc_infer[12dda32e7c8ea680]::traits::engine::ScrubbedTraitError> as rustc_infer[12dda32e7c8ea680]::traits::engine::TraitEngine<rustc_infer[12dda32e7c8ea680]::traits::engine::ScrubbedTraitError>>::select_all_or_error
38: 0x11055e1cc - rustc_traits[c51852ea9da8a5f1]::codegen::codegen_select_candidate
39: 0x10fe63ba0 - rustc_query_impl[9b9172b7a4c3de5d]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[9b9172b7a4c3de5d]::query_impl::codegen_select_candidate::dynamic_query::{closure#2}::{closure#0}, rustc_middle[2867706b5eb6f7f9]::query::erase::Erased<[u8; 16usize]>>
40: 0x10ff85770 - <rustc_query_impl[9b9172b7a4c3de5d]::query_impl::codegen_select_candidate::dynamic_query::{closure#2} as core[cec0bd9d2fc86fa9]::ops::function::FnOnce<(rustc_middle[2867706b5eb6f7f9]::ty::context::TyCtxt, (rustc_middle[2867706b5eb6f7f9]::ty::ParamEnv, rustc_type_ir[24e377033abc0aab]::predicate::TraitRef<rustc_middle[2867706b5eb6f7f9]::ty::context::TyCtxt>))>>::call_once
41: 0x10fe05f04 - rustc_query_system[a56a14b00b0e8ff6]::query::plumbing::try_execute_query::<rustc_query_impl[9b9172b7a4c3de5d]::DynamicConfig<rustc_query_system[a56a14b00b0e8ff6]::query::caches::DefaultCache<(rustc_middle[2867706b5eb6f7f9]::ty::ParamEnv, rustc_type_ir[24e377033abc0aab]::predicate::TraitRef<rustc_middle[2867706b5eb6f7f9]::ty::context::TyCtxt>), rustc_middle[2867706b5eb6f7f9]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt, true>
42: 0x10ffe2908 - rustc_query_impl[9b9172b7a4c3de5d]::query_impl::codegen_select_candidate::get_query_incr::__rust_end_short_backtrace
43: 0x10fad4308 - rustc_monomorphize[dd171e748c5e8d3]::custom_coerce_unsize_info
44: 0x10faa53c8 - rustc_monomorphize[dd171e748c5e8d3]::collector::find_vtable_types_for_unsizing
45: 0x10faa367c - <rustc_monomorphize[dd171e748c5e8d3]::collector::MirUsedCollector as rustc_middle[2867706b5eb6f7f9]::mir::visit::Visitor>::visit_rvalue
46: 0x10faa764c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_of_instance
47: 0x10faa66fc - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
48: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
49: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
50: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
51: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
52: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
53: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
54: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
55: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
56: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
57: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
58: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
59: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
60: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
61: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
62: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
63: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
64: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
65: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
66: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
67: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
68: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
69: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
70: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
71: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
72: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
73: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
74: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
75: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
76: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
77: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
78: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
79: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
80: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
81: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
82: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
83: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
84: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
85: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
86: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
87: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
88: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
89: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
90: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
91: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
92: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
93: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
94: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
95: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
96: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
97: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
98: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
99: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
100: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
101: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
102: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
103: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
104: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
105: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
106: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
107: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
108: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
109: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
110: 0x10faa6c9c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_items_rec
111: 0x10fac8f24 - std[4ec0ba9e3c6d748b]::panicking::try::<(), core[cec0bd9d2fc86fa9]::panic::unwind_safe::AssertUnwindSafe<rustc_data_structures[961a954d878499e9]::sync::parallel::disabled::par_for_each_in<alloc[9bfd1da98798fc47]::vec::Vec<rustc_middle[2867706b5eb6f7f9]::mir::mono::MonoItem>, rustc_monomorphize[dd171e748c5e8d3]::collector::collect_crate_mono_items::{closure#1}::{closure#0}>::{closure#0}::{closure#0}::{closure#0}>>
112: 0x10fac3dc8 - <rustc_data_structures[961a954d878499e9]::sync::parallel::ParallelGuard>::run::<(), rustc_data_structures[961a954d878499e9]::sync::parallel::disabled::par_for_each_in<alloc[9bfd1da98798fc47]::vec::Vec<rustc_middle[2867706b5eb6f7f9]::mir::mono::MonoItem>, rustc_monomorphize[dd171e748c5e8d3]::collector::collect_crate_mono_items::{closure#1}::{closure#0}>::{closure#0}::{closure#0}::{closure#0}>
113: 0x10fac99ac - rustc_data_structures[961a954d878499e9]::sync::parallel::disabled::par_for_each_in::<alloc[9bfd1da98798fc47]::vec::Vec<rustc_middle[2867706b5eb6f7f9]::mir::mono::MonoItem>, rustc_monomorphize[dd171e748c5e8d3]::collector::collect_crate_mono_items::{closure#1}::{closure#0}>
114: 0x10fac3b58 - <rustc_session[6e72d4e14abb7feb]::session::Session>::time::<(), rustc_monomorphize[dd171e748c5e8d3]::collector::collect_crate_mono_items::{closure#1}>
115: 0x10faa924c - rustc_monomorphize[dd171e748c5e8d3]::collector::collect_crate_mono_items
116: 0x10faaeb88 - rustc_monomorphize[dd171e748c5e8d3]::partitioning::collect_and_partition_mono_items
117: 0x10fe645d8 - rustc_query_impl[9b9172b7a4c3de5d]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[9b9172b7a4c3de5d]::query_impl::collect_and_partition_mono_items::dynamic_query::{closure#2}::{closure#0}, rustc_middle[2867706b5eb6f7f9]::query::erase::Erased<[u8; 24usize]>>
118: 0x10ff769b8 - <rustc_query_impl[9b9172b7a4c3de5d]::query_impl::collect_and_partition_mono_items::dynamic_query::{closure#2} as core[cec0bd9d2fc86fa9]::ops::function::FnOnce<(rustc_middle[2867706b5eb6f7f9]::ty::context::TyCtxt, ())>>::call_once
119: 0x10fdca410 - rustc_query_system[a56a14b00b0e8ff6]::query::plumbing::try_execute_query::<rustc_query_impl[9b9172b7a4c3de5d]::DynamicConfig<rustc_query_system[a56a14b00b0e8ff6]::query::caches::SingleCache<rustc_middle[2867706b5eb6f7f9]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[9b9172b7a4c3de5d]::plumbing::QueryCtxt, true>
120: 0x10fff1ce4 - rustc_query_impl[9b9172b7a4c3de5d]::query_impl::collect_and_partition_mono_items::get_query_incr::__rust_end_short_backtrace
121: 0x10e8b8fa0 - rustc_codegen_ssa[b68336f3a887532b]::base::codegen_crate::<rustc_codegen_llvm[df3168b963d74ead]::LlvmCodegenBackend>
122: 0x10e856fac - <rustc_codegen_llvm[df3168b963d74ead]::LlvmCodegenBackend as rustc_codegen_ssa[b68336f3a887532b]::traits::backend::CodegenBackend>::codegen_crate
123: 0x10f35ad4c - <rustc_session[6e72d4e14abb7feb]::session::Session>::time::<alloc[9bfd1da98798fc47]::boxed::Box<dyn core[cec0bd9d2fc86fa9]::any::Any>, rustc_interface[6917c625c882dc9d]::passes::start_codegen::{closure#0}>
124: 0x10f333ac0 - rustc_interface[6917c625c882dc9d]::passes::start_codegen
125: 0x10f35df60 - <rustc_middle[2867706b5eb6f7f9]::ty::context::GlobalCtxt>::enter::<<rustc_interface[6917c625c882dc9d]::queries::Queries>::codegen_and_build_linker::{closure#0}, core[cec0bd9d2fc86fa9]::result::Result<rustc_interface[6917c625c882dc9d]::queries::Linker, rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>>
126: 0x10f39c014 - <rustc_interface[6917c625c882dc9d]::queries::Queries>::codegen_and_build_linker
127: 0x10ebd35d8 - <rustc_interface[6917c625c882dc9d]::interface::Compiler>::enter::<rustc_driver_impl[1ef2360f78401c14]::run_compiler::{closure#0}::{closure#1}, core[cec0bd9d2fc86fa9]::result::Result<core[cec0bd9d2fc86fa9]::option::Option<rustc_interface[6917c625c882dc9d]::queries::Linker>, rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>>
128: 0x10eb956b8 - <scoped_tls[1950327f1bf28942]::ScopedKey<rustc_span[22cad54eabbc67cf]::SessionGlobals>>::set::<rustc_interface[6917c625c882dc9d]::util::run_in_thread_with_globals<rustc_interface[6917c625c882dc9d]::interface::run_compiler<core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>, rustc_driver_impl[1ef2360f78401c14]::run_compiler::{closure#0}>::{closure#1}, core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>>::{closure#0}::{closure#0}::{closure#0}, core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>>
129: 0x10ebd2f18 - rustc_span[22cad54eabbc67cf]::create_session_globals_then::<core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>, rustc_interface[6917c625c882dc9d]::util::run_in_thread_with_globals<rustc_interface[6917c625c882dc9d]::interface::run_compiler<core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>, rustc_driver_impl[1ef2360f78401c14]::run_compiler::{closure#0}>::{closure#1}, core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>>::{closure#0}::{closure#0}::{closure#0}>
130: 0x10ebc9d58 - std[4ec0ba9e3c6d748b]::sys_common::backtrace::__rust_begin_short_backtrace::<rustc_interface[6917c625c882dc9d]::util::run_in_thread_with_globals<rustc_interface[6917c625c882dc9d]::interface::run_compiler<core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>, rustc_driver_impl[1ef2360f78401c14]::run_compiler::{closure#0}>::{closure#1}, core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>>
131: 0x10eba059c - <<std[4ec0ba9e3c6d748b]::thread::Builder>::spawn_unchecked_<rustc_interface[6917c625c882dc9d]::util::run_in_thread_with_globals<rustc_interface[6917c625c882dc9d]::interface::run_compiler<core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>, rustc_driver_impl[1ef2360f78401c14]::run_compiler::{closure#0}>::{closure#1}, core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[cec0bd9d2fc86fa9]::result::Result<(), rustc_span[22cad54eabbc67cf]::ErrorGuaranteed>>::{closure#2} as core[cec0bd9d2fc86fa9]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
132: 0x105550588 - std::sys::pal::unix::thread::Thread::new::thread_start::hb184f2abd415aef7
133: 0x186cb32e4 - __pthread_deallocate
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.80.1 (3f5fd8dd4 2024-08-06) running on aarch64-apple-darwin
note: compiler flags: -C embed-bitcode=no -C debuginfo=2 -C split-debuginfo=unpacked -C linker=clang -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [evaluate_obligation] evaluating trait selection obligation `{coroutine witness@ractor::actor::ActorRuntime<control::primary::node_manager::node::NodeActor>::start::{closure#0}}: core::marker::Send`
#1 [codegen_select_candidate] computing candidate for `<core::pin::Pin<alloc::boxed::Box<{async block@<control::primary::node_manager::node::NodeActor as ractor::actor::Actor>::spawn_linked<'_>::{closure#0}}>> as core::ops::unsize::CoerceUnsized<core::pin::Pin<alloc::boxed::Box<dyn core::future::future::Future<Output = core::result::Result<(ractor::actor::actor_ref::ActorRef<control::primary::node_manager::node::Message>, tokio::runtime::task::join::JoinHandle<()>), ractor::errors::SpawnErr>> + core::marker::Send>>>>`
#2 [collect_and_partition_mono_items] collect_and_partition_mono_items
end of query stack
warning: `replicatord` (lib test) generated 4 warnings (run `cargo fix --lib -p replicatord --tests` to apply 1 suggestion)
error: could not compile `replicatord` (lib test); 4 warnings emitted
```
</p>
</details>
| I-ICE,T-compiler,A-incr-comp,C-bug,S-needs-repro | low | Critical |
2,474,741,226 | transformers | Optional `bias` for qwen2 model | ### Feature request
The `bias` of the linear layers in the `qwen2` model is hard-coded, as shown below:
- https://github.com/huggingface/transformers/blob/85345bb439652d3f03bb4e123cef7a440f2ba95b/src/transformers/models/qwen2/modeling_qwen2.py#L217-L219
- https://github.com/huggingface/transformers/blob/85345bb439652d3f03bb4e123cef7a440f2ba95b/src/transformers/models/qwen2/modeling_qwen2.py#L271-L274
It would be good to make the bias optionally configurable through the model config, as is already done for other models (e.g. llama), to keep compatibility with the latest models.
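A framework-free sketch of the requested plumbing; `attention_bias` and `mlp_bias` are hypothetical field names borrowed from `LlamaConfig`, and the `Linear` class is a stand-in for `torch.nn.Linear` so the sketch stays self-contained:

```python
class Linear:
    """Stand-in for torch.nn.Linear that only records its bias flag."""
    def __init__(self, in_features, out_features, bias=True):
        self.in_features = in_features
        self.out_features = out_features
        self.bias = bias


class Qwen2Config:
    def __init__(self, hidden_size=1024, attention_bias=True, mlp_bias=False):
        self.hidden_size = hidden_size
        self.attention_bias = attention_bias  # today: hard-coded to True for q/k/v
        self.mlp_bias = mlp_bias              # today: hard-coded to False


def build_qkv(config):
    # Read the flag from the config instead of a bias=True literal.
    return [Linear(config.hidden_size, config.hidden_size, bias=config.attention_bias)
            for _ in range(3)]


q, k, v = build_qkv(Qwen2Config(attention_bias=False))
print(q.bias)  # -> False
```

Existing checkpoints would keep the current defaults, so only configs that explicitly set the new fields would change behavior.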
### Motivation
`bias` is already optional in the llama model:
- https://github.com/huggingface/transformers/blob/85345bb439652d3f03bb4e123cef7a440f2ba95b/src/transformers/models/llama/modeling_llama.py#L286-L288
### Your contribution
I'll submit a PR for this feature. | Feature request | low | Minor |
2,474,746,497 | go | cmd/compile: amd64 carry flag spilling uses SBBQ + NEGQ instead of SETCS | ### Go version
go version go1.23.0 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/bremac/.cache/go-build'
GOENV='/home/bremac/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/bremac/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/bremac/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/lib/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/lib/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.0'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/bremac/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build3831806201=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
This code is a simplified form of a longer unrolled loop, with the non-carry-related logic removed:
```go
func example(carryIn uint, x, y, result []uint) uint {
// Check lengths up-front to simplify the code generated for the loop
if len(x) != len(y) || len(x) != len(result) {
panic("length mismatch")
}
for i := 0; i < len(x); i++ {
result[i], carryIn = bits.Add(x[i], y[i], carryIn)
}
return carryIn
}
```
https://go.dev/play/p/gGVkiLN6qbV
https://go.godbolt.org/z/W313f1EYG
### What did you see happen?
On amd64, the compiled loop has a throughput of one iteration every four cycles:
```as
main_example_pc48:
MOVQ (BX)(DI*8), R8
MOVQ (SI)(DI*8), R9
LEAQ 1(DI), R10
NEGL AX
ADCQ R8, R9
MOVQ R9, (DX)(DI*8)
SBBQ AX, AX
NEGQ AX
MOVQ R10, DI
main_example_pc78:
CMPQ CX, DI
JGT main_example_pc48
```
The bottleneck is the `NEGL` -> `ADCQ` -> `SBBQ` -> `NEGQ` dependency chain.
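For context, the semantics of the loop (multi-word addition with full carry propagation through `bits.Add`) can be checked in isolation; this stand-alone sketch only illustrates what the loop computes and says nothing about the codegen itself:

```go
package main

import (
	"fmt"
	"math/bits"
)

// addVec adds x and y word-wise with an incoming carry, storing the sums in
// result and returning the outgoing carry -- the same loop whose codegen is
// discussed above.
func addVec(carryIn uint, x, y, result []uint) uint {
	if len(x) != len(y) || len(x) != len(result) {
		panic("length mismatch")
	}
	for i := 0; i < len(x); i++ {
		result[i], carryIn = bits.Add(x[i], y[i], carryIn)
	}
	return carryIn
}

func main() {
	const maxUint = ^uint(0)
	x := []uint{maxUint, maxUint} // least-significant word first
	y := []uint{1, 0}
	result := make([]uint, 2)
	carry := addVec(0, x, y, result)
	fmt.Println(result, carry) // [0 0] 1: the carry ripples through both words
}
```

Because each iteration's carry feeds the next iteration's `bits.Add`, the loop is inherently a serial dependency chain, which is why every extra instruction on the carry path costs a cycle per iteration.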
### What did you expect to see?
The SBBQ / NEGQ pair should use `SETCS` instead, e.g.
```as
main_example_pc48:
MOVQ (BX)(DI*8), R8
MOVQ (SI)(DI*8), R9
LEAQ 1(DI), R10
NEGL AX
ADCQ R8, R9
MOVQ R9, (DX)(DI*8)
SETCS AX
MOVQ R10, DI
main_example_pc78:
CMPQ CX, DI
JGT main_example_pc48
```
This shortens the dependency chain to three instructions. | Performance,help wanted,NeedsInvestigation,compiler/runtime | low | Critical |
2,474,798,659 | tauri | [bug] creating a window after "await WebviewWindow.getByLabel" raises "thread 'main' has overflowed its stack" on Windows | ### Describe the bug
```
export const openFunProxy = async () => {
  const proxyWindow = await WebviewWindow.getByLabel('fun-proxy');
  if (proxyWindow) {
    const isMinimized = await proxyWindow.isMinimized();
    if (isMinimized) await proxyWindow.unminimize();
  } else {
    // TODO: without the setTimeout, Windows crashes with "thread 'main' has overflowed its stack"
    setTimeout(async () => {
      await invoke('plugin:window-manager|create_proxy_window_cmd');
    }, 1000);
  }
};
```
// rust
```
#[tauri::command]
pub async fn create_proxy_window_cmd() -> Result<(), TauriError> {
create_proxy_window();
Ok(())
}
pub fn create_proxy_window() -> WebviewWindow {
warn!("Proxy window not found, create new proxy window!");
let app: &tauri::AppHandle<_> = APP.get().unwrap();
let mut win_builder =
WebviewWindowBuilder::new(app, "fun-proxy", WebviewUrl::App("/desktop/proxy".into()))
            .title("Funproxy: switch environments with one click, enjoy silky-smooth full-link testing ๐")
.visible(true)
.inner_size(1400., 860.)
.center();
// set_transparent_titlebar
#[cfg(target_os = "macos")]
{
use tauri::TitleBarStyle;
win_builder = win_builder
.title_bar_style(TitleBarStyle::Overlay)
.hidden_title(true);
}
#[cfg(target_os = "windows")]
{
win_builder = win_builder.decorations(false);
}
win_builder.build().unwrap()
}
```
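The `setTimeout` workaround appears to rely on ordinary event-loop ordering: the `invoke` runs in a later macrotask, after the current call stack and pending microtasks have fully unwound. A framework-free sketch of that ordering (plain JavaScript, no Tauri APIs; whether this fully explains the stack overflow is an assumption):

```javascript
// Demonstrates that a setTimeout(fn, 0) callback runs only after the
// current synchronous call stack and pending microtasks have drained.
const order = [];

function createWindowLater() {
  setTimeout(() => order.push("invoke: create window (new task)"), 0);
  Promise.resolve().then(() => order.push("microtask"));
  order.push("synchronous caller still on the stack");
}

createWindowLater();
order.push("caller returned");

setTimeout(() => {
  console.log(order.join("\n"));
  // synchronous caller still on the stack
  // caller returned
  // microtask
  // invoke: create window (new task)
}, 10);
```

If the overflow is caused by re-entrant handling during the original IPC call, deferring to a new task like this would sidestep it, which matches the observed behavior.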
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
PS D:\My Documents\11091628\Desktop\code\fun-family\app\desktop> cargo tauri info
[โ] Environment
    - OS: Windows 10.0.22631 X64
    โ WebView2: 127.0.2651.105
    โ MSVC:
        - Visual Studio Community 2022
        - Visual Studio Build Tools 2022
    โ rustc: 1.79.0 (129f3b996 2024-06-10)
    โ cargo: 1.79.0 (ffa9cf99a 2024-06-03)
    โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
    โ Rust toolchain: stable-x86_64-pc-windows-msvc (environment override by RUSTUP_TOOLCHAIN)
- node: 20.12.2
- pnpm: 9.1.2
- npm: 10.5.0
[-] Packages
- tauri [RUST]: 2.0.0-rc.3
- tauri-build [RUST]: 2.0.0-rc.3
- wry [RUST]: 0.42.0
- tao [RUST]: 0.29.0
- tauri-cli [RUST]: 2.0.0-beta.14
- @tauri-apps/api [NPM]: 2.0.0-rc.2
- @tauri-apps/cli [NPM]: 2.0.0-beta.14
[-] App
- build-type: bundle
- CSP: default-src asset: https://asset.localhost blob: data: filesystem: ws: wss: http: https: tauri: 'unsafe-eval' 'unsafe-inline' 'self' img-src: 'self'
- frontendDist: ../dist
- devUrl: http://127.0.0.1:1420/
- framework: React (Next.js)
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,474,822,949 | godot | get_used_rect not working with hexagonal tile map | ### Tested versions
4.3, 4.2
### System information
Godot v4.3.stable - Windows 10.0.26120 - Vulkan (Forward+) - integrated Intel(R) Iris(R) Xe Graphics (Intel Corporation; 32.0.101.5763) - 12th Gen Intel(R) Core(TM) i5-1235U (12 Threads)
### Issue description
When getting the used rect of a hexagonal tile map, the returned values are slightly off (or there is a mistake in my code, in which case I'd appreciate help spotting it).
See the image attached.

The exact same code is applied to both maps, with no position transformation.
### Steps to reproduce
Open the project, run.
### Minimal reproduction project (MRP)
[proj.zip](https://github.com/user-attachments/files/16670000/proj.zip)
| bug,topic:2d | low | Minor |
2,474,858,843 | rust | Long ty names written to disk can leak username info in paths | Long ty names written to disk also has this issue
_Originally posted by @jieyouxu in https://github.com/rust-lang/rust/issues/128594#issuecomment-2266659008_
---
When a type error contains a long type name, it can be written to disk. However, sometimes these long type names can contain username info or absolute paths.
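One way to avoid the leak would be to remap the user's home directory before embedding paths in the dump. This is only an illustrative sketch (the helper name and the `$HOME` placeholder are invented here); rustc would more likely reuse its existing path-remapping machinery:

```rust
use std::path::{Path, PathBuf};

/// Replace the user's home-directory prefix with a placeholder before a path
/// is written into a long-type-name dump. Illustrative only.
fn scrub_home(path: &Path, home: &Path) -> PathBuf {
    match path.strip_prefix(home) {
        Ok(rest) => Path::new("$HOME").join(rest),
        Err(_) => path.to_path_buf(),
    }
}

fn main() {
    let home = Path::new("/home/alice");
    let leaked = Path::new("/home/alice/project/src/main.rs");
    println!("{}", scrub_home(leaked, home).display());
}
```

Any real fix would also need to cover absolute paths outside the home directory, e.g. by honoring `--remap-path-prefix` the same way diagnostics already do.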
cc #128914 as this may be of concern. | C-enhancement,A-diagnostics,T-compiler,D-diagnostic-infra,A-metrics | low | Critical |
2,474,859,260 | pytorch | Stack trace is symbolized when no exception is thrown | ### ๐ Describe the bug
The following code:
```python
import torch
import torch.distributed as dist
if __name__ == '__main__':
dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank())
t = torch.randn(256, device='cuda')
dist.scatter(t, [t, t] if dist.get_rank() == 0 else [])
dist.scatter(t, [t, t] if dist.get_rank() == 0 else [])
torch.cuda.synchronize()
dist.destroy_process_group()
```
prints:
```
$ TORCH_SHOW_CPP_STACKTRACES=1 torchrun --nnodes=1 --nproc-per-node=2 a.py
[rank1]:[W820 06:51:47.822066421 Module.cpp:175] symbolizing C++ stack trace for exception; if this hangs, rerun with TORCH_DISABLE_ADDR2LINE=1...
[rank1]:[W820 06:51:47.982257538 Module.cpp:175] symbolizing C++ stack trace for exception; if this hangs, rerun with TORCH_DISABLE_ADDR2LINE=1...
```
The problem is that it should not try to symbolize the stack: this is an annoying and useless warning message, and may have performance implications.
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ot @ezyang as a follow up of https://github.com/pytorch/pytorch/pull/113207#issuecomment-2298072193
### Versions
torch 2.4.0 | oncall: distributed,triaged | low | Critical |
2,474,874,503 | rust | Unify/streamline long type name written to disk mechanism | > I've unified (I think) the long type output file in `note_obligation_cause_code`, and moved the filename note to the end of `note_obligation_cause_code`. The long type output file probably should be passed even further up the call chain. Although further up in the call chain, there are also various instances of scattered output file / filename notes (esp. in `rustc_hir_typeck`) which probably should eventually be somehow unified too (probably not in this PR).
_Originally posted by @jieyouxu in https://github.com/rust-lang/rust/issues/121739#issuecomment-1969819008_
---
In `rustc_hir_typeck`, and probably in more places, there is scattered ad-hoc logic for writing long type names to disk. Last time I tried to modify it in #121739, I discovered that:
- Where and how these long type name files are created is inconsistent.
- There are probably multiple places where different files are created, which can cause multiple long type name files to be emitted if you aren't careful.
- AFAICT, they might not properly respect `--remap-path-prefix` and such.
- Not sure if this is problematic for reproducibility, probably not.
It's probably better if we unify how long type names are written to disk so they are treated consistently and to avoid the duplicate-output footguns. This could potentially aid in addressing #129296. | C-cleanup,A-diagnostics,T-compiler,D-diagnostic-infra | low | Minor |
2,474,886,732 | ollama | how to use batch when using llm | I noticed the API does not support processing batched prompts, so GPU utilization is low. I want to use batch mode to improve GPU utilization and accelerate inference. How can I do that? | feature request | low | Minor |
2,474,905,982 | kubernetes | Scrape etcd_db_total_size_in_use_in_bytes metric from API server to accurately track etcd database usage | ### What would you like to be added?
The Kubernetes API server currently scrapes the `etcd_db_total_size_in_bytes` metric, which has been recently renamed to the `apiserver_storage_size_bytes` metric. According to [Kubernetes documentation](https://kubernetes.io/docs/reference/instrumentation/metrics/), this metric provides the size of the etcd database file, as it is defined as `Size of the storage database file physically allocated in bytes`.
This metric is generally used to check for etcd database size contention, where the database is growing too large, allowing cluster admins to delete excess objects. The current scraping logic for this metric is [here](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/storage/storagebackend/factory/etcd3.go#L461-L493).
EKS - https://aws.github.io/aws-eks-best-practices/reliability/docs/controlplane/#etcd
Azure - https://learn.microsoft.com/en-us/troubleshoot/azure/azure-kubernetes/create-upgrade-delete/troubleshoot-apiserver-etcd?tabs=resource-specific#cause-2-an-offending-client-leaks-etcd-objects-and-results-in-a-slowdown-of-etcd
However, the above metric only tracks the file size of the etcd database, not the actual space used by etcd. This metric does not change when etcd performs compaction, which [frees up space for etcd to use](https://etcd.io/docs/v3.4/op-guide/maintenance/#:~:text=Compacting%20old%20revisions%20internally%20fragments,reclaim%20the%20space%20on%20disk.) but is not reflected in the underlying host filesystem, as the freed space remains fragmented. Hence, the `apiserver_storage_size_bytes` metric could mislead admins into believing that their etcd database is running out of space when there is still usable, fragmented free capacity. In addition, when admins delete excess objects, the deletion would not be reflected in `apiserver_storage_size_bytes` until a later etcd defrag is performed.
I'm proposing the addition of a new metric to the API server (could be called `apiserver_storage_size_in_use_bytes`), which would scrape the `etcd_db_total_size_in_use_in_bytes` metric from etcd. This metric tracks the amount of storage space that is actually in-use by etcd, and would change as admins delete excess objects and when compaction is performed.
Adding etcd metric definitions below for reference, fetched from [etcd page](https://etcd.io/docs/v3.6/metrics/etcd-metrics-latest.txt)
| Metric | Definition | Notes |
| --- | --- | --- |
| etcd_mvcc_db_total_size_in_bytes | Total size of the underlying database physically allocated in bytes.| Currently tracked as `apiserver_storage_size_bytes` in API server metrics |
| etcd_mvcc_db_total_size_in_use_in_bytes | Total size of the underlying database logically in use in bytes. | Proposing this metric to also be tracked within API server |
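To make the proposal concrete, the gap between the two metrics can be read as a fragmentation ratio. A minimal sketch (hypothetical helper name, not part of any Kubernetes or etcd API):

```python
def etcd_fragmentation(total_size_bytes: int, in_use_bytes: int) -> float:
    """Fraction of the etcd database file that is free, fragmented space.

    total_size_bytes: etcd_mvcc_db_total_size_in_bytes (file size on disk)
    in_use_bytes:     etcd_mvcc_db_total_size_in_use_in_bytes (logically in use)
    """
    if total_size_bytes <= 0:
        raise ValueError("total size must be positive")
    return 1.0 - in_use_bytes / total_size_bytes

# After compaction the in-use size drops while the file size stays the same,
# so the file-size metric alone overstates real usage.
ratio = etcd_fragmentation(8 * 1024**3, 2 * 1024**3)  # 8 GiB file, 2 GiB in use
assert ratio == 0.75
```

With both metrics exposed, admins could alert on a high fragmentation ratio (defrag needed) separately from a genuinely full database.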
Happy to create a PR for the required changes as well
### Why is this needed?
This would allow cluster admins to effectively detect when their clusters are encountering storage contention issues at the etcd layer and take corrective actions, such as deleting Kubernetes resources that are no longer required. It would also allow admins to track storage consumption in real-time after deleting the objects, allowing them to verify that their changes are effective. | kind/feature,needs-triage,sig/etcd | low | Major |
2,474,925,918 | go | cmd/internal/obj: symbol redeclared panics during InitTextSym | ### Go version
go1.24
### Output of `go env` in your module/workspace:
```shell
hidden
```
### What did you do?
When compiling the automatically generated code (https://github.com/alibaba/opentelemetry-go-auto-instrumentation/pull/52/), a panic occurred:
```
/test.go:5:47: internal compiler error: panic: runtime error: invalid memory address or nil pointer dereference
Please file a bug report including a short program that triggers the error.
https://go.dev/issue/new
```
reproduce:
```go
package otel_rules
import _ "unsafe"
import otel_debug "runtime/debug"
//go:linkname OtelPrintStack0 otel_debug.OtelPrintStack
func OtelPrintStack0() { otel_debug.PrintStack() }
//go:linkname OtelPrintStack1 otel_debug.OtelPrintStack
func OtelPrintStack1() { otel_debug.PrintStack() }
```
further reduced:
```go
package main
import _ "unsafe"
//go:linkname a main.c
func a() {}
func c() {}
```
stacktrace:
```
/go/mytest/bug.go:6:6: internal compiler error: panic: runtime error: invalid memory address or nil pointer dereference
goroutine 1 [running]:
runtime/debug.Stack()
/go/src/runtime/debug/stack.go:26 +0x5e
cmd/compile/internal/base.FatalfAt({0x1936d8?, 0xc0?}, {0xe3824f, 0x9}, {0xc000193708, 0x1, 0x1})
/go/src/cmd/compile/internal/base/print.go:230 +0x1ea
cmd/compile/internal/base.Fatalf(...)
/go/src/cmd/compile/internal/base/print.go:195
cmd/compile/internal/gc.handlePanic()
/go/src/cmd/compile/internal/gc/main.go:54 +0x8a
panic({0xdb0de0?, 0x14edc00?})
/go/src/runtime/panic.go:785 +0x132
cmd/internal/obj.(*Link).InitTextSym(...)
/go/src/cmd/internal/obj/plist.go:186
cmd/compile/internal/ir.setupTextLSym(0xfb1578?, 0xc0004fad01?)
/go/src/cmd/compile/internal/ir/abi.go:77 +0x244
cmd/compile/internal/ir.InitLSym(0xc0004f8f00, 0x1)
/go/src/cmd/compile/internal/ir/abi.go:32 +0xea
cmd/compile/internal/gc.prepareFunc(0xc0004f8f00)
/go/src/cmd/compile/internal/gc/compile.go:95 +0x25
cmd/compile/internal/gc.enqueueFunc(0xc0004f8f00)
/go/src/cmd/compile/internal/gc/compile.go:76 +0x268
cmd/compile/internal/gc.Main(0xe73238)
/go/src/cmd/compile/internal/gc/main.go:300 +0x123b
main.main()
/go/src/cmd/compile/main.go:57 +0xf9
```
In reduced program, both `a` and `c` have identical LSym, i.e. `main.c`.
When `a` has a body, `setupTextLSym` is called for the first time, initializing `LSym.Extra` at
https://github.com/golang/go/blob/7fcd4a7007979e4aaa9e8893bd0088f5f28627e7/src/cmd/internal/obj/plist.go#L189
Then, when processing `c`, `setupTextLSym` is called again. `s.Func().Text` was not initialized in the previous step by `s.NewFuncInfo()`, leading to the below panic
https://github.com/golang/go/blob/7fcd4a7007979e4aaa9e8893bd0088f5f28627e7/src/cmd/internal/obj/plist.go#L186
### What did you see happen?
The compiler panics with an internal compiler error.
### What did you expect to see?
No panic; a "symbol redeclared" error should be reported instead. | NeedsInvestigation,compiler/runtime | low | Critical |
2,474,940,088 | rust | Build to AArch64 errors with "error: fixup value out of range" | ### Code
Something to do with using these structures, which would have a size of roughly 2^36.
Obviously, this shouldn't work in practice, but the error is very cryptic, and it compiles fine on x86.
It's not triggered by this code alone; it needs to be combined with other code, and I'm not sure of the exact specifics.
https://github.com/midnightveil/microkit-rustc-bug This repo is as much of a minimization as I could do in a few hours. Much of the code near the end of build_system() that touches the results of PGD/PUD etc., once removed, seems to make it work again. The relevant struct definitions are at the bottom of the file.
```Rust
#[derive(Copy, Clone)]
struct PGD {
puds: [Option<PUD>; 512],
}
impl PGD {
fn new() -> Self {
PGD { puds: [None; 512] }
}
}
#[derive(Copy, Clone)]
struct PUD {
dirs: [Option<DIR>; 512],
}
impl PUD {
fn new() -> Self {
PUD { dirs: [None; 512] }
}
}
#[derive(Copy, Clone)]
struct DIR {
pts: [Option<PT>; 512],
}
impl DIR {
fn new() -> Self {
DIR { pts: [None; 512] }
}
}
#[derive(Copy, Clone)]
struct PT {
pages: [u64; 512],
}
impl PT {
fn new() -> Self {
PT {
pages: [u64::MAX; 512],
}
}
}
```
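Back-of-envelope arithmetic for the nesting above (ignoring `Option` discriminants and padding, so this is a lower bound on the materialized size):

```python
# Each level holds 512 children; the leaves are u64 values (8 bytes each).
leaves = 512 ** 4            # u64 entries in a fully populated PGD
bytes_total = leaves * 8     # raw payload, ignoring Option overhead

assert leaves == 2 ** 36       # the "roughly 2^36" figure, counted in entries
assert bytes_total == 2 ** 39  # i.e. 512 GiB of raw leaf data
```

Even as a stack temporary this is far beyond any realistic allocation, which is consistent with the report that it "shouldn't work in practice" and that the real bug is the cryptic AArch64-only diagnostic.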
### Meta
Reproduced either on a macOS host or with `cargo +nightly rustc --target aarch64-unknown-linux-gnu` on Linux; x86-64 builds work fine (at least the compiler doesn't crash). It's specific to AArch64 targets.
`rustc --version --verbose`:
```
rustc 1.82.0-nightly (636d7ff91 2024-08-19)
binary: rustc
commit-hash: 636d7ff91b9847d6d43c7bbe023568828f6e3246
commit-date: 2024-08-19
host: x86_64-unknown-linux-gnu
release: 1.82.0-nightly
LLVM version: 19.1.0
```
### Error output
```
error: fixup value out of range
```
With `-Z treat-err-as-bug=1`:
```
thread 'rustc' panicked at compiler/rustc_errors/src/lib.rs:1803:17:
aborting due to `-Z treat-err-as-bug=1`
stack backtrace:
0: 0x7f427c0aa7a5 - std::backtrace::Backtrace::create::hd44cf642eac7dd13
1: 0x7f427a5aa8d5 - std::backtrace::Backtrace::force_capture::h1c6a5336f4788f1c
2: 0x7f427974448e - std[f4f7433038ca58d7]::panicking::update_hook::<alloc[5afee40ebe45352b]::boxed::Box<rustc_driver_impl[bf262a983b89031b]::install_ice_hook::{closure#0}>>::{closure#0}
3: 0x7f427a5c2747 - std::panicking::rust_panic_with_hook::h2812277631d65a18
4: 0x7f427a5c23d3 - std::panicking::begin_panic_handler::{{closure}}::h1cbf6e1a8ab08919
5: 0x7f427a5bfc09 - std::sys::backtrace::__rust_end_short_backtrace::he514a61806a4f1e2
6: 0x7f427a5c20d4 - rust_begin_unwind
7: 0x7f42774a80f3 - core::panicking::panic_fmt::hc4e73aa92e327778
8: 0x7f427c933996 - <rustc_errors[522b8b3df7a583e8]::DiagCtxtInner>::panic_if_treat_err_as_bug.cold
9: 0x7f427bfce210 - <rustc_errors[522b8b3df7a583e8]::DiagCtxtInner>::emit_diagnostic::{closure#3}
10: 0x7f427bfd16bf - rustc_interface[632640cbf95a869f]::callbacks::track_diagnostic::<core[688a17d15409e27]::option::Option<rustc_span[7b3448155f652e2d]::ErrorGuaranteed>>
11: 0x7f427bfcfabe - <rustc_errors[522b8b3df7a583e8]::DiagCtxtInner>::emit_diagnostic
12: 0x7f427bfcf95d - <rustc_errors[522b8b3df7a583e8]::DiagCtxtHandle>::emit_diagnostic
13: 0x7f4278f6b2bb - <() as rustc_errors[522b8b3df7a583e8]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
14: 0x7f427bd62d64 - <rustc_codegen_ssa[65d2e4fac7b81301]::back::write::SharedEmitterMain>::check
15: 0x7f427bd5ef44 - <rustc_codegen_llvm[d285dc14666e2f1]::LlvmCodegenBackend as rustc_codegen_ssa[65d2e4fac7b81301]::traits::backend::CodegenBackend>::join_codegen
16: 0x7f427bd5c006 - <rustc_interface[632640cbf95a869f]::queries::Linker>::link
17: 0x7f427bbe1de3 - rustc_interface[632640cbf95a869f]::interface::run_compiler::<core[688a17d15409e27]::result::Result<(), rustc_span[7b3448155f652e2d]::ErrorGuaranteed>, rustc_driver_impl[bf262a983b89031b]::run_compiler::{closure#0}>::{closure#1}
18: 0x7f427bbc7290 - std[f4f7433038ca58d7]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[632640cbf95a869f]::util::run_in_thread_with_globals<rustc_interface[632640cbf95a869f]::util::run_in_thread_pool_with_globals<rustc_interface[632640cbf95a869f]::interface::run_compiler<core[688a17d15409e27]::result::Result<(), rustc_span[7b3448155f652e2d]::ErrorGuaranteed>, rustc_driver_impl[bf262a983b89031b]::run_compiler::{closure#0}>::{closure#1}, core[688a17d15409e27]::result::Result<(), rustc_span[7b3448155f652e2d]::ErrorGuaranteed>>::{closure#0}, core[688a17d15409e27]::result::Result<(), rustc_span[7b3448155f652e2d]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[688a17d15409e27]::result::Result<(), rustc_span[7b3448155f652e2d]::ErrorGuaranteed>>
19: 0x7f427bbc78fa - <<std[f4f7433038ca58d7]::thread::Builder>::spawn_unchecked_<rustc_interface[632640cbf95a869f]::util::run_in_thread_with_globals<rustc_interface[632640cbf95a869f]::util::run_in_thread_pool_with_globals<rustc_interface[632640cbf95a869f]::interface::run_compiler<core[688a17d15409e27]::result::Result<(), rustc_span[7b3448155f652e2d]::ErrorGuaranteed>, rustc_driver_impl[bf262a983b89031b]::run_compiler::{closure#0}>::{closure#1}, core[688a17d15409e27]::result::Result<(), rustc_span[7b3448155f652e2d]::ErrorGuaranteed>>::{closure#0}, core[688a17d15409e27]::result::Result<(), rustc_span[7b3448155f652e2d]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[688a17d15409e27]::result::Result<(), rustc_span[7b3448155f652e2d]::ErrorGuaranteed>>::{closure#1} as core[688a17d15409e27]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
20: 0x7f427bbc7c6b - std::sys::pal::unix::thread::Thread::new::thread_start::hcb43f834109dfd4d
21: 0x7f427d64a272 - start_thread
22: 0x7f427d6c5dec - clone3
23: 0x0 - <unknown>
```
</p>
</details>
| A-linkage,A-LLVM,I-ICE,T-compiler,C-bug,D-terse,O-AArch64 | low | Critical |
2,474,948,325 | TypeScript | Regex unicode property autocomplete | ### Search Terms
regular expressions, unicode character class, unicode property, hints, autocomplete
### Viability Checklist
- [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [X] This wouldn't change the runtime behavior of existing JavaScript code
- [X] This could be implemented without emitting different JS based on the types of the expressions
- [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [X] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [X] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### Suggestion
Since TS now validates regular expressions including unicode properties and even provides spelling suggestions, why not autocomplete them?
### Motivating Example
In this example TS 5.5 understands that the first regex is incorrect:
```ts
const re1 = /\p{Emoj}/u // Error: Unknown Unicode property name or value. Did you mean "Emoji"?
const re2 = /\p{Emoji}/u // Ok
```
### Use Cases
It would be convenient if TS suggested available unicode properties when suggestions are triggered inside `\p{}`/`\P{}` sequences. | Suggestion,Domain: Completion Lists,Awaiting More Feedback | low | Critical |
2,474,950,578 | ui | [bug]: Cannot identify if the already selected select item is clicked on Select component | ### Describe the bug
I tried to use the `onValueChange` function on the Select component. This function doesn't trigger when the already selected item is clicked, only when the value changes. I also tried the `onChange` functions on `SelectContent` and `SelectItem`; those are not triggered at all.
### Affected component/components
Select, Select Content, Select Item
### How to reproduce
1. Create a select component with an `onValueChange` function with a console log.
2. Select a value from the list.
3. You can see the console.log output.
4. Now select the same item in the list.
5. No console.log output is available.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Arc browser
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,475,014,691 | neovim | vim.filetype.match doesn't detect file type but :filetype detect does | ### Problem
Sometimes, vim is able to automatically determine the filetype of a file and highlight it correctly, but `vim.filetype.match` does not detect it
### Steps to reproduce
Open a file with the following contents:
```conf
# foo
bar "baz"
```
`:set filetype?` shows the filetype as `conf`, and the syntax is highlighted correctly, but `vim.filetype.match({buf = 0})` returns `nil`.
### Expected behavior
`vim.filetype.match` should detect the filetype when `:filetype` is able to detect it
### Neovim version (nvim -v)
NVIM v0.11.0-dev+599-g8df6736ca
### Vim (not Nvim) behaves the same?
N/a, related to the nvim api
### Operating system/version
Fedora Silverblue 40
### Terminal name/version
wezterm 20240203-110809-5046fc22
### $TERM environment variable
xterm-256color
### Installation
nvim-nightly Fedora Copr | enhancement,filetype | low | Minor |
2,475,020,993 | godot | Resources with typed arrays of resources show up as the path to the resource instead of the class name in editor causing an error that the type does not match. | ### Tested versions
- Reproducible in 4.3 but was not present in 4.2
### System information
Mac OS Sonoma M1
### Issue description
Resources with typed arrays of resources show up as the path to the resource instead of the class name in the editor, causing an error that the type does not match. Removing the typing in the script logic that accesses the array fixes the error, and the script runs correctly.
Resource contains:
```gdscript
@export var resources: Array[test_res]
```
Accessing it like so:
```gdscript
const CONTAINER: container_res = preload("res://container.tres")
...
for resource: test_res in CONTAINER.resources:
```
Produces:
`Parse Error: Unable to iterate on value of type "Array[res://test_res.gd]" with variable of type "test_res".`
I believe this is related to the following changes: #78219 #90751 #85024
### Steps to reproduce
Create resource that contains an array of custom resources. Attempt to iterate or access values from the array as the custom resource class type.
### Minimal reproduction project (MRP)
Minimum example files
[testgd.zip](https://github.com/user-attachments/files/16671595/testgd.zip) | bug,topic:gdscript,needs testing,regression | low | Critical |
2,475,035,438 | PowerToys | FancyZones: Allow reordering of Custom zones | ### Description of the new feature / enhancement
I have more than 5 custom configurations. They are sorted by time of creation, but I would like to sort them by most used, ideally with manual ordering. In other apps, I am used to drag and drop for reordering items; that should be possible for custom layouts as well.

### Scenario when this would be used?
Having many custom zones and requesting another ordering than by-creation.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,475,093,942 | TypeScript | Suboptimal completion list for template literal type member | ### Search Terms
"completion list", "template literal", "prefix", "unexpected items"
### Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries.
### Playground Link
https://www.typescriptlang.org/play/?ts=5.7.0-dev.20240820#code/C4TwDgpgBAKlC8UAGASA3gIjAJwgMwEsAPARgwB8tdCiAmDAX3QGdhsCA7AcwaQG4AUAIA2EYFCIAuWAigYMQA
### Code
```ts
type T = `${"prefix1"|"prefix2"}${string}`;
let x: T = ""
```
### Actual behavior
When requesting a completion inside the `""`, I get `"$"`, `"let"`, `"prefix1"`, `"prefix2"`, `"string"`, `"T"`, `"type"`, `"x"`.
### Expected behavior
I expect to get just `"prefix1"`, `"prefix2"`.
### Additional information about the issue
The current completions include strings that the value cannot legally start with. | Help Wanted,Domain: Completion Lists,Possible Improvement | low | Minor |
2,475,104,225 | pytorch | cudagraph capture p2p ops fail | ### Describe the bug
I use cudagraph to capture send/recv but it fails.
The code is run with `torchrun --nnodes=1 --nproc_per_node=2 test.py`
```
import os
import torch
import torch.distributed as dist
def test_func(x, rank):
if rank == 0:
x += 1
# Send the tensor to process 1
dist.send(tensor=x, dst=1)
else:
# Receive tensor from process 0
dist.recv(tensor=x, src=0)
return x + 2
def run(rank):
torch.cuda.set_device(rank)
x = torch.ones(1, device='cuda')
y = test_func(x, rank)
dist.barrier()
graph = torch.cuda.CUDAGraph()
with torch.cuda.graph(graph):
x = torch.ones(1, device='cuda')
y = test_func(x, rank)
for i in range(1):
x.copy_(torch.ones(1, device='cuda'))
graph.replay()
print(f"Rank{rank} has data {y}")
def main():
rank = int(os.environ['RANK'])
local_rank = int(os.environ['LOCAL_RANK'])
world_size = int(os.environ['WORLD_SIZE'])
dist.init_process_group('nccl', rank=rank, world_size=world_size)
run(local_rank)
if __name__ == "__main__":
main()
```
The following is the error:
```
W0820 08:46:33.506000 140422493812544 torch/distributed/run.py:779]
W0820 08:46:33.506000 140422493812544 torch/distributed/run.py:779] *****************************************
W0820 08:46:33.506000 140422493812544 torch/distributed/run.py:779] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0820 08:46:33.506000 140422493812544 torch/distributed/run.py:779] *****************************************
[rank0]: Traceback (most recent call last):
[rank0]: File "/opt/conda/lib/python3.11/site-packages/torch/distributed/c10d_logger.py", line 79, in wrapper
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 1949, in send
[rank0]: default_pg.send([tensor], dst, tag).wait()
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: AttributeError: 'NoneType' object has no attribute 'wait'
[rank0]: During handling of the above exception, another exception occurred:
[rank0]: Traceback (most recent call last):
[rank0]: File "/workspace/test.py", line 25, in run
[rank0]: y = test_func(x, rank)
[rank0]: ^^^^^^^^^^^^^^^^^^
[rank0]: File "/workspace/test.py", line 10, in test_func
[rank0]: dist.send(tensor=x, dst=1)
[rank0]: File "/opt/conda/lib/python3.11/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank0]: msg_dict = _get_msg_dict(func.__name__, *args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/torch/distributed/c10d_logger.py", line 54, in _get_msg_dict
[rank0]: "args": f"{args}, {kwargs}",
[rank0]: ^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/torch/_tensor.py", line 463, in __repr__
[rank0]: return torch._tensor_str._str(self, tensor_contents=tensor_contents)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/torch/_tensor_str.py", line 698, in _str
[rank0]: return _str_intern(self, tensor_contents=tensor_contents)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/torch/_tensor_str.py", line 618, in _str_intern
[rank0]: tensor_str = _tensor_str(self, indent)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/torch/_tensor_str.py", line 350, in _tensor_str
[rank0]: formatter = _Formatter(get_summarized_data(self) if summarize else self)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/torch/_tensor_str.py", line 138, in __init__
[rank0]: nonzero_finite_vals = torch.masked_select(
[rank0]: ^^^^^^^^^^^^^^^^^^^^
[rank0]: RuntimeError: CUDA error: operation not permitted when stream is capturing
[rank0]: CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
[rank0]: For debugging consider passing CUDA_LAUNCH_BLOCKING=1
[rank0]: Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
[rank0]: During handling of the above exception, another exception occurred:
[rank0]: Traceback (most recent call last):
[rank0]: File "/workspace/test.py", line 41, in <module>
[rank0]: main()
[rank0]: File "/workspace/test.py", line 38, in main
[rank0]: run(local_rank)
[rank0]: File "/workspace/test.py", line 23, in run
[rank0]: with torch.cuda.graph(graph):
[rank0]: File "/opt/conda/lib/python3.11/site-packages/torch/cuda/graphs.py", line 185, in __exit__
[rank0]: self.cuda_graph.capture_end()
[rank0]: File "/opt/conda/lib/python3.11/site-packages/torch/cuda/graphs.py", line 83, in capture_end
[rank0]: super().capture_end()
[rank0]: RuntimeError: CUDA error: operation failed due to a previous error during capture
[rank0]: CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
[rank0]: For debugging consider passing CUDA_LAUNCH_BLOCKING=1
[rank0]: Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
[rank1]: Traceback (most recent call last):
[rank1]: File "/opt/conda/lib/python3.11/site-packages/torch/distributed/c10d_logger.py", line 79, in wrapper
[rank1]: return func(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/opt/conda/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 1998, in recv
[rank1]: pg.recv([tensor], src, tag).wait()
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: AttributeError: 'NoneType' object has no attribute 'wait'
[rank1]: During handling of the above exception, another exception occurred:
[rank1]: Traceback (most recent call last):
[rank1]: File "/workspace/test.py", line 25, in run
[rank1]: y = test_func(x, rank)
[rank1]: ^^^^^^^^^^^^^^^^^^
[rank1]: File "/workspace/test.py", line 13, in test_func
[rank1]: dist.recv(tensor=x, src=0)
[rank1]: File "/opt/conda/lib/python3.11/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank1]: msg_dict = _get_msg_dict(func.__name__, *args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/opt/conda/lib/python3.11/site-packages/torch/distributed/c10d_logger.py", line 54, in _get_msg_dict
[rank1]: "args": f"{args}, {kwargs}",
[rank1]: ^^^^^^^^^^^^^^^^^^^
[rank1]: File "/opt/conda/lib/python3.11/site-packages/torch/_tensor.py", line 463, in __repr__
[rank1]: return torch._tensor_str._str(self, tensor_contents=tensor_contents)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/opt/conda/lib/python3.11/site-packages/torch/_tensor_str.py", line 698, in _str
[rank1]: return _str_intern(self, tensor_contents=tensor_contents)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/opt/conda/lib/python3.11/site-packages/torch/_tensor_str.py", line 618, in _str_intern
[rank1]: tensor_str = _tensor_str(self, indent)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/opt/conda/lib/python3.11/site-packages/torch/_tensor_str.py", line 350, in _tensor_str
[rank1]: formatter = _Formatter(get_summarized_data(self) if summarize else self)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/opt/conda/lib/python3.11/site-packages/torch/_tensor_str.py", line 138, in __init__
[rank1]: nonzero_finite_vals = torch.masked_select(
[rank1]: ^^^^^^^^^^^^^^^^^^^^
[rank1]: RuntimeError: CUDA error: operation not permitted when stream is capturing
[rank1]: CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
[rank1]: For debugging consider passing CUDA_LAUNCH_BLOCKING=1
[rank1]: Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
[rank1]: During handling of the above exception, another exception occurred:
[rank1]: Traceback (most recent call last):
[rank1]: File "/workspace/test.py", line 41, in <module>
[rank1]: main()
[rank1]: File "/workspace/test.py", line 38, in main
[rank1]: run(local_rank)
[rank1]: File "/workspace/test.py", line 23, in run
[rank1]: with torch.cuda.graph(graph):
[rank1]: File "/opt/conda/lib/python3.11/site-packages/torch/cuda/graphs.py", line 185, in __exit__
[rank1]: self.cuda_graph.capture_end()
[rank1]: File "/opt/conda/lib/python3.11/site-packages/torch/cuda/graphs.py", line 83, in capture_end
[rank1]: super().capture_end()
[rank1]: RuntimeError: CUDA error: operation failed due to a previous error during capture
[rank1]: CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
[rank1]: For debugging consider passing CUDA_LAUNCH_BLOCKING=1
[rank1]: Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
[rank0]:[W820 08:46:38.286146836 ProcessGroupNCCL.cpp:1168] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
W0820 08:46:40.124000 140422493812544 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 480 closing signal SIGTERM
E0820 08:46:41.340000 140422493812544 torch/distributed/elastic/multiprocessing/api.py:833] failed (exitcode: 1) local_rank: 0 (pid: 479) of binary: /opt/conda/bin/python
Traceback (most recent call last):
File "/opt/conda/bin/torchrun", line 33, in <module>
sys.exit(load_entry_point('torch==2.4.0', 'console_scripts', 'torchrun')())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 348, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/distributed/run.py", line 901, in main
run(args)
File "/opt/conda/lib/python3.11/site-packages/torch/distributed/run.py", line 892, in run
elastic_launch(
File "/opt/conda/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 133, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
test.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-08-20_08:46:40
host : cambricon-PowerEdge-C4140
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 479)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
I use the docker image pytorch/pytorch:2.4.0-cuda12.1-cudnn9-devel; an older version such as PyTorch 2.1 can capture correctly.
I think this function returning nullptr may cause the issue:
https://github.com/pytorch/pytorch/blob/main/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L3225
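For context, a minimal sketch of the capture pattern that fails on 2.4.0 (hypothetical helper name; it assumes NCCL process groups are already initialized via torchrun and a CUDA device is present, so it is not runnable standalone):

```python
import torch
import torch.distributed as dist

def capture_allreduce(tensor: torch.Tensor) -> torch.cuda.CUDAGraph:
    # Capture a NCCL all_reduce into a CUDA graph. On 2.4.0 capture_end()
    # raises "operation failed due to a previous error during capture";
    # on 2.1 the same pattern captures correctly.
    graph = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph):
        dist.all_reduce(tensor)
    return graph
```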
### Versions
```
PyTorch version: 2.4.0
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-16GB
GPU 1: Tesla V100-SXM2-16GB
GPU 2: Tesla V100-SXM2-16GB
GPU 3: Tesla V100-SXM2-16GB
Nvidia driver version: 535.104.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 4
CPU max MHz: 3700.0000
CPU min MHz: 1000.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 32 MiB (32 instances)
L3 cache: 44 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.12.1
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.8 py311h5eee18b_0
[conda] mkl_random 1.2.4 py311hdb19cb5_0
[conda] numpy 1.26.4 py311h08b1b3b_0
[conda] numpy-base 1.26.4 py311hf175353_0
[conda] optree 0.12.1 pypi_0 pypi
[conda] pytorch 2.4.0 py3.11_cuda12.1_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.4.0 py311_cu121 pytorch
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchtriton 3.0.0 py311 pytorch
[conda] torchvision 0.19.0 py311_cu121 pytorch
```
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @mcarilli @ezyang @eellison @penguinwu | oncall: distributed,triaged,module: cuda graphs | low | Critical |
2,475,114,641 | pytorch | Feat: output dtype for count_nonzero | ### 🚀 The feature, motivation and pitch
Can we add a `dtype` parameter to `count_nonzero()` to specify the output dtype?
Currently `count_nonzero()` outputs an int64 tensor. When working with large tensors, it may be preferable to produce float32 or float16 output directly to save memory, especially if the count is later used in other floating-point operations.
For reference, many other operations accept an output dtype, e.g. [`torch.sum()`](https://pytorch.org/docs/stable/generated/torch.sum.html#torch.sum).
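To illustrate the current behavior and the workaround it forces (the `dtype=` call in the last comment is the proposal, not an existing API):

```python
import torch

x = torch.tensor([[0.0, 1.5, 0.0],
                  [2.0, 0.0, 3.0]])

# Current behavior: the result dtype is always int64.
counts = torch.count_nonzero(x, dim=1)
assert counts.dtype == torch.int64

# Workaround: cast afterwards; the int64 intermediate is still allocated.
counts_f32 = torch.count_nonzero(x, dim=1).to(torch.float32)

# Proposed (hypothetical): torch.count_nonzero(x, dim=1, dtype=torch.float32)
```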
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD | triaged,actionable,module: python frontend | low | Minor |
2,475,175,407 | next.js | next/link <Link> is not loading AMP page components from non-AMP page, I have to use <a> tag only then it renders AMP components which resets the router. | ### Link to the code that reproduces this issue
https://github.com/atul-vashisht/amp-error
### To Reproduce
1. Create a simple Pages Router Next.js app with a non-AMP home page (index).
2. Add a new sample AMP page with some AMP components.
3. On the home page, add a next/link <Link> tag with href="/my-sample-amp-page".
4. Navigate via the link: the AMP components are not rendered.
<Link> does not render the AMP page's components when navigating from a non-AMP page; it should render the AMP page the same way a full-page <a> navigation does.
### Provide environment information
```bash
{
"name": "amp-error",
"version": "0.1.0",
"private": true,
"scripts": {
"dev": "next dev",
"build": "next build",
"start": "next start",
"lint": "next lint"
},
"dependencies": {
"react": "^18",
"react-dom": "^18",
"next": "14.1.4"
}
}
```
### Which area(s) are affected? (Select all that apply)
Pages Router
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
A non-AMP page is not able to load an AMP page via the <Link> tag; using an <a> tag (or target="_blank") renders the AMP components, but the full-page load resets the router.