| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,828,374,585 | DeepSeek: Grouped/Ragged gemm with offsets on the GPU | ngimel | open | [
"triaged",
"module: linear algebra",
"module: nestedtensor"
] | 3 | COLLABORATOR | When using token-choice MoE, each expert gets a varying number of tokens as input, and, in a sync-free implementation, the number of tokens is not known on the CPU. The MoE computation needs to do several gemms with these unknown-size inputs, where the number of gemms equals the number of experts on this GPU. One way to implement this is to have tokens physically packed as an N_tokens_max x model_dim tensor, to have an offsets tensor of size N_experts on the device, and to have grouped gemm kernels that can consume data packed this way. In some cases the actual number of tokens will be less than N_tokens_max, and we'll need to figure out how exactly to handle that.
The grouped gemms are relatively straightforward for the forward computation (`I @ w.T`) and the gradInput computation (`dO @ w`), where only one of the inputs is ragged and the second is the weight, whose shape is static. To compute the weight gradient we need to do `dO.T @ I`, where both inputs are ragged, so that would be a meaningfully different kernel compared to forward and gradInput. In addition, for fp8 matmul we need the second argument to be physically transposed in memory. That can most likely be done in a separate kernel, but details are TBD.
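A rough pure-Python reference for the packed-layout semantics described above (a model of what such a kernel computes, not a GPU implementation; the names `grouped_gemm_ref` and the cumulative-end-row `offsets` convention are illustrative assumptions):

```python
def matmul(a, b):
    # naive (rows x inner) @ (inner x cols) matmul for small lists-of-lists
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def grouped_gemm_ref(packed, offsets, weights):
    # packed:  N_tokens x model_dim rows for all experts, back to back
    # offsets: cumulative end row of each expert's slice (layout assumption)
    # weights: one model_dim x out_dim matrix per expert
    out, start = [], 0
    for expert, end in enumerate(offsets):
        out.extend(matmul(packed[start:end], weights[expert]))
        start = end
    return out

packed = [[1, 0], [0, 1], [1, 1]]                 # 3 tokens, model_dim = 2
offsets = [2, 3]                                  # expert 0: rows 0-1, expert 1: row 2
weights = [[[1, 0], [0, 1]], [[2, 0], [0, 2]]]    # identity, then 2x identity
out = grouped_gemm_ref(packed, offsets, weights)  # [[1, 0], [0, 1], [2, 2]]
```

On the GPU the loop bounds would come from device memory rather than Python integers, which is exactly what makes the sync-free version hard.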
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ | true |
2,828,373,795 | Fix random crash in PyPer | shengfukevin | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4 | CONTRIBUTOR | Summary: PyPer saw random crashes when writing into the ET file. This diff checks whether the output file is in good condition before writing into it, and catches the exception if something bad happens, instead of crashing.
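A minimal sketch of the defensive-write pattern the summary describes, using a plain Python stream as a stand-in (the `safe_write` name and exact checks are illustrative, not the actual profiler code):

```python
import io

def safe_write(f, data):
    # Verify the stream is usable, and catch I/O errors instead of crashing.
    try:
        if f is None or f.closed:
            return False
        f.write(data)
        return True
    except (OSError, ValueError):
        return False

buf = io.StringIO()
ok_open = safe_write(buf, "node record")
buf.close()
ok_closed = safe_write(buf, "dropped")  # closed stream: rejected, no crash
```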
Test Plan: buck2 run mode/opt caffe2/test:test_profiler_cuda -- profiler.test_execution_trace.TestExecutionTraceCUDA
Differential Revision: D69065509
| true |
2,828,373,324 | Tracking issue for Improvements for DeepSeek | ngimel | open | [
"oncall: distributed",
"module: float8",
"module: sdpa"
] | 4 | COLLABORATOR | DeepSeek training, as described in the paper, uses a few techniques that are currently inconvenient to implement in PyTorch.
1) Token-choice MoE. Routing and computation for MoE should happen without host-device syncs. After computing and ranking scores, the information on which token has to go where (and also how many tokens each expert receives) lives on the GPU. This means that communication has to happen with metadata on the GPU only, and the subsequent MoE computation also should use data on the GPU to get the necessary offsets into inputs. In particular, that requires communication APIs that accept GPU metadata (they don't exist today) and a ragged gemm API that reads offsets from GPU memory. #146328, #146329
2) Block quantization for fp8 #146368
3) Hierarchical implementation of the a2a communication to reduce cross-node traffic #146331
4) MLA attention, currently not implemented in an efficient way #146330
5) Overlapping communication and computation: DeepSeek uses a fine-grained overlapping strategy, where the forward of one microbatch is overlapped with the backward of another. We currently don't have a way to conveniently express that; torch.compile could possibly help, but there is a wide-open space of design options here. #146332
6) More flexibility for mixed-precision optimizers (e.g., DeepSeek mentions that they keep optimizer states in bf16) #146542
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @yanbing-j @vkuzo @albanD @kadeng @penguinwu | true |
2,828,310,869 | [benchmark] Remove ONNX | justinchuby | closed | [
"open source",
"Merged",
"module: benchmark",
"ciflow/trunk",
"release notes: benchmark",
"module: dynamo",
"ciflow/inductor",
"topic: devs"
] | 6 | COLLABORATOR | The ONNX exporter experiments in benchmark are obsolete and unmaintained. This PR removes them to unblock https://github.com/pytorch/pytorch/pull/146003
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @atalman | true |
2,828,291,113 | [torch][amdsmi] Avoid ODR violation when loading amdsmi | danzimm | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 19 | CONTRIBUTOR | Summary:
amdsmi bundles its own copy of `libamd_smi.so`. When you're interacting with `amdsmi` from *only* python that's fine, but when you try to interact with `libamd_smi.so` from native code too this poses a problem, because from native code you'll be linking against the copy of `libamd_smi.so` from the SDK.
This means you'll end up with 2 copies of `libamd_smi.so` in your process, and potentially (Murphy's law says you will, as does our CI) violate ODR.
In order to avoid this issue from the PT side of the world we can hook the `dlopen("path/to/bundled/libamd_smi.so")` and try to use the already loaded/SDK version of `libamd_smi.so` first, before proceeding to use the `path/to/bundled/libamd_smi.so`.
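A toy model of the "prefer the already-loaded copy" logic, with a plain dict standing in for the process's loaded-library table (paths and names here are illustrative; the real fix hooks `dlopen` in native code):

```python
loaded = {}  # soname -> handle, standing in for the dynamic loader's state

def load(path):
    soname = path.rsplit("/", 1)[-1]
    if soname in loaded:
        # A library with this soname is already in the process (e.g. the SDK
        # copy linked by native code): reuse it instead of loading a second
        # copy and risking an ODR violation.
        return loaded[soname]
    handle = object()  # stand-in for dlopen(path)
    loaded[soname] = handle
    return handle

h_sdk = load("/sdk/lib/libamd_smi.so")        # illustrative path
h_bundled = load("/bundled/libamd_smi.so")    # illustrative path; reuses h_sdk
```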
Test Plan: CI, inspect process using libamd_smi.so from native + python and observe only a single copy loaded
Differential Revision: D69064038
| true |
2,828,289,391 | [easy] Add type annotation for autotune_num_choices_displayed | henrylhtsang | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4 | CONTRIBUTOR | Test Plan: ci
Differential Revision: D69064447
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,828,198,171 | [dynamo][skip-function] Add missing unimplemented line | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146339
* #146116
* __->__ #146322
This is a missing line from the merged PR in the stack below. Let's try to get this in quickly.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,828,173,916 | [ONNX] Support custom axis name through dynamic_shapes | titaiwangms | closed | [
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: new features"
] | 12 | COLLABORATOR | Fixes #143443
This PR aims to support custom dynamic axis naming through dynamic_shapes. Currently, _Dim and _DimHint do not support dynamic axis naming (#144273).
1. **the original dynamic shapes guarantee**
The axis renaming is only applied when the dynamic shapes include strings instead of all _Dim and _DimHint. Thus, there will not be any behavior inconsistent with torch.export.export if the given dynamic shapes follow the torch.export.export format.
2. _DimHint.AUTO is applied to the axes that are specified with custom names to avoid an exporter crash. (_DimHint.DYNAMIC crashes when the export fails.)
3. There's no need to handle cases where kwargs are out of order with the model signature,
as torch.export.export supports dynamism only when kwargs and dynamic_shapes are provided in order.
https://github.com/pytorch/pytorch/blob/49082f9dba3b79a344cb03652972ddbe7c3729cc/torch/export/_trace.py#L2034
4. If `torch.onnx.ExportedProgram` finds the axes share the same constraints, they will have the same name (e.g. s0, s1, ...). Therefore, even if the ONNX users specify them with different custom names, they won't be respected.
Example model:
```python
class NestedModel(torch.nn.Module):
def forward(
self,
x: torch.Tensor,
ys: list[torch.Tensor],
zs: dict[str, torch.Tensor],
c: torch.Tensor,
):
y = ys[0] + ys[1] + zs["a"] + zs["b"]
w = 5
if x.shape[0] < 3 and c.shape[0] != 4:
return x + w, x + y, c
else:
return x - w, x - y, c
input = (
torch.ones(5),
[torch.zeros(5), torch.ones(5)],
{"a": torch.zeros(5), "b": torch.ones(5)},
torch.ones(6),
)
dynamic_shapes = (
{0: torch.export.Dim("dim_x", min=3)}, # _Dim
[("custom_name_axis_ys_0",), (torch.export.Dim.AUTO,)], # custom name
{
"a": {0: torch.export.Dim.AUTO},
"b": ("custom_name_axis_zs_b_0",),
}, # _DimHint
{0: "custom_name_axis_c_0"}, # custom name
)
``` | true |
2,828,164,098 | DeepSpeed github repo move sync | stas00 | closed | [
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | DeepSpeed has moved to a new repo on GitHub: https://github.com/deepspeedai/DeepSpeed
This PR updates this repo to use the new URL.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,828,120,347 | [export][dynamic shapes] use size-oblivious upper bound reasoning for backed symbols | pianpwk | closed | [
"Stale",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,828,115,819 | Hack AC to not clear recomputed activations | soulitzer | open | [
"no-stale"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146633
* __->__ #146318
* #145399
* #145533
* #145531
* #145520
| true |
2,828,101,996 | [inductor] Improve type annotations in _inductor/pattern_matcher.py | rec | closed | [
"module: typing",
"open source",
"better-engineering",
"release notes: fx",
"topic: not user facing",
"fx",
"module: inductor",
"ciflow/inductor"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146317
* #146248
See https://github.com/pytorch/pytorch/issues/146167
cc @ezyang @malfet @xuzhao9 @gramster @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,828,062,196 | Fix not inlining functions used in metal files | Isalia20 | closed | [
"open source",
"Merged",
"release notes: build",
"topic: bug fixes",
"ciflow/binaries_wheel"
] | 4 | COLLABORATOR | Fixes issue when building PyTorch with Xcode installed after https://github.com/pytorch/pytorch/pull/146231
```
FAILED: caffe2/aten/src/ATen/kernels_basic.metallib /Users/Irakli_Salia/Desktop/pytorch/build/caffe2/aten/src/ATen/kernels_basic.metallib
cd /Users/Irakli_Salia/Desktop/pytorch/build/caffe2/aten/src/ATen && xcrun metallib -o kernels_basic.metallib BinaryKernel_30.air Bucketization_30.air CrossKernel_30.air FusedOptimizerOps_30.air Gamma_30.air HistogramKernel_30.air Im2Col_30.air Indexing_30.air LinearAlgebra_30.air Quantized_30.air RMSNorm_30.air RenormKernel_30.air Repeat_30.air SpecialOps_30.air TriangularOps_30.air UnaryKernel_30.air UnfoldBackward_30.air UpSample_30.air
LLVM ERROR: multiple symbols ('_ZN3c105metal4zetaEff')!
[3835/5420] Building CXX object c10/test/CMakeFiles/c10_small_vector_test.dir/util/small_vector_test.cpp.o
ninja: build stopped: subcommand failed.
```
AI to @malfet: Add linter that ensures that `c10/metal/` headers do not have any functions there, only templates | true |
2,828,046,196 | [export] dynamic_shapes with `Dim` fails when `DYNAMIC` succeeds | xadupre | open | [
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 3 | COLLABORATOR | ### 🐛 Describe the bug
The following example fails. I'm not sure whether that's an error. The only way to make it work is to use ``torch.export.Dim.DYNAMIC``. Ideally, `Dim` should have the same effect as `Dim.DYNAMIC`?
```python
import torch
class Model(torch.nn.Module):
def forward(self, x, y, z):
return torch.cat((x, y), axis=1) + z[:, ::2]
model = Model()
x = torch.randn(2, 3)
y = torch.randn(2, 5)
z = torch.randn(2, 16)
model(x, y, z)
batch = torch.export.Dim("batch")
dx = torch.export.Dim("dx")
dy = torch.export.Dim("dy")
dz = torch.export.Dim("dz")
torch.export.export(
model,
(x, y, z),
dynamic_shapes={
"x": {0: batch, 1: dx},
"y": {0: batch, 1: dy},
"z": {0: batch, 1: dz},
},
)
```
```
torch._dynamo.exc.UserError: Constraints violated (batch, dz)! For more information, run with TORCH_LOGS="+dynamic".
- Not all values of batch = L['x'].size()[0] in the specified range satisfy the generated guard L['x'].size()[0] != 9223372036854775807.
- Not all values of dz = L['z'].size()[1] in the specified range satisfy the generated guard L['z'].size()[1] != 9223372036854775807.
- Not all values of dz = L['z'].size()[1] in the specified range satisfy the generated guard ((1 + L['z'].size()[1]) // 2) != 1.
```
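The failing guards involve the strided slice `z[:, ::2]`, whose output width is the derived expression `(1 + dz) // 2`. A torch-free illustration of which `dz` values that guard excludes (the interpretation that the named `Dim`'s default range clashes with these excluded values is an assumption, not confirmed here):

```python
def sliced_width(dz):
    # number of columns kept by z[:, ::2] for a row of length dz
    return (dz + 1) // 2

# dz values that would trip the guard ((1 + dz) // 2) != 1:
excluded = [dz for dz in range(1, 6) if sliced_width(dz) == 1]  # [1, 2]
```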
### Versions
```
PyTorch version: 2.7.0.dev20250131+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.5
Libc version: glibc-2.35
Python version: 3.12.8 (main, Dec 4 2024, 08:54:12) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.18.0
[pip3] onnx-extended==0.3.0
[pip3] onnxruntime_extensions==0.13.0
[pip3] onnxruntime-training==1.21.0+cu126
[pip3] optree==0.14.0
[pip3] pytorch-triton==3.2.0+gitb2684bf3
[pip3] torch==2.7.0.dev20250131+cu126
[pip3] torch_geometric==2.4.0
[pip3] torchaudio==2.6.0.dev20250131+cu126
[pip3] torchvision==0.22.0.dev20250131+cu126
[conda] Could not collect
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,828,026,989 | print partial fx graph for all tracing errors | bobrenjc93 | closed | [
"release notes: fx",
"topic: not user facing",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146314
* #146296
* #146298
followup from discussions on earlier PRs
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,827,919,712 | [Dynamo] fix torch._dynamo.assume_constant_result when used on class method | matthewfl | open | [
"triaged",
"open source",
"Stale",
"module: dynamo"
] | 9 | CONTRIBUTOR | This PR fixes `torch._dynamo.assume_constant_result` when it is used on a class method, and the class was instantiated inside of the dynamo traced code.
The issue: currently, when an object is modified by storing attributes on it inside dynamo-traced code, the side effects of those attribute stores are tracked using the `SideEffect` system, and the modifications are not actually performed on the object. For example, in
```python
class A:
def __init__(self):
self.value = 123
```
The `self.value` field will not be set on the instance of `A`; rather, it will only be tracked within the `SideEffect` system.
This causes a problem with `torch._dynamo.assume_constant_result`, as it converts the value in dynamo back into a normal Python value and invokes the function as normal Python. However, it currently does not find the `self.value` field (as it was never set on the underlying object). This PR checks whether the object passed to the `torch._dynamo.assume_constant_result` function has any pending mutations in the `SideEffect` system and applies them before calling the function.
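A minimal model of the fix: pending attribute stores that were only tracked on the side are replayed onto the real object before the constant-result function runs (the `pending`/`store_attr`/`call_constant` names are illustrative, not Dynamo's actual internals):

```python
class A:
    pass

pending = []  # (obj, name, value) attribute stores tracked but not applied

def store_attr(obj, name, value):
    pending.append((obj, name, value))  # tracked as a side effect only

def call_constant(obj, fn):
    # Replay pending mutations onto the real object before invoking fn.
    for target, name, value in pending:
        if target is obj:
            setattr(target, name, value)
    return fn(obj)

a = A()
store_attr(a, "value", 123)                   # like self.value = 123 under tracing
result = call_constant(a, lambda o: o.value)  # now finds o.value
```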
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,827,711,912 | Discrepancy in Outputs When Converting to ONNX using torch.autocast with float16 | AyoubMDL | closed | [
"module: onnx",
"triaged"
] | 9 | NONE | ### 🐛 Describe the bug
I converted a PyTorch model to ONNX inside a `torch.autocast(device_type="cpu", dtype=torch.float16)` context. When running inference on both the original and converted models, I observed that the difference between their outputs is on the order of **1e-2**.
I expected a smaller error, around 1e-5, similar to float32 precision, but I assume that in float16, the error might naturally be higher.
### Questions:
* Is this level of discrepancy expected for mixed-precision models, even though it is just a converted model (no added ops)?
* Are there any recommended ways to minimize the difference between PyTorch and ONNX Runtime outputs in float16 (I believe there will always be some error because of the optimization differences)?
#### Additional Notes:
Upgrading to the latest PyTorch version (which has FP16 support on CPU) reduced the error, but it remains in the 1e-2 range.
Would appreciate any insights on this. Thanks!
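For a back-of-the-envelope sense of scale: float16 has a 10-bit mantissa, so a single rounding step already contributes ~1e-3 relative error, and error can grow with the depth of the compute chain (the `ops = 20` depth below is a hypothetical, not taken from the model in question):

```python
fp16_eps = 2.0 ** -10        # float16 machine epsilon, about 9.77e-4
fp32_eps = 2.0 ** -23        # float32 machine epsilon, about 1.19e-7
ops = 20                     # hypothetical depth of the compute chain
worst_case = ops * fp16_eps  # linear accumulation bound, about 2e-2
```

By this rough bound, a 1e-2 discrepancy is within the expected range for fp16, whereas 1e-5 would only be plausible in fp32.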
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.31.3
Libc version: glibc-2.31
Python version: 3.11.11 (main, Dec 4 2024, 08:55:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-130-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 550.107.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 165
Model name: Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz
Stepping: 5
CPU MHz: 4187.537
CPU max MHz: 4800,0000
CPU min MHz: 800,0000
BogoMIPS: 5799.77
Virtualization: VT-x
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 2 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp pku ospke md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.16.1
[pip3] onnxconverter-common==1.16.0
[pip3] onnxmltools==1.13.0
[pip3] onnxruntime==1.19.0
[pip3] onnxruntime_extensions==0.13.0
[pip3] onnxscript==0.1.0.dev20250114
[pip3] pytorch-lightning==1.9.5
[pip3] torch==2.6.0
[pip3] torchao==0.8.0
[pip3] torchaudio==2.5.1
[pip3] torchmetrics==1.6.1
[pip3] torchtune==0.5.0
[pip3] torchvision==0.20.1
[pip3] triton==3.2.0
[conda] Could not collect | true |
2,827,677,864 | Enable TemporaryFileName tests on Windows | cyyever | closed | [
"triaged",
"open source",
"module: amp (automated mixed precision)",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5 | true |
2,827,630,887 | Fix type stubs for SymmetricMemory | lw | closed | [
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146310
| true |
2,827,621,097 | [ARM] - Bug in SVE Vectorization, test_vec_remainder fails with torchinductor compiler error | robert-hardwick | closed | [
"module: tests",
"triaged",
"module: vectorization",
"module: arm"
] | 2 | COLLABORATOR | ### 🐛 Describe the bug
We are seeing a bug on AArch64 Neoverse-V1
```
To execute this test, run the following from the base repo dir:
python test/inductor/test_cpu_repro.py CPUReproTests.test_vec_remainder
Error is:
‘blend’ is not a member of ‘at::vec::CPU_CAPABILITY::Vectorized<signed char>’
```
It looks to me like we added a `Vectorized<int##bit##_t>` `blendv` implementation for SVE in
https://github.com/pytorch/pytorch/pull/119571, but there isn't one for `blend`. Not my area of expertise, so wondering if @maajidkhann can advise?
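For reference, a scalar model of what the missing `blend<imm>` is expected to compute (lane i of the result comes from the second operand when bit i of the compile-time mask is set); this is an illustration of the semantics as I understand them, not the SVE intrinsics themselves:

```python
def blend(imm, a, b):
    # lane i comes from b when bit i of the immediate mask is set, else from a
    return [b[i] if (imm >> i) & 1 else a[i] for i in range(len(a))]
```

So `blend<255>(Vectorized(1), tmp1)` in the generated code above would replace all eight lanes of the constant-1 vector with `tmp1`.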
Full traceback
```
torch._inductor.exc.InductorError: CppCompileError: C++ compile error
Command:
g++ /tmp/tmpve7pi8c8/ba/cbapfyanhraerjaxtxvjfdtkxqjlwrrt6y6w67p6fq255prn7u3l.cpp -D TORCH_INDUCTOR_CPP_WRAPPER -D STANDALONE_TORCH_HEADER -D C10_USING_CUSTOM_GENERATED_MACROS -D CPU_CAPABILITY_SVE -D CPU_CAPABILITY_SVE256 -D AT_BUILD_ARM_VEC256_WITH_SLEEF -shared -fPIC -O3 -DNDEBUG -fno-trapping-math -funsafe-math-optimizations -ffinite-math-only -fno-signed-zeros -fno-math-errno -fexcess-precision=fast -fno-finite-math-only -fno-unsafe-math-optimizations -ffp-contract=off -fno-tree-loop-vectorize -march=native -Wall -std=c++17 -Wno-unused-variable -Wno-unknown-pragmas -fopenmp -I/opt/conda/envs/py_3.10/include/python3.10 -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -march=armv8-a+sve -msve-vector-bits=256 -D_GLIBCXX_USE_CXX11_ABI=1 -ltorch -ltorch_cpu -ltorch_python -lgomp -L/opt/conda/envs/py_3.10/lib -L/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/lib -o /tmp/tmpve7pi8c8/ba/cbapfyanhraerjaxtxvjfdtkxqjlwrrt6y6w67p6fq255prn7u3l.so
Output:
/tmp/tmpve7pi8c8/ba/cbapfyanhraerjaxtxvjfdtkxqjlwrrt6y6w67p6fq255prn7u3l.cpp: In function ‘void kernel(const int8_t*, const int8_t*, int8_t*)’:
/tmp/tmpve7pi8c8/ba/cbapfyanhraerjaxtxvjfdtkxqjlwrrt6y6w67p6fq255prn7u3l.cpp:19:91: error: ‘blend’ is not a member of ‘at::vec::CPU_CAPABILITY::Vectorized<signed char>’
19 | auto tmp2 = tmp0 - (decltype(tmp0)::blendv(tmp0 / decltype(tmp0)::blend<255>(decltype(tmp0)(1), tmp1), tmp0 / decltype(tmp0)::blend<255>(decltype(tmp0)(1), tmp1) - decltype(tmp0)(1), (tmp0 % decltype(tmp0)::blend<255>(decltype(tmp0)(1), tmp1) != decltype(tmp0)(0)) & ((tmp0 < decltype(tmp0)(0)) != (decltype(tmp0)::blend<255>(decltype(tmp0)(1), tmp1) < decltype(tmp0)(0))))) * tmp1;
| ^~~~~
/tmp/tmpve7pi8c8/ba/cbapfyanhraerjaxtxvjfdtkxqjlwrrt6y6w67p6fq255prn7u3l.cpp:19:151: error: ‘blend’ is not a member of ‘at::vec::CPU_CAPABILITY::Vectorized<signed char>’
19 | auto tmp2 = tmp0 - (decltype(tmp0)::blendv(tmp0 / decltype(tmp0)::blend<255>(decltype(tmp0)(1), tmp1), tmp0 / decltype(tmp0)::blend<255>(decltype(tmp0)(1), tmp1) - decltype(tmp0)(1), (tmp0 % decltype(tmp0)::blend<255>(decltype(tmp0)(1), tmp1) != decltype(tmp0)(0)) & ((tmp0 < decltype(tmp0)(0)) != (decltype(tmp0)::blend<255>(decltype(tmp0)(1), tmp1) < decltype(tmp0)(0))))) * tmp1;
| ^~~~~
/tmp/tmpve7pi8c8/ba/cbapfyanhraerjaxtxvjfdtkxqjlwrrt6y6w67p6fq255prn7u3l.cpp:19:232: error: ‘blend’ is not a member of ‘at::vec::CPU_CAPABILITY::Vectorized<signed char>’
19 | auto tmp2 = tmp0 - (decltype(tmp0)::blendv(tmp0 / decltype(tmp0)::blend<255>(decltype(tmp0)(1), tmp1), tmp0 / decltype(tmp0)::blend<255>(decltype(tmp0)(1), tmp1) - decltype(tmp0)(1), (tmp0 % decltype(tmp0)::blend<255>(decltype(tmp0)(1), tmp1) != decltype(tmp0)(0)) & ((tmp0 < decltype(tmp0)(0)) != (decltype(tmp0)::blend<255>(decltype(tmp0)(1), tmp1) < decltype(tmp0)(0))))) * tmp1;
| ^~~~~
/tmp/tmpve7pi8c8/ba/cbapfyanhraerjaxtxvjfdtkxqjlwrrt6y6w67p6fq255prn7u3l.cpp:19:340: error: ‘blend’ is not a member of ‘at::vec::CPU_CAPABILITY::Vectorized<signed char>’
19 | auto tmp2 = tmp0 - (decltype(tmp0)::blendv(tmp0 / decltype(tmp0)::blend<255>(decltype(tmp0)(1), tmp1), tmp0 / decltype(tmp0)::blend<255>(decltype(tmp0)(1), tmp1) - decltype(tmp0)(1), (tmp0 % decltype(tmp0)::blend<255>(decltype(tmp0)(1), tmp1) != decltype(tmp0)(0)) & ((tmp0 < decltype(tmp0)(0)) != (decltype(tmp0)::blend<255>(decltype(tmp0)(1), tmp1) < decltype(tmp0)(0))))) * tmp1;
| ^~~~~
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
python test/inductor/test_cpu_repro.py CPUReproTests.test_vec_remainder
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
### Versions
PyTorch version: 2.7.0a0+git8feb7c9
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:08:42) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: ARM
Model: 1
Thread(s) per core: 1
Core(s) per cluster: 48
Socket(s): -
Cluster(s): 1
Stepping: r1p1
BogoMIPS: 2100.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
L1d cache: 3 MiB (48 instances)
L1i cache: 3 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.7.0a0+git8feb7c9
[conda] No relevant packages
cc @mruberry @ZainRizvi @malfet @snadampal @milpuz01 | true |
2,827,538,187 | Support SymmetricMemory's signaling kernels on sm60 and sm70 | lw | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146308
By leveraging libcudacxx's utilities: https://nvidia.github.io/cccl/libcudacxx/extended_api/synchronization_primitives/atomic_ref.html
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,827,419,944 | Intermittent false positives in lintrunner's mypy | rec | open | [
"module: lint",
"triaged",
"module: flaky-tests"
] | 3 | COLLABORATOR | ### 🐛 Describe the bug
### Summary
Not infrequently, lintrunner's MYPY linter seems to locally get into a bad state where it repeatedly reports false positives for code where the CI correctly reports none. This has been an intermittent nuisance for me for months, and apparently a lot of Quansight developers have seen it fairly regularly too.
It's not a huge deal because I just run `mypy` by hand on the files in question, but it slows down the workflow and has led me to miss real errors.
I read https://github.com/pytorch/pytorch/issues/144577 but this seems more general.
### Notes
* I run `lintrunner init` every time I run `lintrunner`.
* Generally, there are no staged or unstaged changes in the git repo at all - I'm linting exactly the most recent commit.
* By the end, there are no other linter errors - I can clean up all other errors, and only the false positives will remain.
* The github CI does not show these issues when I push
* Usually the error is in files I have not changed.
* it feels like the errors are often "Name is not defined".
* It's only one or two files, in a specific area: it feels like the checker goes bad for "one scope"
* The errors do report the correct contents and line numbers of the file: it's not that `lintrunner` is running on some other file.
* I do see the false positives if I just run `lintrunner` on the individual file: `lintrunner --take MYPY torch/_inductor/select_algorithm.py`
* ...but I see no errors if I run mypy directly: `mypy torch/_inductor/select_algorithm.py`
* I have seen a few times where it reported mypy errors _in C++ files!_ and apparently others at Quansight have seen it too.
* I'm almost always rebased against `viable/strict`.
* Sometimes a new rebasing against `viable/strict` fixes the issue, but then sometimes rebasing against `viable/strict` seems to cause the issue.
* Removing `.mypy_cache` changes nothing.
* Removing the cache slows down `mypy` a great deal the next time I run it, but not `lintrunner`, so clearly the lintrunner cache is elsewhere; I haven't found it yet.
### Reproducing it?
As I started to prepare this issue report, in one repo in [these ten lines](https://github.com/pytorch/pytorch/blob/550441a87b4b7f3493f23a3a1e05678cd65ceefe/torch/_inductor/select_algorithm.py#L757-L768) the variables `other`, `output_name` and `template_mask` were reported as "Name is not defined" (when they are defined a page or so above).
Unfortunately, in the course of my other work, that repo seems to be better now, so I don't have a way to reproduce this.
In my experience, I'm fairly certain to have this happen during any given week.
If I had some hints as to how to diagnose this issue, I'd definitely report back with better details when it happened.
----
I'm a big fan of code quality tools, which are generally things people barely notice except as a vague annoyance. 😀 Thanks for this great system!
cc @clee2000 @wdvr | true |
2,827,398,463 | torch.compile failed on single-node multigpu setting. | Felix-Zhenghao | open | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 1 | NONE | ### 🐛 Describe the bug
I ran this code on an 8-GPU (A100) node:
```python
vla = ActionOnlyVLA(cfg.vla, vlm).to(torch.bfloat16).to(device_id)
vla = torch.compile(
model=vla,
mode="reduce-overhead",
)
```
I ran an ablation and I am sure that it is a torch.compile error.
### Error logs
Same error appears on all the local_rank:
```
[rank2]: Traceback (most recent call last):
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/cpp_builder.py", line 331, in _run_compile_cmd
[rank2]: status = subprocess.check_output(args=cmd, cwd=cwd, stderr=subprocess.STDOUT)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/subprocess.py", line 466, in check_output
[rank2]: return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/subprocess.py", line 571, in run
[rank2]: raise CalledProcessError(retcode, process.args,
[rank2]: subprocess.CalledProcessError: Command '['g++', '/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp', '-D', 'TORCH_INDUCTOR_CPP_WRAPPER', '-D', 'C10_USING_CUSTOM_GENERATED_MACROS', '-D', 'CPU_CAPABILITY_AVX512', '-shared', '-fPIC', '-O3', '-DNDEBUG', '-ffast-math', '-fno-finite-math-only', '-fno-unsafe-math-optimizations', '-ffp-contract=off', '-march=native', '-Wall', '-std=c++17', '-Wno-unused-variable', '-Wno-unknown-pragmas', '-fopenmp', '-I/data/czh/miniconda3/include/python3.12', '-I/data/czh/miniconda3/include/python3.12', '-I/data/czh/miniconda3/lib/python3.12/site-packages/torch/include', '-I/data/czh/miniconda3/lib/python3.12/site-packages/torch/include/torch/csrc/api/include', '-I/data/czh/miniconda3/lib/python3.12/site-packages/torch/include/TH', '-I/data/czh/miniconda3/lib/python3.12/site-packages/torch/include/THC', '-I/data/czh/miniconda3/lib/python3.12/site-packages/torch/include', '-I/data/czh/miniconda3/lib/python3.12/site-packages/torch/include/torch/csrc/api/include', '-I/data/czh/miniconda3/lib/python3.12/site-packages/torch/include/TH', '-I/data/czh/miniconda3/lib/python3.12/site-packages/torch/include/THC', '-mavx512f', '-mavx512dq', '-mavx512vl', '-mavx512bw', '-mfma', '-D_GLIBCXX_USE_CXX11_ABI=0', '-ltorch', '-ltorch_cpu', '-ltorch_python', '-lc10', '-lgomp', '-L/data/czh/miniconda3/lib', '-L/data/czh/miniconda3/lib/python3.12/site-packages/torch/lib', '-L/data/czh/miniconda3/lib/python3.12/site-packages/torch/lib', '-L/data/czh/miniconda3/lib/python3.12/site-packages/torch/lib', '-o', '/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.so']' returned non-zero exit status 1.
[rank2]: The above exception was the direct cause of the following exception:
[rank2]: Traceback (most recent call last):
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1446, in _call_user_compiler
[rank2]: compiled_fn = compiler_fn(gm, self.example_inputs())
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_dynamo/repro/after_dynamo.py", line 129, in __call__
[rank2]: compiled_gm = compiler_fn(gm, example_inputs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/__init__.py", line 2234, in __call__
[rank2]: return compile_fx(model_, inputs_, config_patches=self.config)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1253, in compile_fx
[rank2]: return compile_fx(
[rank2]: ^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1521, in compile_fx
[rank2]: return aot_autograd(
[rank2]: ^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_dynamo/backends/common.py", line 72, in __call__
[rank2]: cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1071, in aot_module_simplified
[rank2]: compiled_fn = dispatch_and_compile()
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1056, in dispatch_and_compile
[rank2]: compiled_fn, _ = create_aot_dispatcher_function(
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 522, in create_aot_dispatcher_function
[rank2]: return _create_aot_dispatcher_function(
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 759, in _create_aot_dispatcher_function
[rank2]: compiled_fn, fw_metadata = compiler_fn(
[rank2]: ^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 588, in aot_dispatch_autograd
[rank2]: compiled_fw_func = aot_config.fw_compiler(fw_module, adjusted_flat_args)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1350, in fw_compiler_base
[rank2]: return _fw_compiler_base(model, example_inputs, is_inference)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1421, in _fw_compiler_base
[rank2]: return inner_compile(
[rank2]: ^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/contextlib.py", line 81, in inner
[rank2]: return func(*args, **kwds)
[rank2]: ^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 475, in compile_fx_inner
[rank2]: return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_dynamo/repro/after_aot.py", line 85, in debug_wrapper
[rank2]: inner_compiled_fn = compiler_fn(gm, example_inputs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 661, in _compile_fx_inner
[rank2]: compiled_graph = FxGraphCache.load(
[rank2]: ^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 1370, in load
[rank2]: compiled_graph = compile_fx_fn(
[rank2]: ^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 570, in codegen_and_compile
[rank2]: compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 878, in fx_codegen_and_compile
[rank2]: compiled_fn = graph.compile_to_fn()
[rank2]: ^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/graph.py", line 1913, in compile_to_fn
[rank2]: return self.compile_to_module().call
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/graph.py", line 1839, in compile_to_module
[rank2]: return self._compile_to_module()
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/graph.py", line 1867, in _compile_to_module
[rank2]: mod = PyCodeCache.load_by_key_path(
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 2876, in load_by_key_path
[rank2]: mod = _reload_python_module(key, path)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 45, in _reload_python_module
[rank2]: exec(code, mod.__dict__, mod.__dict__)
[rank2]: File "/tmp/torchinductor_czh/37/c37woc3u2aghep6ncg2dytsghqh6ifwot4p3ekcbiyuo4cwvcqjx.py", line 7124, in <module>
[rank2]: async_compile.wait(globals())
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 276, in wait
[rank2]: scope[key] = result.result()
[rank2]: ^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3353, in result
[rank2]: return self.result_fn()
[rank2]: ^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 2377, in future
[rank2]: result = get_result()
[rank2]: ^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 2177, in load_fn
[rank2]: future.result()
[rank2]: File "/data/czh/miniconda3/lib/python3.12/concurrent/futures/_base.py", line 456, in result
[rank2]: return self.__get_result()
[rank2]: ^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
[rank2]: raise self._exception
[rank2]: File "/data/czh/miniconda3/lib/python3.12/concurrent/futures/thread.py", line 58, in run
[rank2]: result = self.fn(*self.args, **self.kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 2218, in _worker_compile_cpp
[rank2]: cpp_builder.build()
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/cpp_builder.py", line 1508, in build
[rank2]: status = run_compile_cmd(build_cmd, cwd=_build_tmp_dir)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/cpp_builder.py", line 352, in run_compile_cmd
[rank2]: return _run_compile_cmd(cmd_line, cwd)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/data/czh/miniconda3/lib/python3.12/site-packages/torch/_inductor/cpp_builder.py", line 346, in _run_compile_cmd
[rank2]: raise exc.CppCompileError(cmd, output) from e
[rank2]: torch._inductor.exc.CppCompileError: C++ compile error
Command:
g++ /tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp -D TORCH_INDUCTOR_CPP_WRAPPER -D C10_USING_CUSTOM_GENERATED_MACROS -D CPU_CAPABILITY_AVX512 -shared -fPIC -O3 -DNDEBUG -ffast-math -fno-finite-math-only -fno-unsafe-math-optimizations -ffp-contract=off -march=native -Wall -std=c++17 -Wno-unused-variable -Wno-unknown-pragmas -fopenmp -I/data/czh/miniconda3/include/python3.12 -I/data/czh/miniconda3/include/python3.12 -I/data/czh/miniconda3/lib/python3.12/site-packages/torch/include -I/data/czh/miniconda3/lib/python3.12/site-packages/torch/include/torch/csrc/api/include -I/data/czh/miniconda3/lib/python3.12/site-packages/torch/include/TH -I/data/czh/miniconda3/lib/python3.12/site-packages/torch/include/THC -I/data/czh/miniconda3/lib/python3.12/site-packages/torch/include -I/data/czh/miniconda3/lib/python3.12/site-packages/torch/include/torch/csrc/api/include -I/data/czh/miniconda3/lib/python3.12/site-packages/torch/include/TH -I/data/czh/miniconda3/lib/python3.12/site-packages/torch/include/THC -mavx512f -mavx512dq -mavx512vl -mavx512bw -mfma -D_GLIBCXX_USE_CXX11_ABI=0 -ltorch -ltorch_cpu -ltorch_python -lc10 -lgomp -L/data/czh/miniconda3/lib -L/data/czh/miniconda3/lib/python3.12/site-packages/torch/lib -L/data/czh/miniconda3/lib/python3.12/site-packages/torch/lib -L/data/czh/miniconda3/lib/python3.12/site-packages/torch/lib -o /tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.so
Output:
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp: In lambda function:
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:48:57: error: ‘struct at::vec::CPU_CAPABILITY::Vectorized<bool>’ has no member named ‘cast’
48 | auto tmp23 = tmp20.template cast<float,1>();
| ^~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:48:62: error: expected primary-expression before ‘float’
48 | auto tmp23 = tmp20.template cast<float,1>();
| ^~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:50:36: error: decltype evaluates to ‘<type error>’, which is not a class or enumeration type
50 | return decltype(tmp23)::blendv(tmp24, tmp23, tmp22);
| ^~~~~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp: In lambda function:
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:79:61: error: ‘struct at::vec::CPU_CAPABILITY::Vectorized<bool>’ has no member named ‘cast’
79 | auto tmp36 = tmp33.template cast<float,1>();
| ^~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:79:66: error: expected primary-expression before ‘float’
79 | auto tmp36 = tmp33.template cast<float,1>();
| ^~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:81:40: error: decltype evaluates to ‘<type error>’, which is not a class or enumeration type
81 | return decltype(tmp36)::blendv(tmp37, tmp36, tmp35);
| ^~~~~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp: In lambda function:
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:110:65: error: ‘struct at::vec::CPU_CAPABILITY::Vectorized<bool>’ has no member named ‘cast’
110 | auto tmp49 = tmp46.template cast<float,1>();
| ^~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:110:70: error: expected primary-expression before ‘float’
110 | auto tmp49 = tmp46.template cast<float,1>();
| ^~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:112:44: error: decltype evaluates to ‘<type error>’, which is not a class or enumeration type
112 | return decltype(tmp49)::blendv(tmp50, tmp49, tmp48);
| ^~~~~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp: In lambda function:
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:143:69: error: ‘struct at::vec::CPU_CAPABILITY::Vectorized<bool>’ has no member named ‘cast’
143 | auto tmp64 = tmp61.template cast<float,1>();
| ^~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:143:74: error: expected primary-expression before ‘float’
143 | auto tmp64 = tmp61.template cast<float,1>();
| ^~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:145:48: error: decltype evaluates to ‘<type error>’, which is not a class or enumeration type
145 | return decltype(tmp64)::blendv(tmp65, tmp64, tmp63);
| ^~~~~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp: In lambda function:
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:200:65: error: ‘struct at::vec::CPU_CAPABILITY::Vectorized<bool>’ has no member named ‘cast’
200 | auto tmp97 = tmp94.template cast<float,1>();
| ^~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:200:70: error: expected primary-expression before ‘float’
200 | auto tmp97 = tmp94.template cast<float,1>();
| ^~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:202:44: error: decltype evaluates to ‘<type error>’, which is not a class or enumeration type
202 | return decltype(tmp97)::blendv(tmp98, tmp97, tmp96);
| ^~~~~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp: In lambda function:
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:260:63: error: ‘struct at::vec::CPU_CAPABILITY::Vectorized<bool>’ has no member named ‘cast’
260 | auto tmp133 = tmp130.template cast<float,1>();
| ^~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:260:68: error: expected primary-expression before ‘float’
260 | auto tmp133 = tmp130.template cast<float,1>();
| ^~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:262:40: error: decltype evaluates to ‘<type error>’, which is not a class or enumeration type
262 | return decltype(tmp133)::blendv(tmp134, tmp133, tmp132);
| ^~~~~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp: In lambda function:
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:293:67: error: ‘struct at::vec::CPU_CAPABILITY::Vectorized<bool>’ has no member named ‘cast’
293 | auto tmp148 = tmp145.template cast<float,1>();
| ^~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:293:72: error: expected primary-expression before ‘float’
293 | auto tmp148 = tmp145.template cast<float,1>();
| ^~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:295:44: error: decltype evaluates to ‘<type error>’, which is not a class or enumeration type
295 | return decltype(tmp148)::blendv(tmp149, tmp148, tmp147);
| ^~~~~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp: In lambda function:
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:350:63: error: ‘struct at::vec::CPU_CAPABILITY::Vectorized<bool>’ has no member named ‘cast’
350 | auto tmp181 = tmp178.template cast<float,1>();
| ^~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:350:68: error: expected primary-expression before ‘float’
350 | auto tmp181 = tmp178.template cast<float,1>();
| ^~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:352:40: error: decltype evaluates to ‘<type error>’, which is not a class or enumeration type
352 | return decltype(tmp181)::blendv(tmp182, tmp181, tmp180);
| ^~~~~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp: In lambda function:
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:416:59: error: ‘struct at::vec::CPU_CAPABILITY::Vectorized<bool>’ has no member named ‘cast’
416 | auto tmp223 = tmp220.template cast<float,1>();
| ^~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:416:64: error: expected primary-expression before ‘float’
416 | auto tmp223 = tmp220.template cast<float,1>();
| ^~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:418:36: error: decltype evaluates to ‘<type error>’, which is not a class or enumeration type
418 | return decltype(tmp223)::blendv(tmp224, tmp223, tmp222);
| ^~~~~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp: In lambda function:
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:447:63: error: ‘struct at::vec::CPU_CAPABILITY::Vectorized<bool>’ has no member named ‘cast’
447 | auto tmp236 = tmp233.template cast<float,1>();
| ^~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:447:68: error: expected primary-expression before ‘float’
447 | auto tmp236 = tmp233.template cast<float,1>();
| ^~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:449:40: error: decltype evaluates to ‘<type error>’, which is not a class or enumeration type
449 | return decltype(tmp236)::blendv(tmp237, tmp236, tmp235);
| ^~~~~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp: In lambda function:
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:480:67: error: ‘struct at::vec::CPU_CAPABILITY::Vectorized<bool>’ has no member named ‘cast’
480 | auto tmp251 = tmp248.template cast<float,1>();
| ^~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:480:72: error: expected primary-expression before ‘float’
480 | auto tmp251 = tmp248.template cast<float,1>();
| ^~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:482:44: error: decltype evaluates to ‘<type error>’, which is not a class or enumeration type
482 | return decltype(tmp251)::blendv(tmp252, tmp251, tmp250);
| ^~~~~~~~
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp: In lambda function:
/tmp/torchinductor_czh/mc/cmcjfccazmmoijulhnbndj3kmvsjj5zlyzckdrfhfcof5d4pjich.cpp:537:63: error: ‘struct at::vec::CPU_CAPABILITY::Vectorized<bool>’ has no member named ‘cast’
537 | auto tmp284 = tmp281.template cast<float,1>();
| ^~~~
```
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:12:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-70-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
GPU 2: NVIDIA A100-PCIE-40GB
GPU 3: NVIDIA A100-PCIE-40GB
GPU 4: NVIDIA A100-PCIE-40GB
GPU 5: NVIDIA A100-PCIE-40GB
GPU 6: NVIDIA A100-PCIE-40GB
GPU 7: NVIDIA A100-PCIE-40GB
Nvidia driver version: 535.54.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
Stepping: 7
CPU MHz: 3600.104
CPU max MHz: 4000.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 71.5 MiB
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.14.0
[pip3] rotary-embedding-torch==0.8.6
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchdiffeq==0.2.5
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.14.0 pypi_0 pypi
[conda] rotary-embedding-torch 0.8.6 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchdiffeq 0.2.5 pypi_0 pypi
[conda] torchmetrics 1.6.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @aakhundov | true |
2,827,393,460 | Error message doesn't fully cover all situations | ILCSFNO | closed | [] | 1 | CONTRIBUTOR | ### 🐛 Describe the bug
The doc of [`torch.searchsorted()`](https://pytorch.org/docs/stable/generated/torch.searchsorted.html#torch-searchsorted) shows its `Keyword Arguments` as below:
> ### Keyword Arguments
> * ...
> * right ([bool](https://docs.python.org/3/library/functions.html#bool), optional) – if False, return the first suitable location that is found. If True, return the last such index. If no suitable index found, return 0 for non-numerical value (eg. nan, inf) or the size of innermost dimension within sorted_sequence (one pass the last index of the innermost dimension). In other words, if False, gets the lower bound index for each value in values on the corresponding innermost dimension of the sorted_sequence. If True, gets the upper bound index instead. Default value is False. side does the same and is preferred. It will error if side is set to “left” while this is True.
> * side ([str](https://docs.python.org/3/library/stdtypes.html#str), optional) – the same as right but preferred. “left” corresponds to False for right and “right” corresponds to True for right. It will error if this is set to “left” while right is True. Default value is None.
> * ...
It is noted that the function should error if `side` is set to `“left”` while `right` is `True`, but what about the reverse inconsistency, where `side` is set to `“right”` while `right` is `False`?
It turns out that this runs without raising any error, which is unexpected behavior.
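As an aside, the lower/upper-bound semantics being discussed have an exact stdlib counterpart in Python's `bisect` module, which may help readers check what `side='left'` versus `side='right'` should mean (this analogy is mine, not from the PyTorch docs):

```python
import bisect

seq = list(range(10))  # analogous to one row of sorted_sequence

# bisect_left ~ searchsorted(..., side='left') / right=False:
# the first suitable insertion point (lower bound).
assert bisect.bisect_left(seq, 5) == 5

# bisect_right ~ searchsorted(..., side='right') / right=True:
# one past the last equal element (upper bound).
assert bisect.bisect_right(seq, 5) == 6
```

By that analogy, `side='right', right=False` is exactly as contradictory as `side='left', right=True`, so erroring in both cases would be the consistent behavior.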
### Minified Repro
```python
import torch
sorted_sequence = torch.stack([torch.arange(10) for i in range(3)])
values = torch.randint(10, (3, 3))
# result = torch.searchsorted(sorted_sequence, values, side='left', right=True) # Running Error and Expected Error
result = torch.searchsorted(sorted_sequence, values, side='right', right=False) # Running Well but Expected Error!
```
### Output
_No response_
### Versions
pytorch==2.5.0
torchvision==0.20.0
torchaudio==2.5.0
pytorch-cuda=12.1 | true |
2,827,082,544 | DeviceMesh API to merge multiple sub-meshes | kmehant | open | [
"oncall: distributed",
"module: DeviceMesh"
] | 1 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
It would be helpful if there were a way to merge sub-meshes into a single DeviceMesh. On a related note, thank you for the slicing/sub-mesh extraction support in DeviceMesh.
### Alternatives
Currently, the only option I see is using the `init_device_mesh()` API to recreate a DeviceMesh from the existing sub-meshes. Please let me know if I have missed something.
### Additional context
I will be happy to raise a PR to add this feature given some direction on this.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,826,650,500 | Update slow tests | pytorchupdatebot | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/slow",
"ci-no-td"
] | 3 | COLLABORATOR | This PR is auto-generated weekly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/weekly.yml).
Update the list of slow tests. | true |
2,826,586,583 | Behaviors of Transforms.v2.Resize and Transforms.Resize Are Different | donglihe-hub | closed | [] | 0 | NONE | ### 🐛 Describe the bug
The expected output is a (1, 224, 224) tensor. When the input is a numpy.ndarray, transforms.Resize raises
```
TypeError: Unexpected type <class 'numpy.ndarray'>
```
transforms.v2.Resize, on the other hand, runs silently without resizing the input:
```
v2 passed. torch.Size([1, 1023, 587]) <class 'torch.Tensor'>
```
Attached is the test code:
```python
import numpy as np
from torchvision.transforms import v2
from torchvision import transforms
if __name__ == "__main__":
t = np.ones((1023, 587))
transform_v2 = v2.Compose(
[
v2.Resize((224, 224)),
v2.ToTensor(),
]
)
print(f"v2 passed. {transform_v2(t).shape} {type(transform_v2(t))}")
transform_v1 = transforms.Compose(
[
transforms.Resize((224, 224)),
transforms.ToTensor(),
]
)
print(transform_v1(t).shape)
```
IMO, shouldn't v2 behave the same as v1? If their behaviors are not expected to match, shouldn't v2 at least tell users that the resize didn't take effect?
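In the meantime, a common workaround (my sketch, using only core torch rather than the transforms API) is to convert the ndarray to an NCHW float tensor before resizing, so the input is a type that gets handled consistently:

```python
import numpy as np
import torch
import torch.nn.functional as F

# Sketch of a workaround (assumes the goal is a (1, 224, 224) tensor):
# convert the ndarray to an (N, C, H, W) float tensor, then resize it.
arr = np.ones((1023, 587), dtype=np.float32)
img = torch.from_numpy(arr).unsqueeze(0).unsqueeze(0)  # (1, 1, H, W)

out = F.interpolate(img, size=(224, 224), mode="bilinear").squeeze(0)
print(out.shape)  # torch.Size([1, 224, 224])
```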
### Versions
torchvision 0.21.0 | true |
2,826,573,231 | Possible gradient miscalculation with repeated indices in Autograd Functions `PutBackward0`, `IndexPutBackward0` and `ScatterBackward0` | mntsx | closed | [
"module: autograd",
"triaged",
"module: advanced indexing"
] | 5 | NONE | ### 🐛 Describe the bug
I think this is an incorrect behaviour for `PutBackward0`, `IndexPutBackward0` and `ScatterBackward0`:
```python
>>> import torch
>>> T = torch.zeros(size=(1,), requires_grad=True)
>>> idx = torch.tensor([0,0], dtype=torch.long)
>>> src = torch.tensor([1.0, 2.0], requires_grad=True)
>>> O = torch.put(input=T, index=idx, source=src)
>>> O = O.sum()
>>> O.backward()
>>> print(O)
tensor(2., grad_fn=<SumBackward0>)
>>> print(src.grad)
tensor([1., 1.])
```
```python
>>> import torch
>>> T = torch.zeros(size=(1,), requires_grad=True)
>>> idx = torch.tensor([0,0], dtype=torch.long)
>>> src = torch.tensor([1.0, 2.0], requires_grad=True)
>>> C = T.clone()
>>> C[idx] = src
>>> O = C.sum()
>>> O.backward()
>>> print(O)
tensor(2., grad_fn=<SumBackward0>)
>>> print(src.grad)
tensor([1., 1.])
```
```python
>>> import torch
>>> T = torch.zeros(size=(1,), requires_grad=True)
>>> idx = torch.tensor([0, 0], dtype=torch.long)
>>> src = torch.tensor([1.0, 2.0], requires_grad=True)
>>> O = torch.scatter(input=T, dim=0, index=idx, src=src)
>>> O = O.sum()
>>> O.backward()
>>> print(O)
tensor(2., grad_fn=<SumBackward0>)
>>> print(src.grad)
tensor([1., 1.])
```
If elements inserted at the same index overwrite each other (and the output being `2.` instead of `3.` shows that they do overwrite each other), then `src.grad[0]` should be `0.`, not `1.`.
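For contrast, here is a sketch (mine, not from the original report) using `index_put_` with `accumulate=True`: there the forward sums both writes instead of overwriting, so a gradient of all ones actually is consistent with the forward semantics:

```python
import torch

# With accumulate=True both writes to index 0 survive (they are summed), so
# the all-ones gradient for src matches what the forward actually computed.
T = torch.zeros(1, requires_grad=True)
idx = torch.tensor([0, 0], dtype=torch.long)
src = torch.tensor([1.0, 2.0], requires_grad=True)

C = T.clone().index_put_((idx,), src, accumulate=True)
out = C.sum()
out.backward()

print(out.item())  # 3.0 -- both source values contribute
print(src.grad)    # tensor([1., 1.]) -- consistent with a sum, not an overwrite
```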
### Versions
```bash
python collect_env.py
--2025-02-03 00:54:07-- https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.108.133, 185.199.111.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 24353 (24K) [text/plain]
Saving to: ‘collect_env.py.1’
collect_env.py.1 100%[=================================================>] 23.78K --.-KB/s in 0.005s
2025-02-03 00:54:07 (5.15 MB/s) - ‘collect_env.py.1’ saved [24353/24353]
/home/miguelmnts/.pyenv/versions/3.12.6/lib/python3.12/site-packages/torch/_subclasses/functional_tensor.py:295: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:84.)
cpu = _conversion_method_template(device=torch.device("cpu"))
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.6 (main, Sep 25 2024, 17:04:58) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 5 7520U with Radeon Graphics
CPU family: 23
Model: 160
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 0
BogoMIPS: 5588.57
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload umip rdpid
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 2 MiB (4 instances)
L3 cache: 4 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torch_nd_conv==0.1.0
[pip3] triton==3.1.0
[conda] Could not collect
```
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan | true |
2,826,530,241 | Log partial fx graph when guarding on a data-dependent expression during non-strict tracing | bobrenjc93 | closed | [
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146314
* #146296
* __->__ #146298
As discussed with @avikchaudhuri and @bdhirsh last week, this can be quite useful when debugging.
The following code produces a data dependent error
```
import torch
from torch import nn
# UserError: Could not guard on data-dependent expression Eq(507 - u0, 0) (unhinted: Eq(507 - u0, 0)). (Size-like symbols: u0)
class Repro(nn.Module):
def __init__(self):
super().__init__()
def forward(self, cache, update, pos):
_, _, max_seq_len, _ = cache.shape
_, _, seqlen, _ = update.shape
pos_item = pos[0].item() # u0
torch._check(pos_item + seqlen <= max_seq_len) # u0 + 502 <= 507
torch._check(pos_item >= 0)
before = cache.narrow(2, 0, pos_item)
# FAIL
# Laith: why can't we make unbacked expressions size-like?
after = cache.narrow(2, (pos_item + seqlen), (max_seq_len - pos_item - seqlen))
# PASS
end = torch.tensor(max_seq_len - pos_item - seqlen).item()
after = cache.narrow(2, (pos_item + seqlen), end)
return torch.cat([before, update, after], dim=2)
repro = Repro()
bsz = 1
n_heads = 4
max_seq_len = 512
head_dim = 64
seqlen = 5
pos_item = 1
cache = torch.zeros(bsz, n_heads, max_seq_len, head_dim)
update = torch.ones(bsz, n_heads, seqlen, head_dim)
pos = torch.tensor([pos_item])
example_inputs = (cache, update, pos)
torch.export.export(repro, example_inputs, strict=False)
```
This is what it now prints out
```
class GraphModule(torch.nn.Module):
def forward(self, arg0_1: "f32[1, 4, 512, 64][131072, 32768, 64, 1]cpu", arg1_1: "f32[1, 4, 5, 64][1280, 320, 64, 1]cpu", arg2_1: "i64[1][1]cpu"):
# File: /data/users/bobren/a/pytorch/r1.py:14 in forward, code: pos_item = pos[0].item() # u0
select: "i64[][]cpu" = torch.ops.aten.select.int(arg2_1, 0, 0); arg2_1 = None
item: "Sym(u0)" = torch.ops.aten.item.default(select); select = None
# File: /data/users/bobren/a/pytorch/r1.py:15 in forward, code: torch._check(pos_item + seqlen <= max_seq_len) # u0 + 502 <= 507
add: "Sym(u0 + 5)" = item + 5
le: "Sym(u0 + 5 <= 512)" = add <= 512; add = le = None
# File: /data/users/bobren/a/pytorch/r1.py:16 in forward, code: torch._check(pos_item >= 0)
ge: "Sym(u0 >= 0)" = item >= 0; ge = None
# File: /data/users/bobren/a/pytorch/r1.py:17 in forward, code: before = cache.narrow(2, 0, pos_item)
narrow: "f32[1, 4, u0, 64][131072, 32768, 64, 1]cpu" = torch.ops.aten.narrow.default(arg0_1, 2, 0, item); narrow = None
# File: /data/users/bobren/a/pytorch/r1.py:21 in forward, code: after = cache.narrow(2, (pos_item + seqlen), (max_seq_len - pos_item - seqlen))
add_1: "Sym(u0 + 5)" = item + 5
sub: "Sym(512 - u0)" = 512 - item; item = None
sub_1: "Sym(507 - u0)" = sub - 5; sub = None
narrow_1 = torch.ops.aten.narrow.default(arg0_1, 2, add_1, sub_1); arg0_1 = add_1 = sub_1 = narrow_1 = None
Traceback (most recent call last):
File "/data/users/bobren/a/pytorch/r1.py", line 45, in <module>
torch.export.export(repro, example_inputs, strict=False)
File "/data/users/bobren/a/pytorch/torch/export/__init__.py", line 368, in export
return _export(
File "/data/users/bobren/a/pytorch/torch/export/_trace.py", line 1044, in wrapper
raise e
File "/data/users/bobren/a/pytorch/torch/export/_trace.py", line 1017, in wrapper
ep = fn(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/export/exported_program.py", line 117, in wrapper
return fn(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/export/_trace.py", line 2079, in _export
return _export_for_training(
File "/data/users/bobren/a/pytorch/torch/export/_trace.py", line 1044, in wrapper
raise e
File "/data/users/bobren/a/pytorch/torch/export/_trace.py", line 1017, in wrapper
ep = fn(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/export/exported_program.py", line 117, in wrapper
return fn(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/export/_trace.py", line 1944, in _export_for_training
export_artifact = export_func( # type: ignore[operator]
File "/data/users/bobren/a/pytorch/torch/export/_trace.py", line 1879, in _non_strict_export
aten_export_artifact = _to_aten_func( # type: ignore[operator]
File "/data/users/bobren/a/pytorch/torch/export/_trace.py", line 1665, in _export_to_aten_ir_make_fx
gm, graph_signature = transform(_make_fx_helper)(
File "/data/users/bobren/a/pytorch/torch/export/_trace.py", line 1809, in _aot_export_non_strict
gm, sig = aot_export(wrapped_mod, args, kwargs=kwargs, **flags)
File "/data/users/bobren/a/pytorch/torch/export/_trace.py", line 1585, in _make_fx_helper
gm = make_fx(
File "/data/users/bobren/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 2194, in wrapped
return make_fx_tracer.trace(f, *args)
File "/data/users/bobren/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 2132, in trace
return self._trace_inner(f, *args)
File "/data/users/bobren/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 2103, in _trace_inner
t = dispatch_trace(
File "/data/users/bobren/a/pytorch/torch/_compile.py", line 51, in inner
return disable_fn(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/_dynamo/eval_frame.py", line 749, in _fn
return fn(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 1136, in dispatch_trace
graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
File "/data/users/bobren/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 1692, in trace
res = super().trace(root, concrete_args)
File "/data/users/bobren/a/pytorch/torch/fx/_symbolic_trace.py", line 834, in trace
(self.create_arg(fn(*args)),),
File "/data/users/bobren/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 1191, in wrapped
out = f(*tensors) # type:ignore[call-arg]
File "<string>", line 1, in <lambda>
File "/data/users/bobren/a/pytorch/torch/export/_trace.py", line 1488, in wrapped_fn
return tuple(flat_fn(*args))
File "/data/users/bobren/a/pytorch/torch/_functorch/_aot_autograd/utils.py", line 184, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 879, in functional_call
out = mod(*args[params_len:], **kwargs)
File "/data/users/bobren/a/pytorch/torch/fx/_symbolic_trace.py", line 811, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 1762, in call_module
return Tracer.call_module(self, m, forward, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/fx/_symbolic_trace.py", line 529, in call_module
ret_val = forward(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/fx/_symbolic_trace.py", line 804, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/nn/modules/module.py", line 1749, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/nn/modules/module.py", line 1760, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/export/_trace.py", line 1793, in forward
tree_out = mod(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/fx/_symbolic_trace.py", line 811, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 1762, in call_module
return Tracer.call_module(self, m, forward, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/fx/_symbolic_trace.py", line 529, in call_module
ret_val = forward(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/fx/_symbolic_trace.py", line 804, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/nn/modules/module.py", line 1749, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/nn/modules/module.py", line 1760, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/bobren/a/pytorch/r1.py", line 21, in forward
after = cache.narrow(2, (pos_item + seqlen), (max_seq_len - pos_item - seqlen))
File "/data/users/bobren/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 1239, in __torch_function__
return func(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 1286, in __torch_function__
return func(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/_export/non_strict_utils.py", line 654, in __torch_function__
return func(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/_ops.py", line 866, in handler
return torch._library.utils.handle_dispatch_mode(
File "/data/users/bobren/a/pytorch/torch/_library/utils.py", line 296, in handle_dispatch_mode
return curr_mode.__torch_dispatch__(op_overload, overload_types, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/utils/_stats.py", line 27, in wrapper
return fn(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 1341, in __torch_dispatch__
return proxy_call(self, func, self.pre_dispatch, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 910, in proxy_call
out = func(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/_ops.py", line 749, in __call__
return self._op(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/utils/_stats.py", line 27, in wrapper
return fn(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1267, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1808, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1369, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 2282, in _dispatch_impl
decomposition_table[func](*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/_decomp/decompositions.py", line 759, in slice_forward
return self.as_strided(sizes, strides, storage_offset)
File "/data/users/bobren/a/pytorch/torch/utils/_stats.py", line 27, in wrapper
return fn(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1267, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1808, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1370, in _cached_dispatch_impl
entry = self._make_cache_entry(state, key, func, args, kwargs, output)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1640, in _make_cache_entry
output_info = self._get_output_info_for_cache_entry(
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1583, in _get_output_info_for_cache_entry
synth_output = self._output_from_cache_entry(
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1738, in _output_from_cache_entry
return self._get_output_tensor_from_cache_entry(
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1709, in _get_output_tensor_from_cache_entry
empty.set_(storage, storage_offset, shape, stride)
File "/data/users/bobren/a/pytorch/torch/fx/experimental/sym_node.py", line 564, in guard_size_oblivious
r = self.shape_env.evaluate_expr(
File "/data/users/bobren/a/pytorch/torch/fx/experimental/recording.py", line 263, in wrapper
return retlog(fn(*args, **kwargs))
File "/data/users/bobren/a/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6468, in evaluate_expr
return self._evaluate_expr(
File "/data/users/bobren/a/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6658, in _evaluate_expr
raise self._make_data_dependent_error(
torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression Ne(507 - u0, 1) (unhinted: Ne(507 - u0, 1)). (Size-like symbols: u0)
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D69079434](https://our.internmc.facebook.com/intern/diff/D69079434) | true |
2,826,528,753 | [inductor] Refactor CaptureIndexing into global scope | jansel | closed | [
"Merged",
"Reverted",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146373
* __->__ #146297
* #146282
* #146257
* #146255
* #146254
* #146252
And inline SimplifyIndexing into it CaptureIndexing.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,826,488,821 | Dump partial fx graph to stderr when dynamo tracing fails with a guard on a data-dependent expression | bobrenjc93 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146363
* __->__ #146296
* #146298
As discussed with @avikchaudhuri and @bdhirsh last week, this can be quite useful when debugging.
The following code produces a data dependent error
```
import torch
from torch import nn
# UserError: Could not guard on data-dependent expression Eq(507 - u0, 0) (unhinted: Eq(507 - u0, 0)). (Size-like symbols: u0)
class Repro(nn.Module):
def __init__(self):
super().__init__()
def forward(self, cache, update, pos):
_, _, max_seq_len, _ = cache.shape
_, _, seqlen, _ = update.shape
pos_item = pos[0].item() # u0
torch._check(pos_item + seqlen <= max_seq_len) # u0 + 502 <= 507
torch._check(pos_item >= 0)
before = cache.narrow(2, 0, pos_item)
# FAIL
# Laith: why can't we make unbacked expressions size-like?
after = cache.narrow(2, (pos_item + seqlen), (max_seq_len - pos_item - seqlen))
# PASS
end = torch.tensor(max_seq_len - pos_item - seqlen).item()
after = cache.narrow(2, (pos_item + seqlen), end)
return torch.cat([before, update, after], dim=2)
repro = Repro()
bsz = 1
n_heads = 4
max_seq_len = 512
head_dim = 64
seqlen = 5
pos_item = 1
cache = torch.zeros(bsz, n_heads, max_seq_len, head_dim)
update = torch.ones(bsz, n_heads, seqlen, head_dim)
pos = torch.tensor([pos_item])
example_inputs = (cache, update, pos)
torch.export.export(repro, example_inputs)
```
This is what it now prints out
```
class GraphModule(torch.nn.Module):
def forward(self, L_cache_: "f32[1, 4, 512, 64][131072, 32768, 64, 1]cpu", L_update_: "f32[1, 4, 5, 64][1280, 320, 64, 1]cpu", L_pos_: "i64[1][1]cpu"):
l_cache_ = L_cache_
l_update_ = L_update_
l_pos_ = L_pos_
# File: /data/users/bobren/a/pytorch/r1.py:14 in forward, code: pos_item = pos[0].item() # u0
getitem: "i64[][]cpu" = l_pos_[0]; l_pos_ = None
item: "Sym(u0)" = getitem.item(); getitem = None
# File: /data/users/bobren/a/pytorch/r1.py:15 in forward, code: torch._check(pos_item + seqlen <= max_seq_len) # u0 + 502 <= 507
add: "Sym(u0 + 5)" = item + 5
le: "Sym(u0 + 5 <= 512)" = add <= 512; add = None
_check = torch._check(le); le = _check = None
# File: /data/users/bobren/a/pytorch/r1.py:16 in forward, code: torch._check(pos_item >= 0)
ge: "Sym(u0 >= 0)" = item >= 0
_check_1 = torch._check(ge); ge = _check_1 = None
# File: /data/users/bobren/a/pytorch/r1.py:17 in forward, code: before = cache.narrow(2, 0, pos_item)
before: "f32[1, 4, u0, 64][131072, 32768, 64, 1]cpu" = l_cache_.narrow(2, 0, item); before = None
# File: /data/users/bobren/a/pytorch/r1.py:21 in forward, code: after = cache.narrow(2, (pos_item + seqlen), (max_seq_len - pos_item - seqlen))
add_1: "Sym(u0 + 5)" = item + 5
sub: "Sym(512 - u0)" = 512 - item; item = None
sub_1: "Sym(507 - u0)" = sub - 5; sub = None
narrow_1 = l_cache_.narrow(2, add_1, sub_1); l_cache_ = add_1 = sub_1 = narrow_1 = None
Traceback (most recent call last):
File "/data/users/bobren/a/pytorch/torch/_dynamo/utils.py", line 3075, in run_node
return getattr(args[0], node.target)(*args[1:], **kwargs)
File "/data/users/bobren/a/pytorch/torch/utils/_stats.py", line 27, in wrapper
return fn(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1267, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1808, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1369, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 2282, in _dispatch_impl
decomposition_table[func](*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/_decomp/decompositions.py", line 759, in slice_forward
return self.as_strided(sizes, strides, storage_offset)
File "/data/users/bobren/a/pytorch/torch/utils/_stats.py", line 27, in wrapper
return fn(*args, **kwargs)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1267, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1808, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1370, in _cached_dispatch_impl
entry = self._make_cache_entry(state, key, func, args, kwargs, output)
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1640, in _make_cache_entry
output_info = self._get_output_info_for_cache_entry(
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1583, in _get_output_info_for_cache_entry
synth_output = self._output_from_cache_entry(
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1738, in _output_from_cache_entry
return self._get_output_tensor_from_cache_entry(
File "/data/users/bobren/a/pytorch/torch/_subclasses/fake_tensor.py", line 1709, in _get_output_tensor_from_cache_entry
empty.set_(storage, storage_offset, shape, stride)
File "/data/users/bobren/a/pytorch/torch/fx/experimental/sym_node.py", line 564, in guard_size_oblivious
r = self.shape_env.evaluate_expr(
File "/data/users/bobren/a/pytorch/torch/fx/experimental/recording.py", line 263, in wrapper
return retlog(fn(*args, **kwargs))
File "/data/users/bobren/a/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6468, in evaluate_expr
return self._evaluate_expr(
File "/data/users/bobren/a/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6658, in _evaluate_expr
raise self._make_data_dependent_error(
torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression Ne(507 - u0, 1) (unhinted: Ne(507 - u0, 1)). (Size-like symbols: u0)
Caused by: after = cache.narrow(2, (pos_item + seqlen), (max_seq_len - pos_item - seqlen)) # r1.py:21 in forward (utils/_stats.py:27 in wrapper)
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,826,488,556 | The result of torch.sum will show the value of NAN | anonymous-tai | closed | [
"module: NaNs and Infs"
] | 2 | NONE | ### 🐛 Describe the bug
There are no NaN values in my input, but the output contains NaN.
```python
import torch
import numpy as np
# Input archive: https://github.com/user-attachments/files/18637750/torch.sum.zip
fun = torch.sum
# Load the input from the .npz file
input = np.load('/path/to/your/input.npz')
# Check if any variable contains NaN
def check_nan(variable, name):
# If the variable is a numpy array, check for NaNs
if isinstance(variable, np.ndarray):
if np.any(np.isnan(variable)):
print(f"{name} contains NaN values.")
else:
print(f"{name} does not contain NaN values.")
# If the variable is a torch Tensor, use torch.isnan() to check for NaNs
elif isinstance(variable, torch.Tensor):
if torch.isnan(variable).any():
print(f"{name} contains NaN values.")
else:
print(f"{name} does not contain NaN values.")
# If the variable is a dictionary, recursively check its values
elif isinstance(variable, dict):
for key, value in variable.items():
check_nan(value, f"{name}[{key}]")
else:
print(f"{name} is of type {type(variable)}, no NaN check implemented.")
# Convert the numpy arrays to PyTorch tensors
def convert_to_tensor(variable):
if isinstance(variable, np.ndarray):
return torch.from_numpy(variable) # Convert NumPy arrays to PyTorch tensors
elif isinstance(variable, dict):
for key, value in variable.items():
variable[key] = convert_to_tensor(value) # Convert each dictionary value
return variable
# Convert the .npz file into a dictionary of tensors
input_tensor = {key: torch.from_numpy(value) for key, value in input.items()}
# Ensure the input values are converted to tensors
for key, value in input_tensor.items():
if isinstance(value, np.ndarray):
print(f"Warning: {key} is still a numpy array.") # Just in case any value isn't converted
# Perform the sum operation
output1_t = fun(**input_tensor)
print(input_tensor)
# Convert the result to a NumPy array
output1 = output1_t.cpu().detach().numpy()
# Check for NaN values
check_nan(input_tensor, "input")
check_nan(output1_t, "output1_t")
check_nan(output1, "output1")
```

The test input and test files are located in the torch.sum.zip file
[torch.sum.zip](https://github.com/user-attachments/files/18637758/torch.sum.zip)
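One additional check worth running (my suggestion, not part of the original report): a reduction over perfectly finite inputs can still overflow to inf in a narrow dtype, so it helps to test for inf alongside NaN when narrowing down where the bad values appear:

```python
import torch

# Hypothetical input (not the issue's .npz): every value is finite, yet the
# float16 sum overflows past the fp16 max (65504) and becomes inf.
x = torch.full((10,), 60000.0, dtype=torch.float16)
s = x.sum()

print(torch.isnan(x).any().item())  # False -- no NaN in the input
print(torch.isinf(s).item())        # True  -- the reduction overflowed
```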
### Versions
ubuntu 20.04.6
torch 2.5.1+cu118
Python 3.9.19
CUDA Version: 12.4 | true |
2,826,441,369 | Enable TemporaryFileName tests on Windows | cyyever | closed | [
"open source",
"windows-triaged",
"topic: not user facing"
] | 1 | COLLABORATOR | Fixes #ISSUE_NUMBER
| true |
2,826,420,654 | [AOTI] Fix an unaligned memory access issue in mm_template | desertfire | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"ciflow/inductor-rocm"
] | 12 | CONTRIBUTOR | Summary: Fixes a corner case in the Triton MM template, where the dimension M (dynamic size) can be smaller than BLOCK_M (similarly for the N dimenstion) can trigger unaligned memory access error.
Differential Revision: D69034578
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,826,349,917 | [mps/inductor] Add support for digamma(). | dcci | closed | [
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3 | MEMBER | cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,826,324,529 | Enable some tests on Windows | cyyever | closed | [
"open source",
"topic: not user facing"
] | 4 | COLLABORATOR | Fixes #ISSUE_NUMBER
| true |
2,826,296,275 | [ c10d ] modify API to get device string from device with torch.device | ankurneog | closed | [
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 7 | CONTRIBUTOR | Modify the ```get_default_backend_for_device()``` API to extract the device string using ```torch.device()```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,826,282,124 | Use device agnostic APIs for device_count and backend in common_fsdp | ankurneog | open | [
"triaged",
"open source",
"module: fsdp",
"ciflow/trunk",
"topic: not user facing",
"module: hpu"
] | 28 | CONTRIBUTOR | Replace device specific APIs with device abstracted API
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @kwen2501 @chauhang @jeromean @bsochack @sujoysaraswati | true |
2,826,223,366 | [Trace PyDispatcher] Capture Vmapped autograd function as graph | yanboliang | open | [
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146288
* #146272
* #146271
* #146270
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,826,204,425 | Test macos2 | cyyever | closed | [
"open source",
"release notes: dataloader"
] | 1 | COLLABORATOR | Fixes #ISSUE_NUMBER
| true |
2,826,157,418 | Non-blocking GPU to CPU copy of complex numbers with the different conj status produces wrong results | ngimel | open | [
"high priority",
"triaged",
"module: complex",
"module: correctness (silent)"
] | 2 | COLLABORATOR | ```python
import torch
def _test_copy(dst, src, non_blocking):
event = torch.cuda.Event()
dst.copy_(src, non_blocking=non_blocking)
if non_blocking:
event.record()
event.synchronize()
#print(src, dst)
return torch.equal(dst.contiguous().cpu(), src.contiguous().cpu())
for _ in range(10):
src = torch.randn((8,), dtype=torch.complex64, device="cuda").conj()
dst = torch.zeros_like(src, device="cpu").pin_memory()
ret = _test_copy(dst, src, non_blocking=True)
print(ret)
```
The `conj_physical` happens before the copy has completed, and the copied data is never conjugated.
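The lazy-conjugation semantics behind this bug can be illustrated on CPU alone (a sketch of the mechanism, not a fix): `conj()` only sets a view flag on the tensor, and the actual values are materialized later by `resolve_conj()` / `conj_physical`, which is why the ordering relative to an async copy matters.

```python
import torch

a = torch.tensor([1 + 2j, 3 - 4j])
c = a.conj()                 # lazy: sets the conj bit, shares storage with a
assert c.is_conj()

r = c.resolve_conj()         # materializes the conjugated values
assert not r.is_conj()
assert torch.equal(r, torch.tensor([1 - 2j, 3 + 4j]))
```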
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @anjali411 @dylanbespalko @mruberry @nikitaved @amjames | true |
2,826,131,197 | [scan] Autograd with partial gradient support | bohnstingl | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo"
] | 9 | COLLABORATOR | This PR introduces the Autograd feature for scan with partial gradient support. It is a combination of the already opened PRs: https://github.com/pytorch/pytorch/pull/135631 and https://github.com/bohnstingl/pytorch/pull/4
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @ydwu4 | true |
2,826,128,545 | [metal] Move digamma to special_math.h | dcci | closed | [
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor"
] | 3 | MEMBER | cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,826,072,280 | [dynamo] misc fixes for inspect | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146116
* #146219
* __->__ #146283
* #146075
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,826,044,076 | [inductor] Minor compile time optimizations in DefaultHandler | jansel | closed | [
"Merged",
"Reverted",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146373
* #146297
* __->__ #146282
* #146257
* #146255
* #146254
* #146252
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,826,001,403 | [metal] Refactor digamma in preparation for moving it. | dcci | closed | [
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor"
] | 3 | MEMBER | cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,825,991,726 | [Inductor-CPU] Avoid redundant compute of index in AVX512 FP32 acc GEMM micro-kernel | sanchitintel | closed | [
"open source",
"Stale",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2 | COLLABORATOR | `constexpr int idx` doesn't seem to help here since `idx` is equal to `i`:
```cpp
constexpr int row = i / COLS;
constexpr int col = i % COLS;
// some other code
constexpr int idx = row * COLS + col;
vc[idx] = at::vec::fmadd(va, vb[col], vc[idx]);
```
TODO
- [ ] Although it's known at the time of compilation as to what various values of `i` would be due to forced-unrolling of `compute` lambda calls, check if the compiler really computes values of `row`, `col` and `idx` corresponding to each value of `i` at compile-time.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,825,989,201 | [MPS] Add linalg det and fix lu factor for non contiguous tensors | Isalia20 | closed | [
"open source",
"Merged",
"release notes: mps",
"ciflow/mps"
] | 6 | COLLABORATOR | Requested in #77764
This PR adds support for linalg.det on MPS and fixes lu_factor for non-contiguous tensors; the current implementation crashed on any kind of non-contiguous tensor with the error:
```
-[AGXG13XFamilyCommandBuffer blitCommandEncoderCommon:]:833: failed assertion `A command encoder is already encoding to this command buffer'
zsh: abort python det.py
``` | true |
2,825,982,194 | [inductor] Guard a member variable with a define. | dcci | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | MEMBER | It's unused otherwise, and when running MPS tests, I get a bunch of warnings of this kind:
```
/Users/davidino/pytorch/pytorch/torch/include/torch/csrc/inductor/aoti_runtime/model_container.h:412:10: warning: private field 'blob_size_' is not used [-Wunused-private-field]
  412 |   size_t blob_size_;
      |          ^
1 warning generated.
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,825,959,514 | shapeenv and prov logging | bobrenjc93 | closed | [
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146277
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
Differential Revision: [D69025488](https://our.internmc.facebook.com/intern/diff/D69025488) | true |
2,825,938,989 | [BE]: Enable ruff SLOT checks | Skylion007 | closed | [
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing",
"fx",
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 8 | COLLABORATOR | This enables a check that a class which only inherits from immutable classes like str, tuple, and NamedTuple also defines `__slots__`, so its instances don't allocate a per-instance `__dict__` unnecessarily. This also ensures contributors think about how they define classes that subclass NamedTuple and str, of which we have many in our codebase
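As a small illustration of what such a lint flags (hypothetical class names, stdlib only): a plain subclass of a NamedTuple-generated class silently regains a per-instance `__dict__` unless it declares empty `__slots__`.

```python
from typing import NamedTuple

class Point(NamedTuple):
    x: int
    y: int

class Labeled(Point):        # no __slots__: instances get a __dict__ again
    pass

class Slotted(Point):        # what the SLOT rule asks for
    __slots__ = ()

assert hasattr(Labeled(1, 2), "__dict__")
assert not hasattr(Slotted(1, 2), "__dict__")
```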
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,825,911,453 | Correctly handle duplicated arguments when merging input views. | ysiraichi | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 14 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146275
Fix: #135099
This PR changes how we map the original inputs into the new set of
inputs that take in the tensor input's base instead of their aliases.
**Problem:** in order to create this mapping, we had a dictionary that
mapped the hashed arguments into their respective indices. However, if
there's a group of equal arguments, we will have only one mapping for
such an argument. This breaks the assumption that there will be one
mapping for each argument.
**Solution:** map the hashed arguments into a list of indices. Then, we
will be able to correctly reconstruct the parameters for the new calling
convention. | true |
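The fix described above can be sketched in a few lines of plain Python (hypothetical names; the real change lives in the view-merging code): build the hashed-argument index as a multimap rather than a single-valued dict, so equal arguments keep one entry per occurrence.

```python
from collections import defaultdict

def build_index(args):
    # Map each (hashed) argument to the LIST of positions where it occurs,
    # so duplicated arguments no longer overwrite each other's mapping.
    mapping = defaultdict(list)
    for i, a in enumerate(args):
        mapping[hash(a)].append(i)
    return dict(mapping)

idx = build_index(("x", "y", "x"))
assert idx[hash("x")] == [0, 2]
assert idx[hash("y")] == [1]
```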
2,825,897,360 | torch.nn.functional.one_hot has inconsistent behavior between eager and torch.compile when num_classes=0 | meetmul | open | [
"triaged",
"actionable",
"oncall: pt2",
"module: fakeTensor",
"module: pt2-dispatcher"
] | 4 | NONE | ### 🐛 Describe the bug
When `num_classes=0`, `torch.nn.functional.one_hot` throws `Class values must be smaller than num_classes.` under eager, but outputs an empty tensor under torch.compile.
```python
import torch
f = torch.nn.functional.one_hot
a = torch.arange(0, 5) % 3 # [0,1,2,0,1]
num_classes = 0
try:
torch.nn.functional.one_hot(a,num_classes)
except Exception as e:
print("Error on eager: ", str(e))
res = torch.compile(torch.nn.functional.one_hot)(a,num_classes)
print("Output under torch.compile: ", res)
```
### Error logs
Error on eager: Class values must be smaller than num_classes.
Output under torch.compile: tensor([], size=(5, 0), dtype=torch.int64)
### Versions
```
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] numpy 1.26.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @chauhang @penguinwu @eellison @zou3519 @bdhirsh @yf225 | true |
2,825,663,845 | [dcp] Minor improvements to filesystem writer | ananthsub | open | [
"oncall: distributed",
"triaged",
"open source",
"topic: not user facing",
"release notes: distributed (checkpoint)",
"oncall: distributed checkpointing"
] | 27 | NONE | - Apply the same check to `_SerialCpuLoader` as in `_OverlappingCpuLoader` for determining when to clone non-contiguous CPU tensors
- Add a minor helper function to avoid iterating over `WriteItem`s twice to collect bytes and tensor write items
- Use the metadata filename constant instead of hardcoding it
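The single-pass helper mentioned above could look roughly like this (hypothetical names; the real code operates on DCP `WriteItem` objects rather than dicts):

```python
def split_write_items(items):
    # One pass over the save plan instead of two filtered iterations.
    bytes_items, tensor_items = [], []
    for item in items:
        (bytes_items if item["kind"] == "bytes" else tensor_items).append(item)
    return bytes_items, tensor_items

b, t = split_write_items([{"kind": "bytes"}, {"kind": "tensor"}, {"kind": "tensor"}])
assert len(b) == 1 and len(t) == 2
```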
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 | true |
2,825,650,506 | [Trace PyDispatcher] Add CustomFunctionHigherOrderOperatorVariable | yanboliang | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146288
* __->__ #146272
* #146271
* #146270
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,825,650,487 | [Trace PyDispatcher] Support temporarily_pop_interpreter_stack ctx manager | yanboliang | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146288
* #146272
* __->__ #146271
* #146270
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,825,650,471 | [Dynamo][Trace PyDispatcher] Remove disable from HigherOrderOperator.__call__ | yanboliang | closed | [
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146288
* #146272
* #146271
* __->__ #146270
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,825,650,441 | [Dynamo][Trace PyDispatcher] Support calling id function over class | yanboliang | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146272
* #146271
* #146270
* __->__ #146269
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,825,590,916 | Enable some tests on MacOS | cyyever | closed | [
"module: macos",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @seemethere @malfet @pytorch/pytorch-dev-infra @albanD | true |
2,825,588,721 | Format tests by PYFMT | cyyever | open | [
"oncall: distributed",
"triaged",
"open source",
"Stale",
"topic: not user facing",
"ciflow/inductor",
"release notes: distributed (checkpoint)"
] | 1 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 | true |
2,825,584,034 | Remove unused import in tests | cyyever | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"oncall: export"
] | 3 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @penguinwu | true |
2,825,565,634 | Add libtorch nightly build for CUDA 12.8 | tinglvv | closed | [
"module: cuda",
"triaged",
"open source",
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing"
] | 12 | COLLABORATOR | Try removing sm50 and sm60 to shrink the binary size and resolve the ld --relink error.
"Architecture support for Maxwell, Pascal, and Volta is considered feature-complete and will be frozen in an upcoming release." from 12.8 release note.
Also updating the runner for cuda 12.8 test to g4dn (T4, sm75) due to the drop in sm50/60 support.
https://github.com/pytorch/pytorch/issues/145570
cc @atalman @malfet @ptrblck @msaroufim @eqy @nWEIdia | true |
2,825,519,752 | [ROCm] opportunistic fastatomics for ReduceAdd operations for MI300 GPUs | pragupta | closed | [
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"rocm",
"keep-going",
"ciflow/inductor-rocm",
"ciflow/rocm-mi300"
] | 38 | CONTRIBUTOR | In this approach, we catch any lanes within a wave that are doing fastatomics to the same destination address and compute the sum on the CU. This leads to a 3x improvement in scatter_add performance and a 2x improvement in index_select.
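The idea can be sketched in scalar Python (purely illustrative; the actual change is a HIP/device-code optimization): lanes that target the same address first combine their contributions, so only one atomic per distinct address is issued.

```python
from collections import defaultdict

def wave_scatter_add(out, addrs, vals):
    combined = defaultdict(float)
    for a, v in zip(addrs, vals):   # intra-wave combine on the CU
        combined[a] += v
    for a, v in combined.items():   # one "fastatomic" per unique address
        out[a] += v
    return out

out = wave_scatter_add([0.0] * 4, [1, 1, 2], [0.5, 0.5, 2.0])
assert out == [0.0, 1.0, 2.0, 0.0]
```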
scatter_add performance on MI300x:
dtype|Baseline (before optimizations)|opportunistic fastatomics
-------|----------------------------------|----------------------------------
f32|1.389425039|0.430447996
fp16|2.195472956|0.779729486
bf16|2.194051027|0.784599513
Using the following reproducer
```
import torch
import triton
def main():
dtype = torch.float32
dim = 1305301
a = torch.rand(100, device="cuda", dtype=dtype)
index = torch.randint(0, 100, (dim,), device="cuda")
src = torch.rand(dim, device="cuda", dtype=dtype)
print("=" * 20)
print(
triton.testing.do_bench(
lambda: a.scatter_add(0, index, src),
return_mode="median",
)
)
print("=" * 20)
if __name__ == "__main__":
main()
```
co-authored by: @amd-hhashemi
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,825,517,076 | [aoti] Assign proxy call args by name, and support default values. | zhxchen17 | closed | [
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 11 | CONTRIBUTOR | Fixes the following issue when compiling this program:
```
window = torch.hann_window(N_FFT).to(x.device)
stft = torch.stft(
x, N_FFT, HOP_LENGTH, window=window, return_complex=True
)
magnitudes = stft[..., :-1].abs() ** 2
return magnitudes
```
```
Traceback (most recent call last):
File "/home/zhxchen17/miniconda3/envs/dev/lib/python3.11/unittest/case.py", line 57, in testPartExecutor
yield
File "/home/zhxchen17/miniconda3/envs/dev/lib/python3.11/unittest/case.py", line 623, in run
self._callTestMethod(testMethod)
File "/home/zhxchen17/miniconda3/envs/dev/lib/python3.11/unittest/case.py", line 579, in _callTestMethod
if method() is not None:
^^^^^^^^
File "/home/zhxchen17/pytorch/torch/testing/_internal/common_utils.py", line 3120, in wrapper
method(*args, **kwargs)
File "/home/zhxchen17/pytorch/test/inductor/test_torchinductor.py", line 12356, in new_test
return value(self)
^^^^^^^^^^^
File "/home/zhxchen17/pytorch/test/inductor/test_aot_inductor.py", line 4334, in test_stft
self.check_model(model, example_inputs)
File "/home/zhxchen17/pytorch/test/inductor/test_aot_inductor_utils.py", line 185, in check_model
actual = AOTIRunnerUtil.run(
^^^^^^^^^^^^^^^^^^^
File "/home/zhxchen17/pytorch/test/inductor/test_aot_inductor_utils.py", line 137, in run
optimized = AOTIRunnerUtil.load(device, so_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zhxchen17/pytorch/test/inductor/test_aot_inductor_utils.py", line 119, in load
return torch._export.aot_load(so_path, device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zhxchen17/pytorch/torch/_export/__init__.py", line 165, in aot_load
runner = torch._C._aoti.AOTIModelContainerRunnerCuda(so_path, 1, device) # type: ignore[assignment, call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected extern kernel aten::hann_window to have serialized argument type as_scalar_type for argument 1 but got as_device
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,825,515,550 | Fix unreachable code | lancelotnd | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5 | CONTRIBUTOR | Fixes #146261
Removed unreachable code | true |
2,825,515,295 | Unreachable code triggers no-return error when compiling with -Werror=return-type | lancelotnd | closed | [] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
This is a minor smell, but while compiling with the extra flag `-finstrument-functions` I get the following error.
```
/home/users/lancend/code/H24-Experiments/pytorch-unified/aten/src/ATen/native/xnnpack/Linear.cpp:212:1: error: control reaches end of non-void function [-Werror=return-type]
212 | }
| ^
```
While this can of course be disabled with `-Wno-error=return-type`, the problem remains that the function that caused this error contains unreachable code that should be removed.
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0a0+gitc38b9b0
Is debug build: True
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.3.42133-1b9c17779
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 18.0.0git (https://github.com/RadeonOpenCompute/llvm-project roc-6.3.1 24491 1e0fda770a2079fbd71e4b70974d74f62fd3af10)
CMake version: version 3.31.5
Libc version: glibc-2.35
Python version: 3.9.21 | packaged by conda-forge | (main, Dec 5 2024, 13:51:40) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-4.18.0-553.16.1.el8_10.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI300A (gfx942:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.3.42133
MIOpen runtime version: 3.3.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD Instinct MI300A Accelerator
CPU family: 25
Model: 144
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 4
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3700.0000
CPU min MHz: 1500.0000
BogoMIPS: 7399.98
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 96 MiB (96 instances)
L3 cache: 384 MiB (12 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23,96-119
NUMA node1 CPU(s): 24-47,120-143
NUMA node2 CPU(s): 48-71,144-167
NUMA node3 CPU(s): 72-95,168-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] optree==0.14.0
[pip3] torch==2.7.0a0+gitc38b9b0
[pip3] torchvision==0.22.0a0+867521e
[conda] mkl-include 2025.0.1 pypi_0 pypi
[conda] mkl-static 2025.0.1 pypi_0 pypi
[conda] numpy 2.0.2 pypi_0 pypi
[conda] optree 0.14.0 pypi_0 pypi
[conda] torch 2.7.0a0+gitc38b9b0 dev_0 <develop>
[conda] torchvision 0.22.0a0+867521e dev_0 <develop>
``` | true |
2,825,507,884 | flex attention NoValidChoicesError with torch 2.6 | samsja | closed | [
"triaged",
"module: regression",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 7 | NONE | ### 🐛 Describe the bug
When using flex attention I am experiencing error that did not appear with torch 2.5.1.
More info here : https://github.com/PrimeIntellect-ai/prime/pull/209#issuecomment-2629145668
```bash
22:36:40 [INFO] [Rank 0] Caught an exception, terminating children
22:36:40 [INFO] [Rank 0] NoValidChoicesError: No choices to select, please consider adding ATEN into max_autotune_gemm_backends config (defined in torch/_inductor/config.py) to allow at least one choice.
target: flex_attention_backward
args[0]: TensorBox(StorageBox(
InputBuffer(name='primals_1', layout=FixedLayout('cuda:0', torch.bfloat16, size=[32, 16, 1024, 128], stride=[2097152, 128, 2048, 1]))
))
args[1]: TensorBox(StorageBox(
InputBuffer(name='primals_2', layout=FixedLayout('cuda:0', torch.bfloat16, size=[32, 16, 1024, 128], stride=[2097152, 128, 2048, 1]))
))
args[2]: TensorBox(StorageBox(
InputBuffer(name='primals_3', layout=FixedLayout('cuda:0', torch.bfloat16, size=[32, 16, 1024, 128], stride=[2097152, 128, 2048, 1]))
))
args[3]: TensorBox(StorageBox(
InputBuffer(name='getitem', layout=FixedLayout('cuda:0', torch.bfloat16, size=[32, 16, 1024, 128], stride=[2097152, 128, 2048, 1]))
))
args[4]: TensorBox(StorageBox(
DonatedBuffer(name='getitem_1', layout=FixedLayout('cuda:0', torch.float32, size=[32, 16, 1024], stride=[16384, 1024, 1]))
))
args[5]: TensorBox(StorageBox(
InputBuffer(name='tangents_1', layout=FixedLayout('cuda:0', torch.bfloat16, size=[32, 16, 1024, 128], stride=[2097152, 131072, 128, 1]))
))
args[6]: TensorBox(StorageBox(
Pointwise(
'cuda',
torch.float32,
def inner_fn(index):
i0, i1, i2 = index
tmp0 = ops.constant(0, torch.float32)
return tmp0
,
ranges=[32, 16, 1024],
origin_node=full_default,
origins=OrderedSet([full_default])
)
))
args[7]: Subgraph(name='fw_graph0', graph_module=<lambda>(), graph=None)
args[8]: Subgraph(name='joint_graph0', graph_module=<lambda>(), graph=None)
args[9]: (1024, 1024, TensorBox(StorageBox(
InputBuffer(name='primals_5', layout=FixedLayout('cuda:0', torch.int32, size=[32, 1, 8], stride=[8, 8, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_4', layout=FixedLayout('cuda:0', torch.int32, size=[32, 1, 8, 8], stride=[64, 64, 8, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_7', layout=FixedLayout('cuda:0', torch.int32, size=[32, 1, 8], stride=[8, 8, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_8', layout=FixedLayout('cuda:0', torch.int32, size=[32, 1, 8, 8], stride=[64, 64, 8, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_9', layout=FixedLayout('cuda:0', torch.int32, size=[32, 1, 8], stride=[8, 8, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_10', layout=FixedLayout('cuda:0', torch.int32, size=[32, 1, 8, 8], stride=[64, 64, 8, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_11', layout=FixedLayout('cuda:0', torch.int32, size=[32, 1, 8], stride=[8, 8, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_12', layout=FixedLayout('cuda:0', torch.int32, size=[32, 1, 8, 8], stride=[64, 64, 8, 1]))
)), 128, 128, Subgraph(name='mask_graph0', graph_module=<lambda>(), graph=None))
args[10]: 0.08838834764831843
args[11]: {'PRESCALE_QK': False, 'ROWS_GUARANTEED_SAFE': False, 'BLOCKS_ARE_CONTIGUOUS': False, 'OUTPUT_LOGSUMEXP': True}
args[12]: ()
args[13]: (TensorBox(StorageBox(
InputBuffer(name='primals_6', layout=FixedLayout('cuda:0', torch.int64, size=[32, 1024], stride=[1024, 1]))
)),)
Traceback (most recent call last):
File "/root/prime/src/zeroband/train.py", line 589, in <module>
raise e
File "/root/prime/src/zeroband/train.py", line 581, in <module>
train(config)
File "/root/prime/src/zeroband/train.py", line 352, in train
loss.backward()
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_tensor.py", line 626, in backward
torch.autograd.backward(
File "/root/prime/.venv/lib/python3.10/site-packages/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "/root/prime/.venv/lib/python3.10/site-packages/torch/autograd/graph.py", line 823, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/root/prime/.venv/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1710, in backward
return impl_fn()
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1700, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2037, in _backward_impl
CompiledFunction.compiled_bw = aot_config.bw_compiler(
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 489, in __call__
return self.compiler_fn(gm, example_inputs)
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 54, in _wrapped_bw_compiler
return disable(disable(bw_compiler_fn)(*args, **kwargs))
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 745, in _fn
return fn(*args, **kwargs)
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1808, in bw_compiler
return inner_compile(
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 569, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 675, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1129, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 979, in codegen_and_compile
graph.run(*example_inputs)
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_inductor/graph.py", line 855, in run
return super().run(*args)
File "/root/prime/.venv/lib/python3.10/site-packages/torch/fx/interpreter.py", line 167, in run
self.env[node] = self.run_node(node)
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1496, in run_node
result = super().run_node(n)
File "/root/prime/.venv/lib/python3.10/site-packages/torch/fx/interpreter.py", line 230, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1143, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1133, in call_function
out = lowerings[target](*args, **kwargs) # type: ignore[index]
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 409, in wrapped
out = decomp_fn(*args, **kwargs)
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_inductor/kernel/flex_attention.py", line 2361, in flex_attention_backward
broadcasted_grad_key = autotune_select_algorithm(
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_inductor/select_algorithm.py", line 1909, in autotune_select_algorithm
return _ALGORITHM_SELECTOR_CACHE(*args, **kwargs)
File "/root/prime/.venv/lib/python3.10/site-packages/torch/_inductor/select_algorithm.py", line 1379, in __call__
raise NoValidChoicesError(
torch._inductor.exc.LoweringException: NoValidChoicesError: No choices to select, please consider adding ATEN into max_autotune_gemm_backends config (defined in torch/_inductor/config.py) to allow at least one choice.
target: flex_attention_backward
args[0]: TensorBox(StorageBox(
InputBuffer(name='primals_1', layout=FixedLayout('cuda:0', torch.bfloat16, size=[32, 16, 1024, 128], stride=[2097152, 128, 2048, 1]))
))
args[1]: TensorBox(StorageBox(
InputBuffer(name='primals_2', layout=FixedLayout('cuda:0', torch.bfloat16, size=[32, 16, 1024, 128], stride=[2097152, 128, 2048, 1]))
))
args[2]: TensorBox(StorageBox(
InputBuffer(name='primals_3', layout=FixedLayout('cuda:0', torch.bfloat16, size=[32, 16, 1024, 128], stride=[2097152, 128, 2048, 1]))
))
args[3]: TensorBox(StorageBox(
InputBuffer(name='getitem', layout=FixedLayout('cuda:0', torch.bfloat16, size=[32, 16, 1024, 128], stride=[2097152, 128, 2048, 1]))
))
args[4]: TensorBox(StorageBox(
DonatedBuffer(name='getitem_1', layout=FixedLayout('cuda:0', torch.float32, size=[32, 16, 1024], stride=[16384, 1024, 1]))
))
args[5]: TensorBox(StorageBox(
InputBuffer(name='tangents_1', layout=FixedLayout('cuda:0', torch.bfloat16, size=[32, 16, 1024, 128], stride=[2097152, 131072, 128, 1]))
))
args[6]: TensorBox(StorageBox(
Pointwise(
'cuda',
torch.float32,
def inner_fn(index):
i0, i1, i2 = index
tmp0 = ops.constant(0, torch.float32)
return tmp0
,
ranges=[32, 16, 1024],
origin_node=full_default,
origins=OrderedSet([full_default])
)
))
args[7]: Subgraph(name='fw_graph0', graph_module=<lambda>(), graph=None)
args[8]: Subgraph(name='joint_graph0', graph_module=<lambda>(), graph=None)
args[9]: (1024, 1024, TensorBox(StorageBox(
InputBuffer(name='primals_5', layout=FixedLayout('cuda:0', torch.int32, size=[32, 1, 8], stride=[8, 8, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_4', layout=FixedLayout('cuda:0', torch.int32, size=[32, 1, 8, 8], stride=[64, 64, 8, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_7', layout=FixedLayout('cuda:0', torch.int32, size=[32, 1, 8], stride=[8, 8, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_8', layout=FixedLayout('cuda:0', torch.int32, size=[32, 1, 8, 8], stride=[64, 64, 8, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_9', layout=FixedLayout('cuda:0', torch.int32, size=[32, 1, 8], stride=[8, 8, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_10', layout=FixedLayout('cuda:0', torch.int32, size=[32, 1, 8, 8], stride=[64, 64, 8, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_11', layout=FixedLayout('cuda:0', torch.int32, size=[32, 1, 8], stride=[8, 8, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_12', layout=FixedLayout('cuda:0', torch.int32, size=[32, 1, 8, 8], stride=[64, 64, 8, 1]))
)), 128, 128, Subgraph(name='mask_graph0', graph_module=<lambda>(), graph=None))
args[10]: 0.08838834764831843
args[11]: {'PRESCALE_QK': False, 'ROWS_GUARANTEED_SAFE': False, 'BLOCKS_ARE_CONTIGUOUS': False, 'OUTPUT_LOGSUMEXP': True}
args[12]: ()
args[13]: (TensorBox(StorageBox(
InputBuffer(name='primals_6', layout=FixedLayout('cuda:0', torch.int64, size=[32, 1024], stride=[1024, 1]))
)),)
```
Using normal SDPA fixes the issue.
### Versions
pytorch 2.6
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng | true |
2,825,499,953 | [mps/inductor] Implement support for polygamma(). | dcci | closed | [
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3 | MEMBER | cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,825,495,491 | [dynamo] Add return to python_type | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146116
* #146219
* #146075
* #146070
* #146214
* __->__ #146258
* #146198
* #146062
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,825,483,114 | [inductor] Refactor op handlers part 5 | jansel | closed | [
"module: cpu",
"Merged",
"Reverted",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 7 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146373
* #146297
* #146282
* __->__ #146257
* #146255
* #146254
* #146252
This makes OpHandler just a normal class using inheritance, and removes the typing workarounds that were needed because it wasn't one.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,825,424,793 | use copy2d in h2d/d2h copy when possible | ngimel | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: cuda",
"ciflow/slow",
"ci-no-td"
] | 15 | COLLABORATOR |
A rewrite of #138964
In addition to rewriting the conditions for using copy2d, this PR fixes a few other problems with #138964:
1) gpu-gpu copies when peer access is disabled shouldn't rely on copy2d
2) copy2d should record even for the host pinned memory, like the regular copy does
3) copy2d shouldn't pretend that it's synchronizing (for the purposes of cuda sanitizer tracer) when it's non-blocking
In this PR copy2d behaves in exactly the same way as copy does wrt to those additional syncs, except it calls a different underlying cuda call.
Tests for multiple cases going through copy2d and avoiding copy2d pattern due to unsatisfied conditions are added.
Fixes #ISSUE_NUMBER
cc @ptrblck @msaroufim @eqy | true |
2,825,404,069 | [inductor] Refactor op handlers part 4 | jansel | closed | [
"Merged",
"Reverted",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146373
* #146297
* #146282
* #146257
* __->__ #146255
* #146254
* #146252
This replaces the `__getattr__()` pattern used in the remaining OpHandlers with a `DefaultHandler` class defined in part 2.
Some compile time wins from this as well:
```
2025-02-02T19:46:32.2033010Z
2025-02-02T19:46:32.2036607Z WIN: benchmark ('add_loop_inductor', 'compile_time_instruction_count') failed, actual result 29633182927 is -1.71% lower than expected 30150000000 ±1.50% please update the expected results.
2025-02-02T19:46:32.2037575Z
2025-02-02T19:46:32.2037907Z please update all results that changed significantly, and not only the failed ones
2025-02-02T19:46:32.2039291Z PASS: benchmark ('add_loop_inductor_dynamic_gpu', 'compile_time_instruction_count') pass, actual result 43986879172 -1.02% is within expected 44440000000 ±2.50%
2025-02-02T19:46:32.2040131Z
2025-02-02T19:46:32.2041180Z WIN: benchmark ('add_loop_inductor_gpu', 'compile_time_instruction_count') failed, actual result 26246225695 is -1.85% lower than expected 26740000000 ±1.50% please update the expected results.
2025-02-02T19:46:32.2042188Z
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,825,404,025 | [inductor] Refactor op handlers part 3 | jansel | closed | [
"Merged",
"Reverted",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146373
* #146297
* #146282
* #146257
* #146255
* __->__ #146254
* #146252
Fixes type errors that arise from typing `V.ops`.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,825,401,001 | [mps] Move polygamma to special_math.h. | dcci | closed | [
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor"
] | 3 | MEMBER | In preparation for implementing it in inductor.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,825,370,926 | [inductor] Refactor op handlers part 2 | jansel | closed | [
"Merged",
"Reverted",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 7 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146373
* #146297
* #146282
* #146257
* #146255
* #146254
* __->__ #146252
This replaces the `__getattr__()` pattern used in (some) OpHandlers with a `DefaultHandler` class that has an implementation of every op that calls `self._default()`.
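A minimal sketch of the two patterns (names and ops are illustrative, not the actual inductor interface):

```python
class GetattrHandler:
    # Old pattern: ops resolved dynamically, opaque to type checkers.
    def __getattr__(self, name):
        def op(*args):
            return f"{name}{args}"
        return op

class DefaultHandler:
    # New pattern: every op is a real method delegating to _default(),
    # so the handler has a concrete, checkable interface.
    def _default(self, name, args):
        return f"{name}{args}"

    def add(self, *args):
        return self._default("add", args)

    def mul(self, *args):
        return self._default("mul", args)

print(GetattrHandler().add(1, 2))  # add(1, 2)
print(DefaultHandler().add(1, 2))  # add(1, 2)
```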
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,825,369,624 | fatal error: torch/torch.h: No such file or directory #include <torch/torch.h> | pgn-penguin | closed | [] | 2 | NONE | Hi, I'm trying to create a custom C++ extension but get stuck at `#include <torch/extension.h>` with the error:
`interrpolation.cpp:1:29: fatal error: torch/extension.h: No such file or directory
#include <torch/extension.h>`
Just wondering that torch can been detected just like below, but just kept getting errors:

My c_cpp_properties.json is below with my VS environment, wondering anying missing:
```
{
"configurations": [
{
"name": "Win32",
"includePath": [
"${workspaceFolder}/**",
"C:/Users/USER/anaconda3/envs/cppcuda/Lib/site-packages/torch/include",
"C:/Users/USER/anaconda3/envs/cppcuda/Lib/site-packages/torch/include/torch/csrc/api/include",
"C:/Users/USER/anaconda3/envs/cppcuda/Lib/site-packages/torch/include"
],
"defines": [
"_DEBUG",
"UNICODE",
"_UNICODE"
],
"windowsSdkVersion": "10.0.22621.0",
"cStandard": "c17",
"cppStandard": "c++14",
"intelliSenseMode": "windows-msvc-x64"
}
],
"version": 4
}
```
Any guidance is much appreciated, thank you.
| true |
2,825,364,605 | Feature Request: Support for .to(device) on DataLoader objects | Arpon-programmer | closed | [] | 0 | NONE | ### 🚀 The feature, motivation and pitch
Currently, in PyTorch, moving tensors to a specific device (like GPU or CPU) requires manually iterating through each batch in the DataLoader and calling .to(device) on individual elements. While this is not overly complex, it would be much more convenient if DataLoader could directly support moving all tensors to the specified device in a single call, similar to what is available for individual tensors.
Problem:
Currently, to move data to the correct device, we must do this:
```python
for inputs, targets in train_dl:
    inputs, targets = inputs.to(device), targets.to(device)
    # Continue with the model...
```
This requires manually handling each batch of data and moving it to the device. It’s not a major issue but can be an inconvenience when working with large datasets or models that require frequent device transfers.
Suggestion:
Add support to the DataLoader object to directly support .to(device) for all batches within the loader. This would make it easier and cleaner to handle device transfers.
Example:
```python
train_dl.to(device)
```
or:
```python
train_dl = train_dl.to(device)
```
This would internally iterate through each batch and move all tensors (inputs and targets) to the specified device without requiring manual iteration.
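A minimal user-level sketch of the requested behavior, as a hypothetical wrapper class (`DeviceDataLoader` is an illustrative name, not an existing PyTorch API):

```python
class DeviceDataLoader:
    """Hypothetical wrapper (illustrative name): yields each batch with every
    element moved to ``device`` via its ``.to()`` method."""

    def __init__(self, dl, device):
        self.dl, self.device = dl, device

    def __iter__(self):
        for batch in self.dl:
            yield tuple(x.to(self.device) for x in batch)

    def __len__(self):
        return len(self.dl)

# With real DataLoader batches this becomes:
#   train_dl = DeviceDataLoader(train_dl, device)
#   for inputs, targets in train_dl:  # already on `device`
#       ...
```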
### Alternatives
_No response_
### Additional context
This feature would simplify code when working with models that involve multiple devices or require frequent device management. Additionally, it could make the code cleaner and more intuitive for users who are new to PyTorch. | true |
2,825,361,489 | Replace `*args : Any` with `typing_extensions.TypeVarTuple` | Skylion007 | open | [
"module: typing",
"triaged"
] | 1 | COLLABORATOR | ### 🚀 The feature, motivation and pitch
There are a lot of places where we forward positional args to superclasses or other methods with `*args`. We should be using `typing_extensions.TypeVarTuple` to forward these args whenever possible. TypedDicts are a great way to type kwargs, but probably only worth doing for the most popular method-forwarding cases.
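A minimal sketch of the forwarding pattern (requires Python 3.11+ for `typing.TypeVarTuple`, or `typing_extensions` on older versions):

```python
try:
    from typing import TypeVarTuple, Unpack  # Python 3.11+
except ImportError:
    from typing_extensions import TypeVarTuple, Unpack

Ts = TypeVarTuple("Ts")

def forward(fn, *args: Unpack[Ts]):
    # With ``*args: Any`` the checker loses all positional type info;
    # with ``Unpack[Ts]`` each argument keeps its own type downstream.
    return fn(*args)

print(forward(lambda a, b: a + b, 1, 2))  # 3
```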
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @malfet @xuzhao9 @gramster | true |
2,825,347,881 | Make fx.node.map_arg() and .map_aggregate() generic | rec | closed | [
"oncall: distributed",
"module: typing",
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"module: dynamo",
"ciflow/inductor",
"suppress-api-compatibility-check",
"suppress-bc-linter"
] | 39 | COLLABORATOR | ## What's the problem?
The popular `fx.node.map_arg()` and `fx.node.map_aggregate()` apply operations recursively on `dict`s, `tuples`, `list`s, etc, and return a new collection of the same type.
Unfortunately, their base input type is `Argument`, which is [very unspecific indeed](https://github.com/pytorch/pytorch/blob/5d55a6585d5806c2743e92118e663f5abb261895/torch/fx/node.py#L48-L58): most type information is just thrown away at the call site of either of these functions, as far as the type checker goes.
As `torch` moves to a more typed code base, this would force innocent, unsuspecting developers to add logically unnecessary casts or `# type: ignore` statements.
## What's the solution?
Making these two `node.map_*` functions generic on the first argument and return type means that type information is preserved for the type checker. (The signature of the other parameter, the function that visits the nodes and subnodes, has not changed, nor should it.)
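The behavior being typed can be sketched as a recursive map that rebuilds the same container type (a simplification; the real `map_aggregate()` also handles slices and fx's immutable containers):

```python
def map_aggregate(a, fn):
    # Recursively apply fn to leaves, preserving the container shape.
    # Making this generic in ``a`` lets the checker know the return
    # type matches the input type instead of collapsing to Argument.
    if isinstance(a, tuple):
        return tuple(map_aggregate(x, fn) for x in a)
    if isinstance(a, list):
        return [map_aggregate(x, fn) for x in a]
    if isinstance(a, dict):
        return {k: map_aggregate(v, fn) for k, v in a.items()}
    return fn(a)

print(map_aggregate({"a": [1, 2], "b": (3,)}, lambda x: x * 10))
# {'a': [10, 20], 'b': (30,)}
```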
## Won't it break everything?
It doesn't break the type checker - one place needed an extra hint.
There have been code breakages: one resolved, at least one new one... we'll see!
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146248
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @malfet @xuzhao9 @gramster @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames @StrongerXi | true |
2,825,346,420 | torch/nn/modules/conv.py: docs: improvements | kuraga | open | [
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 4 | CONTRIBUTOR | Fix highlighting in generated documentation (`torch/nn/modules/conv.py`):
* attrs should be `:attrs:`,
* constants should be constants,
* text in math should be '\text{}`.
Reborn of #136218.
/cc @albanD, @jbschlosser, @mikaylagawarecki | true |
2,825,329,023 | [BE][Ez]: Make c10/special arrays constexpr | Skylion007 | closed | [
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | COLLABORATOR | No reason to have array creation overhead for these constexpr arrays. This is better because it guarantees the array is not duplicated across templates or translation units unless necessary and allows the compiler to do static compile time bounds checking (even in loop based accesses) | true |
2,825,317,568 | [BE]: Remove redundant int cast calls | Skylion007 | closed | [
"oncall: distributed",
"open source",
"better-engineering",
"release notes: quantization",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 2 | COLLABORATOR | Remove redundant int cast calls
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,825,245,720 | Simplify CUDA version checking on tests | cyyever | closed | [
"oncall: distributed",
"triaged",
"open source",
"release notes: distributed (c10d)"
] | 1 | COLLABORATOR | Since we require CUDA >=11.0
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,825,235,302 | Enable some tests on Windows | cyyever | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | COLLABORATOR | Fixes #ISSUE_NUMBER
| true |
2,825,206,484 | "Automatically add `__all__`" tool and linter | rec | open | [
"module: typing",
"module: lint",
"triaged",
"better-engineering"
] | 4 | COLLABORATOR | ### Automatically add `_all__` (AAA)
I have been [playing around](https://github.com/rec/test/blob/master/python/importer_counter.py) with using Python's built-in `tokenize` module to build a big sorted table of which `torch.` modules import which modules and symbols from `torch.` (summary at end).
Seeing [this issue](https://github.com/pytorch/pytorch/issues/131765) about adding `__all__`, it strikes me that I could quite easily modify my code to **automatically add or update `__all__` in any existing `.py` file**, if there were any interest.
Right now there are 1303 .py files below `torch/` that do _not_ contain `__all__` and 695 that do.
### `all_linter`
An `all_linter` would check that **all symbols imported by another module appear in `__all__`** for their module.
Given AAA it'd be easy to naïvely run over each Python file on each commit (13 seconds on this fast machine, a bit slow). Writing it to work incrementally is a better idea, probably not hard, more design needed.
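A naïve sketch of such a check using the stdlib `ast` module (illustrative only; a real linter would also need to handle assignments, imports, and re-exports):

```python
import ast

def module_needs_all(source: str) -> bool:
    """Sketch: True if a module defines public top-level names but no __all__."""
    tree = ast.parse(source)
    has_all = any(
        isinstance(n, ast.Assign)
        and any(isinstance(t, ast.Name) and t.id == "__all__" for t in n.targets)
        for n in tree.body
    )
    public = [
        n.name
        for n in tree.body
        if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
        and not n.name.startswith("_")
    ]
    return bool(public) and not has_all

print(module_needs_all("def f(): pass\n"))                    # True
print(module_needs_all("__all__ = ['f']\ndef f(): pass\n"))   # False
```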
----
### Appendix: the most imported modules in `torch`
My original goal was to figure out which modules and symbols were the best candidates for adding typing and documentation by seeing which were imported the most from code within `torch/`.
I have excerpts from a run of https://github.com/rec/test/blob/master/python/importer_counter.py below.
I note an `experimental` near the top. 😁
The full "report" (it's JSON) goes into increasing levels of detail and is about 45k lines as of this writing.
```
"torch": 1808,
"torch._inductor.pattern_matcher": 486,
"torch._dynamo.utils": 339,
"torch._inductor.utils": 323,
"torch.utils._pytree": 272,
"torch.fx": 179,
"torch.fx.experimental.symbolic_shapes": 177,
"torch.nn": 173,
"torch.optim.optimizer": 165,
"torch.testing._internal.common_utils": 164,
"torch._prims_common": 161,
"torch._dynamo.source": 149,
...
{
"torch": {
"(module)": 1121,
"Tensor": 183,
"config": 84,
"_C": 38,
"ir": 35,
"variables": 19,
"SymInt": 14,
"_dtypes_impl": 11,
"torch._inductor.pattern_matcher": {
"CallFunction": 32,
"KeywordArg": 30,
"Arg": 29,
"CallFunctionVarArgs": 27,
"Ignored": 26,
"ListOf": 26,
....
"torch._inductor.pattern_matcher": {
"compute_mutation_region_ids": [
"torch._functorch.compile_utils"
],
"same_mutation_regions": [
"torch._functorch.compile_utils"
],
"Arg": [
"torch._inductor.fx_passes.b2b_gemm",
"torch._inductor.fx_passes.binary_folding",
"torch._inductor.fx_passes.decompose_mem_bound_mm",
"torch._inductor.fx_passes.mkldnn_fusion",
"torch._inductor.fx_passes.post_grad",
"torch._inductor.fx_passes.quantization",
"torch._inductor.fx_passes.serialized_patterns._sfdp_pattern_1",
"torch._inductor.fx_passes.serialized_patterns._sfdp_pattern_10",
"torch._inductor.fx_passes.serialized_patterns._sfdp_pattern_11",
"torch._inductor.fx_passes.serialized_patterns._sfdp_pattern_12",
"torch._inductor.fx_passes.serialized_patterns._sfdp_pattern_13",
"torch._inductor.fx_passes.serialized_patterns._sfdp_pattern_14",
"torch._inductor.fx_passes.serialized_patterns._sfdp_pattern_15",
...
```
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @malfet @xuzhao9 @gramster | true |
2,825,205,942 | How to perform BF16 matrix multiplication so that multiplication is done in BF16 and summation is done in FP32 efficiently using pytorch API? | Wongboo | closed | [
"module: cuda",
"triaged",
"module: linear algebra",
"module: python frontend",
"matrix multiplication"
] | 7 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
NVIDIA's cutlass library can perform BF16 matrix multiplication so that multiplication is done in BF16 and summation is done in FP32 for improved numerical stability. For example, consider the following snippet from [this code example from flash-attention](https://github.com/Dao-AILab/flash-attention/blob/02541ac9e8382f4d8e17f1f2ba0d7de2c792390c/csrc/flash_attn/src/flash_fwd_kernel.h#L319) calling it:
```
FLASH_NAMESPACE::gemm</*A_in_regs=*/Kernel_traits::Is_Q_in_regs>(
acc_s, tSrQ, tSrK, tSsQ, tSsK, tiled_mma, smem_tiled_copy_Q, smem_tiled_copy_K,
smem_thr_copy_Q, smem_thr_copy_K
);
```
where `tSrQ`, `tSrK`, `tSsQ`, and `tSsK` are BF16/FP16, while the final result `acc_s` is FP32.
I notice [pytorch's BF16 matrix multiplication](https://pytorch.org/docs/stable/notes/numerical_accuracy.html#reduced-precision-reduction-for-fp16-and-bf16-gemms) uses FP32 for intermediate accumulation, but the final result is downcast to BF16 anyway. I experimented with the `out` parameter and `autocast`, but neither provided a complete solution.
To be sure, the code below can implement BF16 matrix multiplication so that multiplication is done in BF16 and summation is done in FP32:
```
A = torch.randn((12, 3, 4, 5), dtype=torch.bfloat16)
B = torch.randn((12, 3, 5, 6), dtype=torch.bfloat16)
C = torch.einsum("...ij,...jk->...ijk", A, B).sum(dtype=torch.float32, dim=-2)
```
However, I have serious reservations about the speed and memory efficiency of this approach. I wonder if there is a more PyTorch-native way to call the corresponding CUTLASS API.
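For illustration, a pure-Python sketch of the desired semantics (multiply in simulated bf16, accumulate in full precision); `to_bf16` truncates rather than rounds, so it only approximates real bf16 hardware:

```python
import struct

def to_bf16(x: float) -> float:
    # Truncate an fp32 value's mantissa to bfloat16 precision (sketch;
    # real hardware rounds to nearest-even rather than truncating).
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

def matmul_bf16_mul_fp32_acc(A, B):
    # Products in (simulated) bf16, accumulation in full precision.
    n, k, m = len(A), len(B), len(B[0])
    return [
        [sum(to_bf16(to_bf16(A[i][t]) * to_bf16(B[t][j])) for t in range(k))
         for j in range(m)]
        for i in range(n)
    ]

print(matmul_bf16_mul_fp32_acc([[1.0, 2.0]], [[3.0], [4.0]]))  # [[11.0]]
```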
### Alternatives
_No response_
### Additional context
_No response_
cc @ptrblck @msaroufim @eqy @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @albanD | true |
2,825,142,807 | Win32 Build crashes on startup (C++). | AnFunctionArray | open | [
"needs reproduction",
"module: build",
"module: windows",
"triaged"
] | 2 | NONE | ### 🐛 Describe the bug
A Win32 build with MSVC 2022, protoc.exe from the latest PyTorch pip package, and CUDA 12.6 crashes at startup in c10.dll when linked and run by a C++ executable (it works when linked against the latest prebuilt PyTorch Win32 libraries downloaded online).
*The executable is a C++ project.*
### Versions
Not applicable, since this is about building from source.
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex | true |
2,825,129,852 | torch_shm_manager: undefined reference to gloo | adamjstewart | open | [
"module: build",
"module: cuda",
"triaged"
] | 17 | CONTRIBUTOR | ### 🐛 Describe the bug
I'm seeing PyTorch 2.6.0 build issues, but only when compiling with CUDA support and using the system gloo. The specific error message is:
```
[7209/7228] Linking CXX executable bin/torch_shm_manager
FAILED: bin/torch_shm_manager
: && /builds/spack/spack/lib/spack/env/gcc/g++ -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-error=dangling-reference -Wno-error=redundant-move -Wno-stringop-overflow -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -rdynamic -Wl,--dependency-file=caffe2/torch/lib/libshm/CMakeFiles/torch_shm_manager.dir/link.d -Wl,--no-as-needed caffe2/torch/lib/libshm/CMakeFiles/torch_shm_manager.dir/manager.cpp.o -o bin/torch_shm_manager 
-Wl,-rpath,/tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib:/home/software/spack/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeh/linux-ubuntu24.04-x86_64_v3/gcc-13.2.0/cpuinfo-2024-09-26-6dpxpvi2tu4ifv4mkc7zyke7xb4exics/lib:/home/software/spack/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeh/linux-ubuntu24.04-x86_64_v3/gcc-13.2.0/protobuf-3.13.0-hifms4p2pv4x7vh6gauu34hiatc42cpz/lib:/home/software/spack/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeh/linux-ubuntu24.04-x86_64_v3/gcc-13.2.0/pthreadpool-2023-08-29-wr5i7t4emwmywdqm4yzzmoonjjwrwk7g/lib:/home/software/spack/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeh/linux-ubuntu24.04-x86_64_v3/gcc-13.2.0/gloo-2023-12-03-sfe3fwtqo4p73apjqsmjoi2o7wj4ak75/lib: lib/libshm.so -lrt lib/libc10.so 
-Wl,-rpath-link,/tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib:/home/software/spack/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeh/linux-ubuntu24.04-x86_64_v3/gcc-13.2.0/cpuinfo-2024-09-26-6dpxpvi2tu4ifv4mkc7zyke7xb4exics/lib:/home/software/spack/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeh/linux-ubuntu24.04-x86_64_v3/gcc-13.2.0/protobuf-3.13.0-hifms4p2pv4x7vh6gauu34hiatc42cpz/lib:/home/software/spack/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeh/linux-ubuntu24.04-x86_64_v3/gcc-13.2.0/pthreadpool-2023-08-29-wr5i7t4emwmywdqm4yzzmoonjjwrwk7g/lib:/home/software/spack/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeh/linux-ubuntu24.04-x86_64_v3/gcc-13.2.0/gloo-2023-12-03-sfe3fwtqo4p73apjqsmjoi2o7wj4ak75/lib && /home/software/spack/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeh/linux-ubuntu24.04-x86_64_v3/gcc-13.2.0/cmake-3.31.4-hcuqrjfn4aeqmqd6xg7uaeqkihcg6ia4/bin/cmake -E __run_co_compile --lwyu="ldd;-u;-r" --source=bin/torch_shm_manager && :
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::allgather(gloo::AllgatherOptions&)'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::rendezvous::Store::~Store()'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::alltoallv(gloo::AlltoallvOptions&)'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::rendezvous::Context::connectFullMesh(gloo::rendezvous::Store&, std::shared_ptr<gloo::transport::Device>&)'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `typeinfo for gloo::rendezvous::Store'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::AllgathervOptions::setOutput(void*, std::vector<unsigned long, std::allocator<unsigned long> >, unsigned long)'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `vtable for gloo::rendezvous::PrefixStore'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::barrier(gloo::BarrierOptions&)'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::Context::getTimeout() const'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::allgatherv(gloo::AllgathervOptions&)'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::scatter(gloo::ScatterOptions&)'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::rendezvous::Context::Context(int, int, int)'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::AlltoallvOptions::setInput(void*, std::vector<long, std::allocator<long> >, unsigned long)'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::transport::tcp::CreateDevice(gloo::transport::tcp::attr const&)'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::BarrierOptions::BarrierOptions(std::shared_ptr<gloo::Context> const&)'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::Context::setTimeout(std::chrono::duration<long, std::ratio<1l, 1000l> >)'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::AlltoallvOptions::setOutput(void*, std::vector<long, std::allocator<long> >, unsigned long)'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::AllgathervOptions::setInput(void*, unsigned long, unsigned long)'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::Context::createUnboundBuffer(void*, unsigned long)'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::allreduce(gloo::AllreduceOptions const&)'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::broadcast(gloo::BroadcastOptions&)'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::alltoall(gloo::AlltoallOptions&)'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::reduce(gloo::ReduceOptions&)'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::gather(gloo::GatherOptions&)'
/usr/bin/ld: /tmp/root/spack-stage/spack-stage-py-torch-2.6.0-kkm3ehmxkqnsoh6tob3zmjtb54eprcnw/spack-src/build/lib/libtorch_cpu.so: undefined reference to `gloo::rendezvous::PrefixStore::PrefixStore(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, gloo::rendezvous::Store&)'
collect2: error: ld returned 1 exit status
```
The error occurs on both x86_64 and aarch64. It occurs only with PyTorch 2.6.0, not 2.5.1, and only when compiling with CUDA support, not for a CPU-only build.
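To narrow this down, the unique missing symbols can be pulled out of the linker output. A diagnostic sketch (`build.log` is a placeholder for the captured build output; here it is seeded with one sample line so the snippet runs standalone):

```shell
# Extract the unique missing symbols from a linker log like the one above,
# to see at a glance which library failed to link (here: gloo).
# Seed a sample log line; in practice, point this at the real build output.
printf '%s\n' \
  "/usr/bin/ld: libtorch_cpu.so: undefined reference to \`gloo::barrier(gloo::BarrierOptions&)'" \
  > build.log
grep -o "undefined reference to \`[^']*'" build.log \
  | sed "s/undefined reference to \`//; s/'\$//" \
  | sort -u
# prints: gloo::barrier(gloo::BarrierOptions&)
```

If every missing symbol lives in one namespace (as here, all `gloo::*`), the library was most likely built but dropped from the final link line.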
Attached are the full build log and the environment variables needed for reproducibility:
* [build log](https://github.com/user-attachments/files/18628327/spack-build-out.txt)
* [build env](https://github.com/user-attachments/files/18628328/spack-build-env-mods.txt)
### Versions
Can't run the script because PyTorch won't install, but here's a best attempt:
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: 12.6.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04
GCC version: 13.2.0
Clang version: N/A
CMake version: 3.31.4
Libc version: ?
Python version: 3.12.8
Python platform: Linux
Is CUDA available: True
CUDA runtime version: 12.6.3?
CUDA_MODULE_LOADING set to: ?
GPU models and configuration: cuda_arch 8.0
Nvidia driver version: ?
cuDNN version: 8.9.7.29
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @malfet @seemethere @ptrblck @msaroufim @eqy @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,825,109,965 | Remove NOLINTNEXTLINE | cyyever | closed | [
"oncall: distributed",
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"release notes: quantization"
] | 9 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | true |
2,825,090,670 | [2/N] Fix cppcoreguidelines-init-variables suppression | cyyever | open | [
"triaged",
"open source",
"topic: not user facing"
] | 6 | COLLABORATOR | This PR removes all `cppcoreguidelines-init-variables` suppressions.
| true |
2,825,000,332 | [Rosetta] Can not get the correct MacOS version | smartliuhw | closed | [
"low priority",
"triaged",
"module: macos",
"module: intel",
"module: mps"
] | 2 | NONE | ### 🐛 Describe the bug
I'm trying to use a bf16 tensor on MPS, but I get the error message: ``MPS BFloat16 is only supported on MacOS 14 or newer``.
However, my macOS version is 15.3, and torch does not recognize the system version correctly.
This is the result of my test:
```python
>>> import torch
>>> torch.backends.mps.is_macos_or_newer(15, 0)
False
>>> torch.backends.mps.is_macos_or_newer(14, 0)
False
>>> torch.backends.mps.is_macos_or_newer(13, 0)
True
```
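The gate itself boils down to a version comparison; a minimal sketch (hypothetical, not PyTorch's actual implementation) shows why a compatibility-mode version string such as `10.16` — which Rosetta reports, see the `macOS-10.16` platform line below — fails any `>= 14` check even though the real OS is 15.3:

```python
# Hypothetical sketch of a version gate like is_macos_or_newer;
# not PyTorch's actual implementation.
def is_at_least(reported: str, major: int, minor: int = 0) -> bool:
    # Parse "15.3" -> (15, 3) and compare tuples lexicographically.
    parts = tuple(int(p) for p in reported.split("."))
    return parts >= (major, minor)

print(is_at_least("15.3", 14))   # True  - the real OS version passes
print(is_at_least("10.16", 14))  # False - the Rosetta-reported version fails
```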
### Versions
```
python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0a0+git1eba9b3
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3 (x86_64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: version 3.31.4
Libc version: N/A
Python version: 3.12.8 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 11:37:13) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Pro
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.2
[pip3] optree==0.14.0
[pip3] torch==2.6.0a0+git1eba9b3
[pip3] torchdata==0.10.1
[pip3] torchpippy==0.2.0
[conda] mkl-include 2023.2.2 pypi_0 pypi
[conda] mkl-static 2023.2.2 pypi_0 pypi
[conda] numpy 2.2.2 pypi_0 pypi
[conda] optree 0.14.0 pypi_0 pypi
[conda] torch 2.6.0a0+git1eba9b3 dev_0 <develop>
[conda] torchdata 0.10.1 pypi_0 pypi
[conda] torchpippy 0.2.0 pypi_0 pypi
```
cc @malfet @albanD @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @kulinseth @DenisVieriu97 @jhavukainen | true |
2,824,957,495 | [inductor] Refactor op handlers part 1 | jansel | closed | [
"Merged",
"Reverted",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146373
* #146297
* #146282
* #146257
* #146255
* #146254
* #146252
* __->__ #146235
* #146226
* #146225
This enforces the invariant that every backend implements the same set of ops and removes a layer of indirection for BasicMathOps.
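That invariant can be checked mechanically at class-definition time; a toy sketch (assumed structure, not Inductor's actual OpHandler code):

```python
# Toy sketch of enforcing "every backend implements the same set of ops";
# assumed structure, not Inductor's actual OpHandler code.
REQUIRED_OPS = {"add", "mul", "exp"}

class OpHandlerBase:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Reject any backend missing a required op as soon as it is defined.
        missing = {op for op in REQUIRED_OPS if not callable(getattr(cls, op, None))}
        if missing:
            raise TypeError(f"{cls.__name__} is missing ops: {sorted(missing)}")

class GoodBackend(OpHandlerBase):
    def add(self, a, b): return a + b
    def mul(self, a, b): return a * b
    def exp(self, a): return a  # placeholder

try:
    class BadBackend(OpHandlerBase):  # lacks mul/exp -> rejected at definition
        def add(self, a, b): return a + b
except TypeError as e:
    print("rejected:", e)
```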
Interestingly this is a small compile time win:
```
...
WIN: benchmark ('add_loop_inductor', 'compile_time_instruction_count') failed, actual result 30151159301 is -6.13% lower than expected 32120000000 ±1.50% please update the expected results.
please update all results that changed significantly, and not only the failed ones
PASS: benchmark ('add_loop_inductor_dynamic_gpu', 'compile_time_instruction_count') pass, actual result 44447549162 -1.69% is within expected 45210000000 ±2.50%
WIN: benchmark ('add_loop_inductor_gpu', 'compile_time_instruction_count') failed, actual result 26743557195 is -2.25% lower than expected 27360000000 ±1.50% please update the expected results.
please update all results that changed significantly, and not only the failed ones
PASS: benchmark ('basic_modules_ListOfLinears_eager', 'compile_time_instruction_count') pass, actual result 945129734 +0.93% is within expected 936400000 ±1.50%
WIN: benchmark ('basic_modules_ListOfLinears_inductor', 'compile_time_instruction_count') failed, actual result 18984384503 is -3.19% lower than expected 19610000000 ±1.50% please update the expected results.
please update all results that changed significantly, and not only the failed ones
WIN: benchmark ('basic_modules_ListOfLinears_inductor_gpu_force_shape_pad', 'compile_time_instruction_count') failed, actual result 17258025389 is -1.94% lower than expected 17600000000 ±1.50% please update the expected results.
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,824,952,011 | [1/N] Fix F401 errors in tests | cyyever | open | [
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 8 | COLLABORATOR | Fixes #ISSUE_NUMBER
| true |
2,824,871,096 | Remove unactivated test | cyyever | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4 | COLLABORATOR | Fixes #ISSUE_NUMBER
| true |
2,824,799,706 | Disable has_relational_guards check for dict_tag optimization for now | isuruf | closed | [
"open source",
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 3 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146232
has_relational_guards evaluates to true almost always, and leads to a
slowdown in guards runtime
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,824,771,130 | [mps] Move zeta() to special_math.h. | dcci | closed | [
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor"
] | 3 | MEMBER | In preparation for implementing digamma/polygamma
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,824,737,932 | [cutlass backend] Add instantiation level for generating configs | henrylhtsang | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 10 | CONTRIBUTOR | Passing through instantiation level to generate more configs.
I do see some C++ compilation error. But running is fine. Using 2222 generates 1k+ configs.
Differential Revision: D68989194
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,824,714,744 | [ca] no longer require is_traceable annotations for c++ autograd functions | xmfan | closed | [
"Merged",
"ciflow/trunk",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"module: compiled autograd"
] | 3 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146386
* __->__ #146229
This PR removes the CA compile-time error for C++ autograd functions, and supports them by having dynamo graph break on them (instead of allow_in_graph). The CppNode's collects are kept as is for now.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,824,688,295 | [PT2][Inductor][reland] Add runtime numeric check for the post grad pass | mengluy0125 | closed | [
"fb-exported",
"Stale",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"ci-no-td"
] | 6 | CONTRIBUTOR | Summary: We observed compilation time regression with previous diff implementation D63438718. Here we fix the issue and reland the diff
Test Plan:
### numeric check enablement test
```
buck2 run mode/opt //scripts/jackiexu0313/pt2:local_model_with_pt2 -- --test_mode batch --use_synthetic_data --flow_id 685229965 -n
```
### compilation time check
```
buck2 run mode/opt //caffe2/benchmarks/dynamo/fb:torchbench_run_nanogpt_training -- -m nanogpt -t training
```
```
torchbench_run
duration_ms: 219528
defaults-batch_size: 1
defaults-speedup-x1000: 1408
defaults-abs_latency-x1000: 29068
defaults-compilation_latency-x1000: 93996
defaults-compression_ratio-x1000: 924
defaults-eager_peak_mem-x1000: 2473
defaults-dynamo_peak_mem-x1000: 2675
defaults-calls_captured: 1156
defaults-unique_graphs: 3
defaults-graph_breaks: 8
defaults-unique_graph_breaks: 6
defaults-autograd_captures: 0
defaults-autograd_compiles: 0
defaults-cudagraph_skips: 0
cudagraphs-batch_size: 1
cudagraphs-speedup-x1000: 5065
cudagraphs-abs_latency-x1000: 7983
cudagraphs-compilation_latency-x1000: 76961
cudagraphs-compression_ratio-x1000: 1485
cudagraphs-eager_peak_mem-x1000: 4473
cudagraphs-dynamo_peak_mem-x1000: 3012
cudagraphs-calls_captured: 1154
cudagraphs-unique_graphs: 2
cudagraphs-graph_breaks: 4
cudagraphs-unique_graph_breaks: 4
cudagraphs-autograd_captures: 0
cudagraphs-autograd_compiles: 0
cudagraphs-cudagraph_skips: 0
cudagraphs_dynamic-batch_size: 1
cudagraphs_dynamic-speedup-x1000: 5038
cudagraphs_dynamic-abs_latency-x1000: 8334
cudagraphs_dynamic-compilation_latency-x1000: 22521
cudagraphs_dynamic-compression_ratio-x1000: 893
cudagraphs_dynamic-eager_peak_mem-x1000: 4017
cudagraphs_dynamic-dynamo_peak_mem-x1000: 4493
cudagraphs_dynamic-calls_captured: 1154
cudagraphs_dynamic-unique_graphs: 2
cudagraphs_dynamic-graph_breaks: 4
cudagraphs_dynamic-unique_graph_breaks: 4
cudagraphs_dynamic-autograd_captures: 0
cudagraphs_dynamic-autograd_compiles: 0
cudagraphs_dynamic-cudagraph_skips: 0
```
```
servicelab create benchmark_torchbench_run_nanogpt_training -d D68979204
```
Successfully submitted experiment: https://www.internalfb.com/servicelab/experiment/4800587892/
Differential Revision: D68979204
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,824,685,430 | [ROCm][TunableOp] Add bias data type to params signature. | naromero77amd | closed | [
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm-mi300"
] | 3 | COLLABORATOR | Add the bias vector data type to the TunableOp params signature.
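Keying tuned results on the bias dtype matters because a solution tuned for one bias type may be invalid for another; a generic sketch of such a params signature (hypothetical, not the actual ROCm/TunableOp code):

```python
# Hypothetical sketch of a GEMM params signature that includes the bias
# dtype, so tuned solutions are not reused across incompatible bias types.
# Not the actual ROCm/TunableOp implementation.
def params_signature(m, n, k, in_dtype, out_dtype, bias_dtype=None):
    sig = f"gemm_m{m}_n{n}_k{k}_{in_dtype}_{out_dtype}"
    if bias_dtype is not None:
        sig += f"_bias_{bias_dtype}"
    return sig

print(params_signature(64, 64, 32, "f16", "f16", bias_dtype="f32"))
# prints: gemm_m64_n64_k32_f16_f16_bias_f32
```

Without the bias dtype in the key, a kernel tuned for an f32 bias could be picked up for a bf16 bias and produce wrong results or fail.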
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang | true |