| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,874,188,316 | GradScaler Not work on Intel Arc GPU | xiaoran007 | closed | [] | 2 | NONE | ### 🐛 Describe the bug
I recently set up the environment for my Intel Arc A770 GPU, and it worked fine in FP32. However, when I try to train with mixed precision, I find that GradScaler doesn't work properly on Arc GPUs.
If I use GradScaler directly according to this [documentation](https://pytorch.org/docs/stable/notes/get_start_xpu.html#train-with-amp), without passing the device type, i.e. `scaler = torch.amp.GradScaler(enabled=use_amp)` (which falls back to CUDA), it produces a warning that GradScaler is not enabled:
```python
UserWarning: torch.cuda.amp.GradScaler is enabled, but CUDA is not available. Disabling.
warnings.warn(
```
If I pass in the “xpu” device type `scaler = GradScaler(device="xpu", enabled=True)`, it throws an error:
```python
File "/home/xiaoran/miniconda3/envs/torch/lib/python3.9/site-packages/torch/amp/grad_scaler.py", line 451, in step
self.unscale_(optimizer)
File "/home/xiaoran/miniconda3/envs/torch/lib/python3.9/site-packages/torch/amp/grad_scaler.py", line 335, in unscale_
inv_scale = self._scale.double().reciprocal().float()
RuntimeError: Required aspect fp64 is not supported on the device
```
I also checked autocast, and it works as expected, so the problem seems to be specific to GradScaler.
### Versions
Collecting environment information...
PyTorch version: 2.6.0+xpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-53-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
CPU family: 6
Model: 63
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 48%
CPU max MHz: 3300.0000
CPU min MHz: 1200.0000
BogoMIPS: 4988.78
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts vnmi md_clear flush_l1d
Virtualization: VT-x
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 3 MiB (12 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] pytorch-triton-xpu==3.2.0
[pip3] torch==2.6.0+xpu
[pip3] torchaudio==2.6.0+xpu
[pip3] torchvision==0.21.0+xpu
[conda] numpy 1.26.3 pypi_0 pypi
[conda] pytorch-triton-xpu 3.2.0 pypi_0 pypi
[conda] torch 2.6.0+xpu pypi_0 pypi
[conda] torchaudio 2.6.0+xpu pypi_0 pypi
[conda] torchvision 0.21.0+xpu pypi_0 pypi | true |
2,874,179,768 | [DSD] Fixes issue when there is a PG without parameters | fegin | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (checkpoint)"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147730
Fixes https://github.com/pytorch/pytorch/issues/143828
cc @H-Huang @awgu @kwen2501 @wanchaol @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 | true |
2,874,113,683 | [RFE][Distributed][NCCL] A feature request for stream management API in PG NCCL | Aidyn-A | closed | [
"oncall: distributed",
"module: cuda",
"module: nccl",
"module: c10d"
] | 16 | COLLABORATOR | ### 🚀 The feature, motivation and pitch
### A feature request for stream management API in PG NCCL
Asynchronous communication offers the benefit of being overlapped with other CUDA operations, thanks to stream concurrency. However, in the current state of PyTorch, this advantage may be compromised by a potential "read-before-write" issue, as each NCCL process group operates on its own dedicated stream, which may act independently.
Here is an example of the correct behavior:
```python
dist.reduce_scatter_tensor(buffer, data, group=local_pg)
dist.all_reduce(buffer, group=cross_pg)
dist.all_gather_into_tensor(data, buffer, group=local_pg)
```

Here, the collective ops are executed in the correct order.
However, once these ops are requested as async:
```python
dist.reduce_scatter_tensor(buffer, data, group=local_pg, async_op=True)
dist.all_reduce(buffer, group=cross_pg, async_op=True)
dist.all_gather_into_tensor(data, buffer, group=local_pg, async_op=True)
```

The collective ops are no longer in the proper order, since they run on different streams that are asynchronous with respect to each other. **This behavior is certainly not what users would expect and can be considered a bug.**
One can mitigate this with a `handle.wait()` call, but the main goal is to run these comms in parallel with the compute ops (matmuls, etc.). We would like to propose one of two ways of setting streams in process groups:
- Pass a user defined stream as PG-option in new_group:
```python
nccl_options = dist.ProcessGroupNCCL.Options(stream=user_stream)
pg = dist.new_group(backend="nccl", pg_options=nccl_options)
```
- Set a user stream at runtime:
```python
pg1.set_stream(user_stream)
pg2.set_stream(user_stream)
dist.reduce_scatter_tensor(buffer, data, group=pg1, async_op=True)
dist.all_reduce(buffer, group=pg2, async_op=True)
```
### Motivation
This was first discovered in an attempt to do multi-data-center training. In the above example:
```python
dist.reduce_scatter_tensor(buffer, data, group=local_pg)
dist.all_reduce(buffer, group=cross_pg)
dist.all_gather_into_tensor(data, buffer, group=local_pg)
```
The `cross_pg` group has "netName" set to "Socket" in ProcessGroupOptionsNCCL to tell NCCL to use a frontend TCP network that connects the two compute clusters, instead of the IB or RoCE networks that are set for `local_pg`. Without more control over the communication stream, the inter-DC communication must be exposed rather than overlapped with any compute.
There are many other cases that could potentially benefit from fine-grained stream control as well.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ptrblck @msaroufim @eqy @skyw @alpha0422 | true |
2,874,092,879 | Update slow tests | pytorchupdatebot | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/slow",
"ci-no-td"
] | 3 | COLLABORATOR | This PR is auto-generated weekly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/weekly.yml).
Update the list of slow tests. | true |
2,874,033,075 | [XPU][Inductor] Update Intel triton for release 2.7. | etaf | closed | [
"open source",
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"keep-going",
"ciflow/xpu"
] | 9 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148323
* __->__ #147727
* #148538
* #148534
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,873,994,208 | DISABLED test_inductor_all_reduce_coalesced (__main__.CompileTest) | pytorch-bot[bot] | open | [
"triaged",
"module: flaky-tests",
"skipped",
"module: c10d"
] | 11 | NONE | Platforms: linux, rocm, inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_inductor_all_reduce_coalesced&suite=CompileTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37685839699).
Over the past 3 hours, it has been determined flaky in 21 workflow(s) with 42 failures and 21 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_inductor_all_reduce_coalesced`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `distributed/test_c10d_functional_native.py`
cc @clee2000 @wdvr | true |
2,873,838,543 | Avoid linking multiple OMP runtimes in libtorch_cpu.so if BLAS used is OpenBLAS. | vinithakv | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11 | CONTRIBUTOR | When PyTorch is built with OpenBLAS support and libopenblas is ldrectly linked with libgomp.so the libtorch_cpu.so ends up getting multiple omp runtimes linked against it. This may result in unexpected runtime behaviour /regression. This patch fixes this by avoiding linking against libomp.so if OpenBLAS is linked against libgomp.so
Fixes #146603
| true |
2,873,793,691 | [DTensor] [distributed]: Operator aten.select.int does not have a sharding strategy registered | biwang-pa | closed | [
"oncall: distributed",
"triaged",
"module: dtensor"
] | 4 | NONE | ### 🚀 The feature, motivation and pitch
Hi,
It looks like `aten.select.int` has no sharding strategy registered. From searching around, it looks like this is something the DTensor module needs to add op by op.
Here is the error log I have:
```
(TunerInternal pid=2226) Training errored after 0 iterations at 2025-02-22 21:28:10. Total running time: 3min 50s
(TunerInternal pid=2226) Error file: /tmp/ray/session_2025-02-22_20-52-15_173487_11/artifacts/2025-02-22_21-24-19/test_run/driver_artifacts/TorchTrainer_6e990_00000_0_2025-02-22_21-24-20/error.txt
(TunerInternal pid=2226)
ray.exceptions.RayTaskError(NotImplementedError): ray::TrainTrainable.train() (pid=1481, ip=10.107.76.5, actor_id=16e01024e8f96f26f980a63b03000000, repr=TorchTrainer)
File "/home/ray/anaconda3/lib/python3.10/site-packages/ray/tune/trainable/trainable.py", line 331, in train
raise skipped from exception_cause(skipped)
File "/home/ray/anaconda3/lib/python3.10/site-packages/ray/train/_internal/utils.py", line 53, in check_for_failure
ray.get(object_ref)
ray.exceptions.RayTaskError(NotImplementedError): ray::_RayTrainWorker__execute.get_next() (pid=1555, ip=10.107.76.4, actor_id=82297487b1b887def21481a203000000, repr=<ray.train._internal.worker_group.RayTrainWorker object at 0x7df873ca2b90>)
File "/home/ray/anaconda3/lib/python3.10/site-packages/ray/train/_internal/worker_group.py", line 33, in __execute
raise skipped from exception_cause(skipped)
File "/home/ray/anaconda3/lib/python3.10/site-packages/ray/train/_internal/utils.py", line 169, in discard_return_wrapper
train_func(*args, **kwargs)
File "/home/jupyter/open_source_model/ray_train_tp_fsdp.py", line 314, in train_func_main
File "/home/ray/anaconda3/lib/python3.10/site-packages/torch/_tensor.py", line 626, in backward
torch.autograd.backward(
File "/home/ray/anaconda3/lib/python3.10/site-packages/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "/home/ray/anaconda3/lib/python3.10/site-packages/torch/autograd/graph.py", line 823, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/home/ray/anaconda3/lib/python3.10/site-packages/torch/_compile.py", line 32, in inner
return disable_fn(*args, **kwargs)
File "/home/ray/anaconda3/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 745, in _fn
return fn(*args, **kwargs)
File "/home/ray/anaconda3/lib/python3.10/site-packages/torch/distributed/tensor/_api.py", line 346, in __torch_dispatch__
return DTensor._op_dispatcher.dispatch(
File "/home/ray/anaconda3/lib/python3.10/site-packages/torch/distributed/tensor/_dispatch.py", line 170, in dispatch
self.sharding_propagator.propagate(op_info)
File "/home/ray/anaconda3/lib/python3.10/site-packages/torch/distributed/tensor/_sharding_prop.py", line 206, in propagate
OutputSharding, self.propagate_op_sharding(op_info.schema)
File "/home/ray/anaconda3/lib/python3.10/site-packages/torch/distributed/tensor/_sharding_prop.py", line 46, in __call__
return self.cache(*args, **kwargs)
File "/home/ray/anaconda3/lib/python3.10/site-packages/torch/distributed/tensor/_sharding_prop.py", line 455, in propagate_op_sharding_non_cached
raise NotImplementedError(
NotImplementedError: Operator aten.select.int does not have a sharding strategy registered.
```
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu | true |
2,873,786,644 | fix simple-spec crash | mayank31398 | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 24 | CONTRIBUTOR | found an issue while running `python torchgen/fuse/gen_patterns.py`
exact error:
```shell
Traceback (most recent call last):
File "/Users/mayankmishra/Desktop/non-IBM/pytorch/torchgen/fuse/gen_patterns.py", line 19, in <module>
joint_graph.lazy_init()
File "/Users/mayankmishra/miniconda3/envs/ai/lib/python3.10/site-packages/torch/_inductor/pattern_matcher.py", line 2096, in lazy_init
result = fn()
File "/Users/mayankmishra/miniconda3/envs/ai/lib/python3.10/site-packages/torch/_inductor/fx_passes/joint_graph.py", line 53, in lazy_init
_pad_mm_init()
File "/Users/mayankmishra/miniconda3/envs/ai/lib/python3.10/site-packages/torch/_inductor/fx_passes/pad_mm.py", line 905, in _pad_mm_init
gen_register_replacement(
File "/Users/mayankmishra/miniconda3/envs/ai/lib/python3.10/site-packages/torch/_inductor/pattern_matcher.py", line 1584, in gen_register_replacement
pat = _serialize_pattern(
File "/Users/mayankmishra/miniconda3/envs/ai/lib/python3.10/site-packages/torch/_inductor/pattern_matcher.py", line 1539, in _serialize_pattern
file_template = get_file_template()
File "/Users/mayankmishra/miniconda3/envs/ai/lib/python3.10/site-packages/torch/_inductor/pattern_matcher.py", line 1513, in get_file_template
if isinstance(attr, type) and issubclass(attr, (PatternExpr, _TargetExpr)):
File "/Users/mayankmishra/miniconda3/envs/ai/lib/python3.10/abc.py", line 123, in __subclasscheck__
return _abc_subclasscheck(cls, subclass)
TypeError: issubclass() arg 1 must be a class
```
This PR fixes this issue.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,873,776,823 | Segmentation fault in `torch.ops.profiler._call_end_callbacks_on_jit_fut` | vwrewsge | open | [
"oncall: profiler"
] | 0 | NONE | ### 🐛 Describe the bug
Passing `None` instead of a tuple to `torch.ops.profiler._call_end_callbacks_on_jit_fut` can cause a segmentation fault.
# Code
```python
import torch
torch.ops.profiler._call_end_callbacks_on_jit_fut(torch.tensor(0), None)
```
# Output
```
Segmentation fault
```
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise | true |
2,873,762,309 | Skip test_dtypes xpu test on bmm and addbmm | daisyden | open | [
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 6 | NONE | For [RFC](https://github.com/pytorch/pytorch/issues/114850), this PR is to skip Intel GPU TestCommon::test_dtypes test on bmm and addbmm as both ops do not have complex64 support at present. To achieve this and limit the updates to op_db, extended DecorateInfo to support list device_type.
| true |
2,873,740,025 | [ONNX Convert] Error when input to nn.AdaptiveAvgPool2d size is variable | cengyi22 | open | [
"module: onnx",
"triaged"
] | 0 | NONE | ### 🐛 Describe the bug
We are converting a trained model containing nn.AdaptiveAvgPool2d to ONNX, but we encountered an error: the ONNX conversion fails when the size given to nn.AdaptiveAvgPool2d is variable.
```python
class spatial_strip_att2(nn.Module):
    def __init__(self, dim, kernel=3, dilation=1, group=2, H=False) -> None:
        super().__init__()
        self.k = kernel
        pad = dilation*(kernel-1) // 2
        self.kernel = (1, kernel) if H else (kernel, 1)
        self.padding = (kernel//2, 1) if H else (1, kernel//2)
        self.dilation = dilation  # dilation factor of the kernel
        self.group = group  # number of grouped convolutions
        self.pad = nn.ReflectionPad2d((pad, pad, 0, 0)) if H else nn.ReflectionPad2d((0, 0, pad, pad))
        self.conv = nn.Conv2d(dim, group*kernel, kernel_size=1, stride=1, bias=False)
        self.ap = nn.AdaptiveAvgPool2d((1, 1))
        self.filter_act = nn.Tanh()
        self.inside_all = nn.Parameter(torch.zeros(dim, 1, 1), requires_grad=True)
        self.lamb_l = nn.Parameter(torch.zeros(dim), requires_grad=True)
        self.lamb_h = nn.Parameter(torch.zeros(dim), requires_grad=True)
        gap_kernel = (None, 1) if H else (1, None)
        self.gap = nn.AdaptiveAvgPool2d(gap_kernel)  # Note: the bug is here; export should also work when gap_kernel has a variable size.
```
Any guidance on how to work around this, improve the code, etc. would be greatly appreciated.
### Versions
Collecting environment information...
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Pro (10.0.22631 64비트)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22631-SP0
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 561.19
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: AMD Ryzen 7 8845HS w/ Radeon 780M Graphics
Manufacturer: AuthenticAMD
Family: 107
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 3801
MaxClockSpeed: 3801
L2CacheSize: 8192
L2CacheSpeed: None
Revision: 29954
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] onnx==1.16.1
[pip3] onnxruntime-gpu==1.18.1
[pip3] onnxscript==0.1.0.dev20240708
[pip3] pytorch-lightning==2.3.0
[pip3] torch==2.3.1+cu121
[pip3] torchaudio==2.3.1+cu121
[pip3] torchmetrics==1.4.0.post0
[pip3] torchvision==0.18.1+cu121
[conda] Could not collect
| true |
2,873,731,278 | DISABLED test_real_imag_view_lazy_complex128 (__main__.TestViewOpsLAZY) | ankurneog | closed | [
"skipped"
] | 1 | CONTRIBUTOR | Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_ops.py%3A%3ATestViewOpsLAZY%3A%3Atest_real_imag_view_lazy_complex128%22%5D)). | true |
2,873,731,224 | DISABLED test_real_imag_view_lazy_complex128 (__main__.TestViewOpsLAZY) | ankurneog | closed | [
"skipped"
] | 1 | CONTRIBUTOR | Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_ops.py%3A%3ATestViewOpsLAZY%3A%3Atest_real_imag_view_lazy_complex128%22%5D)). | true |
2,873,731,188 | DISABLED test_flatten_nonview_xla (__main__.TestViewOpsXLA) | ankurneog | closed | [
"skipped"
] | 1 | CONTRIBUTOR | Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_flatten_nonview_xla%22%2C%22TestViewOpsXLA%22%5D)). | true |
2,873,731,085 | DISABLED test_real_imag_view_lazy_complex128 (__main__.TestViewOpsLAZY) | ankurneog | closed | [
"skipped"
] | 1 | CONTRIBUTOR | Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_ops.py%3A%3ATestViewOpsLAZY%3A%3Atest_real_imag_view_lazy_complex128%22%5D)). | true |
2,873,730,939 | DISABLED test_real_imag_view_lazy_complex128 (__main__.TestViewOpsLAZY) | ankurneog | closed | [
"skipped"
] | 1 | CONTRIBUTOR | Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_ops.py%3A%3ATestViewOpsLAZY%3A%3Atest_real_imag_view_lazy_complex128%22%5D)). | true |
2,873,730,424 | DISABLED test_real_imag_view_lazy_complex128 (__main__.TestViewOpsLAZY) | ankurneog | closed | [
"skipped"
] | 1 | CONTRIBUTOR | Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_ops.py%3A%3ATestViewOpsLAZY%3A%3Atest_real_imag_view_lazy_complex128%22%5D)). | true |
2,873,730,243 | DISABLED test_real_imag_view_lazy_complex128 (__main__.TestViewOpsLAZY) | ankurneog | closed | [
"skipped"
] | 1 | CONTRIBUTOR | Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_ops.py%3A%3ATestViewOpsLAZY%3A%3Atest_real_imag_view_lazy_complex128%22%5D)). | true |
2,873,730,062 | DISABLED test_real_imag_view_lazy_complex128 (__main__.TestViewOpsLAZY) | ankurneog | closed | [
"skipped"
] | 1 | CONTRIBUTOR | Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_ops.py%3A%3ATestViewOpsLAZY%3A%3Atest_real_imag_view_lazy_complex128%22%5D)). | true |
2,873,729,889 | DISABLED test_real_imag_view_lazy_complex128 (__main__.TestViewOpsLAZY) | ankurneog | closed | [
"skipped"
] | 1 | CONTRIBUTOR | Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22test_view_ops.py%3A%3ATestViewOpsLAZY%3A%3Atest_real_imag_view_lazy_complex128%22%5D)). | true |
2,873,699,476 | [AOTI][XPU] Suppress multi-line comment warning for XPU. | etaf | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 3 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147710
This PR aims to suppress the multi-line comment warning in the SYCL header when building the Inductor cpp_wrapper:
```
/intel/oneapi/compiler/2025.0/include/sycl/detail/builtins/builtins.hpp:235:1: warning: multi-line comment [-Wcomment]
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,873,675,475 | [MPS/Inductor] Add support for xlog1py. | dcci | closed | [
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3 | MEMBER | cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,873,639,826 | [Window] Fix invalid file path on windows. | etaf | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ciflow/xpu"
] | 4 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147708
This PR aims to fix an invalid path on Windows: `C:\\Users\\sdp\\AppData\\Local\\Temp\\tmp0wugz2qm\\dynamo\\code_state___main__.TestFxGraphCache.test_cache_hot_load_pgo:None:.pkl.lock`
Windows does not allow the characters `\ / : * ? " < > |` in a path.
This PR also replaces `os.rename` with `os.replace` in torch/_dynamo/pgo.py, because on Windows `os.replace` allows the target file to exist, while `os.rename` does not.
| Function | `os.rename()` | `os.replace()` |
|--------------------------------|----------------------------|----------------------------|
| Rename a file | ✅ | ✅ |
| Move a file | ✅ | ✅ |
| Overwrite an existing file | ❌ (Error on Windows) | ✅ (Will overwrite) |
| Overwrite an existing directory | ❌ (Error on Windows) | ❌ (Error on Windows) |
| Move across disks | ❌ | ❌ |
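The overwrite behavior in the table can be demonstrated in a few lines (a minimal sketch; the file names are illustrative):

```python
import os
import tempfile

# os.replace overwrites an existing target on every platform, whereas
# os.rename raises FileExistsError on Windows when the target exists.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "src.pkl")
    dst = os.path.join(tmp, "dst.pkl")
    with open(src, "w") as f:
        f.write("new state")
    with open(dst, "w") as f:
        f.write("old state")
    os.replace(src, dst)  # succeeds even though dst already exists
    with open(dst) as f:
        result = f.read()
    src_gone = not os.path.exists(src)
print(result, src_gone)  # → new state True
```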
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,873,621,596 | DISABLED test_inductor_all_gather_into_tensor_single (__main__.CompileTest) | pytorch-bot[bot] | open | [
"triaged",
"module: flaky-tests",
"skipped",
"module: c10d"
] | 17 | NONE | Platforms: linux, rocm, inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_inductor_all_gather_into_tensor_single&suite=CompileTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37677708367).
Over the past 3 hours, it has been determined flaky in 12 workflow(s) with 24 failures and 12 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_inductor_all_gather_into_tensor_single`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `distributed/test_c10d_functional_native.py`
cc @clee2000 @wdvr | true |
2,873,486,904 | Random Batch Sampler Speedup | GalAvineri | open | [
"triaged",
"open source",
"release notes: dataloader"
] | 11 | NONE | # Motivation
`Sampler` outputs indices using a generator, forcing `BatchSampler` to iterate over the indices one-by-one before grouping them into batches.
If `Sampler` constructs the whole sequence of indices before yielding it, batching could be done more efficiently over the sequence than by iterating over a generator.
This occurs for example in the widely used `RandomSampler`.
This PR replaces iteration with slicing by merging `RandomSampler` and `BatchSampler` into `RandomBatchSampler`.
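A plain-Python sketch of the slicing idea (illustrative only, not the PR's implementation; `random_batches` is a hypothetical helper):

```python
import random

def random_batches(n, batch_size, drop_last=False, seed=0):
    # Materialize the whole random permutation up front ...
    indices = list(range(n))
    random.Random(seed).shuffle(indices)
    # ... then batch by slicing, avoiding a per-index Python loop
    # over a generator of single indices.
    batches = [indices[i:i + batch_size] for i in range(0, n, batch_size)]
    if drop_last and batches and len(batches[-1]) < batch_size:
        batches.pop()
    return batches

print([len(b) for b in random_batches(10, 4)])                  # → [4, 4, 2]
print([len(b) for b in random_batches(10, 4, drop_last=True)])  # → [4, 4]
```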
Builds upon https://github.com/pytorch/pytorch/pull/137423
Benchmarking code is based on https://github.com/pytorch/pytorch/issues/76950
```
batch_size drop_last replacement avg and std original avg and std new speedup
4 True True 0.0085 +- 3.7e-04 0.0032 +- 2.3e-04 166.63%
4 True False 0.0126 +- 3.2e-04 0.0041 +- 1.1e-03 210.52%
4 False True 0.0100 +- 2.3e-04 0.0031 +- 3.9e-05 222.77%
4 False False 0.0068 +- 5.0e-04 0.0037 +- 9.6e-05 85.69%
8 True True 0.0083 +- 1.2e-04 0.0016 +- 1.9e-05 403.48%
8 True False 0.0054 +- 1.2e-04 0.0022 +- 7.5e-05 147.67%
8 False True 0.0090 +- 9.0e-05 0.0016 +- 3.5e-05 452.94%
8 False False 0.0060 +- 1.2e-04 0.0022 +- 7.9e-05 172.32%
64 True True 0.0079 +- 1.0e-04 0.0003 +- 1.9e-05 2257.91%
64 True False 0.0050 +- 1.1e-04 0.0009 +- 2.0e-05 457.21%
64 False True 0.0082 +- 5.7e-05 0.0003 +- 1.7e-05 2418.74%
64 False False 0.0052 +- 8.8e-05 0.0009 +- 2.1e-05 475.84%
256 True True 0.0078 +- 9.2e-05 0.0002 +- 1.6e-05 3696.33%
256 True False 0.0052 +- 4.9e-05 0.0008 +- 2.1e-05 555.58%
256 False True 0.0084 +- 5.4e-05 0.0002 +- 1.1e-05 3676.29%
256 False False 0.0056 +- 1.2e-03 0.0008 +- 2.9e-05 601.11%
1024 True True 0.0082 +- 6.0e-05 0.0002 +- 1.6e-05 4226.53%
1024 True False 0.0052 +- 4.9e-05 0.0008 +- 1.8e-05 589.77%
1024 False True 0.0083 +- 7.4e-05 0.0002 +- 1.6e-05 4216.05%
1024 False False 0.0053 +- 7.3e-05 0.0008 +- 1.8e-05 598.53%
4096 True True 0.0080 +- 1.0e-04 0.0002 +- 1.9e-05 4200.74%
4096 True False 0.0053 +- 8.2e-05 0.0007 +- 1.5e-05 608.29%
4096 False True 0.0081 +- 1.3e-04 0.0002 +- 1.4e-05 4398.71%
4096 False False 0.0052 +- 6.9e-05 0.0007 +- 1.2e-05 604.97%
8192 True True 0.0079 +- 7.3e-05 0.0002 +- 1.5e-05 4324.38%
8192 True False 0.0053 +- 8.5e-05 0.0007 +- 1.9e-05 613.55%
8192 False True 0.0080 +- 5.8e-05 0.0002 +- 1.3e-05 4545.14%
8192 False False 0.0053 +- 1.0e-04 0.0007 +- 1.2e-05 613.44%
16384 True True 0.0081 +- 1.6e-04 0.0002 +- 1.1e-05 4527.95%
16384 True False 0.0052 +- 1.1e-04 0.0007 +- 2.2e-05 606.50%
16384 False True 0.0080 +- 6.5e-05 0.0002 +- 1.2e-05 4462.77%
16384 False False 0.0052 +- 4.0e-05 0.0007 +- 1.7e-05 604.40%
```
To support the `replacement` argument I used numpy's `choice`, since I couldn't find an efficient alternative in PyTorch.
Accordingly, I also used a `numpy.random.Generator` for the `generator` argument.
If numpy must be avoided, I could look into finding an efficient torch alternative for `choice`. | true |
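Regarding the last point in the entry above: uniform sampling with replacement can be done in a single vectorized call even without numpy. A plain-Python sketch (illustrative only; for the uniform case, `torch.randint` would be the analogous torch-native call):

```python
import random

def indices_with_replacement(n, k, seed=0):
    # Draw k indices uniformly with replacement in one call,
    # avoiding a Python-level loop over individual draws.
    return random.Random(seed).choices(range(n), k=k)

idx = indices_with_replacement(100, 8)
print(len(idx), all(0 <= i < 100 for i in idx))  # → 8 True
```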
2,873,375,483 | PyTorch README Update | naveeen0308 | closed | [
"open source",
"topic: not user facing"
] | 2 | NONE | This pull request updates the PyTorch README file to enhance clarity, improve formatting, and ensure compliance with the latest contribution guidelines. The changes include refined installation instructions, updated resource links, and improved guidance for contributing to PyTorch. The update maintains the structure of the original document while ensuring consistency with PyTorch's documentation standards. | true |
2,873,367,060 | Update README.md | dhi2906nesh | closed | [
"open source",
"topic: not user facing"
] | 3 | NONE | **Minor Documentation Enhancement in README**
Description:
This pull request introduces minor punctuation improvements to the README file by adding necessary full stops for better readability and consistency. While the changes are small, they contribute to maintaining a polished and professional documentation style, ensuring clarity for users and contributors. | true |
2,873,341,705 | Update README.md | naveeen0308 | closed | [
"open source",
"topic: not user facing"
] | 2 | NONE | Corrected grammatical errors. | true |
2,873,341,158 | [bugfix][ez]: jit bugfix 3.10 version checking | Skylion007 | closed | [
"triaged",
"open source",
"ciflow/trunk",
"release notes: python_frontend",
"topic: bug fixes"
] | 4 | COLLABORATOR | Caught by a ruff static lint check; it will be flagged again once we update the target version. This was being enabled on any minor version of 3.10 when it should only have been enabled on 3.11.
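The diff itself isn't reproduced here, but the class of bug described reduces to a one-tuple comparison; the gate functions below are illustrative stand-ins, not the actual JIT code:

```python
def buggy_gate(version_info):
    # Enables on 3.10 *and* later -- the off-by-one the linter caught.
    return version_info >= (3, 10)

def fixed_gate(version_info):
    # Enables only on 3.11 and later, as intended.
    return version_info >= (3, 11)

assert buggy_gate((3, 10, 4)) and not fixed_gate((3, 10, 4))
assert buggy_gate((3, 11, 0)) and fixed_gate((3, 11, 0))
```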
cc @albanD | true |
2,873,334,752 | Compiled `flex_attention` assuming wrong output tensor shape | mauriceweiler | closed | [
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 5 | NONE | ### 🐛 Describe the bug
When running the code below, I get the following error:
```
torch._dynamo.exc.TorchRuntimeError: Failed running call_method reshape(*(FakeTensor(..., device='cuda:0', size=(1, 1024, 1, 8)), 1, 1024, 16), **{}):
shape '[1, 1024, 16]' is invalid for input of size 8192
```
The code runs without problems when `flex_attention` is not compiled, and the shape `[1, 1024, 16]` is actually valid; the size `8192` in the error message is wrong.
Note that this size corresponds to `B*L*E` instead of `B*L*Ev`, which suggests that `flex_attention` confuses query/key dimensions with value dimensions.
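A quick element-count check supports this reading: the reshape target matches the numel of the real `(B, L, H, Ev)` output, while the numel reported in the error matches `B*L*H*E`:

```python
B, L, H, E, Ev = 1, 1024, 1, 8, 16

expected_numel = B * L * H * Ev   # numel of the real (B, L, H, Ev) output
fake_numel = B * L * H * E        # numel the fake tensor apparently has

assert expected_numel == 16384    # reshape to (1, 1024, 16) is valid for this
assert fake_numel == 8192         # matches "input of size 8192" in the error
```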
``` python
import torch
from torch.nn.attention.flex_attention import flex_attention
B = 1
L = 1024
H = 1
Ev = 16
E = 8 # breaks for any E != Ev
@torch.compile
def fct(Q,K,V):
    out = flex_attention(Q,K,V)   # (B,H,L,Ev)
    out = out.transpose(1,2)      # (B,L,H,Ev)
    out = out.reshape(B,L,H*Ev)   # (B,L,H*Ev)
    return out
Q = torch.randn(B,H,L,E ).cuda()
K = torch.randn(B,H,L,E ).cuda()
V = torch.randn(B,H,L,Ev).cuda()
out = fct(Q,K,V)
print(out.shape)
```
@BoyuanFeng
### Error logs
```
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] failed while attempting to run meta for aten.view.default
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] Traceback (most recent call last):
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] File ".../.venv/lib/python3.13/site-packages/torch/_subclasses/fake_tensor.py", line 2384, in _dispatch_impl
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] r = func(*args, **kwargs)
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] File ".../.venv/lib/python3.13/site-packages/torch/_ops.py", line 723, in __call__
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] return self._op(*args, **kwargs)
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] ~~~~~~~~^^^^^^^^^^^^^^^^^
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] File ".../.venv/lib/python3.13/site-packages/torch/_refs/__init__.py", line 4675, in view
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] return _reshape_view_helper(a, *shape, allow_copy=False)
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] File ".../.venv/lib/python3.13/site-packages/torch/_refs/__init__.py", line 3713, in _reshape_view_helper
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] shape = utils.infer_size(shape, a.numel())
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] File ".../.venv/lib/python3.13/site-packages/torch/_prims_common/__init__.py", line 923, in infer_size
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] torch._check(
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] ~~~~~~~~~~~~^
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] numel == newsize,
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] ^^^^^^^^^^^^^^^^^
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] lambda: f"shape '{list(shape)}' is invalid for input of size {numel}",
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] )
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] ^
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] File ".../.venv/lib/python3.13/site-packages/torch/__init__.py", line 1656, in _check
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] _check_with(RuntimeError, cond, message)
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] ~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] File ".../.venv/lib/python3.13/site-packages/torch/__init__.py", line 1638, in _check_with
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] raise error_type(message_evaluated)
E0223 11:58:26.152000 3756476 torch/_subclasses/fake_tensor.py:2388] [0/0] RuntimeError: shape '[1, 1024, 16]' is invalid for input of size 8192
Traceback (most recent call last):
File ".../reproducer.py", line 21, in <module>
out = fct(Q,K,V)
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1164, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File ".../.venv/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
~~~~~~~~~~^^
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
~~~~~~~~~~~^^
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
~~~~~~~~~^^
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2341, in CALL
self._call(inst)
~~~~~~~~~~^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2335, in _call
self.call_function(fn, args, kwargs)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/variables/misc.py", line 1022, in call_function
return self.obj.call_method(tx, self.name, args, kwargs)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/variables/tensor.py", line 591, in call_method
return wrap_fx_proxy(
tx,
...<4 lines>...
),
)
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 2153, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 2219, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
target_cls, tx, proxy, example_value, subclass_type, **options
)
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 2315, in _wrap_fx_proxy
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/utils.py", line 2536, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/utils.py", line 2471, in get_fake_value
ret_val = wrap_fake_exception(
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
)
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/utils.py", line 2017, in wrap_fake_exception
return fn()
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/utils.py", line 2472, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/utils.py", line 2604, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
e.__traceback__
) from e
File ".../.venv/lib/python3.13/site-packages/torch/_dynamo/utils.py", line 2588, in run_node
return getattr(args[0], node.target)(*args[1:], **kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
File ".../.venv/lib/python3.13/site-packages/torch/_subclasses/fake_tensor.py", line 1276, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/_subclasses/fake_tensor.py", line 1816, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/_subclasses/fake_tensor.py", line 1377, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File ".../.venv/lib/python3.13/site-packages/torch/_subclasses/fake_tensor.py", line 2384, in _dispatch_impl
r = func(*args, **kwargs)
File ".../.venv/lib/python3.13/site-packages/torch/_ops.py", line 723, in __call__
return self._op(*args, **kwargs)
~~~~~~~~^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/_refs/__init__.py", line 4675, in view
return _reshape_view_helper(a, *shape, allow_copy=False)
File ".../.venv/lib/python3.13/site-packages/torch/_refs/__init__.py", line 3713, in _reshape_view_helper
shape = utils.infer_size(shape, a.numel())
File ".../.venv/lib/python3.13/site-packages/torch/_prims_common/__init__.py", line 923, in infer_size
torch._check(
~~~~~~~~~~~~^
numel == newsize,
^^^^^^^^^^^^^^^^^
lambda: f"shape '{list(shape)}' is invalid for input of size {numel}",
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File ".../.venv/lib/python3.13/site-packages/torch/__init__.py", line 1656, in _check
_check_with(RuntimeError, cond, message)
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.13/site-packages/torch/__init__.py", line 1638, in _check_with
raise error_type(message_evaluated)
torch._dynamo.exc.TorchRuntimeError: Failed running call_method reshape(*(FakeTensor(..., device='cuda:0', size=(1, 1024, 1, 8)), 1, 1024, 16), **{}):
shape '[1, 1024, 16]' is invalid for input of size 8192
from user code:
File ".../reproducer.py", line 14, in fct
out = out.reshape(B,L,H*Ev) # (B,L,H*Ev)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250213+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.13.1 (main, Dec 19 2024, 14:32:25) [Clang 18.1.8 ] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
GPU 4: NVIDIA A100 80GB PCIe
GPU 5: NVIDIA A100 80GB PCIe
GPU 6: NVIDIA A100 80GB PCIe
GPU 7: NVIDIA A100 80GB PCIe
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7532 32-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 119%
CPU max MHz: 2400.0000
CPU min MHz: 1500.0000
BogoMIPS: 4799.85
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 512 MiB (32 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250213+cu126
[pip3] torchaudio==2.6.0.dev20250213+cu126
[pip3] torchvision==0.22.0.dev20250213+cu126
[conda] Could not collect
```
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng | true |
2,873,326,236 | Inconsistent inference with batch size=1 | dimiz51 | closed | [] | 4 | NONE | ### 🐛 Describe the bug
I have built a custom implementation of DETR (Detection transformer) using a ResNet50 backbone from torchvision. The problem is that inference fails to produce any detections with batch size = 1 while it works fine for any other batch size > 1.
**Forward pass of my model**
```python
def forward(self, x):
    # Pass inputs through the CNN backbone...
    tokens = self.backbone(x)["layer4"]

    # Pass outputs from the backbone through a simple conv...
    tokens = self.conv1x1(tokens)

    # Re-order in patches format
    tokens = rearrange(tokens, "b c h w -> b (h w) c")

    # Pass encoded patches through encoder...
    out_encoder = self.transformer_encoder(tokens + self.pe_encoder)

    # We expand so each image of each batch gets its own copy of the
    # query embeddings, e.g. from (1, 100, 256) to (4, 100, 256)
    # for batch size=4, with 100 queries of embedding dimension 256.
    queries = self.queries.repeat(out_encoder.shape[0], 1, 1)

    # Compute outcomes for all intermediate decoder layers...
    class_preds = []
    bbox_preds = []
    for layer in self.transformer_decoder.layers:
        queries = layer(queries, out_encoder)
        class_preds.append(self.linear_class(queries))
        bbox_preds.append(self.linear_bbox(queries))

    # Stack and return
    class_preds = torch.stack(class_preds, dim=1)
    bbox_preds = torch.stack(bbox_preds, dim=1)
    return class_preds, bbox_preds
```
**My inference code**
```python
def run_inference(
    model,
    device,
    inputs,
    nms_threshold=0.3,
    image_size=480,
    empty_class_id=0,
    out_format="xyxy",
    scale_boxes=True,
):
    """
    Utility function that wraps the inference and post-processing and returns the results for the
    batch of inputs. The inference will be run using the passed model and device while post-processing
    will be done on the CPU.

    Args:
        model (torch.nn.Module): The trained model for inference.
        device (torch.device): The device to run inference on.
        inputs (torch.Tensor): Batch of input images.
        nms_threshold (float, optional): NMS threshold for removing overlapping boxes. Default is 0.3.
        image_size (int, optional): Image size for transformations. Default is 480.
        empty_class_id (int, optional): The class ID representing 'no object'. Default is 0.
        out_format (str, optional): Output format for bounding boxes. Default is "xyxy".
        scale_boxes (bool, optional): Whether to scale the bounding boxes. Default is True.

    Returns:
        List of tuples: Each tuple contains (nms_boxes, nms_probs, nms_classes) for a batch item.
    """
    if model and device:
        model.eval()
        model.to(device)
        inputs = inputs.to(device)
    else:
        raise ValueError("No model or device provided for inference!")

    with torch.no_grad():
        out_cl, out_bbox = model(inputs)

    # Get the outputs from the last decoder layer...
    out_cl = out_cl[:, -1, :]
    out_bbox = out_bbox[:, -1, :]

    out_bbox = out_bbox.sigmoid().cpu()
    out_cl_probs = out_cl.cpu()

    scale_factors = torch.tensor([image_size, image_size, image_size, image_size])

    results = []
    for i in range(inputs.shape[0]):
        o_bbox = out_bbox[i]
        o_cl = out_cl_probs[i].softmax(dim=-1)
        o_bbox = ops.box_convert(o_bbox, in_fmt="cxcywh", out_fmt=out_format)

        # Scale boxes if needed...
        if scale_boxes:
            o_bbox = o_bbox * scale_factors

        # Filter out boxes with no object...
        o_keep = o_cl.argmax(dim=-1) != empty_class_id
        if o_keep.sum() == 0:
            results.append((np.array([]), np.array([]), np.array([])))
            continue

        keep_probs = o_cl[o_keep]
        keep_boxes = o_bbox[o_keep]

        # Apply NMS
        nms_boxes, nms_probs, nms_classes = class_based_nms(
            keep_boxes, keep_probs, nms_threshold
        )
        results.append((nms_boxes, nms_probs, nms_classes))

    return results
```
**Note:** Probabilities after softmax with batch size 1 are always near zero for any other class than the "empty" class (ID=0).
Any ideas?
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.13 (main, Nov 9 2023, 01:24:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 5 2500U with Radeon Vega Mobile Gfx
CPU family: 23
Model: 17
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 2000.0000
CPU min MHz: 1600.0000
BogoMIPS: 3992.31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 128 KiB (4 instances)
L1i cache: 256 KiB (4 instances)
L2 cache: 2 MiB (4 instances)
L3 cache: 4 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT vulnerable
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.16.1
[pip3] onnxruntime==1.18.0
[pip3] optree==0.11.0
[pip3] torch==2.6.0
[pip3] torchinfo==1.8.0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] Could not collect
| true |
2,873,308,485 | [BE] add missing overload annotations for `tree_map_only` | XuehaiPan | closed | [
"module: typing",
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147691
* #144640
* __->__ #147699
cc @ezyang @malfet @xuzhao9 @gramster | true |
2,873,307,025 | Update ninja missing error message | albanD | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5 | COLLABORATOR | Updates the error message raised when Ninja is missing in cpp_extensions.
| true |
2,873,224,847 | [docs] fix numpy docs reference | martin-kokos | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: docs"
] | 6 | CONTRIBUTOR | Fix a link to numpy documentation that has moved and now 404's
I've checked other numpy doc links that point to docs.scipy.org (which then redirects to numpy.org) and they do work, so I am fixing just this 404. | true |
2,873,182,745 | There is a problem with the wording here. | leftsl | closed | [
"module: docs",
"module: nn",
"triaged"
] | 2 | NONE | ### 📚 The doc issue
site:[error documentation](https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.register_forward_hook)
description: In the documentation for PyTorch's register_forward_hook method, it states:
"prepend (bool) – If True, the provided hook will be fired before all existing forward hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing forward hooks on this torch.nn.modules.Module. Note that global forward hooks registered with register_module_forward_hook() will fire before all hooks registered by this method. Default: False"
However, there is no `modules` submodule within `torch.nn`. Shouldn't the correct reference be `torch.nn.Module`?
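Whichever name the docs settle on, the `prepend` ordering semantics being described can be sketched with a plain list. This is a stdlib-only illustration of per-module hook ordering (`TinyModule` is hypothetical, not PyTorch's actual implementation, and global hooks are not modeled):

```python
class TinyModule:
    """Minimal stand-in for a Module's per-module forward-hook list."""
    def __init__(self):
        self._forward_hooks = []

    def register_forward_hook(self, hook, prepend=False):
        if prepend:
            self._forward_hooks.insert(0, hook)  # fires before existing hooks
        else:
            self._forward_hooks.append(hook)     # fires after existing hooks

m = TinyModule()
m.register_forward_hook("a")
m.register_forward_hook("b")
m.register_forward_hook("c", prepend=True)
assert m._forward_hooks == ["c", "a", "b"]
```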
### Suggest a potential alternative/fix
_No response_
cc @svekars @sekyondaMeta @AlannaBurke @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | true |
2,873,024,404 | [dynamo][cpp-guards] Disable dict-tag optim if the guard_manager has child accessors | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #140756
* __->__ #147694
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,872,995,655 | [Intel GPU] OneDNN primitive cache support for Int4 WOQ gemm on XPU | baodii | open | [
"oncall: distributed",
"module: cpu",
"module: mkldnn",
"open source",
"module: amp (automated mixed precision)",
"NNC",
"release notes: quantization",
"release notes: releng",
"module: inductor",
"module: dynamo",
"release notes: distributed (checkpoint)",
"module: compiled autograd",
"rele... | 6 | NONE | * add onednn primitive cache for int4 gemm for xpu
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @mcarilli @ptrblck @leslie-fang-intel @EikanWang @voznesenskym @penguinwu @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @xmfan @CaoZhongZ @rogerxfeng8 | true |
2,872,983,620 | Please generate a new version of pytorch into blackwell rtx5080 with cuda 12.8. Thank you very much. | FrankDela | open | [
"module: binaries",
"triaged"
] | 1 | NONE | ### 🚀 The feature, motivation and pitch
Please publish a new PyTorch build for the Blackwell RTX 5080 with CUDA 12.8. Thank you very much.
### Alternatives
_No response_
### Additional context
_No response_
cc @seemethere @malfet @osalpekar @atalman | true |
2,872,935,578 | [FX] micro-optimization `map_aggregate(immutable_dict)` | XuehaiPan | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"fx"
] | 4 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147691
* #144640
* #147699
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,872,878,248 | Update triton_heuristics.py | anikamule | open | [
"good first issue",
"triaged",
"open source",
"better-engineering",
"actionable",
"topic: not user facing",
"module: inductor"
] | 13 | NONE | Fixes #146018
This PR fixes a bug where `args_with_constexprs` overwrites `grid`, causing a `TypeError`. The fix adds a check to ensure the correct number of arguments is passed to the launcher, improving error handling and preventing unexpected failures in Triton kernel execution.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,872,873,220 | testing pr not for commit | oulgen | closed | [
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147689
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,872,865,397 | [CacheBench] Add ciflow/trunk test | oulgen | closed | [
"Merged",
"ciflow/trunk",
"release notes: releng",
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147783
* #147782
* #147781
* #147780
* __->__ #147688
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,872,836,225 | [MPS] Add eager support for xlog1py. | dcci | closed | [
"Merged",
"module: mps",
"release notes: mps",
"ciflow/mps",
"module: inductor"
] | 5 | MEMBER | cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,872,820,281 | Potential mismatch in bias indexing in `fx_passes/binary_folding.py` | jiannanWang | open | [
"triaged",
"module: inductor"
] | 3 | NONE | ### 🐛 Describe the bug
In [pytorch/torch/_inductor/fx_passes/binary_folding.py](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/fx_passes/binary_folding.py), there appears to be a bug at line 134. The relevant snippet is:
```
# conv.bias
if conv_node.args[1] is not None and conv_node.args[1].op != "get_attr":
return False
```
The comment suggests we are checking the convolution’s bias here. However, in [pytorch/torch/_inductor/fx_passes/efficient_conv_bn_eval.py](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/fx_passes/efficient_conv_bn_eval.py) at line 198, bias is retrieved from conv_node.args[2]:
```
conv_bias = conv_node.args[2] if len(conv_node.args) >= 3 else None # type: ignore[union-attr]
```
Given this, it seems that:
1. The bias check in `binary_folding.py` is likely using the wrong index (`conv_node.args[1]` instead of `conv_node.args[2]`).
2. The check just above line 134 is also redundant: `conv_node.args[1]` has already been checked there as the weight.
```python
# conv.weight
if conv_node.args[1].op != "get_attr":
    return False
```
The mismatch and redundancy in the indexing strongly suggest a bug.
### Versions
main branch
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,872,814,028 | manual dynamism whitelist | bobrenjc93 | closed | [
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147685
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D70043918](https://our.internmc.facebook.com/intern/diff/D70043918) | true |
2,872,814,013 | manual whitelist | bobrenjc93 | closed | [
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147685
* __->__ #147684
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,872,785,378 | [dynamo][guards] Dont consider tensor immutable for guards | anijain2305 | closed | [
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147683
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,872,783,580 | Error in parsing arch_string in _extract_arch_version | thinwoodsman | open | [
"module: cuda",
"triaged"
] | 0 | NONE | ### 🐛 Describe the bug
In `torch/cuda/__init__.py`, the parsing of `arch_string` assumes the presence of an underscore (`'_'`):
```python
def _extract_arch_version(arch_string: str):
    """Extracts the architecture string from a CUDA version"""
    base = arch_string.split("_")[1]
    base = base.removesuffix("a")
    return int(base)
```
The `arch_string` values for AMD GPUs (in this instance, a Radeon RX 7700S) do not contain `'_'`, as can be seen by printing `arch_string` inside the function:
```
gfx900
gfx906
gfx908
gfx90a
gfx942
gfx1030
gfx1100
gfx1101
```
```
File "/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py", line 237, in _extract_arch_version
base = arch_string.split("_")[1]
~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range
```
Adding a check for '_' in the string solves the problem:
```python
def _extract_arch_version(arch_string: str):
    """Extracts the architecture string from a CUDA version"""
    if "_" not in arch_string:
        return 0
    base = arch_string.split("_")[1]
    base = base.removesuffix("a")
    return int(base)
```
### Error logs
_No response_
### Versions
[NOTE: this is present in 6.2.4 and 6.3. The prior installed version was 6.2 and did not exhibit this behavior]
Collecting environment information...
PyTorch version: 2.7.0.dev20250222+rocm6.3
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.3.42131-fa1d09cbd
OS: Debian GNU/Linux trixie/sid (x86_64)
GCC version: (Debian 14.2.0-12) 14.2.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.40
Python version: 3.12.8 (main, Dec 13 2024, 13:19:48) [GCC 14.2.0] (64-bit runtime)
Python platform: Linux-6.12.6-amd64-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Radeon™ RX 7700S (gfx1102)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.3.42131
MIOpen runtime version: 3.3.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7940HS w/ Radeon 780M Graphics
CPU family: 25
Model: 116
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 34%
CPU max MHz: 4001.0000
CPU min MHz: 400.0000
BogoMIPS: 7984.63
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d amd_lbr_pmc_freeze
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 8 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] numpy-stl==2.8.0
[pip3] numpydoc==1.8.0
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.16.1
[pip3] onnxruntime==1.20.1
[pip3] onnxruntime-genai==0.5.2
[pip3] optree==0.12.1
[pip3] pytorch-lightning==2.3.3
[pip3] pytorch-metric-learning==2.6.0
[pip3] pytorch-triton-rocm==3.2.0+git4b3bb1f8
[pip3] sk2torch==1.2.0
[pip3] torch==2.7.0.dev20250222+rocm6.3
[pip3] torch-audiomentations==0.11.1
[pip3] torch-pitch-shift==1.2.4
[pip3] torch-pruning==1.4.1
[pip3] torchaudio==2.6.0.dev20250222+rocm6.3
[pip3] torchdata==0.7.1
[pip3] torchextractor==0.3.0
[pip3] torchmetrics==1.4.0.post0
[pip3] torchsde==0.2.6
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.18.0
[pip3] torchvision==0.22.0.dev20250222+rocm6.3
[pip3] torchviz==0.0.2
[pip3] triton==3.1.0
[pip3] types-flake8==7.1
[pip3] types-flake8-bugbear==24.12.12
[pip3] types-flake8-builtins==2.5
[pip3] types-flake8-docstrings==1.7
[pip3] types-flake8-rst-docstrings==0.3
[pip3] types-flake8-simplify==0.21
[pip3] types-flake8-typing-imports==1.16
[pip3] types-mypy-extensions==1.0
[conda] Could not collect
cc @ptrblck @msaroufim @eqy @chauhang @penguinwu | true |
2,872,762,802 | [MPS/inductor] Adjust more tests that depends on non-divisible input sizes | dcci | closed | [
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3 | MEMBER | Also adjust a comment while I'm at it.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,871,309,312 | [AOTI][refactor] Replace run_command_and_check with CppBuilder.build | desertfire | closed | [
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 10 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147680
* #147679
Consolidate the cpp compilation action into CppBuilder.
Differential Revision: [D69723632](https://our.internmc.facebook.com/intern/diff/D69723632/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,871,309,034 | [AOTI][refactor] Rename use_absolute_path to use_relative_path | desertfire | closed | [
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 12 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147680
* __->__ #147679
The option really means to compile a cpp file using its basename instead of its full path.
Differential Revision: [D69722709](https://our.internmc.facebook.com/intern/diff/D69722709/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,871,249,093 | [dynamo][guards] Allow child accessor traversal for TENSOR_MATCH with matching dict tag | anijain2305 | closed | [
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147678
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,871,230,570 | distributed_c10d.broadcast causing unexpected CUDA oom error | jeffrey-cochran | open | [
"oncall: distributed",
"triaged",
"module: c10d"
] | 0 | NONE | ### 🐛 Describe the bug
Broadcasting `torch.tensor(True)` is causing a CUDA oom error with 2 GPUs. I'm not sure if this is an issue that's only arisen since [support for CUDA 12.8](https://github.com/pytorch/pytorch/issues/145570) was added to the nightly build (my second GPU is a 5070 Ti and requires sm_120, so I can't test this on an earlier build of torch).
It arises in the context of training with DDP using `ultralytics.yolo`. Looking at the traceback, the following should be sufficient to produce the error if the other environment variables like MASTER_ADDR etc are set (NOTE: it may be necessary to have two GPUs with different amounts of VRAM. I can't check this myself):
```python
import os
from datetime import timedelta

import torch
from torch import distributed as dist

torch.cuda.set_device(0)
os.environ["TORCH_NCCL_BLOCKING_WAIT"] = "1"
dist.init_process_group(
    backend="nccl",
    timeout=timedelta(seconds=10800),  # 3 hours
    rank=0,
    world_size=2,
)
dist.broadcast(torch.tensor(True), src=0)
```
I get the following error:
```
[rank1]: Traceback (most recent call last):
[rank1]: File "/root/.config/Ultralytics/DDP/_temp_ydxjdeim139778799645408.py", line 13, in <module>
[rank1]: results = trainer.train()
[rank1]: File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/ultralytics/engine/trainer.py", line 208, in train
[rank1]: self._do_train(world_size)
[rank1]: File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/ultralytics/engine/trainer.py", line 329, in _do_train
[rank1]: self._setup_train(world_size)
[rank1]: File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/ultralytics/engine/trainer.py", line 273, in _setup_train
[rank1]: dist.broadcast(self.amp, src=0) # broadcast the tensor from rank 0 to all other ranks (returns None)
[rank1]: File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank1]: return func(*args, **kwargs)
[rank1]: File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2728, in broadcast
[rank1]: work = group.broadcast([tensor], opts)
[rank1]: torch.distributed.DistBackendError: NCCL error in: /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:3384, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.25.1
[rank1]: ncclUnhandledCudaError: Call to CUDA function failed.
[rank1]: Last error:
[rank1]: Cuda failure 2 'out of memory'
W0222 14:07:11.855000 102132 site-packages/torch/distributed/elastic/multiprocessing/api.py:898] Sending process 102150 closing signal SIGTERM
E0222 14:07:12.070000 102132 site-packages/torch/distributed/elastic/multiprocessing/api.py:870] failed (exitcode: 1) local_rank: 1 (pid: 102151) of binary: SOME_DIR/anaconda3/envs/cuda_test/bin/python
Traceback (most recent call last):
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/torch/distributed/run.py", line 893, in <module>
main()
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 354, in wrapper
return f(*args, **kwargs)
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/torch/distributed/run.py", line 889, in main
run(args)
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/torch/distributed/run.py", line 880, in run
elastic_launch(
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/root/.config/Ultralytics/DDP/_temp_ydxjdeim139778799645408.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-02-22_14:07:11
host : SOME_PC.
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 102151)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
Traceback (most recent call last):
File "SOME_DIR/anaconda3/envs/cuda_test/bin/yolo", line 8, in <module>
sys.exit(entrypoint())
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/ultralytics/cfg/__init__.py", line 986, in entrypoint
getattr(model, mode)(**overrides) # default args from model
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/ultralytics/engine/model.py", line 810, in train
self.trainer.train()
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/ultralytics/engine/trainer.py", line 203, in train
raise e
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/ultralytics/engine/trainer.py", line 201, in train
subprocess.run(cmd, check=True)
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/subprocess.py", line 526, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['SOME_DIR/anaconda3/envs/cuda_test/bin/python', '-m', 'torch.distributed.run', '--nproc_per_node', '2', '--master_port', '57359', '/root/.config/Ultralytics/DDP/_temp_ydxjdeim139778799645408.py']' returned non-zero exit status 1.
```
### Versions
PyTorch version: 2.7.0.dev20250221+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA TITAN RTX
GPU 1: NVIDIA GeForce RTX 5070 Ti
Nvidia driver version: 572.47
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 5900X 12-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
BogoMIPS: 7399.98
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload umip vaes vpclmulqdq rdpid fsrm
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 6 MiB (12 instances)
L3 cache: 32 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.1
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.1.26
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250221+cu128
[pip3] torchaudio==2.6.0.dev20250221+cu128
[pip3] torchvision==0.22.0.dev20250221+cu128
[conda] numpy 2.1.1 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.8.3.14 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.8.57 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.7.1.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.3.41 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.9.55 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.2.55 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.7.53 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.8.55 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250221+cu128 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250221+cu128 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250221+cu128 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,871,204,262 | [mps/inductor] XFAIL adaptive_avg_pool_with_output_size_0. | dcci | closed | [
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3 | MEMBER | Non-divisible input sizes are not implemented on the MPS device yet.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,871,138,509 | [DCP] fix dcp gather_object/scatter_object_list | Ghost-LZW | closed | [
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"topic: not user facing",
"release notes: distributed (checkpoint)",
"oncall: distributed checkpointing"
] | 9 | CONTRIBUTOR | `gather_object`/`scatter_object_list`'s `dst` is the destination rank on the global process group (regardless of the `group` argument).
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 | true |
2,871,016,826 | [ONNX] Record the capture strategy in onnx program | justinchuby | closed | [
"module: onnx",
"triaged"
] | 0 | COLLABORATOR | ### 🚀 The feature, motivation and pitch
We can create a field `_capture_strategy` to record the strategy used for creating the onnx program and use it to guard the tests. This way we will be able to catch regressions when fallback strategies are triggered.
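A minimal sketch of how such a test guard could look (all names here except `_capture_strategy` are assumptions for illustration, not the real exporter API):

```python
class FakeONNXProgram:
    # Stand-in for the real onnx program object; the field name follows the proposal.
    def __init__(self, capture_strategy: str):
        self._capture_strategy = capture_strategy

def assert_captured_with(program, expected: str = "torch.export") -> None:
    # Fail the test loudly if a fallback capture strategy was triggered.
    assert program._capture_strategy == expected, (
        f"expected capture strategy {expected!r}, got {program._capture_strategy!r}"
    )

assert_captured_with(FakeONNXProgram("torch.export"))  # passes silently
```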
cc @titaiwangms @xadupre
### Alternatives
_No response_
### Additional context
_No response_ | true |
2,870,949,833 | [Inductor] Update should_decompose_mm condition for CPU | hl475 | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7 | CONTRIBUTOR | Summary:
Previously, for CPU we decomposed addmm if
```python
check_device(mat1, mat2, device="cpu")
and mat1.shape[0] == 1
and mat2.shape[0] <= 64
and mat2.shape[1] <= 16
```
We have a new case where `mat2.shape[1] == 304`, and benchmarks show that decomposing is beneficial there, so update the condition to
```python
check_device(mat1, mat2, device="cpu")
and mat1.shape[0] == 1
and mat2.shape[0] <= 64
and mat2.shape[1] <= 512
```
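As a shape-only sketch of the updated heuristic (the device check is omitted and the function name is hypothetical):

```python
def should_decompose_mm_cpu(mat1_shape, mat2_shape) -> bool:
    # Decompose when mat1 is a single row and mat2 is small enough;
    # the bound on mat2_shape[1] is raised from 16 to 512.
    return (
        mat1_shape[0] == 1
        and mat2_shape[0] <= 64
        and mat2_shape[1] <= 512
    )

print(should_decompose_mm_cpu((1, 64), (64, 304)))  # True under the new bound
print(should_decompose_mm_cpu((1, 64), (64, 513)))  # False
```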
Differential Revision: D70033166
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,870,761,034 | torch._dynamo.exc.UserError: Could not guard on data-dependent expression Eq(256*u0, 256) (unhinted: Eq(256*u0, 256)). | bhack | open | [
"oncall: pt2",
"oncall: export"
] | 3 | CONTRIBUTOR | ### 🐛 Describe the bug
I was trying to export+compile this:
https://github.com/gorkaydemir/track_on/blob/main/model/modules.py
But I got an error with:
`ATTENTION: guard_size_oblivious would fix the error, evaluating expression to False.`
### Error logs
```python
....
File "/workspace/model/track_on.py", line 415, in inference
q_t = self.query_decoder(q_init_t, f_t, context_memory.clone(), past_mask, queried_now_or_before) # (B, N, C)
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/workspace/model/query_decoder.py", line 150, in forward
q_t, memory = self.memory_forward(q_t, memory, past_q_mask, i)
File "/workspace/model/query_decoder.py", line 87, in memory_forward
qkv_updated = self.memory_to_query[iter_num](q=q_input, k=k_input, v=v_input, mask=mask) # (n_useful, memory_size+1, C)
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/workspace/model/modules.py", line 51, in forward
q = q + self.dropout1(self.mha(q, k, v, key_padding_mask=mask, attn_mask=full_attn_mask, need_weights=False)[0])
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
-....
exported_program = export(wrapped_model, (), example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/export/__init__.py", line 360, in export
return _export(
^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/export/_trace.py", line 1047, in wrapper
raise e
File "/opt/conda/lib/python3.11/site-packages/torch/export/_trace.py", line 1020, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/export/exported_program.py", line 121, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/export/_trace.py", line 2083, in _export
ep = _export_for_training(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/export/_trace.py", line 1047, in wrapper
raise e
File "/opt/conda/lib/python3.11/site-packages/torch/export/_trace.py", line 1020, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/export/exported_program.py", line 121, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/export/_trace.py", line 1946, in _export_for_training
export_artifact = export_func( # type: ignore[operator]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/export/_trace.py", line 1299, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/export/_trace.py", line 694, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 1602, in inner
result_traced = opt_f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 585, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1393, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 585, in __call__
return _compile(
^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1023, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 746, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 782, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1418, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 256, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 700, in transform
tracer.run()
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3065, in run
super().run()
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1158, in run
while self.step():
^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1068, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 754, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2528, in CALL
self._call(inst)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2522, in _call
self.call_function(fn, args, kwargs)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 992, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 897, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 384, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1009, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3286, in inline_call
return tracer.inline_call_()
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3444, in inline_call_
self.run()
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1158, in run
while self.step():
^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1068, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 754, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2528, in CALL
self._call(inst)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2522, in _call
self.call_function(fn, args, kwargs)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 992, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/variables/nn_module.py", line 472, in call_function
return tx.inline_user_function_return(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1009, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3286, in inline_call
return tracer.inline_call_()
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3444, in inline_call_
self.run()
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1158, in run
while self.step():
^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1068, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 754, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1913, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 992, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 897, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 384, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1009, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3286, in inline_call
return tracer.inline_call_()
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3444, in inline_call_
self.run()
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1158, in run
while self.step():
^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1068, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 754, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2528, in CALL
self._call(inst)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2522, in _call
self.call_function(fn, args, kwargs)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 992, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 897, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 384, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1009, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3286, in inline_call
return tracer.inline_call_()
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3444, in inline_call_
self.run()
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1158, in run
while self.step():
^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1068, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 754, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2528, in CALL
self._call(inst)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2522, in _call
self.call_function(fn, args, kwargs)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 992, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/variables/nn_module.py", line 472, in call_function
return tx.inline_user_function_return(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1009, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3286, in inline_call
return tracer.inline_call_()
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3444, in inline_call_
self.run()
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1158, in run
while self.step():
^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1068, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 754, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1913, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 992, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 897, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 384, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1009, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3286, in inline_call
return tracer.inline_call_()
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3444, in inline_call_
self.run()
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1158, in run
while self.step():
^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1068, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 754, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2528, in CALL
self._call(inst)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2522, in _call
self.call_function(fn, args, kwargs)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 992, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/variables/nn_module.py", line 444, in call_function
return wrap_fx_proxy(
^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 2258, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 2324, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 2420, in _wrap_fx_proxy
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 3135, in get_fake_value
raise UserError( # noqa: B904
torch._dynamo.exc.UserError: Could not guard on data-dependent expression Eq(256*u0, 256) (unhinted: Eq(256*u0, 256)). (Size-like symbols: u0)
ATTENTION: guard_size_oblivious would fix the error, evaluating expression to False.
Maybe you need to add guard_size_oblivious to framework code, see doc below for more guidance.
Caused by: q = q + self.dropout1(self.mha(q, k, v, key_padding_mask=mask, attn_mask=full_attn_mask, need_weights=False)[0]) # orkspace/model/modules.py:51 in forward (_decomp/decompositions.py:4388 in should_fold)
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
User Stack (most recent call last):
(snipped, see stack below for prefix)
.....
return self.model.inference(video, queries)
File "/workspace/model/track_on.py", line 415, in inference
q_t = self.query_decoder(q_init_t, f_t, context_memory.clone(), past_mask, queried_now_or_before) # (B, N, C)
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/workspace/model/query_decoder.py", line 150, in forward
q_t, memory = self.memory_forward(q_t, memory, past_q_mask, i)
File "/workspace/model/query_decoder.py", line 87, in memory_forward
qkv_updated = self.memory_to_query[iter_num](q=q_input, k=k_input, v=v_input, mask=mask) # (n_useful, memory_size+1, C)
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/workspace/model/modules.py", line 51, in forward
q = q + self.dropout1(self.mha(q, k, v, key_padding_mask=mask, attn_mask=full_attn_mask, need_weights=False)[0])
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#constrain-as-size-example
```
### Versions
nightly
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,870,742,732 | torch_cuda.dll was built failed to link _cudnn_attention_forward | xuhancn | closed | [
"module: build",
"module: windows",
"module: cudnn",
"module: cuda",
"triaged"
] | 6 | COLLABORATOR | ### 🐛 Describe the bug
While doing some PyTorch development work, I found that `torch_cuda.dll` failed to build when linking `_cudnn_attention_forward`.
To confirm and reproduce the issue, I created an empty PR and triggered CI with the `ciflow/binaries` tag: https://github.com/pytorch/pytorch/pull/147664
The failed UTs here: https://hud.pytorch.org/pr/pytorch/pytorch/147664#37643002431
Detailed error message
> error LNK2019: unresolved external symbol "__declspec(dllimport) class std::tuple<class at::Tensor,class at::Tensor,class at::Tensor,class at::Tensor,class c10::SymInt,class c10::SymInt,class at::Tensor,class at::Tensor,class at::Tensor> __cdecl at::native::_cudnn_attention_forward(class at::Tensor const &,class at::Tensor const &,class at::Tensor const &,class std::optional<class at::Tensor> const &,class std::optional<class at::Tensor> const &,class std::optional<class at::Tensor> const &,__int64,__int64,bool,double,bool,bool,class std::optional<double>)" (__imp_?_cudnn_attention_forward@native@at@@YA?AV?$tuple@VTensor@at@@V12@V12@V12@VSymInt@c10@@V34@V12@V12@V12@@std@@AEBVTensor@2@00AEBV?$optional@VTensor@at@@@4@11_J2_NN33V?$optional@N@4@@Z) referenced in function "class std::tuple<class at::Tensor,class at::Tensor,class at::Tensor,class at::Tensor,class c10::SymInt,class c10::SymInt,class at::Tensor,class at::Tensor,class at::Tensor> __cdecl at::A0xbf25dd52::`anonymous namespace'::wrapper_CUDA___cudnn_attention_forward(class at::Tensor const &,class at::Tensor const &,class at::Tensor const &,class std::optional<class at::Tensor> const &,class std::optional<class at::Tensor> const &,class std::optional<class at::Tensor> const &,class c10::SymInt,class c10::SymInt,bool,double,bool,bool,class std::optional<double>)" (?wrapper_CUDA___cudnn_attention_forward@?A0xbf25dd52@1at@@YA?AV?$tuple@VTensor@at@@V12@V12@V12@VSymInt@c10@@V34@V12@V12@V12@@std@@AEBVTensor@2@00AEBV?$optional@VTensor@at@@@4@11VSymInt@c10@@2_NN33V?$optional@N@4@@Z)
>
> bin\torch_cuda.dll : fatal error LNK1120: 1 unresolved externals
All of the failing build jobs are **Windows** `wheel` and `libtorch` cases.
### Versions
`main` branch
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @csarofeen @ptrblck @xwang233 @eqy @msaroufim @seemethere @malfet @pytorch/pytorch-dev-infra | true |
2,870,661,562 | fix #145064 , added error checking for empty tensor in _pdist_forward | AmalDevHaridevan | open | [
"triaged",
"open source",
"ciflow/trunk",
"release notes: nn"
] | 5 | NONE | Fixes #145064
Added a `TORCH_CHECK` to prevent iterating over a nullptr and causing a segfault.
We can verify this by running the following simple test:
```python
import torch
print(torch.__version__)
input = torch.rand((11, 15,3))
print("Running test with non empty tensor")
print("="*50)
print(torch.ops.aten._pdist_forward(input, p=2.0))
print("="*50)
print("Running test with empty tensor")
print("="*50)
input = torch.rand((11, 15, 0))
print(torch.ops.aten._pdist_forward(input, p=2.0))
```
# Before fix:
```
2.7.0a0+git464e572
Running test with non empty tensor
==================================================
tensor([1.2083, 1.4906, 1.2710, 1.4653, 1.6329, 1.5641, 1.6864, 1.3509, 1.3771,
1.8574, 0.9800, 1.5987, 1.4999, 1.4619, 1.6616, 1.7614, 1.3761, 1.3119,
1.3935, 1.4656, 1.6993, 1.3452, 1.4604, 1.0390, 1.2662, 1.6565, 1.5740,
1.3851, 1.8369, 1.6037, 1.5965, 1.3896, 1.1114, 1.4699, 1.6736, 1.5287,
1.2168, 1.5095, 1.6844, 1.4027, 1.7431, 1.2226, 1.4504, 1.1963, 1.5279,
1.2033, 1.1480, 1.2056, 1.0587, 1.3939, 1.3022, 1.5384, 1.3645, 1.6349,
1.2800])
==================================================
Running test with empty tensor
==================================================
Segmentation fault (core dumped)
```
# After fix
```
2.7.0a0+git464e572
Running test with non empty tensor
==================================================
tensor([1.5208, 1.5068, 1.2832, 1.4650, 1.9227, 1.9052, 1.9649, 1.9571, 1.8125,
1.7174, 1.8387, 1.6939, 1.6634, 1.8099, 1.3245, 1.7073, 1.4311, 1.8628,
1.6667, 1.6101, 1.8348, 1.4548, 1.3954, 1.5973, 1.7277, 1.8505, 1.3647,
1.6524, 1.6583, 0.9928, 1.2633, 1.5329, 1.7163, 1.2425, 1.3743, 2.0104,
1.8953, 1.4519, 1.8834, 1.5887, 2.0280, 1.1968, 1.2921, 1.4689, 1.5236,
1.7794, 1.4897, 1.5896, 1.6168, 1.6176, 1.6705, 1.8576, 1.5708, 1.2780,
1.3247])
==================================================
Running test with empty tensor
==================================================
Traceback (most recent call last):
File "/home/harid/test.py", line 12, in <module>
print(torch.ops.aten._pdist_forward(input, p=2.0))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_ops.py", line 1156, in __call__
return self._op(*args, **(kwargs or {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Input tensor is empty
```
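The guard can be illustrated in plain Python (a sketch only: the actual fix is a `TORCH_CHECK` in the C++ kernel, and `pdist` here is a simplified list-of-lists stand-in, not the ATen implementation):

```python
def pdist(rows, p=2.0):
    # Sketch of the guard this PR adds via TORCH_CHECK: reject empty input
    # up front instead of iterating over a null data pointer.
    if not rows or not rows[0]:
        raise RuntimeError("Input tensor is empty")
    out = []
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            # p-norm distance between row i and row j
            out.append(sum(abs(a - b) ** p for a, b in zip(rows[i], rows[j])) ** (1.0 / p))
    return out
```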
@albanD , I had to close the previous PR because of accidentally merging main with my branch. Furthermore, I also updated test cases to reflect the changes, which were causing failures in the previous PR. | true |
2,870,623,720 | python3 -m torch.utils.collect_env not providing expected output. | chowdri | closed | [
"module: rocm",
"module: collect_env.py",
"triaged"
] | 29 | NONE | I'm using the following guide to install pytorch on my computer (elementaryOS 7.1 / Ubuntu 22.04.5):
https://rocm.docs.amd.com/projects/radeon/en/latest/docs/install/native_linux/install-pytorch.html
I have installed pytorch using pip.
The process goes as expected until I use the following command to verify installation:
`$ python3 -m torch.utils.collect_env`
I get the following output:
```
/usr/lib/python3.10/runpy.py:126: RuntimeWarning: 'torch.utils.collect_env' found in sys.modules after import of package 'torch.utils', but prior to execution of 'torch.utils.collect_env'; this may result in unpredictable behaviour
warn(RuntimeWarning(msg))
Collecting environment information...
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/praful/.local/lib/python3.10/site-packages/torch/utils/collect_env.py", line 651, in <module>
main()
File "/home/praful/.local/lib/python3.10/site-packages/torch/utils/collect_env.py", line 634, in main
output = get_pretty_env_info()
File "/home/praful/.local/lib/python3.10/site-packages/torch/utils/collect_env.py", line 629, in get_pretty_env_info
return pretty_str(get_env_info())
File "/home/praful/.local/lib/python3.10/site-packages/torch/utils/collect_env.py", line 454, in get_env_info
pip_version, pip_list_output = get_pip_packages(run_lambda)
File "/home/praful/.local/lib/python3.10/site-packages/torch/utils/collect_env.py", line 411, in get_pip_packages
out = run_with_pip([sys.executable, '-mpip'])
File "/home/praful/.local/lib/python3.10/site-packages/torch/utils/collect_env.py", line 406, in run_with_pip
for line in out.splitlines()
AttributeError: 'NoneType' object has no attribute 'splitlines'
```
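The `AttributeError` means the helper that runs `pip` got no output back (`out` is `None`, typically because launching `pip` under that interpreter failed). A defensive version of the failing pattern, hypothetical rather than the actual `collect_env` code, would look like:

```python
def split_pip_output(out):
    # collect_env crashes on `out.splitlines()` when the pip subprocess
    # produced nothing; guarding for None avoids the AttributeError.
    if out is None:
        return []
    return [line for line in out.splitlines() if "torch" in line]
```

Checking that `python3 -m pip --version` works under the same interpreter is a reasonable first step toward the root cause.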
I have followed the guide as is, except I have used sudo for installation of packages.
As per the guide, the above output is unexpected. Any hints on what can I do differently to resolve the issue?
Much thanks for any guidance and insights.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,870,618,541 | Enforce full FIPS compliance with hashlib - ruff rule S324 | dinesh-GDK | closed | [
"oncall: distributed",
"triaged",
"open source",
"release notes: releng",
"fx"
] | 3 | NONE | Fixes #147627
Add rule S324 to the RUFF linter
Command to test
```bash
ruff check
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @Skylion007 | true |
2,870,593,565 | [MPS] Workaround rng bug for 5D tensors | malfet | closed | [
"Merged",
"release notes: mps",
"ciflow/mps"
] | 3 | CONTRIBUTOR | For some reason MPSGraph returns repeated values is tensor dimention is
larger than 4, which can be clearly seen by running following
```swift
import Metal
import MetalPerformanceShadersGraph
func randMPS(device: MTLDevice, obuf: MTLBuffer, nelem: Int, ndim: Int = 5) {
let graph = MPSGraph()
var dims = Array(repeating: 1, count: ndim)
dims[0] = nelem
let shape = dims.map { NSNumber(value: $0) }
let randNode = graph.randomUniformTensor(withShape: shape, seed: 42, name: nil)
let mpsOutputBuffer = MPSGraphTensorData(obuf, shape: shape, dataType: .float32)
guard let queue = device.makeCommandQueue() else { fatalError("Can't make queue") }
graph.run(with: queue, feeds: [:], targetOperations: nil, resultsDictionary: [randNode: mpsOutputBuffer])
}
func printBuf(_ prefix: String, buf: MTLBuffer, nelem: Int) {
let buf_data = buf.contents().assumingMemoryBound(to: Float.self)
print(prefix)
for i in 0..<nelem {
print(buf_data[i], terminator: i != nelem - 1 ? " " : "\n")
}
}
guard let device = MTLCopyAllDevices().first else { fatalError("No Metal device found") }
print("Using device \(device.name)")
let nelem = 2
guard let buf = device.makeBuffer(length:nelem * MemoryLayout<Float>.size, options: [.storageModeShared]) else { fatalError("Can't alloc") }
randMPS(device: device, obuf: buf, nelem: nelem, ndim: 4)
printBuf("4D uniform", buf: buf, nelem: nelem)
randMPS(device: device, obuf: buf, nelem: nelem, ndim: 5)
printBuf("5D uniform", buf: buf, nelem: nelem)
```
Work around this by flattening the tensor if it's contiguous.
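The shape handling behind that workaround can be sketched in Python (illustrative only; the real change lives in the MPS backend):

```python
def rng_shape(shape, contiguous):
    # MPSGraph's uniform RNG misbehaves for rank > 4; for contiguous tensors
    # we can generate into a flattened 1-D shape and view the result back.
    if len(shape) > 4 and contiguous:
        n = 1
        for d in shape:
            n *= d
        return (n,)
    return shape
```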
Fixes https://github.com/pytorch/pytorch/issues/147624 | true |
2,870,575,139 | [inductor] align `replicationpad` on processing `bool` dtype with eager | shaoyuyoung | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 14 | CONTRIBUTOR | Fixes #143779
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,870,567,435 | Enable ruff rule S324 | zeshengzong | closed | [
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: releng",
"fx"
] | 4 | CONTRIBUTOR | Fixes #147627
- Add `S324` in `pyproject.toml`
- Run the check and clean up warnings:
```bash
lintrunner --take RUFF --all-files
```
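For context, S324 flags uses of weak hashes such as `hashlib.md5` and `hashlib.sha1`; the usual fixes are switching to SHA-256 or, for genuinely non-security uses on Python 3.9+, passing `usedforsecurity=False`:

```python
import hashlib

# Flagged by S324:
#   hashlib.md5(b"data")
# Compliant alternatives:
digest = hashlib.sha256(b"data").hexdigest()        # strong hash
tag = hashlib.md5(b"data", usedforsecurity=False)   # explicit non-security use (Python 3.9+)
```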
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,870,538,751 | [don't merge] test Windows cuda wheel and libtorch ci | xuhancn | open | [
"open source",
"Stale",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 2 | COLLABORATOR | Fixes #ISSUE_NUMBER
| true |
2,870,387,800 | [DDP] Temporarily disable comm mem | kwen2501 | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 7 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147663
For fear that it incurs slightly more memory usage and causes some applications at a tight memory margin to OOM.
(because the comm mem pool is a separate pool from the regular pool?)
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
Differential Revision: [D70026681](https://our.internmc.facebook.com/intern/diff/D70026681) | true |
2,870,366,824 | GPUDirect Storage dcp.save/dcp.load torch.save/torch.load | OrenLeung | open | [
"module: serialization",
"triaged"
] | 0 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
Currently, GPUDirect Storage is not accessible to the average end user: although #133489 upstreamed cuFile GDS support, the UX is quite horrible.
https://github.com/pytorch/pytorch/pull/133489/files#diff-893b1eea27352f336f4cd832919e48d721e4e90186e63400b8596db6b82e7450R5270-R5289
The feature request is to support the following commonly used APIs as first-class:
- `torch.save`
- `torch.load`
- `dcp.save`
- `dcp.load`
the benefits would be that end users will be able to more quickly save/load model checkpoints and take advantage of their Weka/Vast/DDN/"insert gds compatible storage solution" and get their money's worth
Furthermore, for KV Cache, end users will be able to `torch.save` it into their persistent storage and `torch.load` from persistent NVMe storage at much higher speeds without getting bottlenecked by the CPU.
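A sketch of the requested call shape (everything here is hypothetical: `use_gds` is not a real `torch.save` parameter, and the mock below just pickles to illustrate the interface being asked for):

```python
import pickle


def save(obj, path, use_gds=False):
    # Hypothetical flag: with use_gds=True the bytes would flow NVMe<->GPU
    # via cuFile instead of bouncing through CPU memory. Plain pickle here.
    with open(path, "wb") as f:
        pickle.dump(obj, f)


def load(path, use_gds=False):
    with open(path, "rb") as f:
        return pickle.load(f)
```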
### Alternatives
- write a custom gpudirect storage checkpointer
- don't use GDS and continue requiring traffic go through the cpu and causing slowdowns
### Additional context
_No response_
cc @mruberry @mikaylagawarecki | true |
2,870,340,525 | Add shape fn for einsum | pgmoka | open | [
"triaged",
"module: xla",
"module: linear algebra",
"module: lazy"
] | 1 | NONE | ### 🚀 The feature, motivation and pitch
We are trying to lower the einsum operation to PyTorch XLA using full code generation(https://github.com/pytorch/xla/issues/8713), but we are blocking doing so until it is added to [torch/csrc/lazy/core/shape_inference.h](https://github.com/pytorch/pytorch/blob/6e0b09728a55ce3c82647f82b6432e585a226f1e/torch/csrc/lazy/core/shape_inference.h#L4)
### Additional context
An example of this being done before can be seen in https://github.com/pytorch/pytorch/pull/82162/files.
Currently when trying to run our code generation steps(more details in [this comment](https://github.com/pytorch/xla/issues/8713#issuecomment-2670156483), we run into the problem [here](https://github.com/pytorch/xla/issues/8713#issuecomment-2675916193)
cc @bdhirsh @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | true |
2,870,333,157 | Initial implementation of host memory stats | mradmila | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: cuda",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 12 | CONTRIBUTOR | This is an initial attempt to provide some statistics for the pinned host memory allocations flowing through CachingHostAllocator. Many times in the past we have had inexplicable slowdowns that would be much easier to diagnose if we had some host memory characteristics.
This change tries very hard not to disrupt the initial design of the allocator, and it uses existing locking mechanism, whenever possible, to gather statistics "for free". Only deviation from that is on the "slow path" where we incur CUDA calls anyway, so taking a short lock is not going to hurt the performance much, especially in the steady state where most allocations will come from cache.
As mentioned before, this is the first PR, to introduce the concept and to see if it fits the right paradigm. We can always add more later.
Metrics that would require more involved changes to the code base and locks, like requested memory, have been punted for now. I also tried to reuse the Stat structure used in CUDA caching allocator, in order to maintain symmetry.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,870,312,763 | [ROCm][TunableOp] Speed-up matmul_small_brute_force_tunableop unit test | naromero77amd | closed | [
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 3 | COLLABORATOR | This PR has a UT speed-up and some refactoring of tests.
A previous PR https://github.com/pytorch/pytorch/pull/142422 fixed this matmul_small_brute_force_tunableop for the FP16 data type by adding TunableOp numerical checks. It had the unfortunate side effect that it increased the execution time for the FP32 and FP64 data types by a significant margin. This PR *reduces* the execution time by 20+ minutes.
We also move a hipBLASLt version check to a different tunableop UT for simplicity.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang | true |
2,870,309,511 | [cuBLAS] restrict input range for `addmm` tests | eqy | open | [
"module: cuda",
"triaged",
"module: cublas",
"open source",
"Stale",
"topic: not user facing",
"matrix multiplication"
] | 3 | COLLABORATOR | cancellation seems to be an issue for the larger (~10k) sizes
this also allows us to test with tighter tolerances
cc @ptrblck @msaroufim @jerryzh168 @csarofeen @xwang233 | true |
2,870,308,180 | [ROCm] Use IPT=8 for block radix sort | jerrymannil | closed | [
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"rocm",
"ciflow/rocm",
"ciflow/inductor-rocm"
] | 6 | CONTRIBUTOR | Improve performance for shapes that use block radix sort by decreasing the item_per_thread to 8.
This will increase the thread block size leading to higher occupancy.
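The relationship is simple arithmetic: for a fixed tile size, halving items-per-thread doubles the block's thread count (the numbers below are illustrative, not taken from the PR):

```python
def block_threads(tile_items, items_per_thread):
    # Each thread sorts items_per_thread keys, so the block needs this many
    # threads to cover one tile; more threads can mean better occupancy.
    return tile_items // items_per_thread
```

For example, a 4096-key tile needs 256 threads at IPT=16 but 512 at IPT=8.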
Co-author: @amd-sushetty
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,870,297,716 | [aoti x with_effect token] Unbacked symint and register lowering | yushangdi | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 22 | CONTRIBUTOR | Differential Revision: D70022208
- When resolving unbacked symints in ExternKernel for with_effect, we need to ignore the first item in the binding path, because the `example_output` doesn't contain the effect token, but the binding paths do.
- Similarly, `node.meta["val"]` contains the effect token, so when we compute_unbacked_bindings, we need to remove that effect token
- For `torch.ops.higher_order.with_effects`'s lowering, we should not extract the items out of an list (i.e. `*result` vs `result`). The `get_attr` nodes consider the result to be in the list format.
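The first bullet amounts to an index shift, sketched here with plain tuples (purely illustrative of the offset, not the Inductor code):

```python
def strip_token_index(binding_paths):
    # with_effects returns (token, *outputs); metadata binding paths index
    # into that tuple, while example_output omits the token, so each path's
    # leading component is dropped before resolving unbacked symints.
    return [path[1:] for path in binding_paths]
```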
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,870,281,359 | [Submodule] [Cutlass] Update to 3.8.0 tag | drisspg | closed | [
"module: cuda",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147655
cc @ptrblck @msaroufim @eqy | true |
2,870,263,250 | [for experimentation only] check compile time with C++ pytree | anijain2305 | closed | [
"ciflow/periodic",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend",
"keep-going",
"module: compiled autograd"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147654
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @xmfan | true |
2,870,259,583 | Bf16 fused adam(W) | janeyx99 | open | [
"release notes: foreach_frontend"
] | 1 | CONTRIBUTOR | Many things do work!
Some things do not:
amsgrad does not work
some checks are removed (not critical imo, but def less safe) to let multidevice work
adam + adamw should work
tensor lr would work if it's on cpu
the float, float, bf16, bf16 pattern is specialized, other mixed precision will not work
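The fused kernel itself lives in CUDA; as a reference point, the unfused Adam update rule it implements can be sketched in pure Python (scalar sketch with illustrative hyperparameters, not the fused multi-tensor path):

```python
import math

def adam_step(p, g, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # Standard Adam moment updates with bias correction.
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    p = p - lr * m_hat / (math.sqrt(v_hat) + eps)
    return p, m, v

# First step moves the parameter by roughly lr, independent of the
# gradient's scale, because the bias-corrected ratio is ~sign(g).
p, m, v = adam_step(1.0, 0.5, 0.0, 0.0, t=1)
```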
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147653
* #146640
| true |
2,870,251,892 | [CI] Checkout with more processes | clee2000 | closed | [
"Merged",
"topic: not user facing"
] | 3 | CONTRIBUTOR | The default checkout action doesn't use more processes, possibly because most GitHub-provided runners only have 2 CPUs, but our runners have more than that, so we might as well use them.
Generally this cuts maybe a minute off of checkout time.
Changed checkout from pytorch/pytorch@main to pytorch/pytorch@my branch to test on 249a936998e66cc0d6ad8664e0e93ec1b9432a8b
| true |
2,870,243,446 | Add super().setUp() to some test cases | clee2000 | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | I saw that their disabled issues were getting spammed with comments, meaning the tests were still running in CI despite having a disable issue. I added the missing super().setUp() call, which checks whether a disable issue exists for each test.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,870,212,516 | [MPS] Add inductor support for spherical_bessel_j0. | dcci | closed | [
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 4 | MEMBER | Counterpart to my previous patch that added support for the op in eager.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,870,198,862 | [custom op] fix inductor cpp codegen when returning a list of single tensor | ydwu4 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147567
* __->__ #147649
* #147130
For a custom op that returns a list of a single tensor with unbacked symint shape:
```python
@torch.library.custom_op(
"aoti_custom_ops::fn_ret_list_of_single_tensor", mutates_args={}
)
def fn_ret_list_of_single_tensor(x: torch.Tensor) -> list[torch.Tensor]:
s = x.sum().to(torch.int64)
return [torch.randn(s.item())]
@fn_ret_list_of_single_tensor.register_fake
def _(x):
ctx = torch._custom_op.impl.get_ctx()
i0 = ctx.new_dynamic_size()
return [torch.randn(i0)]
```
Before the fix, we have the following error:
```
/tmp/tmp5iikarn2/cci3ruqb7zdwtl457zo4itspq3sjnqiayhcshp5uaak7ktksckix/cggzqlwf4bmu6tjqodhoto3hhkhgharhwtvw2uxsasqrdipnazrv.cpp:456:26: error: type/value mismatch at argument 1 in template parameter list for ‘template<class _Tp, class ... _Types> constexpr const _Tp& std::get(const std::variant<_Types ...>&)’
456 | auto u0 = std::get<0>(buf1).size(0);
| ~~~~~~~~~~~^~~~~~
/tmp/tmp5iikarn2/cci3ruqb7zdwtl457zo4itspq3sjnqiayhcshp5uaak7ktksckix/cggzqlwf4bmu6tjqodhoto3hhkhgharhwtvw2uxsasqrdipnazrv.cpp:456:26: note: expected a type, got ‘0’
In file included from /data/users/yidi/pytorch/torch/include/c10/util/Exception.h:14,
from /data/users/yidi/pytorch/torch/include/c10/core/ScalarType.h:5,
from /data/users/yidi/pytorch/torch/include/ATen/AccumulateType.h:4,
from /data/users/yidi/pytorch/torch/include/ATen/native/Math.h:3,
from /data/users/yidi/pytorch/torch/include/ATen/cpu/vec/vec_base.h:31,
from /data/users/yidi/pytorch/torch/include/ATen/cpu/vec/vec512/vec512.h:8,
from /data/users/yidi/pytorch/torch/include/ATen/cpu/vec/vec.h:4,
from /data/users/yidi/pytorch/torch/include/ATen/cpu/vec/functional_base.h:6,
from /data/users/yidi/pytorch/torch/include/ATen/cpu/vec/functional.h:3,
from /tmp/tmp5iikarn2/3b/c3bi5gk6mslf6u4iaqafhxm64z6u65e3eain4xlary5blqnvv6xx.h:39,
from /tmp/tmp5iikarn2/cci3ruqb7zdwtl457zo4itspq3sjnqiayhcshp5uaak7ktksckix/cggzqlwf4bmu6tjqodhoto3hhkhgharhwtvw2uxsasqrdipnazrv.cpp:366:
/usr/include/c++/11/variant:1145:27: note: candidate: ‘template<class _Tp, class ... _Types> constexpr const _Tp&& std::get(const std::variant<_Types ...>&&)’
1145 | constexpr const _Tp&& get(const variant<_Types...>&& __v)
| ^~~
/usr/include/c++/11/variant:1145:27: note: template argument deduction/substitution failed:
/tmp/tmp5iikarn2/cci3ruqb7zdwtl457zo4itspq3sjnqiayhcshp5uaak7ktksckix/cggzqlwf4bmu6tjqodhoto3hhkhgharhwtvw2uxsasqrdipnazrv.cpp:456:26: error: type/value mismatch at argument 1 in template parameter list for ‘template<class _Tp, class ... _Types> constexpr const _Tp&& std::get(const std::variant<_Types ...>&&)’
456 | auto u0 = std::get<0>(buf1).size(0);
| ~~~~~~~~~~~^~~~~~
/tmp/tmp5iikarn2/cci3ruqb7zdwtl457zo4itspq3sjnqiayhcshp5uaak7ktksckix/cggzqlwf4bmu6tjqodhoto3hhkhgharhwtvw2uxsasqrdipnazrv.cpp:456:26: note: expected a type, got ‘0’
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,870,155,087 | [CUDAGraph] Graph Partition | BoyuanFeng | closed | [
"Merged",
"ciflow/trunk",
"topic: new features",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 7 | CONTRIBUTOR | This PR implements cudagraph partition, following previous PR on inductor graph partition (#147038). Since there are many ops that cudagraph cannot support, this PR focuses on `cpu ops` and will add more partition rules in the next PR.
## Example
```python
import torch
torch._inductor.config.graph_partition = True
def f(x, y):
x1 = x + 1
y1 = y + 1
y_cpu = y1.cpu() + 1
z = x @ y
return x1 + y1 + z + y_cpu.cuda()
x, y = [torch.ones(2, 2, device="cuda") for _ in range(2)]
x_cloned, y_cloned = [tmp.clone() for tmp in [x,y]]
eager_out = f(x, y)
f_compiled = torch.compile(f, mode="reduce-overhead")
for _ in range(5):
compiled_out = f_compiled(x_cloned, y_cloned)
assert torch.allclose(eager_out, compiled_out)
```
w/o graph partition, we will skip cudagraph:
```
skipping cudagraphs due to skipping cudagraphs due to cpu device (device_put). Found from :
File "/home/boyuan/playground/cudagraph/graph_partition/graph_partition.py", line 9, in f
y_cpu = y1.cpu() + 1 # 3
```
w/ graph partition, we can see two cudagraphify under the same torch-compiled region:

## Design
PR #147038 splits the `def call(args)` function into multiple `def partition_id(args)` functions. In this PR, we use `recursively_apply_fns()` to wrap each `partition_id()` function with `cudagraphify`. One major design point: `cudagraphify` takes metadata such as static_input_idxs, and we need to provide such metadata for each graph partition. However, we previously only had such metadata for the original graph, not for the graph partitions.
The [idea](https://github.com/pytorch/pytorch/pull/147038#discussion_r1964124800) is:
- compute a mapping from the partition metadata (e.g., input/output idx) to the graph metadata, stored in `GraphPartitionMap`.
- during post_compile, get the `CudagraphMetadata` for each partition based on the graph-level metadata and `GraphPartitionMap`, via `get_partition_cudagraph_metadata()`.
- finally, in `cudagraph_partition_pos_compile`, we compute the `CudagraphMetadata` and apply cudagraphify for each graph via `recursively_apply_fns`.
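The per-partition metadata remapping can be sketched in plain Python (the field and function names below are hypothetical, not the actual `GraphPartitionMap` API):

```python
def partition_static_input_idxs(graph_static_idxs, partition_input_to_graph_input):
    """Map graph-level static_input_idxs onto one partition's inputs.

    partition_input_to_graph_input[i] gives the graph-level input index
    of partition input i, or None for partition-internal inputs.
    """
    static = set(graph_static_idxs)
    return [
        i
        for i, g in enumerate(partition_input_to_graph_input)
        if g is not None and g in static
    ]

# Partition takes graph inputs 2 and 0, plus one intermediate (None):
idxs = partition_static_input_idxs([0, 5], [2, None, 0])
```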
#### Q: How does it work with codecache?
While we have multiple graph partitions, we still have 1 file and 1 `call` function for 1 dynamo graph. The major difference is we need to additionally load a `recursively_apply_fns()` for graph partition. We also add `partition_maps: Optional[list[GraphPartitionMap]]` to `CompiledFxGraph` so it will be serialized and could be deserialized later.
## Edge Case 1
PyTorch has an assumption on input/output orders. For example, backward inputs take saved tensors first and then tangents. In graph partition, we respect such orders via `graph_partition_signature_reorder`.
## Edge Case 2
Cudagraphifying `call` function gives 2 cudagraph managed tensors `buf0` and `primals_1`. However, cudagraphifying `partition_0` gives only 1 cudagraph managed tensor `buf0`. This leads to a semantic difference between cudagraph w/ and w/o graph partition. [full code comparison](https://www.internalfb.com/intern/diffing/?paste_number=1747654420)

To achieve the same semantics, we return an input tensor as an output if it is not freed in a graph partition. This allows more cudagraph managed tensors and is important for handling saved tensors.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 @xmfan @eellison @zou3519 @jamesjwu | true |
2,870,119,622 | [BE] TCPStore: use typed errors for assertions | d4l3k | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 3 | MEMBER | This is a follow up to #147465 that changes most TORCH_CHECK calls in TCPStore and TCPStoreLibUvBackend to use typed exceptions instead of generic `TORCH_CHECK` calls which end up as RuntimeErrors in Python.
Test plan:
```
pytest test/distributed/test_store.py
```
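The generic-check-to-typed-exception pattern can be illustrated with a toy Python analogue (the names here are illustrative, not the actual TCPStore C++ code):

```python
class StoreKeyError(KeyError):
    """Typed error a caller can catch precisely, instead of a generic
    RuntimeError surfaced from an untyped assertion."""

def store_get(store, key):
    # Before: a generic check raising RuntimeError on any failure.
    # After: a typed exception that distinguishes this failure mode.
    if key not in store:
        raise StoreKeyError(f"key {key!r} not found")
    return store[key]
```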
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @c-p-i-o | true |
2,870,075,033 | [ROCm] change is_hip_clang() to always return True | ethanwee1 | closed | [
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | hipify is replacing kernel launches (`<<< >>>`) with the hipLaunchKernelGGL() macro; this is a regression caused by /opt/rocm/hip/.hipinfo no longer existing.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,870,067,707 | [sm100][sm120][fp8][CUDA] skip rowwise scaling tests on SM100+ for now | eqy | closed | [
"module: cuda",
"triaged",
"open source",
"Stale",
"topic: not user facing",
"matrix multiplication",
"module: float8"
] | 3 | COLLABORATOR | currently not implemented IIUC
cc @ptrblck @msaroufim @jerryzh168 @yanbing-j @vkuzo @albanD @kadeng @penguinwu | true |
2,870,059,608 | Implement metal kernel for basic MPS arithmetic ops using TensorIterator | skotapati | closed | [
"triaged",
"open source",
"Merged",
"topic: performance",
"release notes: mps",
"ciflow/mps"
] | 19 | COLLABORATOR | Add metal kernels for add, subtract, & lerp ops using TensorIterator. Should help resolve: https://github.com/pytorch/pytorch/issues/143874 | true |
2,870,005,329 | Representation string of a meta tensor is not a valid `tensor` call | chajath | open | [
"triaged",
"module: meta tensors",
"module: python frontend"
] | 6 | NONE | ### 🚀 The feature, motivation and pitch
When creating a tensor on the `meta` device, the representation of the tensor has the form:
`tensor(..., device='meta', size=(3,4))`
If I try to run the representation code, I run into issues:
`TypeError: tensor() got an unexpected keyword argument 'size'`
This differs from a concrete tensor, whose representation is a valid construction call, i.e.:
```
>>> # Not a valid call
>>> torch.empty(3,4, device='meta')
tensor(..., device='meta', size=(3, 4))
>>> # Valid tensor call
>>> torch.empty(3,4)
tensor([[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]])
```
In the spirit of making the representation runnable, it would be better to emit a `torch.empty` construction call, i.e.:
```
>>> torch.empty([3,4], device="meta")
torch.empty([3,4], device='meta')
```
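A helper of the kind being asked for can be sketched in pure Python (hypothetical function, not a proposed API; it only formats a string):

```python
def runnable_meta_repr(shape, device="meta"):
    """Build a repr string that is itself a valid torch.empty(...) call,
    for tensors whose values cannot be printed (sketch)."""
    dims = ", ".join(str(d) for d in shape)
    return f"torch.empty([{dims}], device='{device}')"

r = runnable_meta_repr((3, 4))
```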
### Alternatives
In the context of rendering, I would need to create a custom visitor that will check if the target object is meta tensor and redirect rendering to a custom rendering function
### Additional context
We are generating code snippets and rely on representation strings of tensors in places where it makes sense.
cc @ezyang @eellison @bdhirsh @albanD | true |
2,869,995,787 | UNSTABLE pull / unstable-linux-focal-cuda12.4-py3.10-gcc9-sm89-xfail / build | clee2000 | closed | [
"module: ci",
"triaged",
"unstable"
] | 2 | CONTRIBUTOR | See https://github.com/pytorch/pytorch/pull/147487 for context
I want to run an experiment and this job might be flaky, so I am marking it as unstable. The job already has unstable in the job name, so I think it should be fine, but I am also making this just in case.
cc @seemethere @malfet @pytorch/pytorch-dev-infra | true |
2,869,958,672 | [CacheBench] Refactor code to prepare for mode benchmarks | oulgen | closed | [
"Merged",
"ciflow/trunk",
"release notes: benchmark",
"module: dynamo",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147641
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,869,940,839 | Fix test_halide.py report invocation to re-run failed tests | isuruf | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147640
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,869,880,644 | [Inductor] Hot fix after #146917 | anmyachev | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 5 | COLLABORATOR | This pull request reverts the changes to `torch/_inductor/ir.py` file that were added in #146917.
Where I tested, only the changes from `torch/_inductor/codegen/cpp_wrapper_gpu.py` were present; it turns out the changes in the `torch/_inductor/ir.py` file are not actually needed. It's my fault: I didn't sync the environments (between several machines) correctly.
@davidberard98 @YUNQIUGUO maybe that's why the tests on CUDA didn't pass?
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,869,789,301 | [aotd] Alias of intermediate unwrap TensorAlias | IvanKobzarev | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147638
A bug was reported by an internal user.
AOTD classifies outputs that are aliases of graph intermediates into different categories.
...
- output is an alias of an intermediate whose base is already an output
- output is an alias of an intermediate whose base is not an output
If we look at the fn:
```
def fn(x):
ix = x + 1
a = ix.transpose(0, 1)
return a.detach(), a
```
output 0: a detached view of alias `a`, where `a` is already an output
output 1: an alias of intermediate `ix`; an additional output `ix` will be added internally
output 0's base is `TensorAlias(a)` in this case, but could be a plain Tensor.
Adding runtime unwrapping solves this problem.
Alternatively we could track the base of `a.detach()` all the way to `ix`; in that case the base will always be a Tensor, not a TensorAlias.
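The runtime unwrapping fix can be illustrated with a toy model of `TensorAlias` (simplified stand-in; the real class lives in AOTAutograd internals):

```python
class TensorAlias:
    """Toy stand-in: wraps a value that aliases a graph output."""
    def __init__(self, alias):
        self.alias = alias

def maybe_unwrap(base):
    # The base of an aliased output may itself arrive wrapped in a
    # TensorAlias (when it is also a returned alias); unwrap before use
    # so downstream code always sees the underlying value.
    return base.alias if isinstance(base, TensorAlias) else base

wrapped = maybe_unwrap(TensorAlias(42))
plain = maybe_unwrap(42)
```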
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,869,754,640 | [CD] Enable triton xpu windows build | chuanqi129 | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | COLLABORATOR | Depends on #147727, which introduce triton xpu windows support | true |
2,869,687,219 | [PP] Remove extra code and docs BE | H-Huang | closed | [
"oncall: distributed",
"better-engineering",
"Merged",
"ciflow/trunk",
"release notes: distributed (pipeline)"
] | 3 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147636
current docs:
<img width="746" alt="image" src="https://github.com/user-attachments/assets/4c4088fc-ee97-4a82-be28-e33eb35e76f5" />
cc @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,869,603,152 | Remove backend_type_map from Backend | H-Huang | open | [
"oncall: distributed",
"better-engineering",
"Stale",
"release notes: distributed (c10d)"
] | 2 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147635
Fix https://github.com/pytorch/pytorch/issues/147044. `backend_type_map` was previously used to get the default device for the object collectives / barrier, but is no longer used, so we can remove it. Not sure if this will break anything in our tests, so waiting for CI to run.
cc @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,869,519,157 | constexpr all the things in irange.h | swolchok | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147633
I got complaints while irangeifying some files in ExecuTorch that irange could not be used in a constexpr function. This made the complaints go away.
I added a constexpr function in irange_test that used to fail to build with `error: variable of non-literal type 'iterator' (aka 'integer_iterator<int, true>') cannot be defined in a constexpr function before C++23` and now builds fine.
Differential Revision: [D69959614](https://our.internmc.facebook.com/intern/diff/D69959614/) | true |
2,869,432,289 | torch.compile programming model doc requests | zou3519 | open | [
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0 | CONTRIBUTOR | - [ ] torch._dynamo.debug_trace API that returns (1) guards (2) bytecode (3) the FX graph
- [ ] gm.print_readable() -> there should be a `gm.readable()` that returns a string. Otherwise, printing in a jupyter notebook is weird
- [ ] maybe a variant of make_fx with tracing_mode="fake" already set, with a more descriptive name, like `operator_trace(f)`.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,869,403,880 | torch.distributed.init_process_group with unspecified backend out of date | tpopp | open | [
"oncall: distributed",
"triaged",
"module: c10d"
] | 0 | NONE | ### 📚 The doc issue
"Support for multiple backends is experimental. Currently when no backend is specified, both gloo and nccl backends will be created." is stated in the documentation. I didn't bisect the change, but this is the behavior I see at 2.4, and it is NOT the behavior I see at 2.6 and 2.7/main.
https://pytorch.org/docs/main/distributed.html#initialization:~:text=that%20supports%20MPI.-,NOTE,-Support%20for%20multiple
### Suggest a potential alternative/fix
For 2.6, main, and potentially 2.5 documentation, this should be updated to something like "Currently when no backend is specified, the nccl backend will be created. To use the gloo backend for collectives with CPU tensors and the nccl backend for collectives with CUDA tensors, specify `backend='cpu:gloo,cuda:nccl'`". This might not be exactly correct. It's the behavior I see after touching this for the first time.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,869,355,282 | [ROCm] Improve backwards indexing when stride is not one | doru1004 | closed | [
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: rocm",
"ciflow/rocm"
] | 6 | CONTRIBUTOR | Improve backwards indexing when stride is not one.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |