| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,805,571,274 | aot inductor intermediate tensor debug printing (setting 2) not working | exclamaforte | open | [
"triaged",
"oncall: pt2"
] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
Code:
```python
from torch._inductor.fuzzer import ConfigFuzzer, visualize_results #, create_simple_test_model_gpu
import torch
def create_simple_test_model_gpu():
    """Create a simple test model function for demonstration."""
    batch_size = 32
    seq_length = 50
    hidden_size = 768

    def test_fn():
        inp = torch.randn(batch_size, seq_length, hidden_size, device="cuda")
        weight = torch.randn(hidden_size, hidden_size, device="cuda")
        matmul_output = inp @ weight
        final_output = torch.nn.LayerNorm(hidden_size, device="cuda")(matmul_output)
        return True

    return test_fn
tf = create_simple_test_model_gpu()
comp = torch.compile(options={"aot_inductor.debug_intermediate_value_printer": "2"})(tf)
comp()
```
Error msg:
```
Traceback (most recent call last):
  File "/home/gabeferns/org/debug/fuzzer-0/bug.py", line 23, in <module>
    comp()
  File "/home/gabeferns/pt-envs/fuzzer/torch/_dynamo/eval_frame.py", line 566, in _fn
    return fn(*args, **kwargs)
  File "/home/gabeferns/org/debug/fuzzer-0/bug.py", line 11, in test_fn
    def test_fn():
  File "/home/gabeferns/pt-envs/fuzzer/torch/_dynamo/eval_frame.py", line 745, in _fn
    return fn(*args, **kwargs)
  File "/home/gabeferns/pt-envs/fuzzer/torch/_functorch/aot_autograd.py", line 1199, in forward
    return compiled_fn(full_args)
  File "/home/gabeferns/pt-envs/fuzzer/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 326, in runtime_wrapper
    all_outs = call_func_at_runtime_with_args(
  File "/home/gabeferns/pt-envs/fuzzer/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
    out = normalize_as_list(f(args))
  File "/home/gabeferns/pt-envs/fuzzer/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 687, in inner_fn
    outs = compiled_fn(args)
  File "/home/gabeferns/pt-envs/fuzzer/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 493, in wrapper
    return compiled_fn(runtime_args)
  File "/home/gabeferns/pt-envs/fuzzer/torch/_inductor/output_code.py", line 457, in __call__
    return self.current_callable(inputs)
  File "/tmp/torchinductor_gabeferns/us/cusdgx2jfgdi7skkxb27i4l7xuwe2afa2blsn3kgbqsuldogqqin.py", line 133, in call
    _print_debugging_tensor_value_info("inductor: before_launch - triton_poi_fused_randn_0 - 0", 0)
  File "/home/gabeferns/pt-envs/fuzzer/torch/_inductor/codegen/debug_utils.py", line 26, in _print_debugging_tensor_value_info
    numel = arg.float().numel()
AttributeError: 'int' object has no attribute 'float'
```
I have a fix incoming.
### Versions
git hash: 40e27fbcf2b
cc @chauhang @penguinwu | true |
2,805,551,002 | Tag storages with offset in file when with FakeTensorMode | mikaylagawarecki | closed | [
"Stale"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145424
| true |
2,805,533,573 | Implement deepcopy for AOTICompiledModel | yushangdi | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Summary:
Fix https://github.com/pytorch/pytorch/issues/145411
Support deepcopying AOTICompiledModel. The `loader` is shallow copied.
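The shallow-copied loader can be sketched with the standard `__deepcopy__` protocol (hypothetical stand-in class; the real `AOTICompiledModel` lives in inductor):

```python
import copy

class AOTICompiledModelSketch:
    # Hypothetical stand-in: the real loader is a C++
    # AOTIModelPackageLoader and cannot be pickled, so __deepcopy__
    # shares it between copies and deep-copies everything else.
    def __init__(self, loader, example_inputs):
        self.loader = loader
        self.example_inputs = example_inputs

    def __deepcopy__(self, memo):
        new = self.__class__.__new__(self.__class__)
        memo[id(self)] = new
        new.loader = self.loader  # shallow copy: share the loader
        new.example_inputs = copy.deepcopy(self.example_inputs, memo)
        return new
```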
Test Plan:
```
buck2 run fbcode//mode/opt //caffe2/test/inductor:aot_inductor_package -- -r deepcopy
```
Differential Revision: D68524673
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,805,528,964 | [dynamo][hop] test torch.compiling all HOPs | xmfan | closed | [
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145429
* __->__ #145422
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,805,522,917 | [cp] override compute_log_sumexp to True for aten._scaled_dot_product_efficient_attention.default if False | XilunWu | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"module: context parallel"
] | 5 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145421
## Description
Our current CP doesn't support efficient attention when `compute_log_sumexp=False`. `compute_log_sumexp` is `False` only when `requires_grad=False`, and since PP's [shape inference](https://github.com/pytorch/pytorch/blob/d95a6babcc581ff06d1b914ee9f92c81b2e850e2/torch/distributed/pipelining/stage.py#L1387) happens under a `torch.no_grad()` context, we need to override `compute_log_sumexp` to `True` in our CP attention implementation.
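For context on why the flag matters: ring attention combines per-rank partial attention outputs using their log-sum-exp, so that value must be computed even when autograd doesn't need it. A small sketch of the merge (single head, no masking, hypothetical helper name):

```python
import torch

def merge_partials(out1, lse1, out2, lse2):
    # Combine two partial attention results (each over a disjoint slice
    # of keys/values) into the exact full-attention result using their
    # per-row log-sum-exp normalizers.
    lse = torch.logaddexp(lse1, lse2)
    w1 = torch.exp(lse1 - lse).unsqueeze(-1)
    w2 = torch.exp(lse2 - lse).unsqueeze(-1)
    return w1 * out1 + w2 * out2, lse
```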
## Test
- Test PP+FSDP+CP w/ `mixed_precision = "float32"` in torchtitan
- `pytest test/distributed/tensor/test_attention.py -s -k test_ring_attention_sdpa`
Before:
<img width="1880" alt="image" src="https://github.com/user-attachments/assets/872ff583-295e-4751-a280-cf7f2d41c61a" />
After:
<img width="2988" alt="image" src="https://github.com/user-attachments/assets/4bdcc2e5-22a5-427a-91a5-82206d5bd78f" />
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,805,511,150 | [dynamo][guards] Turn on profiling of guard manager | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145509
* #145132
* __->__ #145420
* #145351
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,805,511,051 | [dynamo][fbcode] Turn on inline_inbuilt_nn_modules | anijain2305 | closed | [
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,805,498,992 | [BE] Type annotate metrics.py | BoyuanFeng | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,805,497,528 | [BE] Use `value_or` in layer_norm.cpp | malfet | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | CONTRIBUTOR | Now that we have proper optional, no need to do `if (has_value) value else default_value;`
| true |
2,805,492,828 | TopK ROCm Tuning | apakbin | closed | [
"module: rocm",
"open source",
"release notes: cuda",
"ciflow/periodic",
"rocm",
"ciflow/rocm"
] | 4 | CONTRIBUTOR | TopK performance on ROCm performs better on the test suite with the default config.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,805,482,985 | [dynamo][not ready - just for CI] Remove all builtin skiplist | anijain2305 | closed | [
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145415
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,805,469,338 | [BE] Fix edge case in translation validation bisector | StrongerXi | closed | [
"Merged",
"topic: not user facing",
"fx",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145414
This patch fixes a small bug for the binary-search algorithm in
translation validation bisector. Fixes #131303.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,805,465,854 | [c10] catch c10 error and log message | c-p-i-o | closed | [
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 7 | CONTRIBUTOR | Summary:
Explicitly catch c10 error and log the error message only.
The standard exception `e.what()` below ends up logging a stack trace that confuses users.
See S477887 for details.
Test Plan:
tested locally.
```
buck test caffe2/test/cpp/c10d:TCPStoreTest
buck2 daemon constraint mismatch: Version mismatch; killing daemon...
Starting new buck2 daemon...
Connected to new buck2 daemon.
File changed: fbcode//caffe2/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp
File changed: fbsource//xplat/caffe2/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp
Watchman fresh instance: new mergebase, cleared graph state, cleared dep files
Soft Error: source_directory_includes_subpackage: Directory `v2.17.1-1` of package `fbsource//third-party/nccl` may not cover any subpackages, but includes subpackage `v2.17.1-1/src/tests`.
Soft Error: source_directory_includes_subpackage: Directory `v2.18.3-1` of package `fbsource//third-party/nccl` may not cover any subpackages, but includes subpackage `v2.18.3-1/src/tests`.
Soft Error: source_directory_includes_subpackage: Directory `v2.19.3-1` of package `fbsource//third-party/nccl` may not cover any subpackages, but includes subpackage `v2.19.3-1/src/tests`.
Buck UI: https://www.internalfb.com/buck2/dbd34fa4-50ed-4eeb-800d-688f5a7bec68
Test UI: https://www.internalfb.com/intern/testinfra/testrun/281475375994918
Network: Up: 1.5GiB Down: 4.7GiB (reSessionID-d6b0568e-2347-4375-a2d9-2d03ca0c2161)
Loading targets. Remaining 0/3024 69199 dirs read, 687558 targets declared
Analyzing targets. Remaining 0/31483 1481904 actions, 1719048 artifacts declared
Executing actions. Remaining 0/250391 77:11:29.7s exec time total
Command: test. Finished 2031 local, 45445 remote, 51473 cache (52% hit) 20:16:36.9s exec time cached (26%)
Time elapsed: 7:32.7s
Tests finished: Pass 8. Fail 0. Fatal 0. Skip 0. Build failure 0
```
Differential Revision: D68516080
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k | true |
2,805,450,144 | Add Torchao docs link to Pytorch libraries | jainapurva | closed | [
"module: docs",
"Merged",
"ciflow/trunk",
"release notes: quantization"
] | 13 | CONTRIBUTOR | Add Torchao docs link to the libraries section in torch docs.
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke | true |
2,805,447,813 | cannot pickle 'torch._C._aoti.AOTIModelPackageLoader' object | yushangdi | closed | [
"oncall: pt2",
"export-triaged",
"oncall: export",
"module: aotinductor"
] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
The AOTI compiled object cannot be deepcopied.
Repro:
```python
import copy
import logging
import torch
from torch.nn import functional as F
class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(10, 16)
        self.relu = torch.nn.ReLU()
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x, y):
        x = self.fc1(x)
        x = self.relu(x)
        x = self.sigmoid(x)
        return x
def main():
    with torch.no_grad():
        model = Model()
        example_inputs = (
            torch.randn(8, 10),
            torch.randn(8, 10),
        )
        ep = torch.export.export(model, example_inputs)
        package_path = torch._inductor.aoti_compile_and_package(ep)
        compiled_model = torch._inductor.aoti_load_package(package_path)
        copy.deepcopy(compiled_model)  # errors with TypeError: cannot pickle 'torch._C._aoti.AOTIModelPackageLoader' object

if __name__ == "__main__":
    main()
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+git729b7c0
Is debug build: False
CUDA used to build PyTorch: 12.0
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.34
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk14_hardened_2601_gcd42476b84e9-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA PG509-210
GPU 1: NVIDIA PG509-210
GPU 2: NVIDIA PG509-210
GPU 3: NVIDIA PG509-210
GPU 4: NVIDIA PG509-210
GPU 5: NVIDIA PG509-210
GPU 6: NVIDIA PG509-210
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.9.5.1
/usr/lib64/libcudnn_adv.so.9.5.1
/usr/lib64/libcudnn_cnn.so.9.5.1
/usr/lib64/libcudnn_engines_precompiled.so.9.5.1
/usr/lib64/libcudnn_engines_runtime_compiled.so.9.5.1
/usr/lib64/libcudnn_graph.so.9.5.1
/usr/lib64/libcudnn_heuristic.so.9.5.1
/usr/lib64/libcudnn_ops.so.9.5.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 172
On-line CPU(s) list: 0-171
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 172
Socket(s): 1
Stepping: 11
BogoMIPS: 3591.73
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 arat vnmi umip pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 5.4 MiB (172 instances)
L1i cache: 5.4 MiB (172 instances)
L2 cache: 688 MiB (172 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-171
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.7.0a0+git729b7c0
[pip3] torchvision==0.22.0a0+f7b1cfa
[pip3] triton==3.1.0
[conda] blas 1.0 mkl
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2023.1.0 h06a4308_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.11 py310h5eee18b_0
[conda] mkl_random 1.2.8 py310h1128e8f_0
[conda] numpy 1.26.4 py310h5f9d8c6_0
[conda] numpy-base 1.26.4 py310hb5e798b_0
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.7.0a0+git729b7c0 dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchvision 0.22.0a0+f7b1cfa pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi | true |
2,805,440,195 | [inductor] fix autotuning memory usage | shunting314 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145410
* #145325
* #140249
We use `cpu_tensor.copy_(gpu_tensor)` to clone mutated kernel arguments for autotuning. The purpose is to avoid increasing peak memory due to the clone. But if `gpu_tensor` is not contiguous, this `copy_` needs to allocate a temporary tensor on the GPU to store a contiguous copy of `gpu_tensor`:
https://github.com/pytorch/pytorch/blob/6e53588789c48682c7da969de9cbace67a1dd9f3/aten/src/ATen/native/cuda/Copy.cu#L322-L334
Here is a standalone script to illustrate this behavior: https://gist.github.com/shunting314/812a848dc67b1d674ae42415a7a462c8 . The script reports 6GB rather than 3GB peak memory usage.
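A hedged sketch of how such a peak can be measured (assumes a CUDA device; `make_src` and `peak_copy_mem` are hypothetical names, not PyTorch APIs):

```python
import torch

def peak_copy_mem(make_src):
    # Measure peak device memory for a device-to-host copy_. If the GPU
    # source is non-contiguous, CUDA first materializes a contiguous
    # temporary on device, roughly doubling the peak.
    if not torch.cuda.is_available():
        return None
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    src = make_src()
    dst = torch.empty(src.shape, dtype=src.dtype, device="cpu")
    dst.copy_(src)
    torch.cuda.synchronize()
    return torch.cuda.max_memory_allocated()
```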
Note that, with all the following efforts
1. donated buffer
2. inplace padding
3. this PR
We save 3GB peak memory (18.6GB -> 15.5GB) for GPT2 model for torch.compile.
The peak memory of GPT2 looks like an '...\_M\_...' shape: there are 2 places where we reach the peak. The donated buffer removes the first peak by computing grad_softmax in place, and inplace padding removes the second peak by not allocating an extra buffer for mm-padding.
Before all these optimizations, the peak memory is 18.6GB for GPT2 with torch.compile.
With 1 & 2, the peak memory is
1. 17.7GB with a cold cache
2. 15.5GB with a warm cache (since the autotuning overhead is skipped)
With 1 & 2 & 3, we save 3GB peak memory (18.6GB -> 15.5GB) whether or not autotuning happens
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,805,431,367 | [BE] Type annotate pad_mm.py | BoyuanFeng | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,805,422,848 | Fix staging for CPU tensors in OSS DCP async_save | daulet-askarov | closed | [
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7 | CONTRIBUTOR | Summary:
As found in
https://github.com/pytorch/pytorch/issues/144657
for CPU tensors we accidentally skip copying during staging because the offload-to-CPU helper is a no-op for CPU tensors. This means that if the trainer changes the original source CPU tensor after launching the async save but before the actual writing/uploading to the destination begins, the writing/uploading logic picks up the latest state of the tensor instead of its own dedicated copy saved earlier. Dropping `_offload_state_dict_to_cpu` in favor of `_copy_state_dict` fixes this bug.
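The core of the fix can be sketched as always materializing a private copy at staging time, regardless of device (simplified; the real `_copy_state_dict` also handles nested containers):

```python
import copy
import torch

def stage_state_dict(state_dict):
    # Always clone, even for CPU tensors: a no-op "offload" would let
    # later trainer mutations leak into the staged snapshot that the
    # async writer uploads.
    return {
        k: v.detach().clone() if isinstance(v, torch.Tensor) else copy.deepcopy(v)
        for k, v in state_dict.items()
    }
```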
Test Plan:
Running the user script from the linked GitHub issue verifies the fix:
```
import os
import torch
import torch.distributed as dist
import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint.state_dict import get_model_state_dict
import torch.nn as nn
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(1, 1))

    def forward(self, x):
        return self.layer(x)
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "12345"
os.environ["WORLD_SIZE"] = "1"
os.environ["RANK"] = "0"
dist.init_process_group()
model = Net()
state_dict = get_model_state_dict(model)
pg = dist.new_group(backend="gloo")
try:
    steps = [10, 20, 30, 40, 50]
    future = None
    for step in steps:
        # simulate a training step, e.g. optimizer updating values
        with torch.no_grad():
            model.weight.data.fill_(step)
        if future is not None:
            future.result()
            future = None
        future = dcp.async_save(
            state_dict,
            checkpoint_id=f"outputs/{step}",
            process_group=pg,
        )
    future.result()
    for step in steps:
        dcp.load(
            state_dict,
            checkpoint_id=f"outputs/{step}",
            process_group=pg,
        )
        assert state_dict["weight"][0, 0] == step, f"got {state_dict['weight'][0, 0]=} on {step=}"
finally:
    dist.destroy_process_group(pg)
    dist.destroy_process_group()
```
passes all asserts with this fix. If the script is run in trunk, confirmed that it fails the first assert.
Differential Revision: D68518689
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 | true |
2,805,401,001 | [dynamo][fbcode] Turn on inline_inbuilt_nn_modules | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 7 | CONTRIBUTOR | As title.
Some internal testing at https://fb.workplace.com/groups/241460628989036/permalink/411650015303429/
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,805,397,946 | [export][be] Clean up local imports from export [2/n] | zhxchen17 | closed | [
"fb-exported",
"Stale",
"ciflow/trunk",
"release notes: export"
] | 6 | CONTRIBUTOR | Summary: as title
Test Plan: CI
Differential Revision: D68450108
| true |
2,805,379,687 | [dynamo] Install guard when branching on empty dictionary | StrongerXi | closed | [
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145405
This fixes an internal test failure on guarding NN module hooks, which
started failing after #143997 stopped eagerly guarding on dictionary
length.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,805,378,982 | [distributions] Catch inf gradient in beta distribution | michael-diggin | open | [
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 5 | CONTRIBUTOR | Fixes #127387
Under the conditions in the issue, the calculations in [_beta_grad_beta_small](https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/Distributions.h#L397) are numerically unstable (`betas = betas * (beta - casted_i);` blows up, since in that code path `beta` is large), and the gradient can end up being `nan` when `x` is close to 1 (and hence close to 0 inside that function, which uses `1-x`).
It seems that sometimes, rather than becoming `nan`, the series ends up being `inf`, which isn't currently caught. I was able to verify this through some debug/print statements. I struggled to recreate the issue directly with a size of 1, even when calling the backward function directly with `x` values close to 1.
This PR amends the `nan` check to also check for `inf`, and adds a test based on the failing case from the linked issue.
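In Python terms the amended check is (the actual change is in the C++ kernel; the function name here is illustrative):

```python
import math

def series_diverged(grad):
    # Fall back to the alternative gradient formula when the small-beta
    # series produces any non-finite value -- inf as well as nan.
    return not math.isfinite(grad)
```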
| true |
2,805,320,057 | Use guard_size_oblivious in debug tensor writer | bobrenjc93 | closed | [
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145650
* __->__ #145403
I've been playing around with graph breaks and found it sad that the
following code doesn't trace, due to a printer calling guard_bool on
size-like strides.
Previously this code wouldn't trace, but now it does:
```
import torch

torch._dynamo.config.automatic_dynamic_local_pgo = False

@torch.compile()
def fn(x):
    y = torch.cat([x, x])
    torch._dynamo.graph_break()
    z = torch.cat([y, y])
    torch._dynamo.graph_break()
    return torch.cat([z, z])

x = torch.ones(5, 5)
torch._dynamo.decorators.mark_unbacked(x, 0)
torch._dynamo.decorators.mark_unbacked(x, 1)
fn(x)
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,805,286,013 | Update OSS nested tensor docs to focus on NJT | jbschlosser | closed | [
"module: nestedtensor",
"Merged",
"ciflow/trunk",
"topic: docs",
"release notes: nested tensor"
] | 17 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145402
Updated nested tensor docs to be NJT-centric (instead of NST-centric). They now include:
* High-level description of NST vs. NJT + a recommendation to use NJT
* General NJT construction / usage
* torch.compile() integration w/ dynamic shapes
* Common errors and how to fix them
* Contribution guide
* Data layout / shape information (with diagram)
* Links to more extensive tutorials involving Transformers / SDPA / FlexAttention
cc @cpuhrsch @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ | true |
2,805,276,189 | torch.compile has different numerics for var_mean | eellison | open | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 1 | CONTRIBUTOR | ### 🐛 Describe the bug
```
import torch
from torch._dynamo.utils import same

def foo(x):
    return torch.ops.aten.var_mean.correction(x, [1], correction=0, keepdim=True)

inp = torch.rand([112958, 384], device="cuda", dtype=torch.float16)
print(same(foo(inp), torch.compile(foo)(inp)))
```
> [ERROR]:Accuracy failed: allclose not within tol=0.0001
Maybe this is a numerics-sensitive op, but it trips up bisection and is a general pain.
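One way to see the sensitivity (a sketch that runs on CPU; the function name is illustrative): at this size, an fp16 reduction can differ measurably from one accumulated in float32 and cast back, which is the kind of divergence eager vs. compiled kernels can exhibit.

```python
import torch

def var_mean_fp32_accum(x):
    # Upcast to float32 for the reduction, then cast the statistics
    # back: a more numerically robust reference for fp16 inputs.
    var, mean = torch.var_mean(x.float(), dim=1, correction=0, keepdim=True)
    return var.to(x.dtype), mean.to(x.dtype)
```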
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @ezyang
### Versions
master | true |
2,805,262,963 | [auto_functionalized] Support `Tensor(a!)[]?` | zou3519 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145400
Summary:
This is just updating some of the checks to allow the Tensor(a!)[]? type
through.
Fixes #144072
Test Plan:
- new tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,805,212,572 | Update NJT linear_backward to return non-aliased tensor bias grad | soulitzer | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145399
* #145533
* #145531
* #145520
Fixes https://github.com/pytorch/pytorch/issues/141292
| true |
2,805,198,454 | [ROCm] Update rocm.yml and add rocm-mi300.yml | amdfaa | closed | [
"module: rocm",
"open source",
"Merged",
"topic: not user facing",
"ciflow/rocm"
] | 3 | CONTRIBUTOR | - Added another workflow to run the mi300 jobs post-merge.
- Updated rocm.yml to use mi200s instead of mi300s.
- Required to get an idea of how PRs are landing on our mi200s and mi300s.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,805,189,748 | Add MI200 workflow to rocm | amdfaa | closed | [
"module: rocm",
"topic: not user facing"
] | 1 | CONTRIBUTOR | cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,805,184,484 | Make torchelastic etcd rendezvous publicly importable | H-Huang | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (torchelastic)"
] | 3 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145396
* #145387
Make torchelastic publicly importable by raising the etcd import error lazily, [BE task, row 7](https://docs.google.com/spreadsheets/d/1TtATnLJf1rVXaBQd3X3yYqm9xNN9BIWG7QqRgrFiRRI/edit?gid=1748512924#gid=1748512924)
cc @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,805,172,490 | [NVIDIA] Jetson Thor Blackwell Support codegen | johnnynunez | closed | [
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda",
"topic: build"
] | 11 | CONTRIBUTOR | cc @ptrblck @msaroufim @eqy | true |
2,805,169,621 | [EXPORT AOTI] `aoti_compile_and_package` custom_ops dependencies | bhack | open | [
"oncall: pt2",
"export-triaged",
"oncall: export",
"module: aotinductor"
] | 12 | CONTRIBUTOR | ### 🐛 Describe the bug
I was trying to `export` and `aoti_compile_and_package` a model with this custom op:
https://github.com/state-spaces/mamba/pull/651
`aoti_load_package` works correctly in the same env used for export.
But it does not work in a fresh env where the custom op dependency is not installed (e.g. `selective_scan_cuda.cpython-311-x86_64-linux-gnu.so`).
In that case we get `Error during testing: Could not find schema for custom_ops::selective_scan_fwd`.
Is this because the custom op `.so` isn't included in the package produced by `aoti_compile_and_package`?
If yes, is that behavior expected by design?
/cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @zou3519
### Versions
nightly | true |
2,805,134,685 | Fix tests broken by #145176 | aorenste | closed | [
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | #145176 broke
test/dynamo/test_dynamic_shapes.py::DynamicShapesReproTests::test_graph_break_on_jit_isinstance_dynamic_shapes
test/dynamo/test_repros.py::ReproTests::test_graph_break_on_jit_isinstance
this backs out the offending change until it can be fixed properly.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145393
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,805,131,319 | Reverting the PR adding Kleidiai-based int4 kernels | albanD | closed | [
"module: cpu",
"Merged",
"ciflow/trunk",
"release notes: linalg_frontend",
"skip-pr-sanity-checks",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 6 | COLLABORATOR | Mitigation for https://github.com/pytorch/pytorch/issues/145273
Reverting https://github.com/pytorch/pytorch/pull/134124 and https://github.com/pytorch/pytorch/pull/144074
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,805,126,806 | [BE][export] Fix hop tests with flaky memory leak | yiming0416 | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9 | CONTRIBUTOR | Summary:
As title. Added `torch._dynamo.reset()` for each test
This should fix several flaky tests in `test_hop.py` such as https://github.com/pytorch/pytorch/issues/139073
Test Plan:
```
PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/export/test_hop.py TestHOPCUDA.test_serialize_export_scan_simple_cuda_float32
```
Differential Revision: D68506280
| true |
2,805,103,168 | Move Dynamo test to skip from expected_failures | zou3519 | closed | [
"Merged",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145390
Summary:
Fixes https://github.com/pytorch/pytorch/issues/116105
This test is consistently failing. It shouldn't be marked as a flaky
test in the CI using the disabled tests mechanism. I'm skipping the test for now.
Test Plan:
- CI | true |
2,805,085,051 | [DO NOT MERGE] pre-merge runs only on MI200 and post-merge runs on both MI300 | ethanwee1 | closed | [
"open source",
"topic: not user facing"
] | 2 | CONTRIBUTOR | Check to see pre-merge runs only on MI200 and post-merge runs on both MI300 and MI200
| true |
2,805,047,673 | create DISABLED issues for specific runner labels | jeffdaily | open | [
"module: ci",
"triaged"
] | 2 | COLLABORATOR | ### 🚀 The feature, motivation and pitch
ROCm CI runners are a mix of MI200 and MI300 systems. At the time of writing this issue, the MI200 runners are used pre-merge and the MI300 runners are only used post-merge.
- rocm / linux-focal-rocm6.3-py3.10 / test (default, 1, 6, linux.rocm.gpu.mi300.2) [post-merge]
- rocm / linux-focal-rocm6.3-py3.10 / test (default, 1, 6, linux.rocm.gpu.2) [pre-merge]
Other HW vendors might also support different runner labels for the same flows.
We are seeing tests getting DISABLED as flaky because they pass on mi200 pre-merge then fail on mi300 post-merge. Unfortunately, the DISABLED issues are disabling both mi200 and mi300 runner labels for the same flows which means we are losing the mi200 signal unnecessarily.
Is it possible to create DISABLED issues that can also specify the runner label?
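For context, today's DISABLED issues select platforms via a `Platforms:` line in the issue body. One way the proposal could look is an additional field that scopes the disable to specific runner labels — the field name and syntax here are purely hypothetical:

```
Platforms: rocm
Runner-labels: linux.rocm.gpu.mi300.2
```

With something like this, the mi200 jobs (`linux.rocm.gpu.2`) would keep running the test and only the mi300 signal would be disabled.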
### Alternatives
_No response_
### Additional context
_No response_
cc @seemethere @malfet @pytorch/pytorch-dev-infra | true |
2,805,042,611 | Fix test_modules_can_be_imported | H-Huang | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145396
* __->__ #145387
`test_modules_can_be_imported` test is currently failing due to a few missing private modules and this PR gets it working before I start to clean up the public allow list | true |
2,805,029,010 | DISABLED test_view_of_slice_cuda (__main__.TestUnbackedSymintsCUDA) | jeffdaily | closed | [
"module: rocm",
"triaged",
"skipped"
] | 2 | COLLABORATOR | Platforms: rocm
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22inductor%2Ftest_unbacked_symints.py%3A%3ATestUnbackedSymintsCUDA%3A%3Atest_view_of_slice_cuda%22%5D)).
This seems to be an mi300-specific failure.
cc @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,805,018,452 | Bail on checking internal overlap when dealing with unbacked symints | bobrenjc93 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 19 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145385
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,805,015,868 | [Doc] Add period at the end of the sentence | malfet | closed | [
"Merged",
"topic: not user facing"
] | 6 | CONTRIBUTOR | Test plan: https://docs-preview.pytorch.org/pytorch/pytorch/145384/generated/torch.compiler.disable.html#torch-compiler-disable
Fixes https://github.com/pytorch/pytorch/issues/145365
| true |
2,804,967,430 | Windows Pytorch compiler crash some version of cl.exe. Fix provided | deepbeepmeep | open | [
"module: windows",
"triaged",
"oncall: pt2"
] | 1 | NONE | ### 🐛 Describe the bug
Hi.
In _cpp_builder.py / function 'check_compiler_exist_windows'_ you check for the existence of the cl C++ compiler by calling it with a '/help' option.
However for some versions of cl.exe, the header of the help message contains some invisible invalid utf8 char (here a single xff):
_Compilateur d\'optimisation Microsoft (R) C/C++ version\xff19.35.32216.1_
This causes the following crash:
```torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 54: invalid start byte'
```
The solution would be to remove the decode line: since this is only an existence test, you don't need to process the help message.
```
@functools.lru_cache(None)
def check_compiler_exist_windows(compiler: str) -> None:
    """
    Check if the compiler is ready, in case the end user has not activated the MSVC environment.
    """
    try:
        output_msg = (
            subprocess.check_output([compiler, "/help"], stderr=subprocess.STDOUT)
            .strip()
            # .decode(*SUBPROCESS_DECODE_ARGS)  # removed: can crash on non-UTF-8 bytes in the banner
        )
```
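As a less invasive alternative to deleting the line, the decode could be made lenient. A small standalone sketch (the byte string mimics the localized banner above; this is an illustration, not the actual `cpp_builder.py` fix):

```python
raw = b"Compilateur d'optimisation Microsoft (R) C/C++ version\xff19.35.32216.1"

def safe_decode(data: bytes) -> str:
    # errors="replace" maps invalid bytes to U+FFFD instead of raising.
    return data.decode("utf-8", errors="replace")

try:
    raw.decode("utf-8")
    strict_ok = True
except UnicodeDecodeError:
    strict_ok = False

print(strict_ok)                      # False: strict decoding raises on the 0xff byte
print("version" in safe_decode(raw))  # True: lenient decoding still yields usable text
```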
### Versions
not needed
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @chauhang @penguinwu | true |
2,804,963,839 | flaky test issues should close themselves if the test doesn't exist anymore | zou3519 | closed | [
"module: ci",
"triaged",
"module: flaky-tests",
"module: infra"
] | 3 | CONTRIBUTOR | I've been going through the pt2 flaky test issues and some of the tests look like they've been deleted. It would be nice for this to be automated.
cc @seemethere @malfet @pytorch/pytorch-dev-infra @clee2000 @wdvr | true |
2,804,829,846 | Use AOTI as inductor backend with precompile mode. | zhxchen17 | closed | [
"fb-exported",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9 | CONTRIBUTOR | Summary:
Design doc: https://docs.google.com/document/d/1Z15cBBPjoZ7gH00TSgCdgaYko7a7Br-ERd3_hA-g2IU/edit?usp=sharing
In this diff we are trying to introduce a stateful API to enable a global mode which will force Inductor to use AOTI as a backend. Different from PR https://github.com/pytorch/pytorch/pull/141700, we didn't try to populate the package file into the caching system; instead we bypass caching to simplify the implementation in its current form.
Similar to PR https://github.com/pytorch/pytorch/pull/141700, I did a quick benchmark to the loading time and it looks like the following:
- Precompile
```
buck run mode/opt scripts/zhxchen17:precompile
```
- Load using cache:
```
time buck run mode/opt scripts/zhxchen17:precompile -- --loader cache
```
Output:
```
real 0m24.593s
user 0m59.342s
sys 0m17.201s
```
- Load using load_fullgraph_package
```
time buck run mode/opt scripts/zhxchen17:precompile -- --loader precompile
```
Output:
```
real 0m10.907s
user 0m9.210s
sys 0m1.173s
```
Test Plan:
buck run mode/opt caffe2/test:test_export -- -r test_fullgraph_package_basic_function
Differential Revision: D68459341
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,804,793,268 | Move privateuse1 test out of test_utils and make them serial | albanD | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 12 | COLLABORATOR | Fixes https://github.com/pytorch/pytorch/issues/132720
The reason is that changing the privateuse1 module is global and so can race when other tests happen to check if it is enabled.
| true |
2,804,775,547 | torch.logit works incorrectly when input < eps after torch.compile | meetmul | closed | [
"triaged",
"bug",
"oncall: pt2",
"module: decompositions",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 3 | NONE | ### 🐛 Describe the bug
According to the doc https://pytorch.org/docs/stable/special.html#torch.special.logit, when input < eps, the actual computation is: `ln(eps/(1-eps))`. But this is not what `torch.compile` (with inductor backend) does.
```python
import torch
input = torch.tensor(0.3, dtype=torch.float64)
eps = torch.tensor(0.9, dtype=torch.float64)
compiled = torch.compile(torch.logit)
print(f"compiled: {compiled(input, eps)}")
print(f"expected: {torch.log(eps / (1 - eps))}")
```
```
compiled: -2.1972245773362196
expected: 2.1972245773362196
```
When using `aot_eager` to compile `torch.logit`, the compiled API's result is as expected, so I think the issue lies in the inductor backend.
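The sign flip is consistent with the compiled path clamping the input as `min(max(x, eps), 1 - eps)`: when `eps > 0.5` the two bounds cross and the upper bound wins, instead of the documented piecewise rule. A pure-Python comparison of the two interpretations (a sketch for illustration; whether inductor literally emits a clamp is an assumption):

```python
import math

def logit_documented(x: float, eps: float) -> float:
    # Piecewise rule from the docs: x < eps -> eps, x > 1 - eps -> 1 - eps.
    if x < eps:
        z = eps
    elif x > 1 - eps:
        z = 1 - eps
    else:
        z = x
    return math.log(z / (1 - z))

def logit_clamped(x: float, eps: float) -> float:
    # Naive clamp: with eps > 0.5 the bounds cross and 1 - eps wins.
    z = min(max(x, eps), 1 - eps)
    return math.log(z / (1 - z))

print(logit_documented(0.3, 0.9))  # ≈ 2.1972, the "expected" value
print(logit_clamped(0.3, 0.9))     # ≈ -2.1972, matching the compiled output
```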
### Error logs
```
compiled: -2.1972245773362196
expected: 2.1972245773362196
```
### Versions
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] numpy 1.26.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @SherlockNoMad @zou3519 @bdhirsh @yf225 | true |
2,804,730,726 | Loading weights using `torch.distributed.checkpoint` leads to large loss values | fingertap | closed | [
"oncall: distributed",
"triaged"
] | 8 | NONE | ### 🐛 Describe the bug
Using different init method leads to losses with different scales:
```python
# NOTE: This will produce loss in range [3, 5]
return init_with_meta(self, auto_wrap_policy)
# NOTE: This will produce normal loss in range [0.4, 1]
return init_with_hf(self, auto_wrap_policy)
```
However, I have checked that the `distcp` checkpoints are correct (I converted the distcp to safetensors and checked that the generations are reasonable). Is there anything I am missing?
The complete code to reproduce:
```python
import torch
import torch.distributed as dist
import torch.nn.functional as F
from functools import cached_property
class Dataset:
def __init__(self, dialogues: list[list[dict[str, str]]], tokenizer):
self.dialogues = [self.process_history(dialogue, tokenizer) for dialogue in dialogues]
def process_history(self, history: list[dict[str, str]], tokenizer):
if len(history) == 0:
raise ValueError("History is empty")
standard_history = []
for message in history:
if "from" in message:
message["role"] = message.pop("from")
if "value" in message:
message["content"] = message.pop("value")
assert "role" in message and "content" in message
message["role"] = message["role"].lower()
standard_history.append(message)
generation_prompt = "<|start_header_id|>assistant<|end_header_id|>\n\n<|begin_of_thought|>\n\n"
# Apply chat template, tokenize, and get labels
prev, input_ids, attn_mask, labels = "", [], [], []
for index in range(len(standard_history)):
templated = tokenizer.apply_chat_template(
standard_history[: index + 1],
tokenize=False,
add_generation_prompt=False
)
if templated.endswith(generation_prompt):
templated = templated[:-len(generation_prompt)]
assert templated.startswith(prev), (templated, prev)
prev, current_templated = templated, templated[len(prev) :]
tokenized = tokenizer(current_templated, add_special_tokens=False)
ids, mask = tokenized.input_ids, tokenized.attention_mask
input_ids.extend(ids)
attn_mask.extend(mask)
if standard_history[index].get("calculate_loss") is not None:
if standard_history[index]["calculate_loss"]:
lbl = [x for x in ids]
else:
lbl = [-100] * len(ids)
elif standard_history[index]["role"] != "assistant":
lbl = [-100] * len(ids)
else:
lbl = [x for x in ids]
labels.extend(lbl)
return {
"input_ids": torch.tensor(input_ids, dtype=torch.long),
"attention_mask": torch.tensor(attn_mask, dtype=torch.long),
"labels": torch.tensor(labels, dtype=torch.long),
}
def __len__(self):
return len(self.dialogues)
def __getitem__(self, idx: int):
return self.dialogues[idx]
def zero_pad_sequences(sequences: list[torch.Tensor], side: str = "left", value=0, max_len: int | None = None):
assert side in ("left", "right")
max_seq_len = max(seq.size(-1) for seq in sequences)
if max_len is not None:
sequences = [x[..., :max_len] for x in sequences]
else:
max_len = max_seq_len
padded_sequences = []
for seq in sequences:
pad_len = max_len - seq.size(-1)
padding = (pad_len, 0) if side == "left" else (0, pad_len)
padded_sequences.append(F.pad(seq, padding, value=value))
return torch.stack(padded_sequences, dim=0)
class Exp:
model_path: str = "/checkpoints/Meta-Llama-3.1-8B-Instruct/"
distcp_path: str = "/checkpoints/Meta-Llama-3.1-8B-Instruct/distcp"
data_path: str = "/data/sft_data_sample.json"
num_epochs: int = 1
def run(self):
from tqdm import tqdm
for epoch in range(self.num_epochs):
pbar = tqdm(self.dataloader, desc=f"Epoch {epoch+1}/{self.num_epochs}")
losses, max_loss_counts = [], 5
for batch in pbar:
input_ids = batch["input_ids"].cuda()
attention_mask = batch["attention_mask"].cuda()
labels = batch["labels"].cuda()
logits = self.model(input_ids, attention_mask=attention_mask).logits
logits = logits[:, :-1, :].contiguous()
labels = labels[:, 1:].contiguous()
loss = self.criterion(logits.view(-1, logits.size(-1)), labels.view(-1))
loss.backward()
self.optimizer.step()
losses.append(loss.item())
if len(losses) > max_loss_counts:
losses.pop(0)
mean_loss = sum(losses) / len(losses)
pbar.set_postfix({"avg. loss": mean_loss})
@cached_property
def criterion(self):
import torch.nn as nn
return nn.CrossEntropyLoss(ignore_index=-100)
@cached_property
def dataloader(self):
import json
from torch.utils.data import DistributedSampler, DataLoader
def collate_fn(batch: list[dict]) -> dict:
input_ids = zero_pad_sequences(
[x["input_ids"] for x in batch],
side="right",
value=self.tokenizer.pad_token_id,
max_len=self.max_seq_len
)
attention_mask = zero_pad_sequences(
[x["attention_mask"] for x in batch],
side="right",
value=0,
max_len=self.max_seq_len
)
labels = zero_pad_sequences(
[x["labels"] for x in batch],
side="right",
value=-100,
max_len=self.max_seq_len
)
return {k: torch.cat([x[k] for x in batch]) for k in batch[0].keys()}
with open(self.data_path, "r") as f:
dialogues = json.load(f)
dataset = Dataset(dialogues, self.tokenizer)
sampler = DistributedSampler(dataset, num_replicas=self.world_size, rank=self.rank)
return DataLoader(dataset, batch_size=1, sampler=sampler)
@cached_property
def model(self):
import torch.distributed.checkpoint as dcp
from functools import partial
from transformers import LlamaForCausalLM, AutoConfig
from transformers.models.llama.modeling_llama import LlamaDecoderLayer
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, StateDictType
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
def init_with_meta(self, auto_wrap_policy):
with torch.device("meta"):
model = LlamaForCausalLM(
AutoConfig.from_pretrained(
self.model_path,
torch_dtype=torch.bfloat16,
device_map="cuda",
attn_implementation="flash_attention_2",
)
)
model.gradient_checkpointing_enable()
model = model.to(torch.bfloat16)
fsdp_model = FSDP(
model,
auto_wrap_policy=auto_wrap_policy,
device_id=self.rank,
param_init_fn=lambda x: x.to_empty(device=torch.cuda.current_device(), recurse=False)
)
with FSDP.state_dict_type(fsdp_model, StateDictType.SHARDED_STATE_DICT):
state_dict = {"model": fsdp_model.state_dict()}
dcp.load(
state_dict,
storage_reader=dcp.FileSystemReader(self.distcp_path),
)
fsdp_model.load_state_dict(state_dict["model"])
fsdp_model = fsdp_model.to(torch.bfloat16)
return fsdp_model
def init_with_hf(self, auto_wrap_policy):
model = LlamaForCausalLM.from_pretrained(
self.model_path,
torch_dtype=torch.bfloat16,
device_map="cuda",
attn_implementation="flash_attention_2",
)
model.gradient_checkpointing_enable()
fsdp_model = FSDP(
model,
auto_wrap_policy=auto_wrap_policy,
device_id=self.rank,
param_init_fn=lambda x: x.to_empty(device=torch.cuda.current_device(), recurse=False)
)
return fsdp_model
auto_wrap_policy = partial(
transformer_auto_wrap_policy,
transformer_layer_cls={LlamaDecoderLayer},
)
# NOTE: This will produce loss in range [3, 5]
return init_with_meta(self, auto_wrap_policy)
# NOTE: This will produce normal loss in range [0.4, 1]
return init_with_hf(self, auto_wrap_policy)
@cached_property
def optimizer(self):
from torch.optim import AdamW
optimizer = AdamW(self.model.parameters(), lr=1e-5)
return optimizer
@cached_property
def tokenizer(self):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(self.model_path)
return tokenizer
@cached_property
def rank(self):
return dist.get_rank()
@cached_property
def world_size(self):
return dist.get_world_size()
if __name__ == "__main__":
dist.init_process_group()
exp = Exp()
torch.cuda.set_device(exp.rank)
exp.run()
dist.destroy_process_group()
```
### Versions
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB
Nvidia driver version: 525.147.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-95
Off-line CPU(s) list: 96-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
Frequency boost: enabled
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 96 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.4.0+cu121
[pip3] torchaudio==2.4.0+cu121
[pip3] torchvision==0.19.0+cu121
[pip3] triton==3.0.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 2.4.0+cu121 pypi_0 pypi
[conda] torchaudio 2.4.0+cu121 pypi_0 pypi
[conda] torchvision 0.19.0+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,804,709,801 | Inductor autograd raises an error in the second run may because of fx graph cache | Ronbogo | closed | [
"high priority",
"triaged",
"bug",
"oncall: pt2",
"module: inductor"
] | 4 | NONE | ### 🐛 Describe the bug
```python
import torch
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
@torch.compile
def func(x):
return x * x
x = torch.tensor(0.0, device="cuda", requires_grad=True)
func(x).backward()
print(x.grad)
```
Running the code twice produces a Triton error on the second run.
```
Traceback (most recent call last):
File "/root/dev/temp/tst.py", line 14, in <module>
func(x).backward()
File "/root/dev/pytorch/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/root/dev/pytorch/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/root/dev/pytorch/torch/autograd/graph.py", line 815, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/root/dev/pytorch/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/root/dev/pytorch/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1958, in backward
return impl_fn()
File "/root/dev/pytorch/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1944, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
File "/root/dev/pytorch/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2079, in _backward_impl
out = call_func_at_runtime_with_args(
File "/root/dev/pytorch/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/root/dev/pytorch/torch/_inductor/output_code.py", line 464, in __call__
return self.current_callable(inputs)
File "/root/dev/pytorch/torch/_inductor/utils.py", line 2228, in run
return model(new_inputs)
File "/tmp/torchinductor_root/ra/crazrzms2jyia4lhreqvggnuhmqpq44ag44s5qjmcvsbwhbd2hdr.py", line 95, in call
triton_poi_fused_add_mul_0.run(tangents_1, primals_1, buf0, 1, grid=grid(1), stream=stream0)
File "/root/dev/pytorch/torch/_inductor/runtime/triton_heuristics.py", line 961, in run
return launcher(
File "<string>", line 6, in launcher
File "/usr/local/lib/python3.10/dist-packages/triton/backends/nvidia/driver.py", line 435, in __call__
self.launch(*args, **kwargs)
RuntimeError: Triton Error [CUDA]: an illegal memory access was encountered
```
Setting `TORCHINDUCTOR_FX_GRAPH_CACHE=0` fixes it.
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+git62ce3e6
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 20.0.0git (https://github.com/llvm/llvm-project.git ece4e1276e2140d84b05b8c430a0e547a1f23210)
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
Nvidia driver version: 551.61
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-12700H
CPU family: 6
Model: 154
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 3
BogoMIPS: 5376.02
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Vulnerable: No microcode
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.0
[pip3] optree==0.14.0
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0a0+git62ce3e6
[pip3] torch-xla==2.5.0+git3d860bf
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @ezyang @gchanan @zou3519 @msaroufim @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @atalman @malfet @ptrblck @nWEIdia @xwang233 | true |
2,804,701,893 | distributed.new_group with backend GLOO hangs when distributed.split_group was called before | mawi2017 | open | [
"oncall: distributed"
] | 0 | NONE | ### 🐛 Describe the bug
A call to `distributed.new_group` with backend GLOO hangs if `distributed.split_group` was called before and not all ranks are part of a new ProcessGroup (whether in `new_group` and/or `split_group`).
Reproducer:
```python
#!/usr/bin/env python3
import os
import torch
import torch.distributed as dist
LOCAL_RANK = int(os.getenv("LOCAL_RANK"))
torch.distributed.init_process_group(backend='cpu:gloo,cuda:nccl', device_id=torch.device("cuda", LOCAL_RANK))
WORLD_SIZE = dist.get_world_size()
# hang in v2.5.1 and 2.7.0.dev20250120+cu126.
ranks_split = [ list(range(WORLD_SIZE-1)) ]
ranks_new = list(range(WORLD_SIZE))
# hang in v2.5.1, crash in tear down in 2.7.0.dev20250120+cu126.
# ranks_split = [ list(range(WORLD_SIZE)) ]
# ranks_new = list(range(WORLD_SIZE-1))
# hang in v2.5.1, crash in tear down in 2.7.0.dev20250120+cu126.
# ranks_split = [ list(range(WORLD_SIZE-1)) ]
# ranks_new = list(range(WORLD_SIZE-1))
# works
# ranks_split = [ list(range(WORLD_SIZE)) ]
# ranks_new = list(range(WORLD_SIZE))
dist.split_group(split_ranks=ranks_split)
print("new_group ...")
dist.new_group(ranks=ranks_new, backend=dist.Backend.GLOO) # hang occurs here
print("done")
dist.barrier()
```
Run with: `torchrun --nproc-per-node 2 ./torch-split-group-repro.py`
### Versions
Reproducible with PyTorch 2.5.1 and latest 2.7.0.dev20250120+cu126.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,804,630,988 | [torchbench] Increase tolerance for amp only poolformer_m36 | IvanKobzarev | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 9 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145375
https://github.com/pytorch/pytorch/issues/144893
```
python benchmarks/dynamo/timm_models.py --only poolformer_m36 --accuracy --no-translation-validation --training --amp --device cuda --backend inductor
```
`--float32`, `--bfloat16` - passes the accuracy
`--disable-cudagraph` does not change the result
accuracy_fail only for `--amp` and gives `0.048` res_error, on 1-element result Tensor.
This fails with `0.01` tolerance.
If the tolerance is increased to 0.04 it passes. I have not reproduced "eager_two_runs_differ" on H100.
I think this is the true distribution of results with `--amp`, so increasing the tolerance to 0.04 for the amp case only makes it pass.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,804,546,480 | Memory Leak in MPS Backend During LSTM Iterations (Out of Memory Error) | Tyndall-log | open | [
"module: rnn",
"module: memory usage",
"triaged",
"module: mps"
] | 9 | NONE | ### 🐛 Describe the bug
## Bug Description
When running a simple LSTM model on the MPS backend with a repetitive loop, memory usage steadily increases, eventually leading to an Out of Memory error. This issue occurs despite clearing the MPS memory cache using torch.mps.empty_cache() after every iteration. The error happens after running approximately 15,666 iterations with a batch size of 16 and hidden size of 256.
## Reproduction Steps
Run the following code to reproduce the issue:
```py
import torch
import torch.nn as nn
import platform
class LSTMModel(nn.Module):
def __init__(self, input_size, hidden_size, num_layers=1, batch_first=True):
super(LSTMModel, self).__init__()
self.lstm = nn.LSTM(input_size, hidden_size, num_layers=num_layers, batch_first=batch_first)
def forward(self, x, hidden):
output, hidden = self.lstm(x, hidden)
return output, hidden
def check_memory_leak():
input_size = 256
hidden_size = 256
batch_size = 16
sequence_length = 10
num_iterations = 100000 # Set a high number to check for memory leaks
# Use MPS if available
device = "mps" if torch.backends.mps.is_available() else "cpu"
# Model initialization
model = LSTMModel(input_size, hidden_size).to(device)
# Input data and hidden state initialization
x = torch.randn(batch_size, sequence_length, input_size).to(device)
hidden = (
torch.zeros(1, batch_size, hidden_size).to(device),
torch.zeros(1, batch_size, hidden_size).to(device),
)
print("Starting memory check...")
for i in range(num_iterations):
with torch.no_grad():
output, hidden = model(x, hidden)
# Clear MPS memory cache
torch.mps.empty_cache()
print(f"Iteration {i + 1}/{num_iterations}: Completed")
if __name__ == "__main__":
print("PyTorch Version:", torch.__version__)
print("Python Version:", platform.python_version())
print("Platform:", platform.system(), platform.release())
print("MPS Available:", torch.backends.mps.is_available())
print("MPS Built:", torch.backends.mps.is_built())
check_memory_leak()
```
## Expected Behavior
Memory usage should remain stable, or be properly recycled after clearing the cache with `torch.mps.empty_cache()`.
## Observed Behavior
The program crashes with an Out of Memory error after ~15,666 iterations. The error message is as follows:
```
RuntimeError: MPS backend out of memory (MPS allocated: 24.00 MB, other allocations: 27.18 GB, max allowed: 27.20 GB). Tried to allocate 16.00 KB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
```
## Environment Information
MacBook Air 15 M3(24GB)
PyTorch Version: 2.5.1
Python Version: 3.12.2
Platform: Darwin 24.3.0
MPS Available: True
MPS Built: True
## Additional Context
This issue may be related to the MPS backend’s memory management while handling LSTM computations. Using `torch.mps.empty_cache()` does not appear to effectively release memory in this scenario. The problem persists even when `torch.no_grad()` is used.
## Request
Could you please investigate the memory leak issue in the MPS backend for LSTM models? Let me know if further debugging or testing is needed.
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.30.3
Libc version: N/A
Python version: 3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 20:54:21) [Clang 16.0.6 ] (64-bit runtime)
Python platform: macOS-15.3-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3
Versions of relevant libraries:
[pip3] efficientnet_pytorch==0.7.1
[pip3] numpy==1.26.4
[pip3] segmentation_models_pytorch==0.4.0
[pip3] torch==2.5.1
[pip3] torchaudio==2.4.1
[pip3] torchvision==0.19.1
[conda] efficientnet-pytorch 0.7.1 pypi_0 pypi
[conda] numpy 2.2.1 pypi_0 pypi
[conda] numpy-base 1.26.4 py312he047099_0
[conda] segmentation-models-pytorch 0.4.0 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.4.1 py312_cpu pytorch
[conda] torchvision 0.19.1 py312_cpu pytorch
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @mikaylagawarecki @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | true |
2,804,519,510 | [inductor][BE] Enable test_cpu_cpp_wrapper in fbcode | desertfire | closed | [
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 9 | CONTRIBUTOR | Differential Revision: D68278174
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @chauhang @aakhundov | true |
2,831,350,477 | UI Update Request: Addition of zentorch backend to OSS dashboard | naveenthangudu | closed | [
"triaged",
"module: benchmark",
"oncall: pt2"
] | 1 | NONE | zentorch is a **PyTorch plugin optimized for deep learning workloads on AMD EPYC™ servers**. It is based on the **ZenDNN Library**.
We ran the zentorch plugin with the **Torchbench**, **HuggingFace**, and **Timm Models** suites in the TorchInductor Performance Dashboard for float32 precision.

>Note
>1. **Meta Inductor**: Values for Inductor from Performance CPU Dashboard for single core.
>2. **Inductor**: Values of Inductor local runs on AMD CPU for single core.
cc @chauhang @penguinwu | true |
2,804,451,669 | [XPU] torch.nn.functional.pad brings wrong results with torch.compile on Intel GPU | qwqdlt | closed | [
"triaged",
"oncall: pt2",
"module: inductor",
"module: xpu"
] | 3 | NONE | ### 🐛 Describe the bug
torch.nn.functional.pad produces wrong results with torch.compile on Intel GPU (XPU).
```python
import torch
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, *args):
pad = torch.nn.functional.pad(args[0], (0, 1, 1, 0), mode = 'constant', value = 0.5)
return pad
m = Model()
inp = torch.randn((1, 1), dtype=torch.float32)
print(inp)
# tensor([[-0.5137]])
m.to('cpu')
cpu_out = m(inp.to('cpu'))
print(cpu_out)
# tensor([[ 0.5000, 0.5000],
# [-0.5137, 0.5000]])
m.to('xpu')
xpu_out = m(inp.to('xpu'))
print(xpu_out)
# tensor([[ 0.5000, 0.5000],
# [-0.5137, 0.5000]], device='xpu:0')
opt = torch.compile(m, fullgraph=True, backend='inductor', mode=None)
opt.to('cpu')
cpu_out = opt(inp.to('cpu'))
print(cpu_out)
# tensor([[ 0.5000, 0.5000],
# [-0.5137, 0.5000]])
opt.to('xpu')
xpu_out = opt(inp.to('xpu'))
print(xpu_out) # Different!
# tensor([[-0.5137, -0.5137],
# [-0.5137, -0.5137]], device='xpu:0')
```
### **Error Logs**
```bash
tensor([[-0.5137]])
tensor([[ 0.5000, 0.5000],
[-0.5137, 0.5000]])
tensor([[ 0.5000, 0.5000],
[-0.5137, 0.5000]], device='xpu:0')
tensor([[ 0.5000, 0.5000],
[-0.5137, 0.5000]])
tensor([[-0.5137, -0.5137],
[-0.5137, -0.5137]], device='xpu:0')
```
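For reference, constant padding with `(left, right, top, bottom)` ordering on a 2D input can be sketched in pure Python (a hypothetical helper, not PyTorch code) to show the expected result for the repro above:

```python
def pad2d_constant(x, pad, value):
    """Constant-pad a 2D list-of-lists; pad = (left, right, top, bottom),
    mirroring F.pad's last-dimension-first ordering for a 2D input."""
    left, right, top, bottom = pad
    width = len(x[0]) + left + right
    out = [[value] * width for _ in range(top)]          # new top rows
    for row in x:
        out.append([value] * left + list(row) + [value] * right)
    out += [[value] * width for _ in range(bottom)]       # new bottom rows
    return out

# The repro input: a single element, padded right by 1 and top by 1.
print(pad2d_constant([[-0.5137]], (0, 1, 1, 0), 0.5))
# -> [[0.5, 0.5], [-0.5137, 0.5]]
```

This matches the CPU (eager and compiled) output above; the compiled XPU result instead repeats the input value everywhere, ignoring the fill value.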
### Versions
PyTorch version: 2.5.1+xpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 18
On-line CPU(s) list: 0-17
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 5 125H
CPU family: 6
Model: 170
Thread(s) per core: 2
Core(s) per socket: 9
Socket(s): 1
Stepping: 4
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 432 KiB (9 instances)
L1i cache: 576 KiB (9 instances)
L2 cache: 18 MiB (9 instances)
L3 cache: 18 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241211
[pip3] pytorch-triton-xpu==3.1.0
[pip3] torch==2.5.1+xpu
[pip3] torchaudio==2.5.1+xpu
[pip3] torchvision==0.20.1+xpu
[conda] numpy 2.1.3 pypi_0 pypi
[conda] pytorch-triton-xpu 3.1.0 pypi_0 pypi
[conda] torch 2.5.1+xpu pypi_0 pypi
[conda] torchaudio 2.5.1+xpu pypi_0 pypi
[conda] torchvision 0.20.1+xpu pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @gujinghui @fengyuan14 @guangyey @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | true |
2,804,367,860 | Set `size` when `is_coalesced` is set in `torch.sparse_coo_tensor()` | ILCSFNO | open | [
"module: sparse",
"triaged"
] | 5 | CONTRIBUTOR | ### 📚 The doc issue
The doc of [torch.sparse_coo_tensor()](https://pytorch.org/docs/stable/generated/torch.sparse_coo_tensor.html#torch-sparse-coo-tensor) shows its `Parameters`/`Keyword Arguments` as below:
> size (list, tuple, or torch.Size, optional) – Size of the sparse tensor. If not provided the size will be inferred as the minimum size big enough to hold all non-zero elements.
> is_coalesced (bool, optional) – When `True`, the caller is responsible for providing tensor indices that correspond to a coalesced tensor. If the `check_invariants` flag is False, no error will be raised if the prerequisites are not met and this will lead to silently incorrect results. To force coalescion please use `coalesce()` on the resulting Tensor. Default: None: except for trivial cases (e.g. nnz < 2) the resulting Tensor has is_coalesced set to `False`.
However, when `is_coalesced` is set, whether to None, True, or False, `size` must also be set properly, but the documentation does not note or warn about this.
### Repro
```python
import torch
is_coalesced = True # choice: None, True, False
i = torch.tensor([[0, 1, 0], [1, 2, 3]])
v = torch.tensor([3.0, 4.0, 5.0])
s = (2, 3)
result = torch.sparse_coo_tensor(i, v, is_coalesced=is_coalesced) # always fail
# result = torch.sparse_coo_tensor(i, v, s, is_coalesced=is_coalesced) # always success
```
### Outputs
```txt
TypeError: sparse_coo_tensor() received an invalid combination of arguments - got (Tensor, Tensor, is_coalesced=bool), but expected one of:
* (object indices, object values, *, torch.dtype dtype = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False, bool check_invariants = None)
* (object indices, object values, tuple of ints size, *, torch.dtype dtype = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False, bool check_invariants = None, bool is_coalesced = None)
* (tuple of ints size, *, torch.dtype dtype = None, torch.device device = None, bool requires_grad = False, bool check_invariants = None)
```
### Suggest a potential alternative/fix
So, a `Note`/`Warning` should be added to the doc of [torch.sparse_coo_tensor()](https://pytorch.org/docs/stable/generated/torch.sparse_coo_tensor.html#torch-sparse-coo-tensor) as shown below:
> Note/Warning:
> When `is_coalesced` is set, whether to None, True, or False, `size` must also be set properly.
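The binding rule from the error message can be mimicked with a small pure-Python sketch (`sparse_coo_args` is a hypothetical stand-in, not the real argument parser): `is_coalesced` only exists on the overload that also takes an explicit `size`, so passing it at all, even as `None`, without a size fails to bind.

```python
_UNSET = object()  # sentinel so we can tell "not passed" from "passed None"

def sparse_coo_args(indices, values, size=None, *, is_coalesced=_UNSET):
    """Hypothetical stand-in for torch.sparse_coo_tensor's overload
    binding: is_coalesced is only accepted together with an explicit size."""
    if is_coalesced is not _UNSET and size is None:
        raise TypeError("is_coalesced requires an explicit size "
                        "(no matching overload accepts it without one)")
    return indices, values, size, is_coalesced

try:
    sparse_coo_args([[0, 1, 0], [1, 2, 3]], [3.0, 4.0, 5.0], is_coalesced=True)
except TypeError as e:
    print("rejected:", e)

# With an explicit size, the call binds fine.
sparse_coo_args([[0, 1, 0], [1, 2, 3]], [3.0, 4.0, 5.0], (2, 3), is_coalesced=True)
```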
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @svekars @brycebortree @sekyondaMeta @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | true |
2,804,298,810 | Enable C++ API parity tests on AArch64 | murste01 | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10 | CONTRIBUTOR | Re-enables C++ API parity tests on AArch64 which now pass. | true |
2,804,184,484 | Error: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate more than 1EB memory | Neronjust2017 | closed | [] | 1 | NONE | ### 🐛 Describe the bug
I’m using PyTorch Lightning DDP training with batch size = 16 and 8 (GPUs per node) * 2 (nodes) = 16 total GPUs. However, I got the following
error, which happens in the ModelCheckpoint callback. There seems to be a synchronization error between nodes when saving the model checkpoint. When I decreased the batch size to 4, the error disappeared. Can anyone help me?
```
callbacks:
- type: ModelCheckpoint
every_n_train_steps: 2000
save_top_k: 30
monitor: "step"
filename: "checkpoint_{epoch}-{step}"
```
```
File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/trainer/trainer.py", line 1030, in _run_stage
[rank2]: self.fit_loop.run()
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/loops/fit_loop.py", line 206, in run
[rank2]: self.on_advance_end()
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/loops/fit_loop.py", line 378, in on_advance_end
[rank2]: call._call_callback_hooks(trainer, "on_train_epoch_end", monitoring_callbacks=True)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/trainer/call.py", line 210, in _call_callback_hooks
[rank2]: fn(trainer, trainer.lightning_module, *args, **kwargs)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/callbacks/model_checkpoint.py", line 323, in on_train_epoch_end
[rank2]: self._save_topk_checkpoint(trainer, monitor_candidates)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/callbacks/model_checkpoint.py", line 383, in _save_topk_checkpoint
[rank2]: self._save_monitor_checkpoint(trainer, monitor_candidates)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/callbacks/model_checkpoint.py", line 703, in _save_monitor_checkpoint
[rank2]: self._update_best_and_save(current, trainer, monitor_candidates)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/callbacks/model_checkpoint.py", line 732, in _update_best_and_save
[rank2]: filepath = self._get_metric_interpolated_filepath_name(monitor_candidates, trainer, del_filepath)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/callbacks/model_checkpoint.py", line 661, in _get_metric_interpolated_filepath_name
[rank2]: while self.file_exists(filepath, trainer) and filepath != del_filepath:
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/callbacks/model_checkpoint.py", line 774, in file_exists
[rank2]: return trainer.strategy.broadcast(exists)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/strategies/ddp.py", line 307, in broadcast
[rank2]: torch.distributed.broadcast_object_list(obj, src, group=_group.WORLD)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/c10d_logger.py", line 75, in wrapper
[rank2]: return func(*args, **kwargs)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 2636, in broadcast_object_list
[rank2]: object_tensor = torch.empty( # type: ignore[call-overload]
[rank2]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate more than 1EB memory.
```
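For context on the failing allocation: `broadcast_object_list` pickles each object on the source rank, broadcasts the byte lengths, and then each receiving rank allocates a buffer of the received length, so a corrupted or mismatched length collective can request an absurd allocation like the 1EB seen here. A minimal stdlib sketch of that serialize-then-size pattern (function names are illustrative, not the c10d implementation):

```python
import pickle

def sender_encode(obj):
    # Source rank: serialize and publish (length, payload).
    payload = pickle.dumps(obj)
    return len(payload), payload

def receiver_decode(length, payload):
    # Other ranks: allocate a buffer of the broadcast length, then unpickle.
    buf = bytearray(length)   # this is the allocation that blows up if
    buf[:] = payload          # `length` arrives corrupted or mismatched
    return pickle.loads(bytes(buf))

# The `file_exists` bool being broadcast by ModelCheckpoint.
length, payload = sender_encode(True)
print(receiver_decode(length, payload))  # -> True
```

If ranks disagree on which collective they are participating in (e.g. one rank is still in a different broadcast), the "length" tensor can be filled with garbage, which would explain an allocation request far beyond any real object size.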
### Versions
PyTorch version: 2.3.0a0+6ddf5cf85e.nv24.04
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.29.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A800-SXM4-80GB
Nvidia driver version: 470.199.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 invpcid_single intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq md_clear pconfig spec_ctrl intel_stibp flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 96 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; Load fences, usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] cudnn==1.1.2
[pip3] efficientnet-pytorch==0.7.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] nvtx==0.2.5
[pip3] onnx==1.16.0
[pip3] onnxruntime==1.16.0
[pip3] optree==0.11.0
[pip3] pynvjitlink==0.1.13
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch-quantization==2.1.2
[pip3] pytorch-triton==3.0.0+a9bc1a364
[pip3] torch==2.3.0a0+6ddf5cf85e.nv24.4
[pip3] torch-scatter==2.1.2
[pip3] torch-tensorrt==2.3.0a0
[pip3] torchdata==0.7.1a0
[pip3] torchmetrics==1.4.2
[pip3] torchtext==0.17.0a0
[pip3] torchvision==0.18.0a0
[conda] Could not collect | true |
2,804,147,705 | The possible error in the pytorch documentation of RNN. | IOT-Duan | open | [
"module: rnn",
"triaged"
] | 0 | NONE | ### 📚 The doc issue
### 1. Where is the documentation?
URL: https://pytorch.org/docs/stable/generated/torch.nn.RNN.html#rnn
### 2. What is the possible error?
The documentation provides a piece of code, "Efficient implementation equivalent to the following with bidirectional=False", which is shown below:
```python
# Efficient implementation equivalent to the following with bidirectional=False
def forward(x, h_0=None):
if batch_first:
x = x.transpose(0, 1)
seq_len, batch_size, _ = x.size()
if h_0 is None:
h_0 = torch.zeros(num_layers, batch_size, hidden_size)
h_t_minus_1 = h_0
h_t = h_0
output = []
for t in range(seq_len):
for layer in range(num_layers):
h_t[layer] = torch.tanh(
x[t] @ weight_ih[layer].T
+ bias_ih[layer]
+ h_t_minus_1[layer] @ weight_hh[layer].T
+ bias_hh[layer]
)
output.append(h_t[-1])
h_t_minus_1 = h_t
output = torch.stack(output)
if batch_first:
output = output.transpose(0, 1)
return output, h_t
```
However, the piece of code **does not explain** the implementation of RNN correctly, because it uses `x[t]` as the input to compute `h_t[layer]` **in every RNN layer** at time `t`.
To compute `h_t[layer]` correctly, the input to each RNN layer at time `t` should be `x[t]` when `layer == 0` and `h_t[layer - 1]` when `layer > 0`, respectively.
Thus, the correct interpretation of the RNN implementation can be:
### 3. The code of possible correct interpretation of the RNN implementation
```python
def forward(x, h_0=None):
if batch_first:
x = x.transpose(0, 1)
seq_len, batch_size, _ = x.size()
if h_0 is None:
h_0 = torch.zeros(num_layers, batch_size, hidden_size)
h_t_minus_1 = h_0
h_t = h_0
output = []
for t in range(seq_len):
input_t = x[t]
for layer in range(num_layers):
h_t[layer] = torch.tanh(
input_t @ weight_ih[layer].T
+ bias_ih[layer]
                + h_t_minus_1[layer] @ weight_hh[layer].T
+ bias_hh[layer]
)
input_t = h_t[layer]
output.append(h_t[-1])
h_t_minus_1 = h_t
output = torch.stack(output)
if batch_first:
output = output.transpose(0, 1)
return output, h_t
```
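The difference between the two readings can be checked with a tiny pure-Python example (one time step, two layers, scalar weights chosen arbitrarily): feeding `x[t]` into every layer gives a different second-layer state than feeding layer 0's output into layer 1.

```python
import math

w_ih = [0.5, 0.7]    # input-to-hidden weight per layer (arbitrary values)
w_hh = [0.3, 0.2]    # hidden-to-hidden weight per layer
h_prev = [0.0, 0.0]  # previous hidden state per layer
x_t = 1.0

# Doc version: every layer consumes x[t].
h_doc = [math.tanh(x_t * w_ih[l] + h_prev[l] * w_hh[l]) for l in range(2)]

# Corrected version: layer l > 0 consumes the output of layer l - 1.
inp, h_fix = x_t, []
for l in range(2):
    h = math.tanh(inp * w_ih[l] + h_prev[l] * w_hh[l])
    h_fix.append(h)
    inp = h

print(h_doc[1] == h_fix[1])  # False: the two readings disagree at layer 1
```

Layer 0 agrees in both readings; only the stacked layers diverge, which is exactly the part of the documented code in question.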
### Suggest a potential alternative/fix
_No response_
cc @mikaylagawarecki | true |
2,804,135,595 | [ARM] Fix `test_float_to_int_conversion_nonfinite` | robert-hardwick | closed | [
"triaged",
"open source",
"module: arm",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11 | COLLABORATOR | We have broken tests on Aarch64 which are not enabled upstream, this PR will fix and enable those tests.
```
AssertionError: Tensor-likes are not equal!
Mismatched elements: 2 / 3 (66.7%)
Greatest absolute difference: 1 at index (1,)
Greatest relative difference: 1.0842021724855044e-19 at index (1,)
To execute this test, run the following from the base repo dir:
python test/test_tensor_creation_ops.py TestTensorCreationCPU.test_float_to_int_conversion_nonfinite_cpu_int64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
cc @malfet @snadampal @milpuz01 | true |
2,803,978,803 | removed check for ConvTranspose3D on MPS | mlaves | open | [
"triaged",
"open source",
"release notes: mps"
] | 15 | NONE | Fixes #130256
I removed `TORCH_CHECK(input_t.dim() < 5, "ConvTranspose 3D is not supported on MPS");` as it is actually supported. | true |
2,803,951,881 | No period in docstring of torch.compiler.disable | Tony-Y | closed | [
"module: docs",
"triaged",
"actionable"
] | 0 | CONTRIBUTOR | ### 📚 The doc issue
<img width="829" alt="Image" src="https://github.com/user-attachments/assets/0cc8b4fb-eb13-4ea9-9a09-51c30ff33d3b" />
### Suggest a potential alternative/fix
https://github.com/pytorch/pytorch/blob/3cbc8c54fd37eb590e2a9206aecf3ab568b3e63c/torch/compiler/__init__.py#L228-L231
At least, there should be a period at the end of line 230.
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke | true |
2,803,887,718 | DISABLED test_cache_hot_load_device_cuda_bfloat16_dynamic_False (__main__.TestFxGraphCache) | pytorch-bot[bot] | closed | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cache_hot_load_device_cuda_bfloat16_dynamic_False&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35972563562).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cache_hot_load_device_cuda_bfloat16_dynamic_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 363, in test_cache_hot_load
self.assertEqual(counters["inductor"]["fxgraph_cache_miss"], 2)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4028, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 2 but got 3.
Absolute difference: 1
Relative difference: 0.5
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_cache_hot_load_device_cuda_bfloat16_dynamic_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,803,887,595 | DISABLED test_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_grad_False (__main__.TestFxGraphCache) | pytorch-bot[bot] | closed | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_grad_False&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35961539277).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 5 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_grad_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 146, in test_cache_load_function
self.assertEqual(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4028, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 7 but got 14.
Absolute difference: 7
Relative difference: 1.0
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_grad_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,803,887,453 | DISABLED test_comprehensive_svd_lowrank_cuda_float32 (__main__.TestInductorOpInfoCUDA) | pytorch-bot[bot] | open | [
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 12 | NONE | Platforms: rocm, inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_svd_lowrank_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35964561116).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_svd_lowrank_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1156, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1444, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2262, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1620, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1542, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 950, in inner
raise e
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 942, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1189, in test_comprehensive
raise e
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1149, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 624, in check_model_gpu
check_model(
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 532, in check_model
assert strides_equal
AssertionError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3120, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3120, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_cuda.py", line 247, in wrapped
return f(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1620, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1168, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(3, 2), device="cuda:0", dtype=torch.float32], args=TensorList[Tensor[size=(3, 2), device="cuda:0", dtype=torch.float32]], kwargs={'q': '2', 'M': 'None'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_svd_lowrank_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,803,887,452 | DISABLED test_max_autotune_remote_caching_dynamic_False (__main__.TestMaxAutotuneRemoteCache) | pytorch-bot[bot] | open | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_max_autotune_remote_caching_dynamic_False&suite=TestMaxAutotuneRemoteCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35967125228).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 9 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_max_autotune_remote_caching_dynamic_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_max_autotune.py", line 1072, in test_max_autotune_remote_caching
self.assertEqual(global_stats.autotune_remote, Stats(2, 3, 2))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4028, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Object comparison failed: _GlobalItemStats(num_put=4, num_get_hit=2, num_get_miss=4) != Stats(num_put=2, num_get_hit=3, num_get_miss=2)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_max_autotune.py TestMaxAutotuneRemoteCache.test_max_autotune_remote_caching_dynamic_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_max_autotune.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,803,886,715 | DISABLED test_max_autotune_remote_caching_dynamic_False (__main__.TestMaxAutotuneRemoteCache) | pytorch-bot[bot] | closed | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_max_autotune_remote_caching_dynamic_False&suite=TestMaxAutotuneRemoteCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35970261825).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 10 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_max_autotune_remote_caching_dynamic_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_max_autotune.py", line 1072, in test_max_autotune_remote_caching
self.assertEqual(global_stats.autotune_remote, Stats(2, 3, 2))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4028, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Object comparison failed: _GlobalItemStats(num_put=4, num_get_hit=2, num_get_miss=4) != Stats(num_put=2, num_get_hit=3, num_get_miss=2)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_max_autotune.py TestMaxAutotuneRemoteCache.test_max_autotune_remote_caching_dynamic_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_max_autotune.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,803,886,610 | DISABLED test_linear_and_cel_max_autotune (__main__.InplacePaddingTest) | pytorch-bot[bot] | open | [
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1 | NONE | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_linear_and_cel_max_autotune&suite=InplacePaddingTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35974314707).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_linear_and_cel_max_autotune`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_inplace_padding.py`
cc @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,803,775,474 | Fix avg_pool crash with negative numbers | HarryWangATX | open | [
"module: cpu",
"triaged",
"open source",
"Stale",
"release notes: quantization"
] | 4 | NONE | Fixes #145077
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | true |
2,803,674,921 | FP8: E5M2: The FP8 E5M2 result is not `inf` when casting a FP32 value larger than max normal value of FP8 E5M2 (57344) | fengyuan14 | open | [
"module: docs",
"triaged",
"module: NaNs and Infs",
"module: float8"
] | 2 | COLLABORATOR | ### 🐛 Describe the bug
See the case,
```
>>> import torch
>>> a = torch.tensor(60000, dtype=torch.float)
>>> b = a.to(torch.float8_e5m2)
>>> b
tensor(57344., dtype=torch.float8_e5m2)
```
In theory, the max normal value of fp8 e5m2 is 57344. Any values above 57344 will be represented with fp8 e5m2 `inf`.
https://github.com/pytorch/pytorch/blob/3cbc8c54fd37eb590e2a9206aecf3ab568b3e63c/c10/util/Float8_e5m2.h#L91
The code shows the fp8 value will be `inf` or `nan` if the input fp32 value is larger than 65536, which is the first value not representable in fp8 e5m2. In other words, values between 57344 and 65536 will go to the else branch.
BTW, even though the boundary is 65536 in the PyTorch implementation, I found:
```
>>> a = torch.tensor(61440, dtype=torch.float)
>>> b = a.to(torch.float8_e5m2)
>>> b
tensor(inf, dtype=torch.float8_e5m2)
```
61440 in fp32 is converted to `inf` in fp8 e5m2.
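The boundary behavior above is consistent with round-to-nearest-even at the e5m2 precision. The following pure-Python sketch (an illustration of that rounding rule, not PyTorch's actual conversion code) reproduces both observations: 60000 sits below the halfway point 61440 and rounds down to the max normal 57344, while 61440 is exactly halfway and ties-to-even sends it up to 65536, i.e. `inf`:

```python
import math

def fp32_to_e5m2(x: float) -> float:
    # e5m2: 5 exponent bits (bias 15), 2 mantissa bits.
    # Max normal = 0b1.11 * 2**15 = 57344; the next step up would be
    # 2**16 = 65536, which is not representable and becomes inf.
    if x != x or math.isinf(x):
        return x
    sign = -1.0 if x < 0 else 1.0
    x = abs(x)
    if x == 0.0:
        return 0.0
    e = max(math.floor(math.log2(x)), -14)  # clamp to smallest normal exponent
    ulp = 2.0 ** (e - 2)                    # spacing of representable values
    q = round(x / ulp) * ulp                # Python round() is half-to-even
    if q > 57344.0:
        return sign * math.inf
    return sign * q

print(fp32_to_e5m2(60000.0))   # 57344.0 (below the 61440 midpoint)
print(fp32_to_e5m2(61440.0))   # inf     (exact midpoint, ties to even)
```

Under this reading, the `>= 65536` branch in `Float8_e5m2.h` only handles inputs that are already past the last midpoint; values in (57344, 61440) are handled by ordinary mantissa rounding, which is why they saturate to 57344 rather than becoming `inf`.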
### Versions
Latest main branch.
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @yanbing-j @vkuzo @albanD @kadeng @penguinwu | true |
2,803,609,111 | [autograd] inconsistent jvp results | Luciennnnnnn | open | [
"module: autograd",
"triaged",
"module: functorch"
] | 2 | NONE | ### 🐛 Describe the bug
I have two implementations of an isometry loss function that uses Jacobian-vector products (JVP), but they're producing different gradients:
```python
import torch
vae = VAEModel()
vae.to("cuda")
func = lambda z: vae.decode(z, return_dict=False)[0]
input = torch.randn(1, 8, 8, 8, device="cuda")
u = torch.randn_like(input, device=input.device)
def iso_loss1():
Ju = torch.autograd.functional.jvp(func, input, u, create_graph=True)[1]
TrR = torch.sum(Ju.float() ** 2, dim=tuple(range(1, Ju.dim()))).mean()
isometry_loss = TrR
return isometry_loss
def iso_loss2():
Ju = torch.func.jvp(func, (input,), (u,))[1]
TrR = torch.sum(Ju.float() ** 2, dim=tuple(range(1, Ju.dim()))).mean()
isometry_loss = TrR
return isometry_loss
def compare_grads():
vae.zero_grad()
loss1 = iso_loss1()
loss1.backward()
grads1 = {name: param.grad.clone() for name, param in vae.decoder.named_parameters() if param.grad is not None}
vae.zero_grad()
loss2 = iso_loss2()
loss2.backward()
grads2 = {name: param.grad.clone() for name, param in vae.decoder.named_parameters() if param.grad is not None}
print(f"Loss1: {loss1.item()}")
print(f"Loss2: {loss2.item()}")
max_diff = 0
for name in grads1:
print(f"{grads1[name]=} {grads2[name]=}")
diff = (grads1[name] - grads2[name]).abs().max().item()
print(f"Max gradient difference for {name}: {diff}")
max_diff = max(max_diff, diff)
break
print(f"\nMaximum gradient difference across all parameters: {max_diff}")
compare_grads()
```
The original implementation (iso_loss1) uses `torch.autograd.functional.jvp`, which is computationally expensive as it involves two vector-Jacobian product (VJP) calculations under the hood. To improve performance, I attempted to switch to `torch.func.jvp`, which uses a more efficient forward-mode implementation.
However, I've noticed two concerning issues:
1. The gradients produced by these two loss implementations differ.
2. Unlike `torch.autograd.functional.jvp`, `torch.func.jvp` doesn't provide a `create_graph=True` parameter
This raises the question: Is `torch.func.jvp` not intended for use in network optimization scenarios? I'd appreciate any insights into this behavior and guidance on the proper approach to use.
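For intuition on what "forward-mode" means here, the dual-number evaluation that forward-mode AD performs can be sketched in a few lines of pure Python (this is an illustration of the mechanism only, not PyTorch's implementation, and says nothing about why the two PyTorch APIs disagree):

```python
from dataclasses import dataclass

@dataclass
class Dual:
    # Forward-mode AD carries a primal value and a tangent together,
    # so one forward pass yields the JVP directly.
    val: float
    tan: float

    def __mul__(self, other: "Dual") -> "Dual":
        # Product rule propagated through the tangent component.
        return Dual(self.val * other.val,
                    self.val * other.tan + self.tan * other.val)

    def __add__(self, other: "Dual") -> "Dual":
        return Dual(self.val + other.val, self.tan + other.tan)

def f(x):
    return x * x * x  # f(x) = x^3, so J(x) v = 3 x^2 v

x, v = 2.0, 1.5
jvp = f(Dual(x, v)).tan
print(jvp)  # 3 * 2**2 * 1.5 = 18.0
```

`torch.autograd.functional.jvp` instead obtains the same quantity via two reverse-mode (VJP) passes, which is why it is more expensive; in exact arithmetic both formulations agree, so any gradient difference between the two loss implementations would come from implementation details rather than the math.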
### Versions
N/A
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @zou3519 @Chillee @samdow @kshitij12345 | true |
2,803,586,493 | Missing docs for `torch._foreach_copy_` | zeshengzong | closed | [
"module: docs",
"triaged",
"needs design",
"module: mta"
] | 4 | CONTRIBUTOR | ### 📚 The doc issue
There's an implementation of `torch._foreach_copy_`, but the docs that would let users know about it seem to be missing.
```python
>>> a = torch.randn(3,3)
>>> b = torch.randn(3,3)
>>> c = torch.zeros(3,3)
>>> d = torch.zeros(3,3)
>>> torch._foreach_copy_([c,d], [a,b])
[tensor([[ 0.6597, -0.1195, 0.2595],
[ 0.0301, 0.3752, 0.3226],
[-0.9088, 0.9146, 0.7712]]), tensor([[-1.7291, 1.4956, -0.1839],
[-0.3988, 0.1179, -1.6674],
[ 0.6873, -0.1709, -0.0677]])]
```
Search in [pytorch document](https://pytorch.org/docs/main/search.html?q=_foreach_copy_&check_keywords=yes&area=default#)

### Suggest a potential alternative/fix
_No response_
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @crcrpar @mcarilli @janeyx99 | true |
2,803,558,946 | enhance logging statically known by adding size_oblivious(..) | laithsakka | closed | [
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145354
after the diff
```
[0/0_1] eval size_oblivious(Eq(s1, 1)) == False [statically known]
[0/0_1] eval size_oblivious(Eq(u0, 1)) == False [statically known]
[0/0_1] eval size_oblivious(Eq(s0, 1)) == False [statically known]
[0/0_1] eval size_oblivious(Eq(s0*s1*u0, 0)) == False [statically known]
```
before
```
[0/0_1] eval (Eq(s1, 1)) == False [statically known]
[0/0_1] eval (Eq(u0, 1)) == False [statically known]
[0/0_1] eval (Eq(s0, 1)) == False [statically known]
[0/0_1] eval (Eq(s0*s1*u0, 0)) == False [statically known]
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,803,547,260 | [dtensor][cp] experiment: call flex_attention on DTensor | XilunWu | open | [
"oncall: distributed",
"ciflow/inductor"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147603
* #147517
* #147516
* #147515
* #147514
* __->__ #145353
```
File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 459, in flex_attention_fake_impl
out = _permute_strides(out, query.stride())
File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 70, in _permute_strides
new_out = out.new_empty(out.shape).as_strided(out.shape, out_strides)
File "/data/users/xilunwu/oss/pytorch/torch/_compile.py", line 51, in inner
return disable_fn(*args, **kwargs)
File "/data/users/xilunwu/oss/pytorch/torch/_dynamo/eval_frame.py", line 745, in _fn
return fn(*args, **kwargs)
File "/data/users/xilunwu/oss/pytorch/torch/distributed/tensor/_api.py", line 348, in __torch_dispatch__
return DTensor._op_dispatcher.dispatch(
File "/data/users/xilunwu/oss/pytorch/torch/distributed/tensor/_dispatch.py", line 174, in dispatch
self.sharding_propagator.propagate(op_info)
File "/data/users/xilunwu/oss/pytorch/torch/distributed/tensor/_sharding_prop.py", line 207, in propagate
OutputSharding, self.propagate_op_sharding(op_info.schema)
File "/data/users/xilunwu/oss/pytorch/torch/distributed/tensor/_sharding_prop.py", line 47, in __call__
return self.cache(*args, **kwargs)
File "/data/users/xilunwu/oss/pytorch/torch/distributed/tensor/_sharding_prop.py", line 456, in propagate_op_sharding_non_cached
raise NotImplementedError(
torch._dynamo.exc.InternalTorchDynamoError: NotImplementedError: Operator aten.as_strided.default does not have a sharding strategy registered.
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,803,533,138 | DISABLED test_extern (__main__.NumBytesMetricTests) | pytorch-bot[bot] | open | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_extern&suite=NumBytesMetricTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35959227973).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_extern`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_perf.py", line 152, in test_extern
self.assertExpectedInline(count_numel(f, *inp), """200""")
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3066, in assertExpectedInline
return super().assertExpectedInline(actual if isinstance(actual, str) else str(actual), expect, skip + 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 413, in assertExpectedInline
assert_expected_inline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 378, in assert_expected_inline
assert_eq(expect, actual, msg=help_text)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 450, in assertMultiLineEqualMaybeCppStack
self.assertMultiLineEqual(expect, actual, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1226, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: '200' != '820'
- 200
? -
+ 820
? +
: To accept the new output, re-run test with envvar EXPECTTEST_ACCEPT=1 (we recommend staging/committing your changes before doing this)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_perf.py NumBytesMetricTests.test_extern
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_perf.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,803,499,936 | [dynamo] Support fx map_aggregate | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145132
* #145420
* __->__ #145351
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,803,421,635 | [CUDA] Illegal Memory Access with `ReplicationPad2D` | jwnhy | open | [
"module: nn",
"module: cuda",
"triaged",
"module: edge cases",
"topic: fuzzer"
] | 0 | NONE | ### 🐛 Describe the bug
This was found by a fuzzer.
```python
import torch
m1 = torch.randn(1, 4484, 2).cuda()
model = torch.nn.ReplicationPad2d((0, 0, 0, 1826029949)).cuda()
model(m1)
```
```bash
compute-sanitizer python3 poc.py
```
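A plausible cause (a hypothesis only, not confirmed by the trace below) is that the padded output holds more elements than a signed 32-bit index can address, so 32-bit indexing in the CUDA kernel would wrap and write out of bounds. A quick arithmetic check on the repro's shapes:

```python
# Hypothesis: the padded output's element count exceeds INT32_MAX.
INT32_MAX = 2**31 - 1                     # 2147483647
n, h, w = 1, 4484, 2                      # input shape from the repro
pad_bottom = 1826029949                   # ReplicationPad2d bottom padding
out_elems = n * (h + pad_bottom) * w      # elements in the padded output
print(out_elems, out_elems > INT32_MAX)   # 3652068866 True
```

If this is the mechanism, the kernel's index computation (or a size check before launch) would need to be widened to 64-bit, or the op should reject padding that produces such outputs.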
compute-sanitizer log
```
========= COMPUTE-SANITIZER
========= Invalid __global__ write of size 4 bytes
========= at void at::native::<unnamed>::replication_pad_forward_kernel2d<float>(at::GenericPackedTensorAccessor<const T1, (unsigned long)4, at::DefaultPtrTraits, long>, at::GenericPackedTensorAccessor<T1, (unsigned long)4, at::DefaultPtrTraits, long>, int, int, int, int)+0x7f0
========= by thread (224,0,0) in block (8388906,0,0)
========= Address 0x79e0d604ab80 is out of bounds
========= and is 8,589,628,544 bytes before the nearest allocation at 0x79e2d6000000 of size 14,608,760,832 bytes
========= Saved host backtrace up to driver entry point at kernel launch time
========= Host Frame: [0x2dfbef]
========= in /lib/x86_64-linux-gnu/libcuda.so.1
========= Host Frame: [0x15803]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/../../../../libcudart.so.12
========= Host Frame:cudaLaunchKernel [0x75230]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/../../../../libcudart.so.12
========= Host Frame:at::native::structured_replication_pad2d_out_cuda::impl(at::Tensor const&, c10::ArrayRef<long>, at::Tensor const&) [0x279746f]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:at::(anonymous namespace)::wrapper_CUDA_replication_pad2d(at::Tensor const&, c10::ArrayRef<long>) [0x36007dc]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, c10::ArrayRef<long>), &at::(anonymous namespace)::wrapper_CUDA_replication_pad2d>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, c10::ArrayRef<long> > >, at::Tensor (at::Tensor const&, c10::ArrayRef<long>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>) [0x3600882]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:at::_ops::replication_pad2d::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x240eb8c]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:torch::autograd::VariableType::(anonymous namespace)::replication_pad2d(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x48445f8]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>), &torch::autograd::VariableType::(anonymous namespace)::replication_pad2d>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt> > >, at::Tensor (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x4844c25]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:at::_ops::replication_pad2d::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x246806e]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:at::native::_pad_enum_symint(at::Tensor const&, c10::ArrayRef<c10::SymInt>, long, std::optional<double>) [0x1ba579c]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:at::native::pad_symint(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::basic_string_view<char>, std::optional<double>) [0x1ba5df7]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::basic_string_view<char>, std::optional<double>), &at::(anonymous namespace)::(anonymous namespace)::wrapper_CompositeImplicitAutograd__pad>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::basic_string_view<char>, std::optional<double> > >, at::Tensor (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::basic_string_view<char>, std::optional<double>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::basic_string_view<char>, std::optional<double>) [0x2d3c898]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:at::_ops::pad::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::basic_string_view<char>, std::optional<double>) [0x24909b5]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:torch::autograd::THPVariable_pad(_object*, _object*, _object*) [0x7732e3]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_python.so
========= Host Frame:cfunction_call in /usr/local/src/conda/python-3.12.7/Objects/methodobject.c:537 [0x149d53]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:_PyObject_MakeTpCall in /usr/local/src/conda/python-3.12.7/Objects/call.c:240 [0x11af9a]
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.11.0-1007-oem-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 PCIe
Nvidia driver version: 560.35.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) SILVER 4510
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
Stepping: 8
CPU(s) scaling MHz: 37%
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd sgx_lc fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.1 MiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 48 MiB (24 instances)
L3 cache: 60 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.4.127 0 nvidia
[conda] cuda-cupti 12.4.127 0 nvidia
[conda] cuda-libraries 12.4.1 0 nvidia
[conda] cuda-nvrtc 12.4.127 0 nvidia
[conda] cuda-nvtx 12.4.127 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.4.1 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.4.5.8 0 nvidia
[conda] libcufft 11.2.1.3 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.6.1.9 0 nvidia
[conda] libcusparse 12.3.1.170 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.4.127 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.11 py312h5eee18b_0
[conda] mkl_random 1.2.8 py312h526ad5a_0
[conda] numpy 2.1.3 py312hc5e2394_0
[conda] numpy-base 2.1.3 py312h0da6c21_0
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] pytorch 2.5.1 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_7 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.5.1 py312_cu124 pytorch
[conda] torchtriton 3.1.0 py312 pytorch
[conda] torchvision 0.20.1 py312_cu124 pytorch
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ptrblck @msaroufim @eqy | true |
2,803,395,105 | [CUDA] Illegal Memory Access with `AdaptiveAvgPool2d` | jwnhy | open | [
"module: nn",
"module: cuda",
"triaged",
"module: edge cases",
"topic: fuzzer"
] | 1 | NONE | ### 🐛 Describe the bug
```python
import torch
m1 = torch.randn(40, 40, 40).cuda()
model = torch.nn.AdaptiveAvgPool2d(output_size=[1, 67108607]).cuda()
model(m1)
```
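A back-of-envelope check (my own analysis, not confirmed against the kernel source) suggests the requested output exceeds 32-bit indexing, which would be consistent with the out-of-bounds write; note the sanitizer backtrace shows the kernel takes plain `int` size parameters:

```python
# Rough overflow check (hypothetical analysis, not verified in the CUDA
# kernel source): the output of AdaptiveAvgPool2d here has more elements
# than INT32_MAX, so 32-bit index arithmetic would wrap around.
batch, out_h, out_w = 40, 1, 67108607  # output shape for this repro
out_numel = batch * out_h * out_w
print(out_numel)              # 2684344280
print(out_numel > 2**31 - 1)  # True: exceeds INT32_MAX (2147483647)
```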
```bash
compute-sanitizer python3 poc.py
```
Sanitizer Backtrace:
```
========= Invalid __global__ write of size 4 bytes
========= at void at::native::<unnamed>::adaptive_average_pool<float>(const T1 *, T1 *, int, int, int, int, long, long, long)+0x1dc0
========= by thread (0,0,0) in block (35,0,0)
========= Address 0x738041ff7374 is out of bounds
========= and is 7,784,664,204 bytes before the nearest allocation at 0x738212000000 of size 10,737,418,240 bytes
========= Saved host backtrace up to driver entry point at kernel launch time
========= Host Frame: [0x2dfbef]
========= in /lib/x86_64-linux-gnu/libcuda.so.1
========= Host Frame: [0x15803]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/../../../../libcudart.so.12
========= Host Frame:cudaLaunchKernel [0x75230]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/../../../../libcudart.so.12
========= Host Frame:at::native::(anonymous namespace)::adaptive_avg_pool2d_out_cuda_template(at::Tensor&, at::Tensor const&, c10::ArrayRef<long>) [0x15fc38d]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:at::native::adaptive_avg_pool2d_cuda(at::Tensor const&, c10::ArrayRef<long>) [0x15fd909]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:at::(anonymous namespace)::(anonymous namespace)::wrapper_CUDA___adaptive_avg_pool2d(at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x3569d28]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, c10::ArrayRef<c10::SymInt>), &at::(anonymous namespace)::(anonymous namespace)::wrapper_CUDA___adaptive_avg_pool2d>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, c10::ArrayRef<c10::SymInt> > >, at::Tensor (at::Tensor const&, c10::ArrayRef<c10::SymInt>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x3569df2]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:at::_ops::_adaptive_avg_pool2d::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x28900be]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:torch::autograd::VariableType::(anonymous namespace)::_adaptive_avg_pool2d(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x4aed88d]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>), &torch::autograd::VariableType::(anonymous namespace)::_adaptive_avg_pool2d>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt> > >, at::Tensor (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x4aeddd5]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:at::_ops::_adaptive_avg_pool2d::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x28c531e]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:at::native::adaptive_avg_pool2d_symint(at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x18b58a9]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, c10::ArrayRef<c10::SymInt>), &at::(anonymous namespace)::(anonymous namespace)::wrapper_CompositeImplicitAutograd__adaptive_avg_pool2d>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, c10::ArrayRef<c10::SymInt> > >, at::Tensor (at::Tensor const&, c10::ArrayRef<c10::SymInt>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x2d3c7e2]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:at::_ops::adaptive_avg_pool2d::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x27a4b7e]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:torch::autograd::THPVariable_adaptive_avg_pool2d(_object*, _object*, _object*) [0x776589]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_python.so
========= Host Frame:cfunction_call in /usr/local/src/conda/python-3.12.7/Objects/methodobject.c:537 [0x149d53]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:_PyObject_MakeTpCall in /usr/local/src/conda/python-3.12.7/Objects/call.c:240 [0x11af9a]
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.11.0-1007-oem-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 PCIe
Nvidia driver version: 560.35.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) SILVER 4510
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
Stepping: 8
CPU(s) scaling MHz: 37%
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd sgx_lc fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.1 MiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 48 MiB (24 instances)
L3 cache: 60 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.4.127 0 nvidia
[conda] cuda-cupti 12.4.127 0 nvidia
[conda] cuda-libraries 12.4.1 0 nvidia
[conda] cuda-nvrtc 12.4.127 0 nvidia
[conda] cuda-nvtx 12.4.127 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.4.1 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.4.5.8 0 nvidia
[conda] libcufft 11.2.1.3 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.6.1.9 0 nvidia
[conda] libcusparse 12.3.1.170 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.4.127 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.11 py312h5eee18b_0
[conda] mkl_random 1.2.8 py312h526ad5a_0
[conda] numpy 2.1.3 py312hc5e2394_0
[conda] numpy-base 2.1.3 py312h0da6c21_0
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] pytorch 2.5.1 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_7 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.5.1 py312_cu124 pytorch
[conda] torchtriton 3.1.0 py312 pytorch
[conda] torchvision 0.20.1 py312_cu124 pytorch
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ptrblck @msaroufim @eqy | true |
2,803,377,044 | [inductor][2/N] triton support post-#5512, user-defined triton kernels | davidberard98 | closed | [
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 11 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145515
* __->__ #145348
* #145051
Triton commit 5220 adds tuple support in Triton (changing the indexing format in AttrsDescriptor) and commit 5512 replaces AttrsDescriptor with raw tuples. This PR fixes user-defined triton kernel handling (in most cases) for these new triton commits.
What this PR fixes:
* in triton_kernel_wrap.py, AST->TTIR parsing had to be updated for the new triton API
* ir.py - don't remove None args when using newer triton versions
* wrapper.py - update signature & constant handling
What this doesn't fix:
* correct None handling - I want to do a closer look at constant handling (including None, equal_to_1, and other constants).
* cpp wrapper (which needs to be fixed for both user-defined triton kernels and inductor-generated kernels)
test/inductor/test_triton_kernels.py passed on triton commit 74de6b46, with the exception of three tests (those shown here: https://github.com/pytorch/pytorch/pull/145348/commits/1374074098fa9e9ae4921b46be8d52f2a85b8a01)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,803,374,767 | Fix deprecated pytorch_sphinx_theme editable installation in PyTorch CI | huydhn | closed | [
"Merged",
"Reverted",
"topic: not user facing",
"ciflow/nightly",
"test-config/default",
"ci-no-td"
] | 7 | CONTRIBUTOR | Fixes https://github.com/pytorch/pytorch/issues/145221
~~Pip editable install is going to be deprecated soon https://github.com/pypa/pip/issues/11457. The fix here is just to remove it and install `pytorch_sphinx_theme` normally.~~
It turns out that `-e git+https://github.com/pytorch/pytorch_sphinx_theme.git#egg=pytorch_sphinx_theme` ships some local resources, like fonts, that are needed to render the HTML pages. So we need to keep it, and I add `--use-pep517` to properly support `-e`. Another approach would be to update PyTorch's pyproject.toml, but that change seems to have much wider implications than just installing doc build requirements.
### Testing
Doc build is working with the change:
* PR https://github.com/pytorch/pytorch/actions/runs/12901499736/job/35975042345?pr=145347
* Nightly https://github.com/pytorch/pytorch/actions/runs/12901500521/job/35975046289 | true |
2,803,297,814 | DISABLED test_graph_break_inside_ctx_with_side_effects (__main__.ContextlibContextManagerTests) | pytorch-bot[bot] | closed | [
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 5 | NONE | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_graph_break_inside_ctx_with_side_effects&suite=ContextlibContextManagerTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35960839362).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_graph_break_inside_ctx_with_side_effects`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_ctx_manager.py", line 2051, in test_graph_break_inside_ctx_with_side_effects
self.assertEqual(len(eager.graphs), 0)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4028, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 0 but got 1.
Absolute difference: 1
Relative difference: inf
To execute this test, run the following from the base repo dir:
python test/dynamo/test_ctx_manager.py ContextlibContextManagerTests.test_graph_break_inside_ctx_with_side_effects
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_ctx_manager.py`
cc @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,803,297,782 | DISABLED test_partitioning_with_view (__main__.MinCutPartitioningTests) | pytorch-bot[bot] | open | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_partitioning_with_view&suite=MinCutPartitioningTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35951349745).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 5 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_partitioning_with_view`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_perf.py", line 776, in test_partitioning_with_view
self.assertExpectedInline(count_numel_train(f, *inp), """900""")
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3066, in assertExpectedInline
return super().assertExpectedInline(actual if isinstance(actual, str) else str(actual), expect, skip + 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 413, in assertExpectedInline
assert_expected_inline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 378, in assert_expected_inline
assert_eq(expect, actual, msg=help_text)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 450, in assertMultiLineEqualMaybeCppStack
self.assertMultiLineEqual(expect, actual, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1226, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: '900' != '1520'
- 900
+ 1520
: To accept the new output, re-run test with envvar EXPECTTEST_ACCEPT=1 (we recommend staging/committing your changes before doing this)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_perf.py MinCutPartitioningTests.test_partitioning_with_view
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_perf.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,803,297,781 | DISABLED test_cat (__main__.NumBytesMetricTests) | pytorch-bot[bot] | closed | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cat&suite=NumBytesMetricTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35951074517).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cat`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_perf.py", line 207, in test_cat
self.assertExpectedInline(count_numel(f, *inp), """400""")
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3066, in assertExpectedInline
return super().assertExpectedInline(actual if isinstance(actual, str) else str(actual), expect, skip + 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 413, in assertExpectedInline
assert_expected_inline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 378, in assert_expected_inline
assert_eq(expect, actual, msg=help_text)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 450, in assertMultiLineEqualMaybeCppStack
self.assertMultiLineEqual(expect, actual, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1226, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: '400' != '1264'
- 400
+ 1264
: To accept the new output, re-run test with envvar EXPECTTEST_ACCEPT=1 (we recommend staging/committing your changes before doing this)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_perf.py NumBytesMetricTests.test_cat
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_perf.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,803,294,113 | DISABLED test_partitioning_unremat_bw (__main__.MinCutPartitioningTests) | pytorch-bot[bot] | open | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 7 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_partitioning_unremat_bw&suite=MinCutPartitioningTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35952027696).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 5 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_partitioning_unremat_bw`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_perf.py", line 718, in test_partitioning_unremat_bw
self.assertExpectedInline(count_numel_train(f, *inp), """1300""")
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3066, in assertExpectedInline
return super().assertExpectedInline(actual if isinstance(actual, str) else str(actual), expect, skip + 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 413, in assertExpectedInline
assert_expected_inline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 378, in assert_expected_inline
assert_eq(expect, actual, msg=help_text)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 450, in assertMultiLineEqualMaybeCppStack
self.assertMultiLineEqual(expect, actual, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1226, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: '1300' != '1720'
- 1300
+ 1720
: To accept the new output, re-run test with envvar EXPECTTEST_ACCEPT=1 (we recommend staging/committing your changes before doing this)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_perf.py MinCutPartitioningTests.test_partitioning_unremat_bw
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_perf.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,803,275,661 | PEP585: Missed conversions | aorenste | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"keep-going",
"suppress-bc-linter",
"release notes: optim"
] | 7 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145342
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
Differential Revision: [D68785969](https://our.internmc.facebook.com/intern/diff/D68785969) | true |
2,803,227,004 | [MPSInductor] Add `gamma` op | malfet | closed | [
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145341
By moving the `gamma` and `log_gamma` implementations from `Gamma.metal` to `c10/metal/special_math.h`
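As a side note (my own illustration, not from the PR): for positive integers, Γ(n) = (n−1)!, so Python's `math.gamma`/`math.lgamma` give a quick CPU reference for the values the Metal kernels should reproduce:

```python
import math

# Reference values for the gamma / log-gamma special functions (illustrative
# only; the PR's kernels are Metal shaders, this just shows the math).
# Gamma(n) = (n - 1)! for positive integers, so Gamma(5) = 4! = 24.
assert math.isclose(math.gamma(5.0), 24.0)
assert math.isclose(math.lgamma(5.0), math.log(24.0))
print(math.lgamma(5.0))  # ~3.178 (= ln 24)
```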
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,803,216,713 | internal compiler error: in extract_insn when compiling pytorch with xpu with gcc 12 | jingxu10 | closed | [
"triaged",
"module: xpu"
] | 1 | COLLABORATOR | ### 🐛 Describe the bug
As the title says, compiling PyTorch with XPU support fails with the error below. A CPU-only build succeeds.
```
...
/opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/detail/builtins/builtins.hpp:235:1: warning: multi-line comment [-Wcomment]
235 | // clang++ -[DU]__SYCL_DEVICE_ONLY__ -x c++ math_functions.inc \
| ^
In file included from /usr/include/c++/12/functional:59,
from /root/pytorch/c10/util/string_view.h:6,
from /root/pytorch/c10/util/StringUtil.h:6,
from /root/pytorch/c10/util/Exception.h:8,
from /root/pytorch/aten/src/ATen/BlasBackend.h:3,
from /root/pytorch/aten/src/ATen/Context.h:3:
/usr/include/c++/12/bits/std_function.h: In static member function 'static _Res std::_Function_handler<_Res(_ArgTypes ...), _Functor>::_M_invoke(const std::_Any_data&, _ArgTypes&& ...) [with _Res = void; _Functor = sycl::_V1::handler::ResetHostKernel<at::native::xpu::VectorizedElementwiseKernel<8, at::native::xpu::SignbitFunctor<c10::BFloat16>, at::detail::Array<char*, 2>, TrivialOffsetCalculator<1, unsigned int> >, sycl::_V1::nd_item<1>, 1>(const at::native::xpu::VectorizedElementwiseKernel<8, at::native::xpu::SignbitFunctor<c10::BFloat16>, at::detail::Array<char*, 2>, TrivialOffsetCalculator<1, unsigned int> >&)::NormalizedKernelType; _ArgTypes = {const sycl::_V1::nd_item<1>&}]':
/usr/include/c++/12/bits/std_function.h:292:7: error: unrecognizable insn:
292 | }
| ^
(insn 21 20 22 4 (set (reg:V2SI 87 [ vect__71.47795 ])
(lshiftrt:V2SI (subreg:V2SI (subreg:V2SF (reg:V2SI 118 [ vect__69.47793 ]) 0) 0)
(const_int 31 [0x1f]))) "/usr/include/c++/12/cmath":662:29 -1
(nil))
during RTL pass: vregs
/usr/include/c++/12/bits/std_function.h:292:7: internal compiler error: in extract_insn, at recog.cc:2791
0x1b3ed3a internal_error(char const*, ...)
???:0
0x6a22ba fancy_abort(char const*, int, char const*)
???:0
0x67affc _fatal_insn(char const*, rtx_def const*, char const*, int, char const*)
???:0
0x67b01e _fatal_insn_not_found(rtx_def const*, char const*, int, char const*)
???:0
Please submit a full bug report, with preprocessed source (by using -freport-bug).
Please include the complete backtrace with any bug report.
See <file:///usr/share/doc/gcc-12/README.Bugs> for instructions.
CMake Error at torch_xpu_ops_sycl_unary_binary_kernels_generated_UnarySignKernels.cpp.o.Release.cmake:145 (message):
Error generating file
/root/pytorch/build/caffe2/aten_xpu/src/CMakeFiles/torch_xpu_ops_sycl_unary_binary_kernels.dir/ATen/native/xpu/sycl/./torch_xpu_ops_sycl_unary_binary_kernels_generated_UnarySignKernels.cpp.o
...
```
### Versions
```
(xpu) root@2649cb81ee38:~# python collect_env.py
Collecting environment information...
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Intel GPU driver version:
* intel_opencl: 24.45.31740.15-1057~22.04
* level_zero: 1.18.5.0-1055~22.04
```
cc @gujinghui @EikanWang @fengyuan14 @guangyey | true |
2,803,160,156 | Add MKLDNN support for Half GELU | CaoE | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"ciflow/inductor",
"ciflow/linux-aarch64"
] | 6 | COLLABORATOR | Add MKLDNN support for Half GELU to align with BFloat16. | true |
2,803,158,527 | [S481486] [MTIA] Correct mtia.device_count() API | chaos5958 | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | CONTRIBUTOR | Summary:
Prev: Count the number of "general" accelerators
Curr: Count the number of MTIA devices by using the MTIA runtime API
Test Plan:
```
buck test //mtia/host_runtime/torch_mtia/tests:test_torch_mtia_api -- -r test_get_device_count
```
https://www.internalfb.com/intern/testinfra/testrun/8162774572631995
Reviewed By: BoyueZheng
Differential Revision: D68472668
| true |
2,803,139,096 | [libTorch] Model initialization on multi-device is slow. It seems to run sequentially in multi-thread | thammegowda | open | [
"module: cpp",
"triaged"
] | 1 | NONE | > Originally posted at https://discuss.pytorch.org/t/x/215093
I am using libTorch for inference on multiple GPU devices. I use one thread per device to initialize the model and then to run inference. Inference (i.e. `forward()`) works fast as expected; however, the initialization step seems to run sequentially across the threads. Once initialization is complete, the rest of the code runs concurrently as expected. This is problematic for bigger models, where each thread takes several minutes. How can I initialize models on multiple devices concurrently using libTorch?
Here is a minimal, reproducible example:
```cpp
#include <torch/torch.h>
#include <spdlog/spdlog.h>
using namespace torch;
namespace nn = torch::nn;
const torch::Device DEVICE = torch::Device(torch::cuda::is_available() ? torch::kCUDA : torch::kCPU);
// a dummy model for demonstration
struct NetImpl : nn::Module {
nn::Sequential layers;
NetImpl(std::vector<int64_t> sizes, torch::Device device = DEVICE)
: layers{ register_module("layers", torch::nn::Sequential()) }
{
for (size_t i = 0; i < sizes.size() - 1; i++) {
layers->push_back(nn::Linear(sizes[i], sizes[i + 1]));
layers->push_back(nn::Functional(torch::relu));
}
this->to(device);
}
auto forward(Tensor x) -> Tensor {
x = layers->forward(x);
return x;
}
};
TORCH_MODULE(Net);
struct Timer {
std::string name;
std::chrono::time_point<std::chrono::high_resolution_clock> start;
Timer(std::string name="")
: name {name}, start {std::chrono::high_resolution_clock::now()}
{
spdlog::info("Timer {} started", name);
}
double elapsed() {
auto now = std::chrono::high_resolution_clock::now();
return std::chrono::duration_cast<std::chrono::seconds>(now - start).count();
}
~Timer() {
spdlog::info("Timer {} ended: {:.3f}s", name, elapsed());
}
};
int main() {
spdlog::info("torch version {}", TORCH_VERSION);
// deep network; FFN with a lot of layers to make it deep
std::vector<int64_t> dims = {
1024, 4096, 8192, 16384, 8192, 4096, 1024, 512, 256, 512,
1024, 4096, 8192, 16384, 8192, 4096, 1024, 512, 256, 512,
1024, 4096, 8192, 16384, 8192, 4096, 1024, 512, 256, 512,
1024, 4096, 8192, 16384, 8192, 4096, 1024, 512, 256, 512,
1024, 4096, 8192, 16384, 8192, 4096, 1024, 512, 256, 512,
};
if (!torch::cuda::is_available()) {
throw std::runtime_error("CUDA is not available");
}
std::vector<torch::Device> devices;
for (auto i = 0; i < torch::cuda::device_count(); i++) {
devices.push_back(torch::Device(torch::kCUDA, i));
}
{ // scope for timer
int n_threads = devices.size();
Timer timer(fmt::format("[{}-threaded initializer]", n_threads));
std::vector<std::jthread> threads;
for (int i = 0; i < n_threads; i++) {
auto t = std::jthread([i, &dims, &devices] {
auto device = devices[i];
Timer timer(fmt::format("{}", device.str()));
auto model = Net(dims, device);
});
threads.push_back(std::move(t));
}
}
return 0;
}
```
With a single GPU, i.e. `CUDA_VISIBLE_DEVICES=0`
```
[250108 04:12:39|t1753841][info] Timer [1-threaded initializer] started
[250108 04:12:39|t1753854][info] Timer cuda:0 started
[250108 04:12:53|t1753854][info] Timer cuda:0 ended: 14.000s
[250108 04:12:53|t1753841][info] Timer [1-threaded initializer] ended: 14.000s
```
Now, with `CUDA_VISIBLE_DEVICES=0,1,` the time is almost doubled
```
[250108 04:13:02|t1754149][info] Timer [2-threaded initializer] started
[250108 04:13:02|t1754163][info] Timer cuda:0 started
[250108 04:13:02|t1754164][info] Timer cuda:1 started
[250108 04:13:26|t1754164][info] Timer cuda:1 ended: 24.000s
[250108 04:13:27|t1754163][info] Timer cuda:0 ended: 24.000s
[250108 04:13:27|t1754149][info] Timer [2-threaded initializer] ended: 24.000s
```
And with `CUDA_VISIBLE_DEVICES=0,1,2,3`, the pattern continues:
```
[250108 04:14:04|t1754791][info] Timer [4-threaded initializer] started
[250108 04:14:04|t1754795][info] Timer cuda:0 started
[250108 04:14:04|t1754796][info] Timer cuda:1 started
[250108 04:14:04|t1754797][info] Timer cuda:2 started
[250108 04:14:04|t1754798][info] Timer cuda:3 started
[250108 04:14:52|t1754796][info] Timer cuda:1 ended: 47.000s
[250108 04:14:52|t1754795][info] Timer cuda:0 ended: 48.000s
[250108 04:14:58|t1754797][info] Timer cuda:2 ended: 54.000s
[250108 04:14:58|t1754798][info] Timer cuda:3 ended: 54.000s
[250108 04:14:58|t1754791][info] Timer [4-threaded initializer] ended: 54.000s
```
Finally, with all 8 devices:
```
[250108 04:15:50|t1755936][info] Timer [8-threaded initializer] started
[250108 04:15:50|t1755959][info] Timer cuda:0 started
[250108 04:15:50|t1755960][info] Timer cuda:1 started
[250108 04:15:50|t1755961][info] Timer cuda:2 started
[250108 04:15:50|t1755962][info] Timer cuda:3 started
[250108 04:15:50|t1755963][info] Timer cuda:4 started
[250108 04:15:50|t1755964][info] Timer cuda:5 started
[250108 04:15:50|t1755965][info] Timer cuda:6 started
[250108 04:15:50|t1755966][info] Timer cuda:7 started
[250108 04:17:23|t1755960][info] Timer cuda:1 ended: 92.000s
[250108 04:17:23|t1755965][info] Timer cuda:6 ended: 93.000s
[250108 04:17:24|t1755964][info] Timer cuda:5 ended: 93.000s
[250108 04:17:24|t1755959][info] Timer cuda:0 ended: 94.000s
[250108 04:17:24|t1755963][info] Timer cuda:4 ended: 94.000s
[250108 04:17:25|t1755966][info] Timer cuda:7 ended: 94.000s
[250108 04:17:25|t1755961][info] Timer cuda:2 ended: 95.000s
[250108 04:17:28|t1755962][info] Timer cuda:3 ended: 97.000s
[250108 04:17:28|t1755936][info] Timer [8-threaded initializer] ended: 97.000s
```
I can’t see anything in `NetImpl` or `nn::LinearImpl` where locking would enforce sequential execution.
It looks like some internal API (ATen/C10) is at play, and I am unsure how to resolve it. How can the parallelization be improved in this case?
cc @jbschlosser | true |
2,803,124,669 | DISABLED test_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_grad_True (__main__.TestFxGraphCache) | pytorch-bot[bot] | closed | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 8 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_grad_True&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35950279286).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_grad_True`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 146, in test_cache_load_function
self.assertEqual(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4028, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 14 but got 35.
Absolute difference: 21
Relative difference: 1.5
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_grad_True
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,803,124,136 | DISABLED test_mm_plus_mm (__main__.TestPatternMatcher) | pytorch-bot[bot] | open | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 6 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mm_plus_mm&suite=TestPatternMatcher&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35949080113).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 6 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_mm_plus_mm`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_pattern_matcher.py", line 113, in test_mm_plus_mm
self.common(fn, args, 1, 3)
File "/var/lib/jenkins/pytorch/test/inductor/test_pattern_matcher.py", line 85, in common
self.assertEqual(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4028, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 1 but got 2.
Absolute difference: 1
Relative difference: 1.0
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_pattern_matcher.py TestPatternMatcher.test_mm_plus_mm
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_pattern_matcher.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,803,124,090 | DISABLED test_cache_hot_load_device_cuda_bfloat16_dynamic_False (__main__.AOTAutogradCacheTests) | pytorch-bot[bot] | open | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 4 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cache_hot_load_device_cuda_bfloat16_dynamic_False&suite=AOTAutogradCacheTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35949205522).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cache_hot_load_device_cuda_bfloat16_dynamic_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_aot_autograd_cache.py", line 119, in test_cache_hot_load
self.assertEqual(len(cache_info.autotune_artifacts), autotune_expect)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4028, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 2 but got 4.
Absolute difference: 2
Relative difference: 1.0
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/dynamo/test_aot_autograd_cache.py AOTAutogradCacheTests.test_cache_hot_load_device_cuda_bfloat16_dynamic_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_aot_autograd_cache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,803,124,089 | DISABLED test_warn_on_invalid_torch_function_standalone_class (__main__.TestTorchFunctionWarning) | pytorch-bot[bot] | open | [
"triaged",
"module: flaky-tests",
"skipped",
"module: __torch_function__"
] | 3 | NONE | Platforms: asan, linux, mac, macos, rocm, win, windows, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_warn_on_invalid_torch_function_standalone_class&suite=TestTorchFunctionWarning&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35951159764).
Over the past 3 hours, it has been determined flaky in 111 workflow(s) with 222 failures and 111 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_warn_on_invalid_torch_function_standalone_class`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_overrides.py`
cc @clee2000 @wdvr @hameerabbasi @rgommers @ezyang | true |
2,803,124,088 | DISABLED test_reorder_peak_memory (__main__.TestOperatorReorderForPeakMemory) | pytorch-bot[bot] | open | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 5 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_reorder_peak_memory&suite=TestOperatorReorderForPeakMemory&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35949205522).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_reorder_peak_memory`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_memory.py", line 71, in test_reorder_peak_memory
.run(code)
RuntimeError: Expected to find "buf0 = " but did not find it
Searched string:
stream0 = get_raw_stream(0)
triton_red_fused_sum_2.run(buf4, buf6, 1, 2048, grid=grid(1), stream=stream0)
buf1 = buf4; del buf4 # reuse
# Topologically Sorted Source Nodes: [t2], Original ATen: [aten.mm]
extern_kernels.mm(primals_2, primals_3, out=buf1)
del primals_3
buf5 = empty_strided_cuda((2048, 10), (10, 1), torch.float32)
# Topologically Sorted Source Nodes: [t4], Original ATen: [aten.mm]
extern_kernels.mm(buf1, primals_5, out=buf5)
buf7 = empty_strided_cuda((3, ), (1, ), torch.float32)
# Topologically Sorted Source Nodes: [sum_2], Original ATen: [aten.sum]
stream0 = get_raw_stream(0)
triton_red_fused_sum_3.run(buf5, buf7, 3, 6827, grid=grid(3), stream=stream0)
del buf5
buf9 = buf6; del buf6 # reuse
# Topologically Sorted Source Nodes: [sum_2, add], Original ATen: [aten.sum, aten.add]
stream0 = get_raw_stream(0)
triton_per_fused_add_sum_4.run(buf9, buf7, 1, 3, grid=grid(1), stream=stream0)
del buf7
return (buf9, primals_2, reinterpret_tensor(buf1, (1, 2048), (1, 1), 0), reinterpret_tensor(primals_5, (10, 1), (1, 10), 0), reinterpret_tensor(buf0, (10, 2048), (1, 10), 0), reinterpret_tensor(primals_4, (1, 10), (1, 1), 0), )
def benchmark_compiled_module(times=10, repeat=10):
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance
primals_1 = rand_strided((1, 10), (10, 1), device='cuda:0', dtype=torch.float32)
primals_2 = rand_strided((2048, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_3 = rand_strided((1, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_4 = rand_strided((10, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_5 = rand_strided((1, 10), (10, 1), device='cuda:0', dtype=torch.float32)
fn = lambda: call([primals_1, primals_2, primals_3, primals_4, primals_5])
return print_performance(fn, times=times, repeat=repeat)
if __name__ == "__main__":
from torch._inductor.wrapper_benchmark import compiled_module_main
compiled_module_main('None', benchmark_compiled_module)
From CHECK: buf0 =
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_memory.py TestOperatorReorderForPeakMemory.test_reorder_peak_memory
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_memory.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,803,116,019 | [WIP] [AOTInductor] Use AtenTensorHandle as the constant map's holder. | muchulee8 | closed | [
"Stale",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145331
Summary:
Previously, all constants were held by `RAIIAtenTensorHandle`, which
implicitly means the constants' lifetime is managed by the model itself.
We want to provide the flexibility to let users control the tensors'
lifetime instead.
This change is the first PR; it introduces a holder that acts as the original
RAII holder (lifetime managed by the model) and changes the constant map to use `AtenTensorHandle`.
All behavior should be exactly the same as before.
Test Plan:
Existing test cases. Not yet introducing new functionalities in this PR.
Reviewers:
Subscribers:
Tasks:
Tags:
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @ColinPeppler @amjames @desertfire @chauhang @aakhundov
Differential Revision: [](https://our.internmc.facebook.com/intern/diff/)
Differential Revision: [D68472175](https://our.internmc.facebook.com/intern/diff/D68472175) | true |
2,803,103,195 | [be] fix flaky test aot_export_ cond caused by free symbol lifting and automatic dynamic shape | ydwu4 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145330
Fixes https://github.com/pytorch/pytorch/issues/139998#issuecomment-2605908426.
It seems to be an issue caused by the interaction between dynamoed hop X automatic dynamic shape X auto_lift_free symbols. The immediate error is that the assertExpectedInline of the graph can sometimes be different e.g. see https://hud.pytorch.org/flakytest?name=test_aot_export_with_torch_cond&suite=TestAOTExport&limit=100, where sometimes the shapes are lifted as inputs to the cond and sometimes they're not.
The root cause of the flakiness is that the two invocations of torch.cond trigger two torch.compile calls on the same code object ([code](https://github.com/pytorch/pytorch/blob/main/torch/_higher_order_ops/cond.py#L192)), which triggers automatic dynamic shapes because in test_aot_export_with_torch_cond x has shape (3, 4) while the pre_dispatch one has shape (2, 2). Because we auto-lift free symbols for dynamic-shaped inputs, this causes cond to sometimes have the shapes as arguments and sometimes not.
This PR adds a simple fix: a _dynamo.reset before each torch.cond test. This fixes the error by not triggering automatic dynamic shapes. | true |
2,803,089,008 | [dynamo] Save/restore system random state more carefully | williamwen42 | open | [
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0 | MEMBER | Internal example: [T207752792](https://www.internalfb.com/intern/tasks/?t=207752792)
There are some OSS unittests that are failing internally (e.g. `test/dynamo/test_unspec.py::UnspecTests::test_random_object`), likely because some internal logging code is burning random numbers, leading to different resulting random states between compiled and eager. In particular, if we skip `record_chromium_event_internal` and `log_chromium_event_internal` in `fb/_utils_internal.py`, then the test no longer fails internally.
Test case:
```python
def test_random_in_dynamo(self):
# test that system random calls still work even
# if Dynamo calls random methods.
def fn(x):
# r1 = random.random()
r1 = random.randint(1, 9)
y = x + random.uniform(10, 20)
r2 = random.randint(2, 18)
return y + r1, r2
orig_fn = torch._dynamo.eval_frame._maybe_set_eval_frame
def bad(*args, **kwargs):
# burn random call within dynamo
random.random()
return orig_fn(*args, **kwargs)
x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
random.seed(1)
res1 = fn(x)
opt_fn = torch.compile(fn, backend="eager", fullgraph=True)
random.seed(1)
with unittest.mock.patch("torch._dynamo.eval_frame._maybe_set_eval_frame", bad):
res2 = opt_fn(x)
self.assertTrue(same(res1, res2))
```
Dynamo should save/restore system `random` state more carefully to prevent non-user random calls made during tracing from affecting the final random state.
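As a pure-Python illustration of the save/restore discipline being asked for (the decorator name `isolate_random_state` is hypothetical, not a real Dynamo API), internal random calls can be bracketed with `random.getstate()`/`random.setstate()` so they never advance the user-visible RNG stream:

```python
import random

def isolate_random_state(fn):
    """Run fn without perturbing the caller's `random` state.

    Sketch only: any random calls fn makes (e.g. internal logging
    during tracing) do not advance the user-visible RNG stream.
    """
    def wrapper(*args, **kwargs):
        state = random.getstate()      # snapshot before internal work
        try:
            return fn(*args, **kwargs)
        finally:
            random.setstate(state)     # restore so user code is unaffected
    return wrapper

@isolate_random_state
def internal_logging():
    # Stand-in for logging code that burns a random number internally.
    random.random()

def user_draws():
    internal_logging()
    return random.randint(1, 9)

random.seed(1)
with_wrapper = user_draws()    # internal call happens but is isolated

random.seed(1)
baseline = random.randint(1, 9)  # no internal call at all
```

With the wrapper in place, `with_wrapper == baseline`: the burned `random.random()` call is invisible to user code, which is exactly the property the failing test expects from compiled vs. eager execution.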
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,803,084,246 | [audio hash update] update the pinned audio hash | pytorchupdatebot | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 6 | COLLABORATOR | This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned audio hash. | true |
2,803,080,575 | [utilization] pipeline to create clean db records | yangw-dev | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5 | CONTRIBUTOR | upload_utilization_script to generate db-ready-insert records to s3
- generate two files: metadata and timeseries in ossci-utilization buckets
- convert log record to db format ones
- add unit test job for tools/stats/
Related Prs:
setup composite action for data pipeline: https://github.com/pytorch/pytorch/pull/145310
add permission for composite action to access S3 bucket: https://github.com/pytorch-labs/pytorch-gha-infra/pull/595
add insert logic in s3 replicator: https://github.com/pytorch/test-infra/pull/6217 | true |