Dataset schema:
- id: int64 (range 2.74B to 3.05B)
- title: string (length 1 to 255)
- user: string (length 2 to 26)
- state: string (2 classes)
- labels: list (length 0 to 24)
- comments: int64 (range 0 to 206)
- author_association: string (4 classes)
- body: string (length 7 to 62.5k)
- is_title: bool (1 class)
id: 3,031,972,782
title: Use swap_tensors path in nn.Module.to for all subclasses that override __torch_dispatch__
user: mikaylagawarecki
state: open
labels: [ "Merged", "Reverted", "ciflow/trunk", "release notes: nn", "ci-no-td" ]
comments: 9
author_association: CONTRIBUTOR
Fixes https://github.com/pytorch/pytorch/issues/148977 Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152539
is_title: true
id: 3,031,972,618
title: Disable SLEEF implementation of vec::maximum in vec128_float_neon.h | Accelerate aten::hardtanh_ by 21x
user: Rohanjames1997
state: closed
labels: [ "module: cpu", "triaged", "open source", "Merged", "ciflow/trunk", "topic: not user facing" ]
comments: 4
author_association: CONTRIBUTOR
The `has_inf_nan` implementation in `vec::maximum` is scalar, and it slows down certain activations like `tanh` by almost 20 times. Additionally, the `vec::minimum` function simply uses NEON intrinsics and not SLEEF. This PR makes the two functions consistent in implementation. Note that the SLEEF function `Sleef_fmaxf4` ultimately invokes the `vmaxq_f32` NEON intrinsic anyway, through [vmax_vf_vf_vf](https://github.com/shibatch/sleef/blob/d28232a309e06bcb75e9fb0f6262d9251739fd1e/src/arch/helperadvsimd.h#L253).

In a single-threaded profile of MobileNet on an Arm Neoverse-V2 machine (code below), `aten::hardtanh_` takes **5.653ms** per function call with the current PyTorch 2.7 wheel, whereas it takes **266.096us** per function call when simply using `vmaxq_f32`: a 21x speedup, and overall inference is 1.8x faster.

___

Run the below script: `OMP_NUM_THREADS=1 python profile_mobilenet.py --iterations 10`

<details>
<summary>profile_mobilenet.py</summary>

```
import torch
import torchvision.models as models
from torch.profiler import profile, record_function, ProfilerActivity
import argparse

torch.manual_seed(42)

def load_mobilenet():
    model = models.mobilenet_v2(pretrained=True)
    model.eval()
    return model

def generate_sample_input(batch_size=8):
    return torch.randn(batch_size, 3, 224, 224)

def warmup(model, sample_input, num_warmup=10):
    with torch.inference_mode():
        for _ in range(num_warmup):
            _ = model(sample_input)

def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('--batch_size', type=int, default=8)
    parser.add_argument('--iterations', type=int, default=100)
    return parser.parse_args()

def main():
    args = parse_args()
    model = load_mobilenet()
    sample_input = generate_sample_input(args.batch_size)
    print("Warming up...")
    warmup(model, sample_input)
    print("Warmup complete.")
    with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
        with torch.inference_mode():
            for i in range(args.iterations):
                with record_function("model_inference"):
outputs = model(sample_input) print(prof.key_averages().table(sort_by="cpu_time_total")) print(f"Throughput: {(args.iterations * args.batch_size / (prof.profiler.self_cpu_time_total / 1e6)):.3f} images/s") if __name__ == "__main__": main() ``` </details> <details> <summary>Profiler output using the current Pytorch 2.7 wheel </summary> ``` -------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls -------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ model_inference 2.39% 101.839ms 100.00% 4.254s 425.437ms 10 aten::hardtanh_ 0.02% 905.454us 46.50% 1.978s 5.653ms 350 aten::hardtanh 0.03% 1.239ms 46.48% 1.977s 5.650ms 350 aten::clamp 46.45% 1.976s 46.45% 1.976s 5.646ms 350 aten::conv2d 0.06% 2.468ms 43.89% 1.867s 3.591ms 520 aten::convolution 0.06% 2.491ms 43.83% 1.865s 3.586ms 520 aten::_convolution 0.13% 5.546ms 43.77% 1.862s 3.581ms 520 aten::thnn_conv2d 0.04% 1.658ms 24.13% 1.027s 3.019ms 340 aten::_slow_conv2d_forward 23.99% 1.021s 24.09% 1.025s 3.014ms 340 aten::mkldnn_convolution 14.42% 613.285ms 19.51% 829.885ms 4.610ms 180 aten::batch_norm 0.06% 2.368ms 6.89% 292.928ms 563.323us 520 aten::_batch_norm_impl_index 0.11% 4.600ms 6.83% 290.560ms 558.769us 520 aten::native_batch_norm 6.60% 280.762ms 6.69% 284.567ms 547.244us 520 aten::contiguous 0.01% 623.099us 5.01% 213.152ms 1.184ms 180 aten::clone 0.02% 988.729us 5.00% 212.529ms 1.181ms 180 aten::copy_ 4.94% 210.315ms 4.94% 210.315ms 1.052ms 200 aten::linear 0.00% 58.347us 0.18% 7.659ms 765.905us 10 aten::addmm 0.17% 7.373ms 0.18% 7.483ms 748.309us 10 aten::empty 0.17% 7.161ms 0.17% 7.161ms 1.790us 4000 aten::add 0.11% 4.742ms 0.11% 4.742ms 47.419us 100 aten::empty_like 0.03% 1.315ms 0.09% 3.890ms 5.557us 700 aten::view 0.05% 1.933ms 0.05% 1.933ms 2.801us 690 aten::as_strided_ 0.04% 1.599ms 0.04% 1.599ms 8.885us 180 aten::resize_ 0.04% 
1.493ms 0.04% 1.493ms 2.871us 520 aten::adaptive_avg_pool2d 0.00% 55.360us 0.04% 1.491ms 149.051us 10 aten::mean 0.00% 116.997us 0.03% 1.435ms 143.515us 10 aten::sum 0.02% 935.980us 0.02% 992.121us 99.212us 10 aten::detach 0.02% 707.217us 0.02% 707.217us 2.080us 340 aten::div_ 0.00% 161.473us 0.01% 326.035us 32.604us 10 aten::to 0.00% 178.193us 0.01% 321.253us 0.892us 360 aten::_nnpack_available 0.01% 302.835us 0.01% 302.835us 0.891us 340 aten::_to_copy 0.00% 63.170us 0.00% 143.060us 14.306us 10 aten::t 0.00% 49.759us 0.00% 117.621us 11.762us 10 aten::transpose 0.00% 40.637us 0.00% 67.862us 6.786us 10 aten::flatten 0.00% 42.634us 0.00% 58.867us 5.887us 10 aten::fill_ 0.00% 56.141us 0.00% 56.141us 5.614us 10 aten::expand 0.00% 42.687us 0.00% 48.930us 4.893us 10 aten::empty_strided 0.00% 40.589us 0.00% 40.589us 4.059us 10 aten::as_strided 0.00% 33.468us 0.00% 33.468us 1.673us 20 aten::resolve_conj 0.00% 9.066us 0.00% 9.066us 0.453us 20 aten::dropout 0.00% 5.782us 0.00% 5.782us 0.578us 10 -------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ Self CPU time total: 4.254s Throughput: 18.804 images/s ``` </details> <details> <summary>Profiler output after this PR's changes </summary> ``` -------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls -------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ model_inference 4.43% 104.484ms 100.00% 2.359s 235.883ms 10 aten::conv2d 0.10% 2.313ms 79.19% 1.868s 3.592ms 520 aten::convolution 0.10% 2.293ms 79.09% 1.866s 3.588ms 520 aten::_convolution 0.23% 5.436ms 78.99% 1.863s 3.583ms 520 aten::thnn_conv2d 0.08% 1.799ms 44.29% 1.045s 3.072ms 340 aten::_slow_conv2d_forward 44.03% 1.039s 44.21% 1.043s 3.067ms 340 aten::mkldnn_convolution 24.91% 587.584ms 34.47% 812.992ms 4.517ms 180 aten::batch_norm 0.10% 
2.350ms 11.83% 279.113ms 536.757us 520 aten::_batch_norm_impl_index 0.20% 4.788ms 11.73% 276.764ms 532.238us 520 aten::native_batch_norm 11.30% 266.660ms 11.46% 270.420ms 520.038us 520 aten::contiguous 0.02% 575.723us 9.41% 222.080ms 1.234ms 180 aten::clone 0.04% 1.061ms 9.39% 221.504ms 1.231ms 180 aten::copy_ 9.29% 219.131ms 9.29% 219.131ms 1.096ms 200 aten::hardtanh_ 0.04% 917.669us 3.95% 93.133ms 266.096us 350 aten::hardtanh 0.05% 1.130ms 3.91% 92.216ms 263.474us 350 aten::clamp 3.85% 90.894ms 3.86% 91.086ms 260.246us 350 aten::linear 0.00% 68.681us 0.33% 7.899ms 789.945us 10 aten::addmm 0.32% 7.598ms 0.33% 7.707ms 770.673us 10 aten::empty 0.30% 7.176ms 0.30% 7.176ms 1.794us 4000 aten::add 0.20% 4.627ms 0.20% 4.627ms 46.268us 100 aten::empty_like 0.06% 1.316ms 0.17% 3.973ms 5.676us 700 aten::view 0.08% 2.001ms 0.08% 2.001ms 2.899us 690 aten::adaptive_avg_pool2d 0.00% 53.745us 0.07% 1.548ms 154.791us 10 aten::resize_ 0.06% 1.533ms 0.06% 1.533ms 2.948us 520 aten::as_strided_ 0.06% 1.521ms 0.06% 1.521ms 8.450us 180 aten::mean 0.00% 117.637us 0.06% 1.494ms 149.417us 10 aten::sum 0.04% 973.291us 0.04% 1.013ms 101.342us 10 aten::detach 0.03% 652.224us 0.03% 652.224us 1.918us 340 aten::div_ 0.01% 195.077us 0.02% 363.103us 36.310us 10 aten::to 0.01% 212.758us 0.02% 359.655us 0.999us 360 aten::_nnpack_available 0.01% 295.235us 0.01% 295.235us 0.868us 340 aten::_to_copy 0.00% 68.726us 0.01% 146.897us 14.690us 10 aten::t 0.00% 53.873us 0.01% 124.033us 12.403us 10 aten::transpose 0.00% 42.512us 0.00% 70.160us 7.016us 10 aten::flatten 0.00% 44.040us 0.00% 66.631us 6.663us 10 aten::expand 0.00% 44.632us 0.00% 51.177us 5.118us 10 aten::fill_ 0.00% 40.134us 0.00% 40.134us 4.013us 10 aten::empty_strided 0.00% 35.291us 0.00% 35.291us 3.529us 10 aten::as_strided 0.00% 34.193us 0.00% 34.193us 1.710us 20 aten::resolve_conj 0.00% 8.594us 0.00% 8.594us 0.430us 20 aten::dropout 0.00% 6.758us 0.00% 6.758us 0.676us 10 -------------------------------- ------------ ------------ 
------------ ------------ ------------ ------------ Self CPU time total: 2.359s Throughput: 33.915 images/s ``` </details> ___ Using torchbench, the models `mobilenet_v2` and `mobilenet_v3_large` showed improvements as expected too. Before -> After (latency in ms) ``` "mobilenet_v3_large-eval_latency": 1207.212 -> 844.902 "mobilenet_v2-eval_latency": 1029.834 -> 662.476 ``` cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
is_title: true
id: 3,031,970,557
title: [CI] Use cmake from pip instead of conda in CI docker images
user: clee2000
state: open
labels: [ "Merged", "Reverted", "ciflow/trunk", "topic: not user facing", "ciflow/inductor", "ci-no-td" ]
comments: 13
author_association: CONTRIBUTOR
As in the title. I'm not sure how the install_cmake script is used: I see it being called with 3.18, but when I look at the build jobs, some say 3.18 and others 3.31. This change just makes everything install cmake via `requirements-ci.txt`. I don't know if the comment at https://github.com/pytorch/pytorch/blob/5d36485b4aa5823bb9ee5eb070548c4ee355f3b2/.ci/docker/common/install_conda.sh#L78 still holds, but pretty much every build has CONDA_CMAKE set to true, so I'm defaulting to installing through pip. Also defaulting to 4.0.0 everywhere except the executorch docker build, because executorch reinstalls 3.31.something.
is_title: true
id: 3,031,883,217
title: [PT2] Port replace_lce_with_matmul / replace_first_lce_with_fused_matmul_lce to PT2 pre_grad passes (#152450)
user: kqfu
state: closed
labels: [ "fb-exported", "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
comments: 7
author_association: CONTRIBUTOR
Summary: Same with D71358949, but removing newly added log to avoid test failures. Port over replace_lce_with_matmul and replace_first_lce_with_fused_matmul_lce to PT2 pre_grad pass. Original dper pass diffs: D67884534, D68123479, D68384238 Test Plan: Test 1. Covers replace_lce_with_matmul and case 1 of replace_first_lce_with_fused_matmul_lce ``` CUDA_VISIBLE_DEVICES=6 TORCH_LOGS=+inductor,aot TORCH_COMPILE_DEBUG=1 TORCHINDUCTOR_MAX_AUTOTUNE=1 buck2 run mode/opt-split-dwarf mode/inplace -c fbcode.platform010_cuda_version=12 -c fbcode.nvcc_arch=h100 caffe2/torch/fb/model_transform/experimental/benchmark:mts_gpu_benchmark -- --model-path=manifold://ads_storage_fblearner/tree/user/facebook/fblearner/predictor/669809193/0/gpu_lowering/input.predictor.disagg.gpu.merge --lower-backend="AOT_INDUCTOR" --add_passes="use_matmul_fuse_lce_replace_first_LCE,use_contiguous_linear_reduction_replace_linear_reduction" --batch-size=3072 --gpu-trace --disable_acc_tracer=true 2>&1 | tee ~/logs/disable_acc_tracer/aoti_cmf_ctr_triton_669809193_0_diable_acc.log ``` Log: P1798246938 Test 2. 
Covers replace_lce_with_matmul and case 2 of replace_first_lce_with_fused_matmul_lce ``` CUDA_VISIBLE_DEVICES=7 TORCH_LOGS=+inductor,aot TORCH_COMPILE_DEBUG=1 TORCHINDUCTOR_MAX_AUTOTUNE=1 buck2 run mode/opt-split-dwarf mode/inplace -c fbcode.platform010_cuda_version=12 -c fbcode.nvcc_arch=h100 caffe2/torch/fb/model_transform/experimental/benchmark:mts_gpu_benchmark -- --model-path=manifold://ads_storage_fblearner/tree/user/facebook/fblearner/predictor/677734158/9/gpu_lowering/input.predictor.disagg.gpu.merge --lower-backend="AOT_INDUCTOR" --add_passes="use_matmul_fuse_lce_replace_first_LCE,use_matmul_lce_replace_normal_LCE" --batch-size=3072 --gpu-trace --disable_acc_tracer=true 2>&1 | tee ~/logs/disable_acc_tracer/aoti_cmf_ctr_triton_677734158_9_diable_acc.log ``` Log: P1798246675 Seeing logs like `[Pre grad(predispatch IR)] Apply use_matmul_fuse_lce_replace_first_LCE pass, save before/after graph to /tmp/tmp8lyzoh79, graph before/after are the same = False` Differential Revision: D73934142 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
is_title: true
id: 3,031,815,308
title: optree package status in PyTorch
user: zou3519
state: open
labels: [ "high priority", "triaged", "module: pytree", "dependency issue" ]
comments: 5
author_association: CONTRIBUTOR
## Motivation

"optree >= 0.13.0" is an optional dependency of PyTorch. There's no good way to specify this in the package metadata, so we have code checks doing it. We ran into some issues with the PyTorch 2.7 release where the "optional dependency checking" code broke and we ended up (1) crashing on optree < 0.13.0 and (2) hard-requiring optree >= 0.13.0. Furthermore, I submitted two fixes for this, where the first fix didn't work and only the second one did. We should fix this so that future releases (e.g. 2.8.0) do not run the risk of this problem.

## Pitch

I see three high-level solutions:

1. We continue to have optree as an optional dependency. We would need to beef up our CI; in particular, we want to test three configurations: optree == 0.13.0, no optree, and optree < 0.13.0.
2. We take the optree pypi package as a mandatory dependency.
3. We take optree as a required pytorch submodule and build it into PyTorch.

From discussions with @malfet, @seemethere, @atalman and @albanD it sounded like we preferred (3). The risk of doing (2) is that we get into dependency hell: if a third-party library pins optree and pytorch pins optree, we end up in a not-so-good place. The direction our project is going in is taking optree as a mandatory requirement somehow, so (1) seems not worth it. Thoughts? @XuehaiPan @angelayi cc @ezyang @gchanan @kadeng @msaroufim @XuehaiPan
is_title: true
id: 3,031,617,655
title: AsyncCollectiveTensor doesn't trigger wait upon dtype cast
user: lw
state: closed
labels: [ "oncall: distributed", "triaged" ]
comments: 9
author_association: CONTRIBUTOR
### 🐛 Describe the bug

This repro fails:

```py
import tempfile

import torch
import torch.distributed as dist
import torch.distributed._functional_collectives as funcol
from torch.distributed.device_mesh import DeviceMesh
from torch.distributed.tensor.experimental import local_map
from torch.distributed.tensor import DTensor, Replicate, Shard

WORKAROUND = False

@torch.library.custom_op(
    "custom_ns::custom_op_fwd",
    mutates_args=(),
    device_types="cuda",
)
def custom_op_fwd(
    input_: torch.Tensor,
) -> torch.Tensor:
    input_ = funcol.all_gather_tensor(input_, gather_dim=0, group=dist.group.WORLD)
    print(f"type after funcol: {type(input_)=}")
    if WORKAROUND:
        input_ = input_.mul(1)
        print(f"type after mul (no-op): {type(input_)=}")
    input_ = input_.float()
    print(f"type after cast to float: {type(input_)=}")
    return input_.sum(dim=-1)

@torch.library.custom_op(
    "custom_ns::custom_op_bwd",
    mutates_args=(),
    device_types="cuda",
)
def custom_op_bwd(
    input_: torch.Tensor,
    grad_output: torch.Tensor,
) -> torch.Tensor:
    grad_output = grad_output.view((-1, 1)).expand((-1, input_.shape[1])).contiguous()
    grad_output = funcol.reduce_scatter_tensor(
        grad_output, "avg", scatter_dim=0, group=dist.group.WORLD
    )
    print(f"type after funcol: {type(grad_output)=}")
    if WORKAROUND:
        grad_output = grad_output.mul(1)
        print(f"type after mul (no-op): {type(grad_output)=}")
    grad_output = grad_output.bfloat16()
    print(f"type after cast to bfloat16: {type(grad_output)=}")
    return grad_output

def custom_op_setup_context(
    ctx: torch.autograd.function.FunctionCtx,
    inputs: tuple[torch.Tensor],
    output: torch.Tensor,
) -> None:
    ctx.save_for_backward(*inputs)

def custom_op_bwd_bridge(
    ctx: torch.autograd.function.FunctionCtx,
    grad_output: torch.Tensor,
) -> tuple[torch.Tensor]:
    input_, = ctx.saved_tensors
    grad_input = custom_op_bwd(input_, grad_output)
    return grad_input

torch.library.register_autograd(
    "custom_ns::custom_op_fwd",
    custom_op_bwd_bridge,
    setup_context=custom_op_setup_context,
)

def reference(input_: torch.Tensor) -> torch.Tensor:
    return input_.float().sum(-1)

def our(input_: torch.Tensor) -> torch.Tensor:
    return local_map(custom_op_fwd, out_placements=[Replicate(),])(input_)

def run_repro(rank: int, world_size: int, rdv_dir: str) -> None:
    torch.manual_seed(0)
    torch.cuda.set_device(rank)
    input_ = torch.randn((16384, 4096), dtype=torch.bfloat16, device="cuda")
    grad_output = torch.randn((16384,), dtype=torch.bfloat16, device="cuda")
    ref_output, ref_grad_input_ = torch.autograd.functional.vjp(
        reference, input_, grad_output
    )
    dist.init_process_group(
        backend="nccl",
        rank=rank,
        world_size=world_size,
        init_method=f"file://{rdv_dir}/rdv",
    )
    device_mesh = DeviceMesh.from_group(
        dist.group.WORLD, device_type="cuda", mesh_dim_names=("tp",)
    )
    d_input_ = DTensor.from_local(input_.tensor_split(world_size)[rank], device_mesh, (Shard(0),))
    d_grad_output = DTensor.from_local(grad_output, device_mesh, (Replicate(),))
    our_output, our_grad_input_ = torch.autograd.functional.vjp(
        our, d_input_, d_grad_output
    )
    torch.testing.assert_close(our_output.full_tensor(), ref_output)
    torch.testing.assert_close(our_grad_input_.full_tensor(), ref_grad_input_)

if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    with tempfile.TemporaryDirectory() as rdv_dir:
        torch.multiprocessing.spawn(
            run_repro,
            args=(world_size, rdv_dir),
            nprocs=world_size,
        )
```

The issue seems to be that for an AsyncCollectiveTensor `t`, invoking `t.float()` does _not_ trigger the `wait_tensor` (which would make it return a regular `torch.Tensor`); instead it returns a new AsyncCollectiveTensor with garbage data. Inserting a dummy `.mul(1)` triggers the wait and makes everything work again.

### Versions

The above repro fails under PyTorch 2.6.0 (with CUDA 12.6, on an H100 machine). The original issue I had was also failing under similar conditions, but was _passing_ with a nightly build of PyTorch, hence it's possible that the issue has already been fixed in main.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @ezyang @gchanan @zou3519 @kadeng @msaroufim
is_title: true
id: 3,031,611,336
title: Do not check out nccl when not building it
user: mgorny
state: closed
labels: [ "triaged", "open source", "Merged", "ciflow/trunk", "topic: not user facing" ]
comments: 7
author_association: CONTRIBUTOR
Add additional conditions to `build_pytorch_libs.py` to avoid fetching NCCL when `USE_CUDA` or `USE_NCCL` are disabled. While at it, adjust the existing condition for `USE_SYSTEM_NCCL` to use the utility function.
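The gating described above boils down to a simple predicate. A sketch under stated assumptions: the function name is hypothetical, and the real logic lives in `build_pytorch_libs.py` with these flags coming from the build environment:

```python
def should_checkout_nccl(use_cuda: bool, use_nccl: bool, use_system_nccl: bool) -> bool:
    """Fetch the bundled NCCL sources only when they will actually be built:
    CUDA and NCCL enabled, and no system-provided NCCL in use."""
    return use_cuda and use_nccl and not use_system_nccl
```

Any of the three flags being in the "don't build it" position skips the fetch entirely.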
is_title: true
id: 3,031,471,033
title: The 2.7.0 release tarball is missing `.ci/docker/ci_commit_pins/nccl-cu12.txt` required for building
user: mgorny
state: open
labels: [ "oncall: package/deploy" ]
comments: 1
author_association: CONTRIBUTOR
### 🐛 Describe the bug

When trying to build PyTorch 2.7.0 from the `.tar.gz` attached to [the release](https://github.com/pytorch/pytorch/releases/tag/v2.7.0), I'm seeing the following error:

```pytb
Traceback (most recent call last):
  File "/var/tmp/conda-bld/work/setup.py", line 1503, in <module>
    main()
  File "/var/tmp/conda-bld/work/setup.py", line 1170, in main
    build_deps()
  File "/var/tmp/conda-bld/work/setup.py", line 490, in build_deps
    build_pytorch(
  File "/var/tmp/conda-bld/work/tools/build_pytorch_libs.py", line 122, in build_pytorch
    checkout_nccl()
  File "/var/tmp/conda-bld/work/tools/build_pytorch_libs.py", line 102, in checkout_nccl
    release_tag = read_nccl_pin()
                  ^^^^^^^^^^^^^^^
  File "/var/tmp/conda-bld/work/tools/build_pytorch_libs.py", line 97, in read_nccl_pin
    with open(nccl_pin_path) as f:
         ^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/var/tmp/conda-bld/work/.ci/docker/ci_commit_pins/nccl-cu12.txt'
```

And indeed, the top-level `.ci` directory is entirely missing from the tarball. However, the build code unconditionally reads it: https://github.com/pytorch/pytorch/blob/134179474539648ba7dee1317959529fbd0e7f89/tools/build_pytorch_libs.py#L101-L122

### Versions

n/a
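A build script that wants to survive such incomplete tarballs could guard the pin read instead of opening unconditionally. A minimal sketch, not the actual PyTorch fix; the signature differs from the real `read_nccl_pin`, which takes no argument:

```python
from pathlib import Path

def read_nccl_pin(pin_path):
    """Return the pinned NCCL release tag, or None when the pin file is
    absent (e.g. a release tarball that omits the .ci directory)."""
    path = Path(pin_path)
    if not path.is_file():
        return None  # caller can then skip the NCCL checkout entirely
    return path.read_text().strip()
```

The caller would treat `None` the same way as `USE_SYSTEM_NCCL`: no checkout attempted.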
is_title: true
id: 3,031,031,190
title: [inductor][triton] Inductor is not compatible with the latest upstream Triton
user: xuzhao9
state: closed
labels: [ "oncall: pt2", "upstream triton" ]
comments: 1
author_association: CONTRIBUTOR
### 🐛 Describe the bug Upstream https://github.com/triton-lang/triton/commit/850525276426fb9814399a8e0ee8fdf744229b02 removes `launch_enter_hook` from triton.compiler.CompiledKernel and https://github.com/triton-lang/triton/commit/efa87747342dc50b3c87a357f13288168d6e6e1c renames it to `triton.knobs.runtime.launch_enter_hook`. Inductor is using the old API: https://github.com/pytorch/pytorch/blob/36acaaae3fb008955320484a8650761e31ce97ad/torch/_inductor/runtime/static_cuda_launcher.py#L48 cc @chauhang @penguinwu @bertmaher @int3 @davidberard98 @nmacchioni @chenyang78 @embg @peterbell10 @aakhundov ### Error logs https://github.com/pytorch-labs/tritonbench/actions/runs/14735416820/job/41360153735 ### Versions Latest Nightly
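A version-tolerant lookup is one way to cope with this kind of rename. The sketch below probes both attribute paths named above via `getattr`, so nothing is assumed beyond those two locations; the function name is hypothetical, and the stubs only stand in for the old and new Triton module layouts:

```python
from types import SimpleNamespace

def resolve_launch_enter_hook(triton_mod):
    """Find the launch-enter hook across Triton versions: prefer the new
    triton.knobs.runtime.launch_enter_hook, fall back to the old
    triton.compiler.CompiledKernel.launch_enter_hook, else None."""
    runtime = getattr(getattr(triton_mod, "knobs", None), "runtime", None)
    if runtime is not None and hasattr(runtime, "launch_enter_hook"):
        return runtime.launch_enter_hook
    kernel_cls = getattr(getattr(triton_mod, "compiler", None), "CompiledKernel", None)
    return getattr(kernel_cls, "launch_enter_hook", None)

# Stub modules standing in for the two Triton layouts.
old_triton = SimpleNamespace(
    compiler=SimpleNamespace(CompiledKernel=SimpleNamespace(launch_enter_hook="old-hook"))
)
new_triton = SimpleNamespace(
    knobs=SimpleNamespace(runtime=SimpleNamespace(launch_enter_hook="new-hook"))
)
```

Probing the new location first keeps the shim working if a future Triton drops the old attribute entirely.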
is_title: true
id: 3,030,865,350
title: NotImplementedError: Operator aten.view.dtype does not have a sharding strategy registered.
user: neverix
state: closed
labels: []
comments: 1
author_association: CONTRIBUTOR
### 🐛 Describe the bug

As stated in the title, the dtype transmute operator doesn't work for DTensor. This code fails:

```
import torch
import torch.distributed as dist
import torch.distributed.tensor as dtensor
from torch.distributed.tensor.device_mesh import init_device_mesh
import os

rank = int(os.environ.get("LOCAL_RANK"))
dist.init_process_group(
    device_id=torch.device(rank),
)
dist.barrier()

tp = 1
world_size = dist.get_world_size()
mesh = init_device_mesh(
    "cuda",
    (world_size // tp, tp),
    mesh_dim_names=("dp", "tp"),
)
dist.barrier()

x = torch.randn(2, 4, dtype=torch.float32).to(rank)
x = dtensor.DTensor.from_local(
    x,
    mesh,
    (dtensor.Shard(0), dtensor.Shard(1)),
)
x = x.view(dtype=torch.bfloat16)

dist.barrier()
dist.destroy_process_group()
```

With the error: `NotImplementedError: Operator aten.view.dtype does not have a sharding strategy registered.`

### Versions

```
[pip3] torch==2.6.0
[pip3] triton==3.2.0
```
is_title: true
id: 3,030,850,004
title: Add methods for checking Triton availability to the device interface
user: galexite
state: closed
labels: [ "triaged", "open source", "Merged", "ciflow/trunk", "topic: not user facing", "module: dynamo" ]
comments: 7
author_association: CONTRIBUTOR
Adds the `is_triton_capable` and `raise_if_triton_unavailable` class methods to the device interface, allowing device types to run their own checks for Triton _capability_ (whether a device can actually support Triton in the first place) and _availability_ (whether the correct Triton backend is installed and functional for the device). Using the device interface lets us do these checks in a device-agnostic way and lets external backends attest their Triton support by simply implementing those methods. The intention is for this to back things like the `has_triton` utility method. This has been split from #139171. cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
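The capability/availability split can be sketched as follows. The two method names are the ones introduced by the PR, but everything else here is a hypothetical stand-in, not the real device interface: `MyBackendInterface` plays the role of an external backend, and `has_triton` shows how a utility could be built on top:

```python
class DeviceInterface:
    """Minimal stand-in for the device interface (sketch, not the real class)."""

    @classmethod
    def is_triton_capable(cls) -> bool:
        # Capability: can this device type support Triton at all?
        return False

    @classmethod
    def raise_if_triton_unavailable(cls) -> None:
        # Availability: is a working Triton backend installed for this device?
        raise RuntimeError("Triton is not supported on this device")

class MyBackendInterface(DeviceInterface):
    @classmethod
    def is_triton_capable(cls) -> bool:
        return True

    @classmethod
    def raise_if_triton_unavailable(cls) -> None:
        try:
            import triton  # noqa: F401
        except ImportError as exc:
            raise RuntimeError("Triton backend is not installed") from exc

def has_triton(interface: type[DeviceInterface]) -> bool:
    """Device-agnostic check built purely on the interface methods."""
    if not interface.is_triton_capable():
        return False
    try:
        interface.raise_if_triton_unavailable()
    except RuntimeError:
        return False
    return True
```

Because `has_triton` only calls the two class methods, it needs no knowledge of any particular device type.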
is_title: true
id: 3,030,777,501
title: flex attention does not leverage masking, memory error
user: Bastien-mva
state: closed
labels: [ "triaged", "module: flex attention" ]
comments: 1
author_association: NONE
### 🐛 Describe the bug

I use `flex_attention` for very large time series and want to compute sliding-window attention. I create a `block_mask` to avoid computing the unneeded outer products of classical attention. However, it seems that `flex_attention` still computes the full outer product between query and key, which gives a memory error for large time series. I tried both GPU and CPU. Here is a minimal example:

```
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

WINDOW = 5
batch_size = 16
nhead = 3
seq_len = 25000
head_dim = 32

queries = torch.randn(batch_size, nhead, seq_len, head_dim).cuda()
values = torch.randn(batch_size, nhead, seq_len, head_dim).cuda()
keys = torch.randn(batch_size, nhead, seq_len, head_dim).cuda()

def sliding_window_mask(b, h, q_idx, kv_idx):
    return q_idx - kv_idx <= WINDOW

block_mask = create_block_mask(
    sliding_window_mask,
    B=None,
    H=None,
    Q_LEN=seq_len,
    KV_LEN=seq_len,
    device=queries.device,
    _compile=True,
)

flex_attention(queries, keys, values, None, block_mask)
```

I thought the whole point of using a mask was to avoid computing the useless outer products that will not be used afterwards. Am I missing something here? This should be O(seq_len*window) in memory, no?

### Versions

Collecting environment information...
PyTorch version: 2.7.0+rocm6.3 Is debug build: False CUDA used to build PyTorch: N/A ROCM used to build PyTorch: 6.3.42131-fa1d09cbd OS: Ubuntu 24.04.2 LTS (x86_64) GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.39 Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime) Python platform: Linux-6.11.0-24-generic-x86_64-with-glibc2.39 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: AMD Radeon Graphics (gfx1035) Nvidia driver version: Could not collect cuDNN version: Could not collect HIP runtime version: 6.3.42131 MIOpen runtime version: 3.3.0 Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 48 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 16 On-line CPU(s) list: 0-15 Vendor ID: AuthenticAMD Model name: AMD Ryzen 7 7735HS with Radeon Graphics CPU family: 25 Model: 68 Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 1 Stepping: 1 Frequency boost: enabled CPU(s) scaling MHz: 41% CPU max MHz: 4829.0000 CPU min MHz: 400.0000 BogoMIPS: 6387.63 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd 
cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap Virtualization: AMD-V L1d cache: 256 KiB (8 instances) L1i cache: 256 KiB (8 instances) L2 cache: 4 MiB (8 instances) L3 cache: 16 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-15 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Mitigation; Safe RET Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] mypy_extensions==1.1.0 [pip3] numpy==2.2.5 [pip3] nvidia-cublas-cu12==12.6.4.1 [pip3] nvidia-cuda-cupti-cu12==12.6.80 [pip3] nvidia-cuda-nvrtc-cu12==12.6.77 [pip3] nvidia-cuda-runtime-cu12==12.6.77 [pip3] nvidia-cudnn-cu12==9.5.1.17 [pip3] nvidia-cufft-cu12==11.3.0.4 [pip3] nvidia-curand-cu12==10.3.7.77 [pip3] nvidia-cusolver-cu12==11.7.1.2 [pip3] nvidia-cusparse-cu12==12.5.4.2 [pip3] nvidia-cusparselt-cu12==0.6.3 [pip3] nvidia-nccl-cu12==2.26.2 [pip3] nvidia-nvjitlink-cu12==12.6.85 [pip3] nvidia-nvtx-cu12==12.6.77 [pip3] pytorch-triton-rocm==3.3.0 [pip3] torch==2.7.0+rocm6.3 [pip3] torchaudio==2.7.0+rocm6.3 [pip3] torchvision==0.22.0+rocm6.3 [pip3] triton==3.3.0 [conda] Could not collect cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh 
@Chillee @drisspg @yanboliang @BoyuanFeng
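The reported blow-up is consistent with materializing the full score matrix. A rough back-of-the-envelope under stated assumptions (fp32 scores, and an idealized band of width 2*WINDOW+1 standing in for the O(seq_len*window) expectation):

```python
def attn_scores_bytes(batch, heads, q_len, kv_len, bytes_per_el=4):
    """Memory to materialize a dense [batch, heads, q_len, kv_len] score tensor."""
    return batch * heads * q_len * kv_len * bytes_per_el

# Shapes from the repro above: batch=16, nhead=3, seq_len=25000, WINDOW=5.
dense = attn_scores_bytes(16, 3, 25000, 25000)     # full seq_len x seq_len scores
banded = attn_scores_bytes(16, 3, 25000, 2 * 5 + 1)  # idealized sliding-window band

print(f"{dense / 1e9:.0f} GB dense vs {banded / 1e6:.1f} MB banded")
```

The dense case is 120 GB of scores versus tens of megabytes for the band, which explains why a mask that doesn't actually skip blocks runs out of memory at these sequence lengths.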
is_title: true
id: 3,030,729,300
title: Can't reconstruct communication groups using PyTorch
user: nannaer
state: open
labels: [ "oncall: distributed", "triaged", "module: c10d" ]
comments: 2
author_association: NONE
### 🐛 Describe the bug

I attempted to use PyTorch to reconstruct communication groups, but I encountered an error. The code flow is as follows. First, processes 0 and 1 form the first communication group and perform an all-reduce, while processes 2, 3, and 4 form the second communication group and perform an all-reduce. Then a pre-launched process (i.e., process 2) is added to the first communication group. At this point there should be two new communication groups: the first consisting of processes 0, 1, and 2 performing an all-reduce, and the second consisting of processes 3 and 4 performing an all-reduce.

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import time

def setup(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '30030'
    print(f"Rank {rank}: Initializing the distributed environment...")
    dist.init_process_group(
        backend="nccl",
        rank=rank,
        world_size=world_size
    )
    print(f"Rank {rank}: Distributed environment initialization completed")

def test_loop(rank, world_size):
    setup(rank, world_size)
    dist.barrier()

    # ================== Phase 1: Initial Grouping ==================
    print(f"\nRank {rank}: ===== Entering Phase 1: Initial Grouping =====")
    initial_groups = {}
    # Create initial communication groups
    if rank in [0, 1]:
        print(f"Rank {rank}: Creating initial group1...")
        group = dist.new_group(ranks=[0, 1])
        initial_groups['group1'] = group
        print(f"Rank {rank}: Initial group1 created")
    elif rank in [2, 3, 4]:
        print(f"Rank {rank}: Creating initial group2...")
        group = dist.new_group(ranks=[2, 3, 4])
        initial_groups['group2'] = group
        print(f"Rank {rank}: Initial group2 created")
    dist.barrier()

    # Perform all-reduce within the group
    tensor = torch.tensor([rank], dtype=torch.float32)
    if 'group1' in initial_groups:
        print(f"Rank {rank}: [Phase 1] Starting all-reduce for group1")
        dist.all_reduce(tensor, op=dist.ReduceOp.SUM, group=initial_groups['group1'])
        print(f"Rank {rank}: [Phase 1] Completed all-reduce for group1, result: {tensor}")
    elif 'group2' in initial_groups:
        print(f"Rank {rank}: [Phase 1] Starting all-reduce for group2")
        dist.all_reduce(tensor, op=dist.ReduceOp.SUM, group=initial_groups['group2'])
        print(f"Rank {rank}: [Phase 1] Completed all-reduce for group2, result: {tensor}")

    # ================== Synchronous Wait ==================
    print(f"\nRank {rank}: Waiting for all processes to complete Phase 1...")
    dist.barrier()
    print(f"Rank {rank}: All processes have completed Phase 1")

    # ================== Phase 2: Reconstruct Communication Groups ==================
    print(f"\nRank {rank}: ===== Entering Phase 2: Reconstruct Communication Groups =====")
    new_groups = {}
    # Create new communication groups
    if rank in [0, 1, 2]:
        print(f"Rank {rank}: Starting to create new_group1")
        start_time = time.time()
        new_group = dist.new_group(ranks=[0, 1, 2])
        end_time = time.time()
        new_groups['new_group1'] = new_group
        print(f"Rank {rank}: Completed creating new_group1 (time taken: {(end_time - start_time)*1000:.2f}ms)")
    elif rank in [3, 4]:
        print(f"Rank {rank}: Starting to create new_group2")
        start_time = time.time()
        new_group = dist.new_group(ranks=[3, 4])
        end_time = time.time()
        new_groups['new_group2'] = new_group
        print(f"Rank {rank}: Completed creating new_group2 (time taken: {(end_time - start_time)*1000:.2f}ms)")

    # Perform all-reduce within the new group
    tensor = torch.tensor([rank], dtype=torch.float32)
    if 'new_group1' in new_groups:
        print(f"Rank {rank}: [Phase 2] Starting all-reduce for new_group1")
        dist.all_reduce(tensor, op=dist.ReduceOp.SUM, group=new_groups['new_group1'])
        print(f"Rank {rank}: [Phase 2] Completed all-reduce for new_group1, result: {tensor}")
    elif 'new_group2' in new_groups:
        print(f"Rank {rank}: [Phase 2] Starting all-reduce for new_group2")
        dist.all_reduce(tensor, op=dist.ReduceOp.SUM, group=new_groups['new_group2'])
        print(f"Rank {rank}: [Phase 2] Completed all-reduce for new_group2, result: {tensor}")

    # Clean up
    print(f"\nRank {rank}: Cleaning up resources...")
    dist.destroy_process_group()
    print(f"Rank {rank}: Resource cleanup completed")

if __name__ == "__main__":
    num_processes = 5
    torch.multiprocessing.spawn(
        test_loop,
        args=(num_processes,),
        nprocs=num_processes,
        join=True
    )
```

The error message is as follows.

```python
W0430 17:30:06.737000 145845 site-packages/torch/multiprocessing/spawn.py:169] Terminating process 146342 via signal SIGTERM
W0430 17:30:06.737000 145845 site-packages/torch/multiprocessing/spawn.py:169] Terminating process 146343 via signal SIGTERM
W0430 17:30:06.738000 145845 site-packages/torch/multiprocessing/spawn.py:169] Terminating process 146344 via signal SIGTERM
W0430 17:30:06.738000 145845 site-packages/torch/multiprocessing/spawn.py:169] Terminating process 146346 via signal SIGTERM
Traceback (most recent call last):
  File "/nvme1/chenjiefei/project/vllm/tests/distributed/test_raw_gloo.py", line 198, in <module>
    mp.spawn(
  File "/nvme1/chenjiefei/miniconda3/envs/vllm-pp/lib/python3.12/site-packages/torch/multiprocessing/spawn.py", line 340, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method="spawn")
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/nvme1/chenjiefei/miniconda3/envs/vllm-pp/lib/python3.12/site-packages/torch/multiprocessing/spawn.py", line 296, in start_processes
    while not context.join():
              ^^^^^^^^^^^^^^
  File "/nvme1/chenjiefei/miniconda3/envs/vllm-pp/lib/python3.12/site-packages/torch/multiprocessing/spawn.py", line 215, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:

-- Process 3 terminated with the following error:
Traceback (most recent call last):
  File "/nvme1/chenjiefei/miniconda3/envs/vllm-pp/lib/python3.12/site-packages/torch/multiprocessing/spawn.py", line 90, in _wrap
    fn(i, *args)
  File "/nvme1/chenjiefei/project/vllm/tests/distributed/test_raw_gloo.py", line 151, in test_loop
    dist.all_reduce(tensor, op=dist.ReduceOp.SUM, group=initial_groups['group2'])
  File "/nvme1/chenjiefei/miniconda3/envs/vllm-pp/lib/python3.12/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/nvme1/chenjiefei/miniconda3/envs/vllm-pp/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 2806, in all_reduce
    work = group.allreduce([tensor], opts)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.distributed.DistBackendError: NCCL error in: /pytorch/torch/csrc/distributed/c10d/NCCLUtils.hpp:268, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.21.5
ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error.
Last error: socketStartConnect: Connect to 10.130.8.138<40441> failed : Software caused connection abort
```

### Versions

You can reproduce the error by running the script above.

cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
true
3,030,571,422
Use std::apply for CPU code
cyyever
open
[ "module: cpu", "triaged", "open source", "topic: not user facing" ]
3
COLLABORATOR
The supported compilers are recent enough to enable std::apply in C++17. cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
true
3,030,567,310
elastic: do not shutdown rendezvous on leaving workers
georgkaleido
open
[ "oncall: distributed", "triaged", "open source", "release notes: distributed (torchelastic)" ]
1
NONE
In #117066, shutdown of the rendezvous was added when a worker shuts down. This is incorrect, because the rendezvous is actually shut down in [this file](https://github.com/pytorch/pytorch/blob/fa6f9eb2be07f6289d2ab4e781077f7fc75dbe55/torch/distributed/launcher/api.py#L290), but it should not be shut down when a signal is received. See also [this pull request](https://github.com/pytorch/pytorch/pull/67749). #124819 then tried to remediate the situation by fixing the faulty shutdown for the restart case, but that fix is only triggered if the agent restarts the training, not if the rendezvous was already shut down before. Removing both of these changes restores the original behavior: the rendezvous should only be shut down when a run completes or fails, not when a single worker leaves. Fixes #150916 Fixes #147064 cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
true
3,030,410,049
Release torch with CUDA12.1 for 2.6 and even latest version
zh794390558
closed
[ "oncall: releng" ]
1
NONE
### 🚀 The feature, motivation and pitch Release torch builds with CUDA 12.1 for 2.6 and even the latest version. ### Alternatives _No response_ ### Additional context _No response_
true
3,030,367,093
[compile async] [cache] testing
ChuanqiXu9
open
[ "open source", "topic: not user facing", "module: inductor" ]
7
CONTRIBUTOR
The comment (https://github.com/pytorch/pytorch/blob/5a52e050248c71dd6e84f51d25cbd17a88555800/torch/_inductor/compile_fx_subproc.py#L70-L87) says it is problematic not to clean the cache for subprocess compile, but I have had some trouble testing this locally. I followed the suggestion here https://dev-discuss.pytorch.org/t/how-to-test-properly/2492 to test it. Would anyone like to approve this so it can be tested in CI? Thanks in advance. cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @aorenste
true
3,030,312,358
Remove the unnecessary cuda/Tensor.cpp
FFFrog
closed
[ "open source", "Merged", "ciflow/trunk", "topic: not user facing", "skip-pr-sanity-checks" ]
4
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152522 * #152521 * #152513 * #152512 As the title stated. **Question:** I have carefully looked through all the .h files included by Tensor.cpp, and from my perspective this file does not make sense. Does anyone know the background for doing this? cc @albanD
true
3,030,312,126
Make torch/csrc/utils.h to be device-agnostic
FFFrog
closed
[ "open source", "Merged", "ciflow/trunk", "topic: not user facing", "ciflow/rocm" ]
6
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152522 * __->__ #152521 * #152513 * #152512 `torch/csrc/utils.h` should be device-independent. Currently, it contains CUDA-related implementations, which indirectly causes the [failure of ROCm testing](https://github.com/pytorch/pytorch/pull/151914#issuecomment-2839691038) (the reason is that the ROCm test environment shouldn't expose HIP-related header files, which causes the JIT compilation to fail during testing). Therefore, move the CUDA-related implementations to `torch/csrc/cuda/utils.h`. **Question:** This change may introduce a BC break. I searched for this function globally on GitHub, and I think the impact is very small.
true
3,030,309,032
DISABLED test_comprehensive_lu_cuda_float32 (__main__.TestInductorOpInfoCUDA)
pytorch-bot[bot]
open
[ "high priority", "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: inductor" ]
4
NONE
Platforms: inductor This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_lu_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41392207876). Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_comprehensive_lu_cuda_float32` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. <details><summary>Sample error message</summary> ``` Traceback (most recent call last): File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1131, in test_wrapper return test(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1430, in only_fn return fn(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper fn(*args, **kwargs) File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn return fn(slf, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn return fn(slf, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn return fn(slf, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper fn(*args, **kwargs) File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper fn(*args, **kwargs) File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1396, in patched return func(*newargs, **newkeywargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner return func(*args, **kwds) ^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner return func(*args, **kwds) ^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner return func(*args, **kwds) ^^^^^^^^^^^^^^^^^^^ File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner raise e File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner fn(self, device, dtype, op) File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive raise e File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive self.check_model_gpu( File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner return func(*args, **kwds) ^^^^^^^^^^^^^^^^^^^ File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu check_model( File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 608, in check_model actual_grad = compute_grads(example_inputs, kwargs, actual, grads) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 406, in compute_grads return torch.autograd.grad( ^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/__init__.py", line 503, in grad result = _engine_run_backward( ^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/function.py", line 307, in apply return user_fn(self, *args) ^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2163, in backward return impl_fn() ^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2149, in impl_fn out = CompiledFunction._backward_impl(ctx, all_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2241, in _backward_impl CompiledFunction.compiled_bw = aot_config.bw_compiler( ^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 483, in __call__ return self.compiler_fn(gm, example_inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/backends/common.py", line 73, in _wrapped_bw_compiler disable( File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 856, in _fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_utils_internal.py", line 97, in wrapper_function return 
function(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 2201, in bw_compiler return inner_compile( ^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 726, in compile_fx_inner return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper inner_compiled_fn = compiler_fn(gm, example_inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 862, in _compile_fx_inner raise InductorError(e, currentframe()).with_traceback( File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 846, in _compile_fx_inner mb_compiled_graph = fx_codegen_and_compile( ^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1460, in fx_codegen_and_compile return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1347, in codegen_and_compile compiled_module = graph.compile_to_module() ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2219, in compile_to_module return self._compile_to_module() ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2266, in _compile_to_module mod = PyCodeCache.load_by_key_path( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3006, in 
load_by_key_path mod = _reload_python_module(key, path, set_sys_modules=in_toplevel) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module exec(code, mod.__dict__, mod.__dict__) File "/tmp/tmp4uk8am0s/gc/cgcy5el4nhw3okrgdi6gx5q7lckdlzhwobgat6d45umgkdklwgb3.py", line 243, in <module> async_compile.wait(globals()) File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 448, in wait self._wait_futures(scope) File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 468, in _wait_futures scope[key] = result.result() ^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3508, in result return self.result_fn() ^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 343, in get_result kernel.precompile( File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 322, in precompile self._make_launchers() File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 479, in _make_launchers launchers.append(result.make_launcher()) ^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1276, in make_launcher self.reload_cubin_path() File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1268, in reload_cubin_path raise RuntimeError( torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpg6ru0zop/triton/RX3R4CGSAX6ULKAUAPP2RQK3HJWYNPF3X27PHUDKXPUFUALGM24A/triton_poi_fused_cat_2.cubin') The above exception was the direct cause of the following 
exception: Traceback (most recent call last): File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper method(*args, **kwargs) File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper method(*args, **kwargs) File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test result = test(self, **param_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn return fn(slf, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper fn(*args, **kwargs) File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1143, in test_wrapper raise e_tracked from e Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(3, 5), device="cuda:0", dtype=torch.float32], args=(True,True), kwargs={}, broadcasts_input=False, name='') To execute this test, run the following from the base repo dir: PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_lu_cuda_float32 This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 ``` </details> Test file path: `inductor/test_torchinductor_opinfo.py` cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
true
3,030,274,206
update README
TheHiddenLayer
closed
[ "open source", "topic: not user facing" ]
2
NONE
Fixes #ISSUE_NUMBER
true
3,030,243,769
Remove redundant line in partitioner
fmassa
closed
[ "fb-exported", "Merged", "ciflow/trunk", "topic: not user facing", "ciflow/inductor" ]
5
MEMBER
Summary: This is a cleanup from https://github.com/pytorch/pytorch/pull/152264, which contained a line which was a vestige from a previous implementation. Test Plan: Let CI run Differential Revision: D73904636
true
3,030,230,515
second test to fix after added logging
wdvr
closed
[ "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
1
CONTRIBUTOR
Fixes #ISSUE_NUMBER cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
3,030,210,866
[MPS] Migrate mul to TensorIterator
malfet
closed
[ "Merged", "topic: not user facing", "release notes: mps", "ciflow/mps" ]
4
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152743 * #152737 * __->__ #152515 What was initially supposed to be a very straightforward change resulted in a small refactor of the binary op tensor generators when invoked for mixed dtypes, which surfaced via a `test_output_grad_match_sinc_mps_float16` test failure. If the operands are of different dtypes (in particular, a float16 tensor and a float32 scalar), one must perform the operation at `opmath_t` (or `TensorIterator::common_dtype()`) precision, rather than casting both operands to the output dtype and then performing it, as the following example demonstrates:
```
>>> torch.tensor([-1.8633, 6.2031, -2.2500, -3.3926, 8.5938, 5.9766], dtype=torch.half).mul(torch.pi)
tensor([ -5.8555, 19.4844, -7.0703, -10.6562, 27.0000, 18.7812], dtype=torch.float16)
>>> torch.tensor([-1.8633, 6.2031, -2.2500, -3.3926, 8.5938, 5.9766], dtype=torch.half).mul(torch.tensor(torch.pi, dtype=torch.float16))
tensor([ -5.8516, 19.4844, -7.0664, -10.6562, 26.9844, 18.7656], dtype=torch.float16)
```
Solve this problem for now by introducing `REGISTER_OPMATH_BINARY_OP`, which indicates that operands must be cast to `opmath_t` before performing the computation.
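The single-rounding vs double-rounding effect behind this can be reproduced on CPU tensors, independent of the MPS backend. A small sketch (the input values are taken from the example above; the two computations below model the two code paths, not the backend's actual kernels):

```python
import torch

vals = [-1.8633, 6.2031, -2.2500, -3.3926, 8.5938, 5.9766]
t = torch.tensor(vals, dtype=torch.half)

# Promote to float32 ("opmath") precision, multiply, then round to half once:
# the behavior expected from mul(half_tensor, float_scalar).
single_round = (t.to(torch.float32) * torch.pi).to(torch.half)

# Downcast the scalar to half first, then multiply: the scalar is rounded
# before the multiplication, so the result is effectively rounded twice.
double_round = t * torch.tensor(torch.pi, dtype=torch.half)

print(torch.equal(single_round, double_round))  # the two strategies disagree
```

The elementwise differences are small (on the order of one half-precision ulp), but they are exactly what made the gradient-match test fail.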
true
3,030,177,025
[MPS][BE] Migrate `lerp.Scalar.out` to tensor iterator
malfet
closed
[ "Merged", "release notes: mps", "ciflow/mps" ]
4
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152515 * __->__ #152514
true
3,030,145,839
Remove unnecessary __STDC_FORMAT_MACROS macro
FFFrog
closed
[ "open source", "Merged", "ciflow/trunk", "topic: not user facing" ]
5
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152522 * #152521 * __->__ #152513 * #152512 As the title stated.
true
3,030,136,915
Remove unnecessary condition compilation macro
FFFrog
closed
[ "open source", "Merged", "ciflow/trunk", "topic: not user facing" ]
4
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152522 * #152521 * #152513 * __->__ #152512 As the title stated.
true
3,030,129,039
DISABLED test_inductor_debug (__main__.LoggingTests)
malfet
closed
[ "high priority", "triage review", "module: flaky-tests", "skipped" ]
2
CONTRIBUTOR
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22dynamo%2Ftest_logging.py%3A%3ALoggingTests%3A%3Atest_inductor_debug%22%5D)). cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000
true
3,030,027,552
[BE] Migrate all add/sub ops to Metal kernels
malfet
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "release notes: mps", "ciflow/mps" ]
5
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152510 As the typecasting harness should take care of all permutations. Fix a bug in `exec_binary_kernel` where it was not properly downcasting CPU double/complexDouble scalars to floats. Fixes https://github.com/pytorch/pytorch/issues/152582
true
3,029,974,976
[2/N] Deprecate c10::string_view and at::string
cyyever
open
[ "open source", "ciflow/trunk", "topic: not user facing" ]
3
COLLABORATOR
Fixes #ISSUE_NUMBER
true
3,029,962,779
[quantizer] Fix quantizer tests after dq conv is enabled
mcr229
closed
[ "fb-exported", "ciflow/trunk", "release notes: quantization", "release notes: AO frontend" ]
2
CONTRIBUTOR
Summary: X-link: https://github.com/pytorch/executorch/pull/10569 Fixing some broken tests after we enabled dq convs Test Plan: CI Differential Revision: D73898719
true
3,029,932,451
[inductor] [compile async] Don't compile in eager
ChuanqiXu9
open
[ "triaged", "open source", "topic: not user facing", "module: inductor" ]
7
CONTRIBUTOR
Previously we would compile in eager mode. This does not look intentional, judging by the test: there is a check that the number of compilations (in the current process) is 0, but, perhaps due to an oversight, the number it checks is always zero. In _InProcessFxCompile and _SerializedFxCompile, we increment the `codegen_and_compile` counter on `self`, i.e. on an instance attribute, whereas the test reads the counter from the class. I think we should increment the counter on the class; with that change, the test now fails. See torch/_inductor/compile_fx_async.py for the fix. CC @aorenste cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
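The instance- vs class-attribute distinction described above can be shown in isolation (a minimal sketch; `Compiler` and `num_compiles` are made-up names, not the real inductor classes):

```python
class Compiler:
    num_compiles = 0  # class-level counter that the test reads

    def codegen_and_compile_buggy(self):
        # `self.num_compiles += 1` reads the class value (0) but writes an
        # *instance* attribute, so Compiler.num_compiles never changes.
        self.num_compiles += 1

    def codegen_and_compile_fixed(self):
        # Incrementing on the class makes the count visible to code that
        # checks Compiler.num_compiles.
        type(self).num_compiles += 1


c = Compiler()
c.codegen_and_compile_buggy()
print(Compiler.num_compiles, c.num_compiles)  # 0 1 — the class counter is untouched
```

A test asserting `Compiler.num_compiles == 0` therefore passes vacuously against the buggy variant, which is the oversight described above.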
true
3,029,906,972
[Hierarchical Compile] Take into account mutation deps in cycle detection
mlazos
open
[ "module: dynamo", "ciflow/inductor", "release notes: dynamo" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152589 * #152572 * #152570 * __->__ #152506 * #152410 * #152505 * #152389 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
3,029,906,894
[Hierarchical Compilation] Use universal flatten APIs
mlazos
open
[ "module: dynamo", "ciflow/inductor", "release notes: dynamo" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152589 * #152572 * #152570 * #152506 * #152410 * __->__ #152505 * #152389 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
3,029,900,600
[Metal] Extend typecasted op support to complex dtypes
malfet
closed
[ "Merged", "topic: not user facing", "release notes: mps", "ciflow/mps" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152514 * #152510 * #152485 * __->__ #152504 First of all, extend `c10::metal::cast_to` to work correctly with complex dtypes by introducing two more specializations: one that casts complex to scalar, and another that casts scalar to complex (as the default Metal typecast will turn `float x` into `float2(x, x)`). Add ComplexHalf and ComplexFloat enum values to `c10::metal::ScalarTypes` and handle them in `val_at_offs(ptr, offs, type)`.
true
3,029,884,179
Return ConstantVariable(None) from WithExitFunctionVariable.exit to prevent NoneType crash inside autocast exception path
jansel
closed
[ "Merged", "ciflow/trunk", "module: dynamo", "ciflow/inductor", "release notes: dynamo" ]
3
CONTRIBUTOR
Copy of #152013 with PR time benchmarks updated (regressions seem unrelated) cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
3,029,879,537
Refactor nested benchmark functions in AlgorithmSelectorCache
masnesral
closed
[ "topic: not user facing", "module: inductor", "module: dynamo", "ciflow/inductor" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152502 * #152147 Summary: The motivation is to make AlgorithmSelectorCache.benchmark_in_current_process() a toplevel method so that I can leverage it for remote auto-tune. Currently, it's nested inside make_benchmark_fn(). Just for consistency, I also made AlgorithmSelectorCache.benchmark_in_sub_process a toplevel staticmethod. That method technically didn't need to move, but IMO the refactoring makes the code a bit more readable. Test Plan: existing unit tests cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,029,872,798
[Metal] Extend typecasted op support to complex dtypes
malfet
closed
[ "topic: bug fixes", "release notes: mps", "ciflow/mps" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * (to be filled)
true
3,029,852,966
DISABLED test_comprehensive_repeat_cuda_float64 (__main__.TestInductorOpInfoCUDA)
pytorch-bot[bot]
open
[ "high priority", "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: inductor" ]
2
NONE
Platforms: inductor This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_repeat_cuda_float64&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41381960290). Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_comprehensive_repeat_cuda_float64` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. <details><summary>Sample error message</summary> ``` Traceback (most recent call last): File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1131, in test_wrapper return test(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1430, in only_fn return fn(self, *args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper fn(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn return fn(slf, *args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn return fn(slf, *args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn return fn(slf, 
*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper fn(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper fn(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched return func(*newargs, **newkeywargs) File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner return func(*args, **kwds) File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner return func(*args, **kwds) File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner return func(*args, **kwds) File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner raise e File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner fn(self, device, dtype, op) File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive raise e File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive self.check_model_gpu( File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner return func(*args, **kwds) File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu check_model( File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 498, in check_model actual = run(*example_inputs, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 675, in _fn raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1 File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 862, in _compile_fx_inner raise InductorError(e, currentframe()).with_traceback( File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 846, in _compile_fx_inner mb_compiled_graph = fx_codegen_and_compile( File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1460, in fx_codegen_and_compile return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1347, in codegen_and_compile compiled_module = graph.compile_to_module() File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2219, in compile_to_module return self._compile_to_module() File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2266, in _compile_to_module mod = PyCodeCache.load_by_key_path( File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path mod = _reload_python_module(key, path, set_sys_modules=in_toplevel) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module exec(code, mod.__dict__, mod.__dict__) File "/tmp/tmpfhhehv7f/bx/cbxzzc6lurule353wyo3f3furudihsrzj4ahuph247vpj4syk4rk.py", line 74, in <module> async_compile.wait(globals()) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 479, in wait self._wait_futures(scope) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 499, in _wait_futures kernel = result.result() File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3508, in result return self.result_fn() File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 368, in get_result kernel.precompile( File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 324, in precompile self._make_launchers() File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 481, in _make_launchers launchers.append(result.make_launcher()) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1278, in make_launcher self.reload_cubin_path() File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1270, in reload_cubin_path raise RuntimeError( torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpz8tgkuit/triton/EHMJ6KW36AZDNSE22KDAQKRF7EKJ7K3IB5GJE32YJSHDQMMBRGIA/triton_poi_fused_repeat_0.cubin') Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo" The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper method(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper method(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test result = test(self, **param_kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper fn(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1143, in test_wrapper raise e_tracked from e Exception: Caused by sample input at index 20: SampleInput(input=Tensor[size=(), 
device="cuda:0", dtype=torch.float64], args=((3,1,1)), kwargs={}, broadcasts_input=False, name='') To execute this test, run the following from the base repo dir: PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=20 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_repeat_cuda_float64 This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 ``` </details> Test file path: `inductor/test_torchinductor_opinfo.py` cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
true
3,029,846,567
[ez][export] suggest torch._checks only for booleans
pianpwk
closed
[ "Merged", "ciflow/trunk", "fx", "ciflow/inductor", "release notes: export" ]
3
CONTRIBUTOR
Previously we suggested `torch._check` fixes even when the error came from int/float casts, producing nonsensical suggestions like `torch._check(zuf0), torch._check(~zuf0)` for non-boolean symbols; this change limits the suggestion to booleans. cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
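One torch-free way to see why the old suggestion was unusable for float symbols: unary `~` is not defined on Python floats, so a suggested fix along the lines of `torch._check(~zuf0)` would fail before it could check anything. (This is an illustrative sketch of the general Python behavior, not of SymFloat internals.)

```python
# Unary ~ (bitwise invert) is only defined for integral types in Python,
# so applying it to a float value raises TypeError immediately.
try:
    ~0.5
except TypeError as e:
    print(type(e).__name__)  # TypeError
```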
true
3,029,816,723
[cutlass backend] Add addmm dynamic support
henrylhtsang
closed
[ "fb-exported", "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
4
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152498 Differential Revision: [D73893133](https://our.internmc.facebook.com/intern/diff/D73893133/) cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,029,814,717
[export] support SymInt minlength for torch.bincount()
pianpwk
closed
[ "Merged", "ciflow/trunk", "release notes: export" ]
4
CONTRIBUTOR
null
true
3,029,803,932
Expose NCCL communicator from ProcessGroupNCCL via an unsafe API
GD06
closed
[ "oncall: distributed", "fb-exported", "Merged", "ciflow/trunk", "release notes: distributed (c10d)", "ciflow/autoformat" ]
14
CONTRIBUTOR
Differential Revision: D73892691 cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
true
3,029,790,604
[export] Refactor pt2 save/load
angelayi
open
[ "ciflow/trunk", "module: inductor", "ciflow/inductor", "release notes: export" ]
3
CONTRIBUTOR
Refactor the pt2 archive saving to consolidate the format of torch.export.save and torch._inductor.package.package_aoti. This PR adds the following functions, which torch.export.save and AOTI packaging calls into: ```python package_pt2( f: FileLike, *, exported_programs: Optional[Union[ExportedProgram, dict[str, ExportedProgram]]] = None, aoti_files: Optional[Union[list[str], dict[str, list[str]]]] = None, extra_files: Optional[dict[str, Any]] = None, ) -> FileLike @dataclass class PT2ArchiveContents: exported_programs: dict[str, ExportedProgram] aoti_runners: dict[str, AOTICompiledModel] extra_files: dict[str, Any] load_pt2(f: FileLike) -> PT2ArchiveContents ``` Power users directly call into these APIs if they want to bundle multiple exported programs, aoti files, or extra metadata. This is how the pt2 archive looks like ([spec](https://docs.google.com/document/d/1RQ4cmywilnFUT1VE-4oTGxwXdc8vowCSZsrRgo3wFA8/edit?tab=t.0)): ``` ├── archive_format ├── version ├── .data ├── data │ ├── aotinductor │ │ └── model1 │ │ ├── model1.cpp │ │ ├── model1.so # currently AOTI automatically moves weights in here, TODO to move it out │ │ ├── cg7domx3woam3nnliwud7yvtcencqctxkvvcafuriladwxw4nfiv.cubin │ │ └── cubaaxppb6xmuqdm4bej55h2pftbce3bjyyvljxbtdfuolmv45ex.cubin │ ├── weights │ │ ├── model1.pt # TODO to dedup weights between model1/model2 │ │ └── model2.pt │ └── constants │ │ ├── model1.pt # TODO to dedup weights between model1/model2 │ │ └── model2.pt │ └── sample_inputs │ ├── model1.pt # TODO to dedup weights between model1/model2 │ └── model2.pt ├── extra │ └── user_metadata.txt └── models ├── model1.json └── model2.json ``` Future todos: - unbundle the weights -- instead of .pt, we can use bin files, which will also allow us to dedup weights if we store multiple models - update aoti_compile_and_package to also save the exported program - integrate TNR with this packaging flow cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng 
@wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,029,758,654
[inductor][invoke_subgraph] Free the buffers before the subgraph call
anijain2305
closed
[ "Merged", "Reverted", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor", "ci-no-td", "ciflow/pull" ]
19
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152675 * __->__ #152494 Before ![image](https://github.com/user-attachments/assets/62b24c14-69e6-40fb-94e3-223930132ef6) After ![image](https://github.com/user-attachments/assets/9f340d4e-80a9-45aa-9400-626fff5b5ecd) tlparse - https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmph5dwWt/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,029,756,050
fix tests broken after #152450
wdvr
open
[ "Merged", "Reverted", "ciflow/trunk", "topic: not user facing", "module: dynamo", "ciflow/inductor", "ci-no-td" ]
14
CONTRIBUTOR
Updating test expected value after #152450 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
3,029,750,573
[ROCm] add almalinux images
jeffdaily
closed
[ "module: rocm", "open source", "Merged", "topic: not user facing", "ciflow/rocm" ]
3
COLLABORATOR
Fixes #ISSUE_NUMBER cc @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
true
3,029,734,938
[CUDA][SDPA] Bump python `fused_attention_vs_math_ref_grads` `fudge_factor` for `sm120`
eqy
closed
[ "module: cuda", "open source", "Merged", "ciflow/trunk", "topic: not user facing", "module: sdpa" ]
3
COLLABORATOR
🍦 cc @ptrblck @msaroufim @jerryzh168
true
3,029,727,452
[invoke_subgraph] Simplify output code for subgraph output node
anijain2305
closed
[ "Merged", "Reverted", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor", "ci-no-td" ]
8
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152675 * #152494 * __->__ #152490 * #152383 Before - [manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmppQg3F8/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmppQg3F8/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000) ![image](https://github.com/user-attachments/assets/8fecdc23-eb78-4e15-9d03-c4bae4b49434) After fix - https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmp9a5EM0/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000 ![image](https://github.com/user-attachments/assets/8e98120c-d82e-42dc-bc50-a6bfd4f9923c) cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,029,727,054
UNSTABLE Lint / Lint URLs / linux-job
huydhn
open
[ "module: ci", "triaged", "unstable" ]
1
CONTRIBUTOR
This job was added recently; marking it as unstable for now. cc @seemethere @malfet @pytorch/pytorch-dev-infra
true
3,029,707,668
[ROCm] Use almalinux docker files for building Magma
jeffdaily
closed
[ "module: rocm", "open source", "Merged", "topic: not user facing", "ciflow/rocm" ]
3
COLLABORATOR
Fixes #151707 for ROCm Magma builds. See also #152358. Depends on #152492. cc @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
true
3,029,706,699
[inductor][BE] Add more debug logs for why fx graph cache doesn't happen
henrylhtsang
closed
[ "Merged", "Reverted", "ciflow/trunk", "topic: not user facing", "module: inductor", "module: dynamo", "ciflow/inductor", "ci-no-td" ]
10
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152487 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,029,694,822
Change unsafe_marked_cacheable_functions to a dictionary, so that you can specify a static cache key
jamesjwu
open
[ "ciflow/trunk", "topic: not user facing", "module: inductor", "module: dynamo", "ciflow/inductor", "merging" ]
11
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152486 Fixes https://github.com/pytorch/pytorch/issues/152434 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,029,684,105
[MPS][BE] Remove `exec_binary_alpha_kernel`
malfet
closed
[ "Merged", "topic: not user facing", "release notes: mps", "ciflow/mps" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152515 * #152514 * #152510 * __->__ #152485 Which was almost a complete copy-n-paste from `exec_binary_kernel` anyway. Just add `Scalar` as an optional argument and figure out the kernel name during invocation rather than in the executor.
true
3,029,681,192
Fix flaky test in test_custom_ops
angelayi
open
[ "Merged", "Reverted", "ciflow/trunk", "topic: not user facing", "merging", "ci-no-td", "ciflow/pull" ]
17
CONTRIBUTOR
Hopefully fixes https://github.com/pytorch/pytorch/issues/151301, https://github.com/pytorch/pytorch/issues/151281 by making the ops have different names
true
3,029,661,503
[inductor][BE] cleanup and improve precompilation loggings
henrylhtsang
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152483 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,029,655,481
Update CODEOWNERS (torch/utils/data/)
divyanshk
closed
[ "Merged", "ciflow/trunk", "topic: not user facing" ]
4
CONTRIBUTOR
Updating codeowners for dataloading
true
3,029,654,460
[nativert] port enumerate from folly to c10::util
dolpm
closed
[ "fb-exported", "Merged", "ciflow/trunk", "topic: not user facing" ]
17
CONTRIBUTOR
Summary: nativert RFC: https://github.com/zhxchen17/rfcs/blob/master/RFC-0043-torch-native-runtime.md To land the runtime into PyTorch core, we will gradually land logical parts of the code into the Github issue and get each piece properly reviewed. This diff ports an enumeration util from folly into c10. Test Plan: CI Differential Revision: D73881042
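The ported utility is C++, but its semantics mirror Python's built-in `enumerate`: pairing each element of a sequence with its running index. A minimal Python sketch of that contract (the function name here is illustrative, not the c10 API):

```python
def enumerate_pairs(items, start=0):
    """Yield (index, element) pairs, mirroring the enumerate contract
    that the ported utility provides for C++ ranges."""
    index = start
    for item in items:
        yield (index, item)
        index += 1

pairs = list(enumerate_pairs(["folly", "c10", "nativert"]))
print(pairs)  # [(0, 'folly'), (1, 'c10'), (2, 'nativert')]
```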
true
3,029,650,141
[dynamo] Try tracing into einops
StrongerXi
open
[ "triaged", "oncall: pt2", "module: dynamo" ]
0
CONTRIBUTOR
### 🐛 Describe the bug This will allow us to 1. get rid of https://github.com/pytorch/pytorch/blob/accffef504b9162718c535201c263070565f28fa/torch/_dynamo/decorators.py#L704-L735 2. handle `einops` in a better way for AOTDispatcher cache: #152369. ### Error logs _No response_ ### Versions main cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
true
3,029,650,004
[MPS] Fix lerp for complex numbers
malfet
closed
[ "Merged", "topic: bug fixes", "release notes: mps", "ciflow/mps" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152510 * #152485 * #152504 * __->__ #152479 * #152466 As well as `.add`/`.sub` with complex alpha Before this change `python3 -c "import torch;print(torch.rand(10, device='mps', dtype=torch.complex64).add(torch.rand(10, device='mps', dtype=torch.complex64), alpha=.5j))"` used to fail with ``` RuntimeError: value cannot be converted to type double without overflow ```
true
3,029,637,430
[ONNX] Suggest users setting dynamo=True when exporting
titaiwangms
closed
[ "open source", "Merged", "ciflow/trunk", "release notes: onnx" ]
7
COLLABORATOR
Fixes #152025
true
3,029,633,929
[DO NOT REVIEW] Attempt a mixed precision fused adam
janeyx99
open
[ "ciflow/trunk", "release notes: foreach_frontend" ]
2
CONTRIBUTOR
Non ghstack version of stack at #147653 for easy import
true
3,029,627,570
[inductor][BE] cleanup and improve precompilation loggings
henrylhtsang
closed
[ "topic: not user facing", "module: inductor", "ciflow/inductor" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152476 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,029,613,684
[nativert] Move TensorMeta to pytorch core
yiming0416
closed
[ "fb-exported", "Merged", "ciflow/trunk", "topic: not user facing" ]
20
CONTRIBUTOR
Summary: Torch Native Runtime RFC: https://github.com/pytorch/rfcs/pull/72 This diff moves `TensorMeta.cpp` and `TensorMeta.h` to PyTorch core under `torch/nativert/graph/` Existing `torch::_export::TensorMeta` in `torch/csrc/utils/generated_serialization_types.h` is auto-generated from the export serde schema and therefore contains only the most basic serializable types. We need the newly added `TensorMeta.cpp` to deserialize the metadata into an in-memory class with c10 types so that it can be consumed by the runtime later. Test Plan: Added test under `test/cpp/nativert/test_tensor_meta.cpp` Differential Revision: D73820548
true
3,029,604,114
[CUDA] Fix `test_multi_device_context_manager` on CUDA
eqy
closed
[ "module: multi-gpu", "module: cuda", "open source", "Merged", "ciflow/trunk", "topic: not user facing" ]
3
COLLABORATOR
Seems there was a typo where `set_device` was called when the intent was to use `current_device` As-is the test will fail on multigpu systems with `TypeError: set_device() missing 1 required positional argument: 'device'` cc @ptrblck @msaroufim @jerryzh168
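The failure mode is plain Python argument binding, nothing CUDA-specific: `torch.cuda.set_device(device)` takes a required positional argument while `torch.cuda.current_device()` takes none. A GPU-free sketch using stand-in functions (the bodies are placeholders, not the real implementations):

```python
def set_device(device):
    # stand-in for torch.cuda.set_device: requires a positional argument
    return device

def current_device():
    # stand-in for torch.cuda.current_device: takes no arguments
    return 0

# The buggy test called set_device() where current_device() was intended,
# so Python raised before any device code could run.
try:
    set_device()
except TypeError as e:
    print(e)  # set_device() missing 1 required positional argument: 'device'
```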
true
3,029,602,292
[dynamo] Dynamo fails to run torch.cat() with FakeTensors because it can't confirm 's0 + s1*u0' is nonzero
Adam27X
open
[ "triaged", "oncall: pt2", "module: fakeTensor", "module: dynamic shapes", "module: dynamo" ]
1
NONE
### 🐛 Describe the bug Here's a fully reproducible test case. I found this issue using PyTorch 2.5.0 but was able to reproduce it with PyTorch 2.7.0: ``` import torch from typing import List from torch._dynamo.backends.common import aot_autograd from functorch.compile import make_boxed_func print(torch.__version__) device = torch.device("cuda") class Model(torch.nn.Module): def __init__(self): self.vsite_count = 8 super().__init__() def forward(self, ml_atoms): ml_atoms_vs = ml_atoms[:, ml_atoms[0]].repeat(1, self.vsite_count) #Edit for FxGraph generation with PyTorch 2.5-2.7 ml_atoms = torch.cat((ml_atoms, ml_atoms_vs), dim=1) return ml_atoms ml_atoms_in = torch.zeros([1,4296], device=device, dtype=torch.bool) ml_atoms_in[:,0:6] = True model = Model() torch._dynamo.config.capture_dynamic_output_shape_ops = True torch._dynamo.config.capture_scalar_outputs = True def test_compiler(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]): graph_modules.append(gm) return make_boxed_func(gm.forward) test_compiler = aot_autograd(fw_compiler=test_compiler,bw_compiler=test_compiler, decompositions=torch._decomp.core_aten_decompositions()) prototype = torch.compile(model, fullgraph=True, dynamic=True, backend=test_compiler) ml_atoms_cat = prototype(ml_atoms_in) ``` I think the issue here is that torch.cat has a check that the size of the concatenated dimension is nonzero and for some reason dynamo can't figure out that `s0 + s1*u0` must be nonzero because `s0`, `s1`, and `u0` are all symints. Note that this is something that worked with PyTorch 2.2.0. Perhaps this check was added since then. 
### Error logs ``` 2.7.0+cu126 I0429 17:30:24.974000 2078076 torch/fx/experimental/symbolic_shapes.py:3334] [0/0] create_env I0429 17:30:25.044000 2078076 torch/fx/experimental/symbolic_shapes.py:4606] [0/0] create_symbol s0 = 4296 for L['ml_atoms'].size()[1] [2, int_oo] ml_atoms_vs = ml_atoms[:, ml_atoms[0]].repeat(1, self.vsite_count) #Garand edit for FxGraph generation with PyTorch 2.5-2.7 # ./sandbox/torch_compile_cat_dynamic_shapes/test.py:17 in forward (_dynamo/variables/builder.py:3033 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0" or to suppress this message run with TORCHDYNAMO_EXTENDED_ADVICE="0" I0429 17:30:25.052000 2078076 torch/fx/experimental/symbolic_shapes.py:4276] [0/0] create_unbacked_symint u0 [-int_oo, int_oo] ml_atoms_vs = ml_atoms[:, ml_atoms[0]].repeat(1, self.vsite_count) #Garand edit for FxGraph generation with PyTorch 2.5-2.7 # ./sandbox/torch_compile_cat_dynamic_shapes/test.py:17 in forward (_subclasses/fake_impls.py:465 in nonzero) I0429 17:30:25.053000 2078076 torch/fx/experimental/symbolic_shapes.py:7162] [0/0] constrain_symbol_range u0 [0, 9223372036854775806] I0429 17:30:25.071000 2078076 torch/fx/experimental/symbolic_shapes.py:1130] [0/0] compute_unbacked_bindings [u0] I0429 17:30:25.074000 2078076 torch/fx/experimental/symbolic_shapes.py:4606] [0/0] create_symbol s1 = 8 for L['self'].vsite_count [-int_oo, int_oo] ml_atoms_vs = ml_atoms[:, ml_atoms[0]].repeat(1, self.vsite_count) #Garand edit for FxGraph generation with PyTorch 2.5-2.7 # ./sandbox/torch_compile_cat_dynamic_shapes/test.py:17 in forward (_dynamo/variables/builder.py:2010 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s1" or to suppress this message run with TORCHDYNAMO_EXTENDED_ADVICE="0" I0429 17:30:25.087000 2078076 torch/fx/experimental/symbolic_shapes.py:6630] [0/0] runtime_assert s1*u0 >= 0 [guard added] ml_atoms_vs = ml_atoms[:, ml_atoms[0]].repeat(1, self.vsite_count) #Garand edit for 
FxGraph generation with PyTorch 2.5-2.7 # ./sandbox/torch_compile_cat_dynamic_shapes/test.py:17 in forward (_refs/__init__.py:4796 in new_empty), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="s1*u0 >= 0" I0429 17:30:25.118000 2078076 torch/fx/experimental/symbolic_shapes.py:6630] [0/0] runtime_assert s0 + s1*u0 >= 0 [guard added] ml_atoms = torch.cat((ml_atoms, ml_atoms_vs), dim=1) # ./sandbox/torch_compile_cat_dynamic_shapes/test.py:18 in forward (_prims_common/__init__.py:620 in validate_dim_length), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="s0 + s1*u0 >= 0" W0429 17:30:25.138000 2078076 torch/fx/experimental/symbolic_shapes.py:6679] [0/0] failed during evaluate_expr(Eq(s0 + s1*u0, 0), hint=None, size_oblivious=True, forcing_spec=False E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] failed while running evaluate_expr(*(Eq(s0 + s1*u0, 0), None, False, True), **{}) E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] Traceback (most recent call last): E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/fx/experimental/recording.py", line 263, in wrapper E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] return retlog(fn(*args, **kwargs)) E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 6671, in evaluate_expr E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] return self._evaluate_expr( E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 6894, in _evaluate_expr E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] raise self._make_data_dependent_error( E0429 
17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression Eq(8*u0 + 4296, 0) (unhinted: Eq(s0 + s1*u0, 0)). (Size-like symbols: u0) E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] Caused by: ml_atoms = torch.cat((ml_atoms, ml_atoms_vs), dim=1) # ./sandbox/torch_compile_cat_dynamic_shapes/test.py:18 in forward (_ops.py:756 in __call__) E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] For more information, run with TORCH_LOGS="dynamic" E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0" E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1 E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] User Stack (most recent call last): E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] (snipped, see stack below for prefix) E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] File "/u/pgh/mclaugha/./sandbox/torch_compile_cat_dynamic_shapes/test.py", line 18, in forward E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] ml_atoms = torch.cat((ml_atoms, ml_atoms_vs), dim=1) E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] E0429 17:30:25.139000 2078076 torch/fx/experimental/recording.py:299] [0/0] For C++ stack 
trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1 Traceback (most recent call last): File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 3284, in run_node return node.target(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/utils/_stats.py", line 27, in wrapper return fn(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1282, in __torch_dispatch__ return self.dispatch(func, types, args, kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1823, in dispatch return self._cached_dispatch_impl(func, types, args, kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1393, in _cached_dispatch_impl output = self._dispatch_impl(func, types, args, kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 2333, in _dispatch_impl decomposition_table[func](*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 308, in _fn result = fn(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_compile.py", line 51, in inner return disable_fn(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn return fn(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 149, in _fn result = fn(**bound.arguments) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 2863, in cat return prims.cat(filtered, dim).clone(memory_format=memory_format) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_ops.py", line 756, in __call__ return self._op(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/utils/_stats.py", line 27, in wrapper return 
fn(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1282, in __torch_dispatch__ return self.dispatch(func, types, args, kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1823, in dispatch return self._cached_dispatch_impl(func, types, args, kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1393, in _cached_dispatch_impl output = self._dispatch_impl(func, types, args, kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 2355, in _dispatch_impl func.prim_meta_impl(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_prims/__init__.py", line 1794, in _cat_meta return TensorMeta( File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_prims/__init__.py", line 264, in TensorMeta return torch.empty_strided(shape, strides, dtype=dtype, device=device) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/utils/_stats.py", line 27, in wrapper return fn(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1282, in __torch_dispatch__ return self.dispatch(func, types, args, kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1823, in dispatch return self._cached_dispatch_impl(func, types, args, kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1384, in _cached_dispatch_impl output = self._dispatch_impl(func, types, args, kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 2397, in _dispatch_impl op_impl_out = op_impl(self, func, *args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_impls.py", line 189, in constructors r = func(*args, **new_kwargs) 
File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_ops.py", line 756, in __call__ return self._op(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/fx/experimental/sym_node.py", line 588, in guard_size_oblivious r = self.evaluate(size_oblivious=True) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/fx/experimental/sym_node.py", line 510, in evaluate return self.shape_env.evaluate_sym_node(self, size_oblivious) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 6655, in evaluate_sym_node return self.evaluate_expr( File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/fx/experimental/recording.py", line 263, in wrapper return retlog(fn(*args, **kwargs)) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 6671, in evaluate_expr return self._evaluate_expr( File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 6894, in _evaluate_expr raise self._make_data_dependent_error( torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression Eq(8*u0 + 4296, 0) (unhinted: Eq(s0 + s1*u0, 0)). 
(Size-like symbols: u0) Caused by: ml_atoms = torch.cat((ml_atoms, ml_atoms_vs), dim=1) # ./sandbox/torch_compile_cat_dynamic_shapes/test.py:18 in forward (_ops.py:756 in __call__) For more information, run with TORCH_LOGS="dynamic" For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0" If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1 For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing User Stack (most recent call last): (snipped, see stack below for prefix) File "/u/pgh/mclaugha/./sandbox/torch_compile_cat_dynamic_shapes/test.py", line 18, in forward ml_atoms = torch.cat((ml_atoms, ml_atoms_vs), dim=1) For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1 The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 3127, in get_fake_value ret_val = wrap_fake_exception( File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 2641, in wrap_fake_exception return fn() File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 3128, in <lambda> lambda: run_node(tx.output, node, args, kwargs, nnmodule) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 3325, in run_node raise RuntimeError(make_error_message(e)).with_traceback( File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 3284, in run_node return node.target(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/utils/_stats.py", line 27, in wrapper return fn(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1282, in __torch_dispatch__ return self.dispatch(func, types, args, kwargs) File 
"/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1823, in dispatch return self._cached_dispatch_impl(func, types, args, kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1393, in _cached_dispatch_impl output = self._dispatch_impl(func, types, args, kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 2333, in _dispatch_impl decomposition_table[func](*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 308, in _fn result = fn(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_compile.py", line 51, in inner return disable_fn(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn return fn(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 149, in _fn result = fn(**bound.arguments) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 2863, in cat return prims.cat(filtered, dim).clone(memory_format=memory_format) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_ops.py", line 756, in __call__ return self._op(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/utils/_stats.py", line 27, in wrapper return fn(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1282, in __torch_dispatch__ return self.dispatch(func, types, args, kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1823, in dispatch return self._cached_dispatch_impl(func, types, args, kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1393, in _cached_dispatch_impl output = self._dispatch_impl(func, types, 
args, kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 2355, in _dispatch_impl func.prim_meta_impl(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_prims/__init__.py", line 1794, in _cat_meta return TensorMeta( File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_prims/__init__.py", line 264, in TensorMeta return torch.empty_strided(shape, strides, dtype=dtype, device=device) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/utils/_stats.py", line 27, in wrapper return fn(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1282, in __torch_dispatch__ return self.dispatch(func, types, args, kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1823, in dispatch return self._cached_dispatch_impl(func, types, args, kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1384, in _cached_dispatch_impl output = self._dispatch_impl(func, types, args, kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 2397, in _dispatch_impl op_impl_out = op_impl(self, func, *args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_subclasses/fake_impls.py", line 189, in constructors r = func(*args, **new_kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_ops.py", line 756, in __call__ return self._op(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/fx/experimental/sym_node.py", line 588, in guard_size_oblivious r = self.evaluate(size_oblivious=True) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/fx/experimental/sym_node.py", line 510, in evaluate return self.shape_env.evaluate_sym_node(self, size_oblivious) File 
"/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 6655, in evaluate_sym_node return self.evaluate_expr( File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/fx/experimental/recording.py", line 263, in wrapper return retlog(fn(*args, **kwargs)) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 6671, in evaluate_expr return self._evaluate_expr( File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 6894, in _evaluate_expr raise self._make_data_dependent_error( RuntimeError: Dynamo failed to run FX node with fake tensors: call_function <built-in method cat of type object at 0x7fa772b7df40>(*((FakeTensor(..., device='cuda:0', size=(1, s0), dtype=torch.bool), FakeTensor(..., device='cuda:0', size=(1, s1*u0), dtype=torch.bool)),), **{'dim': 1}): got GuardOnDataDependentSymNode('Could not guard on data-dependent expression Eq(8*u0 + 4296, 0) (unhinted: Eq(s0 + s1*u0, 0)). 
(Size-like symbols: u0)\n\nCaused by: ml_atoms = torch.cat((ml_atoms, ml_atoms_vs), dim=1) # ./sandbox/torch_compile_cat_dynamic_shapes/test.py:18 in forward (_ops.py:756 in __call__)\nFor more information, run with TORCH_LOGS="dynamic"\nFor extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"\nIf you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1\nFor more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing\n\nUser Stack (most recent call last):\n (snipped, see stack below for prefix)\n File "/u/pgh/mclaugha/./sandbox/torch_compile_cat_dynamic_shapes/test.py", line 18, in forward\n ml_atoms = torch.cat((ml_atoms, ml_atoms_vs), dim=1)\n\nFor C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1') During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/u/pgh/mclaugha/./sandbox/torch_compile_cat_dynamic_shapes/test.py", line 37, in <module> ml_atoms_cat = prototype(ml_atoms_in) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 655, in _fn return fn(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1432, in __call__ return self._torchdynamo_orig_callable( File 
"/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 598, in __call__ return _compile( File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1059, in _compile guarded_code = compile_inner(code, one_graph, hooks, transform) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_utils_internal.py", line 97, in wrapper_function return function(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 761, in compile_inner return _compile_inner(code, one_graph, hooks, transform) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 797, in _compile_inner out_code = transform_code_object(code, transform) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object transformations(instructions, code_options) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 257, in _fn return fn(*args, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 715, in transform tracer.run() File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3500, in run super().run() File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1337, in run while self.step(): File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1246, in step self.dispatch_table[inst.opcode](self, inst) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 819, in wrapper return inner_fn(self, inst) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2278, in CALL_FUNCTION_KW self.call_function(fn, args, kwargs) File 
"/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1170, in call_function self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/variables/torch.py", line 1181, in call_function tensor_variable = wrap_fx_proxy( File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2302, in wrap_fx_proxy return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2368, in wrap_fx_proxy_cls return _wrap_fx_proxy( File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2464, in _wrap_fx_proxy example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True) File "/u/pgh/mclaugha/.venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 3214, in get_fake_value raise UserError( # noqa: B904 torch._dynamo.exc.UserError: Could not guard on data-dependent expression Eq(8*u0 + 4296, 0) (unhinted: Eq(s0 + s1*u0, 0)). 
(Size-like symbols: u0) Caused by: ml_atoms = torch.cat((ml_atoms, ml_atoms_vs), dim=1) # ./sandbox/torch_compile_cat_dynamic_shapes/test.py:18 in forward (_ops.py:756 in __call__) For more information, run with TORCH_LOGS="dynamic" For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0" If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1 For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing User Stack (most recent call last): (snipped, see stack below for prefix) File "/u/pgh/mclaugha/./sandbox/torch_compile_cat_dynamic_shapes/test.py", line 18, in forward ml_atoms = torch.cat((ml_atoms, ml_atoms_vs), dim=1) For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1 For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#constrain-as-size-example from user code: File "/u/pgh/mclaugha/./sandbox/torch_compile_cat_dynamic_shapes/test.py", line 18, in forward ml_atoms = torch.cat((ml_atoms, ml_atoms_vs), dim=1) ``` ### Versions 2.7.0 cc @chauhang @penguinwu @eellison @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @zou3519 @bdhirsh
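The failing guard `Eq(s0 + s1*u0, 0)` is undecidable because `u0` is unbacked: the compiler only knows it as a range, and the usual workaround is to narrow that range (e.g. with a `torch._check`-style assertion) before the `cat`. A minimal stand-alone model of that idea (plain Python, not the real symbolic-shapes machinery; the concrete sizes and ranges are illustrative):

```python
# Model of a "data-dependent guard": deciding whether the concatenated
# dim is zero requires the unbacked value u0. The guard is decidable
# only if the expression has the same truth value for every u0 in the
# range the compiler knows about.
def is_cat_dim_zero(s0, s1, u0_range):
    """Return True/False if decidable for every u0 in range, else None."""
    lo, hi = u0_range
    values = {s0 + s1 * u0 == 0 for u0 in range(lo, hi + 1)}
    return values.pop() if len(values) == 1 else None

# Without a hint, u0 could be 0..5 and the guard is undecidable:
assert is_cat_dim_zero(0, 8, (0, 5)) is None
# After asserting u0 >= 1 (the effect of a user-supplied check),
# the comparison becomes decidable:
assert is_cat_dim_zero(0, 8, (1, 5)) is False
```

This is only a sketch of why the guard fails; the real fix belongs in how the unbacked symbol is constrained at the `torch.cat` call site.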
true
3,029,602,252
[CUDAGraph Trees] support memory allocation on side stream
BoyuanFeng
closed
[ "Merged", "Reverted", "module: cuda graphs", "ciflow/trunk", "topic: not user facing", "module: inductor", "module: dynamo", "ciflow/inductor", "ci-no-td" ]
13
CONTRIBUTOR
I tried `beginAllocateToPool` instead of `_cuda_beginAllocateCurrentStreamToPool` and the error in #151199 no longer occurs. However, this approach is unsafe for multithreading: when multiple `run_eager` calls happen concurrently, we expect each to allocate memory from a different mem_pool. Since `beginAllocateToPool` does not check the stream, these allocations may land in the same mem_pool. So I use `_cuda_beginAllocateCurrentThreadToPool` to direct all memory allocations on the same thread to a given mem_pool. In particular, `_cuda_beginAllocateCurrentThreadToPool` records the launching thread id and, at runtime, checks whether the current thread id matches it. Fixes #151199 cc @mcarilli @ezyang @eellison @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
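The thread-scoping described above can be sketched in plain Python: record the launching thread id when the pool is attached, then compare against the current thread id on each allocation. This is only an illustration of the check, not the real CUDA caching-allocator code; the class name is hypothetical.

```python
import threading

class ThreadScopedPool:
    """Sketch of the _cuda_beginAllocateCurrentThreadToPool idea:
    remember which thread attached the pool, and route an allocation
    into it only when made from that same thread."""

    def __init__(self):
        # Record the launching thread id at attach time.
        self.owner = threading.get_ident()

    def owns_current_thread(self) -> bool:
        # Runtime check: does the allocating thread match the launcher?
        return threading.get_ident() == self.owner

pool = ThreadScopedPool()
assert pool.owns_current_thread()  # same thread: allocation goes to the pool

result = {}
t = threading.Thread(
    target=lambda: result.setdefault("owns", pool.owns_current_thread())
)
t.start()
t.join()
assert result["owns"] is False  # other threads fall back to the default pool
```

Because the check keys on thread id rather than stream, concurrent `run_eager` calls on different threads cannot be accidentally routed into each other's mem_pool.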
true
3,029,600,038
add device generalisation support for distributed tests
harikodali
open
[ "oncall: distributed", "triaged", "open source", "release notes: distributed (c10d)", "ciflow/xpu" ]
1
NONE
Modified the following files for device generalization: test/distributed/optim/test_zero_redundancy_optimizer.py, test/distributed/test_c10d_logger.py, test/distributed/test_compute_comm_reordering.py, torch/testing/_internal/common_distributed.py. Device info is retrieved and used in the tests via get_devtype() from torch.testing._internal.common_fsdp. DistributedTestBase is used instead of MultiProcessTestCase to make use of its helper functions. cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
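The device-generalization pattern boils down to resolving one device type up front and writing the test body against it. A stand-alone sketch of that selection logic (the helper here is hypothetical; the real `get_devtype()` lives in `torch.testing._internal.common_fsdp` and queries actual backend availability):

```python
def pick_device(available: dict) -> str:
    """Illustrative stand-in for get_devtype(): prefer an accelerator
    when one is present, otherwise fall back to CPU, so the same test
    body can run on CUDA, XPU, or CPU-only machines."""
    for dev in ("cuda", "xpu"):
        if available.get(dev):
            return dev
    return "cpu"

# The test then parameterizes on the resolved device string instead of
# hard-coding "cuda":
assert pick_device({"cuda": True}) == "cuda"
assert pick_device({"xpu": True}) == "xpu"
assert pick_device({}) == "cpu"
```

Centralizing the choice in one helper is what lets a single test file cover multiple backends without per-device forks of the test.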
true
3,029,599,825
DISABLED test_comprehensive_polygamma_polygamma_n_1_cuda_float16 (__main__.TestInductorOpInfoCUDA)
pytorch-bot[bot]
open
[ "high priority", "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: inductor" ]
5
NONE
Platforms: inductor This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_polygamma_polygamma_n_1_cuda_float16&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41372514673). Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_comprehensive_polygamma_polygamma_n_1_cuda_float16` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. 
<details><summary>Sample error message</summary> ``` Traceback (most recent call last): File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1131, in test_wrapper return test(*args, **kwargs) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1430, in only_fn return fn(self, *args, **kwargs) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper fn(*args, **kwargs) ~~^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn return fn(slf, *args, **kwargs) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn return fn(slf, *args, **kwargs) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn return fn(slf, *args, **kwargs) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper fn(*args, **kwargs) ~~^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper fn(*args, **kwargs) ~~^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/unittest/mock.py", line 1424, in patched return func(*newargs, **newkeywargs) File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner return func(*args, **kwds) File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner return func(*args, **kwds) File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner return func(*args, **kwds) File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner raise e File 
"/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner fn(self, device, dtype, op) ~~^^^^^^^^^^^^^^^^^^^^^^^^^ File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive raise e File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive self.check_model_gpu( ~~~~~~~~~~~~~~~~~~~~^ fn, ^^^ ...<2 lines>... **adjusted_kwargs, ^^^^^^^^^^^^^^^^^^ ) ^ File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner return func(*args, **kwds) File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu check_model( ~~~~~~~~~~~^ self, ^^^^^ ...<13 lines>... output_process_fn_grad=output_process_fn_grad, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 608, in check_model actual_grad = compute_grads(example_inputs, kwargs, actual, grads) File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 406, in compute_grads return torch.autograd.grad( ~~~~~~~~~~~~~~~~~~~^ flat_diff_results, ^^^^^^^^^^^^^^^^^^ ...<3 lines>... retain_graph=True, ^^^^^^^^^^^^^^^^^^ ) ^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/__init__.py", line 503, in grad result = _engine_run_backward( outputs, ...<5 lines>... 
accumulate_grad=False, ) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ t_outputs, *args, **kwargs ^^^^^^^^^^^^^^^^^^^^^^^^^^ ) # Calls into the C++ engine to run the backward pass ^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/function.py", line 307, in apply return user_fn(self, *args) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2163, in backward return impl_fn() File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2149, in impl_fn out = CompiledFunction._backward_impl(ctx, all_args) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2241, in _backward_impl CompiledFunction.compiled_bw = aot_config.bw_compiler( ~~~~~~~~~~~~~~~~~~~~~~^ copy.deepcopy(bw_module), placeholder_list ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 483, in __call__ return self.compiler_fn(gm, example_inputs) ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/backends/common.py", line 73, in _wrapped_bw_compiler disable( ~~~~~~~~ bw_compiler_fn, reason="do not trace backward compiler function" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ )(*args, **kwargs), ~^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 856, in _fn return fn(*args, **kwargs) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 97, in wrapper_function 
return function(*args, **kwargs) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 2201, in bw_compiler return inner_compile( gm, ...<5 lines>... boxed_forward_device_index=forward_device, ) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 726, in compile_fx_inner return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")( ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^ gm, ^^^ example_inputs, ^^^^^^^^^^^^^^^ **kwargs, ^^^^^^^^^ ) ^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper inner_compiled_fn = compiler_fn(gm, example_inputs) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 862, in _compile_fx_inner raise InductorError(e, currentframe()).with_traceback( e.__traceback__ ) from None File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 846, in _compile_fx_inner mb_compiled_graph = fx_codegen_and_compile( gm, example_inputs, inputs_to_check, **graph_kwargs ) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1460, in fx_codegen_and_compile return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs) ~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1347, in codegen_and_compile compiled_module = graph.compile_to_module() File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/graph.py", line 2219, in compile_to_module return self._compile_to_module() ~~~~~~~~~~~~~~~~~~~~~~~^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/graph.py", line 2266, in _compile_to_module mod = PyCodeCache.load_by_key_path( key, ...<2 lines>... 
attrs={**self.constants, **self.torchbind_constants}, ) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path mod = _reload_python_module(key, path, set_sys_modules=in_toplevel) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module exec(code, mod.__dict__, mod.__dict__) ~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/tmp/tmpfcfyaga1/le/clerbfl5cvgzsi7mue3yy6hq2346emksh4svebdjys63r4nrnkef.py", line 76, in <module> async_compile.wait(globals()) ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/async_compile.py", line 448, in wait self._wait_futures(scope) ~~~~~~~~~~~~~~~~~~^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/async_compile.py", line 468, in _wait_futures scope[key] = result.result() ~~~~~~~~~~~~~^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/codecache.py", line 3508, in result return self.result_fn() ~~~~~~~~~~~~~~^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/async_compile.py", line 343, in get_result kernel.precompile( ~~~~~~~~~~~~~~~~~^ warm_cache_only=False, ^^^^^^^^^^^^^^^^^^^^^^ reload_kernel=reload_kernel_in_parent, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ static_triton_bundle_key=CompiledTritonKernels.key(source_code), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 322, in precompile self._make_launchers() ~~~~~~~~~~~~~~~~~~~~^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 479, in _make_launchers launchers.append(result.make_launcher()) ~~~~~~~~~~~~~~~~~~~~^^ File 
"/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1276, in make_launcher self.reload_cubin_path() ~~~~~~~~~~~~~~~~~~~~~~^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1268, in reload_cubin_path raise RuntimeError( "Cubin file saved by TritonBundler not found at %s", cubin_location ) torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmp31qnqqmw/triton/OPN4Y6JTPZRK7W3V3VNIMP4B6FRUQAWHOA3KTF3NPIVJSIQC3SUA/triton_poi_fused_mul_0.cubin') The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper method(*args, **kwargs) ~~~~~~^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper method(*args, **kwargs) ~~~~~~^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test result = test(self, **param_kwargs) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper fn(*args, **kwargs) ~~^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1143, in test_wrapper raise e_tracked from e Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(5, 5), device="cuda:0", dtype=torch.float16], args=(1), kwargs={}, broadcasts_input=False, name='') To execute this test, run the following from the base repo dir: PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_polygamma_polygamma_n_1_cuda_float16 This message can be suppressed by 
setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 ``` </details> Test file path: `inductor/test_torchinductor_opinfo.py` cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
true
3,029,599,820
DISABLED test_comprehensive_polygamma_polygamma_n_0_cuda_float16 (__main__.TestInductorOpInfoCUDA)
pytorch-bot[bot]
open
[ "high priority", "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: inductor" ]
6
NONE
Platforms: inductor This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_polygamma_polygamma_n_0_cuda_float16&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41372364802). Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_comprehensive_polygamma_polygamma_n_0_cuda_float16` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. 
<details><summary>Sample error message</summary> ``` Traceback (most recent call last): File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1131, in test_wrapper return test(*args, **kwargs) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1430, in only_fn return fn(self, *args, **kwargs) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper fn(*args, **kwargs) ~~^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn return fn(slf, *args, **kwargs) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn return fn(slf, *args, **kwargs) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1211, in dep_fn return fn(slf, *args, **kwargs) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper fn(*args, **kwargs) ~~^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper fn(*args, **kwargs) ~~^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/unittest/mock.py", line 1424, in patched return func(*newargs, **newkeywargs) File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner return func(*args, **kwds) File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner return func(*args, **kwds) File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner return func(*args, **kwds) File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner raise e File 
"/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner fn(self, device, dtype, op) ~~^^^^^^^^^^^^^^^^^^^^^^^^^ File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive raise e File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive self.check_model_gpu( ~~~~~~~~~~~~~~~~~~~~^ fn, ^^^ ...<2 lines>... **adjusted_kwargs, ^^^^^^^^^^^^^^^^^^ ) ^ File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 85, in inner return func(*args, **kwds) File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu check_model( ~~~~~~~~~~~^ self, ^^^^^ ...<13 lines>... output_process_fn_grad=output_process_fn_grad, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 608, in check_model actual_grad = compute_grads(example_inputs, kwargs, actual, grads) File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 406, in compute_grads return torch.autograd.grad( ~~~~~~~~~~~~~~~~~~~^ flat_diff_results, ^^^^^^^^^^^^^^^^^^ ...<3 lines>... retain_graph=True, ^^^^^^^^^^^^^^^^^^ ) ^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/__init__.py", line 503, in grad result = _engine_run_backward( outputs, ...<5 lines>... 
accumulate_grad=False, ) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ t_outputs, *args, **kwargs ^^^^^^^^^^^^^^^^^^^^^^^^^^ ) # Calls into the C++ engine to run the backward pass ^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/function.py", line 307, in apply return user_fn(self, *args) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2163, in backward return impl_fn() File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2149, in impl_fn out = CompiledFunction._backward_impl(ctx, all_args) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2241, in _backward_impl CompiledFunction.compiled_bw = aot_config.bw_compiler( ~~~~~~~~~~~~~~~~~~~~~~^ copy.deepcopy(bw_module), placeholder_list ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 483, in __call__ return self.compiler_fn(gm, example_inputs) ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/backends/common.py", line 73, in _wrapped_bw_compiler disable( ~~~~~~~~ bw_compiler_fn, reason="do not trace backward compiler function" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ )(*args, **kwargs), ~^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 856, in _fn return fn(*args, **kwargs) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 97, in wrapper_function 
return function(*args, **kwargs) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 2201, in bw_compiler return inner_compile( gm, ...<5 lines>... boxed_forward_device_index=forward_device, ) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 726, in compile_fx_inner return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")( ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^ gm, ^^^ example_inputs, ^^^^^^^^^^^^^^^ **kwargs, ^^^^^^^^^ ) ^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper inner_compiled_fn = compiler_fn(gm, example_inputs) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 862, in _compile_fx_inner raise InductorError(e, currentframe()).with_traceback( e.__traceback__ ) from None File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 846, in _compile_fx_inner mb_compiled_graph = fx_codegen_and_compile( gm, example_inputs, inputs_to_check, **graph_kwargs ) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1460, in fx_codegen_and_compile return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs) ~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1347, in codegen_and_compile compiled_module = graph.compile_to_module() File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/graph.py", line 2219, in compile_to_module return self._compile_to_module() ~~~~~~~~~~~~~~~~~~~~~~~^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/graph.py", line 2266, in _compile_to_module mod = PyCodeCache.load_by_key_path( key, ...<2 lines>... 
attrs={**self.constants, **self.torchbind_constants}, ) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path mod = _reload_python_module(key, path, set_sys_modules=in_toplevel) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module exec(code, mod.__dict__, mod.__dict__) ~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/tmp/tmp8r24cgp0/4q/c4qwyn7lpefsq4v7psi2ipa66uaugapjlmpblytcooo3sdy2igx2.py", line 79, in <module> async_compile.wait(globals()) ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/async_compile.py", line 448, in wait self._wait_futures(scope) ~~~~~~~~~~~~~~~~~~^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/async_compile.py", line 468, in _wait_futures scope[key] = result.result() ~~~~~~~~~~~~~^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/codecache.py", line 3508, in result return self.result_fn() ~~~~~~~~~~~~~~^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/async_compile.py", line 343, in get_result kernel.precompile( ~~~~~~~~~~~~~~~~~^ warm_cache_only=False, ^^^^^^^^^^^^^^^^^^^^^^ reload_kernel=reload_kernel_in_parent, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ static_triton_bundle_key=CompiledTritonKernels.key(source_code), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 322, in precompile self._make_launchers() ~~~~~~~~~~~~~~~~~~~~^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 479, in _make_launchers launchers.append(result.make_launcher()) ~~~~~~~~~~~~~~~~~~~~^^ File 
"/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1276, in make_launcher self.reload_cubin_path() ~~~~~~~~~~~~~~~~~~~~~~^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1268, in reload_cubin_path raise RuntimeError( "Cubin file saved by TritonBundler not found at %s", cubin_location ) torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpjt7206uo/triton/A7NL4DJP7DPYVZJOUY3KZAS6IXTYSHXPX4GE3WY2YQVV744RRENQ/triton_poi_fused_mul_0.cubin') The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper method(*args, **kwargs) ~~~~~~^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper method(*args, **kwargs) ~~~~~~^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test result = test(self, **param_kwargs) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper fn(*args, **kwargs) ~~^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_device_type.py", line 1143, in test_wrapper raise e_tracked from e Exception: Caused by sample input at index 7: SampleInput(input=Tensor[size=(), device="cuda:0", dtype=torch.float16], args=(3), kwargs={}, broadcasts_input=False, name='') To execute this test, run the following from the base repo dir: PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=7 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_polygamma_polygamma_n_0_cuda_float16 This message can be suppressed by 
setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 ``` </details> Test file path: `inductor/test_torchinductor_opinfo.py` cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
true
3,029,593,172
[CUDA][TF32] Account for TF32 in `compile_kernel_advanced`
eqy
closed
[ "module: cuda", "open source", "Merged", "module: tf32", "ciflow/trunk", "topic: not user facing" ]
3
COLLABORATOR
Also cleanup some uses of `assert_close` in favor of `self.assertEqual` cc @ptrblck @msaroufim @jerryzh168 @zasdfgbnm
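TF32 keeps float32's exponent range but only 10 explicit mantissa bits, which is why TF32-aware tests need looser tolerances than plain float32 comparisons. A minimal sketch of the precision loss, simulating TF32 by truncating the mantissa (real tensor cores round to nearest, so this only approximates the error bound):

```python
import struct

def to_tf32(x: float) -> float:
    """Zero the low 13 mantissa bits of a float32, keeping the 10
    explicit mantissa bits TF32 retains (truncation; hardware rounds
    to nearest, so this is a slightly pessimistic approximation)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return struct.unpack("<f", struct.pack("<I", bits & ~0x1FFF))[0]

a, b = 1.0000001, 3.1415926
# Matmul inputs are rounded to TF32 before multiply-accumulate, so
# results can differ from float32 by roughly 2**-10 relative error.
exact = a * b
tf32 = to_tf32(a) * to_tf32(b)
rel_err = abs(exact - tf32) / abs(exact)
assert rel_err < 2 ** -9  # loose bound covering the dropped bits
```

This is why a test that asserts exact `float32` agreement can pass on older GPUs but fail once TF32 kicks in.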
true
3,029,591,389
`torch.export` fails on `InstanceNorm1d`
ar0ck
closed
[ "oncall: pt2", "oncall: export" ]
2
CONTRIBUTOR
### 🐛 Describe the bug

```python
import torch

mod = torch.nn.InstanceNorm1d(1)
args = torch.randn(1, 2),
torch.export.export(mod, args)
```

### Error logs

```python
Traceback (most recent call last):
  File "bug.py", line 5, in <module>
    torch.export.export(mod, args)
  File ".../lib/python3.12/site-packages/torch/export/__init__.py", line 318, in export
    raise e
  File ".../lib/python3.12/site-packages/torch/export/__init__.py", line 285, in export
    return _export(
           ^^^^^^^^
  File ".../lib/python3.12/site-packages/torch/export/_trace.py", line 1110, in wrapper
    raise e
  File ".../lib/python3.12/site-packages/torch/export/_trace.py", line 1076, in wrapper
    ep = fn(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^
  File ".../lib/python3.12/site-packages/torch/export/exported_program.py", line 122, in wrapper
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File ".../lib/python3.12/site-packages/torch/export/_trace.py", line 2122, in _export
    ep = _export_for_training(
         ^^^^^^^^^^^^^^^^^^^^^
  File ".../lib/python3.12/site-packages/torch/export/_trace.py", line 1110, in wrapper
    raise e
  File ".../lib/python3.12/site-packages/torch/export/_trace.py", line 1076, in wrapper
    ep = fn(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^
  File ".../lib/python3.12/site-packages/torch/export/exported_program.py", line 122, in wrapper
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File ".../lib/python3.12/site-packages/torch/export/_trace.py", line 1983, in _export_for_training
    export_artifact = export_func(
                      ^^^^^^^^^^^^
  File ".../lib/python3.12/site-packages/torch/export/_trace.py", line 1925, in _non_strict_export
    aten_export_artifact = _to_aten_func(  # type: ignore[operator]
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".../lib/python3.12/site-packages/torch/export/_trace.py", line 1710, in _export_to_aten_ir_make_fx
    gm, graph_signature = transform(_make_fx_helper)(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".../lib/python3.12/site-packages/torch/export/_trace.py", line 1851, in _aot_export_non_strict
    gm, sig = aot_export(wrapped_mod, args, kwargs=kwargs, **flags)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".../lib/python3.12/site-packages/torch/export/_trace.py", line 1639, in _make_fx_helper
    buf: input_names[param_len + i]
         ~~~~~~~~~~~^^^^^^^^^^^^^^^
IndexError: list index out of range
```

### Versions

```
Collecting environment information...
PyTorch version: 2.8.0.dev20250428+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39

Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-45-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-1370P
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 16%
CPU max MHz: 5200.0000
CPU min MHz: 400.0000
BogoMIPS: 4377.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 704 KiB (14 instances)
L2 cache: 11.5 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] torch==2.8.0.dev20250428+cpu
[conda] Could not collect
```

cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
true
3,029,584,793
[MPS][BE] Introduce `c10::metal::mul`
malfet
closed
[ "Merged", "topic: not user facing", "release notes: mps", "ciflow/mps" ]
6
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152510 * #152485 * #152504 * #152479 * __->__ #152466 Introduces a helper that multiplies two arguments of either scalar or complex data type. This allows one to get rid of a bunch of complex specializations in BinaryOps
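The helper's job is the ordinary complex product alongside the scalar one; a hedged Python sketch of the dispatch it centralizes (this mirrors the idea, not the actual Metal code):

```python
# One generic mul covering real and complex operands, so callers no
# longer special-case complex in every binary op.
def mul(a, b):
    if isinstance(a, complex) or isinstance(b, complex):
        a, b = complex(a), complex(b)
        # (x + yi)(u + vi) = (xu - yv) + (xv + yu)i
        return complex(a.real * b.real - a.imag * b.imag,
                       a.real * b.imag + a.imag * b.real)
    return a * b

assert mul(2.0, 3.0) == 6.0
assert mul(1 + 2j, 3 + 4j) == -5 + 10j
```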
true
3,029,536,955
torch.export with dynamic shapes on Static Cache HF LLama model fails
peri044
open
[ "oncall: pt2", "oncall: export" ]
1
CONTRIBUTOR
### 🐛 Describe the bug I'm trying to export HF Llama model with Static Cache. StaticCache export is supported but only can be used with static input shapes (https://github.com/huggingface/transformers/blob/f39f4960f30e3eadd6d948e4dcb2da32eda253b5/tests/utils/test_cache_utils.py#L247-L271 and https://github.com/huggingface/transformers/blob/main/src/transformers/integrations/executorch.py#L493-L497). When I try to export it using dynamic shapes, I face the following error ```py torch._dynamo.exc.UserError: Cannot associate shape {} specified at `dynamic_shapes['past_key_values']` to non-tensor type <class 'transformers.cache_utils.StaticCache'> at `inputs['past_key_values']` (expected None) For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#dynamic-shapes-validation The error above occurred when calling torch.export.export. If you would like to view some more information about this error, and get a list of all other errors that may occur in your export call, you can replace your `export()` call with `draft_export()` ``` If I change the past_key_values (in `dynamic_shapes`) to `"past_key_values" : None`, the error is ```py ValueError: Unsupported input type <class 'transformers.cache_utils.StaticCache'>. Export only supports pytree containers of basic types (Tensor, int, float, ...) as input. To register a custom dataclass, use torch.export.register_dataclass. To register a custom container type, use torch.utils._pytree.register_pytree_node. To register a constant input, use torch.utils._pytree.register_constant ``` What is the correct way to export it with dynamic shapes ? Is this supported ? Thanks !! 
cc: @angelayi ```py import torch import transformers from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig, TorchExportableModuleWithStaticCache, StaticCache with torch.inference_mode(): max_seq_len = 2176 DEVICE="cuda" model_id = "meta-llama/Llama-3.2-1B-Instruct" model = AutoModelForCausalLM.from_pretrained( model_id, device_map=DEVICE, torch_dtype=torch.float16, attn_implementation="sdpa", num_hidden_layers=1, generation_config=GenerationConfig( use_cache=True, cache_implementation="static", max_length=max_seq_len, cache_config={ "batch_size": 1, "max_cache_len": max_seq_len, "device": DEVICE, }, ), ).eval().cuda() static_cache = StaticCache( config=model.config, max_batch_size=model.generation_config.cache_config.batch_size, max_cache_len=model.generation_config.cache_config.max_cache_len, device=model.generation_config.cache_config.device, dtype=model.dtype, ) for i in range(len(static_cache.key_cache)): model.register_buffer(f"key_cache_{i}", static_cache.key_cache[i], persistent=False) model.register_buffer(f"value_cache_{i}", static_cache.value_cache[i], persistent=False) model.is_causal = any("CausalLM" in arch for arch in model.model.config.architectures) if model.is_causal: causal_mask = torch.tril( torch.ones( static_cache.max_cache_len, static_cache.max_cache_len, dtype=torch.bool, ) ) model.register_buffer("mask", causal_mask, persistent=False) tokenizer = AutoTokenizer.from_pretrained(model_id) prompt = "What is parallel programming ?" 
model_inputs = tokenizer(prompt, return_tensors="pt") input_ids = model_inputs["input_ids"].to(DEVICE) cache_position = torch.arange(input_ids.shape[1]).to(DEVICE) seq_len = torch.export.Dim("seq_len", min=1, max=max_seq_len) cache_len = torch.export.Dim("cache_len", min=1, max=max_seq_len) exported_program = torch.export.export( model, args=(), kwargs={"input_ids" : input_ids, "cache_position": cache_position, "past_key_values": static_cache}, dynamic_shapes={"input_ids" : {1: seq_len}, "cache_position" : {0: cache_len}, "past_key_values" : {}}, strict=False, ) gm = exported_program.module() print(gm.graph) ``` ### Versions >>> import torch >>> torch.__version__ '2.8.0.dev20250423+cu128' >>> transformers.__version__ '4.49.0' cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
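Export can only attach dynamic shapes to tensor leaves it can see, which is what the pytree registration mentioned in the second error provides: a flatten function exposing the container's tensors as leaves plus an unflatten function rebuilding it. A toy, torch-free sketch of that mechanism (names are illustrative, not the real `torch.utils._pytree` API):

```python
# Minimal stand-in for pytree registration: flatten returns
# (leaves, static_context); unflatten rebuilds the container.
class ToyCache:
    def __init__(self, key, value):
        self.key, self.value = key, value

REGISTRY = {}

def register_pytree_node(cls, flatten_fn, unflatten_fn):
    REGISTRY[cls] = (flatten_fn, unflatten_fn)

register_pytree_node(
    ToyCache,
    lambda c: ([c.key, c.value], None),     # leaves, static context
    lambda leaves, ctx: ToyCache(*leaves),  # rebuild from leaves
)

def tree_flatten(obj):
    flatten_fn, _ = REGISTRY[type(obj)]
    return flatten_fn(obj)

cache = ToyCache([1, 2], [3, 4])
leaves, ctx = tree_flatten(cache)
rebuilt = REGISTRY[ToyCache][1](leaves, ctx)
assert rebuilt.key == [1, 2] and rebuilt.value == [3, 4]
```

Whether `StaticCache` is meant to be registered this way, or handled via buffers as in the executorch integration, is exactly the open question of this issue.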
true
3,029,534,053
Run link checks on modified files on push too
shoumikhin
closed
[ "Merged", "topic: not user facing" ]
3
CONTRIBUTOR
https://github.com/pytorch/pytorch/issues/152439
true
3,029,517,587
consolidate guard_or_x and definitely_x
laithsakka
closed
[ "Merged", "ciflow/trunk", "release notes: fx", "fx", "module: inductor", "ciflow/inductor", "suppress-bc-linter" ]
12
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152463 definitely_true is almost the same as guard_or_false; the potential differences are not meaningful enough to justify the existence of both. The same holds for definitely_false: it can be expressed with guard_or_true and guard_or_false. cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
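The overlap can be seen by modeling a symbolic boolean as True, False, or None for "cannot be decided statically"; under that model the definitely_* helpers reduce to guard_or_* exactly as the PR argues (a sketch, not the real sympy-based implementations):

```python
# None models a data-dependent expression that can't be decided
# statically; guard_or_* pick a fallback, definitely_* follow.
def guard_or_false(e):
    return False if e is None else e

def guard_or_true(e):
    return True if e is None else e

def definitely_true(e):
    return guard_or_false(e)        # unknown -> not definitely true

def definitely_false(e):
    return not guard_or_true(e)     # unknown -> not definitely false

for e in (True, False, None):
    assert definitely_true(e) == (e is True)
    assert definitely_false(e) == (e is False)
```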
true
3,029,492,771
[standalone_compile] fix dynamic shapes with config_patches
zou3519
closed
[ "Merged", "ciflow/trunk", "module: inductor", "ciflow/inductor", "release notes: inductor" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152462 compile_fx with config_patches goes down another path where we need to propagate the kwarg... Test Plan: - updated test cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,029,454,563
Remove dead binary_ios_build, test, upload scripts
clee2000
closed
[ "Merged", "topic: not user facing" ]
3
CONTRIBUTOR
Can't find any mentions of them in the codebase, presumably no longer used?
true
3,029,427,049
[pytorch][triton] flex attention fwd kernel with TMA loads (#151923)
mandroid6
open
[ "fb-exported", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
6
CONTRIBUTOR
Summary: Device side TMA for flex_attention fwd kernel, Q K V tensors Test Plan: Unit test: ``` buck test 'fbcode//mode/opt' fbcode//caffe2/test/inductor:flex_attention -- test_tma_with_customer_kernel_options ``` https://www.internalfb.com/intern/testinfra/testrun/14355223891618726 Differential Revision: D71082691 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,029,420,061
[IR] Input Adapter refactor prototype
felixsu2006
open
[ "fb-exported", "release notes: export" ]
3
CONTRIBUTOR
Summary: 1. Adding `input` field to `_adapt_flat_args` function 2. In `process_forward_inputs`, `reorder_kwargs` will now do nothing if no kwargs are provided (previously would error) 3. Pass `args` as input to `_adapt_flat_args` These changes are made to update the InputAdapter see more context in D72341439 Test Plan: see D72341439 Differential Revision: D72341439
true
3,029,407,184
fix: Update padding_mode to use Literal for type checking
sujeet4010
open
[ "triaged", "open source", "topic: not user facing" ]
2
NONE
Fixes #152280
true
3,029,388,614
[inductor] Fix usage of launch_enter_hook/launch_exit_hook
danzimm
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor", "release notes: inductor" ]
5
CONTRIBUTOR
In https://github.com/triton-lang/triton/pull/6467 I moved where `launch_enter_hook`/`launch_exit_hook` are specified (from the kernel class to a config). This PR updates the usages to use the config module if it exists to support tip of main triton. In https://github.com/triton-lang/triton/pull/6641 I renamed `triton.config` to `triton.knobs`, hence the second commit in this PR. Test Plan: Setup OSS PT with tip of main triton (namely including https://github.com/triton-lang/triton/pull/6641) and run `python test/inductor/test_pad_mm.py` cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @oulgen @jamesjwu
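A hedged sketch of the resulting feature-detection pattern, with `SimpleNamespace` standing in for the triton module since it may not be installed here (attribute paths are illustrative, not triton's exact layout):

```python
from types import SimpleNamespace

# Tip-of-main triton exposes a knobs module; older releases don't.
new_triton = SimpleNamespace(
    knobs=SimpleNamespace(runtime=SimpleNamespace(launch_enter_hook=None))
)
old_triton = SimpleNamespace()  # no knobs attribute

def get_hook_container(triton_mod):
    knobs = getattr(triton_mod, "knobs", None)
    if knobs is not None:
        return knobs.runtime  # new location for launch hooks
    return None               # caller falls back to the legacy path

assert get_hook_container(new_triton) is new_triton.knobs.runtime
assert get_hook_container(old_triton) is None
```

The `getattr(..., None)` probe lets one inductor codebase support both triton versions without pinning.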
true
3,029,280,080
Fix XLA issue.
laithsakka
closed
[ "release notes: jit", "ciflow/inductor" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152456 * #152455
true
3,029,279,968
[export] add runtime assert messages to python torch checks (#150719)
laithsakka
open
[ "release notes: fx", "fx", "ciflow/inductor" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152456 * __->__ #152455 Summary: ~fixes https://github.com/pytorch/pytorch/issues/150063 (for python at least) Before: ``` Runtime assertion failed for expression Eq(Mod(s16*s35, s35 - 1), 0) on node 'eq' ``` Now: ``` RuntimeError: Runtime assertion failed for expression Eq(Mod(s16*s35, s35 - 1), 0) on node 'eq' The original traceback points to the following location and error message: /data/users/pianpwk/pytorch/torch/_prims_common/__init__.py:954 shape '[s35 - 1, ((s16*s35)//(s35 - 1))]' is invalid for input of size s16*s35 ``` cc ezyang SherlockNoMad EikanWang jgong5 wenzhe-nrv Reviewed By: laithsakka Differential Revision: D72483950 Pulled By: pianpwk cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
true
3,029,254,320
Decorate `test_host_memory_stats` with `@serialTest`
eqy
closed
[ "open source", "Merged", "ciflow/trunk", "topic: not user facing" ]
3
COLLABORATOR
Seems to need it, as the test expects only its own allocation behavior to be visible; addresses #152422
true
3,029,244,351
Add epoch to fake tensor cache key
tugsbayasgalan
open
[ "fb-exported", "ciflow/trunk", "topic: not user facing", "ciflow/inductor" ]
20
CONTRIBUTOR
Summary: This is especially necessary when the output of an op is an unbacked symbol. In those cases, we need to regenerate a new unbacked val and bind it to the old unbacked symbol via rebind_unbacked_symbol in downstream use. Test Plan: CI Differential Revision: D73869239
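The role of the epoch can be sketched with an ordinary dict cache: without the epoch in the key, the second round of lookups would keep returning the value minted for the previous round of unbacked symbols (illustrative only, not the fake tensor cache's real structure):

```python
# Cache keyed by (epoch, key): bumping the epoch invalidates all
# earlier entries without touching them individually.
class EpochCache:
    def __init__(self):
        self.epoch = 0
        self._store = {}

    def bump_epoch(self):
        self.epoch += 1

    def get_or_compute(self, key, compute):
        full_key = (self.epoch, key)
        if full_key not in self._store:
            self._store[full_key] = compute()
        return self._store[full_key]

c = EpochCache()
first = c.get_or_compute("op", lambda: "u0")          # fresh symbol
assert c.get_or_compute("op", lambda: "u1") == "u0"   # cache hit
c.bump_epoch()
assert c.get_or_compute("op", lambda: "u1") == "u1"   # recomputed
```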
true
3,029,244,113
Implement async manifold cache write
oulgen
closed
[ "fb-exported", "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
4
CONTRIBUTOR
Summary: This diff implements an AsyncManifoldCache class that performs cache write and update ttl operations in an async manner. Essentially we are ok with the fire and forget approach where we dont guarantee that we can observe our writes, this gives us better runtime latency. Test Plan: added new unit test Reviewed By: jamesjwu Differential Revision: D73867797 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
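The fire-and-forget shape described here can be sketched with a thread pool: the put returns immediately and the caller never synchronously observes its own write, trading consistency for runtime latency (illustrative, not the Manifold client API):

```python
from concurrent.futures import ThreadPoolExecutor

class AsyncCache:
    def __init__(self, backend):
        self._backend = backend
        self._pool = ThreadPoolExecutor(max_workers=2)

    def put_async(self, key, value):
        # Don't block the caller; errors are dropped (fire and forget).
        self._pool.submit(self._backend.__setitem__, key, value)

    def close(self):
        self._pool.shutdown(wait=True)  # tests can wait; prod wouldn't

store = {}
cache = AsyncCache(store)
cache.put_async("k", 42)   # returns immediately
cache.close()              # only here to make the write observable
assert store["k"] == 42
```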
true
3,029,234,271
[dynamo] `torch.compile` prevents fsdp warning from getting generated
StrongerXi
open
[ "oncall: distributed", "triaged", "oncall: pt2" ]
0
CONTRIBUTOR
### 🐛 Describe the bug

Repro:

```
PYTORCH_TEST_WITH_DYNAMO=1 python test/distributed/fsdp/test_wrap.py TestAutoWrap.test_frozen_params
```

### Error logs

```
Traceback (most recent call last):
  File "/home/ryanguo99/repos/pytorch/torch/testing/_internal/common_utils.py", line 3154, in wrapper
    method(*args, **kwargs)
  File "/home/ryanguo99/repos/pytorch/test/distributed/fsdp/test_wrap.py", line 883, in test_frozen_params
    self._test_frozen_params(use_orig_params, policy)
  File "/home/ryanguo99/repos/pytorch/test/distributed/fsdp/test_wrap.py", line 894, in _test_frozen_params
    with ctx:
AssertionError: UserWarning not triggered

To execute this test, run the following from the base repo dir:
    PYTORCH_TEST_WITH_DYNAMO=1 python test/distributed/fsdp/test_wrap.py TestAutoWrap.test_frozen_params
```

### Versions

48555f19062, python 3.11

cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @chauhang @penguinwu
true
3,029,230,610
[PT2] Port replace_lce_with_matmul / replace_first_lce_with_fused_matmul_lce to PT2 pre_grad passes
kqfu
open
[ "fb-exported", "Merged", "Reverted", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor", "ci-no-td" ]
12
CONTRIBUTOR
Summary: Port over replace_lce_with_matmul and replace_first_lce_with_fused_matmul_lce to PT2 pre_grad pass. Original dper pass diffs: D67884534, D68123479, D68384238 Test Plan: Test 1. Covers replace_lce_with_matmul and case 1 of replace_first_lce_with_fused_matmul_lce ``` CUDA_VISIBLE_DEVICES=6 TORCH_LOGS=+inductor,aot TORCH_COMPILE_DEBUG=1 TORCHINDUCTOR_MAX_AUTOTUNE=1 buck2 run mode/opt-split-dwarf mode/inplace -c fbcode.platform010_cuda_version=12 -c fbcode.nvcc_arch=h100 caffe2/torch/fb/model_transform/experimental/benchmark:mts_gpu_benchmark -- --model-path=manifold://ads_storage_fblearner/tree/user/facebook/fblearner/predictor/669809193/0/gpu_lowering/input.predictor.disagg.gpu.merge --lower-backend="AOT_INDUCTOR" --add_passes="use_matmul_fuse_lce_replace_first_LCE,use_contiguous_linear_reduction_replace_linear_reduction" --batch-size=3072 --gpu-trace --disable_acc_tracer=true 2>&1 | tee ~/logs/disable_acc_tracer/aoti_cmf_ctr_triton_669809193_0_diable_acc.log ``` Log: P1798246938 Test 2. 
Covers replace_lce_with_matmul and case 2 of replace_first_lce_with_fused_matmul_lce ``` CUDA_VISIBLE_DEVICES=7 TORCH_LOGS=+inductor,aot TORCH_COMPILE_DEBUG=1 TORCHINDUCTOR_MAX_AUTOTUNE=1 buck2 run mode/opt-split-dwarf mode/inplace -c fbcode.platform010_cuda_version=12 -c fbcode.nvcc_arch=h100 caffe2/torch/fb/model_transform/experimental/benchmark:mts_gpu_benchmark -- --model-path=manifold://ads_storage_fblearner/tree/user/facebook/fblearner/predictor/677734158/9/gpu_lowering/input.predictor.disagg.gpu.merge --lower-backend="AOT_INDUCTOR" --add_passes="use_matmul_fuse_lce_replace_first_LCE,use_matmul_lce_replace_normal_LCE" --batch-size=3072 --gpu-trace --disable_acc_tracer=true 2>&1 | tee ~/logs/disable_acc_tracer/aoti_cmf_ctr_triton_677734158_9_diable_acc.log ``` Log: P1798246675 Seeing logs like `[Pre grad(predispatch IR)] Apply use_matmul_fuse_lce_replace_first_LCE pass, save before/after graph to /tmp/tmp8lyzoh79, graph before/after are the same = False` Reviewed By: huxintong Differential Revision: D71358949 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,029,230,160
Add new profiling events to `DebugAutotuner`
exclamaforte
open
[ "module: inductor", "ciflow/inductor" ]
2
CONTRIBUTOR
This PR adds support for more granular profiling events in DebugAutotuner. These profiling events are already present in CachingAutotuner, and their lack in DebugAutotuner breaks #149697 when `config.profile_bandwidth` is set. Also fixes an outstanding bug, where attempting to use the profiler to benchmark in DebugAutotuner would fail when the compilation was already in a profiler instance. cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,029,225,969
add device generalisation support for distributed tests
harikodali
closed
[ "oncall: distributed", "open source", "topic: not user facing" ]
1
NONE
The following tests are modified for device-generic support: test/distributed/optim/test_zero_redundancy_optimizer.py, test/distributed/test_c10d_logger.py, and test/distributed/test_compute_comm_reordering.py. Device info is retrieved via get_devtype() from torch.testing._internal.common_fsdp and used in the tests. DistributedTestBase is used instead of MultiProcessTestCase to make use of its helper functions. cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
true
3,029,220,449
[dynamo] guard code generation triggers attribute error on DeviceMesh object
StrongerXi
open
[ "triaged", "oncall: pt2", "module: dynamo" ]
2
CONTRIBUTOR
### 🐛 Describe the bug Repro: ``` PYTORCH_TEST_WITH_INDUCTOR=1 python test/distributed/tensor/debug/test_comm_mode.py TestCommMode.test_comm_mode_with_dtensor ``` ### Error logs ``` Traceback (most recent call last): File "/home/ryanguo99/repos/pytorch/torch/testing/_internal/common_utils.py", line 3154, in wrapper method(*args, **kwargs) File "/home/ryanguo99/repos/pytorch/test/distributed/tensor/debug/test_comm_mode.py", line 95, in test_comm_mode_with_dtensor mesh = DeviceMesh(self.device_type, list(range(self.world_size))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_dynamo/convert_frame.py", line 1458, in __call__ return self._torchdynamo_orig_callable( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_dynamo/convert_frame.py", line 1237, in __call__ result = self._inner_convert( ^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_dynamo/convert_frame.py", line 624, in __call__ return _compile( ^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_dynamo/convert_frame.py", line 1133, in _compile raise InternalTorchDynamoError( File "/home/ryanguo99/repos/pytorch/torch/_dynamo/convert_frame.py", line 1082, in _compile guarded_code = compile_inner(code, one_graph, hooks, transform) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_utils_internal.py", line 97, in wrapper_function return function(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_dynamo/convert_frame.py", line 777, in compile_inner return _compile_inner(code, one_graph, hooks, transform) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_dynamo/convert_frame.py", line 922, in _compile_inner check_fn = CheckFunctionManager( ^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_dynamo/guards.py", line 2564, in __init__ builder, guard_manager = self.build_guards( 
^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_dynamo/guards.py", line 2784, in build_guards guard.create(builder) File "/home/ryanguo99/repos/pytorch/torch/_guards.py", line 358, in create return self.create_fn(builder, self) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_dynamo/guards.py", line 1696, in EQUALS_MATCH code = [f"{ref} == {val!r}"] ^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/distributed/device_mesh.py", line 629, in __repr__ if not self.mesh_dim_names ^^^^^^^^^^^^^^^^^^^ torch._dynamo.exc.InternalTorchDynamoError: AttributeError: 'DeviceMesh' object has no attribute 'mesh_dim_names' To execute this test, run the following from the base repo dir: PYTORCH_TEST_WITH_INDUCTOR=1 python test/distributed/tensor/debug/test_comm_mode.py TestCommMode.test_comm_mode_with_dtensor ``` ### Versions 6e5e9dc321c, python 3.11 cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
true
3,029,200,397
[TorchDynamo] Fix failure to realize LazyVariableTracker on stack
Lucaskabela
closed
[ "module: dynamo", "ciflow/inductor" ]
3
CONTRIBUTOR
Fixes a bug in Dynamo's install-and-inline path where a `LazyVariableTracker` was not realized before creating the `FakeRootModule` on [L1129](https://github.com/pytorch/pytorch/blob/a3123dd3ab936a59dc3219dd77785b71c9147019/torch/_dynamo/output_graph.py#L1129) in output_graph.py. As a result, when we tried to access the graph attribute `bff`, the attribute was not found and Dynamo crashed with an error like: ``` File "/data/users/lucaskabela/pytorch/torch/fx/graph_module.py", line 493, in __init__ _copy_attr(root, self, node.target) File "/data/users/lucaskabela/pytorch/torch/fx/graph_module.py", line 238, in _copy_attr orig = getattr(from_module, field) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/users/lucaskabela/pytorch/torch/nn/modules/module.py", line 1944, in __getattr__ raise AttributeError( torch._dynamo.exc.InternalTorchDynamoError: AttributeError: 'FakeRootModule' object has no attribute 'L__self___bff' ``` Realizing the nested structures beforehand (since they are on the [stack in L1118](https://github.com/pytorch/pytorch/blob/a3123dd3ab936a59dc3219dd77785b71c9147019/torch/_dynamo/output_graph.py#L1117)) resolves this crash. ### Test: ``` python test/export/test_export_with_inline_and_install.py InlineAndInstallStrictExportTestExport.test_constant_output_inline_and_install_strict ``` cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
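The failure mode this PR fixes — consuming a lazily wrapped value before forcing it — can be illustrated with a generic Python sketch. All names here (`Lazy`, `copy_attrs`, `Root`) are hypothetical stand-ins, not Dynamo's actual `LazyVariableTracker` or `FakeRootModule` implementation:

```python
class Lazy:
    """Wraps a thunk; the underlying value exists only after realize()."""

    def __init__(self, thunk):
        self._thunk = thunk
        self._value = None
        self.realized = False

    def realize(self):
        if not self.realized:
            self._value = self._thunk()
            self.realized = True
        return self._value


def copy_attrs(root, values):
    # Analogous to populating the root module before codegen: forcing each
    # lazy value first means later getattr() calls cannot fail with
    # "object has no attribute ...".
    for name, v in values.items():
        setattr(root, name, v.realize() if isinstance(v, Lazy) else v)


class Root:
    pass


root = Root()
copy_attrs(root, {"bff": Lazy(lambda: 42)})
print(root.bff)  # 42
```

If `copy_attrs` instead stored the unrealized `Lazy` wrapper (or skipped it), downstream code expecting a concrete attribute would fail, mirroring the `AttributeError` in the report above.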
3,029,188,296
Use Literal for padding_mode in Conv2d for better type checking
sudiptap
closed
[ "oncall: distributed", "module: cpu", "open source", "release notes: quantization", "topic: not user facing", "module: inductor", "module: dynamo", "release notes: distributed (checkpoint)", "release notes: inductor (aoti)" ]
6
NONE
Fixes #152280 — use Literal for padding_mode to improve type checking. cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
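The `Literal`-based approach in this PR can be sketched with a minimal standalone function. This is an illustration of the technique, not the actual `Conv2d` signature; the alias name `PaddingMode` and the runtime guard are assumptions for the example:

```python
from typing import Literal, get_args

# Hypothetical alias mirroring the kind of annotation the PR adds.
PaddingMode = Literal["zeros", "reflect", "replicate", "circular"]


def make_padder(padding_mode: PaddingMode = "zeros") -> str:
    # Static checkers (mypy, pyright) reject string arguments outside the
    # Literal at type-check time; a runtime guard enforces the same
    # contract for dynamically constructed callers.
    if padding_mode not in get_args(PaddingMode):
        raise ValueError(f"unsupported padding_mode: {padding_mode!r}")
    return f"padding with {padding_mode}"


print(make_padder("reflect"))  # padding with reflect
```

With a plain `str` annotation, a typo like `make_padder("refelct")` would only fail at runtime; with `Literal`, the type checker flags it before the code runs.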
3,029,183,923
Log aot and idx waitcounters.
jovianjaison
closed
[ "fb-exported", "Merged", "Reverted", "ciflow/trunk", "module: inductor", "module: dynamo", "ciflow/inductor", "release notes: AO frontend", "ci-no-td" ]
33
CONTRIBUTOR
Summary: Added wait counters for `create_aot_dispatcher_function` and `compile_fx_inner`. Note: the log-wait-counters flag is already set for: 1. async_compile.precompile 2. remote_fx_graph_cache_get 3. remote_fx_graph_cache_put Test Plan: contbuild Differential Revision: D73866124 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,029,167,566
[MPS][BE] Delete unused lerp functors
malfet
closed
[ "Merged", "Reverted", "ciflow/trunk", "topic: not user facing", "release notes: mps", "ciflow/mps", "ci-no-td" ]
10
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152504 * #152485 * #152479 * #152466 * __->__ #152443 For `lerp.Scalar_out`, weight (aka alpha) is not an optional argument, so there is no point in having those specializations. But move the `alpha=1.0` handling ahead of dispatching to Metal shaders, as a plain copy of the tensor should still be faster https://github.com/pytorch/pytorch/blob/a1a4fee3b84b2d46f8c084f1aa9e6a45fa54b9c3/aten/src/ATen/native/mps/operations/BinaryOps.mm#L285-L290
true
3,029,160,787
`torch.compile` causes assertion error in distributed checkpoint wrapper test
StrongerXi
open
[ "oncall: distributed", "triaged", "oncall: pt2" ]
1
CONTRIBUTOR
### 🐛 Describe the bug Repro: ``` PYTORCH_TEST_WITH_INDUCTOR=1 python test/distributed/fsdp/test_checkpoint_wrapper.py CheckpointWrapperTest.test_checkpoint_wrapper_args_kwargs ``` ### Error logs ``` Traceback (most recent call last): File "/home/ryanguo99/.conda/envs/comfyui/lib/python3.11/unittest/case.py", line 57, in testPartExecutor yield File "/home/ryanguo99/.conda/envs/comfyui/lib/python3.11/unittest/case.py", line 623, in run self._callTestMethod(testMethod) File "/home/ryanguo99/.conda/envs/comfyui/lib/python3.11/unittest/case.py", line 579, in _callTestMethod if method() is not None: ^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/testing/_internal/common_utils.py", line 3154, in wrapper method(*args, **kwargs) File "/home/ryanguo99/repos/pytorch/test/distributed/fsdp/test_checkpoint_wrapper.py", line 128, in test_checkpoint_wrapper_args_kwargs torch.nn.Linear(1, 1), File "/home/ryanguo99/repos/pytorch/test/distributed/fsdp/test_checkpoint_wrapper.py", line 127, in torch_dynamo_resume_in_test_checkpoint_wrapper_args_kwargs_at_128 m = checkpoint_wrapper( File "/home/ryanguo99/repos/pytorch/torch/_dynamo/convert_frame.py", line 1458, in __call__ ^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_dynamo/convert_frame.py", line 1082, in _compile guarded_code = compile_inner(code, one_graph, hooks, transform) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_utils_internal.py", line 97, in wrapper_function return function(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_dynamo/convert_frame.py", line 777, in compile_inner return _compile_inner(code, one_graph, hooks, transform) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_dynamo/convert_frame.py", line 813, in _compile_inner out_code = transform_code_object(code, transform) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/home/ryanguo99/repos/pytorch/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object transformations(instructions, code_options) File "/home/ryanguo99/repos/pytorch/torch/_dynamo/convert_frame.py", line 264, in _fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_dynamo/convert_frame.py", line 741, in transform tracer.run() File "/home/ryanguo99/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 3494, in run super().run() File "/home/ryanguo99/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 1345, in run while self.step(): ^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 1254, in step self.dispatch_table[inst.opcode](self, inst) File "/home/ryanguo99/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 825, in wrapper return handle_graph_break(self, inst, speculation.reason) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 875, in handle_graph_break self.output.compile_subgraph(self, reason=reason) File "/home/ryanguo99/repos/pytorch/torch/_dynamo/output_graph.py", line 1239, in compile_subgraph self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) File "/home/ryanguo99/repos/pytorch/torch/_dynamo/output_graph.py", line 1500, in compile_and_call_fx_graph compiled_fn = self.call_user_compiler(gm) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_dynamo/output_graph.py", line 1552, in call_user_compiler return self._call_user_compiler(gm) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_dynamo/output_graph.py", line 1609, in _call_user_compiler raise BackendCompilerFailed( File "/home/ryanguo99/repos/pytorch/torch/_dynamo/output_graph.py", line 1584, in _call_user_compiler compiled_fn = compiler_fn(gm, self.example_inputs()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/home/ryanguo99/repos/pytorch/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__ compiled_gm = compiler_fn(gm, example_inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_dynamo/backends/inductor.py", line 23, in inductor return compile_fx(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_inductor/compile_fx.py", line 2279, in compile_fx return aot_autograd( ^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_dynamo/backends/common.py", line 106, in __call__ cg = aot_module_simplified(gm, example_inputs, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_functorch/aot_autograd.py", line 1171, in aot_module_simplified compiled_fn = AOTAutogradCache.load( ^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_functorch/_aot_autograd/autograd_cache.py", line 873, in load compiled_fn = dispatch_and_compile() ^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_functorch/aot_autograd.py", line 1156, in dispatch_and_compile compiled_fn, _ = create_aot_dispatcher_function( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_functorch/aot_autograd.py", line 576, in create_aot_dispatcher_function return _create_aot_dispatcher_function( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_functorch/aot_autograd.py", line 677, in _create_aot_dispatcher_function fw_metadata = run_functionalized_fw_and_collect_metadata( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 198, in inner flat_f_outs = f(*flat_f_args) ^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 899, in functional_call out = PropagateUnbackedSymInts(mod).run( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/fx/interpreter.py", line 
171, in run self.env[node] = self.run_node(node) ^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/fx/experimental/symbolic_shapes.py", line 7404, in run_node result = super().run_node(n) ^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/fx/interpreter.py", line 240, in run_node return getattr(self, n.op)(n.target, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/fx/interpreter.py", line 320, in call_function return target(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_higher_order_ops/wrap.py", line 236, in __call__ return checkpoint(Interpreter(gmod).run, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_compile.py", line 51, in inner return disable_fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/_dynamo/eval_frame.py", line 856, in _fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/home/ryanguo99/repos/pytorch/torch/utils/checkpoint.py", line 494, in checkpoint next(gen) File "/home/ryanguo99/repos/pytorch/torch/utils/checkpoint.py", line 1479, in _checkpoint_without_reentrant_generator isinstance(forward_context, TorchDispatchMode) and torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: AssertionError: In torch.compile mode, `context_fn` arg passed to `torch.utils.checkpoint` must generate a tuple of two `TorchDispatchMode`s. 
While executing %tag_activation_checkpoint : [num_users=1] = call_function[target=torch.ops.higher_order.tag_activation_checkpoint](args = (%wrap_body_0, %randn, %l_stack0_modules_checkpoint_wrapped_module_parameters_weight_, %l_stack0_modules_checkpoint_wrapped_module_parameters_bias_), kwargs = {use_reentrant: False}) GraphModule: class GraphModule(torch.nn.Module): def forward(self, L_stack0_modules_checkpoint_wrapped_module_parameters_weight_: "f32[1, 1][1, 1]", L_stack0_modules_checkpoint_wrapped_module_parameters_bias_: "f32[1][1]"): l_stack0_modules_checkpoint_wrapped_module_parameters_weight_ = L_stack0_modules_checkpoint_wrapped_module_parameters_weight_ l_stack0_modules_checkpoint_wrapped_module_parameters_bias_ = L_stack0_modules_checkpoint_wrapped_module_parameters_bias_ # File: /home/ryanguo99/repos/pytorch/test/distributed/fsdp/test_checkpoint_wrapper.py:133 in torch_dynamo_resume_in_test_checkpoint_wrapper_args_kwargs_at_127, code: m(torch.randn(2, 1)).sum().backward() randn: "f32[2, 1][1, 1]" = torch.randn(2, 1) # File: /home/ryanguo99/repos/pytorch/torch/distributed/algorithms/_checkpoint/checkpoint_wrapper.py:171 in forward, code: return self.checkpoint_fn( # type: ignore[misc] wrap_body_0 = self.wrap_body_0 tag_activation_checkpoint = torch.ops.higher_order.tag_activation_checkpoint(wrap_body_0, randn, l_stack0_modules_checkpoint_wrapped_module_parameters_weight_, l_stack0_modules_checkpoint_wrapped_module_parameters_bias_, use_reentrant = False); wrap_body_0 = randn = l_stack0_modules_checkpoint_wrapped_module_parameters_weight_ = l_stack0_modules_checkpoint_wrapped_module_parameters_bias_ = None getitem: "f32[2, 1][1, 1]" = tag_activation_checkpoint[0]; tag_activation_checkpoint = None # File: /home/ryanguo99/repos/pytorch/test/distributed/fsdp/test_checkpoint_wrapper.py:133 in torch_dynamo_resume_in_test_checkpoint_wrapper_args_kwargs_at_127, code: m(torch.randn(2, 1)).sum().backward() sum_1: "f32[][]" = getitem.sum(); getitem = None return 
(sum_1,) class wrap_body_0(torch.nn.Module): def forward(self, randn: "f32[2, 1][1, 1]", l_stack0_modules_checkpoint_wrapped_module_parameters_weight_: "f32[1, 1][1, 1]", l_stack0_modules_checkpoint_wrapped_module_parameters_bias_: "f32[1][1]"): # File: /home/ryanguo99/repos/pytorch/torch/distributed/algorithms/_checkpoint/checkpoint_wrapper.py:171 in forward, code: return self.checkpoint_fn( # type: ignore[misc] linear: "f32[2, 1][1, 1]" = torch._C._nn.linear(randn, l_stack0_modules_checkpoint_wrapped_module_parameters_weight_, l_stack0_modules_checkpoint_wrapped_module_parameters_bias_); randn = l_stack0_modules_checkpoint_wrapped_module_parameters_weight_ = l_stack0_modules_checkpoint_wrapped_module_parameters_bias_ = None return (linear,) Original traceback: File "/home/ryanguo99/repos/pytorch/test/distributed/fsdp/test_checkpoint_wrapper.py", line 133, in torch_dynamo_resume_in_test_checkpoint_wrapper_args_kwargs_at_127 m(torch.randn(2, 1)).sum().backward() File "/home/ryanguo99/repos/pytorch/torch/distributed/algorithms/_checkpoint/checkpoint_wrapper.py", line 171, in forward return self.checkpoint_fn( # type: ignore[misc] Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo" To execute this test, run the following from the base repo dir: PYTORCH_TEST_WITH_INDUCTOR=1 python test/distributed/fsdp/test_checkpoint_wrapper.py CheckpointWrapperTest.test_checkpoint_wrapper_args_kwargs ``` ### Versions 6e5e9dc321c, Python 3.11 cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @chauhang @penguinwu
true
3,029,140,184
Inductor pattern matching on mutable ops
yf225
open
[ "triaged", "module: functionalization", "oncall: pt2", "module: inductor", "module: pt2-dispatcher" ]
0
CONTRIBUTOR
### 🚀 The feature, motivation and pitch Pattern matching on mutable ops (e.g., to replace them with custom kernels) is a common usage that shows up in libraries such as vLLM. We'd like to improve the Inductor pattern matcher's support for it. This is one use case where both `x` and `out` are non-view tensors: ```python import torch from torch.library import register_fake from torch._inductor.pattern_matcher import register_replacement, fwd_only, PatternMatcherPass @torch.library.custom_op("mylib::foo_inplace", mutates_args={"x"}) def foo_inplace(x: torch.Tensor) -> None: x.add_(1) # NOTE: only returning None is supported; the custom op cannot return `out`. @torch.library.custom_op("mylib::bar", mutates_args={"out"}) def bar_out(x: torch.Tensor, out: torch.Tensor) -> None: out.copy_(x + 2) @register_fake("mylib::bar") def bar_out_fake(x: torch.Tensor, out: torch.Tensor) -> None: return None @torch.library.custom_op("mylib::foobar_out", mutates_args={"out"}) def foobar_out(x: torch.Tensor, out: torch.Tensor) -> torch.Tensor: x.add_(1) out.copy_(x + 2) return out def pattern(x, out): foo_inplace(x) bar_out(x, out) return out def replacement(x, out): return foobar_out(x, out) patterns = PatternMatcherPass() register_replacement( search_fn=pattern, replace_fn=replacement, example_inputs=(torch.randn(3), torch.randn(3)), trace_fn=fwd_only, pass_dicts=patterns, ) # user-function @torch.compile(fullgraph=True) def f(x): x = x.clone() out = torch.empty_like(x) foo_inplace(x) bar_out(x, out) return out x = torch.randn(3, device="cpu") f(x) ``` which currently fails with: ```python Traceback (most recent call last): File "/home/local/pytorch/test/test_mutable_op_pattern_match.py", line 39, in <module> register_replacement( File "/home/local/miniconda3/lib/python3.12/site-packages/torch/_inductor/pattern_matcher.py", line 1563, in register_replacement pattern.register(pass_dicts) File "/home/local/miniconda3/lib/python3.12/site-packages/torch/_inductor/pattern_matcher.py", line 1079, in register assert hasattr(self.pattern, "fns") AssertionError ``` More generally, there are 4 use cases we want to support: - [ ] `x`: non-view, `out`: non-view - [ ] `x`: view, `out`: non-view - [ ] `x`: non-view, `out`: view - [ ] `x`: view, `out`: view For view cases, we'd like to explore the `opaque_view` idea. cc. @zou3519 @eellison ### Alternatives _No response_ ### Additional context _No response_ cc @bdhirsh @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @zou3519
true
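The rewrite this issue asks for — matching a `foo_inplace` followed by a `bar_out` on the same mutated tensor and fusing them into one `foobar_out` — can be sketched on a plain list of recorded ops. This toy matcher is a stand-in under assumed semantics, not Inductor's real pattern-matcher API:

```python
def fuse_mutable_ops(ops):
    """Replace adjacent ('foo_inplace', x) + ('bar_out', x, out) pairs
    with a single fused ('foobar_out', x, out) node."""
    fused, i = [], 0
    while i < len(ops):
        if (
            i + 1 < len(ops)
            and ops[i][0] == "foo_inplace"
            and ops[i + 1][0] == "bar_out"
            and ops[i][1] == ops[i + 1][1]  # same mutated input tensor
        ):
            fused.append(("foobar_out", ops[i][1], ops[i + 1][2]))
            i += 2
        else:
            fused.append(ops[i])
            i += 1
    return fused


trace = [("foo_inplace", "x"), ("bar_out", "x", "out")]
print(fuse_mutable_ops(trace))  # [('foobar_out', 'x', 'out')]
```

The hard part in the real system, which this sketch sidesteps, is that mutation is hidden behind functionalization when the graph is traced, which is why naive `register_replacement` on mutable custom ops hits the assertion above.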
3,029,077,321
Add two missing JIT tests to CMake
swolchok
closed
[ "oncall: jit", "Merged", "ciflow/trunk", "topic: not user facing" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152440 Looks like I forgot to add these. cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
true
3,029,071,561
Newly added lint-urls jobs are very flaky
malfet
open
[ "module: ci", "triaged", "module: flaky-tests", "module: regression" ]
10
CONTRIBUTOR
### 🐛 Describe the bug Maybe it's just me, but the newly added lint jobs keep intermittently failing on PRs/trunk, for example https://github.com/pytorch/pytorch/actions/runs/14737078933/job/41365789268 ### Versions CI cc @seemethere @pytorch/pytorch-dev-infra @clee2000
true