| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,791,408,646 | Fixed typo in docs/source/scripts/build_activation_images.py | Sai-Pra | closed | [
"open source"
] | 2 | CONTRIBUTOR | Fixed typo, changed programmaticaly to programmatically | true |
2,791,397,569 | [export] check non-negative modulus, avoid unnecessary congruences, in export solver | pianpwk | open | [
"fb-exported",
"Stale",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Fixes #144852
Summary: if we find x = a * k + b, make sure the modulus a is non-negative. Also, don't introduce unnecessary congruences on arbitrary floordiv expressions. The test case checks that we don't introduce a divisibility guard.
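The normalization described in the summary can be illustrated with a small, hypothetical helper (invented for illustration; not the actual export-solver code): a relation x = a * k + b is stored as a congruence x ≡ b (mod m), and the modulus is forced to be positive with the residue reduced into [0, m).

```python
def normalize_congruence(a: int, b: int) -> tuple[int, int]:
    """Represent x = a*k + b as x ≡ r (mod m) with m > 0 and 0 <= r < m.

    Hypothetical sketch of the non-negative-modulus rule; Python's %
    already returns a result with the sign of the (positive) divisor.
    """
    m = abs(a)
    if m == 0:
        raise ValueError("modulus must be nonzero")
    return m, b % m
```

For example, x = -6*k + 7 normalizes to x ≡ 1 (mod 6) rather than keeping the negative coefficient.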
Test Plan: test_export
Differential Revision: D68245292
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,791,393,947 | [MPSInductor] Fix codegen regression | malfet | closed | [
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144924
* #144917
Caused by https://github.com/pytorch/pytorch/pull/144649
Do not try to insert anything into the header if the wrapper is not ready yet.
Fixes `test_sort_mps`
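The guard described above can be sketched roughly as follows (all names here are invented for illustration and are not the actual inductor code): header writes are deferred until the wrapper buffer exists, then flushed.

```python
class CodegenSketch:
    """Toy model of 'don't write to the header before the wrapper is ready'."""

    def __init__(self):
        self.wrapper = None        # wrapper code buffer, created later
        self.pending_header = []   # lines deferred until the wrapper exists

    def write_header(self, line: str) -> None:
        if self.wrapper is None:
            # Wrapper not ready yet: buffer instead of touching a missing object.
            self.pending_header.append(line)
        else:
            self.wrapper.append(line)

    def init_wrapper(self) -> None:
        self.wrapper = []
        # Flush any header lines that arrived before the wrapper was created.
        self.wrapper.extend(self.pending_header)
        self.pending_header.clear()
```

This is only a sketch of the control flow the fix implies, under the assumption that the regression came from writing to a not-yet-initialized wrapper.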
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,791,393,383 | DISABLED test_aoti_eager_dtype_device_layout_dynamic_shapes_cuda (__main__.DynamicShapesCodegenGPUTests) | pytorch-bot[bot] | closed | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 8 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aoti_eager_dtype_device_layout_dynamic_shapes_cuda&suite=DynamicShapesCodegenGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35679666011).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 9 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aoti_eager_dtype_device_layout_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
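The grep step above can be demonstrated on a tiny stand-in log file (the file name and contents here are hypothetical; real logs come from the workflow page linked in each issue):

```shell
# Create a small sample log, then search it the way the instructions describe:
# -n prints line numbers, -A 1 shows one line of trailing context.
printf 'collecting tests ...\nFAILED test_example_cuda - RuntimeError\n  traceback line\n' > sample-log.txt
grep -n -A 1 "test_example_cuda" sample-log.txt
```

With a real log, replace `test_example_cuda` with the disabled test's name and increase `-A` to capture the full traceback.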
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 925, in test_aoti_eager_dtype_device_layout
res = torch.tril_indices(
RuntimeError: create_func_( &container_handle_, num_models, device_str.c_str(), cubin_dir.empty() ? nullptr : cubin_dir.c_str()) API call failed at /var/lib/jenkins/workspace/torch/csrc/inductor/aoti_runner/model_container_runner.cpp, line 81
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_codegen_dynamic_shapes.py DynamicShapesCodegenGPUTests.test_aoti_eager_dtype_device_layout_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_codegen_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,791,393,381 | DISABLED test_aoti_eager_support_str_dynamic_shapes_cuda (__main__.DynamicShapesGPUTests) | pytorch-bot[bot] | closed | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 5 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aoti_eager_support_str_dynamic_shapes_cuda&suite=DynamicShapesGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35679665765).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 7 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aoti_eager_support_str_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 1023, in test_aoti_eager_support_str
res_value = getattr(torch.ops.aten, op_name)(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 1158, in __call__
return self._op(*args, **(kwargs or {}))
RuntimeError: create_func_( &container_handle_, num_models, device_str.c_str(), cubin_dir.empty() ? nullptr : cubin_dir.c_str()) API call failed at /var/lib/jenkins/workspace/torch/csrc/inductor/aoti_runner/model_container_runner.cpp, line 81
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesGPUTests.test_aoti_eager_support_str_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,791,393,380 | DISABLED test_repeat_graph_capture_cublas_workspace_memory (__main__.TestCuda) | pytorch-bot[bot] | open | [
"module: cuda",
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped"
] | 5 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_repeat_graph_capture_cublas_workspace_memory&suite=TestCuda&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35670628070).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 6 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_repeat_graph_capture_cublas_workspace_memory`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/test_cuda.py", line 2048, in test_repeat_graph_capture_cublas_workspace_memory
self.assertFalse(used_gb_before + 0.1 < used_gb_after)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 681, in assertFalse
raise self.failureException(msg)
AssertionError: True is not false
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/test_cuda.py TestCuda.test_repeat_graph_capture_cublas_workspace_memory
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_cuda.py`
cc @ptrblck @msaroufim @eqy @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr | true |
2,791,392,704 | DISABLED test_returning_symint (__main__.TestPythonRegistration) | pytorch-bot[bot] | closed | [
"triaged",
"module: flaky-tests",
"skipped",
"module: __torch_dispatch__"
] | 3 | NONE | Platforms: asan, linux, rocm, slow, win, windows, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_returning_symint&suite=TestPythonRegistration&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35670367109).
Over the past 3 hours, it has been determined flaky in 138 workflow(s) with 276 failures and 138 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_returning_symint`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_python_dispatch.py", line 574, in test_returning_symint
def test_returning_symint(self) -> None:
File "/var/lib/jenkins/workspace/test/test_python_dispatch.py", line 576, in torch_dynamo_resume_in_test_returning_symint_at_575
fake_tensor_mode = FakeTensorMode(shape_env=shape_env)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 725, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1403, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1076, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 986, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 683, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2376, in CALL
self._call(inst)
~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2370, in _call
self.call_function(fn, args, kwargs)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 921, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/torch.py", line 955, in call_function
tensor_variable = wrap_fx_proxy(
tx=tx,
...<4 lines>...
),
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 2162, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 2228, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
target_cls, tx, proxy, example_value, subclass_type, **options
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 2324, in _wrap_fx_proxy
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/utils.py", line 3023, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/utils.py", line 2958, in get_fake_value
ret_val = wrap_fake_exception(
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/utils.py", line 2504, in wrap_fake_exception
return fn()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/utils.py", line 2959, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/utils.py", line 3091, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
e.__traceback__
) from e
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/utils.py", line 3073, in run_node
return node.target(*args, **kwargs)
~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_stats.py", line 26, in wrapper
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_subclasses/fake_tensor.py", line 1284, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_subclasses/fake_tensor.py", line 1825, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_subclasses/fake_tensor.py", line 1386, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_subclasses/fake_tensor.py", line 2363, in _dispatch_impl
op_impl_out = op_impl(self, func, *args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_subclasses/fake_impls.py", line 188, in constructors
with in_kernel_invocation_manager(fake_mode):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/contextlib.py", line 141, in __enter__
return next(self.gen)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_subclasses/fake_tensor.py", line 517, in in_kernel_invocation_manager
assert meta_in_tls == prev_in_kernel, f"{meta_in_tls}, {prev_in_kernel}"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in method rand of type object at 0x7f450c3a4610>(*(2, 3), **{}):
True, False
from user code:
File "/var/lib/jenkins/workspace/test/test_python_dispatch.py", line 578, in torch_dynamo_resume_in_test_returning_symint_at_576
ft = fake_tensor_mode.from_tensor(torch.rand(2, 3))
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_python_dispatch.py TestPythonRegistration.test_returning_symint
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_python_dispatch.py`
cc @clee2000 @wdvr @Chillee @ezyang @zou3519 @albanD @samdow | true |
2,791,392,684 | DISABLED test_aoti_eager_cache_hit_dynamic_shapes_cuda (__main__.DynamicShapesGPUTests) | pytorch-bot[bot] | closed | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 6 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aoti_eager_cache_hit_dynamic_shapes_cuda&suite=DynamicShapesGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35679665075).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aoti_eager_cache_hit_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 1071, in test_aoti_eager_cache_hit
res_value = getattr(torch.ops.aten, op_name)(input_tensor)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 1158, in __call__
return self._op(*args, **(kwargs or {}))
RuntimeError: aot_compile_function.ptr() != nullptr && aot_compile_function.ptr() != Py_None INTERNAL ASSERT FAILED at "/var/lib/jenkins/workspace/torch/csrc/inductor/aoti_eager/kernel_holder.cpp":507, please report a bug to PyTorch. Failed to import - torch._inductor.aoti_eager.aoti_compile_with_persistent_cache
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesGPUTests.test_aoti_eager_cache_hit_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,791,392,630 | DISABLED test_distributed_checkpoint_state_dict_type1_cuda (__main__.TestDistributedCheckpointCUDA) | pytorch-bot[bot] | open | [
"oncall: distributed",
"module: flaky-tests",
"skipped"
] | 2 | NONE | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_distributed_checkpoint_state_dict_type1_cuda&suite=TestDistributedCheckpointCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35670700293).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_distributed_checkpoint_state_dict_type1_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 597, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 837, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 886, in _check_return_codes
raise RuntimeError(error)
RuntimeError: Process 1 exited with error code 10 and exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 726, in run_test
getattr(self, test_name)()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 599, in wrapper
fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3128, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3128, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 485, in instantiated_test
raise rte
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 465, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 199, in wrapper
return func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/distributed/checkpoint_utils.py", line 44, in wrapper
func(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/distributed/fsdp/test_distributed_checkpoint.py", line 67, in test_distributed_checkpoint
state_dict = model.state_dict()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2204, in state_dict
module.state_dict(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2204, in state_dict
module.state_dict(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2204, in state_dict
module.state_dict(
[Previous line repeated 1 more time]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2210, in state_dict
hook_result = hook(self, destination, prefix, local_metadata)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 714, in _post_state_dict_hook
processed_state_dict = _post_state_dict_hook_fn[fsdp_state._state_dict_type](
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 432, in _local_post_state_dict_hook
sharded_tensor = init_from_local_shards(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/_shard/sharded_tensor/__init__.py", line 407, in init_from_local_shards
return ShardedTensor._init_from_local_shards(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/_shard/sharded_tensor/api.py", line 753, in _init_from_local_shards
dist.all_gather_object(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 3037, in all_gather_object
input_tensor.resize_(max_object_size)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate more than 1EB memory.
Exception raised from allocate at /var/lib/jenkins/workspace/c10/cuda/CUDACachingAllocator.cpp:3623 (most recent call first):
C++ CapturedTraceback:
#4 std::_Function_handler<std::shared_ptr<c10::LazyValue<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > const> (), c10::SetStackTraceFetcher(std::function<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > ()>)::{lambda()#1}>::_M_invoke(std::_Any_data const&) from Logging.cpp:0
#5 c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) from ??:0
#6 c10::cuda::CUDACachingAllocator::Native::NativeCachingAllocator::allocate(unsigned long) from :0
#7 at::native::resize_bytes_cuda(c10::StorageImpl*, unsigned long) from ??:0
#8 at::native::resize_cuda_(at::Tensor const&, c10::ArrayRef<long>, std::optional<c10::MemoryFormat>) from ??:0
#9 at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, std::optional<c10::MemoryFormat>) from ??:0
#10 torch::ADInplaceOrView::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, std::optional<c10::MemoryFormat>) from VariableTypeManual.cpp:0
#11 at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, std::optional<c10::MemoryFormat>) from ??:0
#12 torch::autograd::VariableType::(anonymous namespace)::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, std::optional<c10::MemoryFormat>) from VariableTypeManual.cpp:0
#13 at::_ops::resize_::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>, std::optional<c10::MemoryFormat>) from ??:0
#14 torch::autograd::THPVariable_resize_(_object*, _object*, _object*) from python_variable_methods.cpp:0
#15 method_vectorcall_VARARGS_KEYWORDS from :0
#16 _PyEval_EvalFrameDefault from ??:0
#17 _PyFunction_Vectorcall from ??:0
#18 PyObject_Call from ??:0
#19 _PyEval_EvalFrameDefault from ??:0
#20 _PyFunction_Vectorcall from ??:0
#21 _PyEval_EvalFrameDefault from ??:0
#22 method_vectorcall from :0
#23 PyObject_Call from ??:0
#24 _PyEval_EvalFrameDefault from ??:0
#25 _PyFunction_Vectorcall from ??:0
#26 _PyEval_EvalFrameDefault from ??:0
#27 _PyFunction_Vectorcall from ??:0
#28 _PyEval_EvalFrameDefault from ??:0
#29 _PyFunction_Vectorcall from ??:0
#30 _PyEval_EvalFrameDefault from ??:0
#31 _PyFunction_Vectorcall from ??:0
#32 _PyEval_EvalFrameDefault from ??:0
#33 method_vectorcall from :0
#34 _PyEval_EvalFrameDefault from ??:0
#35 method_vectorcall from :0
#36 _PyEval_EvalFrameDefault from ??:0
#37 method_vectorcall from :0
#38 _PyEval_EvalFrameDefault from ??:0
#39 method_vectorcall from :0
#40 _PyEval_EvalFrameDefault from ??:0
#41 method_vectorcall from :0
#42 _PyEval_EvalFrameDefault from ??:0
#43 _PyFunction_Vectorcall from ??:0
#44 PyObject_Call from ??:0
#45 _PyEval_EvalFrameDefault from ??:0
#46 _PyFunction_Vectorcall from ??:0
#47 PyObject_Call from ??:0
#48 _PyEval_EvalFrameDefault from ??:0
#49 _PyFunction_Vectorcall from ??:0
#50 PyObject_Call from ??:0
#51 _PyEval_EvalFrameDefault from ??:0
#52 method_vectorcall from :0
#53 _PyEval_EvalFrameDefault from ??:0
#54 method_vectorcall from :0
#55 _PyEval_EvalFrameDefault from ??:0
#56 method_vectorcall from :0
#57 _PyEval_EvalFrameDefault from ??:0
#58 method_vectorcall from :0
#59 _PyEval_EvalFrameDefault from ??:0
#60 _PyFunction_Vectorcall from ??:0
#61 _PyEval_EvalFrameDefault from ??:0
#62 method_vectorcall from :0
#63 PyObject_Call from ??:0
#64 _PyEval_EvalFrameDefault from ??:0
#65 _PyFunction_Vectorcall from ??:0
#66 _PyEval_EvalFrameDefault from ??:0
#67 _PyFunction_Vectorcall from ??:0
#68 _PyEval_EvalFrameDefault from ??:0
#69 _PyFunction_Vectorcall from ??:0
#70 _PyEval_EvalFrameDefault from ??:0
#71 _PyFunction_Vectorcall from ??:0
#72 _PyEval_EvalFrameDefault from ??:0
#73 _PyEval_Vector from :0
#74 PyEval_EvalCode from ??:0
#75 run_eval_code_obj from :0
#76 run_mod from :0
#77 PyRun_StringFlags.localalias from :0
#78 PyRun_SimpleStringFlags.localalias from :0
#79 Py_RunMain.localalias from :0
#80 Py_BytesMain from ??:0
#81 __libc_start_main from ??:0
#82 _start from ??:0
To execute this test, run the following from the base repo dir:
python test/distributed/fsdp/test_distributed_checkpoint.py TestDistributedCheckpointCUDA.test_distributed_checkpoint_state_dict_type1_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `distributed/fsdp/test_distributed_checkpoint.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000 @wdvr | true |
2,791,353,487 | [MPSInductor] Properly convert index | malfet | closed | [
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144924
* __->__ #144917
By calling `self.index_to_str` from `load`, `store`, and `check_bounds` in order to properly handle sizevar variable renames
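The pattern described here can be sketched with a toy class (names invented for illustration; not the actual MPSInductor code): every index expression is routed through one helper so that sizevar renames apply uniformly in loads, stores, and bounds checks.

```python
class KernelSketch:
    """Toy codegen: all index strings pass through index_to_str."""

    def __init__(self, renames: dict[str, str]):
        self.renames = renames  # e.g. sizevar s0 renamed to ks0

    def index_to_str(self, index: str) -> str:
        # Apply every pending rename to the index expression.
        for old, new in self.renames.items():
            index = index.replace(old, new)
        return index

    def load(self, buf: str, index: str) -> str:
        return f"{buf}[{self.index_to_str(index)}]"

    def store(self, buf: str, index: str, value: str) -> str:
        return f"{buf}[{self.index_to_str(index)}] = {value}"
```

The regression class being fixed is the opposite pattern: emitting the raw index in one code path, which silently misses renames that the shared helper would have applied.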
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,791,351,573 | [c10d][fr] Fix the bug when we still mark mismatch when there are match case | fduwjj | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144916
When we introduced partial match, we accidentally marked the full-match case as a mismatch as well. This is wrong, and this PR fixes it.
| true |
2,791,338,736 | Update clickhouse-connect to 0.8.14 | clee2000 | closed | [
"Merged",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Corresponds to https://github.com/pytorch/test-infra/pull/6177
I only tested the slow test script but I also did testing on the new version with scripts in https://github.com/pytorch/test-infra/pull/6177 | true |
2,791,305,344 | Prevent _legacy_load with weights_only=True | mikaylagawarecki | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: python_frontend",
"topic: improvements",
"ciflow/inductor",
"keep-going",
"ci-no-td"
] | 15 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144914
| true |
2,791,292,409 | UserWarning: cuDNN SDPA backward got grad_output.strides() != output.strides() | vecorro | open | [
"module: cudnn",
"triaged",
"module: sdpa"
] | 0 | NONE | ### 🐛 Describe the bug
I'm getting this warning when using Trainer and FSDP to pre-train Llama3.1-8b.
`UserWarning: cuDNN SDPA backward got grad_output.strides() != output.strides()`
This might introduce overhead in the training process.
I have tried to disable the backend with:
```
import os
os.environ["TORCH_CUDNN_SDPA_ENABLED"] = "0"
from torch.nn.attention import SDPBackend
torch.backends.cuda.sdp_kernel = SDPBackend.FLASH_ATTENTION
```
However, the HF Trainer ignores these settings and continues using SDPA.
Here is the full script:
```
import datasets
import torch
import time
from torch.utils.data import DataLoader, Dataset
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
TrainingArguments,
TrainerCallback,
Trainer,
set_seed,
DataCollatorWithPadding,
)
from transformers.integrations import TensorBoardCallback
import GPUtil, psutil
from torch.utils.tensorboard import SummaryWriter
# Explicitly disable cuDNN SDPA to avoid stride mismatch warnings
import os
os.environ["TORCH_CUDNN_SDPA_ENABLED"] = "0"
# Set Flash Attention as the preferred backend
from torch.nn.attention import SDPBackend
torch.backends.cuda.sdp_kernel = SDPBackend.FLASH_ATTENTION
# Model and dataset configuration
LLM_MODEL = "meta-llama/Meta-Llama-3.1-8B"
DATASET_PATH = "../data-prep/data_files/llama31_tokenized_docs_full_dataset.parquet"
OUTPUT_DIR = "./llama3_8b_ddp_pretraining"
set_seed(42)
# Load model and tokenizer
model_name = LLM_MODEL
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
model.config.use_cache = False
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Ensure pad token is set for the tokenizer
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Custom dataset class with contiguous tensors
class CustomDataset(Dataset):
    def __init__(self, dataset_name, tokenizer, split="train", max_tokens=None, max_length=512):
        self.dataset = datasets.load_dataset(
            "parquet",
            data_files=dataset_name,
            split=split
        )
        if max_tokens is not None:
            self.dataset = self.dataset.filter(lambda x: x["num_tokens"] <= max_tokens)
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        input_ids = self.dataset[idx]["input_ids"]
        if len(input_ids) > self.max_length:
            input_ids = input_ids[:self.max_length]
        attention_mask = [1] * len(input_ids)
        padding_length = self.max_length - len(input_ids)
        if padding_length > 0:
            input_ids += [self.tokenizer.pad_token_id] * padding_length
            attention_mask += [0] * padding_length
        # Ensure tensors are contiguous
        input_ids = torch.tensor(input_ids, dtype=torch.long).contiguous()
        attention_mask = torch.tensor(attention_mask, dtype=torch.long).contiguous()
        labels = input_ids.clone().contiguous()
        return {"input_ids": input_ids, "attention_mask": attention_mask, "labels": labels}

# Initialize dataset and data collator
train_dataset = CustomDataset(
    dataset_name=DATASET_PATH,
    tokenizer=tokenizer,
    split="train",
    max_tokens=512,
    max_length=512,
)
print(f"Training dataset size is: {len(train_dataset.dataset)} samples")
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

# Training arguments
training_args = TrainingArguments(
    output_dir=OUTPUT_DIR,
    optim="adamw_torch",
    num_train_epochs=1,
    per_device_train_batch_size=64,
    gradient_accumulation_steps=8,
    learning_rate=3e-5,
    weight_decay=0.01,
    warmup_steps=10,
    lr_scheduler_type="cosine",
    gradient_checkpointing=True,
    dataloader_num_workers=8,
    bf16=True,
    logging_steps=10,
    report_to="tensorboard",
    save_strategy="epoch",
    save_total_limit=2,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=None,
    data_collator=data_collator,
)

trainer.train()
```
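As a side note, the truncate-and-pad logic in `__getitem__` can be sanity-checked in isolation with a pure-Python sketch (`pad_id=0` is an arbitrary stand-in for the tokenizer's pad token id):

```python
def truncate_and_pad(input_ids, max_length, pad_id):
    # Mirrors the list manipulation in CustomDataset.__getitem__, minus the tensors.
    input_ids = list(input_ids)[:max_length]   # truncate long sequences
    attention_mask = [1] * len(input_ids)
    pad = max_length - len(input_ids)
    input_ids += [pad_id] * pad                # right-pad short sequences
    attention_mask += [0] * pad
    return input_ids, attention_mask

ids, mask = truncate_and_pad([5, 6, 7], max_length=5, pad_id=0)
print(ids, mask)  # [5, 6, 7, 0, 0] [1, 1, 1, 0, 0]
```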
### Versions
```
NVIDIA (PyTorch container) Release 24.12 (build 126674149)
Using CUDA 12.6 driver version 560.35.05 with kernel driver version 550.127.08
pytorch-triton 3.0.0+72734f086
torch 2.6.0a0+df5bbc09d1.nv24.12
torch-tb-profiler 0.4.3
torch_tensorrt 2.6.0a0
torchprofile 0.0.4
torchvision 0.20.0a0
transformers 4.48.0
accelerate 1.2.1
```
cc @csarofeen @ptrblck @xwang233 @eqy | true |
2,791,283,544 | DISABLED test_flex_attention (__main__.TestCompiledAutograd) | jeffdaily | closed | [
"module: rocm",
"triaged",
"skipped",
"oncall: pt2",
"module: higher order operators",
"module: compiled autograd",
"module: pt2-dispatcher",
"module: flex attention"
] | 1 | COLLABORATOR | Platforms: rocm
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22inductor%2Ftest_compiled_autograd.py%3A%3ATestCompiledAutograd%3A%3Atest_flex_attention%22%5D)).
Caused by this PR: https://github.com/pytorch/pytorch/pull/144533
The MI200 CI runners were passing all inductor UTs prior to merge. Post-merge on MI300 we see this failure. Hopefully just the one test.
cc @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @chauhang @penguinwu @zou3519 @ydwu4 @xmfan @yf225 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng | true |
2,791,263,757 | [Submodule] Upgrade to Cutlass 3.6 part deux | drisspg | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: sparse"
] | 7 | CONTRIBUTOR | # Summary
Take 2 of [D67866269](https://www.internalfb.com/diff/D67866269)
Main change is that we identified and fixed the FA2 regression. More details can be found here https://github.com/pytorch/pytorch/issues/144729 and have landed that before this here: [D68194635](https://www.internalfb.com/diff/D68194635)
Differential Revision: D68194470
| true |
2,791,255,925 | cpp_wrapper: Properly handle scalars when input to tensor arguments | benjaminglass1 | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145095
* __->__ #144910
Additionally, reduce code duplication in `cpp_wrapper_cpu_array_ref.py`.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,791,237,403 | Update .gitignore | imranMkhan | closed | [
"topic: not user facing"
] | 2 | NONE | Fixes #ISSUE_NUMBER
| true |
2,791,234,339 | A more flexible API for torch.compile fullgraph=True | xmfan | open | [
"triaged",
"oncall: pt2",
"module: dynamo"
] | 7 | MEMBER | ### 🚀 The feature, motivation and pitch
We're encouraging people to use fullgraph=True to better identify graph breaks. At the same time, we're empowering users to use escape hatches like torch._dynamo.disable. These two work against each other, and the best workaround I can think of is to ask users to stop using fullgraph and to use some other tool to inspect their graph breaks, e.g. tlparse, graph counts, or TORCH_LOGS.
We should consider special casing torch._dynamo.disable so that it does not raise errors with fullgraph=True. This could be controlled by a flag, but I think it can be the default enablement experience.
UPDATE: new proposal: what if we had another UX for specifying fullgraph=True? One option could be a context manager:
```python
@torch.compile
def fn():
    with torch._dynamo.error_on_graph_breaks():
        torch._dynamo.graph_break()  # loud error
    torch._dynamo.graph_break()  # silent error
```
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,791,073,206 | `torch.export` for Yolo Pose fails | agunapal | closed | [
"oncall: pt2",
"module: dynamic shapes",
"module: dynamo",
"oncall: export"
] | 1 | CONTRIBUTOR | ### 🐛 Describe the bug
I get an error when I try to export the Yolo-Pose model with `strict=True`
The error goes away with `strict=False`
`pip install ultralytics`
```
from ultralytics import YOLO
import torch
from torch.export import export
pose_model = YOLO("yolo11n-pose.pt") # Load model
pose_model.model.eval()
inputs = torch.rand((1,3,640,640))
exported_program: torch.export.ExportedProgram= export(pose_model.model, args=(inputs,))
```
Error Logs
```
Traceback (most recent call last):
File "/home/agunapal/export_games/pose/pose_export.py", line 7, in <module>
exported_program: torch.export.ExportedProgram= export(pose_model.model, args=(inputs,))
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/__init__.py", line 368, in export
return _export(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 1031, in wrapper
raise e
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 1004, in wrapper
ep = fn(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/exported_program.py", line 122, in wrapper
return fn(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 1957, in _export
export_artifact = export_func( # type: ignore[operator]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 1251, in _strict_export
return _strict_export_lower_to_aten_ir(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 1279, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 660, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 1539, in inner
result_traced = opt_f(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1740, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 556, in _fn
return fn(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1740, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1395, in __call__
return self._torchdynamo_orig_callable(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 545, in __call__
return _compile(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1027, in _compile
raise InternalTorchDynamoError(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 977, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 706, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 741, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1348, in transform_code_object
transformations(instructions, code_options)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 229, in _fn
return fn(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 658, in transform
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2912, in run
super().run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1816, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 410, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 349, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 125, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 410, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 349, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 125, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 442, in call_function
return tx.inline_user_function_return(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1816, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 410, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 349, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 125, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 349, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 125, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 410, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 349, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 125, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 349, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 125, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1828, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/torch.py", line 953, in call_function
tensor_variable = wrap_fx_proxy(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2108, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2174, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2272, in _wrap_fx_proxy
return handle_traced_output(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2291, in handle_traced_output
set_example_value(proxy.node, example_value)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1640, in set_example_value
if symbol_to_path := torch.fx.experimental.symbolic_shapes.compute_unbacked_bindings(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 999, in compute_unbacked_bindings
raise PendingUnbackedSymbolNotFound(
torch._dynamo.exc.InternalTorchDynamoError: PendingUnbackedSymbolNotFound: Pending unbacked symbols {zuf0} not in returned outputs FakeTensor(..., size=(6400, 1)) ((1, 1), 0).
Did you accidentally call new_dynamic_size() or item() more times than you needed to in your fake implementation?
For more help, see https://docs.google.com/document/d/1RWrH-3wLEpzR9kCS6gGBNen_-Fs-8PVbWWFE5AcgeWE/edit
from user code:
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/ultralytics/nn/tasks.py", line 112, in forward
return self.predict(x, *args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/ultralytics/nn/tasks.py", line 130, in predict
return self._predict_once(x, profile, visualize, embed)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/ultralytics/nn/tasks.py", line 151, in _predict_once
x = m(x) # run
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/ultralytics/nn/modules/head.py", line 240, in forward
x = Detect.forward(self, x)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/ultralytics/nn/modules/head.py", line 72, in forward
y = self._inference(x)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/ultralytics/nn/modules/head.py", line 105, in _inference
self.anchors, self.strides = (x.transpose(0, 1) for x in make_anchors(x, self.stride, 0.5))
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/ultralytics/utils/tal.py", line 314, in make_anchors
stride_tensor.append(torch.full((h * w, 1), stride, dtype=dtype, device=device))
```
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0.dev20241112+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.12.0-0_fbk16_zion_7661_geb00762ce6d2-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA PG509-210
GPU 1: NVIDIA PG509-210
GPU 2: NVIDIA PG509-210
GPU 3: NVIDIA PG509-210
GPU 4: NVIDIA PG509-210
GPU 5: NVIDIA PG509-210
GPU 6: NVIDIA PG509-210
GPU 7: NVIDIA PG509-210
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.8.0
/usr/lib64/libcudnn.so.9.1.0
/usr/lib64/libcudnn_adv.so.9.1.0
/usr/lib64/libcudnn_adv_infer.so.8.8.0
/usr/lib64/libcudnn_adv_train.so.8.8.0
/usr/lib64/libcudnn_cnn.so.9.1.0
/usr/lib64/libcudnn_cnn_infer.so.8.8.0
/usr/lib64/libcudnn_cnn_train.so.8.8.0
/usr/lib64/libcudnn_engines_precompiled.so.9.1.0
/usr/lib64/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib64/libcudnn_graph.so.9.1.0
/usr/lib64/libcudnn_heuristic.so.9.1.0
/usr/lib64/libcudnn_ops.so.9.1.0
/usr/lib64/libcudnn_ops_infer.so.8.8.0
/usr/lib64/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 4
Stepping: 11
Frequency boost: enabled
CPU(s) scaling MHz: 100%
CPU max MHz: 1801.0000
CPU min MHz: 800.0000
BogoMIPS: 3600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 96 MiB (96 instances)
L3 cache: 132 MiB (4 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23,96-119
NUMA node1 CPU(s): 24-47,120-143
NUMA node2 CPU(s): 48-71,144-167
NUMA node3 CPU(s): 72-95,168-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241112+cu121
[pip3] torchaudio==2.5.0.dev20241112+cu121
[pip3] torchvision==0.20.0.dev20241112+cu121
[conda] numpy 2.0.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0.dev20241112+cu121 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241112+cu121 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241112+cu121 pypi_0 pypi
```
cc @chauhang @penguinwu @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,791,045,826 | torch.export fails for whisper tiny | agunapal | closed | [
"oncall: pt2",
"module: dynamo",
"oncall: export"
] | 5 | CONTRIBUTOR | ### 🐛 Describe the bug
Trying to export the Whisper model. Getting an error when I run with `strict=True`.
The model exports when I use `strict=False`.
Is this a valid Dynamo-related issue that is addressed by non-strict mode?
```
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from datasets import load_dataset
# load model and processor
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
input_features = torch.randn(1, 80, 3000)
attention_mask = torch.ones(1, 3000)
decoder_input_ids = torch.tensor([[1, 1, 1, 1]]) * model.config.decoder_start_token_id
model.eval()
exported_program: torch.export.ExportedProgram = torch.export.export(model, args=(input_features, attention_mask, decoder_input_ids,), strict=True)
```
Error logs
```
File "/home/agunapal/export_games/asr_1.py", line 16, in <module>
exported_program: torch.export.ExportedProgram= torch.export.export(model, args=(input_features, attention_mask, decoder_input_ids,), strict=True)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/__init__.py", line 368, in export
return _export(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 1031, in wrapper
raise e
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 1004, in wrapper
ep = fn(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/exported_program.py", line 122, in wrapper
return fn(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 1957, in _export
export_artifact = export_func( # type: ignore[operator]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 1251, in _strict_export
return _strict_export_lower_to_aten_ir(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 1279, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 660, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 1539, in inner
result_traced = opt_f(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1740, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 556, in _fn
return fn(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1740, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1395, in __call__
return self._torchdynamo_orig_callable(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 545, in __call__
return _compile(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1027, in _compile
raise InternalTorchDynamoError(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 977, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 706, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 741, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1348, in transform_code_object
transformations(instructions, code_options)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 229, in _fn
return fn(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 658, in transform
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2912, in run
super().run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1816, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 442, in call_function
return tx.inline_user_function_return(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1816, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 410, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 349, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 125, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1828, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 442, in call_function
return tx.inline_user_function_return(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1816, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 410, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 349, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 125, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1828, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 442, in call_function
return tx.inline_user_function_return(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1816, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 410, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 349, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 125, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1828, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 442, in call_function
return tx.inline_user_function_return(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1816, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 410, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 349, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 125, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 547, in inner
if truth_fn(mod):
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/transformers/cache_utils.py", line 406, in __len__
return len(self.key_cache)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1935, in __getattr__
raise AttributeError(
torch._dynamo.exc.InternalTorchDynamoError: AttributeError: 'DynamicCache' object has no attribute 'key_cache'
from user code:
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py", line 1767, in forward
outputs = self.model(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py", line 1634, in forward
decoder_outputs = self.decoder(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py", line 1324, in forward
layer_outputs = decoder_layer(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py", line 732, in forward
hidden_states, cross_attn_weights, cross_attn_present_key_value = self.encoder_attn(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py", line 520, in forward
if is_cross_attention and past_key_value and is_updated:
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
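The failure mode at the bottom of the trace can be illustrated in plain Python (no torch or transformers needed; the `Cache` class below is a hypothetical stand-in for `DynamicCache`): Dynamo evaluates the truthiness of `past_key_value` via `__len__`, `DynamicCache.__len__` reads `self.key_cache`, and `nn.Module.__getattr__` raises once that attribute is missing, so the boolean check surfaces as an AttributeError instead of returning False.

```python
class Cache:
    """Stand-in sketch for DynamicCache under nn.Module attribute lookup."""

    def __getattr__(self, name):
        # Mimics nn.Module.__getattr__, which raises for unknown attributes.
        raise AttributeError(f"'Cache' object has no attribute '{name}'")

    def __len__(self):
        # Mimics DynamicCache.__len__; key_cache was never set here.
        return len(self.key_cache)


cache = Cache()
try:
    bool(cache)  # same shape as `if ... and past_key_value and is_updated:`
except AttributeError as e:
    print(e)  # -> 'Cache' object has no attribute 'key_cache'
```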
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0.dev20241112+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.12.0-0_fbk16_zion_7661_geb00762ce6d2-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA PG509-210
GPU 1: NVIDIA PG509-210
GPU 2: NVIDIA PG509-210
GPU 3: NVIDIA PG509-210
GPU 4: NVIDIA PG509-210
GPU 5: NVIDIA PG509-210
GPU 6: NVIDIA PG509-210
GPU 7: NVIDIA PG509-210
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.8.0
/usr/lib64/libcudnn.so.9.1.0
/usr/lib64/libcudnn_adv.so.9.1.0
/usr/lib64/libcudnn_adv_infer.so.8.8.0
/usr/lib64/libcudnn_adv_train.so.8.8.0
/usr/lib64/libcudnn_cnn.so.9.1.0
/usr/lib64/libcudnn_cnn_infer.so.8.8.0
/usr/lib64/libcudnn_cnn_train.so.8.8.0
/usr/lib64/libcudnn_engines_precompiled.so.9.1.0
/usr/lib64/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib64/libcudnn_graph.so.9.1.0
/usr/lib64/libcudnn_heuristic.so.9.1.0
/usr/lib64/libcudnn_ops.so.9.1.0
/usr/lib64/libcudnn_ops_infer.so.8.8.0
/usr/lib64/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 4
Stepping: 11
Frequency boost: enabled
CPU(s) scaling MHz: 100%
CPU max MHz: 1801.0000
CPU min MHz: 800.0000
BogoMIPS: 3600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 96 MiB (96 instances)
L3 cache: 132 MiB (4 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23,96-119
NUMA node1 CPU(s): 24-47,120-143
NUMA node2 CPU(s): 48-71,144-167
NUMA node3 CPU(s): 72-95,168-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241112+cu121
[pip3] torchaudio==2.5.0.dev20241112+cu121
[pip3] torchvision==0.20.0.dev20241112+cu121
[conda] numpy 2.0.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0.dev20241112+cu121 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241112+cu121 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241112+cu121 pypi_0 pypi
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,791,029,904 | Update fbgemm_gpu pin | malfet | open | [
"topic: not user facing",
"ciflow/inductor",
"no-runner-experiments"
] | 6 | CONTRIBUTOR | fbgemm_gpu is used in inductor tests; updating the pin to unblock https://github.com/pytorch/pytorch/pull/138626
| true |
2,791,021,623 | DISABLED test_compile_forward_chunk_cpu_float32 (__main__.TestNestedTensorOpInfoCPU) | pytorch-bot[bot] | closed | [
"triaged",
"module: flaky-tests",
"module: nestedtensor",
"skipped",
"module: unknown"
] | 4 | NONE | Platforms: asan, linux, mac, macos, win, windows
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_compile_forward_chunk_cpu_float32&suite=TestNestedTensorOpInfoCPU&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35668603559).
Over the past 3 hours, it has been determined flaky in 58 workflow(s) with 0 failures and 58 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_compile_forward_chunk_cpu_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_nestedtensor.py`
ConnectionTimeoutError: Connect timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/test_nestedtensor.py -2 (connected: false, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @clee2000 @wdvr @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ | true |
2,791,021,533 | DISABLED test_pt2_traceable_aot_eager_cpu_float8_e4m3fn (__main__.TestFloat8DtypeCPUOnlyCPU) | pytorch-bot[bot] | closed | [
"oncall: quantization",
"module: flaky-tests",
"skipped",
"module: unknown"
] | 12 | NONE | Platforms: linux, mac, macos, asan
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_pt2_traceable_aot_eager_cpu_float8_e4m3fn&suite=TestFloat8DtypeCPUOnlyCPU&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35666300791).
Over the past 3 hours, it has been determined flaky in 24 workflow(s) with 48 failures and 24 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_pt2_traceable_aot_eager_cpu_float8_e4m3fn`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_quantization.py`
ConnectionTimeoutError: Connect timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/test_quantization.py -2 (connected: false, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @clee2000 @wdvr | true |
2,791,021,454 | DISABLED test_channel_group_quantization (__main__.TestQuantizePT2EAffineQuantization) | pytorch-bot[bot] | closed | [
"oncall: quantization",
"module: flaky-tests",
"skipped",
"module: unknown"
] | 9 | NONE | Platforms: asan, linux, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_channel_group_quantization&suite=TestQuantizePT2EAffineQuantization&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35666010341).
Over the past 3 hours, it has been determined flaky in 13 workflow(s) with 26 failures and 13 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_channel_group_quantization`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/quantization/pt2e/test_quantize_pt2e.py", line 2487, in test_channel_group_quantization
from torch.ao.quantization.pt2e._affine_quantization import (
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12793413718/lib/python3.9/site-packages/torch/ao/quantization/pt2e/_affine_quantization.py", line 189, in <module>
register_custom_op = _register_custom_op(quant_lib)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12793413718/lib/python3.9/site-packages/torch/ao/quantization/pt2e/_affine_quantization.py", line 161, in _register_custom_op
from torch._inductor.decomposition import register_decomposition
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12793413718/lib/python3.9/site-packages/torch/_inductor/decomposition.py", line 98, in <module>
decompositions = {**core_aten_decompositions(), **inductor_decompositions}
TypeError: 'CustomDecompTable' object is not a mapping
To execute this test, run the following from the base repo dir:
python test/quantization/pt2e/test_quantize_pt2e.py TestQuantizePT2EAffineQuantization.test_channel_group_quantization
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_quantization.py`
ConnectionTimeoutError: Connect timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/test_quantization.py -2 (connected: false, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @clee2000 @wdvr | true |
2,791,021,375 | DISABLED test_compile_forward_select_cpu_float32 (__main__.TestNestedTensorOpInfoCPU) | pytorch-bot[bot] | closed | [
"triaged",
"module: flaky-tests",
"module: nestedtensor",
"skipped",
"module: unknown"
] | 6 | NONE | Platforms: asan, linux, mac, macos, win, windows
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_compile_forward_select_cpu_float32&suite=TestNestedTensorOpInfoCPU&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35668072388).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 0 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_compile_forward_select_cpu_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_nestedtensor.py`
ConnectionTimeoutError: Connect timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/test_nestedtensor.py -2 (connected: false, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @clee2000 @wdvr | true |
2,791,021,309 | DISABLED test_compile_forward_chunk_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | pytorch-bot[bot] | closed | [
"triaged",
"module: flaky-tests",
"module: nestedtensor",
"skipped",
"module: unknown"
] | 8 | NONE | Platforms: linux, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_compile_forward_chunk_cuda_float32&suite=TestNestedTensorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35667819694).
Over the past 3 hours, it has been determined flaky in 26 workflow(s) with 2 failures and 26 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_compile_forward_chunk_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_nestedtensor.py`
ConnectionTimeoutError: Connect timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/test_nestedtensor.py -2 (connected: false, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @clee2000 @wdvr @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ | true |
2,791,020,926 | DISABLED test_compile_forward_clone_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | pytorch-bot[bot] | closed | [
"triaged",
"module: flaky-tests",
"module: nestedtensor",
"skipped",
"module: unknown"
] | 4 | NONE | Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_compile_forward_clone_cuda_float32&suite=TestNestedTensorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35670326638).
Over the past 3 hours, it has been determined flaky in 17 workflow(s) with 0 failures and 17 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_compile_forward_clone_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_nestedtensor.py`
ConnectionTimeoutError: Connect timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/test_nestedtensor.py -2 (connected: false, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @clee2000 @wdvr @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ | true |
2,791,020,866 | DISABLED test_re_export_preserve_handle (__main__.TestNumericDebugger) | pytorch-bot[bot] | open | [
"triaged",
"module: flaky-tests",
"module: macos",
"skipped"
] | 3 | NONE | Platforms: mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_re_export_preserve_handle&suite=TestNumericDebugger&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35666010341).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_re_export_preserve_handle`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_quantization.py`
ConnectionTimeoutError: Connect timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/test_quantization.py -2 (connected: false, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @clee2000 @wdvr @malfet @albanD | true |
2,791,020,811 | DISABLED test_recompile_on_global_state_change (__main__.MiscTests) | pytorch-bot[bot] | closed | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped"
] | 9 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_recompile_on_global_state_change&suite=MiscTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35668591743).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 18 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_recompile_on_global_state_change`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_misc.py", line 7855, in test_recompile_on_global_state_change
assert read_state() == new_state
AssertionError
```
</details>
Test file path: `dynamo/test_misc.py`
ConnectionTimeoutError: Connect timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/dynamo/test_misc.py -2 (connected: false, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr | true |
2,791,020,723 | DISABLED test_recompile_on_global_state_change_dynamic_shapes (__main__.DynamicShapesMiscTests) | pytorch-bot[bot] | closed | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped"
] | 9 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_recompile_on_global_state_change_dynamic_shapes&suite=DynamicShapesMiscTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35670761806).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 18 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_recompile_on_global_state_change_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_misc.py", line 7855, in test_recompile_on_global_state_change
assert read_state() == new_state
AssertionError
```
</details>
Test file path: `dynamo/test_dynamic_shapes.py`
ConnectionTimeoutError: Connect timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/dynamo/test_dynamic_shapes.py -2 (connected: false, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr | true |
2,791,020,695 | DISABLED test_mismatched_global_state (__main__.GraphRegionTrackerTests) | pytorch-bot[bot] | closed | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped"
] | 11 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mismatched_global_state&suite=GraphRegionTrackerTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35666846912).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 18 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_mismatched_global_state`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_graph_region_tracker.py`
ConnectionTimeoutError: Connect timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/dynamo/test_graph_region_tracker.py -2 (connected: false, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr | true |
2,791,018,380 | serde unbacked bindings | avikchaudhuri | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 10 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144894
Adds unbacked bindings during deserialization. These are carried by a node's metadata, and map pending fresh unbacked symbols to paths to such symbols inside the corresponding example value carried by the node's metadata.
Since it is awkward to serialize paths, we only serialize the names of these symbols and reconstruct the paths on deserialization, using a shape env util. We also need to bump counters for unbacked symbols here, because the shape env util we use to create these symbols (when deserializing example values) doesn't do so, and not doing so makes later passes (like `run_decompositions`) crash because new unbacked symbols don't get new names.
This is enough for non-strict. For strict, the unbacked bindings and example values in node metadata can get out of sync, because of running AOTAutograd as an additional step after Dynamo. So we have to sync those back.
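The name-only round trip described above can be sketched in plain Python. This is an illustration of the idea, not the actual export serde code (the real version uses shape env utilities and richer paths; `FakeTensor` and the helper names here are made up):

```python
# Illustrative sketch: serialize only the *names* of pending unbacked
# symbols, then rebuild the name -> path mapping on deserialization by
# walking the example value carried in node metadata.
from dataclasses import dataclass

@dataclass
class FakeTensor:
    shape: tuple  # entries are ints or unbacked-symbol names like "u0"

def serialize_bindings(bindings):
    # bindings: {symbol_name: path}; paths are awkward to serialize,
    # so keep only the names.
    return sorted(bindings)

def deserialize_bindings(names, example):
    # Reconstruct each path by locating the symbol inside the example.
    out = {}
    for name in names:
        for dim, size in enumerate(example.shape):
            if size == name:
                out[name] = ("shape", dim)
    return out

ex = FakeTensor(shape=(3, "u0", "u1"))
names = serialize_bindings({"u1": ("shape", 2), "u0": ("shape", 1)})
print(deserialize_bindings(names, ex))  # {'u0': ('shape', 1), 'u1': ('shape', 2)}
```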
Differential Revision: [D68232274](https://our.internmc.facebook.com/intern/diff/D68232274/) | true |
2,791,016,854 | TIMM Training cudagraphs poolformer_m36 regression | zou3519 | closed | [
"high priority",
"triaged",
"module: cuda graphs",
"oncall: pt2",
"pt2-pass-rate-regression"
] | 3 | CONTRIBUTOR | Used to pass, now "eager_two_runs_differ". This probably just needs some tolerance adjustments
https://hud.pytorch.org/benchmark/timm_models/inductor_with_cudagraphs?dashboard=torchinductor&startTime=Fri,%2019%20Jul%202024%2020:48:05%20GMT&stopTime=Wed,%2015%20Jan%202025%2021:48:05%20GMT&granularity=week&mode=training&model=poolformer_m36&dtype=amp&deviceName=cuda%20(a100)&lBranch=main&lCommit=1dab79470dbecef79ba4c7d4308d8a181091e58e&rBranch=main&rCommit=a8319698b3ba7c858fa3e4f3aac88d3fe9dc00d1
cc @ezyang @gchanan @kadeng @msaroufim @mcarilli @eellison @penguinwu @BoyuanFeng @chauhang | true |
2,791,009,120 | serde unbacked bindings | avikchaudhuri | closed | [
"fb-exported",
"ciflow/inductor",
"release notes: export"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144892
Differential Revision: [D68232274](https://our.internmc.facebook.com/intern/diff/D68232274/) | true |
2,790,993,484 | TorchBench mobilenet_v2 cudagraphs_freezing inference regression | zou3519 | closed | [
"high priority",
"triaged",
"module: cuda graphs",
"oncall: pt2",
"pt2-pass-rate-regression"
] | 2 | CONTRIBUTOR | https://hud.pytorch.org/benchmark/torchbench/inductor_with_cudagraphs_freezing?dashboard=torchinductor&startTime=Fri,%2019%20Jul%202024%2020:38:32%20GMT&stopTime=Wed,%2015%20Jan%202025%2021:38:32%20GMT&granularity=week&mode=inference&model=mobilenet_v2&dtype=bfloat16&deviceName=cuda%20(a100)&lBranch=main&lCommit=2ed4d65af0a1993c0df7b081f4088d0f3614283e&rBranch=main&rCommit=a8319698b3ba7c858fa3e4f3aac88d3fe9dc00d1
Regressed sometime in August
cc @ezyang @gchanan @kadeng @msaroufim @mcarilli @eellison @penguinwu @BoyuanFeng @chauhang | true |
2,790,973,837 | Ways the HUD compilers dashboard could be better | zou3519 | open | [
"triaged",
"enhancement",
"module: devx"
] | 1 | CONTRIBUTOR | I got here because I'm trying to answer the question of "which compiler benchmarks regressed in the past year?" I've spent a couple of hours on the HUD dashboard page, and I still haven't figured this out yet. Here's some of the gripes that I ran into while trying to answer this question.
1) The page seems to refresh itself every couple of minutes. This disrupts the train of thought. Also, I am not sure if the settings change when it refreshes.
2) The passrate chart and the graphs don't have all of the data. In particular, the passrate chart doesn't contain the max_autotune configs. I don't know how to actually click into the max_autotune data.

3) https://github.com/pytorch/test-infra/issues/6173
4) There's one passrate chart but there are 3 passrate graphs. Scrolling between the graphs is kind of annoying
5) The graphs have so many series that some of them are hidden. Might be nicer to increase the height?

6) It's not clear to me how to hack on these charts. With our internal tools (like Scuba and Unidash), it's easy (and well known) how to look up information.
Hypothesis: If we feed the data to internal sources and use internal tooling as the UXs, then we would be more productive than trying to roll our own UX.
cc @ZainRizvi @kit1980 @huydhn @clee2000 | true |
2,790,871,759 | Support remaining *_like factory functions for NJT | jbschlosser | closed | [
"Merged",
"ciflow/trunk",
"topic: improvements",
"release notes: nested tensor"
] | 12 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144889
Fixes #144761
This PR adds NJT impls for those *_like functions that were previously missing:
* `full_like()`
* `rand_like()`
* `randint_like()`
It also fixes a bug in existing *_like functions when a new device is specified. Fix is to also transfer `offsets` / `lengths` to the new device. | true |
2,790,866,668 | TIMM cudagraphs_freezing inference regression | zou3519 | closed | [
"high priority",
"triaged",
"module: cuda graphs",
"oncall: pt2",
"pt2-pass-rate-regression"
] | 1 | CONTRIBUTOR | https://hud.pytorch.org/benchmark/timm_models/inductor_with_cudagraphs_freezing?dashboard=torchinductor&startTime=Mon,%2016%20Dec%202024%2020:49:27%20GMT&stopTime=Wed,%2015%20Jan%202025%2020:49:27%20GMT&granularity=day&mode=inference&model=lcnet_050&dtype=bfloat16&deviceName=cuda%20(a100)&lBranch=main&lCommit=1dab79470dbecef79ba4c7d4308d8a181091e58e&rBranch=main&rCommit=297ce776363cc4802fa74d210fced2b4128960d5
This model used to pass sometime in the last year but is now failing with an accuracy issue
cc @ezyang @gchanan @kadeng @msaroufim @mcarilli @eellison @penguinwu @BoyuanFeng @chauhang | true |
2,790,810,275 | Binary upload checksum | clee2000 | closed | [
"Merged",
"Reverted",
"topic: not user facing",
"ciflow/binaries_wheel",
"ci-no-td"
] | 8 | CONTRIBUTOR | Equivalent to https://github.com/pytorch/test-infra/pull/6172 but for pytorch | true |
2,790,806,102 | [SymmetricMemory] fix an issue where rendezvous is performed with wrong device context when torch.cuda.set_device() is not callled | yifuwang | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 3 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145652
* __->__ #144886
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,790,703,775 | [dynamo] add option to not skip on empty graph | williamwen42 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 10 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144885
Temporary fix to https://github.com/pytorch/pytorch/issues/144360.
Turning the config on globally will cause a bunch of tests to fail, which needs to be addressed in followups.
I had a previous attempt at https://github.com/pytorch/pytorch/pull/144712, but this is a more complicated change and will likely be absorbed into work to refactor Dynamo's exception handling.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,790,688,709 | Adding more compile time logging in pad_mm | Mingming-Ding | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 22 | CONTRIBUTOR | Summary: As title
Test Plan:
[midin@6262.od /data/sandcastle/boxes/fbsource/fbcode (99e64d2e4)]$ tlp buck run mode/opt caffe2/test/inductor:pad_mm -- -r test_exclude_padding
https://interncache-all.fbcdn.net/manifold/perfetto-artifacts/tree/ui/index.html?url=https%3A%2F%2Finterncache-all.fbcdn.net%2Fmanifold%2Ftlparse_reports%2Ftree%2Flogs%2F.tmpiJLgXX%2Fchromium_events.json#!/viewer?url=https%3A%2F%2Finterncache-all.fbcdn.net%2Fmanifold%2Ftlparse_reports%2Ftree%2Flogs%2F.tmpiJLgXX%2Fchromium_events.json&local_cache_key
{F1974355662}
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,790,674,148 | Fix erroneous at_vreinterpretq_u16_bf16 call | swolchok | closed | [
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"ciflow/linux-aarch64"
] | 7 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144883
Here, `mask` is definitely a `uint16x8_t`, not an `at_bfloat16x8_t`, so we shouldn't be reinterpreting it. Candidate fix for #144818 .
Differential Revision: [D68224128](https://our.internmc.facebook.com/intern/diff/D68224128/)
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | true |
2,790,671,403 | fix typo in doc and import for torch._library.triton | ydwu4 | closed | [
"Merged",
"ciflow/trunk",
"topic: docs",
"topic: not user facing"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144882
Previously, the doc suggested `from torch._library.triton import wrap_triton, triton_op`, which doesn't work because `wrap_triton` is not imported in torch/_library/__init__.py, while `from torch.library import wrap_triton` works. This PR imports `wrap_triton` and fixes the doc.
| true |
2,790,640,165 | Uniformly update scipy pin to 1.14.1 | ezyang | closed | [
"release notes: releng"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144695
* #144863
* __->__ #144881
Signed-off-by: Edward Z. Yang <ezyang@meta.com> | true |
2,790,580,218 | update guard_size_oblivious comment | laithsakka | closed | [
"fb-exported",
"Stale",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 10 | CONTRIBUTOR | Summary:
I just wish this had been here when I learned about guard_size_oblivious; it turns out it is
there on the other guard_size_oblivious function call.
But it could have saved me some time to see it here as well.
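For context, a toy model of what "size-oblivious" means. This is only an illustration of the semantics, not PyTorch's implementation (the real `guard_size_oblivious` lives in `torch.fx.experimental.symbolic_shapes` and operates on SymBool expressions):

```python
# Toy model of size-oblivious evaluation: when a size is unbacked
# (unknown at compile time), answer size questions as if it were >= 2,
# instead of introducing guards for the 0/1 special cases.
UNBACKED = object()  # stand-in for an unbacked SymInt

def toy_guard_size_oblivious(size, predicate):
    # For a concrete size, just evaluate; for an unbacked one, assume
    # a generic size of 2 so no 0/1 guard is created.
    return predicate(2 if size is UNBACKED else size)

print(toy_guard_size_oblivious(UNBACKED, lambda s: s == 1))  # False
print(toy_guard_size_oblivious(0, lambda s: s == 0))         # True
```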
Differential Revision: D68216350
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,790,532,890 | FlexAttention Compilation Uses Non-Standard Invocation Of Inductor Ops | eellison | open | [
"triaged",
"oncall: pt2",
"module: inductor",
"module: higher order operators",
"module: pt2-dispatcher",
"internal ramp-up task",
"module: flex attention"
] | 1 | CONTRIBUTOR | ### 🐛 Describe the bug
`Modification Wrapper` uses a non-standard way of invoking inductor operators.
In https://github.com/pytorch/pytorch/blob/d065e8a9de7d6b91bd18286bf45e5094f1278f9f/torch/_inductor/select_algorithm.py#L623-L634
it passes string arguments to `subgraph.data.inner_fn(())` instead of `CSEVariable`. This makes the typing incorrect throughout codegen, and prevents relying on the properties of CSEVariable. I recently added tracking of dtypes to every intermediary in inductor codegen and enabled tests in opinfos. I would like to rely on them in codegen because it enables:
- [Deletion of 7 ops from the inductor opset](https://github.com/pytorch/pytorch/blob/069419569d01c168952dc80bcc61bcb81a2bf3de/torch/_inductor/ops_handler.py#L719-L744)
- [Some codegen cleanups](https://github.com/pytorch/pytorch/blob/069419569d01c168952dc80bcc61bcb81a2bf3de/torch/_inductor/codegen/triton.py#L1374)
Dtype tracking is also being used today for both MTIA for low-precision, and prologue fusion low-precision (neither of which interacts with flex attention today).
I suspect this is also related to this error: https://github.com/pytorch/pytorch/issues/144869
When this is fixed we should be able to remove this special casing https://github.com/pytorch/pytorch/blob/069419569d01c168952dc80bcc61bcb81a2bf3de/torch/_inductor/dtype_propagation.py#L69.
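A minimal, self-contained illustration of why the typing matters. These classes are toys, not inductor's real `CSEVariable` or dtype-propagation machinery; the point is only that a wrapper can carry dtype metadata while a bare string cannot:

```python
# A CSEVariable-like wrapper carries a dtype, so downstream codegen can
# consult it; passing a raw string in its place loses that metadata.
from dataclasses import dataclass

@dataclass
class Var:
    name: str
    dtype: str

def promote(a: Var, b: Var) -> str:
    # A downstream pass that relies on dtype metadata being present.
    order = ["float16", "float32", "float64"]
    return order[max(order.index(a.dtype), order.index(b.dtype))]

x, y = Var("tmp0", "float16"), Var("tmp1", "float32")
print(promote(x, y))  # float32
# promote("tmp0", "tmp1") would crash: plain strings carry no dtype.
```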
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
### Versions
on master | true |
2,790,489,838 | [Release/2.6] Enable python-3.13t aarch64 builds | malfet | closed | [
"release notes: releng"
] | 2 | CONTRIBUTOR | Cherry-picks following 2 commits
- https://github.com/pytorch/pytorch/pull/144716
- https://github.com/pytorch/pytorch/pull/144698
And regenerated the workflow file by running `RELEASE_VERSION_TAG=2.6 .github/regenerate.sh`
| true |
2,790,482,763 | [mps] Massage test_full_truncation to work only on the supported dtypes. | dcci | closed | [
"Merged",
"Reverted",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 8 | MEMBER | Converted a first one to make sure the pattern was the one we wanted -- if we're OK with this, I'll probably adjust all the other failing ones in a batch or two. Let me know.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,790,447,154 | Enable sleef for Win Arm64 | iremyux | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: build",
"module: inductor",
"ciflow/inductor"
] | 4 | COLLABORATOR | Sleef module was disabled for Windows Arm64 on https://github.com/iremyux/pytorch/commit/b021486405de45e184b34c4eeeba7c3b6cf2da73
This PR enables it again since the issue is no longer valid.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,790,443,660 | [Monitoring] Display on HUD the information about runners that failed to be created (which cause jobs to queue) | ZainRizvi | open | [
"module: ci",
"triaged"
] | 1 | CONTRIBUTOR | ## Context
When jobs queue for a significant period of time, it's usually for one of the following reasons:
- The desired machine is out of stock. We'll retry creating that instance until it becomes available
- There's a bug preventing that runner type from coming online, or perhaps even being provisioned
- Some other AWS issue that prevented the runner from being provisioned
## The Ask
This has two parts: Data Export and Visualization
### Data Export
Update the autoscaler lambdas to export the following data to ClickHouse:
- When instances are provisioned successfully
- When instances fail to get provisioned, along with their error codes
- Number of instances currently being retried
### Visualization
Add new charts to HUD to show the number of runners of each type that have been provisioned, the number that failed (along with the reason), and the number currently waiting to be retried.
This could end up looking similar to the internal charts we have at https://fburl.com/unidash/z3wfjdwv.
Why do we need new charts if similar data is already available internally? Because the internal charts cannot capture stats about the LF fleet, and we want to be able to track service health across both the Meta and LF fleets.
cc @seemethere @malfet @pytorch/pytorch-dev-infra | true |
2,790,425,283 | [dynamo] Support mutation on type objects | StrongerXi | open | [
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025"
] | 0 | CONTRIBUTOR | This tracks (2) from https://github.com/pytorch/pytorch/pull/144419#issuecomment-2583533712.
Repro:
```python
@torch.compile(backend="eager", fullgraph=True)
def f(x):
Foo.a = 1
return x + 1
f(torch.ones(1))
# File ".../torch/_dynamo/symbolic_convert.py", line 1843, in STORE_ATTR
# BuiltinVariable(setattr).call_function(
# File ".../torch/_dynamo/variables/builtin.py", line 1003, in call_function
# return handler(tx, args, kwargs)
# ^^^^^^^^^^^^^^^^^^^^^^^^^
# File ".../torch/_dynamo/variables/builtin.py", line 845, in builtin_dispatch
# unimplemented(error_msg)
# File ".../torch/_dynamo/exc.py", line 356, in unimplemented
# raise Unsupported(msg, case_name=case_name)
#torch._dynamo.exc.Unsupported: builtin: setattr [<class 'torch._dynamo.variables.user_defined.UserDefinedClassVariable'>, <class 'torch._dynamo.variables.constant.ConstantVariable'>, <class 'torch._dynamo.variables.constant.ConstantVariable'>] False
```
We _might_ also want to support mutation on `__dict__` object as a result, although that could be subsumed by #144873.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,790,423,672 | [dynamo] Model `__dict__` with `ConstDictVariable` rather than `GetAttrVariable` | StrongerXi | closed | [
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1 | CONTRIBUTOR | This tracks (1) from https://github.com/pytorch/pytorch/pull/144419#pullrequestreview-2541259169.
It'll lead to removal of duplicated logic for dictionary object handling below, and make it easier to reason about `__dict__` in general.
https://github.com/pytorch/pytorch/blob/d85ae4be734cfd53f5b893240894381ac65fe8b4/torch/_dynamo/variables/misc.py#L1027-L1074
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,790,396,022 | Add flop formula for _scaled_mm | lw | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 12 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144872
This will make it work correctly with the partitioner's AutoAC | true |
2,790,385,402 | Implement grid_sampler_2d_backward for MPS device | chriscremer | closed | [
"release notes: mps"
] | 2 | NONE | Implement `grid_sampler_2d_backward` for the MPS device.
* **Add `aten/src/ATen/native/mps/GridSamplerMPS.mm`**
- Implement `grid_sampler_2d_backward_mps` function.
- Include necessary headers for MPS device support.
- Define the `grid_sampler_2d_backward_mps` function.
- Implement the backward pass logic for grid sampler.
- Register the `grid_sampler_2d_backward_stub` function for MPS device.
* **Add `aten/src/ATen/native/mps/GridSamplerMPS.h`**
- Declare the `grid_sampler_2d_backward_mps` function for MPS device.
- Include necessary headers for MPS device support.
* **Add `aten/src/ATen/native/mps/DispatchStub.h`**
- Add `grid_sampler_2d_backward_mps` to the dispatch table for MPS device.
- Include the `GridSamplerMPS.h` header.
| true |
2,790,334,707 | [BE] - Remove conda test and upload scripts and env variables from Workflows Part 1 | atalman | closed | [
"Merged",
"ciflow/binaries",
"release notes: releng"
] | 4 | CONTRIBUTOR | Remove conda test and upload scripts and env variables from Workflows
Related to: https://github.com/pytorch/pytorch/issues/138506 | true |
2,790,332,270 | FlexAttention errors with certain functions and half precision in score_mod | michael-diggin | closed | [
"triaged",
"oncall: pt2",
"module: flex attention"
] | 3 | CONTRIBUTOR | ### 🐛 Describe the bug
Using certain functions in `score_mod` as part of FlexAttention errors when using float16 or bfloat16. This is on nightly; to reproduce:
```python
import torch
from torch.nn.attention.flex_attention import flex_attention
flex_attention = torch.compile(flex_attention, dynamic=False)
q = torch.randn((1, 1, 128, 16), dtype=torch.float16, device="cuda")
k = torch.randn((1, 1, 128, 16), dtype=torch.float16, device="cuda")
v = torch.randn((1, 1, 128, 16), dtype=torch.float16, device="cuda")
mass = torch.ones((1), dtype=torch.float16, device="cuda")
def score_mod(score, b, h, q_idx, kv_idx):
return score + torch.log(mass[0])
out = flex_attention(q, k, v, score_mod=score_mod) # fails
```
Using `torch.log(mass[0].to(torch.float32))` succeeds.
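As a stopgap until the lowering is fixed, upcasting the half-precision scalar before the transcendental call keeps Triton happy. A minimal CPU-only sketch of the upcast itself (without FlexAttention, which needs a GPU):

```python
import torch

mass = torch.ones((1), dtype=torch.float16)

# Upcast to float32 before torch.log, mirroring the workaround above;
# inside score_mod this would read: score + torch.log(mass[0].to(torch.float32))
value = torch.log(mass[0].to(torch.float32))
print(value)  # tensor(0.)
```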
I believe it's because the lowering from `torch.log` to Triton isn't converting to `tl.float32` before the log call, which Triton needs (and hence the same error occurs when using some other operations, like `sin`, `cos`, etc), since the error contains:
```
ValueError: Expected dtype ['fp32', 'fp64'] but got fp16
The above exception was the direct cause of the following exception:
triton.compiler.errors.CompilationError: at 50:11:
# ~~~~~~~~~~~~~~~~~~~ Apply score modification ~~~~~~~~~~~~~~~~~~~
if CHECK_BLOCK_BOUNDARY:
# If this is the last block of a non divisible seqlen, we still need to load [BLOCK_M, BLOCK_N] elements,
# which is larger than the actual number of elements. To avoid access memory out of bound,
# we need to mask out the elements that are out of Q_LEN & KV_LEN.
m = offs_m % Q_LEN
n = offs_n % KV_LEN
else:
m = offs_m
n = offs_n
tmp0 = tl_math.log(tl.load(in_ptr8 + 0))
```
I did have a look into it, and while there is a decorator on log here: https://github.com/pytorch/pytorch/blob/main/torch/_inductor/codegen/triton.py#L1218, the arguments provided from the lowering process are just the string `'tl.load(in_ptr8 + 0)'` rather than a CSEVariable and hence don't get upcast:
https://github.com/pytorch/pytorch/blob/main/torch/_inductor/codegen/triton.py#L763
<details>
<summary>Full error</summary>
```
InductorError: SubprocException: An exception occurred in a subprocess:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/triton/language/core.py", line 35, in wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/triton/language/math.py", line 26, in check
raise ValueError(f"Expected dtype {dtypes} but got {arg.type.scalar.name}")
ValueError: Expected dtype ['fp32', 'fp64'] but got fp16
The above exception was the direct cause of the following exception:
triton.compiler.errors.CompilationError: at 50:11:
# ~~~~~~~~~~~~~~~~~~~ Apply score modification ~~~~~~~~~~~~~~~~~~~
if CHECK_BLOCK_BOUNDARY:
# If this is the last block of a non divisible seqlen, we still need to load [BLOCK_M, BLOCK_N] elements,
# which is larger than the actual number of elements. To avoid access memory out of bound,
# we need to mask out the elements that are out of Q_LEN & KV_LEN.
m = offs_m % Q_LEN
n = offs_n % KV_LEN
else:
m = offs_m
n = offs_n
tmp0 = tl_math.log(tl.load(in_ptr8 + 0))
^
The above exception was the direct cause of the following exception:
triton.compiler.errors.CompilationError: at 44:28:
SPARSE_KV_MULTIPLE: tl.constexpr = (SPARSE_KV_BLOCK_SIZE // BLOCK_N)
RCP_LN2: tl.constexpr = 1.44269504
if PRESCALE_QK:
q = (q * SM_SCALE * RCP_LN2).to(MATMUL_PRECISION)
# loop over k, v and update accumulator until block_n_end
for start_n in range(block_n_start, block_n_end):
if IS_DIVISIBLE:
acc, l_i, m_i = forward_block_mn(
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/compile_worker/subproc_pool.py", line 337, in do_job
result = job()
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/runtime/compile_tasks.py", line 74, in _worker_compile_triton
load_kernel().precompile(warm_cache_only=True)
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/runtime/triton_heuristics.py", line 262, in precompile
compiled_binary, launcher = self._precompile_config(
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/runtime/triton_heuristics.py", line 449, in _precompile_config
binary = triton.compile(*compile_args, **compile_kwargs)
File "/usr/local/lib/python3.10/dist-packages/triton/compiler/compiler.py", line 273, in compile
module = src.make_ir(options, codegen_fns, module_map, context)
File "/usr/local/lib/python3.10/dist-packages/triton/compiler/compiler.py", line 100, in make_ir
return ast_to_ttir(self.fn, self, context=context, options=options, codegen_fns=codegen_fns,
triton.compiler.errors.CompilationError: at 158:20:
)
V_block_ptr = tl.make_block_ptr(
base=V,
shape=(KV_LEN, V_HEAD_DIM),
strides=(stride_vn, stride_vk),
offsets=(kv_start, 0),
block_shape=(BLOCK_N, V_HEAD_DIM),
order=(1, 0)
)
offs_n = kv_start + tl.arange(0, BLOCK_N)
acc, l_i, m_i = forward_inner(
^
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
</details>
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250115+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 7
BogoMIPS: 4400.39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 6 MiB (6 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Vulnerable; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.10
[pip3] optree==0.13.1
[pip3] pynvjitlink-cu12==0.4.0
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250115+cu124
[pip3] torchaudio==2.5.1+cu121
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.1+cu121
[conda] Could not collect
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng | true |
2,790,209,100 | FUNC_INLINELIST doesn't exist | zou3519 | closed | [
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025"
] | 2 | CONTRIBUTOR | probably just obsolete comment: https://github.com/pytorch/pytorch/blob/7c52c97a65f58e1de2967509ab732e20f468dae8/torch/_dynamo/trace_rules.py#L3176
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,790,112,994 | Unconditionally exclude upper bound in all size oblivious tests | ezyang | closed | [
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 9 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144867
I was thinking about https://github.com/pytorch/pytorch/pull/144471 some more and I thought, "Hmm, why not just always exclude the constant upper bound." So here it is.
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
cc @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,790,081,956 | [AOTI] Add an option to skip optimizing generated wrapper code | desertfire | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: improvements",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 10 | CONTRIBUTOR | Summary: In some cases, generated wrapper code faces a long cpp compilation time. As an alleviation, this PR adds an option to skip cpp compiler optimizers for the generated main wrapper function body.
D68174038
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @chauhang @aakhundov | true |
2,790,052,755 | [64-bit] Int64 casting for UpSampleNearest3D | jataylo | closed | [
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda",
"ciflow/inductor",
"ciflow/slow",
"ciflow/rocm"
] | 11 | COLLABORATOR | Fixes #144855
Follows approach in https://github.com/pytorch/pytorch/pull/141923 to use int64 types to increase INT_MAX limits | true |
2,790,049,798 | Undo leading underscore on ctx for breakpoint | ezyang | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144864
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,790,013,574 | Update executorch pin | ezyang | closed | [
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144695
* __->__ #144863
* #144881
Signed-off-by: Edward Z. Yang <ezyang@meta.com> | true |
2,789,880,755 | Create aaaa | swgu98 | closed | [
"open source"
] | 3 | NONE | Fixes #ISSUE_NUMBER
| true |
2,789,830,741 | Region check for in-place read and write does not always work | jenspetersen | open | [
"triaged",
"module: partial aliasing"
] | 1 | NONE | ### 🐛 Describe the bug
Hello!
If I try to read and write from and to the same locations along the first axis of a tensor, I get a RuntimeError, which is expected:
```python
>>> arr = torch.arange(9).reshape(3, 3)
>>> arr[1:, :] = arr[:-1, :]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[25], line 1
----> 1 arr[1:, :] = arr[:-1, :]
2 arr
RuntimeError: unsupported operation: some elements of the input tensor and the written-to tensor refer to a single memory location. Please clone() the tensor before performing the operation.
```
However, if I do the same on the second axis, there is no error, but (for me) unexpected behaviour:
```python
>>> arr = torch.arange(9).reshape(3, 3)
>>> arr[:, 1:] = arr[:, :-1]
>>> arr
tensor([[0, 0, 0],
[3, 3, 3],
[6, 6, 6]])
```
The expected behaviour would be what happens on the GPU (and also for numpy arrays):
```python
>>> arr = torch.arange(9).reshape(3, 3).cuda()
>>> arr[:, 1:] = arr[:, :-1]
>>> arr
tensor([[0, 0, 1],
[3, 3, 4],
[6, 6, 7]], device='cuda:0')
```
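Cloning the right-hand side before assignment sidesteps the overlap and gives the expected result on CPU as well (a sketch):

```python
import torch

arr = torch.arange(9).reshape(3, 3)
# clone() materializes the source slice first, so the read no longer
# aliases the memory being written to
arr[:, 1:] = arr[:, :-1].clone()
print(arr)
# tensor([[0, 0, 1],
#         [3, 3, 4],
#         [6, 6, 7]])
```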
I'm not sure if this is actually a bug or just something to be aware of, but I would at least expect CPU and GPU operations to behave the same. How does the check work that results in the RuntimeError in the first case? Is it too expensive to make work for arbitrary slices?
Thanks!
### Versions
Tested torch versions up to 2.4.0 | true |
2,789,769,302 | Exporting a model with dynamic axes and dynamo fails with `TypeError: unhashable type: 'list'` | koute | closed | [
"module: onnx",
"oncall: pt2",
"oncall: export"
] | 3 | NONE | ### 🐛 Describe the bug
Consider the following code:
```python
import torch
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
B, C, H, W = x.shape
return x.view(B, C, H * W)
model = Model()
input_tensor = torch.rand((2, 64, 128, 128))
torch.onnx.export(
model,
(input_tensor,),
"model.onnx",
input_names = ["input"],
output_names = ["output"],
dynamo = True,
dynamic_axes = { "input": {0: "batch", 2: "height", 3: "width"} }
)
```
This fails with the following error:
```
.venv/lib/python3.11/site-packages/onnxscript/converter.py:823: FutureWarning: 'onnxscript.values.Op.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
.venv/lib/python3.11/site-packages/onnxscript/converter.py:823: FutureWarning: 'onnxscript.values.OnnxFunction.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
[torch.onnx] Obtain model graph for `Model()` with `torch.export.export`...
[torch.onnx] Obtain model graph for `Model()` with `torch.export.export`... ✅
[torch.onnx] Translate the graph into ONNX...
[torch.onnx] Translate the graph into ONNX... ❌
Traceback (most recent call last):
File ".venv/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_building.py", line 519, in _call_op
converted_named_inputs = _process_python_sequences(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_building.py", line 434, in _process_python_sequences
_get_or_create_constant(constant_farm, [arg], dtype, opset) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_building.py", line 267, in _get_or_create_constant
constant_value = constant_farm.get((arg, dtype)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: unhashable type: 'list'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File ".venv/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_building.py", line 579, in eval
outputs = self._call_op(op_signature, named_inputs, named_attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_building.py", line 528, in _call_op
raise _errors.GraphConstructionError(
torch.onnx._internal.exporter._errors.GraphConstructionError: Error processing Python constants for operator '::Cast'. named_inputs={'input': [SymbolicTensor('sym_size_int_4', type=Tensor(INT64), shape=[], producer=node_Squeeze_1, index=0), 64, SymbolicTensor('mul', type=Tensor(INT64), shape=[], producer=node_Mul_6, index=0)]}, named_attrs={'to': INT64}, opset=, op_signature=''::Cast(input: T1, to: INT = None) -> (T2) where T1=DOUBLE | UINT64 | UINT16 | INT16 | UINT32 | INT8 | INT64 | STRING | UINT8 | BOOL | INT32 | FLOAT16 | BFLOAT16 | FLOAT, T2=DOUBLE | UINT64 | UINT16 | INT16 | UINT32 | INT8 | INT64 | STRING | UINT8 | BOOL | INT32 | FLOAT16 | BFLOAT16 | FLOAT.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File ".venv/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_core.py", line 469, in _handle_call_function_node_with_lowering
outputs = onnx_function(*onnx_args, **onnx_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/onnxscript/values.py", line 635, in __call__
return self.func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/onnxscript/function_libs/torch_lib/ops/core.py", line 8804, in aten_view
size = op.Cast(size, to=INT64.dtype) # Reshape only support INT64 as second input
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/onnxscript/onnx_opset/_impl/opset13.py", line 291, in Cast
return op(*self._prepare_inputs(schema, input), to=to)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/onnxscript/values.py", line 304, in __call__
return evaluator.default().eval(schema, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_building.py", line 584, in eval
raise _errors.GraphConstructionError(
torch.onnx._internal.exporter._errors.GraphConstructionError: Error calling operator 'Cast' with args ([SymbolicTensor('sym_size_int_4', type=Tensor(INT64), shape=[], producer=node_Squeeze_1, index=0), 64, SymbolicTensor('mul', type=Tensor(INT64), shape=[], producer=node_Mul_6, index=0)],) and kwargs {'to': INT64}.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File ".venv/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_core.py", line 553, in _add_nodes
_handle_call_function_node_with_lowering(
File ".venv/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_core.py", line 471, in _handle_call_function_node_with_lowering
raise _errors.GraphConstructionError(
torch.onnx._internal.exporter._errors.GraphConstructionError: Error when calling function 'TracedOnnxFunction(<function aten_view at 0x75899114ec00>)' with args '[SymbolicTensor('x', type=Tensor(FLOAT), shape=[s0,64,s1,s2], producer=None, index=None), [SymbolicTensor('sym_size_int_4', type=Tensor(INT64), shape=[], producer=node_Squeeze_1, index=0), 64, SymbolicTensor('mul', type=Tensor(INT64), shape=[], producer=node_Mul_6, index=0)]]' and kwargs '{}'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File ".venv/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_core.py", line 1134, in export
onnx_program = _exported_program_to_onnx_program(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_core.py", line 791, in _exported_program_to_onnx_program
values = _add_nodes(exported_program, model, lower=lower, registry=registry)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_core.py", line 565, in _add_nodes
raise _errors.ConversionError(
torch.onnx._internal.exporter._errors.ConversionError: Error when translating node %view : [num_users=1] = call_function[target=torch.ops.aten.view.default](args = (%x, [%sym_size_int_4, 64, %mul]), kwargs = {}). See the stack trace for more information.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "model-ocr/scripts/./torch-test2.py", line 30, in <module>
torch.onnx.export(
File ".venv/lib/python3.11/site-packages/torch/onnx/__init__.py", line 345, in export
return exporter.export_compat(
^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_compat.py", line 161, in export_compat
onnx_program = _core.export(
^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_core.py", line 1181, in export
raise _errors.ConversionError(
torch.onnx._internal.exporter._errors.ConversionError: Failed to convert the exported program to an ONNX model. This is step 2/2 of exporting the model to ONNX. Next steps:
- If there is a missing ONNX function, implement it and register it to the registry.
- If there is an internal error during ONNX conversion, debug the error and summit a PR to PyTorch.
- Save the ExportedProgram as a pt2 file and create an error report with `export(..., report=True)`. Create an issue in the PyTorch GitHub repository against the *onnx* component. Attach the pt2 model and the error report.
## Exception summary
<class 'TypeError'>: unhashable type: 'list'
⬆️
<class 'torch.onnx._internal.exporter._errors.GraphConstructionError'>: Error processing Python constants for operator '::Cast'. named_inputs={'input': [SymbolicTensor('sym_size_int_4', type=Tensor(INT64), shape=[], producer=node_Squeeze_1, index=0), 64, SymbolicTensor('mul', type=Tensor(INT64), shape=[], producer=node_Mul_6, index=0)]}, named_attrs={'to': INT64}, opset=, op_signature=''::Cast(input: T1, to: INT = None) -> (T2) where T1=DOUBLE | UINT64 | UINT16 | INT16 | UINT32 | INT8 | INT64 | STRING | UINT8 | BOOL | INT32 | FLOAT16 | BFLOAT16 | FLOAT, T2=DOUBLE | UINT64 | UINT16 | INT16 | UINT32 | INT8 | INT64 | STRING | UINT8 | BOOL | INT32 | FLOAT16 | BFLOAT16 | FLOAT.
⬆️
<class 'torch.onnx._internal.exporter._errors.GraphConstructionError'>: Error calling operator 'Cast' with args ([SymbolicTensor('sym_size_int_4', type=Tensor(INT64), shape=[], producer=node_Squeeze_1, index=0), 64, SymbolicTensor('mul', type=Tensor(INT64), shape=[], producer=node_Mul_6, index=0)],) and kwargs {'to': INT64}.
⬆️
<class 'torch.onnx._internal.exporter._errors.GraphConstructionError'>: Error when calling function 'TracedOnnxFunction(<function aten_view at 0x75899114ec00>)' with args '[SymbolicTensor('x', type=Tensor(FLOAT), shape=[s0,64,s1,s2], producer=None, index=None), [SymbolicTensor('sym_size_int_4', type=Tensor(INT64), shape=[], producer=node_Squeeze_1, index=0), 64, SymbolicTensor('mul', type=Tensor(INT64), shape=[], producer=node_Mul_6, index=0)]]' and kwargs '{}'
⬆️
<class 'torch.onnx._internal.exporter._errors.ConversionError'>: Error when translating node %view : [num_users=1] = call_function[target=torch.ops.aten.view.default](args = (%x, [%sym_size_int_4, 64, %mul]), kwargs = {}). See the stack trace for more information.
(Refer to the full stack trace above for more information.)
```
Removing the `dynamic_axes` or setting `dynamo = False` fixes the issue.
### Versions
(The `collect_env.py` script doesn't work for me so I'm pasting the versions manually)
```
torch 2.5.1
triton 3.1.0
python 3.11.8
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,789,714,297 | Torch compile cache | christopher5106 | open | [
"triaged",
"oncall: pt2"
] | 8 | NONE | ### 🐛 Describe the bug
Hi,
I'm setting the following values
TORCHINDUCTOR_FX_GRAPH_CACHE
TORCHINDUCTOR_CACHE_DIR
I see the cache folder is populated by 3.8G.
I'm creating a tar archive to move the cache to another instance with the same H100 GPU and untarring it there, but the compile time shows the cache has not been used.
If I set the variables on two instances that share the same network drive, compile on one, then run on the other, the compile time is still very high, as if the cache had not been taken into account.
What are the signatures of the cache elements? If I know better what triggers the cache retrieval, I might find a configuration where I can reuse the cache between instances.
Thanks for your help!
### Versions
torch @ https://download.pytorch.org/whl/nightly/cu124/torch-2.6.0.dev20240918%2Bcu124-cp311-cp311-linux_x86_64.whl
torchaudio @ https://download.pytorch.org/whl/nightly/cu124/torchaudio-2.5.0.dev20240918%2Bcu124-cp311-cp311-linux_x86_64.whl
torchvision @ https://download.pytorch.org/whl/nightly/cu124/torchvision-0.20.0.dev20240918%2Bcu124-cp311-cp311-linux_x86_64.whl
pytorch_triton @ https://download.pytorch.org/whl/nightly/pytorch_triton-3.1.0%2B5fe38ffd73-cp311-cp311-linux_x86_64.whl
cc @chauhang @penguinwu | true |
2,789,625,977 | torch.nn.functional.scaled_dot_product_attention is_causal fails for kv-cache case (sequential and further parallel attention) | JamesGlare | open | [
"triaged",
"module: sdpa"
] | 1 | NONE | ### 🚀 The feature, motivation and pitch
**Behaviour found for torch version 2.2.2**
It would be great if scaled_dot_product_attention could be (easily) used for the case of sequential token generation when a kv-cache is present. However, currently when is_causal is set and a single query vector is passed in, the function only compares against the earliest k and v, resulting in repeatedly producing the same vector during sequential token generation.
More generally, in cases where further parallel attention is required, even when a kv-cache has already been generated, I found the correct attention matrix difficult to generate. The code I converged to is
`mask = torch.tril(torch.ones(w, w, dtype=torch.bool))[-h:, :]`
with
w = sequence length of KV-cache
h = sequence length of queries
since a lower-triangular attention mask is required which is all-true for the case of a single query vector.
Is this indeed the intended way to use scaled_dot_product_attention or am I doing something dumb?
### Alternatives
for is_causal=True, I propose to use the attention mask generated by the code above (plus some unsqueezing for broadcasting).
### Additional context
Behaviour found for torch version 2.2.2 | true |
2,789,568,654 | Update OpenBLAS to 0.3.29 | michalowski-arm | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11 | CONTRIBUTOR | * Improvements for GEMM to GEMV kernels
* Improvements for SVE kernels for SGEMV and DGEMV
| true |
2,789,519,322 | Connection Limitation in PyTorch Distributed (Vanilla) with c10d Rendezvous Backend | aliciasoliveiraa | open | [
"oncall: distributed"
] | 3 | NONE | ### 🐛 Describe the bug
Hello PyTorch team,
I am encountering an issue while using PyTorch Distributed Vanilla with the c10d rendezvous backend. I am currently running PyTorch version 2.5.1.
When trying to establish connections across multiple nodes, I can only manage up to 75 simultaneous connections. The plan was to test with 128, 256, and 512 nodes, but I can't exceed this 75-connection limit.
The following error occurs when attempting to establish more connections:
```shell
Traceback (most recent call last):
File "/projects/I20240002/alicia.oliveira/arm_dist_env/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
...
raise RendezvousConnectionError(
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
```
I am running the setup on an x86 architecture with InfiniBand. I would like to know if this is a known limitation of the c10d rendezvous backend or if there are any configurations or adjustments I can make to allow more connections.
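To help narrow this down, here is a hedged local sketch (the port and client count are arbitrary) that exercises a single TCPStore with several client connections, independent of torchrun; scaling the client count up locally may show whether the ceiling is in the store itself or in the cluster network:

```python
import datetime
import torch.distributed as dist

# Hypothetical local probe: one master TCPStore plus many client
# connections on loopback, to look for a connection ceiling without
# involving torchrun or InfiniBand.
timeout = datetime.timedelta(seconds=30)
server = dist.TCPStore("127.0.0.1", 29611, is_master=True, timeout=timeout)
clients = [
    dist.TCPStore("127.0.0.1", 29611, is_master=False, timeout=timeout)
    for _ in range(16)
]
server.set("key", "value")
assert all(c.get("key") == b"value" for c in clients)
```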
Thank you in advance for your help!
### Versions
PyTorch version: 2.5.1
Rendezvous backend: c10d
Architecture: x86
Network: InfiniBand
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,789,470,536 | RuntimeError: upsample_nearest3d only supports output tensors with less than INT_MAX elements | eppaneamd | closed | [
"triaged",
"module: 64-bit",
"module: interpolation"
] | 1 | NONE | ### 🐛 Describe the bug
Upscaling a tensor with `upsample_nearest3d` where the result size would exceed 2^31 causes a `RuntimeError`. Code to reproduce:
```
import torch
x = torch.ones((1, 256, 16, 720, 1280), dtype=torch.bfloat16).cuda()
out = torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')
assert (out[0] == out[-1]).all()
```
Gives the following error:
```
File "test.py", line 3, in <module>
out = torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/functional.py", line 4651, in interpolate
return torch._C._nn.upsample_nearest3d(input, output_size, scale_factors)
RuntimeError: upsample_nearest3d only supports output tensors with less than INT_MAX elements, but got [1, 256, 32, 1440, 2560]
```
The same behaviour can also be observed with `torch 2.5.1` in both CUDA and HIP environments.
This is a limitation for some models; see e.g. the following [diffusers source code](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_hunyuan_video.py#L107-L116). The same error can occur due to [L115](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_hunyuan_video.py#L115), which requires setting `vae.enable_tiling()`.
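As a possible workaround sketch (small hypothetical sizes): since nearest-neighbour interpolation is computed independently per channel, the input can be chunked along the channel dimension so that each chunk's upsampled output stays below INT_MAX elements, then re-concatenated:

```python
import torch
import torch.nn.functional as F

# Hedged workaround sketch with tiny hypothetical sizes; in practice the
# chunk count would be chosen so each chunk's output is below INT_MAX.
x = torch.ones(1, 8, 4, 6, 6)
chunks = [
    F.interpolate(c, scale_factor=2, mode="nearest")
    for c in x.chunk(2, dim=1)  # split along channels; exact for "nearest"
]
out = torch.cat(chunks, dim=1)
assert out.shape == (1, 8, 8, 12, 12)
```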
### Versions
```
PyTorch version: 2.6.0.dev20241122+rocm6.2
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.2.41133-dd7f95766
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 18.0.0git (https://github.com/RadeonOpenCompute/llvm-project roc-6.3.1 24491 1e0fda770a2079fbd71e4b70974d74f62fd3af10)
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI300X (gfx942:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.2.41133
MIOpen runtime version: 3.2.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.21.2
[pip3] optree==0.11.0
[pip3] pytorch-triton-rocm==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241122+rocm6.2
[pip3] torchaudio==2.5.0.dev20241206+rocm6.2
[pip3] torchvision==0.20.0.dev20241206+rocm6.2
[pip3] triton==3.0.0
[conda] No relevant packages
``` | true |
2,789,294,857 | [Intel CPU] Fix issue #143483. | RanTao123 | open | [
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 4 | CONTRIBUTOR | Fix issue in https://github.com/pytorch/pytorch/issues/143483.
`mode` should be in the enum class `EmbeddingBagMode`.
2,789,213,710 | Accessing secrets variables in CI | swgu98 | closed | [
"module: ci",
"triaged",
"security"
] | 4 | NONE | I've recently been learning how to use GitHub Actions. I set up secrets variables in my repository and created a workflow triggered by pull_request, then had an external developer submit a pull request. At this point the repository has both the secrets variables and the workflow. When the external developer's pull request triggers the workflow, the secrets variables cannot be accessed. Why?
I've also observed that pytorch's pull.yaml workflow is triggered by pull_request yet can access the HUGGING_FACE_HUB_TOKEN variable, which should be a secrets variable pre-set in the repository. In my repository, only workflows triggered by pull_request_target can access the secrets variables.
cc @seemethere @malfet @pytorch/pytorch-dev-infra | true |
2,789,159,372 | [torch.export] Error When Trying To Express Dynamism For Transformer Model of SD3 | anzr299 | closed | [
"oncall: pt2",
"oncall: export"
] | 8 | NONE | ### 🐛 Describe the bug
**Brief Description:**
I am trying to export the transformer model of Stable Diffusion 3 using `torch.export.export_for_training`. The error occurs when trying to express dynamism for the feature map height and width. The reproducer and traceback are given below.
**Reproducible Code:**
```
import torch
import numpy as np  # needed for np.array below
from diffusers import StableDiffusion3Pipeline
pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", text_encoder_3=None, tokenizer_3=None)
unet_kwargs = {}
unet_kwargs["hidden_states"] = torch.ones((2, 16, 64, 64))
unet_kwargs["timestep"] = torch.from_numpy(np.array([1, 2], dtype=np.float32))
unet_kwargs["encoder_hidden_states"] = torch.ones((2, 154, 4096))
unet_kwargs["pooled_projections"] = torch.ones((2, 2048))
#Feature map height and width are dynamic
fm_height = torch.export.Dim('fm_height', min=16)
fm_width = torch.export.Dim('fm_width', min=16)
#iterate through the unet kwargs and set only hidden state kwarg to dynamic
dynamic_shapes = {key: (None if key != "hidden_states" else {2: fm_height, 3: fm_width}) for key in unet_kwargs.keys()}
transformer = torch.export.export_for_training(pipe.transformer.eval(), args=(), kwargs=(unet_kwargs), dynamic_shapes=dynamic_shapes).module()
```
**Error Traceback:**
```
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] Error while creating guard:
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] Name: ''
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] Source: shape_env
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] Create Function: SHAPE_ENV
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] Guard Types: None
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] Code List: None
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] Object Weakref: None
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] Guarded Class Weakref: None
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] Traceback (most recent call last):
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] File "/home/user/Downloads/ov_notebooks_sd3/openvino_notebooks/.venv/lib/python3.10/site-packages/torch/_guards.py", line 281, in create
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] return self.create_fn(builder, self)
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] File "/home/user/Downloads/ov_notebooks_sd3/openvino_notebooks/.venv/lib/python3.10/site-packages/torch/_dynamo/guards.py", line 1836, in SHAPE_ENV
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] guards = output_graph.shape_env.produce_guards(
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] File "/home/user/Downloads/ov_notebooks_sd3/openvino_notebooks/.venv/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 4178, in produce_guards
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] raise ConstraintViolationError(
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (fm_height, fm_width)! For more information, run with TORCH_LOGS="+dynamic".
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] - Not all values of fm_height = L['hidden_states'].size()[2] in the specified range 16 <= fm_height <= 9223372036854775806 satisfy the generated guard Ne(((-(L['hidden_states'].size()[2]//2))//2) + 96, 0).
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] - Not all values of fm_height = L['hidden_states'].size()[2] in the specified range 16 <= fm_height <= 9223372036854775806 satisfy the generated guard (L['hidden_states'].size()[2]//2) + ((-(L['hidden_states'].size()[2]//2))//2) + 96 <= 192.
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] - Not all values of fm_width = L['hidden_states'].size()[3] in the specified range 16 <= fm_width <= 9223372036854775806 satisfy the generated guard Ne(((-(L['hidden_states'].size()[3]//2))//2) + 96, 0).
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] - Not all values of fm_width = L['hidden_states'].size()[3] in the specified range 16 <= fm_width <= 9223372036854775806 satisfy the generated guard (L['hidden_states'].size()[3]//2) + ((-(L['hidden_states'].size()[3]//2))//2) + 96 <= 192.
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] - Not all values of fm_width = L['hidden_states'].size()[3] in the specified range 16 <= fm_width <= 9223372036854775806 satisfy the generated guard Ne(294912, 1536*((L['hidden_states'].size()[3]//2))).
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] - Not all values of fm_height = L['hidden_states'].size()[2] in the specified range 16 <= fm_height <= 9223372036854775806 satisfy the generated guard Ne(Mod((L['hidden_states'].size()[2]//2), 2*((L['hidden_states'].size()[2]//2))), 0).
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] - Not all values of fm_width = L['hidden_states'].size()[3] in the specified range 16 <= fm_width <= 9223372036854775806 satisfy the generated guard Ne(Mod((L['hidden_states'].size()[3]//2), 2*((L['hidden_states'].size()[3]//2))), 0).
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] - Not all values of fm_height = L['hidden_states'].size()[2] in the specified range 16 <= fm_height <= 9223372036854775806 satisfy the generated guard 16 <= L['hidden_states'].size()[2] and L['hidden_states'].size()[2] <= 385
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] - Not all values of fm_width = L['hidden_states'].size()[3] in the specified range 16 <= fm_width <= 9223372036854775806 satisfy the generated guard 16 <= L['hidden_states'].size()[3] and L['hidden_states'].size()[3] <= 385
E0115 12:29:57.485000 144841 torch/_guards.py:285] [8/0] Created at:
E0115 12:29:57.485000 144841 torch/_guards.py:285] [8/0] File "/home/user/Downloads/ov_notebooks_sd3/openvino_notebooks/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 615, in transform
E0115 12:29:57.485000 144841 torch/_guards.py:285] [8/0] tracer = InstructionTranslator(
E0115 12:29:57.485000 144841 torch/_guards.py:285] [8/0] File "/home/user/Downloads/ov_notebooks_sd3/openvino_notebooks/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2670, in __init__
E0115 12:29:57.485000 144841 torch/_guards.py:285] [8/0] output=OutputGraph(
E0115 12:29:57.485000 144841 torch/_guards.py:285] [8/0] File "/home/user/Downloads/ov_notebooks_sd3/openvino_notebooks/.venv/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 317, in __init__
E0115 12:29:57.485000 144841 torch/_guards.py:285] [8/0] self.init_ambient_guards()
E0115 12:29:57.485000 144841 torch/_guards.py:285] [8/0] File "/home/user/Downloads/ov_notebooks_sd3/openvino_notebooks/.venv/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 463, in init_ambient_guards
E0115 12:29:57.485000 144841 torch/_guards.py:285] [8/0] self.guards.add(ShapeEnvSource().make_guard(GuardBuilder.SHAPE_ENV))
torch/export/__init__.py:154, in export_for_training(mod, args, kwargs, dynamic_shapes, strict, preserve_module_call_signature)
148 if isinstance(mod, torch.jit.ScriptModule):
149 raise ValueError(
150 "Exporting a ScriptModule is not supported. "
151 "Maybe try converting your ScriptModule to an ExportedProgram "
152 "using `TS2EPConverter(mod, args, kwargs).convert()` instead."
153 )
--> 154 return _export_for_training(
155 mod,
156 args,
157 kwargs,
158 dynamic_shapes,
159 strict=strict,
160 preserve_module_call_signature=preserve_module_call_signature,
161 )
torch/export/_trace.py:1017, in _log_export_wrapper.<locals>.wrapper(*args, **kwargs)
1010 else:
1011 log_export_usage(
1012 event="export.error.unclassified",
1013 type=error_type,
1014 message=str(e),
1015 flags=_EXPORT_FLAGS,
1016 )
-> 1017 raise e
1018 finally:
1019 _EXPORT_FLAGS = None
torch/export/_trace.py:990, in _log_export_wrapper.<locals>.wrapper(*args, **kwargs)
988 try:
989 start = time.time()
--> 990 ep = fn(*args, **kwargs)
991 end = time.time()
992 log_export_usage(
993 event="export.time",
994 metrics=end - start,
995 flags=_EXPORT_FLAGS,
996 **get_ep_stats(ep),
997 )
torch/export/exported_program.py:114, in _disable_prexisiting_fake_mode.<locals>.wrapper(*args, **kwargs)
111 @functools.wraps(fn)
112 def wrapper(*args, **kwargs):
113 with unset_fake_temporarily():
--> 114 return fn(*args, **kwargs)
torch/export/_trace.py:1746, in _export_for_training(mod, args, kwargs, dynamic_shapes, strict, preserve_module_call_signature)
1727 (
1728 args,
1729 kwargs,
(...)
1732 dynamic_shapes,
1733 ) = _process_export_inputs(mod, args, kwargs, dynamic_shapes)
1735 export_func = (
1736 functools.partial(
1737 _strict_export_lower_to_aten_ir,
(...)
1744 )
1745 )
-> 1746 export_artifact = export_func( # type: ignore[operator]
1747 mod=mod,
1748 args=args,
1749 kwargs=kwargs,
1750 dynamic_shapes=dynamic_shapes,
1751 preserve_module_call_signature=preserve_module_call_signature,
1752 pre_dispatch=False,
1753 original_state_dict=original_state_dict,
1754 orig_in_spec=orig_in_spec,
1755 allow_complex_guards_as_runtime_asserts=False,
1756 _is_torch_jit_trace=False,
1757 )
1759 export_graph_signature = export_artifact.aten.sig
1761 forward_arg_names = _get_forward_arg_names(mod, args, kwargs)
torch/export/_trace.py:1252, in _strict_export_lower_to_aten_ir(mod, args, kwargs, dynamic_shapes, preserve_module_call_signature, pre_dispatch, original_state_dict, orig_in_spec, allow_complex_guards_as_runtime_asserts, _is_torch_jit_trace, lower_to_aten_callback)
1239 def _strict_export_lower_to_aten_ir(
1240 mod: torch.nn.Module,
1241 args: Tuple[Any, ...],
(...)
1250 lower_to_aten_callback: Callable,
1251 ) -> ExportArtifact:
-> 1252 gm_torch_level = _export_to_torch_ir(
1253 mod,
1254 args,
1255 kwargs,
1256 dynamic_shapes,
1257 preserve_module_call_signature=preserve_module_call_signature,
1258 restore_fqn=False, # don't need to restore because we will do it later
1259 allow_complex_guards_as_runtime_asserts=allow_complex_guards_as_runtime_asserts,
1260 _log_export_usage=False,
1261 )
1263 # We detect the fake_mode by looking at gm_torch_level's placeholders, this is the fake_mode created in dynamo.
1264 (
1265 fake_args,
1266 fake_kwargs,
1267 dynamo_fake_mode,
1268 ) = _extract_fake_inputs(gm_torch_level, args, kwargs)
torch/export/_trace.py:560, in _export_to_torch_ir(f, args, kwargs, dynamic_shapes, preserve_module_call_signature, disable_constraint_solver, allow_complex_guards_as_runtime_asserts, restore_fqn, _log_export_usage, same_signature)
556 module_call_specs: Dict[str, Dict[str, pytree.TreeSpec]] = {}
557 with _wrap_submodules(
558 f, preserve_module_call_signature, module_call_specs
559 ), _ignore_backend_decomps():
--> 560 gm_torch_level, _ = torch._dynamo.export(
561 f,
562 dynamic_shapes=transformed_dynamic_shapes, # type: ignore[arg-type]
563 tracing_mode="symbolic",
564 disable_constraint_solver=disable_constraint_solver,
565 # currently the following 2 flags are tied together for export purposes,
566 # but untangle for sake of dynamo export api
567 prefer_deferred_runtime_asserts_over_guards=True,
568 allow_complex_guards_as_runtime_asserts=allow_complex_guards_as_runtime_asserts,
569 _log_export_usage=_log_export_usage,
570 same_signature=same_signature,
571 )(
572 *args,
573 **kwargs,
574 )
575 except (ConstraintViolationError, ValueRangeError) as e:
576 raise UserError(UserErrorType.CONSTRAINT_VIOLATION, str(e)) # noqa: B904
torch/_dynamo/eval_frame.py:1448, in export.<locals>.inner(*args, **kwargs)
1446 dim_constraints.solve()
1447 forced_specializations = dim_constraints.forced_specializations()
-> 1448 msg = dim_constraints.prettify_results(
1449 original_signature,
1450 dynamic_shapes,
1451 constraint_violation_error,
1452 forced_specializations,
1453 )
1454 if constraint_violation_error:
1455 constraint_violation_error.args = (
1456 constraint_violation_error.args[0] + msg,
1457 )
torch/fx/experimental/symbolic_shapes.py:2248, in DimConstraints.prettify_results(self, original_signature, dynamic_shapes, constraint_violation_error, forced_specializations)
2245 for s, val in forced_specializations.items():
2246 buf += f" - solving the guards generated for {s} resulted in a specialized value of {val}.\n"
-> 2248 self._process_derived_dim_roots(results, name_to_dim)
2250 dims = []
2251 others = []
torch/fx/experimental/symbolic_shapes.py:2064, in DimConstraints._process_derived_dim_roots(self, results, name_to_dim)
2062 # create result & dim
2063 results[str(root)] = {"min": min_, "max": max_}
-> 2064 name_to_dim[str(root)] = Dim(str(root), min=min_, max=max_)
2065 # remove old root min/max bounds
2066 c.pop("min", None)
torch/export/dynamic_shapes.py:227, in Dim(name, min, max)
225 _min = 0 if min is None else min
226 _max = int_oo if max is None else max
--> 227 assert _max > _min, f"Cannot create Dim with inconsistent min={min}, max={max}"
228 assert name.isidentifier(), f"Dim name must be a valid identifier, got {name}"
229 dim = _Dim(name, (int,), {"min": _min, "max": _max})
AssertionError: Cannot create Dim with inconsistent min=-4, max=-96
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.26.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.28.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 36
On-line CPU(s) list: 0-35
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 1
Stepping: 7
CPU max MHz: 4800.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts vnmi avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 576 KiB (18 instances)
L1i cache: 576 KiB (18 instances)
L2 cache: 18 MiB (18 instances)
L3 cache: 24.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-35
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy[/](https://file+.vscode-resource.vscode-cdn.net/)swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] mypy==1.12.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1+cpu
[pip3] triton==3.1.0
[conda] Could not collect
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,789,119,643 | torch.distributed hangs between Linux (X86) and Mac (M2 Pro) | stevef1uk | open | [
"oncall: distributed"
] | 6 | NONE | ### 🐛 Describe the bug
I have pared the example code back to the simplest it can be and tried this on both machines. Both ends hang until the timeout.
Linux code:
```
import os
import torch.distributed as dist
from datetime import timedelta
def init_process(rank, world_size):
os.environ["MASTER_ADDR"] = "192.168.10.104"
os.environ["MASTER_PORT"] = "23456"
os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"
os.environ["NCCL_DEBUG"] = "INFO"
os.environ["GLOO_SOCKET_IFNAME"] = "enp3s0" # Specify correct network interface
print(f"Rank {rank}: Setting up process group...")
try:
dist.init_process_group(
backend="gloo", # or "nccl" if using GPUs
init_method="tcp://192.168.10.104:23456",
rank=rank,
world_size=world_size,
timeout=timedelta(seconds=120), # Adjust timeout
)
print(f"Rank {rank}: Process group initialized")
except Exception as e:
print(f"Rank {rank}: Error during initialization - {e}")
print(f"Rank {rank}: Reached end of init_process")
if __name__ == "__main__":
rank = int(os.environ.get("RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 2))
init_process(rank, world_size)
```
On the Mac:
```
import os
import torch.distributed as dist
from datetime import timedelta
os.environ["MASTER_ADDR"] = "192.168.10.104" # Linux IPv4 address
os.environ["MASTER_PORT"] = "23456"
os.environ["GLOO_SOCKET_IFNAME"] = "en0" # Specify correct network interface
print("Rank 1: Setting up process group...")
dist.init_process_group(
backend="gloo",
init_method="tcp://192.168.10.104:23456", # Replace with Linux IPv4
rank=1,
world_size=2,
timeout=timedelta(seconds=60),
)
print("Rank 1: Process group initialized")
```
I have checked network connectivity between the machines and there are no firewall issues on either side.
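One additional diagnostic (my suggestion, not part of the original scripts): initialise a world_size=1 gloo group over loopback on each machine to confirm that the local gloo/TCPStore stack works before debugging the cross-machine path:

```python
from datetime import timedelta
import torch.distributed as dist

# Hedged sanity check: a single-rank gloo group over loopback; the port
# is arbitrary. If this hangs too, the problem is local, not the network.
dist.init_process_group(
    backend="gloo",
    init_method="tcp://127.0.0.1:29512",
    rank=0,
    world_size=1,
    timeout=timedelta(seconds=30),
)
assert dist.is_initialized()
dist.destroy_process_group()
```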
I have used-up all my Claude.ai & ChatGPT credits investigating ways around and finally decided to raise this as an issue.
Hopefully someone real can help :-)
### Versions
Linux PyTorch version:
```
python -c "import torch; print(torch.__version__)"
2.5.1+cpu
```
Mac PyTorch version:
```
python -c "import torch; print(torch.__version__)"
2.5.1
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,789,115,357 | Batching rule for aten::_thnn_fused_gru_cell | jspieler | open | [
"triaged",
"module: functorch"
] | 0 | NONE | ### 🚀 The feature, motivation and pitch
I am currently using `vmap` with GRUCell and got the following message:
> There is a performance drop because we have not yet implemented the batching rule for aten::_thnn_fused_gru_cell. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at ../aten/src/ATen/functorch/BatchedFallback.cpp:81.)
According to the [vmap operator support list](https://docs.google.com/spreadsheets/d/1Sp4HUjxwMifS5oDQg0yvjqk7hKOpCfKO4jWH4MTGP-k/edit#gid=0), the batching rule is indeed not implemented yet. Are there any plans to work on this in the near future, or are there any suggestions for a workaround? It would be awesome to have such a batching rule!
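A minimal repro sketch (hypothetical sizes); on backends that dispatch to the fused kernel (e.g. CUDA), this emits the performance warning and falls back to a loop, while still producing the expected result:

```python
import torch

# Hedged repro sketch: vmap over a GRUCell with an extra leading dim.
# On CUDA this hits aten::_thnn_fused_gru_cell, which has no batching
# rule yet, so functorch falls back to a for-loop.
cell = torch.nn.GRUCell(input_size=4, hidden_size=8)
x = torch.randn(3, 4)  # leading dim of 3 to vmap over
h = torch.randn(3, 8)
out = torch.vmap(cell)(x, h)
assert out.shape == (3, 8)
```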
### Alternatives
_No response_
### Additional context
_No response_
cc @zou3519 @Chillee @samdow @kshitij12345 | true |
2,789,067,810 | [Accelerator] Use uniform `GetAllocator` for devices in `new_qtensor` function | Stonepia | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"ciflow/mps",
"ciflow/xpu",
"module: xpu",
"module: accelerator"
] | 14 | CONTRIBUTOR | Fixes #144848
This PR is intended to use a uniform `GetAllocator()` call across all accelerators in the `new_qtensor` function.
cc @gujinghui @EikanWang @fengyuan14 @guangyey @albanD @ZhiweiYan-96 | true |
2,789,060,717 | [XPU] unrecognized device for new_qtensor: xpu:0 | Stonepia | closed | [
"triaged",
"module: xpu"
] | 1 | CONTRIBUTOR | ### 🐛 Describe the bug
When running tests with qtensor, the following error occurs:
```
RuntimeError: 0 INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/quantized/Quantizer.cpp":125, please report a bug to PyTorch. unrecognized device for new_qtensor: xpu:0
```
This is because the XPU is not registered in qtensor.
https://github.com/pytorch/pytorch/blob/d9d7cca009ba9a79f3662a3de057a081163b95f6/aten/src/ATen/quantized/Quantizer.cpp#L116-L126
A PR is needed to add xpu support to it.
### Versions
PyTorch 2.6 release.
cc @gujinghui @EikanWang @fengyuan14 @guangyey | true |
2,789,046,763 | torch.compile() In my use case of calling torch.compile(), I have found that the model's data outputs are inconsistent. I suspect that using Triton for operator fusion may have introduced precision deviations. I am unsure how to locate and fix this issue. | liangshaopeng | open | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 1 | NONE | ### 🐛 Describe the bug
My Torch environment is as follows:
2.2.2+cu121
My goal is to use functions related to torch.compile() to optimize the inference time of our model. In fact, it does work and achieves over a 50% reduction in inference time in the default mode.
The model code is as follows:
"""
copy from https://github.com/alimama-tech/NeurIPS_Auto_Bidding_AIGB_Track_Baseline/blob/main/bidding_train_env/baseline/dd/DFUSER.py
"""
from torch.optim import Adam
import os
from typing import Optional, Tuple, List
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import gin
from .temporal import TemporalUnet
from .basic import (
cosine_beta_schedule,
Losses,
extract,
apply_conditioning,
apply_conditioning_with_fix,
)
class ReduceSum(nn.Module):
def forward(self, x):
return torch.sum(x, dim=-1)
@gin.configurable
class GaussianInvDynDiffusion(nn.Module):
def __init__(self, model, horizon, observation_dim, action_dim, n_timesteps=1000,
clip_denoised=False, predict_epsilon=True, hidden_dim=256,
loss_discount=1.0, returns_condition=False,
condition_guidance_w=0.1,
inv_bias=True,
):
super().__init__()
self.horizon = horizon
self.observation_dim = observation_dim
self.action_dim = action_dim
self.transition_dim = observation_dim + action_dim
self.model = model
self.inv_model = nn.Sequential(
nn.Linear(4 * self.observation_dim, hidden_dim, bias=inv_bias),
nn.ReLU(),
nn.Linear(hidden_dim, hidden_dim, bias=inv_bias),
nn.ReLU(),
nn.Linear(hidden_dim, hidden_dim, bias=inv_bias),
nn.ReLU(),
# ReduceSum(),
nn.Linear(hidden_dim, self.action_dim, bias=inv_bias),
)
self.returns_condition = returns_condition
self.condition_guidance_w = condition_guidance_w
betas = cosine_beta_schedule(n_timesteps)
alphas = 1. - betas
alphas_cumprod = torch.cumprod(alphas, axis=0)
alphas_cumprod_prev = torch.cat([torch.ones(1), alphas_cumprod[:-1]])
self.n_timesteps = int(n_timesteps)
self.clip_denoised = clip_denoised
self.predict_epsilon = predict_epsilon
self.register_buffer('betas', betas)
self.register_buffer('alphas_cumprod', alphas_cumprod)
self.register_buffer('alphas_cumprod_prev', alphas_cumprod_prev)
# calculations for diffusion q(x_t | x_{t-1}) and others
self.register_buffer('sqrt_alphas_cumprod', torch.sqrt(alphas_cumprod))
self.register_buffer('sqrt_one_minus_alphas_cumprod', torch.sqrt(1. - alphas_cumprod))
self.register_buffer('log_one_minus_alphas_cumprod', torch.log(1. - alphas_cumprod))
self.register_buffer('sqrt_recip_alphas_cumprod', torch.sqrt(1. / alphas_cumprod))
self.register_buffer('sqrt_recipm1_alphas_cumprod', torch.sqrt(1. / alphas_cumprod - 1))
# calculations for posterior q(x_{t-1} | x_t, x_0)
posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod)
self.register_buffer('posterior_variance', posterior_variance)
self.register_buffer('posterior_log_variance_clipped',
torch.log(torch.clamp(posterior_variance, min=1e-20)))
self.register_buffer('posterior_mean_coef1',
betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod))
self.register_buffer('posterior_mean_coef2',
(1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod))
loss_weights = self.get_loss_weights(loss_discount)
self.loss_fn = Losses['state_l2'](loss_weights)
def get_loss_weights(self, discount):
self.action_weight = 1
dim_weights = torch.ones(self.observation_dim, dtype=torch.float32)
discounts = discount ** torch.arange(self.horizon, dtype=torch.float)
discounts = discounts / discounts.mean()
loss_weights = torch.matmul(discounts[:, None], dim_weights[None, :])
if self.predict_epsilon:
loss_weights[0, :] = 0
return loss_weights
# ------------------------------------------ sampling ------------------------------------------#
def predict_start_from_noise(self, x_t, t, noise):
if self.predict_epsilon:
return (
extract(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -
extract(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise
)
else:
return noise
def q_posterior(self, x_start, x_t, t):
posterior_mean = (
extract(self.posterior_mean_coef1, t, x_t.shape) * x_start +
extract(self.posterior_mean_coef2, t, x_t.shape) * x_t
)
posterior_variance = extract(self.posterior_variance, t, x_t.shape)
posterior_log_variance_clipped = extract(self.posterior_log_variance_clipped, t, x_t.shape)
return posterior_mean, posterior_variance, posterior_log_variance_clipped
def p_mean_variance(self, x, cond, t, returns: torch.Tensor = torch.ones(1, 1)):
if self.returns_condition:
# epsilon could be epsilon or x0 itself
epsilon_cond = self.model(x, cond, t, returns, use_dropout=False)
epsilon_uncond = self.model(x, cond, t, returns, force_dropout=True)
epsilon = epsilon_uncond + self.condition_guidance_w * (epsilon_cond - epsilon_uncond)
else:
epsilon = self.model(x, cond, t)
t = t.detach().to(torch.int64)
x_recon = self.predict_start_from_noise(x, t=t, noise=epsilon)
if self.clip_denoised:
x_recon.clamp_(-5., 5.)
model_mean, posterior_variance, posterior_log_variance = self.q_posterior(
x_start=x_recon, x_t=x, t=t)
return model_mean, posterior_variance, posterior_log_variance
def p_sample(self, x, cond, t, returns: torch.Tensor = torch.ones(1, 1)):
with torch.no_grad():
b, _, _ = x.shape
model_mean, _, model_log_variance = self.p_mean_variance(x=x, cond=cond, t=t, returns=returns)
noise = 0.5 * torch.randn_like(x, device=x.device)
nonzero_mask = (1 - (t == 0).float()).reshape(b, 1, 1)
return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
def p_sample_loop(self, shape, cond, returns: torch.Tensor = torch.ones(1, 1), t: int = 0, fix_dim: Optional[int] = None, save_denoise: bool = False):
with torch.no_grad():
torch.random.manual_seed(2046)
batch_size = shape[0]
x = 0.5 * torch.randn(shape[0], shape[1], shape[2], device=cond.device)
output1, output2 = [], []
if fix_dim is None:
x = apply_conditioning(x, cond, 0)
else:
x = apply_conditioning_with_fix(x, cond, 0, t, fix_dim)
for i in range(self.n_timesteps - 1, -1, -1):
timesteps = torch.ones(batch_size,
device=cond.device) * i
x = self.p_sample(x, cond, timesteps, returns)
#output1.append(x.clone().detach().cpu().numpy().tolist())
output1.append(x.clone().detach())
if fix_dim is None:
x = apply_conditioning(x, cond, 0)
else:
x = apply_conditioning_with_fix(x, cond, 0, t, fix_dim)
#output2.append(x.clone().detach().cpu().numpy().tolist())
output2.append(x.clone().detach())
#if save_denoise:
# return x, output1, output2
return x
# @torch.no_grad()
def conditional_sample(self, cond, returns: torch.Tensor = torch.ones(1, 1), horizon: int = 48, t: int = 0, fix_dim: Optional[int] = None, save_denoise: bool = False):
with torch.no_grad():
batch_size = 1
horizon = self.horizon
shape = torch.tensor([batch_size, horizon, self.observation_dim])
return self.p_sample_loop(shape, cond, returns, t, fix_dim, save_denoise)
def forward(self, cond, returns, t: int = 0, fix_dim: Optional[int] = None, save_denoise: bool = False):
return self.conditional_sample(cond=cond, returns=returns, t=t, fix_dim=fix_dim, save_denoise=save_denoise)
# ------------------------------------------ training ------------------------------------------#
def q_sample(self, x_start, t, noise=None):
if noise is None:
noise = torch.randn_like(x_start, device=x_start.device)
self.sqrt_alphas_cumprod = self.sqrt_alphas_cumprod.to(t.device)
self.sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod.to(t.device)
sample = (
extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
extract(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise
)
return sample
def p_losses(self, x_start, cond, t, returns=None, masks=None):
noise = torch.randn_like(x_start, device=x_start.device)
x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
t = t.to(x_noisy.device)
x_recon = self.model(x_noisy, cond, t, returns)
if self.predict_epsilon:
loss, info = self.loss_fn(x_recon, noise, masks)
else:
loss, info = self.loss_fn(x_recon, x_start, masks)
return loss, info
def loss(self, x, cond, returns, masks, action_mask=None):
"""
x with shape: (batch_size, step_len, H)
"""
batch_size = len(x)
t = torch.randint(0, self.n_timesteps, (batch_size,), device=x.device).long()
diffuse_loss, info = self.p_losses(x[:, :, self.action_dim:], cond, t, returns, masks)
diffuse_loss_batch = torch.reshape(info['loss'].mean(dim=(1,2)), (-1, 1))
_t = torch.reshape(t, (-1, 1))
loss_batch_t = torch.concat([_t, diffuse_loss_batch], dim=-1)
inv_loss, pred_a_t, mape = self.inv_loss(x, action_mask)
loss = (1 / 2) * (diffuse_loss + inv_loss)
# diffusion t loss bin size
return loss, info, (diffuse_loss, inv_loss), pred_a_t, mape, loss_batch_t
def inv_loss(self, x, masks):
# Calculating inv loss
x_t = x[:, :-1, self.action_dim:]
a_t = x[:, :-1, :self.action_dim]
x_t_1 = x[:, 1:, self.action_dim:]
# x_t_1[:, :, 1] = 0
x_t_2 = torch.cat(
[torch.zeros(x.shape[0], 1, x.shape[-1] - self.action_dim, device=x.device), x[:, :-2, self.action_dim:]],
dim=1)
x_t_3 = torch.cat(
[torch.zeros(x.shape[0], 2, x.shape[-1] - self.action_dim, device=x.device), x[:, :-3, self.action_dim:]],
dim=1)
x_comb_t = torch.cat([x_t_2, x_t_3, x_t, x_t_1], dim=-1)
x_comb_t = x_comb_t.reshape(-1, 4 * self.observation_dim)
masks_flat = masks[:, :-1].reshape(-1)
x_comb_t = x_comb_t[masks_flat]
a_t = a_t.reshape(-1, self.action_dim)
a_t = a_t[masks_flat]
pred_a_t = self.inv_model(x_comb_t)
inv_loss = F.mse_loss(pred_a_t, a_t, reduction="mean")
mape = ((a_t - pred_a_t).abs()) / (a_t.abs() + 1e-8)
mape = mape.mean()
return inv_loss, pred_a_t, mape
@gin.configurable
class DFUSER(nn.Module):
def __init__(self, dim_obs=16, dim_actions=1, dim_return=1, gamma=1, tau=0.01,
ACTION_MAX=10, ACTION_MIN=0,
step_len=48, n_timesteps=10,
condition_guidance_w=1.2,
clip_denoised=True,
inv_bias=True
):
super().__init__()
self.n_timestamps = n_timesteps
self.num_of_states = dim_obs
self.num_of_actions = dim_actions
self.ACTION_MAX = ACTION_MAX
self.ACTION_MIN = ACTION_MIN
self.step_len = step_len
model = TemporalUnet(
horizon=step_len,
transition_dim=dim_obs,
cond_dim=dim_actions,
return_dim=dim_return,
returns_condition=True,
dim=128,
condition_dropout=0.25,
calc_energy=False
)
self.diffuser = GaussianInvDynDiffusion(
model=model,
horizon=step_len,
observation_dim=dim_obs,
action_dim=dim_actions,
clip_denoised=clip_denoised,
predict_epsilon=True,
hidden_dim=256,
n_timesteps=n_timesteps,
loss_discount=1,
returns_condition=True,
condition_guidance_w=condition_guidance_w,
inv_bias=inv_bias,
)
self.step = 0
self.num_of_episodes = 0
self.GAMMA = gamma
self.tau = tau
self.num_of_steps = 0
#def forward(self, states, actions, returns, masks, action_mask):
# x = torch.cat([actions, states], dim=-1)
# cond = torch.ones_like(states[:, 0], device=states.device)[:, None, :]
# loss, infos, (diffuse_loss, inv_loss), pred_a_t, mape, loss_batch_t = self.diffuser.loss(x, cond, returns=returns, masks=masks, action_mask=action_mask)
# return loss, (diffuse_loss, inv_loss), pred_a_t, mape, loss_batch_t
def forward(self, x, budget):
"""
x with shape (time_step, dim)
"""
return self.diffuser(cond=x, returns=budget)
def get_action_s_by_state(self, x: torch.Tensor, returns: torch.Tensor, cur_time:int):
x = torch.reshape(x, [self.step_len, self.num_of_states])
states = x[:cur_time]
conditions = states
x_0 = self.diffuser(cond=conditions, returns=returns)
states = x_0[0, :cur_time + 1]
states_next = states[None, -1]
if cur_time > 1:
states_curt1 = conditions[-2].float()[None, :]
else:
states_curt1 = torch.zeros_like(states_next, device=states_next.device)
if cur_time > 2:
states_curt2 = conditions[-3].float()[None, :]
else:
states_curt2 = torch.zeros_like(states_next, device=states_next.device)
states_comb = torch.hstack([states_curt1, states_curt2, conditions[-1].float()[None, :], states_next])
actions = self.diffuser.inv_model(states_comb)
actions = actions.detach().cpu()[0] # .cpu().data.numpy()
return actions, states_next, x_0
def save_net(self, save_path):
if not os.path.isdir(save_path):
os.makedirs(save_path)
torch.save(self.diffuser.state_dict(), f'{save_path}/diffuser.pt')
def save_model(self, save_path):
if not os.path.isdir(save_path):
os.makedirs(save_path)
model_temp = self.cpu()
jit_model = torch.jit.script(model_temp)
torch.jit.save(jit_model, f'{save_path}/diffuser.pth')
def load_net(self, load_path):
self.diffuser.load_state_dict(torch.load(load_path, map_location='cpu'))
self.use_cuda = torch.cuda.is_available()
if self.use_cuda:
self.diffuser.cuda()
def load_model(self, load_path):
# 加载 TorchScript 模型
jit_model = torch.jit.load(load_path, map_location='cpu')
# 将加载的模型分配给 self.diffuser
self = jit_model
# 检查是否有 CUDA 可用,并将模型移动到 GPU
self.use_cuda = torch.cuda.is_available()
if self.use_cuda:
self.cuda()
`
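For context, the `extract` and `cosine_beta_schedule` helpers imported from `.basic` are not shown in the issue; a common implementation found in typical diffusion codebases (an assumption, not the reporter's actual code) looks like:

```python
import math
import torch

def cosine_beta_schedule(timesteps, s=0.008):
    # Cosine noise schedule (assumed implementation, not from the reporter's `.basic` module).
    steps = timesteps + 1
    x = torch.linspace(0, timesteps, steps)
    alphas_cumprod = torch.cos(((x / timesteps) + s) / (1 + s) * math.pi * 0.5) ** 2
    alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
    betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
    return torch.clip(betas, 0, 0.999)

def extract(a, t, x_shape):
    # Gather per-timestep coefficients a[t] and reshape so they broadcast against x.
    b = t.shape[0]
    out = a.gather(-1, t)
    return out.reshape(b, *((1,) * (len(x_shape) - 1)))
```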
The inference code is as follows:
```python
# -*- coding: utf-8 -*-
import sys
import os
import time
sys.path.append("..")
from dataclasses import dataclass, field
from typing import Dict, Optional
import numpy as np
import torch
import gin
from models.DFUSER import DFUSER


class AigbInference():
    def __init__(self, model_dir, warmup=True):
        self.model = DFUSER(
            dim_obs=3,
            dim_actions=1,
            step_len=40,
            n_timesteps=20,
            gamma=1,
            tau=0.01,
            condition_guidance_w=1.3,
            clip_denoised=True,
            inv_bias=True,
            dim_return=1
        )
        device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
        print(f'Device in use: {device}')
        self.model.to(device)
        self.model.load_net("/home/admin/workspace/aop_lab/app_data/ckp01-pt-2025-1-3/diffuser.pt")
        self.device = device
        print(f'list_mode_options: {torch._inductor.list_mode_options()}')
        self.model = torch.compile(self.model, backend="inductor")
        # self.model = torch.compile(self.model, backend="inductor", mode="reduce-overhead")
        # self.model = torch.compile(self.model, backend="inductor", mode="max-autotune")
        # self.model = torch.compile(self.model, backend="inductor", mode="max-autotune-no-cudagraphs")
        # self.model.load_model("/home/admin/workspace/aop_lab/app_data/ckp01-pth/diffuser.pth")
        if warmup == True:
            for i in range(self.model.step_len):
                arg1_shape = (i, 3)
                arg2_shape = (1, 1)
                x = torch.ones(arg1_shape)
                budget = torch.ones(arg2_shape)
                x = x.to(self.device)
                budget = budget.to(self.device)
                traj_pred = self.model(x, budget)
            print(f'warmup done')

    def infer(self, x, budget):
        start_time = time.perf_counter()
        x = x.to(self.device)
        budget = budget.to(self.device)
        traj_pred = self.model(x, budget)
        # traj_pred = self.model(x, budget)
        print(
            # f"traj_pred.shape: {traj_pred.shape} \n "
            # f"traj_pred: {traj_pred} \n "
        )
        # self.model.save_model("/home/admin/workspace/aop_lab/app_data/ckp01-pth/checkpoint-pth-2")
        end_time = time.perf_counter()
        elapsed_time_ms = (end_time - start_time) * 1000
        print(f"my_function execution time: {elapsed_time_ms:.3f} ms")
        return traj_pred
```
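One likely contributor to the mismatch described below (an educated guess, not confirmed by the reporter): the model draws random noise with `torch.randn`/`torch.randn_like` inside the compiled region, and Inductor by default lowers these to its own RNG, which produces different samples than eager mode even with the same `manual_seed`. A minimal mitigation for debugging:

```python
import torch
import torch._inductor.config as inductor_config

# Make compiled graphs fall back to eager ATen RNG ops so that
# torch.randn / randn_like produce the same random stream as the
# uncompiled model. This trades some fusion for reproducible sampling.
inductor_config.fallback_random = True
```

With deterministic noise in place, any remaining difference should shrink to ordinary floating-point reordering error from kernel fusion.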
Subsequently, I profiled GPU utilization with Nsight and confirmed that several fragmented kernels had been fused into Triton kernels.
Before optimization: (Nsight timeline screenshot omitted)
After optimization: (Nsight timeline screenshot omitted)
However, I soon discovered that running the model on the same input before and after compilation produces different outputs. Currently, I do not know how to resolve this issue.

### Versions
Collecting environment information...
PyTorch version: 2.2.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Alibaba Group Enterprise Linux Server 7.2 (Paladin) (x86_64)
GCC version: (GCC) 10.2.1 20200825 (Alibaba 10.2.1-3 2.17)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.32
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-4.19.91-009.ali4000.alios7.x86_64-x86_64-with-glibc2.32
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A10
Nvidia driver version: 535.161.08
cuDNN version: Probably one of the following:
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn.so.8.9.7
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.7
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.7
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.7
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.7
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.7
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 34-37,98-101
Off-line CPU(s) list: 0-33,38-97,102-127
Thread(s) per core: 0
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz
Stepping: 6
CPU MHz: 3476.128
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5806.48
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 49152K
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.19.3
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.2.2+cu121
[pip3] torchaudio==2.2.2+cu121
[pip3] torchvision==0.17.2+cu121
[pip3] triton==2.2.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.19.3 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 2.2.2+cu121 pypi_0 pypi
[conda] torchaudio 2.2.2+cu121 pypi_0 pypi
[conda] torchvision 0.17.2+cu121 pypi_0 pypi
[conda] triton 2.2.0 pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | true |
2,789,043,326 | [inductor] [dynamo]index_reduce_ raised AssertionError in assert_functional_graph | zhejiangxiaomai | open | [
"triaged",
"module: functionalization",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 5 | NONE | ### 🐛 Describe the bug
`index_reduce_` raises an `AssertionError` in `assert_functional_graph` when its input is a view of another tensor.
Minimal reproducer:
```python
import torch


class OpWrapperModule(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, ifm, op_inputs_dict):
        result = ifm.index_reduce_(**op_inputs_dict)
        return result


torch.manual_seed(8450)
ifm_t = torch.randn([4, 34, 64])
ifm = ifm_t[slice(None, None, None), slice(2, None, None), slice(None, None, None)]
index_tensor = torch.randint(low=0, high=34, size=[64])
source_tensor = torch.randn([4, 32, 64])
params = {
    "index": index_tensor,
    "source": source_tensor,
    "dim": 2,
    "reduce": "mean",
    "include_self": False,
}
model = OpWrapperModule()
model_compiled = torch.compile(model, backend="inductor")
result = model_compiled(ifm, params)
```
ERROR log and trace:
```
Traceback (most recent call last):
File "/home/zhenzhao/qnpu/sw_214852/src/rep.py", line 27, in <module>
result = model_compiled(ifm, params)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1742, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1753, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 573, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1742, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1753, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1164, in __call__
result = self._inner_convert(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3048, in RETURN_VALUE
self._return(inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3033, in _return
self.output.compile_subgraph(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1101, in compile_subgraph
self.compile_and_call_fx_graph(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
return self._call_user_compiler(gm)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/usr/local/lib/python3.10/dist-packages/torch/__init__.py", line 2314, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/compile_fx.py", line 1863, in compile_fx
return aot_autograd(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 1155, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 153, in aot_dispatch_base
fw_module, updated_flat_args, maybe_subclass_meta = aot_dispatch_base_graph( # type: ignore[misc]
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 184, in aot_dispatch_base_graph
copy_count = assert_functional_graph(fw_module.graph)
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/functional_utils.py", line 461, in assert_functional_graph
n.args[0] in placeholders
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AssertionError: n=copy_, n.args[0]=permute, placeholders={arg2_1, arg0_1, arg1_1}, graph=graph():
%arg0_1 : [num_users=2] = placeholder[target=arg0_1]
%arg1_1 : [num_users=1] = placeholder[target=arg1_1]
%arg2_1 : [num_users=3] = placeholder[target=arg2_1]
%full : [num_users=1] = call_function[target=torch.ops.aten.full.default](args = ([4, 32, 64], 1), kwargs = {dtype: torch.float32, layout: torch.strided, device: cpu, pin_memory: False})
%scalar_tensor : [num_users=1] = call_function[target=torch.ops.aten.scalar_tensor.default](args = (0,), kwargs = {dtype: torch.float32, layout: torch.strided, device: cpu, pin_memory: False})
%expand : [num_users=1] = call_function[target=torch.ops.aten.expand.default](args = (%scalar_tensor, [4, 32, 64]), kwargs = {})
%index_put : [num_users=1] = call_function[target=torch.ops.aten.index_put.default](args = (%arg0_1, [None, None, %arg2_1], %expand), kwargs = {})
%empty : [num_users=1] = call_function[target=torch.ops.aten.empty.memory_format](args = ([4, 32, 64],), kwargs = {dtype: torch.float32, layout: torch.strided, device: cpu, pin_memory: False})
%permute : [num_users=1] = call_function[target=torch.ops.aten.permute.default](args = (%empty, [0, 1, 2]), kwargs = {})
%copy_ : [num_users=1] = call_function[target=torch.ops.aten.copy_.default](args = (%permute, %index_put), kwargs = {})
%full_1 : [num_users=1] = call_function[target=torch.ops.aten.full.default](args = ([4, 32, 64], 0), kwargs = {dtype: torch.float32, layout: torch.strided, device: cpu, pin_memory: False})
%index_put_1 : [num_users=2] = call_function[target=torch.ops.aten.index_put.default](args = (%full_1, [None, None, %arg2_1], %full, True), kwargs = {})
%lt : [num_users=1] = call_function[target=torch.ops.aten.lt.Scalar](args = (%index_put_1, 1), kwargs = {})
%scalar_tensor_1 : [num_users=1] = call_function[target=torch.ops.aten.scalar_tensor.default](args = (1.0,), kwargs = {dtype: torch.float32, layout: torch.strided, device: cpu})
%where : [num_users=1] = call_function[target=torch.ops.aten.where.self](args = (%lt, %scalar_tensor_1, %index_put_1), kwargs = {})
%index_put_2 : [num_users=1] = call_function[target=torch.ops.aten.index_put.default](args = (%copy_, [None, None, %arg2_1], %arg1_1, True), kwargs = {})
%div : [num_users=1] = call_function[target=torch.ops.aten.div.Tensor](args = (%index_put_2, %where), kwargs = {})
%copy__1 : [num_users=1] = call_function[target=torch.ops.aten.copy_.default](args = (%arg0_1, %div), kwargs = {})
return (copy__1,)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
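Until the functionalization issue is fixed, one possible workaround (my suggestion, not an official fix) is to avoid feeding a view of another tensor into the in-place `index_reduce_` in the compiled region, for example by cloning the slice first. In eager mode the op works fine on a clone:

```python
import torch

base = torch.randn(4, 34, 64)
view = base[:, 2:, :]            # the problematic graph-input view
work = view.clone()              # contiguous owner, no aliasing with `base`
index = torch.randint(low=0, high=34, size=(64,))
source = torch.randn(4, 32, 64)
# In-place reduce along dim 2; index length matches source.size(2).
out = work.index_reduce_(2, index, source, "mean", include_self=False)
```

The clone breaks the aliasing with `base`, so the mutation no longer targets a view of a graph input.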
### Versions
PyTorch version: 2.6.0a0+git30ac7fd
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.5 (ssh://git@github.com/habana-internal/tpc_llvm10 150d2d7c6a8ff8abf0d8ce194d3fac3986b078e6)
CMake version: version 3.28.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-127-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 5187.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 38.5 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
cc @bdhirsh @ezyang @chauhang @penguinwu @zou3519 @yf225 | true |
2,789,035,893 | NotImplementedError: Could not run 'aten::empty.memory_format' with arguments from the 'PrivateUse1' backend. This could be because the operator doesn't exist for this backend | xiangxinhello | open | [
"triaged",
"module: PrivateUse1"
] | 4 | NONE | ### 🐛 Describe the bug
```
import torch
a = torch.ones((3,3), device='privateuseone')
print(a)
```
```
a = torch.ones((3,3), device='privateuseone')
NotImplementedError: Could not run 'aten::empty.memory_format' with arguments from the 'PrivateUse1' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty.memory_format' is only available for these backends: [CPU, CUDA, Meta, QuantizedCPU, QuantizedCUDA, QuantizedMeta, MkldnnCPU, SparseCPU, SparseCUDA, SparseMeta, SparseCsrCPU, SparseCsrCUDA, SparseCsrMeta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
```
### Versions
PyTorch version: 2.5.0a0+gita8d6afb
Is debug build: True
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] optree==0.13.1
[pip3] torch==2.5.0a0+gita8d6afb
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl-include 2025.0.1 pypi_0 pypi
[conda] mkl-static 2025.0.1 pypi_0 pypi
[conda] numpy 2.2.1 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.5.0a0+gita8d6afb dev_0 <develop>
cc @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens | true |
2,789,026,790 | Fix flash attention seed/offset overflow when seed/offset larger than int64 | lixin-sxty | open | [
"triaged",
"open source",
"Stale",
"ciflow/trunk",
"topic: not user facing"
] | 16 | CONTRIBUTOR | The operator _scaled_dot_product_flash_attention saves the seed and offset for the backward gradient calculation when the dropout argument is greater than zero.
Torch uses uint64 to represent the seed and offset. [See here](https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/cuda/detail/PhiloxCudaStateRaw.cuh#L33). However, flash attention uses int64 to store these two values. [See here](https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/transformers/cuda/flash_attn/flash_api.cpp#L502-L503)
This causes an overflow when the seed or offset is larger than the int64 maximum (2^63 - 1).
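The wraparound itself can be sketched in plain Python, without a GPU: reinterpreting the uint64 bit pattern as a signed int64 (two's complement) yields exactly the corrupted value that the saved seed shows.

```python
import struct

def as_int64(u: int) -> int:
    # Reinterpret a uint64 bit pattern as a signed int64 (two's complement).
    return struct.unpack("<q", struct.pack("<Q", u))[0]

print(as_int64((1 << 64) - 1))  # -1: the seed comes back corrupted
print(as_int64((1 << 63) - 1))  # 9223372036854775807: still fits in int64
```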
Here is a sample that reproduces this bug. Offset has the same problem, but it is harder to construct a case for it.
```
a = torch.randn(1).cuda() # init cuda
dtype = torch.half
shape = (2, 80, 1, 32)
q = torch.randn(shape, dtype=dtype).cuda().requires_grad_()
k = torch.randn(shape, dtype=dtype).cuda().requires_grad_()
v = torch.randn(shape, dtype=dtype).cuda().requires_grad_()
seed = (1 << 64) - 1
print('user seed:', seed)
torch.manual_seed(seed)
out = torch.ops.aten._scaled_dot_product_flash_attention(q, k, v, 0.1)
print('flash attention saved seed for backward:', out[6])
seed = (1 << 63) - 1
torch.manual_seed(seed)
print('user seed:', seed)
out = torch.ops.aten._scaled_dot_product_flash_attention(q, k, v, 0.1)
print('flash attention saved seed for backward:', out[6])
# seed is wrong when it exceeds the int64 range:
# user seed: 18446744073709551615
# flash attention saved seed for backward: tensor(-1)
# user seed: 9223372036854775807
# flash attention saved seed for backward: tensor(9223372036854775807)
```
With this fix, the result is right.
```
# using the same code example, both outputs are right:
# user seed: 18446744073709551615
# flash attention saved seed for backward: tensor(18446744073709551615, dtype=torch.uint64)
# user seed: 9223372036854775807
# flash attention saved seed for backward: tensor(9223372036854775807, dtype=torch.uint64)
``` | true |
2,788,995,134 | OpenReg: Split Allocator | Zhenbin-8 | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | CONTRIBUTOR | Split the Allocator into HostAllocator and DeviceAllocator.
cc @albanD | true |
2,788,981,535 | [DONT MERGE] temp upgrade onednn to 3.7 | chuanqi129 | closed | [
"module: mkldnn",
"open source",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 1 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal | true |
2,788,967,996 | OpenReg: Remove REGISTER_GENERATOR_PRIVATEUSE1 | Zhenbin-8 | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 22 | CONTRIBUTOR | Replace REGISTER_GENERATOR_PRIVATEUSE1 with the new API in AcceleratorHooksInterface.
cc @albanD | true |
2,788,950,523 | OpenReg: Use device agnostic API | Zhenbin-8 | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Use `torch.accelerator.device_count()` to get the number of devices.
cc @albanD | true |
2,788,948,547 | Apply Ruff fixes and pyupgrade to torch/fx | cyyever | closed | [
"triaged",
"open source",
"release notes: fx",
"fx",
"ciflow/inductor",
"suppress-bc-linter"
] | 1 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,788,934,690 | Enhance running pr time benchmarks locally experience. | laithsakka | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 16 | CONTRIBUTOR | Summary: title
Test Plan: NA
Differential Revision: D68195894
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,788,873,340 | Default Copies are not vectorized in v3.6.0 of cutlass | drisspg | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | CONTRIBUTOR | Summary:
FlashAttentionV2 perf tanked in v3.6.0. See: https://github.com/pytorch/pytorch/issues/144729 for more details.
This PR makes it possible to land the v3.6.0 update and fixes the perf regression. See: https://github.com/pytorch/pytorch/issues/144729#issuecomment-2591644076 for analysis; we also have various internal tests to verify.
Differential Revision: D68194635
| true |
2,788,848,937 | [inductor] `MaxUnpool` crash when meeting out-of-bound value on inductor | shaoyuyoung | open | [
"triaged",
"oncall: pt2",
"module: decompositions",
"module: inductor"
] | 1 | CONTRIBUTOR | ### 🐛 Describe the bug
**symptom**: when the input tensor shape is too small (e.g., [1, 1, 1]) and the kernel size is > 1, eager returns an empty tensor while inductor crashes.
**device**: both CPU and CUDA.
**exposed area**: `MaxUnpool1d`, `MaxUnpool2d`, and `MaxUnpool3d`
```python
import torch
import torch.nn as nn
from torch._inductor import config
config.fallback_random = True
torch.manual_seed(0)
torch.set_grad_enabled(False)
class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.unpool = nn.MaxUnpool1d(kernel_size=2, stride=2, padding=1)

    def forward(self, x):
        x = self.unpool(x, x.long())
        return x
model = Model()
x = torch.randn(1, 1, 1)
inputs = [x]
try:
    output = model(*inputs)
    print(f"succeed on eager: {output}")
except Exception as e:
    print(e)

try:
    c_model = torch.compile(model)
    c_output = c_model(*inputs)
    print(f"succeed on inductor: {c_output}")
except Exception as e:
    print(e)
```
error log
CPU
```
succeed on eager: tensor([], size=(1, 1, 0))
kernel, /tmp/torchinductor_root/jy/cjypsx3k535vxaiglqu75ffcicjwpqokk6hiqhrtdzzwznugpdmm.cpp:20, index out of bounds: 0 <= tmp10 < 0L
```
cuda
```
succeed on eager: tensor([], device='cuda:0', size=(1, 1, 0))
/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:47: max_unpooling2d_forward_kernel: block: [0,0,0], thread: [0,0,0] Assertion `maxind >= 0 && maxind < outputImageSize` failed.
RuntimeError: CUDA error: device-side assert triggered
```
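As a side note on why eager returns an empty tensor here: assuming the standard MaxUnpool1d output-size formula (L_out = (L_in - 1) * stride - 2 * padding + kernel_size, as documented for torch.nn.MaxUnpool1d), the requested output length is zero for this repro, so any stored index is necessarily out of bounds.

```python
# Sketch of the MaxUnpool1d output-length calculation (assumed formula,
# matching the torch.nn.MaxUnpool1d documentation).
def unpool1d_out_len(l_in: int, kernel_size: int, stride: int, padding: int) -> int:
    return (l_in - 1) * stride - 2 * padding + kernel_size

# With the repro's parameters the output has zero elements:
print(unpool1d_out_len(1, kernel_size=2, stride=2, padding=1))  # 0
```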
### Versions
PyTorch version: 2.7.0.dev20250112+cu124
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: Tesla V100-SXM2-32GB
<details>
<summary>click here for detailed env</summary>
```
PyTorch version: 2.7.0.dev20250112+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-204-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 550.142
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241205
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250112+cu124
[pip3] torchaudio==2.6.0.dev20250112+cu124
[pip3] torchvision==0.22.0.dev20250112+cu124
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250112+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250112+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250112+cu124 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @chauhang @penguinwu @SherlockNoMad @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | true |
2,788,821,831 | CUDA unknown error | leewww-code | closed | [] | 1 | NONE | Our environment was previously able to use the GPU, but something went wrong in the last few days and I don't know why.
```
python -c "import torch; print(torch.cuda.is_available())"
```
and it returns:
```
/opt/conda/lib/python3.8/site-packages/torch/cuda/__init__.py:138: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
return torch._C._cuda_getDeviceCount() > 0
False
```
Some information needs to be explained:
1. This environment is in a container and I don't have root privileges
2. `nvidia-smi`

`nvcc -V`

3. pytorch version
```
Name: torch
Version: 2.1.0
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: packages@pytorch.org
License: BSD-3
Location: /opt/conda/lib/python3.8/site-packages
Requires: filelock, fsspec, jinja2, networkx, nvidia-cublas-cu12, nvidia-cuda-cupti-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-runtime-cu12, nvidia-cudnn-cu12, nvidia-cufft-cu12, nvidia-curand-cu12, nvidia-cusolver-cu12, nvidia-cusparse-cu12, nvidia-nccl-cu12, nvidia-nvtx-cu12, sympy, triton, typing-extensions
Required-by: fastai, torchaudio, torchvision
```
`python -c "import torch; print(torch.version.cuda)"` returns 12.1, but I think 12.3 should be backward compatible
4. ~/.bashrc also has the relevant paths added, e.g.
```
export PATH=$PATH:/usr/local/cuda/bin
export CUDA_PATH=/usr/local/cuda
export CUDA_HOME=/usr/local/cuda
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
export MATHLIBS_PATH=$CUDA_PATH/lib64
```
What should I do? | true |
2,788,816,358 | WIP pp_cp test | wconstab | closed | [
"oncall: distributed",
"Stale",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145125
* __->__ #144834
* #145099
* #145011
* #145010
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @d4l3k | true |
2,788,816,288 | [Pipelining] move scale_grads to base class, add docs | wconstab | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144834
* #145011
* #145010
* __->__ #144833
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o | true |
2,788,814,828 | [Cutlass] Seeing if changing default copies fixes perf | drisspg | closed | [
"module: cuda",
"ciflow/trunk",
"topic: not user facing",
"module: sdpa"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144832
cc @ptrblck @msaroufim @eqy | true |
2,788,813,787 | updates to benchmarks | drisspg | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144831
| true |
2,788,784,661 | [dynamo] Issue with torch.compile decorator for simple function using python `format` function with integer input. | lunathanael | closed | [
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0 | NONE | ### 🐛 Describe the bug
```
...
  File "/home/.../.venv/lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py", line 1958, in call_format
    return variables.StringFormatVariable.create(format_string, args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/.../.venv/lib/python3.12/site-packages/torch/_dynamo/variables/misc.py", line 1357, in create
    format_string.format(
    ^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: AttributeError: 'int' object has no attribute 'format'
```
torch.compile seems to have trouble calling the built-in `format` function with an integer argument.
Reproducible example:
```python
import torch
@torch.compile
def to_binary_string(num: int):
    return format(num, "b")
to_binary_string(10)
```
Example without bug:
```python
import torch
@torch.compile
def to_binary_string(num: int):
    return "b".format(num)
to_binary_string(10)
```
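Worth noting (plain Python, no torch needed): the two snippets are not equivalent. `format(num, "b")` produces the binary string, while `"b".format(num)` has no placeholder, ignores its argument, and simply returns `"b"`; the actual `str.format` equivalent would use a `{:b}` placeholder.

```python
print(format(10, "b"))    # '1010' (binary representation via the built-in)
print("b".format(10))     # 'b' (str.format with no placeholder ignores args)
print("{:b}".format(10))  # '1010' (the str.format equivalent)
```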
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 PRO 7840U w/ Radeon 780M Graphics
CPU family: 25
Model: 116
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 1
BogoMIPS: 6587.68
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 4 MiB (4 instances)
L3 cache: 16 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] optree==0.13.1
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,788,782,929 | Added swizzle searching, disabled fp16 accum, and enabled ping-pong for cutlass | masnesral | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 12 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144829
Summary:
Test Plan:
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
Differential Revision: [D68751149](https://our.internmc.facebook.com/intern/diff/D68751149) | true |
2,788,741,221 | [DO NOT MERGE]upgrade onednn to 3.7 | ZhiweiYan-96 | closed | [
"module: mkldnn",
"open source",
"topic: not user facing",
"ciflow/binaries_wheel",
"ciflow/linux-aarch64"
] | 1 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal | true |
2,788,713,492 | [MPSInductor] Implement `pow()` | malfet | closed | [
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144827
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |