id | title | user | state | labels | comments | author_association | body | is_title
|---|---|---|---|---|---|---|---|---|
2,941,421,749 | Demote logger of runtime_asserts_frozen to be fired only on debug mode | tugsbayasgalan | closed | [
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 8 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149833
* __->__ #149832
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
Differential Revision: [D71702305](https://our.internmc.facebook.com/intern/diff/D71702305) | true |
2,941,420,130 | Only print dde partial fx graph for export | bobrenjc93 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"fx",
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149831
Lazos correctly pointed out that this doesn't make sense for compile, since
we graph break in compile. That results in tons of unwanted user log
spew. We do want this in export, though, since it has drastically reduced
the support load for DDEs. This PR does the refactor to keep it in
export but remove it from compile.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,941,295,476 | cd: Add script for generating binary build matrix | seemethere | open | [
"topic: not user facing"
] | 2 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150713
* __->__ #149830
This script currently exists in
.github/scripts/generate_binary_build_matrix.py; however, there's
a lot of legacy cruft associated with that script, so I'm going to
attempt a complete refactor, starting with the creation of this
script.
The aims of this script are as follows:
* Every binary build should be its own separate object that we can pass
METADATA into
* There should be no hidden logic as to why a particular build has
particular METADATA
* All of our binary builds should center around the ideas of CPU
architecture, accelerator type, and accelerator version (where
applicable)
For this script in particular you can run it like so:
```
python3 .ci/release/generate_build_matrix.py --os linux
--accelerator-type cuda
```
This should give you a JSON matrix blob that you can pass directly into
your Actions workflow, similar to what we do for our test workflows and
for things like NOVA workflows.
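To illustrate the shape this could take (purely hypothetical names — `BinaryBuild` and `generate_matrix` below are illustrative, not the actual script's API), each build is one object whose fields are exactly the metadata described above, and the generator just serializes them into an Actions-style matrix:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class BinaryBuild:
    # Every binary build is its own object; all of its metadata is explicit,
    # with no hidden logic deciding the fields.
    os: str
    cpu_arch: str
    accelerator_type: str     # e.g. "cpu", "cuda"
    accelerator_version: str  # e.g. "12.4", or "" where not applicable

def generate_matrix(builds):
    # GitHub Actions consumes a matrix as a {"include": [...]} JSON blob.
    return json.dumps({"include": [asdict(b) for b in builds]})

builds = [
    BinaryBuild("linux", "x86_64", "cuda", "12.4"),
    BinaryBuild("linux", "aarch64", "cpu", ""),
]
print(generate_matrix(builds))
```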
Some things to still be figured out:
* How do we build docker images when we need them?
* validate_nccl_dep_consistency should probably be its own separate test
outside of the main generation script?
* How do we integrate this into our actions workflows?
Signed-off-by: Eli Uriegas <github@terriblecode.com> | true |
2,941,266,551 | TF32 acceleration on top of oneDNN is available for Intel GPUs. The current Torch version does not have Intel GPU Support | justinchuby | closed | [
"module: cpu",
"triaged",
"module: python frontend"
] | 7 | COLLABORATOR | The warning message
> /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/backends/mkldnn/__init__.py:78: UserWarning: TF32 acceleration on top of oneDNN is available for Intel GPUs. The current Torch version does not have Intel GPU Support. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/Context.cpp:148.)
> torch._C._set_onednn_allow_tf32(_allow_tf32)
has been triggering on a normal CPU installation of PyTorch from PyPI, which is annoying, and it is unclear what the user needs to do. Would it make sense to suppress or improve this warning? (For 2.7 as well.)
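If suppressing it on the user side is acceptable in the meantime, the standard library's warnings filter can drop it by message. This is a hedged workaround sketch, not an official fix; the regex simply matches the start of the message quoted above:

```python
import warnings

# Demonstrate that a filter keyed on the warning's message suppresses it.
# In real use, install the filter once before the code that triggers it.
with warnings.catch_warnings(record=True) as caught:
    warnings.filterwarnings(
        "ignore",
        message=r"TF32 acceleration on top of oneDNN is available for Intel GPUs",
        category=UserWarning,
    )
    warnings.warn(
        "TF32 acceleration on top of oneDNN is available for Intel GPUs. "
        "The current Torch version does not have Intel GPU Support.",
        UserWarning,
    )

print(len(caught))  # the warning never reaches the user
```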
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @albanD @malfet | true |
2,941,256,497 | bump XNNPACK dependency to fix GCC 14 build on aarch64-linux | prusnak | open | [
"module: build",
"triaged",
"actionable",
"module: xnnpack",
"module: arm"
] | 3 | NONE | ### 🐛 Describe the bug
The bundled version of XNNPACK cannot be built on aarch64-linux with GCC 14 because of this issue: https://github.com/google/XNNPACK/issues/7726
The issue has since been fixed in XNNPACK: https://github.com/google/XNNPACK/commit/3bc2a32a44db62434248197bceefa37f4f05153e
Suggestion: bump the XNNPACK dependency in `third_party` to a newer commit which contains the fix.
### Versions
2.6.0
cc @malfet @seemethere @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01 | true |
2,941,248,356 | Why has my linear regression always been NaN? | bbhxwl | closed | [] | 1 | NONE | I am using ChatGPT to learn linear regression, but I don't understand why it can't predict.
Where is the mistake?
```
import torch
import torch.nn as nn
import torch.optim as optim
# 1. Data preparation: ages of elderly people (feature) and spending amounts (target)
# Note: the data must be 2-D tensors; each row represents one sample
ages = torch.tensor([[65], [70], [75], [80], [85], [90], [95], [100]], dtype=torch.float32)
spendings = torch.tensor([[200], [250], [300], [350], [400], [450], [500], [550]], dtype=torch.float32)
# 2. Model construction: build a simple linear regression model with nn.Sequential
# There is only one linear layer, mapping 1 input feature to 1 output
model = nn.Sequential(
    nn.Linear(1, 1)
)
# 3. Define the loss function and optimizer
# Use mean squared error (MSELoss) and stochastic gradient descent (SGD)
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)
# 4. Train the model
num_epochs = 1000  # number of training epochs
for epoch in range(num_epochs):
    optimizer.zero_grad()  # clear the gradients from the previous step
    predictions = model(ages)  # forward pass: predict spending with the current model
    loss = criterion(predictions, spendings)  # MSE between predictions and ground truth
    loss.backward()  # backward pass: compute gradients
    optimizer.step()  # update the model parameters
    # Print the loss every 100 epochs
    if (epoch + 1) % 100 == 0:
        print(f"Epoch {epoch+1}/{num_epochs}, Loss: {loss.item():.4f}")
# 5. Use the trained model for prediction
model.eval()  # switch to evaluation mode (disables training-only features such as dropout)
with torch.no_grad():  # disable gradient tracking for faster inference
    new_age = torch.tensor([[77.0]], dtype=torch.float32)  # new input: a 77-year-old
    predicted_spending = model(new_age)  # get the prediction
    print("Predicted spending:", predicted_spending.item())
```
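A minimal sketch of one common fix, assuming the NaN comes from divergence: with raw ages in the 65–100 range, lr=0.001 is already large enough for full-batch SGD on MSE to blow up. Standardizing the inputs and targets keeps the gradients well-scaled; the normalization constants below are computed from the data in the question.

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Same data as in the question.
ages = torch.tensor([[65.], [70.], [75.], [80.], [85.], [90.], [95.], [100.]])
spendings = torch.tensor([[200.], [250.], [300.], [350.], [400.], [450.], [500.], [550.]])

# Standardize features and targets so SGD stays stable.
ages_n = (ages - ages.mean()) / ages.std()
spend_n = (spendings - spendings.mean()) / spendings.std()

model = nn.Sequential(nn.Linear(1, 1))
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

for _ in range(500):
    optimizer.zero_grad()
    loss = criterion(model(ages_n), spend_n)
    loss.backward()
    optimizer.step()

# Predict for a 77-year-old, then undo the target normalization.
with torch.no_grad():
    x = (torch.tensor([[77.0]]) - ages.mean()) / ages.std()
    pred = model(x) * spendings.std() + spendings.mean()

print(loss.item(), pred.item())  # loss near 0; prediction near 320
```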
<img width="423" alt="Image" src="https://github.com/user-attachments/assets/7aac37d5-1b22-48fe-9e3d-63a9d20ea919" /> | true |
2,941,049,317 | How to handle dynamic output size with torch.onnx.export (through dynamo) for Resize | FabianSchuetze | closed | [
"module: onnx",
"triaged",
"oncall: pt2"
] | 9 | CONTRIBUTOR | ### 🐛 Describe the bug
I would like to export, with torch.onnx.export (through dynamo), some code that contains a resize operation. The output width and height are dynamic. An example model is as follows:
```
import torch
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x, size):
y = torch.nn.functional.interpolate(x, size=size.tolist())
return y
model = Model()
x = torch.rand(1, 3, 400, 500)
size = torch.tensor([1024, 1024]).to(torch.int32)
y = model(x, size)
onnx_model = torch.onnx.export(model, (x, size), dynamo=True)
```
The code throws the following error:
```
<class 'RuntimeError'>: /pytorch/build/aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:5615: SymIntArrayRef expected to contain only concrete integers
While executing %upsample_nearest2d : [num_users=1] = call_function[target=torch.ops.aten.upsample_nearest2d.vec](args = (%x, [%_local_scalar_dense, %_local_scalar_dense_1], None), kwargs = {})
Original traceback:
File "/tmp/test.py", line 11, in forward
y = torch.nn.functional.interpolate(x, size=size.tolist())
```
The interpolate function doesn't accept a tensor as an argument, so I somehow have to convert it to a list. That fails with the error shown. I can hardcode the list to fixed sizes, but then I cannot accept images with different sizes at inference time.
How can I address this issue?
### Error logs
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 14.2.0-4ubuntu2~24.04) 14.2.0
Clang version: 19.1.1 (1ubuntu1~24.04.2)
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.11.0-19-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX 500 Ada Generation Laptop GPU
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 22
On-line CPU(s) list: 0-21
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 7 155H
CPU family: 6
Model: 170
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 4
CPU(s) scaling MHz: 22%
CPU max MHz: 4800.0000
CPU min MHz: 400.0000
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb intel_ppin ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid bus_lock_detect movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 896 KiB (14 instances)
L2 cache: 18 MiB (9 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-21
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] fast_pytorch_kmeans==0.2.2
[pip3] flake8==7.1.2
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.2.0
[pip3] torch==2.6.0
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[pip3] types-flake8-2020==1.8
[pip3] types-flake8-bugbear==23.9.16
[pip3] types-flake8-builtins==2.2
[pip3] types-flake8-docstrings==1.7
[pip3] types-flake8-plugin-utils==1.3
[pip3] types-flake8-rst-docstrings==0.3
[pip3] types-flake8-simplify==0.21
[pip3] types-flake8-typing-imports==1.15
[pip3] types-mypy-extensions==1.0
[conda] Could not collect
cc @chauhang @penguinwu | true |
2,941,033,781 | [AOTInductor] Free folded constants that's managed by AOTInductor | muchulee8 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149825
internally.
Summary:
This diff allows freeing folded constants created by AOTInductor, by
allocating them through the CUDACachingAllocator instead of allocating
the constant blob directly with cudaMalloc.
Test Plan:
LD_LIBRARY_PATH=/data/users/$USER/pytorch/build/lib
/home/$USER/local/pytorch/build/bin/test_aoti_inference
Reviewers:
Subscribers:
Tasks:
Tags: | true |
2,941,032,971 | flex_attention raises error at compile | dslisleedh | closed | [
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: flex attention"
] | 3 | NONE | ### 🐛 Describe the bug
I'm trying to accelerate WindowAttention with flex_attention.
However, when the window size equals 8, it raises an error when compiling.
Please refer to this [code](https://github.com/dslisleedh/ESC/blob/main/scripts/compare_attn.py)
```bash
python compare_attn.py --h 64 --w 64 --window_size 16 --attn_func flex # This works
python compare_attn.py --h 64 --w 64 --window_size 8 --attn_func flex # Raises Error !!!
```
The second line raises an error following:
```bash
Traceback (most recent call last):
File "/home/leedh97/ESC/scripts/compare_attn.py", line 149, in <module>
model(x) # Make sure CUDNN to find proper algorithms, especially for convolutions.
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/leedh97/ESC/scripts/compare_attn.py", line 105, in forward
out = self.attn_func(q, k, v, score_mod=self.get_rpe)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1164, in __call__
result = self._inner_convert(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3048, in RETURN_VALUE
self._return(inst)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3033, in _return
self.output.compile_subgraph(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1101, in compile_subgraph
self.compile_and_call_fx_graph(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
return self._call_user_compiler(gm)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/__init__.py", line 2340, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1863, in compile_fx
return aot_autograd(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1155, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 203, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 489, in __call__
return self.compiler_fn(gm, example_inputs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1741, in fw_compiler_base
return inner_compile(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 569, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 685, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1129, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 979, in codegen_and_compile
graph.run(*example_inputs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/graph.py", line 855, in run
return super().run(*args)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/fx/interpreter.py", line 167, in run
self.env[node] = self.run_node(node)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1496, in run_node
result = super().run_node(n)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/fx/interpreter.py", line 230, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1143, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1133, in call_function
out = lowerings[target](*args, **kwargs) # type: ignore[index]
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 409, in wrapped
out = decomp_fn(*args, **kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/kernel/flex_attention.py", line 1096, in flex_attention
return create_flex_decoding_kernel(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/kernel/flex_decoding.py", line 425, in create_flex_decoding_kernel
kernel_options.setdefault("SPLIT_KV", get_split_k(B, Hkv, seq_len_kv))
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/kernel/flex_decoding.py", line 303, in get_split_k
split_k = max(split_k, 1)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/sympy/core/relational.py", line 516, in __bool__
raise TypeError("cannot determine truth value of Relational")
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
LoweringException: TypeError: cannot determine truth value of Relational
target: flex_attention
args[0]: TensorBox(StorageBox(
InputBuffer(name='arg1_1', layout=FixedLayout('cuda:0', torch.float32, size=[s0, 4, s0, 16], stride=[64*s0, 16*s0, 16, 1]))
))
args[1]: TensorBox(StorageBox(
InputBuffer(name='arg3_1', layout=FixedLayout('cuda:0', torch.float32, size=[s0, 4, s0, 16], stride=[64*s0, 16*s0, 16, 1]))
))
args[2]: TensorBox(StorageBox(
InputBuffer(name='arg5_1', layout=FixedLayout('cuda:0', torch.float32, size=[s0, 4, s0, 16], stride=[64*s0, 16*s0, 16, 1]))
))
args[3]: Subgraph(name='sdpa_score0', graph_module=<lambda>(), graph=None)
args[4]: (1, 1, TensorBox(StorageBox(
ComputedBuffer(name='buf4', layout=FlexibleLayout('cuda:0', torch.int32, size=[1, 1, 1], stride=[1, 1, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.int32, inner_fn=<function _full.<locals>.inner_fn at 0x14b295723a30>, ranges=[1, 1, 1]))
)), TensorBox(StorageBox(
ComputedBuffer(name='buf5', layout=FlexibleLayout('cuda:0', torch.int32, size=[1, 1, 1, 1], stride=[1, 1, 1, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.int32, inner_fn=<function _full.<locals>.inner_fn at 0x14b2957405e0>, ranges=[1, 1, 1, 1]))
)), None, None, TensorBox(StorageBox(
Pointwise(
'cuda',
torch.int32,
def inner_fn(index):
_, _, _ = index
tmp0 = ops.load(buf0, 0)
tmp1 = ops.to_dtype(tmp0, torch.int64, src_dtype=torch.int32)
tmp2 = ops.to_dtype(tmp1, torch.int32, src_dtype=torch.int64)
return tmp2
,
ranges=[1, 1, 1],
origin_node=convert_element_type,
origins=OrderedSet([sum_1, convert_element_type])
)
)), TensorBox(StorageBox(
Pointwise(
'cuda',
torch.int32,
def inner_fn(index):
_, _, _, _ = index
tmp0 = ops.index_expr(0, dtype=torch.int16)
tmp1 = ops.to_dtype(tmp0, torch.int64, src_dtype=torch.int16)
tmp2 = ops.to_dtype(tmp1, torch.int32, src_dtype=torch.int64)
return tmp2
,
ranges=[1, 1, 1, 1],
origin_node=convert_element_type_1,
origins=OrderedSet([sort, convert_element_type_1])
)
)), None, None, 1073741824, 1073741824, Subgraph(name='sdpa_mask0', graph_module=<lambda>(), graph=None))
args[5]: 0.25
args[6]: {'PRESCALE_QK': False, 'ROWS_GUARANTEED_SAFE': False, 'BLOCKS_ARE_CONTIGUOUS': False, 'OUTPUT_LOGSUMEXP': False}
args[7]: (s5, TensorBox(StorageBox(
InputBuffer(name='arg6_1', layout=FixedLayout('cuda:0', torch.float32, size=[4, 225], stride=[225, 1]))
)))
args[8]: ()
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
Interestingly, when the input size is small, the window size 8 works and 16 fails to compile.
```bash
python compare_attn.py --h 16 --w 16 --window_size 8 --attn_func flex # This works
python compare_attn.py --h 16 --w 16 --window_size 16 --attn_func flex # Raises Error !!!
```
Error:
```bash
Traceback (most recent call last):
File "/home/leedh97/ESC/scripts/compare_attn.py", line 150, in <module>
model(x) # Make sure CUDNN to find proper algorithms, especially for convolutions.
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/leedh97/ESC/scripts/compare_attn.py", line 106, in forward
out = self.attn_func(q, k, v, score_mod=self.get_rpe)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1164, in __call__
result = self._inner_convert(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3048, in RETURN_VALUE
self._return(inst)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3033, in _return
self.output.compile_subgraph(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1101, in compile_subgraph
self.compile_and_call_fx_graph(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
return self._call_user_compiler(gm)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/__init__.py", line 2340, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1863, in compile_fx
return aot_autograd(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1155, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 203, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 489, in __call__
return self.compiler_fn(gm, example_inputs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1741, in fw_compiler_base
return inner_compile(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 569, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 685, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1129, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 979, in codegen_and_compile
graph.run(*example_inputs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/graph.py", line 855, in run
return super().run(*args)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/fx/interpreter.py", line 167, in run
self.env[node] = self.run_node(node)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1496, in run_node
result = super().run_node(n)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/fx/interpreter.py", line 230, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1143, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1133, in call_function
out = lowerings[target](*args, **kwargs) # type: ignore[index]
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 409, in wrapped
out = decomp_fn(*args, **kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/kernel/flex_attention.py", line 1155, in flex_attention
assert q_strides[-1] == 1, "Query must be contiguous in the last dimension"
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
LoweringException: AssertionError: Query must be contiguous in the last dimension
target: flex_attention
args[0]: TensorBox(StorageBox(
InputBuffer(name='arg1_1', layout=FixedLayout('cuda:0', torch.float32, size=[1, 4, s1, 16], stride=[64*s1, 16*s1, 1, s1]))
))
args[1]: TensorBox(StorageBox(
InputBuffer(name='arg3_1', layout=FixedLayout('cuda:0', torch.float32, size=[1, 4, s1, 16], stride=[64*s1, 16*s1, 1, s1]))
))
args[2]: TensorBox(StorageBox(
InputBuffer(name='arg5_1', layout=FixedLayout('cuda:0', torch.float32, size=[1, 4, s1, 16], stride=[64*s1, 16*s1, 1, s1]))
))
args[3]: Subgraph(name='sdpa_score0', graph_module=<lambda>(), graph=None)
args[4]: (1, 1, TensorBox(StorageBox(
ComputedBuffer(name='buf2', layout=FlexibleLayout('cuda:0', torch.int32, size=[1, 1, 1], stride=[1, 1, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.int32, inner_fn=<function _full.<locals>.inner_fn at 0x1534afd63d90>, ranges=[1, 1, 1]))
)), TensorBox(StorageBox(
ComputedBuffer(name='buf3', layout=FlexibleLayout('cuda:0', torch.int32, size=[1, 1, 1, 1], stride=[1, 1, 1, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.int32, inner_fn=<function _full.<locals>.inner_fn at 0x1534afd84940>, ranges=[1, 1, 1, 1]))
)), None, None, TensorBox(StorageBox(
ComputedBuffer(name='buf4', layout=FlexibleLayout('cuda:0', torch.int32, size=[1, 1, 1], stride=[1, 1, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.int32, inner_fn=<function make_pointwise.<locals>.inner.<locals>.inner_fn at 0x1534afd62b90>, ranges=[1, 1, 1]))
)), TensorBox(StorageBox(
ComputedBuffer(name='buf5', layout=FlexibleLayout('cuda:0', torch.int32, size=[1, 1, 1, 1], stride=[1, 1, 1, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.int32, inner_fn=<function make_pointwise.<locals>.inner.<locals>.inner_fn at 0x1534afd855a0>, ranges=[1, 1, 1, 1]))
)), None, None, 1073741824, 1073741824, Subgraph(name='sdpa_mask0', graph_module=<lambda>(), graph=None))
args[5]: 0.25
args[6]: {'PRESCALE_QK': False, 'ROWS_GUARANTEED_SAFE': False, 'BLOCKS_ARE_CONTIGUOUS': False, 'OUTPUT_LOGSUMEXP': False}
args[7]: (s5, TensorBox(StorageBox(
InputBuffer(name='arg6_1', layout=FixedLayout('cuda:0', torch.float32, size=[4, 961], stride=[961, 1]))
)))
args[8]: ()
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
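For context on the assertion that fires above: the query's layout is `stride=[64*s1, 16*s1, 1, s1]`, whose last entry is `s1` rather than `1`, which is exactly what `q_strides[-1] == 1` checks. A minimal pure-Python sketch of row-major stride computation (illustrative only, not PyTorch's actual stride logic; making the inputs contiguous, e.g. `q = q.contiguous()`, before calling `flex_attention` would satisfy this particular check):

```python
def contiguous_strides(sizes):
    """Row-major (contiguous) strides for a tensor of the given sizes."""
    strides = [1] * len(sizes)
    for i in range(len(sizes) - 2, -1, -1):
        strides[i] = strides[i + 1] * sizes[i + 1]
    return strides

s1 = 5  # concrete stand-in for the symbolic size s1 in the error message
print(contiguous_strides([1, 4, s1, 16]))  # [320, 80, 16, 1], i.e. [64*s1, 16*s1, 16, 1]
# The layout in the error, [64*s1, 16*s1, 1, s1], ends in s1 rather than 1,
# so the query is not contiguous in its last dimension and the assert fires.
```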
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Fedora release 36 (Thirty Six) (x86_64)
GCC version: (GCC) 12.2.1 20221121 (Red Hat 12.2.1-4)
Clang version: 14.0.5 (Fedora 14.0.5-2.fc36)
CMake version: version 3.22.2
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.6.77_TGMv2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A6000
Nvidia driver version: 550.144.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6526Y
CPU family: 6
Model: 207
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 2
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 49%
CPU max MHz: 2801.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 64 MiB (32 instances)
L3 cache: 75 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Vulnerable; BHI: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] numpy 1.24.1 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng | true |
2,940,868,279 | Please support python 3.13! | wuhuang2 | closed | [
"needs reproduction",
"module: binaries",
"module: windows",
"triaged"
] | 4 | NONE | ### 🚀 The feature, motivation and pitch
I'm using Python 3.13 to develop a project, and I need to use the Whisper library, which depends on the PyTorch library. When installing with pip I ran into an "unable to find the applicable version" error, and after searching online I found that PyTorch does not support Python 3.13 yet! So please support Python 3.13! (Windows 10, 32-bit)
### Alternatives
_No response_
### Additional context
_No response_
cc @seemethere @malfet @osalpekar @atalman @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex | true |
2,940,844,170 | `INTERNAL ASSERT FAILED` when using `torch.max` with mixed device tensors | default1360 | closed | [
"module: cuda",
"triaged"
] | 0 | NONE | ### 🐛 Describe the bug
Code:
```
import torch
x = torch.ones(10)
# If CUDA is available, use a CUDA tensor for the output.
if torch.cuda.is_available():
out_values = torch.empty(10, device="cuda")
out_indices = torch.empty(10, dtype=torch.long, device="cpu")
torch.max(x, 0, out=(out_values, out_indices))
```
Output:
```
RuntimeError: t == DeviceType::CUDA INTERNAL ASSERT FAILED at "/pytorch/c10/cuda/impl/CUDAGuardImpl.h":28, please report a bug to PyTorch.
```
### Versions
PyTorch 2.6.0
cc @ptrblck @msaroufim @eqy | true |
2,940,838,975 | `Aborted` error when using `torch.cuda.memory.caching_allocator_delete` | default1360 | open | [
"module: cuda",
"triaged",
"module: CUDACachingAllocator"
] | 2 | NONE | ### 🐛 Describe the bug
Code:
```
import torch
from torch.cuda.memory import caching_allocator_delete
torch.cuda.empty_cache()
dev_props = torch.cuda.get_device_properties(0)
total_memory = dev_props.total_memory
allocation = int(total_memory * 0.5)
tmp_tensor = torch.empty(allocation, dtype=torch.int8, device='cuda')
mem_ptr = tmp_tensor.data_ptr()
caching_allocator_delete(mem_ptr)
```
Output:
```
Aborted
```
### Versions
PyTorch 2.6.0
cc @ptrblck @msaroufim @eqy | true |
2,940,837,662 | `Segmentation fault` when using `torch.sparse.mm` with `torch.sparse_csr_tensor` | default1360 | open | [
"module: sparse",
"module: crash",
"triaged"
] | 3 | NONE | ### 🐛 Describe the bug
Code:
```
import torch
m, n, p = 7, 8, 9
nnz = 20
crow_indices = torch.tensor([0, nnz], dtype=torch.int64)
col_indices = torch.arange(nnz, dtype=torch.int32)
values = torch.randn(nnz)
S = torch.sparse_csr_tensor(crow_indices, col_indices, values, size=(m, n))
D = torch.randn(n, p)
result = torch.sparse.mm(S, D)
```
Output:
```
Segmentation fault
```
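Note that the repro above passes a `crow_indices` of length 2 for a 7-row matrix, and column indices up to 19 for a matrix with only 8 columns; a valid CSR layout needs `len(crow_indices) == m + 1` with `crow_indices[-1] == nnz` and all column indices in `[0, n)`. A small pure-Python checker sketching those invariants (illustrative only, not PyTorch's actual validation):

```python
def check_csr(crow_indices, col_indices, values, size):
    """Sketch of the basic CSR invariants; returns a list of violations."""
    m, n = size
    problems = []
    if len(crow_indices) != m + 1:
        problems.append(f"len(crow_indices) is {len(crow_indices)}, expected m + 1 = {m + 1}")
    if crow_indices and crow_indices[-1] != len(values):
        problems.append("crow_indices[-1] must equal nnz")
    if len(col_indices) != len(values):
        problems.append("col_indices and values must have the same length")
    if any(c < 0 or c >= n for c in col_indices):
        problems.append(f"col_indices must lie in [0, {n})")
    return problems

# The issue's inputs violate two invariants: crow_indices has only 2 entries
# for m = 7, and col_indices runs up to 19 for n = 8 columns.
print(check_csr([0, 20], list(range(20)), [0.0] * 20, (7, 8)))
```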
### Versions
PyTorch 2.6.0
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip | true |
2,940,834,456 | `free(): invalid next size` error when using `torch.linalg.ldl_solve` | default1360 | closed | [
"module: crash",
"triaged",
"module: linear algebra"
] | 3 | NONE | ### 🐛 Describe the bug
Code:
```python
import torch
LD = torch.tensor([[1.0, 2.0, 3.0],
[2.0, 5.0, 6.0],
[3.0, 6.0, 9.0]], dtype=torch.float32)
pivots = torch.tensor([0, 1, 2], dtype=torch.int32)
B = torch.tensor([[1.0], [2.0], [3.0]], dtype=torch.float32)
torch.linalg.ldl_solve(LD, pivots, B, hermitian=True)
```
Output:
```
free(): invalid next size (fast)
Aborted
```
### Versions
torch 2.6.0
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | true |
2,940,825,135 | Fix `torch.cuda.MemPool()` internal assertion failure when changing devices | fzyzcjy | open | [
"triaged",
"open source"
] | 2 | CONTRIBUTOR | Fix https://github.com/pytorch/pytorch/issues/149802
This is just a prototype, and I would like to hear feedback, e.g. is this direction OK, or shall we let MemPool support multiple devices?
After getting feedback I will refine the PR, e.g. by cleaning up the code and adding tests.
| true |
2,940,816,310 | [executorch hash update] update the pinned executorch hash | pytorchupdatebot | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 39 | COLLABORATOR | This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned executorch hash. | true |
2,940,815,107 | [MPS] Add support for `chebyshev_polynomial_t` in eager. | dcci | closed | [
"Merged",
"release notes: mps",
"ciflow/mps"
] | 4 | MEMBER | null | true |
2,940,791,142 | comparison operators only accept scalars as the 2nd argument but not as a 1st argument | ev-br | open | [
"triaged",
"module: python frontend"
] | 1 | COLLABORATOR | ### 🐛 Describe the bug
Comparison 'ufuncs' seem to be missing a `Number, Tensor` overload:
```
In [31]: x = torch.as_tensor([1.0])
In [32]: torch.less_equal(x, 1.0)
Out[32]: tensor([True])
In [33]: torch.less_equal(1.0, x)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[33], line 1
----> 1 torch.less_equal(1.0, x)
TypeError: less_equal() received an invalid combination of arguments - got (float, Tensor), but expected one of:
* (Tensor input, Tensor other, *, Tensor out = None)
* (Tensor input, Number other, *, Tensor out = None)
```
The matching operators are OK:
```
In [35]: 1.0 < x
Out[35]: tensor([False])
```
The behavior is the same for all richcomp modes:
```
In [24]: for ufunc in [torch.equal, torch.not_equal, torch.less, torch.greater, torch.greater_equal, torch.less_equal]:
...: try:
...: x = torch.as_tensor([1.0])
...: print(ufunc(1.0, x))
...: except:
...: print(f'{ufunc}')
...:
<built-in method equal of type object at 0x7fb5cbf92cc0>
<built-in method not_equal of type object at 0x7fb5cbf92cc0>
<built-in method less of type object at 0x7fb5cbf92cc0>
<built-in method greater of type object at 0x7fb5cbf92cc0>
<built-in method greater_equal of type object at 0x7fb5cbf92cc0>
<built-in method less_equal of type object at 0x7fb5cbf92cc0>
In [25]: for ufunc in [torch.eq, torch.not_equal, torch.less, torch.greater, torch.greater_equal, torch.less_equal]:
...: try:
...: x = torch.as_tensor([1.0])
...: print(ufunc(x, 1.0))
...: except:
...: print(f'{ufunc}')
...:
tensor([True])
tensor([False])
tensor([False])
tensor([False])
tensor([True])
tensor([True])
```
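For background, the operator form works because of Python's reflected-comparison protocol: when `float.__le__` returns `NotImplemented`, Python retries with the other operand's `__ge__`. The function-call form `torch.less_equal(1.0, x)` has no such fallback, which is why the missing `(Number, Tensor)` overload matters. A minimal pure-Python sketch of the reflection (the `Wrapped` class is a toy stand-in, not a tensor):

```python
class Wrapped:
    """Toy stand-in for a tensor-like object that only knows __ge__."""
    def __init__(self, v):
        self.v = v
    def __ge__(self, other):
        # Invoked both for `w >= other` and, via reflection, for `other <= w`.
        return self.v >= other

w = Wrapped(1.0)
print(1.0 <= w)  # float.__le__ returns NotImplemented, so Python calls w.__ge__(1.0)
```

Until such an overload exists, flipping the call is a workaround, e.g. `torch.greater_equal(x, 1.0)` in place of `torch.less_equal(1.0, x)`.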
### Versions
```
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (conda-forge gcc 13.3.0-1) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:24:40) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 142
Model name: Intel(R) Core(TM) i5-8265U CPU @ 1.60GHz
Stepping: 12
CPU MHz: 1800.000
CPU max MHz: 3900,0000
CPU min MHz: 400,0000
BogoMIPS: 3600.00
Virtualization: VT-x
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 1 MiB
L3 cache: 6 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy==1.15.0
[pip3] mypy_extensions==1.0.0
[pip3] numpy==2.2.3
[pip3] numpydoc==1.8.0
[pip3] torch==2.6.0+cpu
[conda] mkl 2024.2.2 ha957f24_16 conda-forge
[conda] numpy 2.2.3 py312h72c5963_0 conda-forge
[conda] numpydoc 1.8.0 pyhd8ed1ab_1 conda-forge
[conda] torch 2.6.0+cpu pypi_0 pypi
```
cc @albanD | true |
2,940,786,761 | [inductor] [bug fix] Enable type promotions in slice_scatter in inductor | golkir | open | [
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 9 | CONTRIBUTOR | Fixes #147842. Specifically, enables type promotions when calling `torch.slice_scatter` with tensors of different `dtype` thereby enforcing uniform behaviour in eager mode and inductor compilation mode.
To test:
`pytest -s -v test/inductor/test_torchinductor.py -k test_slice_scatter_types_promotion`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,940,780,031 | torch.tril introduces NaNs on MPS when matrix contained Infs (when diagonal is negative) | twoertwein | closed | [
"triaged",
"module: NaNs and Infs",
"module: correctness (silent)",
"module: mps"
] | 1 | CONTRIBUTOR | ### 🐛 Describe the bug
```python
# bug
(Pdb) torch.tril(torch.full((3, 3), float("inf"), device="mps"), diagonal=-1)
tensor([[nan, nan, nan],
[inf, nan, nan],
[inf, inf, nan]], device='mps:0')
# working examples
# works with non-infs
(Pdb) torch.tril(torch.full((3, 3), 1.0, device="mps"), diagonal=-1)
tensor([[0., 0., 0.],
[1., 0., 0.],
[1., 1., 0.]], device='mps:0')
# works on the cpu
(Pdb) torch.tril(torch.full((3, 3), float("inf"), device="cpu"), diagonal=-1)
tensor([[0., 0., 0.],
[inf, 0., 0.],
[inf, inf, 0.]])
# works for diagonal=0
(Pdb) torch.tril(torch.full((3, 3), float("inf"), device="mps"), diagonal=0)
tensor([[inf, 0., 0.],
[inf, inf, 0.],
[inf, inf, inf]], device='mps:0')
# works for positive diagonal
(Pdb) torch.tril(torch.full((3, 3), float("inf"), device="mps"), diagonal=1)
tensor([[inf, inf, 0.],
[inf, inf, inf],
[inf, inf, inf]], device='mps:0')
```
Temporary workaround: transpose + triu (with positive diagonal) + transpose (or move to CPU and back to MPS)
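The transpose + triu workaround relies on the identity `tril(A, k) == triu(A.T, -k).T`; a pure-Python sketch of that identity on nested lists (illustrative only, no MPS involved):

```python
def tril(a, k=0):
    """Keep entries with j - i <= k (lower triangle), zero out the rest."""
    return [[v if j - i <= k else 0.0 for j, v in enumerate(row)]
            for i, row in enumerate(a)]

def triu(a, k=0):
    """Keep entries with j - i >= k (upper triangle), zero out the rest."""
    return [[v if j - i >= k else 0.0 for j, v in enumerate(row)]
            for i, row in enumerate(a)]

def transpose(a):
    return [list(r) for r in zip(*a)]

inf = float("inf")
A = [[inf] * 3 for _ in range(3)]
# tril with a negative diagonal equals transpose -> triu(positive diagonal) -> transpose,
# which is exactly the route that avoids the buggy MPS code path.
assert tril(A, -1) == transpose(triu(transpose(A), 1))
```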
### Versions
torch installed with uv; the collect_env.py script breaks because `python -mpip` fails
pytorch 2.6.0
python 3.12.9
Mac M2
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | true |
2,940,765,851 | [AOTInductor] Refine error message for dlopen in AOTInductor | muchulee8 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149812
Summary:
Refine the error message if dlopen failed in AOTInductor.
The original error message was ominous, modified to recommend user to
rebuild AOTInductor if needed, otherwise it's fine.
Test Plan:
None. Error message change.
Reviewers:
Subscribers:
Tasks:
Tags: | true |
2,940,648,934 | Rename README.txt to README.md | Jzhyang1 | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4 | CONTRIBUTOR | I am 99% sure this is meant to be a .md file rather than a .txt file
Fixes an issue with viewing the README on GitHub; idk what else this accomplishes, but it's been bothering me.
| true |
2,940,576,242 | [AOTInductor] Bug fix for freeing buffers when freeing multiple times | muchulee8 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149810
Summary:
We might free the active buffer if we free the buffer twice.
Test Plan:
```
LD_LIBRARY_PATH=/data/users/$USER/pytorch/build/lib
/home/$USER/local/pytorch/build/bin/test_aoti_inference
```
Reviewers:
Subscribers:
Tasks:
Tags: | true |
2,940,562,575 | LoadHIP.cmake should find_package(composable_kernel) | trixirt | open | [
"module: build",
"module: rocm",
"triaged"
] | 4 | NONE | ### 🐛 Describe the bug
When building on Fedora, there is this build error
aten/src/ATen/native/hip/ck_types.h:19:10: fatal error: 'ck/ck.hpp' file not found
19 | #include <ck/ck.hpp>
| ^~~~~~~~~~~
1 error generated when compiling for host.
The ck/ck.hpp header is part of the composable_kernel package.
It is never checked for in LoadHIP.cmake.
composable_kernel is not available on all Linux distributions, so these ck_gemm routines should only be used when composable_kernel is found.
### Versions
This is a build problem.
cc @malfet @seemethere @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,940,534,154 | Fix #149806 : Fix path lookup in _preload_cuda_deps | Divain | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"bug"
] | 9 | CONTRIBUTOR | @pytorchbot label "bug"
Fixes #149806
| true |
2,940,513,319 | Fix #149806 : Fix path lookup in _preload_cuda_deps | Divain | closed | [
"open source",
"bug"
] | 6 | CONTRIBUTOR | Fixes #149806
| true |
2,940,510,876 | _preload_cuda_deps cannot find libraries located in path/lib_folder | Divain | closed | [
"module: binaries",
"module: cuda",
"triaged"
] | 1 | CONTRIBUTOR | ### 🐛 Describe the bug
Hi,
I'm facing an issue when loading torch with CUDA from a PEX file. The function [_preload_cuda_deps](https://github.com/pytorch/pytorch/blob/2b848ab192e51498fb626355aedfd210df7da27e/torch/__init__.py#L282) seems to have a bug that prevents it from locating CUDA dependencies when they are placed in `path/lib_folder` rather than in the usual `path/nvidia/lib_folder`.
The problem appears to be in this code snippet:
```python
nvidia_path = os.path.join(path, "nvidia")
if not os.path.exists(nvidia_path):
continue
```
Because of the `continue` statement, the loop skips the current path entirely if a `"nvidia"` subdirectory is not found. This prevents the function from checking the alternative location (`os.path.join(path, lib_folder, "lib", lib_name)`) as suggested by the comment in the code.
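One possible shape of the fix (a sketch, not necessarily the actual patch): drop the early `continue` and probe both candidate layouts under each `sys.path` entry, with the helper name `find_cuda_lib` being hypothetical:

```python
import glob
import os

def find_cuda_lib(paths, lib_folder, lib_name):
    """Sketch: return the first library matching either layout, or None."""
    for path in paths:
        # Usual layout: <path>/nvidia/<lib_folder>/lib/<lib_name>
        candidates = glob.glob(os.path.join(path, "nvidia", lib_folder, "lib", lib_name))
        # Alternative layout (e.g. unpacked wheels): <path>/<lib_folder>/lib/<lib_name>
        candidates += glob.glob(os.path.join(path, lib_folder, "lib", lib_name))
        if candidates:
            return candidates[0]
    return None
```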
```
bash-4.4$ ./__main__.py
Python 3.9.13 (main, Jul 12 2022, 09:07:30)
[GCC 8.5.0 20210514 (Red Hat 8.5.0-3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> import torch
Traceback (most recent call last):
File "<console>", line 1, in <module>
File ".../torch/__init__.py", line 404, in <module>
_load_global_deps()
File ".../torch/__init__.py", line 362, in _load_global_deps
_preload_cuda_deps(lib_folder, lib_name)
File ".../torch/__init__.py", line 302, in _preload_cuda_deps
raise ValueError(f"{lib_name} not found in the system path {sys.path}")
ValueError: libcusparseLt.so.*[0-9] not found in the system path [..., '/home/username/.pex/installed_wheels/d5fa26412760c0038223e2b474f059aa2eeaf36010d8b9e4c1bc1665f7d88fb9/nvidia_cusparselt_cu12-0.6.2-py3-none-manylinux2014_x86_64.whl']
```
/.../nvidia_cusparselt_cu12-0.6.2-py3-none-manylinux2014_x86_64.whl/cusparselt/lib/libcusparseLt.so.0 actually exists.
Thanks
### Versions
Using torch 2.6.0. Calling this file produces the same exception.
cc @seemethere @malfet @osalpekar @atalman @ptrblck @msaroufim @eqy | true |
2,940,464,271 | Wrong location of rocm_version.h for Fedora and OpenSUSE | trixirt | closed | [
"module: build",
"module: rocm",
"triaged"
] | 2 | NONE | ### 🐛 Describe the bug
LoadHIP.cmake makes the assumption that ROCm is installed only from AMD to /opt/rocm
For several linux distributions including Fedora and OpenSUSE, this is /usr
This can be worked around sometimes if the user knows to set ROCM_PATH.
For some header files, it can not.
For rocm_version.h set, not searched for, here
https://github.com/pytorch/pytorch/blob/main/cmake/public/LoadHIP.cmake#L87
Is not found because on Fedora or OpenSUSE, because the location is /usr/include/rocm_version.h
$ dnf provides */rocm_version.h
...
rocm-core-devel-6.3.3-1.fc43.x86_64 : Libraries and headers for rocm-core
Repo : @System
Matched From :
Filename : /usr/include/rocm_version.h
### Versions
This is a build problem.
cc @malfet @seemethere @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,940,451,901 | torch.linalg.norm RuntimeError with torch.func.grad( vmap( hessian(.) ) ) | BurgerAndreas | open | [
"triaged",
"module: linear algebra",
"module: vmap",
"module: functorch"
] | 0 | NONE | ### 🐛 Describe the bug
`torch.linalg.norm(a)` causes `RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation`, but `torch.sum(a**2).sqrt()` works fine
```python
import torch
import numpy as np
def forward(samples):
# this causes
# RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [8, 1, 2]]
return torch.linalg.norm(samples)
# this works fine
# return torch.sum(samples**2).sqrt()
def pseudoenergy_function(samples):
# works fine with torch.linalg.norm
# energies = torch.vmap(
# forward, in_dims=(0)
# )(samples)
# works fine with torch.linalg.norm
# forces = -1 * torch.vmap(
# torch.func.grad(forward, argnums=0),
# in_dims=(0,),
# )(samples)
hessian = torch.vmap(
torch.func.hessian(forward, argnums=0),
in_dims=(0,),
)(samples)
# some reduction to get shape [1]
pseudoenergy = torch.sum(hessian)
return pseudoenergy
if __name__ == "__main__":
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
torch.autograd.set_detect_anomaly(True)
batch_size = 8
dim = 2
# Create test inputs
x = torch.randn(batch_size, dim, device=device) # [B, D]
grad_out = torch.func.grad(pseudoenergy_function, argnums=0)(x)
print("grad_out", grad_out.shape)
```
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 560.35.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i5-13400F
CPU family: 6
Model: 191
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 4992.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 416 KiB (10 instances)
L1i cache: 448 KiB (10 instances)
L2 cache: 9.5 MiB (7 instances)
L3 cache: 20 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] cuequivariance-ops-torch-cu12==0.1.0
[pip3] cuequivariance-torch==0.1.0
[pip3] mace-torch==0.3.10
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.2
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-lightning==1.9.5
[pip3] torch==2.5.1+cu121
[pip3] torch_cluster==1.6.3+pt25cu121
[pip3] torch-ema==0.3
[pip3] torch-geometric==2.6.1
[pip3] torch_scatter==2.1.2+pt25cu121
[pip3] torch_sparse==0.6.18+pt25cu121
[pip3] torch_spline_conv==1.2.2+pt25cu121
[pip3] torchcde==0.2.5
[pip3] torchcfm==1.0.5
[pip3] torchdiffeq==0.2.5
[pip3] torchdyn==1.0.6
[pip3] torchmetrics==0.11.4
[pip3] torchsde==0.2.6
[pip3] torchvision==0.20.1+cu121
[pip3] triton==3.1.0
[conda] cuda-cudart-dev_linux-64 12.6.77 h3f2d84a_0 conda-forge
[conda] cuda-cudart-static_linux-64 12.6.77 h3f2d84a_0 conda-forge
[conda] cuda-cudart_linux-64 12.6.77 h3f2d84a_0 conda-forge
[conda] cuda-nvrtc 12.6.85 hbd13f7d_0 conda-forge
[conda] cuequivariance-ops-torch-cu12 0.1.0 pypi_0 pypi
[conda] cuequivariance-torch 0.1.0 pypi_0 pypi
[conda] libcublas 12.6.4.1 hbd13f7d_0 conda-forge
[conda] libcufft 11.3.0.4 hbd13f7d_0 conda-forge
[conda] libcurand 10.3.7.77 hbd13f7d_0 conda-forge
[conda] libcusolver 11.7.1.2 hbd13f7d_0 conda-forge
[conda] libcusparse 12.5.4.2 hbd13f7d_0 conda-forge
[conda] libnvjitlink 12.6.85 hbd13f7d_0 conda-forge
[conda] mace-torch 0.3.10 pypi_0 pypi
[conda] numpy 1.25.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-lightning 1.9.5 pypi_0 pypi
[conda] torch 2.5.1+cu121 pypi_0 pypi
[conda] torch-cluster 1.6.3+pt25cu121 pypi_0 pypi
[conda] torch-ema 0.3 pypi_0 pypi
[conda] torch-geometric 2.6.1 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt25cu121 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt25cu121 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt25cu121 pypi_0 pypi
[conda] torchcde 0.2.5 pypi_0 pypi
[conda] torchcfm 1.0.5 pypi_0 pypi
[conda] torchdiffeq 0.2.5 pypi_0 pypi
[conda] torchdyn 1.0.6 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] torchvision 0.20.1+cu121 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @zou3519 @Chillee @samdow @kshitij12345 | true |
2,940,426,743 | checking out NCCL when it is not used | trixirt | open | [
"module: build",
"triaged"
] | 0 | NONE | ### 🐛 Describe the bug
NCCL is conditionally used via the USE_NCCL environment variable,
but it is unconditionally git-cloned here:
https://github.com/pytorch/pytorch/blob/main/tools/build_pytorch_libs.py#L122
This causes a problem for packaging PyTorch v2.7.0 on Fedora:
packaging requires network isolation, so the git command will fail.
Even Fedora's local build sets USE_NCCL=False, so it is not expecting to have to deal with anything NCCL-related.
And any CPU-only build should not have to fetch NCCL.
This change came in recently:
commit 4ece056791d779a6bfb0574c3a26cd6a7e600089
Author: atalman <atalman@fb.com>
Date: Wed Feb 19 03:52:26 2025 +0000
Nccl update to 2.25.1 for cuda 12.4-12.8 (#146073)
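A minimal sketch of the kind of gating being asked for (this is not the actual `tools/build_pytorch_libs.py` code, and the accepted "false" spellings here are an assumption): check USE_NCCL before ever shelling out to git, so network-isolated CPU-only builds skip the checkout entirely.

```python
# Hedged sketch, not the real build script: gate the NCCL checkout on the
# USE_NCCL environment variable so CPU-only / network-isolated builds never
# invoke git. The set of "disabled" spellings below is illustrative.
import os

def should_checkout_nccl() -> bool:
    value = os.environ.get("USE_NCCL", "1").strip().upper()
    return value not in ("0", "OFF", "FALSE", "NO", "")

os.environ["USE_NCCL"] = "False"
print(should_checkout_nccl())  # False: Fedora's USE_NCCL=False would skip the clone
```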
### Versions
This is a build-time problem, not a runtime one, but here goes
PyTorch version: 2.6.0a0+git1eba9b3
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Fedora Linux 43 (Workstation Edition Prerelease) (x86_64)
GCC version: (GCC) 15.0.1 20250228 (Red Hat 15.0.1-0)
Clang version: 20.1.0 (Fedora 20.1.0-1.fc43)
CMake version: version 4.0.0-rc4
Libc version: glibc-2.41.9000
Python version: 3.13.2 (main, Feb 6 2025, 00:00:00) [GCC 15.0.1 20250204 (Red Hat 15.0.1-0)] (64-bit runtime)
Python platform: Linux-6.14.0-0.rc6.49.fc43.x86_64-x86_64-with-glibc2.41.9000
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 5975WX 32-Cores
CPU family: 25
Model: 8
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 36%
CPU max MHz: 4561.0000
CPU min MHz: 400.0000
BogoMIPS: 7186.36
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Ghostwrite: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] torch==2.6.0a0+git1eba9b3
[conda] Could not collect
cc @malfet @seemethere | true |
2,940,289,317 | (With PR) `torch.cuda.MemPool()` internal assertion failure when changing devices | fzyzcjy | open | [
"module: cuda",
"triaged"
] | 1 | CONTRIBUTOR | ### Potential cause analysis
I quickly glanced at the code; quick thoughts:
* `torch.cuda.MemPool()` creates a pool while device 0 is current; say it gets mempool_id 111.
* The first call to `use_mem_pool` tells the C++ side to find the mempool with id 111 on device 1 (!), which does not exist, so the C++ side creates a brand-new pool with id 111 on device 1.
* When the first `use_mem_pool` context exits, the refcount of mempool 111 on device 1 decreases by one and becomes zero.
* The second call to `use_mem_pool` finds pool 111 on device 1, but its refcount is zero, hence the assertion failure.
A quick fix may be to add an assertion when using a MemPool: if the user is on the wrong device, throw and forbid the action. A more elaborate fix would be to support pools on different devices.
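A torch-free toy model of the sequence above (all names here are illustrative — this is not the real CUDACachingAllocator): a pool created while device 0 is current has an id that does not exist on device 1, so the first use on device 1 silently creates a fresh pool whose use_count drops back to 0 on exit, and the second use trips the `use_count > 0` assertion.

```python
# Toy model only, illustrating the suspected refcount sequence.
class ToyAllocator:
    def __init__(self):
        self.use_count = {}  # (device, pool_id) -> use_count

    def create_pool(self, device, pool_id):
        self.use_count[(device, pool_id)] = 1  # MemPool() holds one reference

    def begin_allocate_to_pool(self, device, pool_id):
        key = (device, pool_id)
        if key not in self.use_count:
            self.use_count[key] = 0  # the bug: fresh pool, nothing owns it
        else:
            assert self.use_count[key] > 0, \
                "use_count > 0 INTERNAL ASSERT FAILED (toy)"
        self.use_count[key] += 1

    def end_allocate_to_pool(self, device, pool_id):
        self.use_count[(device, pool_id)] -= 1

alloc = ToyAllocator()
alloc.create_pool(0, 111)                 # MemPool() while device 0 is current
alloc.begin_allocate_to_pool(1, 111)      # first use on device 1: fresh pool
alloc.end_allocate_to_pool(1, 111)        # use_count falls back to 0
try:
    alloc.begin_allocate_to_pool(1, 111)  # second use: assertion fires
except AssertionError as e:
    print("reproduced:", e)
```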
### 🐛 Describe the bug
code
```python
import torch
torch.cuda.set_device(0)
pool = torch.cuda.MemPool()
torch.cuda.set_device(1)
with torch.cuda.use_mem_pool(pool):
a = torch.tensor([10, 20], device='cuda')
with torch.cuda.use_mem_pool(pool):
b = torch.tensor([30, 40], device='cuda')
print(f'{a=} {b=}')
```
error
```
RuntimeError: it->second->use_count > 0 INTERNAL ASSERT FAILED at "/pytorch/c10/cuda/CUDACachingAllocator.cpp":2225, please report a bug to PyTorch.
```
full error log
<details>
```
[W322 10:20:27.881420799 Module.cpp:182] symbolizing C++ stack trace for exception; if this hangs, rerun with TORCH_DISABLE_ADDR2LINE=1...
Traceback (most recent call last):
File "/host_home/primary_synced/tom_sglang_server/misc/adhoc_ac3369_mem_pool.py", line 10, in <module>
with torch.cuda.use_mem_pool(pool):
File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/usr/local/lib/python3.10/dist-packages/torch/cuda/memory.py", line 1086, in use_mem_pool
_cuda_beginAllocateToPool(device_index, pool.id)
RuntimeError: it->second->use_count > 0 INTERNAL ASSERT FAILED at "/pytorch/c10/cuda/CUDACachingAllocator.cpp":2225, please report a bug to PyTorch.
Exception raised from ensure_exists_and_incref_pool at /pytorch/c10/cuda/CUDACachingAllocator.cpp:2225 (most recent call first):
C++ CapturedTraceback:
#4 std::_Function_handler<std::shared_ptr<c10::LazyValue<std::string> const> (), c10::SetStackTraceFetcher(std::function<std::string ()>)::{lambda()#1}>::_M_invoke(std::_Any_data const&) from Logging.cpp:0
#5 c10::Error::Error(c10::SourceLocation, std::string) from ??:0
#6 c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) from ??:0
#7 c10::cuda::CUDACachingAllocator::Native::NativeCachingAllocator::beginAllocateToPool(signed char, std::pair<unsigned long long, unsigned long long>, std::function<bool (CUstream_st*)>) from :0
#8 pybind11::cpp_function::initialize<registerCudaPluggableAllocator(_object*)::{lambda(signed char, std::pair<unsigned long long, unsigned long long>)#21}, void, signed char, std::pair<unsigned long long, unsigned long long>, pybind11::name, pybind11::scope, pybind11::sibling>(registerCudaPluggableAllocator(_object*)::{lambda(signed char, std::pair<unsigned long long, unsigned long long>)#21}&&, void (*)(signed char, std::pair<unsigned long long, unsigned long long>), pybind11::name const&, pybind11::scope const&, pybind11::sibling const&)::{lambda(pybind11::detail::function_call&)#3}::_FUN(pybind11::detail::function_call&) from Module.cpp:0
#9 pybind11::cpp_function::dispatcher(_object*, _object*, _object*) from :0
#10 PyObject_CallFunctionObjArgs from ??:0
#11 _PyObject_MakeTpCall from ??:0
#12 _PyEval_EvalFrameDefault from ??:0
#13 _PyUnicode_ToDecimalDigit from ??:0
#14 PyCell_New from ??:0
#15 _PyEval_EvalFrameDefault from ??:0
#16 PyMethod_New from ??:0
#17 _PyEval_EvalFrameDefault from ??:0
#18 PyEval_EvalCode from ??:0
#19 PyEval_EvalCode from ??:0
#20 PyUnicode_Tailmatch from ??:0
#21 PyInit__collections from ??:0
#22 PyUnicode_Tailmatch from ??:0
#23 _PyRun_SimpleFileObject from ??:0
#24 _PyRun_AnyFileObject from ??:0
#25 Py_RunMain from ??:0
#26 Py_BytesMain from ??:0
#27 __libc_start_call_main from ./csu/../sysdeps/nptl/libc_start_call_main.h:58
#28 __libc_start_main_impl from ./csu/../csu/libc-start.c:392
#29 _start from ??:0
```
</details>
### Versions
torch 2.6.0
cc @ptrblck @msaroufim @eqy | true |
2,940,287,457 | `INTERNAL ASSERT FAILED` in `torch.func.vmap` and `torch.scatter_add` | vwrewsge | closed | [
"triaged",
"module: vmap",
"oncall: pt2",
"module: functorch",
"module: pt2-dispatcher"
] | 4 | NONE | ### 🐛 Describe the bug
Code:
```
import torch
def buggy_vmap_fn(input_tensor, index_tensor, src_tensor):
return torch.func.vmap(lambda t: torch.scatter_add(t, 0, index_tensor, src_tensor))(input_tensor)
input_tensor = torch.randn(3)
index_tensor = torch.tensor([0, 1, 2])
src_tensor = torch.tensor([1.0, 2.0, 3.0]) # This will create a real tensor
opt_fn = torch.compile(buggy_vmap_fn, backend="eager", fullgraph=True)
opt_fn(input_tensor, index_tensor, src_tensor)
```
Output:
```
torch._C._functorch.pop_dynamic_layer_stack_and_undo_to_depth(
RuntimeError: !dynamicLayerStack.empty() INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp":219, please report a bug to PyTorch.
```
### Versions
torch 2.6.0
cc @zou3519 @chauhang @penguinwu @Chillee @samdow @kshitij12345 @bdhirsh | true |
2,940,240,700 | `Segmentation fault` in `torch.jit.jit_module_from_flatbuffer` | vwrewsge | open | [
"oncall: jit"
] | 0 | NONE | ### 🐛 Describe the bug
Code:
```
import torch
from torch import nn
simple_model = nn.Sequential(
nn.Linear(10, 20),
nn.BatchNorm2d(5),
nn.ReLU()
)
scripted_model = torch.jit.script(simple_model)
torch.jit.save_jit_module_to_flatbuffer(scripted_model, 'model.ff')
loaded_model = torch.jit.jit_module_from_flatbuffer('model.ff')
sample_input = torch.rand(1, 5, 3, 10)
_ = loaded_model(sample_input)
```
Output:
```
Segmentation fault
```
### Versions
torch 2.6.0
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | true |
2,940,226,203 | bug in pytorch/torch/nn/parameter: | said-ml | closed | [] | 1 | NONE | ### 🐛 Describe the bug
```python
class UninitializedBuffer(UninitializedTensorMixin, torch.Tensor):
r"""A buffer that is not initialized.
Uninitialized Buffer is a special case of :class:`torch.Tensor`
where the shape of the data is still unknown.
Unlike a :class:`torch.Tensor`, uninitialized parameters
hold no data and attempting to access some properties, like their shape,
will throw a runtime error. The only operations that can be performed on an uninitialized
parameter are changing its datatype, moving it to a different device and
converting it to a regular :class:`torch.Tensor`.
The default device or dtype to use when the buffer is materialized can be set
during construction using e.g. ``device='cuda'``.
"""
cls_to_become = torch.Tensor
def __new__(
cls, requires_grad=False, device=None, dtype=None, persistent=True
) -> None:
factory_kwargs = {"device": device, "dtype": dtype}
data = torch.empty(0, **factory_kwargs)
ret = torch.Tensor._make_subclass(cls, data, requires_grad)
ret.persistent = persistent
ret._is_buffer = True
return ret # ret is not None; the "-> None" return annotation above is probably the issue
# suggested fix:
class UninitializedBuffer(UninitializedTensorMixin, torch.Tensor):
# as it is
def __new__(cls, requires_grad=False, device=None, dtype=None, persistent=True):
factory_kwargs = {"device": device, "dtype": dtype}
data = torch.empty(0, **factory_kwargs)
# Ensure we are subclassing correctly
ret = super().__new__(cls, data, requires_grad)
# Set attributes
ret.persistent = persistent
ret._is_buffer = True
return ret # no "-> None" annotation here: __new__ returns the new instance
```
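A pure-Python illustration (no torch; `Widget` is a made-up name) of the point being reported: a `-> None` annotation on `__new__` is only a type hint — Python ignores it at runtime and the returned instance is used anyway, so this reads as a typing/readability problem rather than a functional bug.

```python
# Illustrative only: the misleading annotation does not change behavior.
class Widget:
    def __new__(cls) -> None:  # misleading annotation, as in the report
        ret = super().__new__(cls)
        ret.flag = True
        return ret  # still returned and used despite the "-> None" hint

w = Widget()
print(type(w).__name__, w.flag)  # Widget True
```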
### Versions
wget https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py | true |
2,940,178,868 | Build Extension Failed with setuptools==77.0.3 | AlongWY | open | [
"module: cpp-extensions",
"triaged"
] | 1 | NONE | ### 🐛 Describe the bug
When building DeepSpeed wheels with setuptools==77.0.3, the CUDAExtension build throws this error:
```
File "/opt/python/cp39-cp39/lib/python3.9/site-packages/setuptools/_distutils/command/sdist.py", line 245, in add_defaults
self._add_defaults_ext()
File "/opt/python/cp39-cp39/lib/python3.9/site-packages/setuptools/_distutils/command/sdist.py", line 329, in _add_defaults_ext
build_ext = self.get_finalized_command('build_ext')
File "/opt/python/cp39-cp39/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 333, in get_finalized_command
cmd_obj = self.distribution.get_command_obj(command, create)
File "/opt/python/cp39-cp39/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 885, in get_command_obj
cmd_obj = self.command_obj[command] = klass(self)
File "/opt/python/cp39-cp39/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 397, in __init__
super().__init__(*args, **kwargs)
File "/opt/python/cp39-cp39/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 402, in __init__
super(BuildExtension, self).__init__(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'use_ninja'
```
### How to Reproduce
The reproduction below uses the `BuildExtension` code from [pytorch](https://github.com/pytorch/pytorch/blob/6bbe8dbd63f25e10ef75252a89ac277feff59ba1/torch/utils/cpp_extension.py#L540):
```python
from distutils.dist import Distribution
from setuptools.command.build_ext import build_ext
class BuildExtension(build_ext):
@classmethod
def with_options(cls, **options):
"""Return a subclass with alternative constructor that extends any original keyword arguments to the original constructor with the given options."""
class cls_with_options(cls): # type: ignore[misc, valid-type]
def __init__(self, *args, **kwargs):
kwargs.update(options)
super().__init__(*args, **kwargs)
return cls_with_options
def __init__(self, *args, **kwargs) -> None:
super().__init__(*args, **kwargs)
self.no_python_abi_suffix = kwargs.get("no_python_abi_suffix", False)
self.use_ninja = kwargs.get("use_ninja", True)
d = Distribution()
build_class = BuildExtension.with_options(use_ninja=True)
build_class(d)
```
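A torch-free sketch of the failure mode and the usual remedy (the class names below are illustrative, not the real setuptools/torch classes): `with_options()` forwards `use_ninja` through `**kwargs`, but the base `__init__` no longer accepts extra keyword arguments, so the fix is to pop custom options before delegating to `super().__init__`.

```python
# Hedged sketch: BaseBuildExt stands in for setuptools' build_ext, whose
# constructor (as of 77.0.3) takes only the distribution object.
class BaseBuildExt:
    def __init__(self, dist):
        self.dist = dist

class BuildExtension(BaseBuildExt):
    @classmethod
    def with_options(cls, **options):
        class cls_with_options(cls):
            def __init__(self, *args, **kwargs):
                kwargs.update(options)
                super().__init__(*args, **kwargs)
        return cls_with_options

    def __init__(self, *args, **kwargs):
        # pop (not get) custom options so the base class never sees them
        self.use_ninja = kwargs.pop("use_ninja", True)
        self.no_python_abi_suffix = kwargs.pop("no_python_abi_suffix", False)
        super().__init__(*args, **kwargs)

build_class = BuildExtension.with_options(use_ninja=False)
obj = build_class("dummy-distribution")
print(obj.use_ninja, obj.dist)  # False dummy-distribution
```

With `kwargs.get(...)` (as in the quoted torch code), `use_ninja` stays in `kwargs` and reaches a base constructor that no longer tolerates it, producing exactly the `TypeError: __init__() got an unexpected keyword argument 'use_ninja'` above.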
### Versions
+ A successful GitHub Actions run with `setuptools<=77.0.1` is [here](https://github.com/AlongWY/deepspeed_wheels/actions/runs/14004770472)
+ The failing GitHub Actions run is [here](https://github.com/AlongWY/deepspeed_wheels/actions/runs/13989692342)
### Related issue
https://github.com/pypa/setuptools/issues/4908#issue-2940177305
cc @malfet @zou3519 @xmfan | true |
2,940,124,047 | avoid allocation when tensor_new from storage | ppwwyyxx | closed | [
"open source",
"Merged",
"topic: not user facing"
] | 4 | COLLABORATOR | null | true |
2,940,014,555 | Cannot compile SGlang with Torch 2.7 or Torch 2.8 and CUDA 12.8 (sm_120). | shahizat | open | [
"module: build",
"triaged"
] | 0 | NONE | ### 🐛 Describe the bug
Greetings to all,
I want to build SGlang from source on a machine with an Nvidia RTX 5090. With torch 2.6 the build succeeds, but torch 2.6 does not work with Triton 3.3 and CUDA 12.8 with sm_120 support. The errors below appear with torch 2.7 and the latest 2.8.
Error logs related to torch:
```
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(180): error: identifier "PyFrame_FastToLocalsWithError" is undefined
if (PyFrame_FastToLocalsWithError(frame) < 0) {
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(186): error: pointer or reference to incomplete type "_frame" is not allowed
return _Py_NewRef(((PyObject*)(frame->f_locals)));
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(195): error: pointer or reference to incomplete type "_frame" is not allowed
return _Py_NewRef(((PyObject*)(frame->f_globals)));
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(204): error: pointer or reference to incomplete type "_frame" is not allowed
return _Py_NewRef(((PyObject*)(frame->f_builtins)));
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(217): error: pointer or reference to incomplete type "_frame" is not allowed
if (frame->f_lasti < 0) {
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(220): error: pointer or reference to incomplete type "_frame" is not allowed
return frame->f_lasti * 2;
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(346): error: pointer or reference to incomplete type "_ts" is not allowed
tstate->tracing++;
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(348): error: pointer or reference to incomplete type "_ts" is not allowed
tstate->cframe->use_tracing = 0;
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(359): error: pointer or reference to incomplete type "_ts" is not allowed
int use_tracing = (tstate->c_tracefunc != nullptr
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(360): error: pointer or reference to incomplete type "_ts" is not allowed
|| tstate->c_profilefunc != nullptr);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(361): error: pointer or reference to incomplete type "_ts" is not allowed
tstate->tracing--;
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(363): error: pointer or reference to incomplete type "_ts" is not allowed
tstate->cframe->use_tracing = use_tracing;
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(475): error: identifier "_PyFloat_Pack2" is undefined
{ return _PyFloat_Pack2(x, (unsigned char*)p, le); }
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(478): error: identifier "_PyFloat_Unpack2" is undefined
{ return _PyFloat_Unpack2((const unsigned char *)p, le); }
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(489): error: identifier "_PyFloat_Pack4" is undefined
{ return _PyFloat_Pack4(x, (unsigned char*)p, le); }
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(492): error: identifier "_PyFloat_Pack8" is undefined
{ return _PyFloat_Pack8(x, (unsigned char*)p, le); }
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(495): error: identifier "_PyFloat_Unpack4" is undefined
{ return _PyFloat_Unpack4((const unsigned char *)p, le); }
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(498): error: identifier "_PyFloat_Unpack8" is undefined
{ return _PyFloat_Unpack8((const unsigned char *)p, le); }
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(506): error: pointer or reference to incomplete type "PyCodeObject" is not allowed
return _Py_NewRef(((PyObject*)(code->co_code)));
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(515): error: pointer or reference to incomplete type "PyCodeObject" is not allowed
return _Py_NewRef(((PyObject*)(code->co_varnames)));
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(523): error: pointer or reference to incomplete type "PyCodeObject" is not allowed
return _Py_NewRef(((PyObject*)(code->co_freevars)));
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(531): error: pointer or reference to incomplete type "PyCodeObject" is not allowed
return _Py_NewRef(((PyObject*)(code->co_cellvars)));
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(678): error: identifier "_PyObject_LookupAttr" is undefined
return _PyObject_LookupAttr(obj, attr_name, result);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(863): error: identifier "_Py_IsFinalizing" is undefined
return _Py_IsFinalizing();
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(902): error: identifier "_PyLong_AsInt" is undefined
return _PyLong_AsInt(obj);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(913): error: identifier "_PyObject_GetDictPtr" is undefined
PyObject **dict = _PyObject_GetDictPtr(obj);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(924): error: identifier "_PyObject_GetDictPtr" is undefined
PyObject **dict = _PyObject_GetDictPtr(obj);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(938): error: identifier "_PyThreadState_UncheckedGet" is undefined
return _PyThreadState_UncheckedGet();
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(958): error: identifier "PyUnicode_IS_ASCII" is undefined
if (PyUnicode_IS_ASCII(unicode)) {
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(959): error: identifier "PyUnicode_DATA" is undefined
utf8 = PyUnicode_DATA(unicode);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(960): error: identifier "PyUnicode_GET_LENGTH" is undefined
len = PyUnicode_GET_LENGTH(unicode);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(963): error: identifier "PyUnicode_AsUTF8AndSize" is undefined
utf8 = PyUnicode_AsUTF8AndSize(unicode, &len);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1054): error: identifier "_PyDict_Pop" is undefined
value = _PyDict_Pop(dict, key,
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1104): error: identifier "_Py_HashPointer" is undefined
return _Py_HashPointer(ptr);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1115): error: identifier "_PyTime_t" is undefined
typedef _PyTime_t PyTime_t;
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1120): error: identifier "_PyTime_AsSecondsDouble" is undefined
{ return _PyTime_AsSecondsDouble(t); }
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1123): error: identifier "_PyTime_GetMonotonicClockWithInfo" is undefined
{ return _PyTime_GetMonotonicClockWithInfo(result,
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1126): error: identifier "_PyTime_GetSystemClockWithInfo" is undefined
{ return _PyTime_GetSystemClockWithInfo(result,
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1131): error: identifier "_PyTime_GetPerfCounterWithInfo" is undefined
return _PyTime_GetPerfCounterWithInfo(result,
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1353): error: identifier "_PyUnicodeWriter" is undefined
_PyUnicodeWriter_Dealloc((_PyUnicodeWriter*)writer);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1353): error: expected an expression
_PyUnicodeWriter_Dealloc((_PyUnicodeWriter*)writer);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1353): error: expected a ")"
_PyUnicodeWriter_Dealloc((_PyUnicodeWriter*)writer);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1353): error: identifier "_PyUnicodeWriter_Dealloc" is undefined
_PyUnicodeWriter_Dealloc((_PyUnicodeWriter*)writer);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1365): error: identifier "_PyUnicodeWriter" is undefined
const size_t size = sizeof(_PyUnicodeWriter);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1371): error: identifier "writer" is undefined
_PyUnicodeWriter *writer = (_PyUnicodeWriter *)pub_writer;
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1371): error: expected an expression
_PyUnicodeWriter *writer = (_PyUnicodeWriter *)pub_writer;
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1371): error: expected a ";"
_PyUnicodeWriter *writer = (_PyUnicodeWriter *)pub_writer;
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1373): error: identifier "_PyUnicodeWriter_Init" is undefined
_PyUnicodeWriter_Init(writer);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1374): error: identifier "_PyUnicodeWriter_Prepare" is undefined
if (_PyUnicodeWriter_Prepare(writer, length, 127) < 0) {
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1384): error: identifier "_PyUnicodeWriter" is undefined
PyObject *str = _PyUnicodeWriter_Finish((_PyUnicodeWriter*)writer);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1384): error: expected an expression
PyObject *str = _PyUnicodeWriter_Finish((_PyUnicodeWriter*)writer);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1384): error: expected a ")"
PyObject *str = _PyUnicodeWriter_Finish((_PyUnicodeWriter*)writer);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1384): error: identifier "_PyUnicodeWriter_Finish" is undefined
PyObject *str = _PyUnicodeWriter_Finish((_PyUnicodeWriter*)writer);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1399): error: identifier "_PyUnicodeWriter" is undefined
return _PyUnicodeWriter_WriteChar((_PyUnicodeWriter*)writer, ch);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1399): error: expected an expression
return _PyUnicodeWriter_WriteChar((_PyUnicodeWriter*)writer, ch);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1399): error: expected a ")"
return _PyUnicodeWriter_WriteChar((_PyUnicodeWriter*)writer, ch);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1399): error: identifier "_PyUnicodeWriter_WriteChar" is undefined
return _PyUnicodeWriter_WriteChar((_PyUnicodeWriter*)writer, ch);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1410): error: identifier "_PyUnicodeWriter" is undefined
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1410): error: expected an expression
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1410): error: expected a ")"
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1410): error: identifier "_PyUnicodeWriter_WriteStr" is undefined
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1410): error: "str" has already been declared in the current scope
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1410): error: expected a ";"
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1411): warning #549-D: variable "str" is used before its value is set
_Py_DECREF(((PyObject*)(str)));
^
Remark: The warnings can be suppressed with "-diag-suppress <warning-number>"
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1423): error: identifier "_PyUnicodeWriter" is undefined
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1423): error: expected an expression
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1423): error: expected a ")"
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1423): error: identifier "_PyUnicodeWriter_WriteStr" is undefined
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1423): error: "str" has already been declared in the current scope
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1423): error: expected a ";"
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1424): warning #549-D: variable "str" is used before its value is set
_Py_DECREF(((PyObject*)(str)));
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1441): error: identifier "_PyUnicodeWriter" is undefined
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str_obj);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1441): error: expected an expression
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str_obj);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1441): error: expected a ")"
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str_obj);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1441): error: identifier "_PyUnicodeWriter_WriteStr" is undefined
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str_obj);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1441): error: "str_obj" has already been declared in the current scope
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str_obj);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1441): error: expected a ";"
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str_obj);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1442): warning #549-D: variable "str_obj" is used before its value is set
_Py_DECREF(((PyObject*)(str_obj)));
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1459): error: identifier "_PyUnicodeWriter" is undefined
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str_obj);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1459): error: expected an expression
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str_obj);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1459): error: expected a ")"
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str_obj);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1459): error: identifier "_PyUnicodeWriter_WriteStr" is undefined
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str_obj);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1459): error: "str_obj" has already been declared in the current scope
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str_obj);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1459): error: expected a ";"
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str_obj);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1460): warning #549-D: variable "str_obj" is used before its value is set
_Py_DECREF(((PyObject*)(str_obj)));
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1476): error: identifier "PyUnicode_GET_LENGTH" is undefined
if (end > PyUnicode_GET_LENGTH(str)) {
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1481): error: identifier "_PyUnicodeWriter" is undefined
return _PyUnicodeWriter_WriteSubstring((_PyUnicodeWriter*)writer, str,
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1481): error: expected an expression
return _PyUnicodeWriter_WriteSubstring((_PyUnicodeWriter*)writer, str,
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1481): error: expected a ")"
return _PyUnicodeWriter_WriteSubstring((_PyUnicodeWriter*)writer, str,
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1481): error: identifier "_PyUnicodeWriter_WriteSubstring" is undefined
return _PyUnicodeWriter_WriteSubstring((_PyUnicodeWriter*)writer, str,
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1496): error: identifier "_PyUnicodeWriter" is undefined
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1496): error: expected an expression
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1496): error: expected a ")"
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1496): error: identifier "_PyUnicodeWriter_WriteStr" is undefined
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1496): error: "str" has already been declared in the current scope
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1496): error: expected a ";"
int res = _PyUnicodeWriter_WriteStr((_PyUnicodeWriter*)writer, str);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1497): warning #549-D: variable "str" is used before its value is set
_Py_DECREF(((PyObject*)(str)));
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1507): error: pointer or reference to incomplete type "_typeobject" is not allowed
PyErr_Format(PyExc_TypeError, "expect int, got %s", (((PyObject*)(obj))->ob_type)->tp_name);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/torch/csrc/utils/pythoncapi_compat.h(1511): error: identifier "_PyLong_Sign" is undefined
*sign = _PyLong_Sign(obj);
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/pybind11/buffer_info.h(107): error: identifier "Py_buffer" is undefined
explicit buffer_info(Py_buffer *view, bool ownview = true)
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/pybind11/buffer_info.h(153): error: identifier "Py_buffer" is undefined
Py_buffer *view() const { return m_view; }
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/pybind11/buffer_info.h(154): error: identifier "Py_buffer" is undefined
Py_buffer *&view() { return m_view; }
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/pybind11/buffer_info.h(181): error: identifier "Py_buffer" is undefined
Py_buffer *m_view = nullptr;
^
/home/admin2/.virtualenvs/sglang/lib/python3.10/site-packages/torch/include/pybind11/buffer_info.h(118): error: more than one user-defined conversion from "<error-type>" to "std::vector<int64_t, std::allocator<int64_t>>" (aka "std::vector<signed long, std::allocator<signed long>>") applies:
? std::vector<ssize_t>(view->strides, view->strides + view->ndim)
^
```
### Versions
PyTorch version: 2.7.0a0+gitcdd7a2c
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 5090
Nvidia driver version: 570.124.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 3970X 32-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4549,1211
CPU min MHz: 2200,0000
BogoMIPS: 7400.09
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.14.1
[pip3] torch==2.7.0a0+gitcdd7a2c
[pip3] torchao==0.9.0
[pip3] triton==3.3.0+gite1964461
[conda] Could not collect
cc @malfet @seemethere | true |
2,939,989,403 | Remove outdated instructions from CI scripts | cyyever | closed | [
"open source",
"Merged",
"release notes: releng"
] | 3 | COLLABORATOR | Instructions referring to Python 3.8 and CUDA 11.3 have been removed. | true |
2,939,953,493 | [MPS/inductor] Add support for modified_scaled_bessel_k{0,1} | dcci | closed | [
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor"
] | 3 | MEMBER | cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,939,950,965 | [CUDA][SymmetricMemory] Interpret empty string as `std::nullopt` in `rendezvous` | eqy | closed | [
"oncall: distributed",
"module: cuda",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"topic: not user facing"
] | 13 | COLLABORATOR | This is a "temporary" fix, as the current internal API requires strings at some interfaces instead of `std::optional`, and empty strings are presumably used in lieu of `nullopt`.
e.g.,
https://github.com/pytorch/pytorch/blob/9d02b3993f7dae7fa3379d5190ac88291ecd4dce/torch/csrc/distributed/c10d/intra_node_comm.cu#L49
This currently breaks `test_intra_node_comm_all_reduce`.
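The intent of the fix can be sketched in Python (the helper name here is illustrative, not the actual C++ API): at a string-typed boundary, an empty group name should be normalized back to "not provided" before it reaches `rendezvous`.

```python
from typing import Optional

def normalize_group_name(name: str) -> Optional[str]:
    """Illustrative helper: treat "" at a string-typed interface as "not provided"."""
    return name if name else None

print(normalize_group_name(""))         # None
print(normalize_group_name("default"))  # default
```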
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ptrblck @msaroufim | true |
2,939,936,349 | [dynamo] Always trace into tensor subclass `__torch_function__` | StrongerXi | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"ci-no-td"
] | 22 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149792
* #149484
* #149483
* #149482
This patch effectively ignores `traceable_tensor_subclasses`, allowing
Dynamo to always try tracing into the `__torch_function__` of tensor
subclasses. This helps us with two things:
1. allowing users to directly benefit from better compilation of tensor
subclasses by just upgrading PyTorch, without having to change legacy
library code (see earlier patches in the stack for examples).
2. potentially exposing more issues in compiling tensor subclasses, so we
can get signals and improve them.
As a consequence, it exposed and fixed two subtle bugs:
1. In `build_torch_function_fn`, we could get
`torch._C._disabled_torch_function_impl` because we have a
`Parameter` subclass without `__torch_function__` override or if we
have a tensor subclass with `__torch_dispatch__` override. We graph
break on this for now, and plan to add support -- the logic for
simulating `torch._C._disabled_torch_function_impl` is already in
`SuperVariable`, we just need to reuse it.
2. Sometimes we create `SyntheticLocalSource` and need to remove all the
guards installed on it, but we only removed the ones whose source
_is_ the created synthetic source `s`, forgetting about chained
sources like `s.foo`; this showed up as
`SYNTHETIC_LOCAL['tmp_0'].__torch_function__.__func__`.
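A minimal eager-mode illustration of the `__torch_function__` protocol this patch lets Dynamo trace into; the subclass here is illustrative and not part of the PR:

```python
import torch

class LoggingTensor(torch.Tensor):
    # illustrative subclass: records each torch op that is routed
    # through its __torch_function__ override
    calls = []

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        cls.calls.append(getattr(func, "__name__", str(func)))
        return super().__torch_function__(func, types, args, kwargs or {})

x = torch.ones(3).as_subclass(LoggingTensor)
y = torch.sin(x)  # dispatches to LoggingTensor.__torch_function__
print(LoggingTensor.calls)
```

With this patch, `torch.compile` attempts to trace straight into such overrides without the subclass having been registered in `traceable_tensor_subclasses`.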
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D71906141](https://our.internmc.facebook.com/intern/diff/D71906141) | true |
2,939,936,277 | [dynamo] Fix handling of setattr with some tensor attributes | StrongerXi | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149792
* #149484
* #149483
* #149482
* __->__ #149791
* #149481
We weren't handling `setattr(tensor_obj, "real", 42)` correctly, because
the attribute is a `GetSetDescriptorType` that has special setter logic.
See the added test and comments for more explanation.
This patch makes it so that we graph break in those cases, rather than
resulting in silent incorrectness.
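For background, a `GetSetDescriptorType` attribute routes `setattr` through C-level setter logic rather than a plain instance-dict store, which is why it needs dedicated handling; a plain-Python sketch using a stdlib getset descriptor:

```python
import types

# C-implemented attributes such as function.__code__ are exposed as getset descriptors
desc = vars(types.FunctionType)["__code__"]
print(isinstance(desc, types.GetSetDescriptorType))  # True

# setattr on such an attribute runs the descriptor's __set__ (C-level
# validation), not a simple dict write -- here it validates and swaps in
# a new code object
def f():
    return 1

def g():
    return 2

f.__code__ = g.__code__
print(f())  # 2
```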
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,939,920,336 | [inductor] Add the largest matmul tile size to default tuning set | bertmaher | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | While we probably don't want to expand the set of default matmul tunings too much, this is the largest tile size usable by H100 and A100, and it is usually the top-performing tile size for large matmuls. E.g., on H100, adding this tile size improves the perf of multiplying 8192-square matrices from 600 to 700 TFLOPS (cuBLAS 12.6 gets 780, so Triton still isn't SOTA, but closer).
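For scale, a back-of-envelope check of the quoted numbers, using the standard 2·N³ FLOP count for an N×N matmul:

```python
N = 8192
flops = 2 * N**3                   # one multiply + one add per (i, j, k)
for tflops in (600, 700, 780):     # Triton before/after, cuBLAS 12.6
    ms = flops / (tflops * 1e12) * 1e3
    print(f"{tflops} TFLOP/s -> {ms:.2f} ms per 8192-square matmul")
```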
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,939,898,879 | flex_attention create_block_mask() + inductor: integer division or modulo by zero | rmmr | closed | [
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 1 | NONE | ### 🐛 Describe the bug
This occurs very randomly! However, I managed to reproduce it. Please run the snippet below multiple times if no error happens the first time; if using a notebook, restart the kernel before rerunning.
```python
import torch
import torch._inductor.utils
from torch.nn.attention.flex_attention import (
BlockMask,
create_block_mask,
)
create_block_mask = torch.compile(create_block_mask)
def create_block_mask_from_seqlens(
q_seqlen: torch.Tensor,
kv_seqlen: torch.Tensor,
) -> BlockMask:
device = q_seqlen.device
B, H = None, None
q_batch = torch.arange(q_seqlen.size(0), device=device).repeat_interleave(q_seqlen)
kv_batch = torch.arange(kv_seqlen.size(0), device=device).repeat_interleave(
kv_seqlen
)
Q_LEN = q_batch.size(0)
KV_LEN = kv_batch.size(0)
def batch_mask_mod(
b: torch.Tensor,
h: torch.Tensor,
q_idx: torch.Tensor,
kv_idx: torch.Tensor,
):
q_idx_batch = q_batch[q_idx]
kv_idx_batch = kv_batch[kv_idx]
batch_mask = (
(q_idx_batch == kv_idx_batch) & (q_idx_batch != -1) & (kv_idx_batch != -1)
)
return batch_mask
return create_block_mask(
batch_mask_mod,
B=B,
H=H,
Q_LEN=Q_LEN,
KV_LEN=KV_LEN,
)
a = torch.tensor([2, 42, 18, 21, 4, 2, 7, 1, 1]).cuda()
b = torch.tensor([57, 21, 16, 8]).cuda()
with torch._inductor.utils.fresh_inductor_cache():
for seqlen in [a, b]:
create_block_mask_from_seqlens(q_seqlen=seqlen, kv_seqlen=seqlen)
torch.cuda.synchronize()
```
This should output:
```
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] Error in codegen for ComputedBuffer(name='buf1', layout=FixedLayout('cuda:0', torch.int64, size=[1, 1, ((s0 + 127)//128), ((s1 + 127)//128), 2], stride=[2*(((s0 + 127)//128))*(((s1 + 127)//128)), 2*(((s0 + 127)//128))*(((s1 + 127)//128)), 2*(((s1 + 127)//128)), 2, 1]), data=Reduction(
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] 'cuda',
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] torch.int64,
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] def inner_fn(index, rindex):
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] _, _, i2, i3, i4 = index
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] r0_0 = rindex
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp0 = ops.index_expr(i3 + FloorDiv(127 + s1, 128) * ModularIndexing(r0_0 + 8192 * i4, 128, 128), torch.int64)
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp1 = ops.index_expr(s0, torch.int64)
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp2 = tmp0 < tmp1
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp3 = ops.index_expr(ModularIndexing(r0_0, 1, 128), torch.int64)
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp4 = ops.index_expr(s1, torch.int64)
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp5 = tmp3 < tmp4
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp6 = tmp2 & tmp5
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp7 = ops.index_expr(i3 + FloorDiv(127 + s1, 128) * ModularIndexing(r0_0 + 8192 * i4, 128, 128), torch.int64)
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp8 = ops.load(arg5_1, tmp7)
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp9 = ops.index_expr(ModularIndexing(r0_0, 1, 128), torch.int64)
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp10 = ops.load(arg3_1, tmp9)
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp11 = tmp8 == tmp10
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp12 = ops.index_expr(i3 + FloorDiv(127 + s1, 128) * ModularIndexing(r0_0 + 8192 * i4, 128, 128), torch.int64)
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp13 = ops.load(arg5_1, tmp12)
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp14 = ops.constant(-1, torch.int64)
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp15 = tmp13 != tmp14
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp16 = ops.bitwise_and(tmp11, tmp15)
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp17 = ops.index_expr(ModularIndexing(r0_0, 1, 128), torch.int64)
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp18 = ops.load(arg3_1, tmp17)
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp19 = ops.constant(-1, torch.int64)
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp20 = tmp18 != tmp19
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp21 = ops.bitwise_and(tmp16, tmp20)
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp22 = ops.masked(tmp6, tmp21, False)
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] tmp23 = ops.to_dtype(tmp22, torch.int64, src_dtype=torch.bool)
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] return tmp23
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] ,
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] ranges=[1, 1, ((s0 + 127)//128), ((s1 + 127)//128), 2],
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] reduction_ranges=[8192],
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] reduction_type=sum,
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] origin_node=None,
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] origins=OrderedSet([sum_1])
C0322 01:40:15.273000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py:1172] [0/1] ))
W0322 01:40:15.277000 113731 .venv/lib/python3.12/site-packages/torch/_inductor/utils.py:972] on error, temporary cache dir kept at /tmp/tmpn8bibj7d
Traceback (most recent call last):
File "/home/raymond/ttz/issue.py", line 70, in <module>
_create_block_mask(q_seqlen=seqlen, kv_seqlen=seqlen)
File "/home/raymond/ttz/issue.py", line 55, in _create_block_mask
return create_block_mask(
^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 663, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 655, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1453, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1234, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 619, in __call__
return _compile(
^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1080, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 782, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 818, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 264, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 736, in transform
tracer.run()
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3502, in run
super().run()
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1337, in run
while self.step():
^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1246, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3703, in RETURN_VALUE
self._return(inst)
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3688, in _return
self.output.compile_subgraph(
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1179, in compile_subgraph
self.compile_and_call_fx_graph(
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1437, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1487, in call_user_compiler
return self._call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1519, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/__init__.py", line 2357, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 2152, in compile_fx
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 2140, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/backends/common.py", line 101, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1163, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/autograd_cache.py", line 775, in load
compiled_fn = dispatch_and_compile()
^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1148, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 573, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 823, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 219, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 482, in __call__
return self.compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1987, in fw_compiler_base
return inner_compile(
^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 639, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 771, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 756, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1338, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1226, in codegen_and_compile
compiled_module = graph.compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2085, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2093, in _compile_to_module
self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2004, in codegen
self.scheduler.codegen()
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py", line 4175, in codegen
else self._codegen(self.nodes)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py", line 4311, in _codegen
self.get_backend(device).codegen_node(node)
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/codegen/cuda_combined_scheduling.py", line 104, in codegen_node
return self._triton_scheduling.codegen_node(node)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/codegen/simd.py", line 1317, in codegen_node
return self.codegen_node_schedule(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/codegen/simd.py", line 1358, in codegen_node_schedule
self.codegen_node_schedule_with_kernel(node_schedule, kernel)
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/codegen/simd.py", line 1458, in codegen_node_schedule_with_kernel
node.codegen(index_vars)
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py", line 1170, in codegen
self._body(*index_vars)
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/loop_body.py", line 407, in __call__
result = self.root_block()
^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/loop_body.py", line 476, in __call__
return InterpreterShim(graph, submodules).run(V.get_ops_handler())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/loop_body.py", line 60, in run
return super().run(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/fx/interpreter.py", line 171, in run
self.env[node] = self.run_node(node)
^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/loop_body.py", line 56, in run_node
return super().run_node(n)
^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/fx/interpreter.py", line 240, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/fx/interpreter.py", line 344, in call_method
return getattr(self_obj, target)(*args_tail, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/sizevars.py", line 919, in store_reduction
return self._inner.store_reduction(name, self._simplify(index), value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/codegen/common.py", line 2486, in store_reduction
return self.kernel.store_reduction(name, index, value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/codegen/triton.py", line 2955, in store_reduction
indexing = self.indexing(index, block_ptr=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/codegen/triton.py", line 1761, in indexing
index = self.prepare_indexing(index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/codegen/simd.py", line 787, in prepare_indexing
index = self.simplify_indexing(index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/codegen/simd.py", line 385, in simplify_indexing
index = self.combine_contiguous_dims(index, tree)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/codegen/simd.py", line 550, in combine_contiguous_dims
return self._combine_contiguous_dims(index, tree)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/codegen/simd.py", line 568, in _combine_contiguous_dims
new_index_vars = tree.construct(new_sizes)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/codegen/simd.py", line 224, in construct
return [e.symbol() for e in self.construct_entries(lengths)]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/codegen/simd.py", line 219, in construct_entries
itervars.append(self.lookup(divisor, length))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/_inductor/codegen/simd.py", line 197, in lookup
expr = ModularIndexing(self.index_sym(), divisor, length)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/sympy/core/cache.py", line 72, in wrapper
retval = cfunc(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/sympy/core/function.py", line 466, in __new__
result = super().__new__(cls, *args, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/sympy/core/cache.py", line 72, in wrapper
retval = cfunc(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/sympy/core/function.py", line 307, in __new__
evaluated = cls.eval(*args)
^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/utils/_sympy/functions.py", line 327, in eval
return ModularIndexing(
^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/sympy/core/cache.py", line 72, in wrapper
retval = cfunc(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/sympy/core/function.py", line 466, in __new__
result = super().__new__(cls, *args, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/sympy/core/cache.py", line 72, in wrapper
retval = cfunc(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/sympy/core/function.py", line 307, in __new__
evaluated = cls.eval(*args)
^^^^^^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/torch/utils/_sympy/functions.py", line 321, in eval
return (base // divisor) % modulus
~~~~~^^~~~~~~~~
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/sympy/core/decorators.py", line 65, in __sympifyit_wrapper
return func(a, b)
^^^^^^^^^^
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/sympy/core/numbers.py", line 2118, in __floordiv__
return Integer(self.p // other)
~~~~~~~^^~~~~~~
File "/home/raymond/ttz/.venv/lib/python3.12/site-packages/sympy/core/numbers.py", line 2122, in __rfloordiv__
return Integer(Integer(other).p // self.p)
~~~~~~~~~~~~~~~~~^^~~~~~~~
torch._inductor.exc.InductorError: ZeroDivisionError: integer division or modulo by zero
```
### Versions
Collecting environment information...
PyTorch version: 2.8.0.dev20250321+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.9 (main, Feb 5 2025, 08:49:00) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-134-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 570.86.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X3D 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU max MHz: 5758.5928
CPU min MHz: 3000.0000
BogoMIPS: 8384.36
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 128 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] mypy-protobuf==3.6.0
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.8.0.87
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] pytorch-lightning==2.5.1
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250321+cu128
[pip3] torchmetrics==1.7.0
[conda] Could not collect
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng | true |
2,939,829,977 | `torch.vmap` does not work with tensor subclasses as expected | hchau630 | open | [
"triaged",
"tensor subclass",
"module: functorch"
] | 4 | NONE | ### 🐛 Describe the bug
vmapped functions do not seem to call `__torch_function__` for tensor subclasses. To illustrate this, I combined the [tutorial](https://pytorch.org/docs/stable/notes/extending.html#subclassing-torch-tensor) for tensor subclassing and the [example](https://pytorch.org/docs/stable/generated/torch.vmap.html) for using `torch.vmap`.
```
import torch
class LoggingTensor(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        if func is not torch.Tensor.__repr__:
            print(f"func: {func.__name__}")
        if kwargs is None:
            kwargs = {}
        return super().__torch_function__(func, types, args, kwargs)
batched_dot = torch.vmap(torch.dot)
x, y = LoggingTensor(torch.randn(2, 5)), LoggingTensor(torch.randn(2, 5))
out = batched_dot(x, y)
print(out)
x, y = LoggingTensor(torch.randn(5)), LoggingTensor(torch.randn(5))
out = torch.dot(x, y)
print(out)
```
I would expect `func: dot` to be printed for both the `batched_dot` and the `torch.dot` calls, but this is the output I get instead:
```
func: dim
func: dim
func: dim
func: dim
func: size
func: size
tensor([-2.6343, -1.3578])
func: dot
LoggingTensor(-1.2882)
```
This means that any pytorch function override I implement in the tensor subclass will not be called when that function is wrapped in vmap, which does not seem correct.
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 12.0.0 (clang-1200.0.32.28)
CMake version: version 3.31.3
Libc version: N/A
Python version: 3.11.11 | packaged by conda-forge | (main, Mar 3 2025, 20:44:07) [Clang 18.1.8 ] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] torch==2.6.0
[conda] numpy 2.2.4 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
```
cc @ezyang @albanD @zou3519 @Chillee @samdow @kshitij12345 | true |
2,939,827,292 | Slower Mixed Precision Performance with pytorch installed via pip vs. conda | stas-sl | closed | [
"needs reproduction"
] | 2 | NONE | I’ve observed a significant performance difference when running mixed precision workloads with pytorch installed via pip compared to the same version installed via conda. The pip installation is substantially slower in mixed precision (float16) on a GTX 1080 GPU (yeah, it's old). This issue does not appear in full precision (float32), where performance is comparable between the two installation methods.
I installed pytorch like this:
```bash
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124
# or
conda install pytorch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 pytorch-cuda=12.4 -c pytorch -c nvidia
```
Here are my benchmarks:
| Model | Precision | pytorch-2.5.1 conda | pytorch-2.5.1 pip |
|--------------------|----------------------|---------------------|-------------------|
| efficientnet_v2_s | float32 | 154ms | 150ms |
| efficientnet_v2_s | mixed precision (float16) | 125ms | 233ms |
| resnet101 | float32 | 225ms | 230ms |
| resnet101 | mixed precision (float16) | 179ms | 209ms |
As you can see:
- In mixed precision, the pip installation is 86% slower for efficientnet_v2_s (233ms vs. 125ms) and 17% slower for resnet101 (209ms vs. 179ms).
- In full precision, the performance difference is negligible (±2-3%).
Here is my script:
```python
import time
import torch
import numpy as np
from tqdm import tqdm
from torchvision.models import resnet101, efficientnet_v2_s
model = efficientnet_v2_s()
# model = resnet101()
model.cuda()
model.eval()
batch_size = 64
num_steps = 50
warmup_steps = 5
input_size = 224
step_times = []
with torch.no_grad(), torch.amp.autocast('cuda', dtype=torch.float16), tqdm(range(num_steps)) as pbar:
    for step in pbar:
        torch.cuda.synchronize()
        start_time = time.time()
        inputs = torch.randn(batch_size, 3, input_size, input_size).cuda()
        outputs = model(inputs)
        torch.cuda.synchronize()
        step_time = time.time() - start_time
        if step >= warmup_steps:
            step_times.append(step_time)
            avg_time = np.mean(step_times)
            throughput = batch_size / avg_time
            pbar.set_postfix({'time': f'{round(avg_time * 1000)}ms'})
print(f'Average time per step: {round(np.mean(step_times) * 1000)}ms')
```
Do you have any idea why this performance discrepancy might occur? This is more relevant now for me since support for conda installations is being discontinued, leaving pip as the only available installation method in the future.
### Versions
[pytorch-2.5.1 (pip).txt](https://github.com/user-attachments/files/19399073/pytorch-2.5.1.pip.txt)
[pytorch-2.5.1 (conda).txt](https://github.com/user-attachments/files/19399074/pytorch-2.5.1.conda.txt) | true |
2,939,818,329 | CudaGraphs Failing on Blackwell | drisspg | closed | [
"module: cuda",
"triaged",
"module: cuda graphs",
"Blackwell"
] | 2 | CONTRIBUTOR | # Summary
Run repro:
```py
import torch
def func(a):
    return torch.softmax(a, dim=-1, dtype=torch.float32)

a = torch.randn(4, 16, dtype=torch.float16, device="cuda")
g = torch.cuda.CUDAGraph()
torch.cuda.synchronize()
with torch.cuda.graph(g):
    out = func(a)
torch.cuda.synchronize()
g.replay()
torch.cuda.synchronize()
print(out.shape)
```
Result
```Shell
Traceback (most recent call last):
  File "/home/drisspg/meta/scripts/misc/cuda_graph.py", line 13, in <module>
    out = func(a)
          ^^^^^^^
  File "/home/drisspg/meta/scripts/misc/cuda_graph.py", line 4, in func
    return torch.softmax(a, dim=-1, dtype=torch.float32)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: operation not permitted when stream is capturing
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
During handling of the above exception, another exception occurred:
```
cc @ptrblck @msaroufim @eqy @mcarilli @ezyang @eellison @penguinwu @BoyuanFeng | true |
2,939,802,895 | [DCP] Cache save plan metadata to reduce the collective overhead | saumishr | closed | [
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (checkpoint)",
"oncall: distributed checkpointing"
] | 9 | CONTRIBUTOR | Summary:
Cache save plan metadata to reduce the collective overhead.
Global plan dedupe and metadata creation are the main overheads on rank 0. This change avoids all of that cost on subsequent saves when the plans do not change. In a quick experiment with a 256-rank job, global-step overhead drops by ~99%, from 90s+ to a mere 1.5s, most of which was spent creating the checkpoint module directories and on a near-empty collective.
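The caching idea can be sketched in plain Python (the class and function names here are made up for illustration; the actual DCP implementation differs): hash the collected local plans, and only run the expensive global dedupe + metadata creation when the hash changes.

```python
import hashlib
import pickle

class CachedPlanner:
    """Sketch of save-plan metadata caching (illustrative, not the DCP API).

    Global plan dedupe and metadata creation only run when the set of
    local plans changes; otherwise the cached result is reused.
    """

    def __init__(self, create_global_plan):
        self._create_global_plan = create_global_plan  # the expensive step
        self._cached_key = None
        self._cached_result = None

    def global_plan(self, local_plans):
        # Key the cache on the serialized local plans.
        key = hashlib.sha256(pickle.dumps(local_plans)).hexdigest()
        if key != self._cached_key:
            self._cached_result = self._create_global_plan(local_plans)
            self._cached_key = key
        return self._cached_result

calls = []
def expensive(plans):
    calls.append(plans)
    return sorted(set(plans))

planner = CachedPlanner(expensive)
planner.global_plan(["rank0:w", "rank1:w"])
planner.global_plan(["rank0:w", "rank1:w"])  # cache hit, no recompute
assert len(calls) == 1
```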
Differential Revision: D71631441
cc @LucasLLC @pradeepfn @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,939,791,586 | [ca] torch._dynamo.disable the checkpoint unpack hook | xmfan | open | [
"module: inductor",
"ciflow/inductor"
] | 2 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149784
* #149773
* #149897
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,939,787,810 | [MPS] Add support for scaled_modified_bessel_k1 to eager. | dcci | closed | [
"Merged",
"topic: improvements",
"module: mps",
"release notes: mps",
"ciflow/mps"
] | 4 | MEMBER | Another day another op
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | true |
2,939,779,645 | [graph partition] support splitting on custom ops | BoyuanFeng | closed | [
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 7 | CONTRIBUTOR | This PR adds support for graph partition on custom ops. Land after #149458.
### API
This PR provides a new API to set the `torch._C.Tag.cudagraph_unsafe` tag on custom ops. This tag will be used for graph partitioning.
Example usage:
```python
@torch.library.custom_op(
    "mylib::mysin",
    mutates_args=["out_list"],
    schema="(Tensor x, Tensor(a!)[]? out_list) -> Tensor",
    tags=(torch._C.Tag.cudagraph_unsafe,),
)
def mysin(x, out_list) -> torch.Tensor:
    r = x.sin()
    if out_list is not None:
        out_list[0].copy_(r)
    return r

@mysin.register_fake
def _(x, out_list) -> torch.Tensor:
    return torch.empty_like(x)
```
### Example
In this example, 1 torch-compiled region has 3 cudagraphs after splitting on 2 custom ops.

Code to repro:
```python
import torch

torch._inductor.config.graph_partition = True

@torch.library.custom_op(
    "mylib::movement",
    mutates_args=(),
    tags=(torch._C.Tag.cudagraph_unsafe,),
)
def movement(pic: torch.Tensor) -> torch.Tensor:
    img = pic.cpu()
    cropped_img = (img + 1) * 2
    return cropped_img.cuda() / 255.0

@movement.register_fake
def _(pic):
    return torch.empty_like(pic)

@torch.library.custom_op(
    "mylib::modify",
    mutates_args=(),
    tags=(torch._C.Tag.cudagraph_unsafe,),
)
def modify(pic: torch.Tensor) -> torch.Tensor:
    pic1 = pic + 1
    pic1_cpu = (pic1.cpu() + 1) * 2
    return pic1_cpu.cuda() + pic

@modify.register_fake
def _(pic):
    return torch.empty_like(pic)

@torch.library.custom_op("mylib::transform", mutates_args=())
def transform(pic: torch.Tensor) -> torch.Tensor:
    return (pic + 1) * 2

@transform.register_fake
def _(pic):
    return torch.empty_like(pic)

img = torch.randn(3, 64, 64, device="cuda")

def f(img):
    x = (img + 10) * 2
    y = movement(x)
    z = y + 1
    u = transform(z)
    v = 2 * u + 1
    out = modify(v)
    return out + 1

compiled_f = torch.compile(f, fullgraph=True)
eager_out = f(img)
compiled_out = compiled_f(img)
assert torch.allclose(eager_out, compiled_out)

compiled_f = torch.compile(f, mode="reduce-overhead", fullgraph=True)
eager_out = f(img)
for _ in range(3):
    compiled_out = compiled_f(img)
    assert torch.allclose(eager_out, compiled_out)
# splitting on 2 custom ops gives 3 cudagraphs
assert torch._inductor.graph_manager.graph_id_manager.new_graph_id().id == 3
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,939,777,155 | ProcessGroupGloo: support ReduceOp::AVG | d4l3k | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 3 | MEMBER | This adds AVG support to ProcessGroupGloo to better support FSDP on CPU. I expect there will be more issues but this is easy enough to support in a naive fashion.
This applies to both reduce and allreduce.
This is a simple SUM + division and may not be the most numerically stable but that's expected. FSDP for low precision data types implements pre/post divide and uses SUM instead.
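The naive AVG described above amounts to an allreduce SUM followed by a single division by the world size. A plain-Python sketch of those semantics (illustrative only, not the Gloo implementation):

```python
def allreduce_avg(per_rank_tensors):
    """Naive AVG: SUM-allreduce the per-rank values, then divide once.

    per_rank_tensors: one list of floats per rank. Returns the value
    every rank would end up with after the collective. The single late
    division is why this can be less numerically stable than schemes
    that pre/post-divide (as FSDP does for low-precision dtypes).
    """
    world_size = len(per_rank_tensors)
    summed = [sum(vals) for vals in zip(*per_rank_tensors)]  # SUM allreduce
    return [s / world_size for s in summed]                  # divide once

result = allreduce_avg([[1.0, 2.0], [3.0, 6.0]])
assert result == [2.0, 4.0]
```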
Test plan:
```
pytest -v test/distributed/test_c10d_gloo.py
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @c-p-i-o | true |
2,939,773,393 | Remove aten.elu core ATen decomp because it is now core ATen | swolchok | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149780
Per @larryliu0820.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,939,757,143 | Removed ROCM ifdef that governs thread count + smem parallel reduction. | 5had3z | closed | [
"module: rocm",
"triaged",
"open source",
"Merged",
"release notes: cuda"
] | 10 | CONTRIBUTOR | #149548 fixed the arbitrarily missing parallelism for NLL, but it also added an arbitrary #ifdef ROCM guard around the fix, preventing its use on CUDA GPUs. There is also a problem with the way the kernel does the reduction from the intermediate shared memory, using only thread 0 walking linearly. This has been changed to a simple parallel reduction algorithm.
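The difference between a linear walk by thread 0 and a parallel tree reduction can be illustrated with a plain-Python simulation of the log-step pattern (illustrative only; the real kernel is CUDA, and the buffer length is assumed to be a power of two):

```python
def tree_reduce(smem):
    """Simulate a parallel tree reduction over a shared-memory buffer.

    Each step halves the active stride; in CUDA every thread with
    tid < stride performs one addition in parallel, so the reduction
    takes O(log n) steps instead of one thread walking over all n
    entries linearly.
    """
    stride = len(smem) // 2
    while stride > 0:
        for tid in range(stride):  # these iterations run in parallel on GPU
            smem[tid] += smem[tid + stride]
        stride //= 2
    return smem[0]

vals = list(range(8))
assert tree_reduce(vals[:]) == sum(range(8))
```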
Tested changes with `python3 test/test_nn.py`
```
Ran 3551 tests in 200.554s
OK (skipped=998, expected failures=4)
```
Performance before and after with the script below with an RTX 3090, batch size x axis, time (sec) y axis. This GPU is also used for display graphics and such, so the measurements are pretty noisy, even with 100 samples.
## Before

## After ifdef removal

## After Parallel SMEM reduction

```python
import torch
from matplotlib import pyplot as plt
from torch.nn import functional as F
timing = []
batches = list(range(32, 4096, 32))
for batch in [32] + batches:
    samples = []
    for _ in range(100):
        probs = torch.rand(batch, 10).cuda()
        labels = torch.randint(0, 10, (batch,)).cuda()
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        F.nll_loss(probs, labels)
        end.record()
        torch.cuda.synchronize()
        elapsed = start.elapsed_time(end)
        samples.append(elapsed)
    timing.append(sum(samples) / len(samples))
timing = timing[1:]
plt.plot(batches, timing)
plt.show()
```
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,939,755,118 | ci/docker: use NCCL 2.26.2-1 | d4l3k | closed | [
"Merged",
"topic: not user facing"
] | 6 | MEMBER | Related to #149153
This updates some build scripts to hopefully fix the nightly builds, which are somehow building against NCCL 2.25.1 while using 2.26.2 from pip.
Test plan:
After merging rerun nightly linux jobs and validate that nccl version matches | true |
2,939,753,347 | [WIP] stop writing Max(*, 1) for strides | pianpwk | closed | [
"release notes: fx",
"fx",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | This should help us move away from size-oblivious. Looking at what the code was before sym_max(*, 1) was introduced, I think this is appropriate (https://github.com/pytorch/pytorch/pull/94400 in `_prims_common/__init__.py`) | true |
2,939,737,320 | [WIP] guard or false | pianpwk | closed | [
"release notes: fx",
"fx",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Fixes #ISSUE_NUMBER
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,939,737,224 | Use schema as source of truth + support ones_like/empty_like | pytorchbot | closed | [
"open source",
"ciflow/inductor"
] | 1 | COLLABORATOR | NOTE: THIS MUST BE LANDED ONLY AFTER https://github.com/pytorch/pytorch/pull/149644 IS LANDED IN THE RELEASE, otherwise tests will fail
This change does 2 important things:
(a) Instead of relying on IValue type as source of truth, we use the schema as the source of truth, which is important as IValue types are overloaded and can ambiguously convert incorrectly. For example, a MemoryFormat will look like an int + get converted to an int64_t vs a MemoryFormat!
(b) This PR expands support for many more types to encompass way more schemas, e.g., Optional, Device, dtype, etc. The main win from this PR is the ability for aoti_torch_call_dispatcher to call TensorFactory ops like ones_like/empty_like!
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149230
* __->__ #149052
| true |
2,939,712,627 | bound_sympy() produces incorrect result for mod | pianpwk | open | [
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"export-triage-review",
"oncall: export"
] | 2 | CONTRIBUTOR | ### 🐛 Describe the bug
`bound_sympy(s0 - (s0 % 8))` produces an incorrect range of [-5, inf] when the correct answer is [0, inf] (s0 has a bound of [2, inf]).
My guess is this happens because each term is evaluated individually, with s0 resolving to [2, inf], and -(s0 % 8) resolving to [-7, 0], combining for a range of [-5, inf]. Not sure what the efficient fix is.
xref: https://fb.workplace.com/groups/pytorch.edge2.team/posts/1163036018285582/?comment_id=1163038158285368&reply_comment_id=1164412728147911
```
from torch.utils._sympy.value_ranges import bound_sympy
class Foo(torch.nn.Module):
    def forward(self, x):
        expr = x.shape[0] - (x.shape[0] % 8)  # s0 - (s0 % 8)
        return torch.empty(expr)

ep = export(
    Foo(),
    (torch.randn(13),),
    dynamic_shapes={"x": (Dim("dim", min=2),)},
)
val = [node for node in ep.graph.nodes][-2].meta["val"]
expr = val.shape[0].node.expr
var_to_ranges = val.shape[0].node.shape_env.var_to_range
print(bound_sympy(expr, var_to_ranges)) # [-5, inf], should be [0, inf]
```
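The over-approximation can be reproduced with plain interval arithmetic (a sketch, not the `ValueRanges` implementation): evaluating each term's range independently loses the correlation between `s0` and `s0 % 8`.

```python
import math

def add_intervals(a, b):
    """Term-wise interval addition: [a0, a1] + [b0, b1] = [a0 + b0, a1 + b1]."""
    return (a[0] + b[0], a[1] + b[1])

s0 = (2, math.inf)   # range of s0
neg_mod = (-7, 0)    # range of -(s0 % 8), ignoring its correlation with s0

# Sound but loose: the correlated expression s0 - (s0 % 8) can never be
# negative (it is the largest multiple of 8 that is <= s0).
loose = add_intervals(s0, neg_mod)
assert loose == (-5, math.inf)

true_min = min(n - n % 8 for n in range(2, 100))
assert true_min == 0
```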
### Versions
.
cc @chauhang @penguinwu @ezyang @bobrenjc93 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,939,703,212 | [ca][aot cache] disable caching on joint graphs when CA is enabled | xmfan | open | [
"module: dynamo",
"ciflow/inductor"
] | 2 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149784
* __->__ #149773
* #149897
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,939,686,949 | [inductor] fix combo_kernel logging #2 | YUNQIUGUO | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5 | CONTRIBUTOR | Summary:
fix another combo kernel logging error:
File "/home/guorachel/local/fbsource/buck-out/v2/gen/fbcode/4bcbfa3ef39dbd6f/caffe2/test/inductor/__combo_kernels__/combo_kernels#link-tree/torch/_inductor/scheduler.py", line 2036, in _init
self.create_combo_kernel_nodes(num_ck_nodes=None)
File "/home/guorachel/local/fbsource/buck-out/v2/gen/fbcode/4bcbfa3ef39dbd6f/caffe2/test/inductor/__combo_kernels__/combo_kernels#link-tree/torch/_inductor/scheduler.py", line 3068, in create_combo_kernel_nodes
log.debug("ComboKernels: Generating with num_ck_nodes = %d...", num_ck_nodes)
Message: 'ComboKernels: Generating with num_ck_nodes = %d...'
Arguments: (None,)
Test Plan:
Verified in test_combo_kernel.py
the logging error went away.
Differential Revision: D71655949
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,939,678,315 | How to remove the “internal api” notice? | justinchuby | closed | [
"module: docs",
"triaged"
] | 4 | COLLABORATOR | ### 📚 The doc issue
What is the option that will remove this notice?
> This page describes an internal API which is not intended to be used outside of the PyTorch codebase and can be modified or removed without notice.
We would like to remove it for https://pytorch.org/docs/stable/onnx_dynamo.html and a few onnx pages.
@svekars
### Suggest a potential alternative/fix
_No response_
cc @svekars @sekyondaMeta @AlannaBurke | true |
2,939,663,451 | Include other accelerators in capturable docstr for optimizers | janeyx99 | closed | [
"Merged",
"ciflow/trunk",
"topic: docs",
"release notes: optim"
] | 5 | CONTRIBUTOR | Fixes #149722
@ILCSFNO is this better?
| true |
2,939,656,581 | fix inductor logging for torch._scaled_mm | vkuzo | open | [
"ciflow/trunk",
"topic: bug fixes",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: inductor"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149769
Summary:
https://github.com/pytorch/pytorch/pull/148800 made inductor logs
throw an exception of the model contains `torch._scaled_mm` because
it split a string by underscore and making some assumptions about the resulting
list, and `torch._scaled_mm` has underscores which breaks those
assumptions.
Fixing by making the string parsing code slightly less brittle and
adding a test.
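To see why the old parsing broke, note that splitting `torch._scaled_mm` on underscores yields three pieces where code written for names like `torch.addmm` would expect fewer. A hypothetical sketch of the brittleness (not the actual inductor parsing code):

```python
op_name = "torch._scaled_mm"
# Naive split: three pieces, so any code assuming a fixed number of
# underscore-separated parts indexes the wrong piece or crashes.
parts = op_name.split("_")
# A less brittle approach: split only once, from the right.
prefix, _, suffix = op_name.rpartition("_")
```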
Test Plan:
```bash
pytest test/dynamo/test_logging.py -s -k test_inductor_mm_info
pytest test/dynamo/test_logging.py -s -k test_inductor_scaled_mm_info
```
Reviewers:
Subscribers:
Tasks:
Tags:
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,939,568,144 | [FSDP2][DTensor] numeric bug for DTensor + python float in gradient clipping | weifengpy | open | [
"oncall: distributed",
"triaged",
"module: fsdp"
] | 6 | CONTRIBUTOR | ### 🐛 Describe the bug
```
python3.12/site-packages/torch/nn/utils/clip_grad.py", line 155, in _clip_grads_with_norm_
[rank1]: clip_coef = max_norm / (total_norm + 1e-6)
```
for `DTensor + 1e-6`, each rank adds 1e-6 to its local tensor, so the epsilon is applied once per rank instead of once in total
this was reported by others as well
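A single-process sketch of the numeric effect, simulating a total norm held as per-rank partial sums (the world size and values are illustrative, not taken from a real run):

```python
# Each rank holds a partial contribution to total_norm; the true total is
# the sum of the partials. Adding the scalar epsilon to each local shard
# before reduction adds it world_size times instead of once.
world_size = 4
partials = [2.5, 0.5, 1.0, 0.0]                # per-rank local values
true_total = sum(partials)                     # 4.0
buggy_total = sum(p + 1e-6 for p in partials)  # epsilon added 4 times
extra = buggy_total - (true_total + 1e-6)      # 3e-6 of unwanted bias
```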
### Versions
na
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360 @kwen2501 @c-p-i-o | true |
2,939,550,367 | `flex_attention` slower than manual attention implementation | abdulfatir | open | [
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 6 | NONE | ### 🐛 Describe the bug
I'm not sure if this is a bug or if I am using `flex_attention` incorrectly. Also, not completely sure if flex attention is designed for such masked language modeling-style use cases. That said, I found `flex_attention` to be slower than a (compiled) manual torch implementation of attention for my use case. My actual codebase is a bit too large to post here; however, the following example contains the most critical components in a dummy form.
I get the following results on an A100 GPU.
| Model | Estimated Runtime | Memory Usage |
|--------|--------|--------|
| `flex_attention` | 45 mins | 1150 MB |
| manual attention | 35 mins | 1300 MB |
`flex_attention` is slower, although it does save some memory.
```py
import torch
import torch.nn as nn
from torch.nn.attention.flex_attention import BlockMask, create_block_mask, flex_attention
from tqdm.auto import tqdm


def compile_flex_attention():
    try:
        print("Compiling flex_attention")
        return torch.compile(flex_attention, dynamic=False)
    except Exception as e:
        print(f"Compiling flex_attention failed with error '{e}'. Retrying with mode='max-autotune'.")
        try:
            return torch.compile(flex_attention, dynamic=False, mode="max-autotune")
        except Exception as e:
            print(
                f"Compiling flex_attention failed with error: '{e}', "
                "Updating your pytorch version to nightlies may solve it, or you can set "
                "in your config dataset.packed=False to avoid using flex attention.",
            )
            raise


flex_attention_compiled = compile_flex_attention()


@torch.compiler.disable(recursive=False)
def compile_friendly_flex_attention(
    q: torch.Tensor, k: torch.Tensor, v: torch.Tensor, block_mask: BlockMask, **kwargs
) -> torch.Tensor:
    return flex_attention_compiled(q, k, v, block_mask=block_mask, **kwargs)


class Attention(nn.Module):
    def __init__(self, use_flex_attn: bool = False):
        super().__init__()
        self.d_model = 512
        self.key_value_proj_dim = 64
        self.n_heads = 8
        self.inner_dim = self.n_heads * self.key_value_proj_dim
        self.use_flex_attention = use_flex_attn
        self.q = nn.Linear(self.d_model, self.inner_dim, bias=False)
        self.k = nn.Linear(self.d_model, self.inner_dim, bias=False)
        self.v = nn.Linear(self.d_model, self.inner_dim, bias=False)
        self.o = nn.Linear(self.inner_dim, self.d_model, bias=False)

    def forward(
        self,
        hidden_states: torch.Tensor,
        mask: torch.Tensor | BlockMask = None,
        encoder_states: torch.Tensor | None = None,
    ):
        batch_size = hidden_states.shape[0]
        if encoder_states is None:
            # Self Attention
            query_states = self.q(hidden_states)
            key_states = self.k(hidden_states)
            value_states = self.v(hidden_states)
        else:
            # Cross Attention
            query_states = self.q(hidden_states)
            key_states = self.k(encoder_states)
            value_states = self.v(encoder_states)
        query_states = query_states.view(batch_size, -1, self.n_heads, self.key_value_proj_dim).transpose(1, 2)
        key_states = key_states.view(batch_size, -1, self.n_heads, self.key_value_proj_dim).transpose(1, 2)
        value_states = value_states.view(batch_size, -1, self.n_heads, self.key_value_proj_dim).transpose(1, 2)
        if self.use_flex_attention:
            assert isinstance(mask, BlockMask) or mask is None
            attn_output = compile_friendly_flex_attention(
                query_states, key_states, value_states, block_mask=mask, scale=1.0
            )
        else:
            scores = torch.matmul(query_states, key_states.transpose(3, 2))
            scores += mask
            attn_weights = nn.functional.softmax(scores, dim=-1)
            attn_output = torch.matmul(attn_weights, value_states)
        attn_output = attn_output.transpose(1, 2).contiguous().view(batch_size, -1, self.inner_dim)
        attn_output = self.o(attn_output)
        return attn_output


class Encoder(nn.Module):
    def __init__(self, use_flex_attn: bool = False):
        super().__init__()
        self.sa_layers = nn.ModuleList([Attention(use_flex_attn=use_flex_attn) for _ in range(6)])

    def forward(self, hidden_states: torch.Tensor, mask: torch.Tensor | BlockMask):
        for layer in self.sa_layers:
            hidden_states = hidden_states + layer(hidden_states, mask=mask)
        return hidden_states


class Decoder(nn.Module):
    def __init__(self, use_flex_attn: bool = False):
        super().__init__()
        self.sa_layers = nn.ModuleList([Attention(use_flex_attn=use_flex_attn) for _ in range(6)])
        self.ca_layers = nn.ModuleList([Attention(use_flex_attn=use_flex_attn) for _ in range(6)])

    def forward(
        self,
        hidden_states: torch.Tensor,
        mask: torch.Tensor | BlockMask,
        encoder_states: torch.Tensor,
        encoder_mask: torch.Tensor,
    ):
        for index in range(6):
            hidden_states = hidden_states + self.sa_layers[index](hidden_states, mask=mask)
            hidden_states = hidden_states + self.ca_layers[index](
                hidden_states, mask=encoder_mask, encoder_states=encoder_states
            )
        return hidden_states


class EncoderDecoderModel(nn.Module):
    def __init__(self, use_flex_attn: bool = False):
        super().__init__()
        self.use_flex_attn = use_flex_attn
        self.encoder = Encoder(use_flex_attn=use_flex_attn)
        self.decoder = Decoder(use_flex_attn=use_flex_attn)

    def forward(self, hidden_states: torch.Tensor, mask: torch.BoolTensor, decoder_states: torch.Tensor):
        # hidden_states: (batch_size, seq_len, d_model)
        # mask: (batch_size, seq_len)
        # decoder_states: (batch_size, 1, d_model)
        batch_size, seq_len = hidden_states.shape[:2]
        if self.use_flex_attn:

            def mask_mod(b, h, q_idx, kv_idx):
                return mask[b, kv_idx]

            encoder_sa_mask = create_block_mask(
                mask_mod, batch_size, None, mask.shape[-1], mask.shape[-1], _compile=True
            )
            decoder_sa_mask = None

            def encoder_mask_mod(b, h, q_idx, kv_idx):
                return mask[b, kv_idx]

            decoder_ca_mask = create_block_mask(encoder_mask_mod, batch_size, None, 1, seq_len, _compile=True)
        else:
            encoder_sa_mask = torch.where(mask[:, None, None, :], 0.0, float("-inf"))
            decoder_sa_mask = torch.zeros(decoder_states.shape[:-1], device=decoder_states.device)[:, None, None, :]
            decoder_ca_mask = torch.where(mask[:, None, None, :], 0.0, float("-inf"))
        encoder_states = self.encoder(hidden_states, mask=encoder_sa_mask)
        decoder_states = self.decoder(
            decoder_states, mask=decoder_sa_mask, encoder_states=encoder_states, encoder_mask=decoder_ca_mask
        )
        return decoder_states


def random_batch(batch_size, seq_len, d_model, device):
    hidden_states = torch.rand(batch_size, seq_len, d_model, device=device)
    mask = torch.rand(batch_size, seq_len, device=device) > 0.5
    decoder_states = torch.rand(batch_size, 1, d_model, device=device)
    return hidden_states, mask, decoder_states


if __name__ == "__main__":
    batch_size = 32
    num_iters = 100000
    seq_len = 128
    d_model = 512

    model = EncoderDecoderModel(use_flex_attn=True).to("cuda:0")
    model = torch.compile(model)
    for _ in tqdm(range(num_iters)):
        out = model(*random_batch(batch_size, seq_len, d_model, "cuda:0"))
        out.mean().backward()
```
I also verified that the two implementations give me the same output using:
```py
batch_size = 32
num_iters = 100000
seq_len = 128
d_model = 512
model = EncoderDecoderModel(use_flex_attn=False).to("cuda:0")
model_flex = EncoderDecoderModel(use_flex_attn=True).to("cuda:0")
model_flex.load_state_dict(model.state_dict())
model = torch.compile(model)
model_flex = torch.compile(model_flex)
batch = random_batch(batch_size, seq_len, d_model, "cuda:0")
out_torch = model(*batch)
out_flex = model_flex(*batch)
print(torch.abs(out_torch - out_flex).mean().item()) # 1.6685237369529204e-07
```
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.144.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] numpy 2.1.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng | true |
2,939,531,591 | Move emulate_precision_casts to be controlled by a JK. | c00w | closed | [
"fb-exported",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Summary: This allows us to selectively turn this on for internal use cases.
Test Plan: Relying primarily on existing models not breaking at test time.
Reviewed By: Yuzhen11
Differential Revision: D71647650
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,939,504,820 | Parallelize sort | pytorchbot | closed | [
"open source"
] | 1 | COLLABORATOR | PR #142391 erroneously used `USE_OMP` instead of `USE_OPENMP`.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | true |
2,939,492,678 | [DTensor] Error on illegal view op during sharding prop | wconstab | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: distributed (dtensor)"
] | 21 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149764
* #152045
Adds explicit error checking during sharding propagation for view ops
rather than relying on runtime errors during local op execution.
Before:
An error is thrown by aten.view op called by DTensor dispatch, because
the local shard size is incompatible with the (incorrectly calculated)
args to the view op.
`RuntimeError: shape '[384]' is invalid for input of size 512`
After:
We raise more specific errors for cases of incompatible view operations
during sharding propagation, before getting to runtime dispatch.
`RuntimeError: Attempted to flatten an unevenly sharded dimension, which would require resharding the input. Please explicitly redistribute the tensor instead.`
Change Summary:
- add 'strict_view' kwarg to the helper methods that implement view/reshape op shard prop rules, so it can be decided op-by-op whether to raise these new errors
- enabled errors just for the 'view' op in this PR
- added two specific checks/errors that can occur during view ops.
Details:
- View ops are never allowed to flatten a dimension that is unevenly
sharded, since that would likely change the size/content of the
local_tensor and require redistribute
- View ops are also never allowed to flatten two dims if the rightmost
dim is a Shard() placement, because it would cause contiguity errors
without redistribution
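A back-of-the-envelope sketch of the first rule (the shapes here are illustrative, not taken from the linked issue): flattening an unevenly sharded dim computes the view argument from the global shape, which cannot match every rank's local shard:

```python
# Logical (global) tensor [3, 128] sharded on dim 0 over 2 ranks:
# rank0 holds [2, 128], rank1 holds [1, 128] -- uneven shards.
global_numel = 3 * 128        # 384: what a global view(-1) asks for
rank0_numel = 2 * 128         # 256: what rank0's local tensor holds
rank1_numel = 1 * 128         # 128: what rank1's local tensor holds
# The global flatten size matches neither local shard, so dispatching
# aten.view with it to the local tensors must fail at runtime.
mismatch = global_numel not in (rank0_numel, rank1_numel)
```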
Notes:
- Disables support for several ops in test_dtensor_ops.py test, which
decompose to an illegal view that only works by performing a
redistribution: cartesian_prod, flatten, ravel, reshape, reshape_as, view, view_as, take_along_dim, kron
Follow Ups:
- triage other view-like ops (besides aten::view) for using strict_view
- look for other gaps where view-like ops could still perform
redistribution (ban them all, and document this)
Fixes #143372
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @d4l3k | true |
2,939,479,668 | [scan] Support None return in combine_fn | angelayi | open | [
"module: dynamo",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149763
Wondering if we could support the case where the user doesn't want to put anything in the accumulated tensor:
```python
def add(x, y):
    return x + y[0], None

def f(init, x):
    return scan(add, init, [x, x])[0]

x = torch.randn(3, 2, 2)
init = torch.ones(2, 2)
f(init, x)
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,939,457,110 | terminate called after throwing an instance of 'c10::Error' | guarin | open | [
"oncall: distributed",
"triaged",
"module: ddp"
] | 0 | NONE | Hi, I am running into the stacktrace shown below and am not sure where the error is coming from.
Setup is a 4xRTX4090 node with DDP training. The error happens randomly, in this case after 20 epochs.
Library versions:
```
Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.39
CUDA Version: 12.4
CUDA Driver Version: 550.120
Python: 3.10.16
torch 2.6.0
torchvision 0.21.0
pytorch-lightning 2.5.1
```
The run had `CUDA_LAUNCH_BLOCKING=1` set.
Any pointers would be super helpful :)
```
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: initialization error
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at /pytorch/c10/cuda/CUDAException.cpp:43 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7e3aee76c1b6 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7e3aee715a76 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x118 (0x7e3b07ed2918 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x20ba6 (0x7e3b07e98ba6 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #4: <unknown function> + 0x22507 (0x7e3b07e9a507 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #5: <unknown function> + 0x2270f (0x7e3b07e9a70f in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #6: <unknown function> + 0x644a90 (0x7e3ad6444a90 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0x85082f (0x7e3ad665082f in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #8: /home/user/.venv/bin/python() [0x510ceb]
frame #9: /home/user/.venv/bin/python() [0x53af4f]
frame #10: /home/user/.venv/bin/python() [0x5102aa]
frame #11: /home/user/.venv/bin/python() [0x5232c1]
frame #12: /home/user/.venv/bin/python() [0x572da5]
frame #13: /home/user/.venv/bin/python() [0x5075cb]
frame #14: /home/user/.venv/bin/python() [0x5c60d0]
frame #15: /home/user/.venv/bin/python() [0x5348de]
frame #16: PyObject_GetIter + 0x1f (0x50f36f in /home/user/.venv/bin/python)
frame #17: /home/user/.venv/bin/python() [0x66bda4]
frame #18: _PyObject_MakeTpCall + 0x1da (0x51e06a in /home/user/.venv/bin/python)
frame #19: _PyEval_EvalFrameDefault + 0x5738 (0x51a1b8 in /home/user/.venv/bin/python)
frame #20: /home/user/.venv/bin/python() [0x531136]
frame #21: _PyEval_EvalFrameDefault + 0x4e81 (0x519901 in /home/user/.venv/bin/python)
frame #22: /home/user/.venv/bin/python() [0x531136]
frame #23: _PyEval_EvalFrameDefault + 0x4e81 (0x519901 in /home/user/.venv/bin/python)
frame #24: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #25: _PyEval_EvalFrameDefault + 0x32d (0x514dad in /home/user/.venv/bin/python)
frame #26: /home/user/.venv/bin/python() [0x5663f3]
frame #27: /home/user/.venv/bin/python() [0x566287]
frame #28: _PyEval_EvalFrameDefault + 0xbd1 (0x515651 in /home/user/.venv/bin/python)
frame #29: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #30: _PyEval_EvalFrameDefault + 0x32d (0x514dad in /home/user/.venv/bin/python)
frame #31: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #32: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #33: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #34: _PyEval_EvalFrameDefault + 0x302b (0x517aab in /home/user/.venv/bin/python)
frame #35: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #36: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #37: /home/user/.venv/bin/python() [0x531136]
frame #38: _PyEval_EvalFrameDefault + 0x1451 (0x515ed1 in /home/user/.venv/bin/python)
frame #39: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #40: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #41: _PyObject_FastCallDictTstate + 0xc3 (0x51d5c3 in /home/user/.venv/bin/python)
frame #42: /home/user/.venv/bin/python() [0x52f106]
frame #43: _PyObject_MakeTpCall + 0x2ef (0x51e17f in /home/user/.venv/bin/python)
frame #44: _PyEval_EvalFrameDefault + 0x5156 (0x519bd6 in /home/user/.venv/bin/python)
frame #45: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #46: _PyEval_EvalFrameDefault + 0x4e81 (0x519901 in /home/user/.venv/bin/python)
frame #47: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #48: _PyEval_EvalFrameDefault + 0x4e81 (0x519901 in /home/user/.venv/bin/python)
frame #49: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #50: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #51: _PyObject_FastCallDictTstate + 0xc3 (0x51d5c3 in /home/user/.venv/bin/python)
frame #52: /home/user/.venv/bin/python() [0x52f106]
frame #53: _PyObject_MakeTpCall + 0x2ef (0x51e17f in /home/user/.venv/bin/python)
frame #54: _PyEval_EvalFrameDefault + 0x5156 (0x519bd6 in /home/user/.venv/bin/python)
frame #55: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #56: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #57: /home/user/.venv/bin/python() [0x521797]
frame #58: /home/user/.venv/bin/python() [0x5dfac5]
frame #59: /home/user/.venv/bin/python() [0x60939d]
frame #60: PyObject_GetIter + 0x1f (0x50f36f in /home/user/.venv/bin/python)
frame #61: /home/user/.venv/bin/python() [0x52595b]
frame #62: _PyEval_EvalFrameDefault + 0x32d (0x514dad in /home/user/.venv/bin/python)
[... the identical traceback above is printed twice more, once per additional rank; repeats omitted ...]
frame #56: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #57: /home/user/.venv/bin/python() [0x521797]
frame #58: /home/user/.venv/bin/python() [0x5dfac5]
frame #59: /home/user/.venv/bin/python() [0x60939d]
frame #60: PyObject_GetIter + 0x1f (0x50f36f in /home/user/.venv/bin/python)
frame #61: /home/user/.venv/bin/python() [0x52595b]
frame #62: _PyEval_EvalFrameDefault + 0x32d (0x514dad in /home/user/.venv/bin/python)
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: initialization error
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at /pytorch/c10/cuda/CUDAException.cpp:43 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7e3aee76c1b6 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7e3aee715a76 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x118 (0x7e3b07ed2918 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x20ba6 (0x7e3b07e98ba6 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #4: <unknown function> + 0x22507 (0x7e3b07e9a507 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #5: <unknown function> + 0x2270f (0x7e3b07e9a70f in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #6: <unknown function> + 0x644a90 (0x7e3ad6444a90 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0x85082f (0x7e3ad665082f in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #8: /home/user/.venv/bin/python() [0x510ceb]
frame #9: /home/user/.venv/bin/python() [0x53af4f]
frame #10: /home/user/.venv/bin/python() [0x5102aa]
frame #11: /home/user/.venv/bin/python() [0x5232c1]
frame #12: /home/user/.venv/bin/python() [0x572da5]
frame #13: /home/user/.venv/bin/python() [0x5075cb]
frame #14: /home/user/.venv/bin/python() [0x5c60d0]
frame #15: /home/user/.venv/bin/python() [0x5b86cf]
frame #16: _PyEval_EvalFrameDefault + 0xa31 (0x5154b1 in /home/user/.venv/bin/python)
frame #17: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #18: _PyEval_EvalFrameDefault + 0x32d (0x514dad in /home/user/.venv/bin/python)
frame #19: /home/user/.venv/bin/python() [0x531136]
frame #20: _PyEval_EvalFrameDefault + 0x4e81 (0x519901 in /home/user/.venv/bin/python)
frame #21: /home/user/.venv/bin/python() [0x531136]
frame #22: _PyEval_EvalFrameDefault + 0x4e81 (0x519901 in /home/user/.venv/bin/python)
frame #23: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #24: _PyEval_EvalFrameDefault + 0x32d (0x514dad in /home/user/.venv/bin/python)
frame #25: /home/user/.venv/bin/python() [0x5663f3]
frame #26: /home/user/.venv/bin/python() [0x566287]
frame #27: _PyEval_EvalFrameDefault + 0xbd1 (0x515651 in /home/user/.venv/bin/python)
frame #28: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #29: _PyEval_EvalFrameDefault + 0x32d (0x514dad in /home/user/.venv/bin/python)
frame #30: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #31: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #32: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #33: _PyEval_EvalFrameDefault + 0x302b (0x517aab in /home/user/.venv/bin/python)
frame #34: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #35: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #36: /home/user/.venv/bin/python() [0x531136]
frame #37: _PyEval_EvalFrameDefault + 0x1451 (0x515ed1 in /home/user/.venv/bin/python)
frame #38: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #39: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #40: _PyObject_FastCallDictTstate + 0xc3 (0x51d5c3 in /home/user/.venv/bin/python)
frame #41: /home/user/.venv/bin/python() [0x52f106]
frame #42: _PyObject_MakeTpCall + 0x2ef (0x51e17f in /home/user/.venv/bin/python)
frame #43: _PyEval_EvalFrameDefault + 0x5156 (0x519bd6 in /home/user/.venv/bin/python)
frame #44: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #45: _PyEval_EvalFrameDefault + 0x4e81 (0x519901 in /home/user/.venv/bin/python)
frame #46: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #47: _PyEval_EvalFrameDefault + 0x4e81 (0x519901 in /home/user/.venv/bin/python)
frame #48: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #49: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #50: _PyObject_FastCallDictTstate + 0xc3 (0x51d5c3 in /home/user/.venv/bin/python)
frame #51: /home/user/.venv/bin/python() [0x52f106]
frame #52: _PyObject_MakeTpCall + 0x2ef (0x51e17f in /home/user/.venv/bin/python)
frame #53: _PyEval_EvalFrameDefault + 0x5156 (0x519bd6 in /home/user/.venv/bin/python)
frame #54: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #55: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #56: /home/user/.venv/bin/python() [0x521797]
frame #57: /home/user/.venv/bin/python() [0x5dfac5]
frame #58: /home/user/.venv/bin/python() [0x60939d]
frame #59: PyObject_GetIter + 0x1f (0x50f36f in /home/user/.venv/bin/python)
frame #60: /home/user/.venv/bin/python() [0x52595b]
frame #61: _PyEval_EvalFrameDefault + 0x32d (0x514dad in /home/user/.venv/bin/python)
frame #62: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: initialization error
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at /pytorch/c10/cuda/CUDAException.cpp:43 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7e3aee76c1b6 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7e3aee715a76 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x118 (0x7e3b07ed2918 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x20ba6 (0x7e3b07e98ba6 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #4: <unknown function> + 0x22507 (0x7e3b07e9a507 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #5: <unknown function> + 0x2270f (0x7e3b07e9a70f in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #6: <unknown function> + 0x644a90 (0x7e3ad6444a90 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0x85082f (0x7e3ad665082f in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #8: /home/user/.venv/bin/python() [0x510ceb]
frame #9: /home/user/.venv/bin/python() [0x53af4f]
frame #10: /home/user/.venv/bin/python() [0x5102aa]
frame #11: /home/user/.venv/bin/python() [0x5232c1]
frame #12: /home/user/.venv/bin/python() [0x572da5]
frame #13: /home/user/.venv/bin/python() [0x5075cb]
frame #14: /home/user/.venv/bin/python() [0x5c60d0]
frame #15: PyType_GenericAlloc + 0x307 (0x505b07 in /home/user/.venv/bin/python)
frame #16: /home/user/.venv/bin/python() [0x572191]
frame #17: _PyEval_EvalFrameDefault + 0x32d (0x514dad in /home/user/.venv/bin/python)
frame #18: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #19: _PyEval_EvalFrameDefault + 0x32d (0x514dad in /home/user/.venv/bin/python)
frame #20: /home/user/.venv/bin/python() [0x531136]
frame #21: _PyEval_EvalFrameDefault + 0x4e81 (0x519901 in /home/user/.venv/bin/python)
frame #22: /home/user/.venv/bin/python() [0x531136]
frame #23: _PyEval_EvalFrameDefault + 0x4e81 (0x519901 in /home/user/.venv/bin/python)
frame #24: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #25: _PyEval_EvalFrameDefault + 0x32d (0x514dad in /home/user/.venv/bin/python)
frame #26: /home/user/.venv/bin/python() [0x5663f3]
frame #27: /home/user/.venv/bin/python() [0x566287]
frame #28: _PyEval_EvalFrameDefault + 0xbd1 (0x515651 in /home/user/.venv/bin/python)
frame #29: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #30: _PyEval_EvalFrameDefault + 0x32d (0x514dad in /home/user/.venv/bin/python)
frame #31: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #32: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #33: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #34: _PyEval_EvalFrameDefault + 0x302b (0x517aab in /home/user/.venv/bin/python)
frame #35: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #36: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #37: /home/user/.venv/bin/python() [0x531136]
frame #38: _PyEval_EvalFrameDefault + 0x1451 (0x515ed1 in /home/user/.venv/bin/python)
frame #39: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #40: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #41: _PyObject_FastCallDictTstate + 0xc3 (0x51d5c3 in /home/user/.venv/bin/python)
frame #42: /home/user/.venv/bin/python() [0x52f106]
frame #43: _PyObject_MakeTpCall + 0x2ef (0x51e17f in /home/user/.venv/bin/python)
frame #44: _PyEval_EvalFrameDefault + 0x5156 (0x519bd6 in /home/user/.venv/bin/python)
frame #45: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #46: _PyEval_EvalFrameDefault + 0x4e81 (0x519901 in /home/user/.venv/bin/python)
frame #47: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #48: _PyEval_EvalFrameDefault + 0x4e81 (0x519901 in /home/user/.venv/bin/python)
frame #49: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #50: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #51: _PyObject_FastCallDictTstate + 0xc3 (0x51d5c3 in /home/user/.venv/bin/python)
frame #52: /home/user/.venv/bin/python() [0x52f106]
frame #53: _PyObject_MakeTpCall + 0x2ef (0x51e17f in /home/user/.venv/bin/python)
frame #54: _PyEval_EvalFrameDefault + 0x5156 (0x519bd6 in /home/user/.venv/bin/python)
frame #55: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #56: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #57: /home/user/.venv/bin/python() [0x521797]
frame #58: /home/user/.venv/bin/python() [0x5dfac5]
frame #59: /home/user/.venv/bin/python() [0x60939d]
frame #60: PyObject_GetIter + 0x1f (0x50f36f in /home/user/.venv/bin/python)
frame #61: /home/user/.venv/bin/python() [0x52595b]
frame #62: _PyEval_EvalFrameDefault + 0x32d (0x514dad in /home/user/.venv/bin/python)
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: initialization error
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at /pytorch/c10/cuda/CUDAException.cpp:43 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7e3aee76c1b6 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7e3aee715a76 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x118 (0x7e3b07ed2918 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x20ba6 (0x7e3b07e98ba6 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #4: <unknown function> + 0x22507 (0x7e3b07e9a507 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #5: <unknown function> + 0x2270f (0x7e3b07e9a70f in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #6: <unknown function> + 0x644a90 (0x7e3ad6444a90 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0x85082f (0x7e3ad665082f in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #8: /home/user/.venv/bin/python() [0x510ceb]
frame #9: /home/user/.venv/bin/python() [0x53af4f]
frame #10: /home/user/.venv/bin/python() [0x5102aa]
frame #11: /home/user/.venv/bin/python() [0x5232c1]
frame #12: /home/user/.venv/bin/python() [0x572da5]
frame #13: /home/user/.venv/bin/python() [0x5075cb]
frame #14: /home/user/.venv/bin/python() [0x5c60d0]
frame #15: /home/user/.venv/bin/python() [0x5348de]
frame #16: PyObject_GetIter + 0x1f (0x50f36f in /home/user/.venv/bin/python)
frame #17: /home/user/.venv/bin/python() [0x66bda4]
frame #18: _PyObject_MakeTpCall + 0x1da (0x51e06a in /home/user/.venv/bin/python)
frame #19: _PyEval_EvalFrameDefault + 0x5738 (0x51a1b8 in /home/user/.venv/bin/python)
frame #20: /home/user/.venv/bin/python() [0x531136]
frame #21: _PyEval_EvalFrameDefault + 0x4e81 (0x519901 in /home/user/.venv/bin/python)
frame #22: /home/user/.venv/bin/python() [0x531136]
frame #23: _PyEval_EvalFrameDefault + 0x4e81 (0x519901 in /home/user/.venv/bin/python)
frame #24: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #25: _PyEval_EvalFrameDefault + 0x32d (0x514dad in /home/user/.venv/bin/python)
frame #26: /home/user/.venv/bin/python() [0x5663f3]
frame #27: /home/user/.venv/bin/python() [0x566287]
frame #28: _PyEval_EvalFrameDefault + 0xbd1 (0x515651 in /home/user/.venv/bin/python)
frame #29: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #30: _PyEval_EvalFrameDefault + 0x32d (0x514dad in /home/user/.venv/bin/python)
frame #31: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #32: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #33: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #34: _PyEval_EvalFrameDefault + 0x302b (0x517aab in /home/user/.venv/bin/python)
frame #35: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #36: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #37: /home/user/.venv/bin/python() [0x531136]
frame #38: _PyEval_EvalFrameDefault + 0x1451 (0x515ed1 in /home/user/.venv/bin/python)
frame #39: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #40: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #41: _PyObject_FastCallDictTstate + 0xc3 (0x51d5c3 in /home/user/.venv/bin/python)
frame #42: /home/user/.venv/bin/python() [0x52f106]
frame #43: _PyObject_MakeTpCall + 0x2ef (0x51e17f in /home/user/.venv/bin/python)
frame #44: _PyEval_EvalFrameDefault + 0x5156 (0x519bd6 in /home/user/.venv/bin/python)
frame #45: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #46: _PyEval_EvalFrameDefault + 0x4e81 (0x519901 in /home/user/.venv/bin/python)
frame #47: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #48: _PyEval_EvalFrameDefault + 0x4e81 (0x519901 in /home/user/.venv/bin/python)
frame #49: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #50: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #51: _PyObject_FastCallDictTstate + 0xc3 (0x51d5c3 in /home/user/.venv/bin/python)
frame #52: /home/user/.venv/bin/python() [0x52f106]
frame #53: _PyObject_MakeTpCall + 0x2ef (0x51e17f in /home/user/.venv/bin/python)
frame #54: _PyEval_EvalFrameDefault + 0x5156 (0x519bd6 in /home/user/.venv/bin/python)
frame #55: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #56: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #57: /home/user/.venv/bin/python() [0x521797]
frame #58: /home/user/.venv/bin/python() [0x5dfac5]
frame #59: /home/user/.venv/bin/python() [0x60939d]
frame #60: PyObject_GetIter + 0x1f (0x50f36f in /home/user/.venv/bin/python)
frame #61: /home/user/.venv/bin/python() [0x52595b]
frame #62: _PyEval_EvalFrameDefault + 0x32d (0x514dad in /home/user/.venv/bin/python)
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: initialization error
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at /pytorch/c10/cuda/CUDAException.cpp:43 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7e3aee76c1b6 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7e3aee715a76 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x118 (0x7e3b07ed2918 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x20ba6 (0x7e3b07e98ba6 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #4: <unknown function> + 0x22507 (0x7e3b07e9a507 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #5: <unknown function> + 0x2270f (0x7e3b07e9a70f in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #6: <unknown function> + 0x644a90 (0x7e3ad6444a90 in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0x85082f (0x7e3ad665082f in /home/user/.venv/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #8: /home/user/.venv/bin/python() [0x510ceb]
frame #9: /home/user/.venv/bin/python() [0x53af4f]
frame #10: /home/user/.venv/bin/python() [0x5102aa]
frame #11: /home/user/.venv/bin/python() [0x5232c1]
frame #12: /home/user/.venv/bin/python() [0x572da5]
frame #13: /home/user/.venv/bin/python() [0x5075cb]
frame #14: /home/user/.venv/bin/python() [0x5c60d0]
frame #15: /home/user/.venv/bin/python() [0x5b86cf]
frame #16: _PyEval_EvalFrameDefault + 0xa31 (0x5154b1 in /home/user/.venv/bin/python)
frame #17: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #18: _PyEval_EvalFrameDefault + 0x32d (0x514dad in /home/user/.venv/bin/python)
frame #19: /home/user/.venv/bin/python() [0x531136]
frame #20: _PyEval_EvalFrameDefault + 0x4e81 (0x519901 in /home/user/.venv/bin/python)
frame #21: /home/user/.venv/bin/python() [0x531136]
frame #22: _PyEval_EvalFrameDefault + 0x4e81 (0x519901 in /home/user/.venv/bin/python)
frame #23: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #24: _PyEval_EvalFrameDefault + 0x32d (0x514dad in /home/user/.venv/bin/python)
frame #25: /home/user/.venv/bin/python() [0x5663f3]
frame #26: /home/user/.venv/bin/python() [0x566287]
frame #27: _PyEval_EvalFrameDefault + 0xbd1 (0x515651 in /home/user/.venv/bin/python)
frame #28: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #29: _PyEval_EvalFrameDefault + 0x32d (0x514dad in /home/user/.venv/bin/python)
frame #30: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #31: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #32: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #33: _PyEval_EvalFrameDefault + 0x302b (0x517aab in /home/user/.venv/bin/python)
frame #34: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #35: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #36: /home/user/.venv/bin/python() [0x531136]
frame #37: _PyEval_EvalFrameDefault + 0x1451 (0x515ed1 in /home/user/.venv/bin/python)
frame #38: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #39: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #40: _PyObject_FastCallDictTstate + 0xc3 (0x51d5c3 in /home/user/.venv/bin/python)
frame #41: /home/user/.venv/bin/python() [0x52f106]
frame #42: _PyObject_MakeTpCall + 0x2ef (0x51e17f in /home/user/.venv/bin/python)
frame #43: _PyEval_EvalFrameDefault + 0x5156 (0x519bd6 in /home/user/.venv/bin/python)
frame #44: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #45: _PyEval_EvalFrameDefault + 0x4e81 (0x519901 in /home/user/.venv/bin/python)
frame #46: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #47: _PyEval_EvalFrameDefault + 0x4e81 (0x519901 in /home/user/.venv/bin/python)
frame #48: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #49: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #50: _PyObject_FastCallDictTstate + 0xc3 (0x51d5c3 in /home/user/.venv/bin/python)
frame #51: /home/user/.venv/bin/python() [0x52f106]
frame #52: _PyObject_MakeTpCall + 0x2ef (0x51e17f in /home/user/.venv/bin/python)
frame #53: _PyEval_EvalFrameDefault + 0x5156 (0x519bd6 in /home/user/.venv/bin/python)
frame #54: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
frame #55: _PyEval_EvalFrameDefault + 0x734 (0x5151b4 in /home/user/.venv/bin/python)
frame #56: /home/user/.venv/bin/python() [0x521797]
frame #57: /home/user/.venv/bin/python() [0x5dfac5]
frame #58: /home/user/.venv/bin/python() [0x60939d]
frame #59: PyObject_GetIter + 0x1f (0x50f36f in /home/user/.venv/bin/python)
frame #60: /home/user/.venv/bin/python() [0x52595b]
frame #61: _PyEval_EvalFrameDefault + 0x32d (0x514dad in /home/user/.venv/bin/python)
frame #62: _PyFunction_Vectorcall + 0x75 (0x525775 in /home/user/.venv/bin/python)
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1251, in _try_get_data
[rank0]: data = self._data_queue.get(timeout=timeout)
[rank0]: File "/usr/lib/python3.10/multiprocessing/queues.py", line 122, in get
[rank0]: return _ForkingPickler.loads(res)
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/torch/multiprocessing/reductions.py", line 541, in rebuild_storage_fd
[rank0]: fd = df.detach()
[rank0]: File "/usr/lib/python3.10/multiprocessing/resource_sharer.py", line 57, in detach
[rank0]: with _resource_sharer.get_connection(self._id) as conn:
[rank0]: File "/usr/lib/python3.10/multiprocessing/resource_sharer.py", line 86, in get_connection
[rank0]: c = Client(address, authkey=process.current_process().authkey)
[rank0]: File "/usr/lib/python3.10/multiprocessing/connection.py", line 508, in Client
[rank0]: answer_challenge(c, authkey)
[rank0]: File "/usr/lib/python3.10/multiprocessing/connection.py", line 752, in answer_challenge
[rank0]: message = connection.recv_bytes(256) # reject large message
[rank0]: File "/usr/lib/python3.10/multiprocessing/connection.py", line 216, in recv_bytes
[rank0]: buf = self._recv_bytes(maxlength)
[rank0]: File "/usr/lib/python3.10/multiprocessing/connection.py", line 414, in _recv_bytes
[rank0]: buf = self._recv(4)
[rank0]: File "/usr/lib/python3.10/multiprocessing/connection.py", line 379, in _recv
[rank0]: chunk = read(handle, remaining)
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/torch/utils/data/_utils/signal_handling.py", line 73, in handler
[rank0]: _error_if_any_worker_fails()
[rank0]: RuntimeError: DataLoader worker (pid 486770) is killed by signal: Aborted.
[rank0]: The above exception was the direct cause of the following exception:
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/user/run_train.py", line 19, in <module>
[rank0]: train(
[rank0]: File "/home/user/src/lib/_commands/train.py", line 207, in train
[rank0]: train_from_config(config=config)
[rank0]: File "/home/user/src/lib/_commands/train.py", line 346, in train_from_config
[rank0]: trainer_instance.fit(
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 561, in fit
[rank0]: call._call_and_handle_interrupt(
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 47, in _call_and_handle_interrupt
[rank0]: return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 105, in launch
[rank0]: return function(*args, **kwargs)
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 599, in _fit_impl
[rank0]: self._run(model, ckpt_path=ckpt_path)
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1012, in _run
[rank0]: results = self._run_stage()
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1056, in _run_stage
[rank0]: self.fit_loop.run()
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 216, in run
[rank0]: self.advance()
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 455, in advance
[rank0]: self.epoch_loop.run(self._data_fetcher)
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 150, in run
[rank0]: self.advance(data_fetcher)
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 282, in advance
[rank0]: batch, _, __ = next(data_fetcher)
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/fetchers.py", line 134, in __next__
[rank0]: batch = super().__next__()
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/fetchers.py", line 61, in __next__
[rank0]: batch = next(self.iterator)
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/pytorch_lightning/utilities/combined_loader.py", line 341, in __next__
[rank0]: out = next(self._iterator)
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/pytorch_lightning/utilities/combined_loader.py", line 78, in __next__
[rank0]: out[i] = next(self.iterators[i])
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 708, in __next__
[rank0]: data = self._next_data()
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1458, in _next_data
[rank0]: idx, data = self._get_data()
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1401, in _get_data
[rank0]: success, data = self._try_get_data(self._timeout)
[rank0]: File "/home/user/.venv/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1264, in _try_get_data
[rank0]: raise RuntimeError(
[rank0]: RuntimeError: DataLoader worker (pid(s) 486770) exited unexpectedly
```
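The repeated `CUDA error: initialization error` aborts in the forked DataLoader workers above are the classic symptom of fork-unsafe state (such as a CUDA context) being inherited across `fork()`. A minimal stdlib-only sketch of the usual workaround — using the `spawn` start method so each worker gets a fresh interpreter (this is a generic illustration, not the fix confirmed for this report):

```python
import multiprocessing as mp

def square(x):
    return x * x

if __name__ == "__main__":
    # "spawn" starts each worker in a fresh interpreter, so fork-unsafe state
    # initialized in the parent (e.g. a CUDA context) is never inherited;
    # "fork" copies the parent's address space, which can crash CUDA.
    ctx = mp.get_context("spawn")
    with ctx.Pool(2) as pool:
        print(pool.map(square, range(4)))  # [0, 1, 4, 9]
```

With PyTorch, the analogous knob is `DataLoader(..., multiprocessing_context="spawn")` or avoiding CUDA work inside `Dataset.__getitem__`.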
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,939,442,183 | [Inductor] Introducing Subgraph as a Choice | PaulZhang12 | closed | [
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149761
Introduce autotuning on a subgraph as a choice in Inductor.
Demonstrated by decomposing mm -> bmm + sum; repro: https://pastebin.com/UZq3VtyK
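The mm -> bmm + sum decomposition mentioned above rests on a simple identity: splitting the reduction dimension k into s chunks, contracting each chunk as one batch of a batched matmul, and summing the batch dimension reproduces the plain matmul. A NumPy sketch of that equivalence (shapes are illustrative):

```python
import numpy as np

# A is (m, k), B is (k, n); split the reduction dim k into s chunks.
m, k, n, s = 4, 8, 3, 2
rng = np.random.default_rng(0)
A = rng.standard_normal((m, k))
B = rng.standard_normal((k, n))

A_chunks = A.reshape(m, s, k // s).transpose(1, 0, 2)  # (s, m, k/s)
B_chunks = B.reshape(s, k // s, n)                     # (s, k/s, n)
bmm_sum = np.matmul(A_chunks, B_chunks).sum(axis=0)    # (m, n)

assert np.allclose(bmm_sum, A @ B)
```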
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,939,441,365 | AssertionError: Unexpected key _export_root.mods.embedding_model.blocks.0 | ivyw-ts | closed | [
"module: onnx",
"triaged"
] | 1 | NONE | ### 🐛 Describe the bug
This is a followup/continuation of <a href="https://github.com/pytorch/pytorch/issues/149533">bug report #149533</a>.
We ran into this error when trying to convert the <a href="https://huggingface.co/speechbrain/lang-id-voxlingua107-ecapa">VoxLingua107 ECAPA-TDNN Spoken Language Identification Model</a> to ONNX.
### Error message:
```
AssertionError: Unexpected key _export_root.mods.embedding_model.blocks.0
```
### Stack trace:
```
torch._dynamo.exc.ArgsMismatchError: got an unexpected keyword argument 'lengths'.
func = 'forward' /Users/user/langid/.venv/lib/python3.13/site-packages/speechbrain/lobes/models/ECAPA_TDNN.py:83, args = [<class 'speechbrain.lobes.models.ECAPA_TDNN.TDNNBlock'>, <class 'torch.Tensor'>], kwargs = {'lengths': TensorVariable()}
from user code:
File "/Users/user/langid/.venv/lib/python3.13/site-packages/speechbrain/inference/classifiers.py", line 188, in forward
return self.classify_batch(wavs, wav_lens)
```
### Additional output: [onnx_export_2025-03-21_16-22-16-738715_pt_export.md](https://github.com/user-attachments/files/19396927/onnx_export_2025-03-21_16-22-16-738715_pt_export.md)
### Steps to replicate the error (using a Linux machine):
We followed the README for Linux to download and build PyTorch in a Conda environment, but used the most recent nightly build by running:
```
pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
```
The next steps detail how to replicate the error we encountered when exporting the VoxLingua model.
1. Install speechbrain dependencies:
```
pip install git+https://github.com/speechbrain/speechbrain.git@develop
```
2. Set up VoxLingua project in new Python file:
```
import torch
import torchaudio
from speechbrain.inference.classifiers import EncoderClassifier
import torch.onnx
language_id = EncoderClassifier.from_hparams(source="speechbrain/lang-id-voxlingua107-ecapa", savedir="tmp")
# Create dummy audio signal data
signal = torch.zeros(48000)
prediction = language_id.classify_batch(signal)
print(prediction)
```
3. Add torch.onnx command to end of Python file:
```
torch.onnx.export(language_id, signal, "langid.onnx", export_params=True,
do_constant_folding=True, input_names=['input'], output_names=['output'],
dynamic_axes={'input' : {0 : 'batch_size'}}, dynamo=True, report=True)
```
4. Add this line after line 1060 of the .venv/lib/python3.13/site-packages/speechbrain/inference/classifiers.py file:
```
return torch.empty_like(x)
```
5. Run in conda environment:
```
python3 <FILENAME>.py
```
This should result in the error and stack trace described in the beginning of this report.
**Note:**
Step 4 is meant to bypass the error *torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not extract specialized integer from data-dependent expression u0 (unhinted: u0). (Size-like symbols: none)*. We have tried the solutions mentioned <a href="https://github.com/pytorch/pytorch/issues/149533#issuecomment-2739198942">here</a> without success, and created this updated report as we are encountering a different type of issue with the new torch build.
### Versions
PyTorch version: 2.8.0.dev20250321+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.13.2 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:02) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
CPU family: 6
Model: 79
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 8
Stepping: 1
BogoMIPS: 5199.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 invpcid rtm rdseed adx smap xsaveopt arat md_clear flush_l1d arch_capabilities
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 2 MiB (8 instances)
L3 cache: 320 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxscript==0.2.2
[pip3] optree==0.14.1
[pip3] torch==2.8.0.dev20250321+cpu
[pip3] torchaudio==2.6.0.dev20250321+cpu
[pip3] triton==3.2.0
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl-include 2025.0.1 pypi_0 pypi
[conda] mkl-static 2025.0.1 pypi_0 pypi
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.14.1 pypi_0 pypi
[conda] torch 2.8.0.dev20250321+cpu pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250321+cpu pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi | true |
2,939,360,023 | [BE]: Update cudnn frontend submodule to 1.11.0 | Skylion007 | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: cudnn"
] | 9 | COLLABORATOR | Update CUDNN frontend submodule to 11.1.0. Adds some new features like score_mod from flex_attention and adds a lot of bugfixes and new feature knobs. | true |
2,939,332,557 | [dynamo][hooks] config to wrap the top frame in a wrapper | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149758
* #149712
This should be done by default but there are too many issues. This PR is a
workaround.
https://github.com/pytorch/pytorch/issues/117584
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,939,325,223 | [Profiler] Give non-zero default values to start events | mcalman | open | [
"triaged",
"open source",
"Merged",
"Reverted",
"topic: not user facing",
"ci-no-td"
] | 20 | CONTRIBUTOR | The intent of the existing code is to
> // Assign system TIDs to start events based on the system TID of the next
> // observed event with the same Python TID.
However, if there are start events that don't share the same Python TID as later observed events, then they are left with the default initialization of DeviceAndResource and assigned values of `0`. This is problematic because Kineto uses `device=0, resource=0` for the first GPU (or other backend) device.
This PR maintains the previous logic of using TIDs from later events if any are present, but defaults to the current process and system thread IDs if there aren't later events to reference.
This issue was discovered while working to implement a custom backend and some CPU start events were appearing on the same process and thread as the device in the trace. | true |
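The fallback described above can be sketched in a few lines of Python. This is a hypothetical illustration of the logic, not the actual profiler internals (the function and event-dict field names here are assumptions):

```python
import os
import threading

def resolve_start_event_tid(python_tid, later_events):
    """Sketch of the described fallback: prefer the system (pid, tid) of the
    next observed event with the same Python TID; otherwise fall back to the
    current process/thread IDs instead of the default (0, 0), which Kineto
    reserves for the first GPU/backend device."""
    for ev in later_events:
        if ev["python_tid"] == python_tid:
            return ev["pid"], ev["system_tid"]
    # No later event shares this Python TID: use real, non-zero IDs.
    return os.getpid(), threading.get_native_id()
```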
2,939,265,807 | Removing doc references to PRE_CXX11_ABI. | AlannaBurke | closed | [
"module: docs",
"Merged",
"topic: not user facing"
] | 5 | CONTRIBUTOR | Fixes #149550
cc @svekars @sekyondaMeta | true |
2,939,217,204 | [sigmoid] Fix scalar resolution for Scalar_mode aten ops. | zhxchen17 | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5 | CONTRIBUTOR | Summary: For Scalar variant resolution, we didn't handle a corner case of "Tensor_mode" variant (from aten::div). Adding the missing case to the graph pass.
Test Plan: buck test mode/opt caffe2/test:test_export -- -r test_operator_aten_tensor_mode_variant_cpp_runtime
Differential Revision: D71638433
| true |
2,939,217,042 | [sigmoid] Support _operator.neg/truediv | zhxchen17 | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5 | CONTRIBUTOR | Summary: adding operator.truediv and operator.neg support to the runtime
Test Plan: buck run mode/opt caffe2/test:test_export -- -r test_sym_float_operators_cpp_runtime_nonstrict
Differential Revision: D71637267
| true |
2,939,215,094 | Stash tensors for reduce_scatter_v and all_gather_v | kwen2501 | closed | [
"oncall: distributed",
"release notes: distributed (c10d)"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149753
* #148590
https://github.com/pytorch/pytorch/pull/148590 removed record_stream. Since the previous AVOID_RECORD flag does not cover reduce_scatter_v and all_gather_v, which are in coalescing form, these two ops were missed, causing TorchRec's Variable Length Embedding to fail.
This PR adds a vector to stash tensors while coalescing is in flight. At the end of coalescing, the tensors are handed over to Work.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,939,187,660 | [MPS][BE] Move `polar`/`complex` to stubs | malfet | closed | [
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149730
* __->__ #149752
* #149729
* #149728
* #149727
No need to have an in-place MPS kernel, as it is just a copy-and-paste of code
from TensorFactories.cpp into Binarykernel.mm
2,939,186,088 | [AOTAutogradCache] Allow Custom Autograd functions behind a flag | jamesjwu | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 7 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149751
This adds a new env var and flag,
autograd_cache_allow_custom_autograd_functions, (env var: `TORCHINDUCTOR_AUTOGRAD_CACHE_ALLOW_CUSTOM_AUTOGRAD`) which allows custom autograd functions into AOTAutogradCache.
@hirsheybar and I worked together to verify that the higher order op AutogradFunctionApply is pure with respect to the dynamo input being passed in, so this *should* be safe. I'm still putting it behind a flag and turning it on slowly, first on an internal model, though. Once we verify that it is correct on the internal model we can work to enable the flag by default.
Differential Revision: [D71633184](https://our.internmc.facebook.com/intern/diff/D71633184/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,939,124,233 | [FSDP2] warning that reshard_after_forward=1 and True are different | weifengpy | closed | [
"oncall: distributed",
"Merged",
"ciflow/inductor",
"release notes: distributed (fsdp2)"
] | 3 | CONTRIBUTOR | people complains about spending time to debug reshard_after_forward=1. What they actually want is reshard_after_forward=True. 1 and True can be used interchangeably in programming generally, add one-time warning to remind they are different
* reshard_after_forward=1 means resharding parameters to world size 1, by keeping unsharded parameters from forward to backward
* reshard_after_forward=True means reshard parameters to FSDP mesh
from FSDP2 perspective, our docstring is clear about int vs bool https://pytorch.org/docs/main/distributed.fsdp.fully_shard.html
<img width="764" alt="Screenshot 2025-03-21 at 11 02 55 AM" src="https://github.com/user-attachments/assets/6675f7a4-95a0-4421-8dbf-f47e9fdeca26" />
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149750
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,939,099,055 | Support None return type in torchbind and Add more AOTI torchbind e2e tests | yushangdi | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7 | CONTRIBUTOR | Summary:
- Add more tests for torchbind in aoti
**FallBackKernel**
- In FallbackKernel.find_device, do not check the device of torchbind objs because they don't have a fixed "device"
- If no device is found for CallTorchBindObject, use cpu
- handle None output in `export_extern_kernel_node`
Test Plan:
```
buck run //sigmoid/inference/test:e2e_test_cpu -- -r CustomClassHolderConstantDynamic
```
Differential Revision: D70746626
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,939,090,050 | Remove `torch.utils.deterministic` from `MOD_SKIPLIST` | guilhermeleobas | open | [
"open source",
"module: dynamo",
"ciflow/inductor"
] | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149748
* #149643
Attempt to trace `torch.utils.deterministic` module
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,939,066,739 | Support torchbind in OSS proxy executor | yushangdi | closed | [
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: export",
"ci-no-td"
] | 16 | CONTRIBUTOR | Summary:
Implement torchbind support in OSSProxyExecutor.
Exactly the same as the implementation in FbProxyExecutor.
D69693697 - fbProxyExecutor
D69887230 - fbProxyExecutor but for torchbind method
Other changes:
- When generating the schema of the CallTorchBind HOP, the arg name of the torchbind object arg should match the torchbind method's torchbind object arg (instead of `obj`).
- In `AOTIModelPackageLoader`, we extract everything in `data/constants` to the `tmp_dir/data/aot_inductor/<model>/` folder, so the torchbind objs live in the same folder as the rest of the files (e.g. cpp, so). This is consistent with how files are packaged internally.
Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:torchbind -- -r torchbind_aoti
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:torchbind -- -r aot_compile
```
Differential Revision: D69500038
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,939,058,839 | enabled dynamic rblock scaling on H100 | shunting314 | open | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
Apply https://github.com/pytorch/pytorch/pull/109275 to h100.
### Error logs
_No response_
### Versions
.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov | true |
2,939,052,831 | [ONNX] Clean up legacy dynamo export code | justinchuby | closed | [
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: bc breaking",
"suppress-bc-linter"
] | 3 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149745
Clean up code that is unused and obsolete. The public `torch.onnx.dynamo_export` is kept for now but the legacy implementation is removed.
Remove public option classes and OnnxRegistry that have been deprecated.
Users: use torch.onnx.export(…, dynamo=True). | true |
2,939,033,198 | [fbcode]Removing `@NoIntBaseDeprecated` annotation in `caffe2.thrift` file (#149742) | Sunnie912 | open | [
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 19 | CONTRIBUTOR | Summary:
To align with thrift-python, we are adding the int base class for `non-Flag` enums. In order to not break production code, the annotation `python.NoIntBaseClassDeprecated` is added to opt-out some enums
After the related customer code logic changes, we can now safely remove the annotations that were added earlier.
Our ultimate goal is to unconditionally add the `int` base to `thrift-py3` enums.
Test Plan:
```
buck test 'fbcode//mode/opt' fbcode//caffe2/torch/fb/training_toolkit/applications/bulk_eval/tests:evaluator_test -- --exact 'caffe2/torch/fb/training_toolkit/applications/bulk_eval/tests:evaluator_test - test_setup_evaluation_utils (caffe2.torch.fb.training_toolkit.applications.bulk_eval.tests.evaluator_test.EvaluatorTest)'
```
Reviewed By: ahilger
Differential Revision: D71446522
| true |
2,939,017,311 | dynamic annotations that still allow duck sizing | xmfan | open | [
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 2 | MEMBER | ### 🚀 The feature, motivation and pitch
Today if we `mark_dynamic`/`maybe_mark_dynamic`, we will always assign a separate symbol for each of the dims. This is different than automatic dynamic's default behavior which might share symbols between dims, and this difference can incur recompiles and remote cache misses.
I ran into this while marking activations as dynamic in https://github.com/pytorch/pytorch/pull/149707. When the saved activations are user inputs, marking them with `maybe_mark_dynamic` will cause a recompile if the marked dimensions were previously duck sized. I have a simplified example below, where `fn` is compiled twice despite already being traced dynamic the first time, and the shapes not changing.
```python
import torch
x = torch.randn(10, 10)
@torch.compile(backend="eager", dynamic=True)
def fn(x):
return x + 1
fn(x)
"""
class GraphModule(torch.nn.Module):
def forward(self, s0: "Sym(s0)", L_x_: "f32[s0, s0][s0, 1]cpu"):
l_x_ = L_x_
# File: /home/xmfan/core/a/pytorch/ex.py:7 in fn, code: return x + 1
add: "f32[s0, s0][s0, 1]cpu" = l_x_ + 1; l_x_ = None
return (add,)
"""
torch._dynamo.mark_dynamic(x, [0, 1])
fn(x)
"""
class GraphModule(torch.nn.Module):
def forward(self, s0: "Sym(s0)", s1: "Sym(s1)", L_x_: "f32[s0, s1][s1, 1]cpu"):
l_x_ = L_x_
# File: /home/xmfan/core/a/pytorch/ex.py:7 in fn, code: return x + 1
add: "f32[s0, s1][s1, 1]cpu" = l_x_ + 1; l_x_ = None
return (add,)
"""
```
### Alternatives
another option could be to get rid of duck sizing
### Additional context
_No response_
cc @chauhang @penguinwu @ezyang @bobrenjc93 | true |
2,938,982,029 | [fbcode]Removing `@NoIntBaseDeprecated` annotation in `caffe2.thrift` file | williamwen42 | closed | [
"fb-exported"
] | 4 | MEMBER | Summary:
To align with thrift-python, we are adding the int base class for `non-Flag` enums. In order to not break production code, the annotation `python.NoIntBaseClassDeprecated` is added to opt-out some enums
After the related customer code logic changes, we can now safely remove the annotations that were added earlier.
Our ultimate goal is to unconditionally add the `int` base to `thrift-py3` enums.
Test Plan:
```
buck test 'fbcode//mode/opt' fbcode//caffe2/torch/fb/training_toolkit/applications/bulk_eval/tests:evaluator_test -- --exact 'caffe2/torch/fb/training_toolkit/applications/bulk_eval/tests:evaluator_test - test_setup_evaluation_utils (caffe2.torch.fb.training_toolkit.applications.bulk_eval.tests.evaluator_test.EvaluatorTest)'
```
Reviewed By: ahilger
Differential Revision: D71446522
| true |
2,938,981,469 | Cudagraph fix + comment cleanup | eellison | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149741
Cudagraphs is careful to not allow any memory recorded to escape globally without having a reference to the tensor. This is because we may later reclaim that memory for a cudagraph recording and we need to mark the tensor as erroring on access. Very occasionally, a stray tensor will have been allocated locally but not yet cleaned up. In this case, we enter the slow path and try to gc.collect() to deallocate it. From a hard to repro internal use case, this was fixed by an additional `cuda.synchronize()`.
I also snuck in an outdated-comment fix and a duplicate-line removal.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,938,955,950 | Torch compile update documentation, listing required dependent packages | atalman | closed | [
"module: docs",
"triaged",
"topic: docs",
"oncall: pt2"
] | 2 | CONTRIBUTOR | ### 🐛 Describe the bug
Please see PR: https://github.com/pytorch/test-infra/pull/6434
On a clean Amazon Linux 2023 instance, making torch.compile work requires installing the following two packages in addition to Python:
```
yum groupinstall -y "Development Tools"
yum install -y python-devel
```
Please make sure to document these dependencies in this and any other torch.compile doc:
https://pytorch.org/tutorials/intermediate/torch_compile_tutorial_.html
cc @svekars @sekyondaMeta @AlannaBurke @chauhang @penguinwu @malfet @seemethere @williamwen42
### Versions
2.8.0 | true |
2,938,933,544 | adding logging to capture when a trainer process is sigkilled. | aschhabra | closed | [
"oncall: distributed",
"fb-exported",
"release notes: distributed (torchelastic)"
] | 9 | CONTRIBUTOR | Summary:
It will help us determine when a trainer process is not terminated gracefully due to a SIGKILL from torch elastic.
When a process is terminated with SIGKILL, there is no way to collect logs before termination, which causes confusion due to the missing logs. Logging when the trainer was SIGKILLed helps in understanding failures of the trainer process.
Test Plan:
unit tests
buck tests test/distributed/elastic/multiprocessing/errors/subprocess_handler_test.py
https://www.internalfb.com/intern/testinfra/testrun/12103424072581559
**ran an e2e job to confirm that training jobs using torch elastic will continue to run gracefully.**
https://www.internalfb.com/mlhub/pipelines/runs/mast/f710719113-TrainingApplication_OYJWR?job_attempt=0&version=0&tab=execution_details&env=PRODUCTION
Differential Revision: D71296393
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,938,929,927 | [ROCm] Extend vectorized elementwise kernel to more heterogenous tensor types. | carlobertolli | closed | [
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: rocm",
"ciflow/rocm"
] | 3 | CONTRIBUTOR | This patch extends the initial support for "vectorized templated" kernels to the following input tensor types: (BFloat16, float)
(float, float16)
(float16, float)
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,938,823,259 | `INTERNAL ASSERT FAILED` in `interpolate` and `torch.import_ir_module` | vwrewsge | open | [
"oncall: quantization",
"module: error checking"
] | 0 | NONE | ### 🐛 Describe the bug
# Bug 1
Code:
```
import torch
from torch.nn.quantized.functional import interpolate
x = torch.rand((1, 1, 4, 4), dtype=torch.float32)
q_x = torch.quantize_per_tensor(x, scale=0.1, zero_point=10, dtype=torch.quint8)
_ = interpolate(q_x, scale_factor=-1.0, mode='nearest')
```
Output:
```
File "/export/d2/anaconda3/lib/python3.11/site-packages/torch/nn/functional.py", line 4649, in interpolate
return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: input_width > 0 && output_width > 0 INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/quantized/cpu/UpSampleNearest2d.cpp":142, please report a bug to PyTorch.
```
# Bug 2
Code:
```
import torch
cu = torch._C.CompilationUnit("def forward(x):\n return x\n")
dummy_ir = {"methods": {"forward": "graph representation", "extra_method": "graph representation"}}
inputs = {'forward': torch.randn(1, 1, 10, 10), 'extra_method': torch.randn(1, 1, 10, 10)}
imported_module = torch.import_ir_module(cu, "dummy_module", dummy_ir, inputs, True)
```
Output:
```
imported_module = torch.import_ir_module(cu, "dummy_module", dummy_ir, inputs, True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: THPDevice_Check(map_location.ptr()) INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/jit/python/script_init.cpp":1893, please report a bug to PyTorch.
```
### Versions
torch 2.6.0
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @malfet | true |
2,938,799,179 | `INTERNAL ASSERT FAILED` in `torch.jit.script` | vwrewsge | open | [
"oncall: jit"
] | 0 | NONE | ### 🐛 Describe the bug
Code:
```
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
class TestModule(torch.nn.Module):
def forward(self, input: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
return torch.nn.functional.conv2d(input, weight, padding="same")
module = TestModule().eval()
scripted_module = torch.jit.script(module)
mobile_module = optimize_for_mobile(scripted_module)
```
Output:
```
Traceback (most recent call last):
File "/export/d2/test.py", line 11, in <module>
mobile_module = optimize_for_mobile(scripted_module)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/export/d2/anaconda3/lib/python3.11/site-packages/torch/utils/mobile_optimizer.py", line 59, in optimize_for_mobile
optimized_cpp_module = torch._C._jit_pass_optimize_for_mobile(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: 0 INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/jit/ir/alias_analysis.cpp":617, please report a bug to PyTorch. We don't have an op for prepacked::conv2d_clamp_prepack but it isn't a special case. Argument types: Tensor, NoneType, int[], str, int[], int, NoneType, NoneType,
Candidates:
prepacked::conv2d_clamp_prepack(Tensor W, Tensor? B, int[2] stride, int[2] padding, int[2] dilation, int groups, Scalar? output_min=None, Scalar? output_max=None) -> __torch__.torch.classes.xnnpack.Conv2dOpContext
```
### Versions
torch 2.6.0
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | true |
2,938,791,301 | DISABLED test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int64 (__main__.TestForeachCUDA) | pytorch-bot[bot] | open | [
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 5 | NONE | Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int64&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39166913362).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 327, in test_binary_op_with_scalar_self_support
self._binary_test(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 263, in _binary_test
actual = op(inputs, self.is_cuda, is_fastpath)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 90, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_pow', keys=('aten::_foreach_pow', 'Unrecognized', 'aten::empty_strided', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 1: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.int64], Tensor[size=(19, 19), device="cuda:0", dtype=torch.int64], Tensor[size=(18, 18), device="cuda:0", dtype=torch.int64], Tensor[size=(17, 17), device="cuda:0", dtype=torch.int64], Tensor[size=(16, 16), device="cuda:0", dtype=torch.int64], Tensor[size=(15, 15), device="cuda:0", dtype=torch.int64], Tensor[size=(14, 14), device="cuda:0", dtype=torch.int64], Tensor[size=(13, 13), device="cuda:0", dtype=torch.int64], Tensor[size=(12, 12), device="cuda:0", dtype=torch.int64], Tensor[size=(11, 11), device="cuda:0", dtype=torch.int64], Tensor[size=(10, 10), device="cuda:0", dtype=torch.int64], Tensor[size=(9, 9), device="cuda:0", dtype=torch.int64], Tensor[size=(8, 8), device="cuda:0", dtype=torch.int64], Tensor[size=(7, 7), device="cuda:0", dtype=torch.int64], Tensor[size=(6, 6), device="cuda:0", dtype=torch.int64], Tensor[size=(5, 5), device="cuda:0", dtype=torch.int64], Tensor[size=(4, 4), device="cuda:0", dtype=torch.int64], Tensor[size=(3, 3), device="cuda:0", dtype=torch.int64], Tensor[size=(2, 2), device="cuda:0", dtype=torch.int64], Tensor[size=(1, 1), device="cuda:0", dtype=torch.int64]], args=(10), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 PYTORCH_TEST_WITH_SLOW_GRADCHECK=1 python test/test_foreach.py TestForeachCUDA.test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99 | true |
2,938,680,560 | [RFC] Multi-backend, multi-device test class instantiation for Inductor | kundaMwiza | open | [
"triaged",
"module: testing",
"oncall: pt2",
"module: inductor"
] | 3 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
## Background
Out-of-tree backends (like Graphcore's) are able to utilise the PyTorch test suite to verify their implementation.
For eager tests, to instantiate device-specific test classes, out-of-tree backends can subclass from `DeviceTypeTestBase` and register their device test class by setting the environment variable `TORCH_TEST_DEVICES` to point to a Python file containing a subclass of `DeviceTypeTestBase`. `instantiate_device_type_tests` in in-tree test modules would then create additional test classes that are specialised for the device type.
As a simple example of how it works today for a custom device type called `customdevice`:
In `/path/to/custom/device/subclass.py`, used with export `TORCH_TEST_DEVICES=/path/to/custom/device/subclass.py`:
```python
class CustomDevice(DeviceTypeTestBase):
device_type = "customdevice"
```
For eager PyTorch, this was then handled in `instantiate_device_type_tests`:
```python
class TestFunctionality(TestCase):
@ops(...)
def test_op(self, op, device, dtype):
...
# Generate device specific test classes
instantiate_device_type_tests(TestFunctionality, globals())
# This generates a class with the device type in the name e.g. TestFunctionalityCPU
```
Device types supporting Inductor may have a selection of available backends to choose from during lowering. In-tree there are C++, Halide and Triton options available for CPU, and Halide and Triton for CUDA, for example. To test these lowering options, the following pattern is used in PyTorch test modules to create inductor backend specific test classes:
```python
def check_model(...):
...
def check_model_cuda(...):
...
class SomeTestTemplate:
def test_unbacked(self):
self.check_model(...)
...
# 'Empty vessel' test classes are explicitly created for each device, annotated with onlyXYZ or
# equivalent, and are then duplicated for other non-default backends for the device, e.g. Halide.
@onlyCUDA
class TestSomeCUDA(TestCase):
check_model = check_model_cuda
@onlyCUDA
@config.patch("cuda_backend", "halide")
class TestSomeCUDAHalide(TestCase):
check_model = check_model_cuda
@onlyCPU
class TestSomeCPU(TestCase):
check_model = check_model
def copy_tests(...):
...
# Copy the tests in to these 'empty vessel' classes
copy_tests(SomeTestTemplate, TestSomeCUDA, "cuda")
copy_tests(SomeTestTemplate, TestSomeCUDAHalide, "cuda")
copy_tests(SomeTestTemplate, TestSomeCPU, "cpu")
```
This manual class instantiation process does not allow out-of-tree backends to register their device types and backends in order to get instantiated test classes, as they can for eager tests, even though a fair number of test suites are generic enough to apply to most Inductor-supporting device types.
Examples of real test modules which exhibit this problem include `test_torchinductor.py` and `test_torchinductor_opinfo.py`.
## Proposal
@kundaMwiza created a WIP proposal (https://github.com/pytorch/pytorch/pull/145873) extending the `TORCH_TEST_DEVICES` API so that third-party backends can specify which lowering options they support, allowing specialised test classes to be generated for each option.
We propose to replace the existing pattern in Inductor test modules with usage of a version of `instantiate_device_type_tests` that takes a list of Inductor-capable backends:
```python
class TestFunctionality(TestCase):
@ops(...)
def test_op(self, op, device, dtype):
...
# Generate device specific test classes
instantiate_device_type_tests(
TestFunctionality, globals(),
enable_inductor_backend_classes=True,
# if the third party extension has `inductor_backends` set,
# apply the filter
only_inductor_backends=["triton"]
)
# This generates e.g.:
# TestFunctionalityTritonCUSTOMDEVICE
# TestFunctionalityTritonCPU
# Native device + backends are guarded to only run if the device + backend are available:
# i.e. TestFunctionalityTritonCPU is equivalent to:
# @skipUnless(HAS_CPU, "Requires C++ compiler")
# @config.patch("cpu_backend", "cpp")
# class TestFunctionalityTritonCPU(CPUTestBase):
# ...
```
We also propose to deprecate `config.{device}_backend` Inductor config properties, and replace them with the following endpoints that cover all device types:
```python
def set_active_backend(device_type: str, backend: str) -> None:
...
def get_active_backend(device_type: str) -> str:
...
```
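As a rough illustration of how these endpoints could behave, here is a minimal sketch backed by a per-device registry. This is not the actual Inductor implementation — the `_ACTIVE_BACKENDS` and `_DEFAULT_BACKENDS` names and the default-resolution logic are assumptions for illustration only:

```python
# Hypothetical sketch only; these module-level registries are illustrative,
# not real PyTorch/Inductor internals.
_DEFAULT_BACKENDS: dict[str, str] = {"cpu": "cpp", "cuda": "triton"}
_ACTIVE_BACKENDS: dict[str, str] = {}


def set_active_backend(device_type: str, backend: str) -> None:
    # Record which lowering backend should be used for this device type.
    _ACTIVE_BACKENDS[device_type] = backend


def get_active_backend(device_type: str) -> str:
    # Fall back to the device's registered default when nothing was set.
    return _ACTIVE_BACKENDS.get(device_type, _DEFAULT_BACKENDS[device_type])
```

With this shape, `get_active_backend("cuda")` would return `"triton"` until a test class patches it via `set_active_backend("cuda", "halide")`.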
These will have to be matched in `register_backend_for_device` by changing the type of the `device_scheduling` parameter to `dict[str, SchedulingConstructor]` and adding a `device_default_scheduler: str` parameter. For backwards compatibility, `config.{device}_backend` would become properties implemented in terms of these new methods.
Then, back in our made up out-of-tree `customdevice` test base mentioned earlier `/path/to/custom/subclass.py`, we'd add an `inductor_backends` property:
```python
class CustomDevice(DeviceTypeTestBase):
device_type = "customdevice"
inductor_backends = ["triton", "halide"]
```
This would then enable the customdevice backend to instantiate those tests for its `triton` and `halide` backends.
### Alternatives
Note: this feature can probably be handled by the more general device abstraction proposal: https://github.com/pytorch/pytorch/issues/146898 but this separate issue was created for visibility, especially for the suggestion to deprecate `config.{device}_backend` for an alternative that is scalable.
### Additional context
_No response_
CC @jansel @malfet
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov | true |
2,938,676,439 | [AOTI] Switch AOTI benchmark runner to use run_single_threaded | desertfire | open | [
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149733
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |