| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,905,222,405 | Add shim.h C API to call dispatcher on our own aten ops | janeyx99 | closed | [
"Merged",
"ciflow/trunk",
"release notes: cpp",
"ciflow/inductor"
] | 7 | CONTRIBUTOR | This PR still needs testing through a C++ extension.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148832
* #148124
| true |
2,905,179,395 | FSDP2 and autocast compatibility issue | yjxiong | open | [
"oncall: distributed",
"triaged",
"module: fsdp"
] | 1 | NONE | ### 🐛 Describe the bug
# Bug Report: Composable FSDP (`fully_shard`) Loses Autocast Context During Checkpoint Recomputation
## Description
When using the composable FSDP API (`fully_shard`) with `torch.utils.checkpoint.checkpoint`, the autocast context is lost during the recomputation phase of the backward pass. This causes inputs to have different dtypes during the recomputation compared to the original forward pass, leading to tensor metadata mismatch errors.
The classic FSDP API (`FullyShardedDataParallel` class) does not have this issue and correctly preserves the autocast context during checkpoint recomputation.
## Steps to Reproduce
The following minimal reproducible sample demonstrates the issue by comparing both FSDP implementations with checkpointing under autocast:
```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed._composable.fsdp import MixedPrecisionPolicy, fully_shard
from torch.distributed.fsdp import MixedPrecision
class CheckpointedModule(nn.Module):
    def __init__(self, wrap_type: str = "FSDP"):
        super().__init__()
        self.wrap_type = wrap_type
        self.linear1 = nn.Linear(2048, 2048)

    def forward(self, x):
        print(f"wrap_type: {self.wrap_type} forward input dtype {x.dtype}, shape {x.shape}")
        x = nn.functional.gelu(x)
        print(f"wrap_type: {self.wrap_type} gelu output dtype {x.dtype}, shape {x.shape}")
        x = self.linear1(x)
        print(f"wrap_type: {self.wrap_type} linear1 output dtype {x.dtype}, shape {x.shape}")
        return x


def init_process_group():
    if not dist.is_initialized():
        dist.init_process_group(backend="nccl")
    torch.cuda.set_device(dist.get_rank())


def reproduce_fsdp_checkpoint_dtype_bug():
    # Initialize process group (single process for simplicity)
    init_process_group()

    # FSDP1 mixed precision
    mp_policy_fsdp = MixedPrecision(
        param_dtype=torch.bfloat16, reduce_dtype=torch.bfloat16, buffer_dtype=torch.bfloat16
    )
    # FSDP2 mixed precision
    mp_policy_fsdp2 = MixedPrecisionPolicy(param_dtype=torch.bfloat16, reduce_dtype=torch.bfloat16)

    # FSDP1
    fsdp_model = FSDP(
        CheckpointedModule(wrap_type="FSDP"), mixed_precision=mp_policy_fsdp, device_id=dist.get_rank()
    )
    # FSDP2
    fsdp2_model = CheckpointedModule(wrap_type="fully_shard")
    fully_shard(fsdp2_model, mp_policy=mp_policy_fsdp2)

    # Create input
    x = torch.randn(256, 2048).cuda()
    x.requires_grad = True

    # Run with autocast, FSDP
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        # Forward pass
        out = checkpoint(fsdp_model, x, use_reentrant=False)
        # Backward pass to trigger recomputation
        loss = out.sum()
        loss.backward()

    # Run with autocast, fully_shard
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        # Forward pass
        out = checkpoint(fsdp2_model, x, use_reentrant=False)
        # Backward pass
        loss = out.sum()
        loss.backward()

    dist.destroy_process_group()


if __name__ == "__main__":
    reproduce_fsdp_checkpoint_dtype_bug()
```
## Expected Behavior
Both FSDP implementations should maintain consistent dtypes between the original forward pass and the recomputation during backward. The autocast context should be preserved during checkpoint recomputation.
## Actual Behavior
1. With classic FSDP (`FullyShardedDataParallel`), the dtype remains consistent between forward pass and recomputation.
2. With composable FSDP (`fully_shard`), the autocast context is lost during recomputation: inputs that were bfloat16 in the original forward pass revert to float32 during recompute, which triggers a tensor metadata mismatch error.
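As a torch-free illustration of the mechanism at play (a toy analogue only, not PyTorch internals; all names below are invented), a checkpoint implementation can capture the ambient context at forward time and restore it for recomputation, which is what the working path effectively does:

```python
import contextvars

# Toy stand-in for the autocast state: an ambient "precision" context.
precision = contextvars.ContextVar("precision", default="float32")

class TinyCheckpoint:
    """Capture the ambient context at forward time, restore it on recompute."""
    def __init__(self, fn):
        self.fn = fn
        self.saved = None

    def forward(self, x):
        self.saved = precision.get()        # remember the forward's context
        return self.fn(x)

    def recompute(self, x):
        token = precision.set(self.saved)   # re-enter that context
        try:
            return self.fn(x)
        finally:
            precision.reset(token)

ckpt = TinyCheckpoint(lambda x: (x, precision.get()))
token = precision.set("bfloat16")           # "autocast" region
out_fwd = ckpt.forward(1.0)
precision.reset(token)                      # region exited before backward
out_rec = ckpt.recompute(1.0)
assert out_fwd[1] == out_rec[1] == "bfloat16"   # contexts agree
```

A recompute that skips the restore step would see "float32" instead, which mirrors the dtype mismatch reported above.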
## Environment
- PyTorch version: 2.5
- CUDA version: 12.4
- GPU type: NVIDIA H100
- OS: Linux Ubuntu 22
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360 @kwen2501 @c-p-i-o | true |
2,905,176,430 | [import][fx] Move map_aggregate to C++ | jansel | closed | [
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Copy of #148243 for fbcode import
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,905,175,929 | torch.compile with mode="max-autotune" raises an error when using gradient checkpointing | efsotr | closed | [
"module: activation checkpointing",
"triaged",
"module: cuda graphs",
"oncall: pt2"
] | 5 | NONE | ### 🐛 Describe the bug
```python
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint
class Linear(nn.Linear):
    pass
    # def forward(self, x):
    #     y = super().forward(x)
    #     return x + y

class Test(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([Linear(10, 10, device="cuda") for i in range(10)])

    def forward(self, x):
        for i in range(len(self.layers)):
            x = checkpoint(self.layers[i], x)
        return x

model = Test()

def compile(model):
    return torch.compile(model, mode="max-autotune")

for i in range(len(model.layers)):
    model.layers[i] = compile(model.layers[i])

x = torch.randn((10, 10), device="cuda", requires_grad=True)
y = model(x)
```
```
File ~/anaconda3/envs/profile/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py:1004, in CUDAGraphNode._copy_inputs_and_remove_from_src(self, dsts, srcs)
1002 # Fails on empty lists
1003 if dst_tensors:
-> 1004 torch._foreach_copy_(dst_tensors, src_tensors)
File ~/anaconda3/envs/profile/lib/python3.10/site-packages/torch/utils/_device.py:106, in DeviceContext.__torch_function__(self, func, types, args, kwargs)
104 if func in _device_constructors() and kwargs.get('device') is None:
105 kwargs['device'] = self.device
--> 106 return func(*args, **kwargs)
RuntimeError: Error: accessing tensor output of CUDAGraphs that has been overwritten by a subsequent run. Stack trace: File "/tmp/ipykernel_2930826/1024770469.py", line 10, in forward
y = super().forward(x). To prevent overwriting, clone the tensor outside of torch.compile() or call torch.compiler.cudagraph_mark_step_begin() before each model invocation.
```
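The message reflects how CUDA Graphs manage outputs: each replay writes into the same static buffers, so a tensor held from an earlier run is silently overwritten unless it is cloned. A torch-free toy sketch of that aliasing (illustrative only; not the actual `cudagraph_trees` implementation):

```python
# Toy analogue of CUDA Graphs' static output buffers: every replay writes
# into the same buffer and returns a view of it, not a fresh copy.
class GraphRegion:
    def __init__(self):
        self._out = [0.0]              # one reused output buffer

    def replay(self, x):
        self._out[0] = x * 2.0         # overwrite in place
        return self._out               # caller receives an alias

g = GraphRegion()
a = g.replay(1.0)
b = g.replay(3.0)
assert a is b and a[0] == 6.0          # 'a' was overwritten by the later run

safe = list(g.replay(1.0))             # clone the output to keep it
g.replay(3.0)
assert safe[0] == 2.0                  # the clone survives subsequent runs
```

This is why the error suggests cloning the tensor outside `torch.compile()` or calling `torch.compiler.cudagraph_mark_step_begin()` before each model invocation.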
### Versions
torch 2.5.1+cu124
cc @soulitzer @mcarilli @ezyang @eellison @penguinwu @BoyuanFeng @chauhang | true |
2,905,073,976 | MiniCPM-o compilation issue | janak2 | open | [
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0 | NONE | ### 🐛 Describe the bug
MiniCPM-o model doesn't compile.
Here is the code to reproduce the error on Mac:
```python
import torch
from PIL import Image
import librosa
from transformers import AutoModel, AutoTokenizer
import os
import time
from torch._dynamo.testing import CompileCounter
from transformers.cache_utils import DynamicCache
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "1"
os.environ["TORCH_COMPILE_DEBUG"] = "1"
os.environ["TORCH_LOGS"] = "dynamo,inductor,guards"
device = torch.device("mps")
model = AutoModel.from_pretrained(
    "openbmb/MiniCPM-o-2_6",
    trust_remote_code=True,
    attn_implementation="sdpa",
    torch_dtype=torch.bfloat16,
    revision="refs/pr/19",
    low_cpu_mem_usage=True,
)
model = model.to(device=device)
tokenizer = AutoTokenizer.from_pretrained(
    "openbmb/MiniCPM-o-2_6", trust_remote_code=True, revision="refs/pr/19"
)
model.init_tts()
model.tts.float()

def inference():
    ref_audio_path = "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/services/inference-engine/minicpm-o-2.6/ref_audios/female_example.wav"
    ref_audio, _ = librosa.load(ref_audio_path, sr=16000, mono=True)
    sys_prompt = model.get_sys_prompt(ref_audio=ref_audio, mode="audio_assistant", language="en")
    user_audio, _ = librosa.load(ref_audio_path, sr=16000, mono=True)
    user_question = {
        "role": "user",
        "content": [user_audio],
    }
    if hasattr(model, "processor") and hasattr(model.processor, "feature_extractor"):
        model.processor.feature_extractor.device = torch.device("cpu")
    start_time = time.time()
    msgs = [sys_prompt, user_question]
    res = model.chat(
        msgs=msgs,
        tokenizer=tokenizer,
        sampling=True,
        max_new_tokens=128,
        use_tts_template=True,
        generate_audio=True,
        temperature=0.3,
        output_audio_path="result_assistant_round_1.wav",
    )
    end_time = time.time()
    print(f"Time taken to process the conversation: {end_time - start_time} seconds")

start_time = time.time()
model.generate = torch.compile(model.generate, mode="reduce-overhead", fullgraph=True)
end_time = time.time()
print(f"Time taken to compile the model: {end_time - start_time} seconds")
inference()
```
### Error logs
Loading checkpoint shards: 100%|██████████████████████████████████████████| 4/4 [00:00<00:00, 10.17it/s]
Time taken to compile the model: 0.00593113899230957 seconds
Traceback (most recent call last):
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/services/inference-engine/minicpm-o-2.6/mac/test_compile.py", line 163, in <module>
inference()
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/services/inference-engine/minicpm-o-2.6/mac/test_compile.py", line 62, in inference
res = model.chat(
^^^^^^^^^^^
File "/Users/janakagrawal/.cache/huggingface/modules/transformers_modules/openbmb/MiniCPM-o-2_6/1ceb0cbfa4dd6c40d2d504994a50afd210222039/modeling_minicpmo.py", line 981, in chat
res, outputs = self.generate(
^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2341, in CALL
self._call(inst)
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2335, in _call
self.call_function(fn, args, kwargs)
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 378, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2341, in CALL
self._call(inst)
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2335, in _call
self.call_function(fn, args, kwargs)
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 378, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1873, in BUILD_SLICE
self.push(SliceVariable(items))
^^^^^^^^^^^^^^^^^^^^
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/variables/lists.py", line 923, in __init__
unimplemented("Dynamic slicing on data-dependent value is not supported")
File "/Users/janakagrawal/Documents/Outspeed/outspeed-inference-infra-inferno/.venv/lib/python3.11/site-packages/torch/_dynamo/exc.py", line 317, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: Dynamic slicing on data-dependent value is not supported
from user code:
File "/Users/janakagrawal/.cache/huggingface/modules/transformers_modules/openbmb/MiniCPM-o-2_6/1ceb0cbfa4dd6c40d2d504994a50afd210222039/modeling_minicpmo.py", line 789, in generate
model_inputs["inputs_embeds"] = self.get_omni_embedding(
File "/Users/janakagrawal/.cache/huggingface/modules/transformers_modules/openbmb/MiniCPM-o-2_6/1ceb0cbfa4dd6c40d2d504994a50afd210222039/modeling_minicpmo.py", line 563, in get_omni_embedding
audio_embeddings = self.get_audio_embedding(data, chunk_length)
File "/Users/janakagrawal/.cache/huggingface/modules/transformers_modules/openbmb/MiniCPM-o-2_6/1ceb0cbfa4dd6c40d2d504994a50afd210222039/modeling_minicpmo.py", line 543, in get_audio_embedding
target_audio_embeds.append(audio_embeds[idx, : num_audio_tokens[idx], :])
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
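A common way around "Dynamic slicing on data-dependent value" is to rewrite the slice into a fixed-shape masked computation, so the traced shape no longer depends on runtime data. A minimal plain-Python sketch of the idea (illustrative only; the real fix would operate on tensors with a boolean mask):

```python
# seq[:n] with data-dependent n produces a data-dependent output shape.
def sliced(seq, n):
    return seq[:n]

# A masked form keeps the output shape fixed regardless of n.
def masked(seq, n):
    return [v if i < n else 0 for i, v in enumerate(seq)]

assert sliced([1, 2, 3, 4], 2) == [1, 2]        # length varies with n
assert masked([1, 2, 3, 4], 2) == [1, 2, 0, 0]  # length is always 4
```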
### Versions
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.9 (main, Apr 2 2024, 08:25:04) [Clang 15.0.0 (clang-1500.3.9.4)] (64-bit runtime)
Python platform: macOS-15.3.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Pro
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.2.2
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] vector-quantize-pytorch==1.21.9
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,905,037,743 | Inductor may permute inputs to flex attention, leading to assertion error | Aleko2286 | closed | [
"triaged",
"oncall: pt2",
"module: inductor",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 6 | NONE | ### 🐛 Describe the bug
For flex attention, inputs must be contiguous, but Inductor seems to permute inputs under certain conditions, which then results in an assertion error.
When using an Attention layer that looks somewhat like this:
```python
class Attention(nn.Module):
    def __init__(
        self,
        q_ch: int,
        kv_ch: Optional[int] = None,
        qk_embed_dim: Optional[int] = None,
        v_embed_dim: Optional[int] = None,
        linear_bias: bool = False,
        num_heads: int = 1,
    ):
        self.q_ch = q_ch
        self.kv_ch = kv_ch or self.q_ch
        self.qk_embed_dim = qk_embed_dim or self.q_ch
        self.v_embed_dim = v_embed_dim or self.kv_ch
        self.num_heads = num_heads
        assert (
            not self.qk_embed_dim % num_heads and not self.v_embed_dim % num_heads
        ), "The dimension of the embeddings in Attention must be divisible by the number of heads."
        super().__init__()
        self.q_proj = nn.Linear(self.q_ch, self.qk_embed_dim, bias=linear_bias)
        self.kv_proj = nn.Linear(
            self.kv_ch, self.qk_embed_dim + self.v_embed_dim, bias=linear_bias
        )
        self.o_proj = nn.Linear(self.v_embed_dim, self.q_ch, bias=linear_bias)

    def scaled_dot_product_attention(
        self,
        q: torch.Tensor,
        k: torch.Tensor,
        v: torch.Tensor,
        block_mask: torch.nn.attention.flex_attention.BlockMask,
    ) -> torch.Tensor:
        return torch.nn.attention.flex_attention.flex_attention(
            q, k, v, block_mask=block_mask
        )

    def forward(
        self, x: torch.Tensor, block_mask: torch.nn.attention.flex_attention.BlockMask
    ) -> torch.Tensor:
        q = self.q_proj(x)
        kv = self.kv_proj(x)
        k = kv[..., : self.qk_embed_dim]
        v = kv[..., self.qk_embed_dim :]
        q = q.reshape((q.shape[0], q.shape[1], self.num_heads, -1)).transpose(1, 2)
        k = k.reshape((k.shape[0], k.shape[1], self.num_heads, -1)).transpose(1, 2)
        v = v.reshape((v.shape[0], v.shape[1], self.num_heads, -1)).transpose(1, 2)
        return self.o_proj(
            self.scaled_dot_product_attention(q, k, v, block_mask)
            .transpose(1, 2)
            .reshape((x.shape[0], x.shape[1], -1))
        )
```
I get a LoweringException under certain conditions. Sadly, it does not reproduce as a standalone example. In my model, this only happens if I do a validation iteration before doing a training iteration; if I directly start training, the compilation result seems to be different and training runs without any issue. From the error message, it looks like Inductor swaps the original memory format (B, L, H, C) [transposed to (B, H, L, C)] to (B, L, C, H), which results in non-contiguous q, k, and v. (B=1, H=24, L=1904, C=32)
There seems to be no straightforward way to fix this on the user side. For example, the following code runs into the same problem:
```python
def scaled_dot_product_attention(
    self,
    q: torch.Tensor,
    k: torch.Tensor,
    v: torch.Tensor,
    block_mask: torch.nn.attention.flex_attention.BlockMask,
) -> torch.Tensor:
    return torch.nn.attention.flex_attention.flex_attention(
        q.contiguous(), k.contiguous(), v.contiguous(), block_mask=block_mask
    )
)
```
Workarounds like this exist:
```python
def scaled_dot_product_attention(
    self,
    q: torch.Tensor,
    k: torch.Tensor,
    v: torch.Tensor,
    block_mask: torch.nn.attention.flex_attention.BlockMask,
) -> torch.Tensor:
    print("", end="")
    return torch.nn.attention.flex_attention.flex_attention(
        q, k, v, block_mask=block_mask
    )
```
I think flex_attention should probably not error on non-contiguous tensors, but rather enforce contiguity itself. In any case, this is unexpected behavior from a user's perspective: even when the eager version is contiguous, compilation may fail because the query is not contiguous.
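For context, the assertion that fires here checks that the query's last-dimension stride is 1. A small torch-free sketch of row-major stride computation (illustrative only) shows why the layout Inductor chose, `[1462272, 1, 768, 24]`, trips that check:

```python
# Row-major (contiguous) strides: the last dimension always has stride 1.
def contiguous_strides(shape):
    strides, acc = [], 1
    for dim in reversed(shape):
        strides.append(acc)
        acc *= dim
    return list(reversed(strides))

shape = [1, 24, 1904, 32]
assert contiguous_strides(shape) == [1462272, 60928, 32, 1]

# The permuted layout from the error above has last-dim stride 24, not 1,
# which is exactly what "Query must be contiguous in the last dimension"
# rejects.
inductor_layout = [1462272, 1, 768, 24]
assert inductor_layout[-1] != 1
```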
### Error logs
torch._inductor.exc.InductorError: LoweringException: AssertionError: Query must be contiguous in the last dimension
target: flex_attention
args[0]: TensorBox(StorageBox(
ComputedBuffer(name='buf195', layout=FixedLayout('cuda:0', torch.float16, size=[1, 24, 1904, 32], stride=[1462272, 1, 768, 24]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.float16, inner_fn=<function pointwise_cat.<locals>.inner_fn at 0x7f037043fd80>, ranges=[1, 24, 1904, 32]))
))
args[1]: TensorBox(StorageBox(
ComputedBuffer(name='buf196', layout=FixedLayout('cuda:0', torch.float16, size=[1, 24, 1904, 32], stride=[1462272, 1, 768, 24]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.float16, inner_fn=<function pointwise_cat.<locals>.inner_fn at 0x7f0370485e40>, ranges=[1, 24, 1904, 32]))
))
args[2]: TensorBox(
ReinterpretView(
StorageBox(
ExternKernelOut(
python_kernel_name='extern_kernels.mm',
name=buf192,
layout=FixedLayout('cuda:0', torch.float16, size=[1904, 1536], stride=[1536, 1]),
inputs=[ReinterpretView(
StorageBox(
ComputedBuffer(name='buf189', layout=FixedLayout('cuda:0', torch.float16, size=[1, 1904, 768], stride=[1462272, 768, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.float16, inner_fn=<function make_pointwise.<locals>.inner.<locals>.inner_fn at 0x7f03703a4e00>, ranges=[1, 1904, 768]))
),
FixedLayout('cuda:0', torch.float16, size=[1904, 768], stride=[768, 1]),
origins=OrderedSet([mm_1])
), ComputedBuffer(name='buf191', layout=FixedLayout('cuda:0', torch.float16, size=[768, 1536], stride=[1, 768]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.float16, inner_fn=<function BaseView.make_loader.<locals>.loader at 0x7f03703a49a0>, ranges=[768, 1536]))],
constant_args=(),
kwargs={},
output_view=None,
python_kernel_name=extern_kernels.mm,
cpp_kernel_name=at::mm_out,
ordered_kwargs_for_cpp_kernel=(),
op_overload=None,
arg_properties=[{}, {}],
kwarg_properties=None,
unbacked_bindings={},
mutation_outputs=[],
origin_node=mm_1,
origins=OrderedSet([mm_1])
)
),
FixedLayout('cuda:0', torch.float16, size=[1, 24, 1904, 32], stride=[0, 32, 1536, 1], offset=768),
origins=OrderedSet([permute_60])
)
)
args[3]: Subgraph(name='sdpa_score0', graph_module=<lambda>(), graph=None)
args[4]: (1904, 1904, TensorBox(StorageBox(
InputBuffer(name='primals_95', layout=FixedLayout('cuda:0', torch.int32, size=[4, 24, 15], stride=[360, 15, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_94', layout=FixedLayout('cuda:0', torch.int32, size=[4, 24, 15, 15], stride=[5400, 225, 15, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_96', layout=FixedLayout('cuda:0', torch.int32, size=[4, 24, 15], stride=[360, 15, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_97', layout=FixedLayout('cuda:0', torch.int32, size=[4, 24, 15, 15], stride=[5400, 225, 15, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_98', layout=FixedLayout('cuda:0', torch.int32, size=[4, 24, 15], stride=[360, 15, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_99', layout=FixedLayout('cuda:0', torch.int32, size=[4, 24, 15, 15], stride=[5400, 225, 15, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_100', layout=FixedLayout('cuda:0', torch.int32, size=[4, 24, 15], stride=[360, 15, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_101', layout=FixedLayout('cuda:0', torch.int32, size=[4, 24, 15, 15], stride=[5400, 225, 15, 1]))
)), 128, 128, Subgraph(name='sdpa_mask0', graph_module=<lambda>(), graph=None))
args[5]: 0.17677669529663687
args[6]: {'PRESCALE_QK': False, 'ROWS_GUARANTEED_SAFE': False, 'BLOCKS_ARE_CONTIGUOUS': False, 'WRITE_DQ': True, 'OUTPUT_LOGSUMEXP': True}
args[7]: ()
args[8]: ()
### Versions
PyTorch version: 2.7.0.dev20250308+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Jan 17 2025, 18:03:48) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.13.5-1-default-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 570.124.04
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 5 5600X 6-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 100%
CPU max MHz: 4651.0000
CPU min MHz: 550.0000
BogoMIPS: 7400.31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualization: AMD-V
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 3 MiB (6 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.1.26
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu128
[pip3] torchaudio==2.6.0.dev20250308+cu128
[pip3] torchinfo==1.8.0
[pip3] torchvision==0.22.0.dev20250308+cu128
[pip3] triton==3.2.0
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng | true |
2,905,035,734 | automatically convert _check(u>=0) to check_is_size(), export should suggest check_is_size() instead of _check(u>=0) when applicable | laithsakka | closed | [
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"oncall: export"
] | 4 | CONTRIBUTOR | cc @chauhang @penguinwu @ezyang @bobrenjc93 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,904,983,475 | [DSD] Fix the shared parameter mismatch for optimizer state_dict when flattening FQNs are used | fegin | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (checkpoint)"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148825
Summary:
As title.
cc @H-Huang @awgu @kwen2501 @wanchaol @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,904,944,470 | Backout D70075331 | renganxu | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 12 | CONTRIBUTOR | Summary:
The AOTI lowering for model 699109736 and other new models worked before D70075331, but failed after with error "RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasLtMatmul with transpose_mat1 1 transpose_mat2 0 m 4096 n 10 k 7936 mat1_ld 7936 mat2_ld 7936 result_ld 4096 abcType 2 computeType 68 scaleType 0"
So we revert D70075331 as a workaround now.
Test Plan: The model could be lowered and published successfully. e.g. 702869739_16
Differential Revision: D70823254
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,904,911,552 | Make dynamism code robust to NotImplementedException | bobrenjc93 | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: fx",
"fx",
"ci-no-td"
] | 13 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148823
In prod many models have `@property` methods that raise
NotImplementedError. This PR updates our dynamism code to be more robust
to these types of models.
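A minimal sketch of the failure mode and the kind of defensive attribute probing this implies (names here are illustrative, not the actual dynamo internals):

```python
class ProdModel:
    @property
    def tokenizer(self):
        # common prod pattern: a @property that a subclass is expected to fill in
        raise NotImplementedError("subclass must provide a tokenizer")

def probe_attr(obj, name, default=None):
    """Read an attribute, treating NotImplementedError like 'attribute absent'."""
    try:
        return getattr(obj, name, default)
    except NotImplementedError:
        return default
```

With this, dynamism checks can inspect a model's attributes without crashing on such properties.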
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,904,821,374 | Add env for disabling meta reference on functionalization. | ysiraichi | closed | [
"open source",
"Merged",
"ciflow/trunk",
"release notes: lazy"
] | 3 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148822
Fix: https://github.com/pytorch/xla/issues/8755
This PR introduces `TORCH_DISABLE_FUNCTIONALIZATION_META_REFERENCE`
environment variable. Setting this variable makes it so the
functionalization kernels won't run the meta reference, which is used to
propagate expected sizes and strides.
Currently, PyTorch/XLA doesn't actually propagate the correct strides
to its tensors. It was also shown that calling these meta functions may
incur significant overhead.
Running the provided minimal reproducer (see issue), we see a speedup
close to 4.3x:
- Baseline: 0.0747s
- `XLA_DISABLE_FUNCTIONALIZATION=1`: 0.0159s
- `TORCH_DISABLE_FUNCTIONALIZATION_META_REFERENCE=1`: 0.0175s
In summary, this PR:
- Creates the `disable_meta_reference()` function, which checks whether
the environment variable is set
- Modifies codegen for functionalization kernels, adding the call to
`disable_meta_reference()` function to the appropriate conditions
- Creates a new bash function for running `lazy/test_ts_opinfo.py` with
the environment variable set | true |
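For illustration, a minimal Python mirror of the proposed check — the real `disable_meta_reference()` is implemented in C++ and likely caches the lookup; the exact truthiness convention here is an assumption:

```python
import os

def disable_meta_reference() -> bool:
    # True iff the user opted out of running the functionalization meta
    # reference via the new environment variable.
    value = os.environ.get("TORCH_DISABLE_FUNCTIONALIZATION_META_REFERENCE", "0")
    return value not in ("", "0")
```

So `TORCH_DISABLE_FUNCTIONALIZATION_META_REFERENCE=1 python train.py` would skip the meta reference, while leaving the variable unset keeps today's behavior.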
2,904,801,219 | Update decompositions_for_jvp.py | NinoRisteski | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | CONTRIBUTOR | small typo thing that got my eye
Fixes #ISSUE_NUMBER
| true |
2,904,739,265 | Implement derivatives for nextafter operation | baskargopinath | open | [
"triaged",
"open source",
"release notes: autograd"
] | 10 | NONE | # Implement derivatives for nextafter operation
## Description
This PR implements the derivative for the `nextafter` operation, fixing issue #148814.
The derivative of `nextafter(x, y)` with respect to `x` is:
- 1.0 where x != y (since small changes in x result in proportional changes in output)
- 0.0 where x == y (since the function is constant at that point)
The derivative with respect to `y` is always zero.
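The rule above can be sketched in plain Python using the stdlib `math.nextafter` (available since Python 3.9); `nextafter_grads` is an illustrative helper, not PyTorch API:

```python
import math

def nextafter_grads(x: float, y: float) -> tuple[float, float]:
    """Subgradients proposed here for z = nextafter(x, y):
    dz/dx = 1.0 where x != y (the output moves one ULP along with x),
    dz/dx = 0.0 where x == y (the function is constant there),
    dz/dy = 0.0 everywhere."""
    return (1.0 if x != y else 0.0), 0.0
```

For example, `math.nextafter(1.0, 2.0)` is the float immediately above 1.0, so nudging `x` nudges the output by the same amount wherever `x != y`.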
## How Has This Been Tested?
I've tested this implementation with:
- Eager mode gradient calculation
- torch.compile (Inductor backend)
- Added test functionality to verify correct gradients are computed
Tests verify that gradients are correctly computed, with x getting gradients only where x != y.
## Fixes
Fixes #148814
| true |
2,904,675,303 | addition of muon optimizer to torch.optim | SwamiKannan | closed | [
"module: optimizer",
"triaged"
] | 1 | NONE | ### 🚀 The feature, motivation and pitch
The Muon optimizer seems to converge faster and more stably than the Adam optimizer. Could you please consider adding it to the torch optimizers? Write-up [here](https://kellerjordan.github.io/posts/muon/) . Implementation [here](https://github.com/KellerJordan/Muon)
Thanks PyTorch team. You guys rock !
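For context, a pure-Python toy of Muon's core step — Newton–Schulz orthogonalization of the update matrix, X ← aX + b(XXᵀ)X + c(XXᵀ)²X. The coefficients come from the reference implementation linked above; everything else (the 2×2 helpers, the step count) is illustrative:

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

def scale(a, c):
    return [[c * x for x in row] for row in a]

def add3(a, b, c):
    return [[a[i][j] + b[i][j] + c[i][j] for j in range(len(a[0]))]
            for i in range(len(a))]

def newton_schulz(g, steps=5):
    # Quintic iteration pushing all singular values of g toward 1.
    ca, cb, cc = 3.4445, -4.7750, 2.0315
    frob = sum(x * x for row in g for x in row) ** 0.5
    x = scale(g, 1.0 / (frob + 1e-7))  # normalize so the iteration converges
    for _ in range(steps):
        a2 = matmul(x, transpose(x))                    # X X^T
        x = add3(scale(x, ca),
                 scale(matmul(a2, x), cb),
                 scale(matmul(matmul(a2, a2), x), cc))
    return x
```

On `diag(2.0, 0.5)` the 4:1 singular-value ratio collapses to near 1:1 after a few steps; Muon then applies this (approximately) orthogonalized update with momentum and a step size.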
### Alternatives
_No response_
### Additional context
_No response_
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar | true |
2,904,656,471 | Fix typos in SpectralOps.cpp | csukuangfj | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4 | CONTRIBUTOR | null | true |
2,904,602,180 | Remove numeric_limits::has_norm and numeric_limits::has_norm_loss | cyyever | open | [
"module: bc-breaking",
"triaged",
"open source",
"topic: not user facing"
] | 7 | COLLABORATOR | They are deprecated in C++23 and aren't used in computation.
See
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2022/p2614r2.pdf
cc @ezyang @gchanan | true |
2,904,597,650 | Make dynamism checking code more robust | bobrenjc93 | closed | [
"fb-exported",
"release notes: fx",
"fx"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148816
Differential Revision: [D70834597](https://our.internmc.facebook.com/intern/diff/D70834597/)
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,904,591,002 | remove guard_size_oblivious from unbind. | laithsakka | open | [] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148815
unbind will always specialize on the size of `dim`, because it determines the number of output tensors.
guard_size_oblivious is not useful there and is probably just confusing for readers of the code.
added a comment and a test that verifies the specialization. | true |
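A pure-Python sketch of why the specialization is unavoidable: the number of outputs equals the size along the unbound dimension, so that size must be a concrete integer at trace time:

```python
def unbind0(nested):
    """Unbind a list-of-lists 'tensor' along dim 0: one output per row,
    so len(result) == shape[0] — a symbolic shape[0] cannot stay symbolic."""
    return tuple(nested)

rows = [[1, 2], [3, 4], [5, 6]]   # shape (3, 2)
outs = unbind0(rows)              # exactly shape[0] == 3 outputs
```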
2,904,563,055 | [Inductor] Error detected in NextafterBackward0. Traceback of forward call that caused the error | Cookiee235 | open | [
"module: autograd",
"triaged",
"oncall: pt2"
] | 2 | CONTRIBUTOR | ### 🐛 Describe the bug
### Reproducible Script
```python
```
### StackTrace
```
/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/autograd/graph.py:823: UserWarning: Error detected in NextafterBackward0. Traceback of forward call that caused the error:
File "/data/qshenaf/remote_pc/LLM4Converter/bugs/0308/torch.nextafter.py", line 12, in forward
x = torch.nextafter(x, torch.tensor(1.0))
(Triggered internally at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:122.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
Traceback (most recent call last):
File "/data/qshenaf/remote_pc/LLM4Converter/bugs/0308/torch.nextafter.py", line 37, in <module>
compiled_out = compiled_model(*inputs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1164, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
~~~~~~~~~~^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
~~~~~~~~~~~^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
~~~~~~~~~^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3048, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3033, in _return
self.output.compile_subgraph(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
self,
^^^^^
...<2 lines>...
),
^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1101, in compile_subgraph
self.compile_and_call_fx_graph(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
tx, list(reversed(stack_values)), root, output_replacements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
return self._call_user_compiler(gm)
~~~~~~~~~~~~~~~~~~~~~~~~^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
e.__traceback__
) from None
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
  File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/__init__.py", line 2340, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1863, in compile_fx
return aot_autograd(
~~~~~~~~~~~~~
...<6 lines>...
cudagraphs=cudagraphs,
~~~~~~~~~~~~~~~~~~~~~~
)(model_, example_inputs_)
~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 1155, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
functional_call,
^^^^^^^^^^^^^^^^
...<3 lines>...
shape_env,
^^^^^^^^^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
flat_fn, fake_flat_args, aot_config, fake_mode, shape_env
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
~~~~~~~~~~~^
flat_fn,
^^^^^^^^
...<2 lines>...
fw_metadata=fw_metadata,
^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 396, in aot_dispatch_autograd
fx_g, joint_inputs, maybe_subclass_meta = aot_dispatch_autograd_graph(
~~~~~~~~~~~~~~~~~~~~~~~~~~~^
flat_fn, flat_args, aot_config, fw_metadata=fw_metadata
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 318, in aot_dispatch_autograd_graph
fx_g = _create_graph(joint_fn_to_trace, updated_joint_inputs, aot_config=aot_config)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 55, in _create_graph
fx_g = make_fx(
...<3 lines>...
pre_dispatch=aot_config.pre_dispatch,
)(*args)
  File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 2196, in wrapped
return make_fx_tracer.trace(f, *args)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
  File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 2134, in trace
return self._trace_inner(f, *args)
~~~~~~~~~~~~~~~~~^^^^^^^^^^
  File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 2105, in _trace_inner
t = dispatch_trace(
wrap_key(func, args, self.fx_tracer, self.pre_dispatch),
tracer=self.fx_tracer,
concrete_args=tuple(phs),
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_compile.py", line 32, in inner
return disable_fn(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 745, in _fn
return fn(*args, **kwargs)
  File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 1138, in dispatch_trace
graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 745, in _fn
return fn(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/_symbolic_trace.py", line 843, in trace
(self.create_arg(fn(*args)),),
~~^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/_symbolic_trace.py", line 700, in flatten_fn
tree_out = root_fn(*tree_args)
  File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 1193, in wrapped
out = f(*tensors) # type:ignore[call-arg]
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 693, in inner_fn
outs = fn(*args)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 644, in joint_helper
return _functionalized_f_helper(primals, tangents)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 413, in _functionalized_f_helper
f_outs = fn(*f_args)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 280, in inner_fn_with_anomaly
return inner_fn(*args)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 265, in inner_fn
backward_out = torch.autograd.grad(
needed_outs,
...<2 lines>...
allow_unused=True,
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/autograd/__init__.py", line 445, in grad
return handle_torch_function(
grad,
...<9 lines>...
materialize_grads=materialize_grads,
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/overrides.py", line 1720, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 1241,in __torch_function__
return func(*args, **kwargs)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/autograd/__init__.py", line 496, in grad
result = _engine_run_backward(
outputs,
...<5 lines>...
accumulate_grad=False,
)
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/autograd/graph.py", line 823, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
t_outputs, *args, **kwargs
^^^^^^^^^^^^^^^^^^^^^^^^^^
) # Calls into the C++ engine to run the backward pass
^
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
NotImplementedError: the derivative for 'nextafter' is not implemented.
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 81%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.23
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @chauhang @penguinwu | true |
2,904,554,119 | [ONNX] Remove inaccurate test comment | justinchuby | closed | [
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | COLLABORATOR | Remove the comment that says jit trace strategy doesn't support dynamic shapes as dict because it does support it (which is what the test is testing)
| true |
2,904,550,506 | Bump jinja2 from 3.1.5 to 3.1.6 in /.ci/docker | dependabot[bot] | closed | [
"triaged",
"open source",
"topic: not user facing",
"dependency issue",
"python"
] | 2 | CONTRIBUTOR | Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.5 to 3.1.6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pallets/jinja/releases">jinja2's releases</a>.</em></p>
<blockquote>
<h2>3.1.6</h2>
<p>This is the Jinja 3.1.6 security release, which fixes security issues but does not otherwise change behavior and should not result in breaking changes compared to the latest feature release.</p>
<p>PyPI: <a href="https://pypi.org/project/Jinja2/3.1.6/">https://pypi.org/project/Jinja2/3.1.6/</a>
Changes: <a href="https://jinja.palletsprojects.com/en/stable/changes/#version-3-1-6">https://jinja.palletsprojects.com/en/stable/changes/#version-3-1-6</a></p>
<ul>
<li>The <code>|attr</code> filter does not bypass the environment's attribute lookup, allowing the sandbox to apply its checks. <a href="https://github.com/pallets/jinja/security/advisories/GHSA-cpwx-vrp4-4pq7">https://github.com/pallets/jinja/security/advisories/GHSA-cpwx-vrp4-4pq7</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pallets/jinja/blob/main/CHANGES.rst">jinja2's changelog</a>.</em></p>
<blockquote>
<h2>Version 3.1.6</h2>
<p>Released 2025-03-05</p>
<ul>
<li>The <code>|attr</code> filter does not bypass the environment's attribute lookup,
allowing the sandbox to apply its checks. :ghsa:<code>cpwx-vrp4-4pq7</code></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pallets/jinja/commit/15206881c006c79667fe5154fe80c01c65410679"><code>1520688</code></a> release version 3.1.6</li>
<li><a href="https://github.com/pallets/jinja/commit/90457bbf33b8662926ae65cdde4c4c32e756e403"><code>90457bb</code></a> Merge commit from fork</li>
<li><a href="https://github.com/pallets/jinja/commit/065334d1ee5b7210e1a0a93c37238c86858f2af7"><code>065334d</code></a> attr filter uses env.getattr</li>
<li><a href="https://github.com/pallets/jinja/commit/033c20015c7ca899ab52eb921bb0f08e6d3dd145"><code>033c200</code></a> start version 3.1.6</li>
<li><a href="https://github.com/pallets/jinja/commit/bc68d4efa99c5f77334f0e519628558059ae8c35"><code>bc68d4e</code></a> use global contributing guide (<a href="https://redirect.github.com/pallets/jinja/issues/2070">#2070</a>)</li>
<li><a href="https://github.com/pallets/jinja/commit/247de5e0c5062a792eb378e50e13e692885ee486"><code>247de5e</code></a> use global contributing guide</li>
<li><a href="https://github.com/pallets/jinja/commit/ab8218c7a1b66b62e0ad6b941bd514e3a64a358f"><code>ab8218c</code></a> use project advisory link instead of global</li>
<li><a href="https://github.com/pallets/jinja/commit/b4ffc8ff299dfd360064bea4cd2f862364601ad2"><code>b4ffc8f</code></a> release version 3.1.5 (<a href="https://redirect.github.com/pallets/jinja/issues/2066">#2066</a>)</li>
<li>See full diff in <a href="https://github.com/pallets/jinja/compare/3.1.5...3.1.6">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/pytorch/pytorch/network/alerts).
</details> | true |
2,904,477,750 | Custom Triton operator not registered when loading AOT-compiled .so in C++ environment | siluzhou-pku | open | [
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 2 | NONE | ### Description:
I'm encountering schema registration issues when using torch._export.aot_compile to compile a PyTorch model that includes custom Triton operators: the custom operators are not included in the compiled .so file. As a result, the compiled .so works in Python only when the defining module is imported, and fails in pure C++ environments due to missing operator schemas. How can I solve this problem?
### To Reproduce
I've been working with a model that utilizes custom Triton operators, based on the [moe_layer example](https://github.com/RobertCsordas/moe_layer/tree/master/triton_src/moe_layer). Here's the code I used to compile the model:
```python
import torch
import os
from triton_src.moe_layer import MoE

bs = 64
seq_len = 512
d_model = 512
n_experts = 32
expert_size = 128
k = 4

x = torch.randn(bs, seq_len, d_model).cuda()
sigma_moe = MoE(d_model, n_experts, expert_size, k).cuda()

with torch.no_grad():
    dynamicLib_path = torch._export.aot_compile(
        sigma_moe,
        args=(x,),
        dynamic_shapes={'input': {0: torch.export.Dim("batch", min=1, max=64)}},
        options={
            "aot_inductor.output_path": 'dynamicLib/moe.so',
            "max_autotune": True,
        },
    )
```
After compiling, when I try to load and execute the compiled .so in Python, I encounter the following error:
```
Error: Could not find schema for mylib::cvmm_triton.
Exception raised from findSchemaOrThrow at ../aten/src/ATen/core/dispatch/Dispatcher.cpp:150 (most recent call first):
```
To make it work in Python, I need to explicitly import the module where the custom operators are defined:
```python
from triton_src.moe_layer import MoE
```
However, when attempting to use the compiled .so in a C++ environment, I cannot import the Python module, and therefore the custom operator definitions (mylib::cvmm_triton and mylib::cvmm_triton_backward) are missing. This causes the execution to fail with the same error.
It appears that during the torch._export.aot_compile process, the custom Triton operators are not being included in the compiled .so file.
### Key Questions
- How can we ensure custom operator schemas are properly embedded in the compiled .so?
- What additional steps are needed to make these operators visible in C++ environments?
- Are there specific compilation flags or registration mechanisms required for AOT compilation to persist custom operator definitions?
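A possible workaround (an assumption on my side, not something the AOTI docs promise): since the .so appears to embed only the compiled graph, the operator schemas have to be registered in the loading process before the model runs. In Python this is the side effect of `from triton_src.moe_layer import MoE`; the same registration can be sketched explicitly with `torch.library` (the `mylib_demo` namespace and the single-tensor schema below are placeholders, not the real `mylib::cvmm_triton` signature):

```python
import torch

# Placeholder namespace and schema; the real mylib::cvmm_triton signature differs.
lib = torch.library.Library("mylib_demo", "DEF")
lib.define("cvmm_triton(Tensor x) -> Tensor")

# The dispatcher can now resolve the schema that findSchemaOrThrow was missing.
op = torch.ops.mylib_demo.cvmm_triton
```

In a pure C++ process, the analogous step would be shipping a small shared library that registers the schema and implementation via the `TORCH_LIBRARY` / `TORCH_LIBRARY_IMPL` macros and loading it before invoking the AOTI model runner.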
### Environment
- PyTorch 2.5.1
- Cuda 12.4
- Triton 3.1.0
### Related Code Structure
The custom operators are defined using:
- @triton.jit for kernel implementation
- @custom_op with fake implementations
- Autograd registration via register_autograd
- Explicit schema registration through Python-side imports
cc @chauhang @penguinwu @zou3519 @bdhirsh | true |
2,904,456,447 | avoid guard_size_oblivious in vector_norm | laithsakka | closed | [] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148810
* #148809
* #148430
| true |
2,904,456,407 | Remove guard_size_oblivious from vector_norm decomposition. | laithsakka | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 14 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148809
This PR remove the usage of guard_size_oblivious in vector_norm by inlining it in the runtime check,
this prevent any data dependent error from ever appearing here at the locations where guard_size_oblivious
used to exist. Before this PR it used to break potentially. This is NOT BC breaking or changing of semantics from eager.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,904,437,368 | [Inductor] Inference failed with the compiled model with aminmax operator | Cookiee235 | open | [
"module: autograd",
"good first issue",
"triaged",
"oncall: pt2"
] | 7 | CONTRIBUTOR | ### 🐛 Describe the bug
```python
import torch

class SimpleModel(torch.nn.Module):
    def __init__(self):
        super(SimpleModel, self).__init__()
        self.linear = torch.nn.Linear(10, 10)

    def forward(self, x):
        x = self.linear(x)
        min_val, max_val = torch.aminmax(x)
        x_normalized = (x - min_val) / (max_val - min_val)
        return x_normalized

model = SimpleModel()
inputs = torch.randn(1, 10)
with torch.no_grad():
    compiled_model = torch.compile(model, backend='inductor')
    compiled_out = compiled_model(*inputs)
```
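The stack trace in this report shows AOTAutograd taking the autograd path (`aot_dispatch_autograd`), which needs a derivative for every traced op, and `aten::aminmax` has none registered. One possible workaround (my suggestion, not an official fix) is to compute the extremes with `amin`/`amax`, which are differentiable and return the same values for a full reduction:

```python
import torch

x = torch.randn(1, 10)

# Full-reduction amin/amax match aminmax exactly, and both have derivatives,
# so they can stand in for the aminmax call inside forward().
min_val, max_val = torch.aminmax(x)
min_alt, max_alt = x.amin(), x.amax()
assert torch.equal(min_val, min_alt)
assert torch.equal(max_val, max_alt)
```

Swapping `torch.aminmax(x)` for `x.amin(), x.amax()` in `forward` should let the backend build the joint graph without hitting the missing-derivative error.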
### StackTrace
```
/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/autograd/graph.py:825: UserWarning: Error detected in torch::autograd::NotImplemented. Traceback of forward call that caused the error:
File "/data/qshenaf/remote_pc/LLM4Converter/bugs/torch.aminmax.py", line 11, in forward
min_val, max_val = torch.aminmax(x)
(Triggered internally at ../torch/csrc/autograd/python_anomaly_mode.cpp:110.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
Traceback (most recent call last):
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1446, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/repro/after_dynamo.py", line 129, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/__init__.py", line 2234, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1521, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/backends/common.py", line 72, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1071, in aot_module_simplified
compiled_fn = dispatch_and_compile()
^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1056, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 522, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 759, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 333, in aot_dispatch_autograd
fx_g, joint_inputs, maybe_subclass_meta = aot_dispatch_autograd_graph(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 294, in aot_dispatch_autograd_graph
fx_g = _create_graph(joint_fn_to_trace, updated_joint_inputs, aot_config=aot_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 54, in _create_graph
fx_g = make_fx(
^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py", line 2110, in wrapped
return make_fx_tracer.trace(f, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py", line 2048, in trace
return self._trace_inner(f, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py", line 2034, in _trace_inner
t = dispatch_trace(
^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_compile.py", line 32, in inner
return disable_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py", line 1127, in dispatch_trace
graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 823, in trace
(self.create_arg(fn(*args)),),
^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 676, in flatten_fn
tree_out = root_fn(*tree_args)
^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py", line 1182, in wrapped
out = f(*tensors)
^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 693, in inner_fn
outs = fn(*args)
^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 644, in joint_helper
return _functionalized_f_helper(primals, tangents)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 413, in _functionalized_f_helper
f_outs = fn(*f_args)
^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 280, in inner_fn_with_anomaly
return inner_fn(*args)
^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 265, in inner_fn
backward_out = torch.autograd.grad(
^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/autograd/__init__.py", line 445, in grad
return handle_torch_function(
^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/overrides.py", line 1717, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py", line 1230, in __torch_function__
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/autograd/__init__.py", line 496, in grad
result = _engine_run_backward(
^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: derivative for aten::aminmax is not implemented
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/data/qshenaf/remote_pc/LLM4Converter/bugs/torch.aminmax.py", line 22, in <module>
compiled_out = compiled_model(*inputs)
^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1269, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1064, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 526, in __call__
return _compile(
^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 924, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 666, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 699, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1322, in transform_code_object
transformations(instructions, code_options)
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 219, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 634, in transform
tracer.run()
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2796, in run
super().run()
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2987, in RETURN_VALUE
self._return(inst)
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2972, in _return
self.output.compile_subgraph(
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1117, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1369, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1416, in call_user_compiler
return self._call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1465, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: derivative for aten::aminmax is not implemented
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
Process finished with exit code 1
```
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.11.0 (main, Mar 1 2023, 18:26:19) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 81%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.23
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Notaffected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] optree==0.14.0
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.14.0 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @chauhang @penguinwu | true |
2,904,396,664 | Native Infinite Sampler for Datasets | shivakanthsujit | open | [
"feature",
"module: dataloader",
"triaged",
"module: data"
] | 0 | NONE | ### 🚀 The feature, motivation and pitch
Currently I don't think there is a native function in PyTorch that lets you sample from a fixed-length dataset through an infinite-length data loader. That is, I want to define my ML loop in terms of the number of gradient steps to perform and simply keep sampling batches from the DataLoader that many times.
From looking online this is the approach that people generally use to achieve this
```
d_iter = iter(dataloader)
for _ in range(num_updates):
    try:
        d = next(d_iter)
    except StopIteration:
        d_iter = iter(dataloader)
        d = next(d_iter)
```
But this feels clunky and, more importantly, adds overhead every time you wrap the dataloader in `iter`. Depending on the dataset, the cost of recreating the dataloader iterator can be significant; in my use case it is about 25 seconds each time, and an infinite sampler would eliminate it completely, saving hours of training time.
Instead I would prefer a wrapper around the original dataset that effectively allows you to sample from the dataloader an infinite number of times. What I do is wrap the original dataset in an `IterableDataset` and put the sampling of indices in a `while True` loop, so the iterator can keep yielding elements for as long as you like. I make sure to handle multiple workers and the distributed setting.
I have a working version of such an infinite sampler and wanted to ask whether it could be added to PyTorch directly. I think this is a common enough use case to warrant a PyTorch-native function.
For context this is what I do
```
def __iter__(self):
    worker_info = torch.utils.data.get_worker_info()
    num_workers = 1
    worker_id = 0
    if worker_info:
        num_workers = worker_info.num_workers
        worker_id = worker_info.id
    total_processes = self.world_size * num_workers
    process_id = self.rank * num_workers + worker_id

    indices = list(range(self.dataset_len))
    if len(indices) < self.total_size:
        padding = self.total_size - len(indices)
        indices += indices[:padding]

    while True:
        if self.shuffle:
            current_seed = self.seed + self.iteration
            g = torch.Generator()
            g.manual_seed(current_seed)
            shuffle_indices = torch.randperm(len(indices), generator=g).tolist()
            indices = [indices[i] for i in shuffle_indices]
        local_indices = indices[process_id::total_processes]
        for idx in local_indices:
            yield self.dataset[idx]
        self.iteration += 1
```
`self.iteration` makes sure the ordering of the samples is reshuffled at the end of each pass over the dataset, similar to how `sampler.set_epoch()` is used with `DistributedSampler`. It has to be handled internally, since the user has no way of knowing when one pass over the dataset is complete.
After wrapping your dataset in this iterable, you can use the DataLoader to perform batching as usual.
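The key sharding line above, `indices[process_id::total_processes]`, hands each rank/worker pair a disjoint strided slice of the (padded) index list. A torch-free sketch with hypothetical sizes (2 ranks x 2 workers over 8 samples):

```python
# Minimal sketch of the round-robin sharding used above, with made-up sizes.
world_size, num_workers, dataset_len = 2, 2, 8
total_processes = world_size * num_workers

indices = list(range(dataset_len))
shards = [indices[pid::total_processes] for pid in range(total_processes)]

# Every sample is assigned to exactly one process.
covered = sorted(i for shard in shards for i in shard)
assert covered == indices
print(shards)  # [[0, 4], [1, 5], [2, 6], [3, 7]]
```

Because the slices are disjoint and cover the padded index list, each worker yields a unique subset per pass, mirroring what `DistributedSampler` does per epoch.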
### Alternatives
The loop with `StopIteration` handling mentioned above. But I feel my solution is better because it avoids the overhead of creating the dataloader iterator multiple times.
### Additional context
_No response_
cc @andrewkho @divyanshk @SsnL @VitalyFedyunin @dzhulgakov | true |
2,904,374,252 | [triton 3.3] support both specialize_impl and create_specialize_impl | davidberard98 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148806
After https://github.com/triton-lang/triton/pull/6099, we sometimes need to do `from triton.runtime.jit import specialize impl` and sometimes do `triton.runtime.jit.create_specialize_impl()`. This should fix a bunch of the new errors that appeared with the triton 3.3 / pytorch 2.7 integration (e.g. `python test/inductor/test_aot_inductor.py -k test_triton_kernel_equal_to_1_float_arg_dynamic_False_cuda`, failing at https://hud.pytorch.org/pr/pytorch/pytorch/148684#38392501220) | true |
2,904,349,072 | CUDACachingAllocator,c10d: fixes for IPC release performance | d4l3k | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 3 | MEMBER | This has two fixes to improve IPC tensor release performance when using torchft's BabyProcessGroupNCCL.
1. release the IpcMutex when deleting the `ExpandableSegements` object to avoid synchronizing under the lock
2. release the GIL in WorkNCCL destructor since the shared tensor will be destructed there
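Fix 1 is an instance of a common pattern: detach the resource from shared state while holding the lock, then run the slow teardown after releasing it. A torch-free Python sketch of that pattern (all names here are illustrative, not the actual `CUDACachingAllocator` API):

```python
import threading

class Segment:
    """Stand-in for an expandable segment with a slow, synchronizing teardown."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True  # real teardown would synchronize with the device

_ipc_mutex = threading.Lock()
_segments = {"a": Segment()}

def release_segment(key):
    # Hold the lock only long enough to detach the entry from shared state,
    with _ipc_mutex:
        seg = _segments.pop(key, None)
    # then run the potentially slow teardown with the lock already released,
    # so other threads creating IPC tensors are not blocked behind it.
    if seg is not None:
        seg.close()
    return seg

released = release_segment("a")
```

Fix 2 applies the same idea to the GIL: drop it before destructing the shared tensor so Python threads are not serialized behind the teardown.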
Test plan:
Run with torchft + torchtitan
```
REPLICA_GROUP_ID=0 NGPU=2 CUDA_VISIBLE_DEVICES=0,1 CONFIG_FILE=./torchtitan/models/llama/train_configs/llama3_8b.toml ./run_train.sh --training.data_par
allel_shard_degree=2 --fault_tolerance.enable --fault_tolerance.group_size=2 --fault_tolerance.replica_id=0 --metrics.log_freq=1 --training.seq_len 4096
...
[rank0]:[titan] 2025-03-07 17:51:31,387 - root - INFO - step: 61 loss: 7.4825 memory: 79.73GiB(83.89%) tps: 317 tflops: 16.34 mfu: 1.65%
```
Check py-spy to verify no bottleneck on IPC lock when creating new shared tensors


cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @c-p-i-o | true |
2,904,342,734 | [export] Make aoti_call_delegate hop traceable | yiming0416 | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: export"
] | 21 | CONTRIBUTOR | Summary: The `aoti_call_delegate` hop now uses a stateless `original_gm` for tracing with fake tensors and the OSS AOTI Runner for running with real tensors
Differential Revision: D70738393
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,904,339,202 | [export] Fix tensor_constant and buffer naming conflicts in TS converter | yiming0416 | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 10 | CONTRIBUTOR | Summary: In TS converter, tensor constants are traced as BUFFER and later we will convert them back to CONSTANT_TENSOR. So we need to prevent naming conflicts during lift constant pass.
Test Plan: CI
Differential Revision: D70826426
| true |
2,904,329,514 | [CUDA][TF32] Account for tf32 in `test_efficient_conv_bn_eval` | eqy | closed | [
"module: cuda",
"open source",
"Merged",
"module: tf32",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | COLLABORATOR | cc @ptrblck @msaroufim @zasdfgbnm @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,904,327,409 | [ca] always do initial trace with dynamic shapes | xmfan | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 6 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149014
* #149064
* __->__ #148801
* #149030
* #148799
HUD: https://fburl.com/wzvx6tax no regressions (ignore the pass rate improvements, those come from #149030)
<img width="864" alt="image" src="https://github.com/user-attachments/assets/d7598f98-b378-4abb-a0c7-e4311162f681" />
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,904,320,590 | [mm_logs] make aten mm info readable | YUNQIUGUO | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 16 | CONTRIBUTOR | Summary:
as title. make it into a table like
e.g. also see pic in test plan
| Name | M | N | K | Count |
|------|---|---|---|-------|
| aten.mm | 16 | 6 | 16 | 1 |
...
Test Plan: {F1975907876}
<img width="1090" alt="Screenshot 2025-03-11 at 3 13 00 PM" src="https://github.com/user-attachments/assets/ffae8c56-e32c-49cc-bbfb-5b8d216b8657" />
Differential Revision: D70825664
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,904,245,482 | [ca] support for dynamic shapes CopySlices | xmfan | closed | [
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 4 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149014
* #149064
* #148801
* #149030
* __->__ #148799
i'm changing CA initial trace to always trace as dynamic, fixes these errors:
```python
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
FAILED [0.2139s] test/inductor/test_compiled_autograd.py::TestAutogradWithCompiledAutograd::test_autograd_python_custom_function_inplace - RuntimeError: !has_symbolic_sizes_strides_ INTERNAL ASSERT FAILED at "/home/xmfan/core/a/pytorch/aten/src/ATen/TensorGeometry.h":63, please report a bug to PyTorch.
To execute this test, run the following from the base repo dir:
python test/test_autograd.py TestAutogradWithCompiledAutograd.test_autograd_python_custom_function_inplace
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
FAILED [0.0057s] test/inductor/test_compiled_autograd.py::TestAutogradWithCompiledAutograd::test_copy_slices_graph_task_updates - RuntimeError: !has_symbolic_sizes_strides_ INTERNAL ASSERT FAILED at "/home/xmfan/core/a/pytorch/aten/src/ATen/TensorGeometry.h":63, please report a bug to PyTorch.
To execute this test, run the following from the base repo dir:
python test/test_autograd.py TestAutogradWithCompiledAutograd.test_copy_slices_graph_task_updates
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
FAILED [0.9662s] test/inductor/test_compiled_autograd.py::TestAutogradWithCompiledAutograd::test_inplace_on_view_weak_grad_fn - RuntimeError: !has_symbolic_sizes_strides_ INTERNAL ASSERT FAILED at "/home/xmfan/core/a/pytorch/aten/src/ATen/TensorGeometry.h":63, please report a bug to PyTorch.
To execute this test, run the following from the base repo dir:
python test/test_autograd.py TestAutogradWithCompiledAutograd.test_inplace_on_view_weak_grad_fn
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
FAILED [0.0077s] test/inductor/test_compiled_autograd.py::TestAutogradWithCompiledAutograd::test_leaf_assignment - RuntimeError: !has_symbolic_sizes_strides_ INTERNAL ASSERT FAILED at "/home/xmfan/core/a/pytorch/aten/src/ATen/TensorGeometry.h":63, please report a bug to PyTorch.
To execute this test, run the following from the base repo dir:
python test/test_autograd.py TestAutogradWithCompiledAutograd.test_leaf_assignment
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
FAILED [5.0485s] test/inductor/test_compiled_autograd.py::TestAutogradWithCompiledAutograd::test_setitem_mask - RuntimeError: !has_symbolic_sizes_strides_ INTERNAL ASSERT FAILED at "/home/xmfan/core/a/pytorch/aten/src/ATen/TensorGeometry.h":63, please report a bug to PyTorch.
To execute this test, run the following from the base repo dir:
python test/test_autograd.py TestAutogradWithCompiledAutograd.test_setitem_mask
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
FAILED [0.0102s] test/inductor/test_compiled_autograd.py::TestAutogradWithCompiledAutograd::test_tensor_hooks_inplace_over_view - RuntimeError: !has_symbolic_sizes_strides_ INTERNAL ASSERT FAILED at "/home/xmfan/core/a/pytorch/aten/src/ATen/TensorGeometry.h":63, please report a bug to PyTorch.
To execute this test, run the following from the base repo dir:
python test/test_autograd.py TestAutogradWithCompiledAutograd.test_tensor_hooks_inplace_over_view
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,904,243,457 | c10d/ProcessGroup: cleanup abort and shutdown | d4l3k | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 9 | MEMBER | This adds `abort` and `shutdown` to `Backend` and `ProcessGroup` objects. This simplifies the logic in `distributed_c10d.py` by having a default noop implementation for all PGs.
This will be useful for torchft and upcoming versions of NCCL which will handle abort correctly. Currently `torchft` would have to call internal methods `_abort` on the PGNCCL object directly but with this change we can now just call `.abort()` and have it work for any PG implementation.
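The shape of the change can be sketched in plain Python (hypothetical class names — a simplified stand-in for the real c10d classes, not the actual implementation):

```python
class Backend:
    """Hypothetical stand-in for the c10d Backend base class."""

    def abort(self):
        # Default no-op, so callers never need to special-case
        # backends that have nothing to abort.
        pass

    def shutdown(self):
        pass


class FakeNCCLBackend(Backend):
    """A backend that actually has communicators to tear down."""

    def __init__(self):
        self.aborted = False

    def abort(self):
        self.aborted = True  # a real impl would abort NCCL communicators


# Any PG implementation can now be aborted uniformly:
for pg in (Backend(), FakeNCCLBackend()):
    pg.abort()  # never raises, regardless of backend
```

With a default no-op on the base class, `distributed_c10d.py` can call `.abort()` unconditionally instead of probing for private methods.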
Test plan:
```
pytest distributed/test_backends.py distributed/test_c10d_common.py distributed/test_c10d_pypg.py
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @c-p-i-o | true |
2,904,240,844 | [cudagraph] add log for skip reasons | BoyuanFeng | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 10 | CONTRIBUTOR | Summary: Add skip reasons to dynamo_compile so we can track the most common reasons cudagraphs are skipped.
Test Plan: {F1975906635}
Differential Revision: D70820791
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,904,195,054 | [CUDA] try to abate some flakiness in `test_stream_event_nogil` | eqy | closed | [
"module: cuda",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9 | COLLABORATOR | Threshold twiddling, as one in a few dozen runs tends to fail the current threshold.
cc @ptrblck @msaroufim | true |
2,904,194,620 | fix 144607, added ```SymInt``` type to the valid Layout size specification types | AmalDevHaridevan | open | [
"triaged",
"open source",
"topic: not user facing",
"module: inductor"
] | 2 | NONE | Fixes #144607
Added ```SymInt``` as a valid type for the stride specification. This type only appears as an arg to the Layout ```__init__``` method when meta tensors are compiled.
# Test
```python
import os
os.environ.update(dict( TORCHDYNAMO_VERBOSE='1', ))
import torch
@torch.compile
def foobar(x):
return x * 2
def test(device):
print(foobar(torch.empty((1, 16, 128, 128), device = device)).size() if device != 'meta' else foobar(torch.empty((1, 16, 128, 128), device = device)))
print(foobar(torch.empty((1, 32, 64, 64), device = device)).size() if device != 'meta' else foobar(torch.empty((1, 32, 64, 64), device = device)))
# OK
test("cpu")
print("cpu ok")
# OK
test("meta")
print("meta ok")
```
# After fix
```bash
torch.Size([1, 16, 128, 128])
torch.Size([1, 32, 64, 64])
cpu ok
tensor(..., device='meta', size=(1, 16, 128, 128))
tensor(..., device='meta', size=(1, 32, 64, 64))
meta ok
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,904,189,323 | Set specialized representation string for meta/fake tensor with empty construction | chajath | open | [
"Stale",
"topic: not user facing"
] | 4 | NONE | This is done so that the representation output is valid PyTorch code.
Now we can do:
```python
>>> import torch
>>> torch.empty((3,4), device='meta', dtype=torch.float64)
torch.empty((3, 4), device='meta', dtype=torch.float64)
```
This allows round-tripping in the REPL with `eval`.
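The convention being targeted here is the usual Python one — `repr` emits valid construction code — shown below with a toy class rather than `torch.Tensor`:

```python
class Shape:
    """Toy example of an eval-round-trippable repr."""

    def __init__(self, dims):
        self.dims = tuple(dims)

    def __repr__(self):
        # Emit a string that is itself valid construction code.
        return f"Shape({list(self.dims)!r})"

    def __eq__(self, other):
        return isinstance(other, Shape) and self.dims == other.dims


s = Shape([3, 4])
assert eval(repr(s)) == s  # repr round-trips through eval
```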
An alternative is to allow elliptical construction of a tensor. However, that is a far bigger change, since it would require altering the accepted signature of `tensor()` to retrofit this use case.
Fixes #147643
| true |
2,904,182,876 | [MM] Add sm carveout to lowerings | drisspg | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148849
* __->__ #148793
# Summary
See https://github.com/pytorch/pytorch/issues/145115 for more details. I have been using the following to verify; we still need to figure out how to do proper guarding.
This does do the correct thing if we compile with the SM carveout already set, but since we don't guard on it just yet, we don't recompile.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,904,180,038 | wire torch._scaled_mm with fp4 operands to the cublas nvfp4 kernel | vkuzo | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148792
* #148791
Summary:
When `a` and `b` have dtype `torch.float4_e2m1fn_x2` and `a_scale` and `b_scale` have dtype `torch.float8_e4m3fn`, makes
```python
c = torch._scaled_mm(a, b, a_scale, b_scale, out_dtype=torch.bfloat16)
```
call the cuBLAS fp4 gemm kernel, as specified in https://docs.nvidia.com/cuda/cublas/index.html?highlight=fp4#d-block-scaling-for-fp8-and-fp4-data-types
note: the output scale (`scale_in_D` from the cuBLAS docs) is not tested in this PR - we can enable it in a follow-up.
Test Plan:
```bash
pytest test/test_matmul_cuda.py -s -k mxfp8_nvfp4
```
Reviewers:
Subscribers:
Tasks:
Tags: | true |
2,904,179,999 | add `torch.float4_e2m1fn_x2` to PyTorch | vkuzo | closed | [
"Merged",
"release notes: quantization"
] | 9 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148792
* __->__ #148791
Summary:
Redo of https://github.com/pytorch/pytorch/pull/146578 to get around
rebase conflicts.
Test Plan:
```
pytest test/quantization/core/experimental/test_floatx.py -s
```
Reviewers:
Subscribers:
Tasks:
Tags: | true |
2,904,161,383 | Set non-strict export as default mode | gmagogsfm | closed | [
"module: bc-breaking",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: bc breaking",
"release notes: export"
] | 35 | CONTRIBUTOR | Summary:
- Flip the default value of strict argument in torch.export.export from True to False
- Update test infra to cope with the change; some tests assumed strict mode was the default
- Disabled some tests that fail in non-strict mode
Test Plan: Sandcastle
Differential Revision: D70228628
cc @ezyang @gchanan | true |
2,904,150,122 | [hop] Rework the check of Metadata in the functionalization key | bohnstingl | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 7 | COLLABORATOR | This PR is a more cosmetic rework of the metadata check performed by some HOPs.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @ydwu4
| true |
2,904,085,492 | Add timm_efficientnet to flaky models after cuda 12.6 update in CI/CD | atalman | closed | [
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | After https://github.com/pytorch/pytorch/pull/148612
This model has become flaky.
Tracking this regression in an issue: https://github.com/pytorch/pytorch/issues/148699
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,903,978,166 | [dynamo] torch.compiler.disable(recursive=False) modifies the original function | williamwen42 | closed | [
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0 | MEMBER | ```python
import torch
def f(x):
return x + 1
f_disabled = torch.compiler.disable(f, recursive=False)
torch.compile(f, backend="eager")(torch.ones(3))
```
Output:
```
TORCH_LOGS_FORMAT="" TORCH_LOGS="+dynamo" python playground.py
TorchDynamo attempted to trace the following frames: [
]
```
In this example, we should still expect `f` to be traced (in fact, if `recursive=True`, it is traced). But it is not traced.
This is because `torch.compiler.disable` calls `torch._dynamo.skip`, which marks the function's code object to be skipped, and adds `_torchdynamo_disable` to the original function to prevent inlining.
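The side effect being reported can be illustrated with a plain-Python sketch (a simplified stand-in for dynamo's actual implementation):

```python
def disable(fn, recursive=False):
    """Simplified stand-in for torch.compiler.disable."""

    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)

    if not recursive:
        # Bug pattern: the *original* function object is mutated, so even
        # un-decorated calls to fn are later skipped by the tracer.
        fn._torchdynamo_disable = True
    return wrapper


def f(x):
    return x + 1


f_disabled = disable(f, recursive=False)
assert getattr(f, "_torchdynamo_disable", False)  # original f was modified
```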
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,903,946,273 | [cutlass backend][ez] Incorporate AOTI dynamic shape test into main test of MM | henrylhtsang | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148786
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,903,940,992 | [Upstream] Wrap log_2_e in tl.constexpr for new 3.3 bump | drisspg | closed | [
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148793
* __->__ #148785
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,903,934,645 | [Minimizer] allow overriding of ShapeProp logic by subclasses of _MinimizerBase | qcyuan | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 6 | CONTRIBUTOR | Summary:
The changes contained in this diff
- allow subclass Minimizer implementations to override the default shape propagation logic with custom logic
- copies over the meta attribute on get_attr graph nodes during the graph splitting step
- for both changes, behavior for existing classes does not change
Test Plan: CI
Differential Revision: D70799942
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,903,925,195 | [test] conda cmake | clee2000 | closed | [
"topic: not user facing"
] | 1 | CONTRIBUTOR | Fixes #ISSUE_NUMBER
| true |
2,903,872,144 | Delete duplicate entry from `docker-builds.yml` | malfet | closed | [
"Merged",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Regression introduced by merge conflict of https://github.com/pytorch/pytorch/pull/148612
| true |
2,903,860,174 | [Codemod][AddExplicitStrictExportArg] caffe2/test/inductor | gmagogsfm | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5 | CONTRIBUTOR | Differential Revision: D70575053
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,903,857,402 | FSDP: use Work.wait instead of event for all reduce | d4l3k | open | [
"oncall: distributed",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 20 | MEMBER | This uses `Work.wait` instead of CUDA events for allreduce. This has important performance implications when using ProcessGroups that depend on CPU synchronization, such as Gloo: by not calling `Work.wait` until the end, all of the allreduce calls can be dispatched before we synchronize on any of them.
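The dispatch-everything-then-wait pattern can be sketched generically (`FakeWork` and `all_reduce_async` are hypothetical stand-ins for c10d's `Work` handle and an async collective):

```python
class FakeWork:
    """Stand-in for a c10d Work handle: wait() blocks until the op is done."""

    def __init__(self, result):
        self._result = result

    def wait(self):
        return self._result


def all_reduce_async(bucket):
    # Hypothetical non-blocking collective that returns a handle immediately.
    return FakeWork(sum(bucket))


buckets = [[1, 2], [3, 4], [5]]
# Dispatch every all-reduce before synchronizing on any of them, so a
# CPU-driven backend (e.g. Gloo) can make progress on all of them at once...
works = [all_reduce_async(b) for b in buckets]
# ...and only call wait() at the end.
results = [w.wait() for w in works]
```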
# Test plan
Run with torchtitan and verify memory and tps are the same
```
REPLICA_GROUP_ID=1 NGPU=4 CONFIG_FILE=./torchtitan/models/llama/train_configs/llama3_8b.toml ./run_train.sh --training.data_parallel_shard_degree=2 --training.data_parallel_replicate_degree=2 --metrics.log_freq=1 --training.seq_len 4096
```
## Baseline
```
[rank0]:[titan] 2025-03-06 16:50:02,110 - root - INFO - step: 4 loss: 11.3703 memory: 80.00GiB(84.18%) tps: 454 tflops: 23.36 mfu: 2.36%
[rank0]:[titan] 2025-03-06 16:50:11,060 - root - INFO - step: 5 loss: 11.1204 memory: 80.00GiB(84.18%) tps: 458 tflops: 23.56 mfu: 2.38%
[rank0]:[titan] 2025-03-06 16:50:19,816 - root - INFO - step: 6 loss: 10.5902 memory: 80.00GiB(84.18%) tps: 468 tflops: 24.08 mfu: 2.44%
[rank0]:[titan] 2025-03-06 16:50:28,662 - root - INFO - step: 7 loss: 10.4288 memory: 80.00GiB(84.18%) tps: 463 tflops: 23.84 mfu: 2.41%
[rank0]:[titan] 2025-03-06 16:50:37,136 - root - INFO - step: 8 loss: 10.6261 memory: 80.00GiB(84.18%) tps: 484 tflops: 24.89 mfu: 2.52%
[rank0]:[titan] 2025-03-06 16:50:45,576 - root - INFO - step: 9 loss: 10.6391 memory: 80.00GiB(84.18%) tps: 485 tflops: 24.98 mfu: 2.53%
[rank0]:[titan] 2025-03-06 16:50:54,317 - root - INFO - step: 10 loss: 10.3310 memory: 80.00GiB(84.18%) tps: 469 tflops: 24.12 mfu: 2.44%
[rank0]:[titan] 2025-03-06 16:51:03,457 - root - INFO - step: 11 loss: 9.8705 memory: 80.00GiB(84.18%) tps: 448 tflops: 23.07 mfu: 2.33%
```
avg = 466 tps, std = 12.4
## With Change
```
[rank0]:[titan] 2025-03-06 16:44:48,055 - root - INFO - step: 5 loss: 11.0233 memory: 80.00GiB(84.18%) tps: 468 tflops: 24.08 mfu: 2.44%
[rank0]:[titan] 2025-03-06 16:44:56,778 - root - INFO - step: 6 loss: 10.5089 memory: 80.00GiB(84.18%) tps: 470 tflops: 24.17 mfu: 2.44%
[rank0]:[titan] 2025-03-06 16:45:05,606 - root - INFO - step: 7 loss: 10.2253 memory: 80.00GiB(84.18%) tps: 464 tflops: 23.89 mfu: 2.42%
[rank0]:[titan] 2025-03-06 16:45:14,203 - root - INFO - step: 8 loss: 10.5021 memory: 80.00GiB(84.18%) tps: 477 tflops: 24.53 mfu: 2.48%
[rank0]:[titan] 2025-03-06 16:45:22,986 - root - INFO - step: 9 loss: 10.5685 memory: 80.00GiB(84.18%) tps: 466 tflops: 24.01 mfu: 2.43%
[rank0]:[titan] 2025-03-06 16:45:31,795 - root - INFO - step: 10 loss: 10.4201 memory: 80.00GiB(84.18%) tps: 465 tflops: 23.94 mfu: 2.42%
[rank0]:[titan] 2025-03-06 16:45:40,274 - root - INFO - step: 11 loss: 9.8893 memory: 80.00GiB(84.18%) tps: 483 tflops: 24.87 mfu: 2.51%
```
avg = 470 tps, std = 6.5
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @c-p-i-o | true |
2,903,809,258 | [Export Benchmark] non-strict export doesn't work with .numpy on Kokoro model | tugsbayasgalan | open | [
"triaged",
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
```python
class Foo(torch.nn.Module):
def forward(self, x):
a = x.numpy()
return x + x.numpy().sum()
foo = Foo()
foo(torch.randn(10, 10))
torch.export.export(foo, (torch.randn(10, 10),), strict=False)
```

Errors with:

```
File "/data/users/tmanlaibaatar/pytorch/torch/fx/experimental/proxy_tensor.py", line 1277, in __torch_function__
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/data/users/tmanlaibaatar/pytorch/torch/fx/experimental/proxy_tensor.py", line 1324, in __torch_function__
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/data/users/tmanlaibaatar/pytorch/torch/_export/non_strict_utils.py", line 683, in __torch_function__
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
RuntimeError: .numpy() is not supported for tensor subclasses.
```
Probably need to replicate what dynamo does to replace the `.numpy()` call.
### Versions
main
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @angelayi @suo @ydwu4 | true |
2,903,792,874 | Add ninja to requirements-ci for all arch | clee2000 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | CONTRIBUTOR | So I can get ninja_logs for the builds
No negative consequences afaik | true |
2,903,709,799 | [inductor] Precompilation start time is the time when a config is added to the queue, not when executor starts compiling the config | henrylhtsang | closed | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 2 | CONTRIBUTOR | ### 🐛 Describe the bug
code pointer to start time: https://github.com/pytorch/pytorch/blob/main/torch/_inductor/select_algorithm.py#L1847
This means when we print the elapsed time, it is not accurate. https://github.com/pytorch/pytorch/blob/main/torch/_inductor/select_algorithm.py#L1808-L1812
repro:
go to select_algorithm.py, https://github.com/pytorch/pytorch/blob/main/torch/_inductor/select_algorithm.py#L1742, change num_workers to 1
run:
```
import logging
import os
os.environ["TORCH_LOGS"] = "+inductor"
import torch
import torch._inductor.config
torch._inductor.config.max_autotune = True
torch._inductor.config.autotune_num_choices_displayed = None
torch._inductor.config.max_autotune_gemm_backends = "TRITON"
torch._inductor.config.autotune_fallback_to_aten = False
logger: logging.Logger = logging.getLogger(__name__)
class MatMulModel(torch.nn.Module):
def forward(self, A, B):
return A @ B
def main():
M, N, K = 1024, 1024, 1024
dtype = torch.bfloat16
A = torch.randn(M, K, device="cuda", dtype=dtype)
B = torch.randn(K, N, device="cuda", dtype=dtype)
model = MatMulModel().cuda()
compiled_model = torch.compile(model, fullgraph=True)
_ = compiled_model(A, B)
print("done")
if __name__ == "__main__":
main()
```
logs:
```
Precompiling benchmark choice TritonTemplateCaller took 0.03s
Precompiling benchmark choice TritonTemplateCaller took 0.05s
Precompiling benchmark choice TritonTemplateCaller took 0.06s
Precompiling benchmark choice TritonTemplateCaller took 0.08s
Precompiling benchmark choice TritonTemplateCaller took 0.09s
Precompiling benchmark choice TritonTemplateCaller took 0.10s
Precompiling benchmark choice TritonTemplateCaller took 0.12s
Precompiling benchmark choice TritonTemplateCaller took 0.13s
Precompiling benchmark choice TritonTemplateCaller took 0.15s
Precompiling benchmark choice TritonTemplateCaller took 0.17s
Precompiling benchmark choice TritonTemplateCaller took 0.18s
Precompiling benchmark choice TritonTemplateCaller took 0.19s
Precompiling benchmark choice TritonTemplateCaller took 0.21s
Precompiling benchmark choice TritonTemplateCaller took 0.23s
Precompiling benchmark choice TritonTemplateCaller took 0.24s
Precompiling benchmark choice TritonTemplateCaller took 0.26s
Precompiling benchmark choice TritonTemplateCaller took 0.27s
Precompiling benchmark choice TritonTemplateCaller took 0.29s
Precompiling benchmark choice TritonTemplateCaller took 0.31s
```
### Versions
trunk
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov | true |
2,903,701,199 | [BE] Combine `windows_arm64_binary_build.yaml.js2` with regular windows yaml | malfet | closed | [
"module: windows",
"module: ci",
"triaged",
"better-engineering"
] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
BE followup from https://github.com/pytorch/pytorch/pull/139760
Jinja templates are designed specifically to include/exclude shared parts based on the platform, so the same template should be used for both the x86 and arm64 builds.
### Versions
CI
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @seemethere @pytorch/pytorch-dev-infra | true |
2,903,679,596 | Calling `torch.linalg.lstsq` with a wrongly-shaped `out=output` argument does not properly reshape `output` | Bichidian | open | [
"triaged",
"module: linear algebra"
] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
The function `torch.linalg.lstsq` returns a named tuple `(solution, residuals, rank, singular_values)`. It accepts an `out=output` argument so the `output` is modified in-place. This `out` behavior is not documented, but it is tested in `test/test_ops.py`.
The function's main arguments are `A` and `B`, which have shapes `(*, m, n)` and `(*, m, k)` according to the documentation. The case where the shape of `B` is `(*, m)` is also supported (the code explicitly considers this case), but is not documented or tested. When I tried to add tests for this case, I found a bug that made a test in `test/test_ops.py` fail. It can be reproduced in the following way:
First run the ordinary version:
```
A = torch.randn(4, 3)
B = torch.randn(4)
result = torch.linalg.lstsq(A, B, driver="gels")
print(result)
```
This gives the expected result :
```
torch.return_types.linalg_lstsq(
solution=tensor([-0.2456, 0.3925, -0.3837]),
residuals=tensor([0.4924]),
rank=tensor([], dtype=torch.int64),
singular_values=tensor([]))
```
Then run the in-place `out` version:
```
output = [torch.cat((torch.randn_like(x), torch.Tensor([2.3]))) for x in result]
output[2] = output[2].to(torch.int64)
output = tuple(output)
print(output)
torch.linalg.lstsq(A, B, driver="gels", out=output)
print(output)
```
This gives a different result (the warning is expected):
```
(tensor([1.8222, 0.4479, 1.2158, 2.3000]), tensor([2.0712, 2.3000]), tensor([2]), tensor([2.3000]))
(tensor([-0.2456, 0.3925, -0.3837]), tensor([0.4924]), tensor([2]), tensor([2.3000]))
/tmp/ipykernel_607561/2153907606.py:5: UserWarning: An output with one or more elements was resized since it had shape [2], which does not match the required output shape [1]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at ../aten/src/ATen/native/Resize.cpp:28.)
torch.linalg.lstsq(A, B, driver="gels", out=output)
```
Notice that the last two elements `(rank, singular_values)` should become empty but actually stay the same. This incorrect behavior only happens when I append one number. If I append more (e.g. change `torch.Tensor([2.3])` to `torch.Tensor([2.3, 4.6])`), the behavior becomes correct (so `output == result`).
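The expected `out=` contract — each output buffer is resized to the fresh result's shape, then filled in-place — can be sketched with plain lists (a hypothetical stand-in, not the actual ATen code):

```python
def fill_out_inplace(result, out):
    """Resize each out buffer to match result, then copy values in-place."""
    for dst, src in zip(out, result):
        del dst[:]       # "resize" down to zero elements
        dst.extend(src)  # copy the new values


# solution, residuals, rank, singular_values
result = ([1.0, 2.0], [0.5], [], [])
# stale buffers of the wrong size, standing in for the pre-filled out tensors
out = ([9.0], [9.0, 9.0], [9.0], [9.0])
fill_out_inplace(result, out)
assert out == ([1.0, 2.0], [0.5], [], [])  # rank/singular_values correctly emptied
```

The reported bug is that the last two buffers keep their stale contents instead of being emptied like this.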
### Versions
Running `python collect_env.py` fails. It seems to be because I have the `uv` environment manager and do not have `pip`. My pytorch version is `2.5.1+cu124`. The above code was run on CPU.
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | true |
2,903,679,320 | Change nvcc arch flags for sm100 | danielvegamyhre | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"topic: build"
] | 8 | CONTRIBUTOR | ### Summary
- Addressing this comment https://github.com/pytorch/pytorch/pull/148274#discussion_r1984944012
### Test plan
- Verified building from source w/ B200s is successful
- Verified B200 tensorcores are still being utilized properly via benchmarking script
| true |
2,903,676,649 | cpp_wrapper: build non-performance-sensitive code at O1 | benjaminglass1 | open | [
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"release notes: inductor (aoti)"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149961
* __->__ #148773
* #144293
Builds on #148212, applying the same improvements to `cpp_wrapper` mode. Sample benchmark results on [A100](https://hud.pytorch.org/benchmark/compilers?dashboard=torchinductor&startTime=Mon%2C%2003%20Mar%202025%2017%3A12%3A36%20GMT&stopTime=Mon%2C%2010%20Mar%202025%2016%3A12%3A36%20GMT&granularity=hour&mode=inference&dtype=bfloat16&deviceName=cuda%20(a100)&lBranch=gh/benjaminglass1/77/orig&lCommit=6979874fa846ca6727d32f2beaaf2649a18169f1&rBranch=main&rCommit=666508eb170cad470fc75f990e3a02a7a0ac0a0d).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,903,665,951 | [torchbench] fix dynamic_shapes spec for moco | pianpwk | closed | [
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 7 | CONTRIBUTOR | Fixes https://github.com/pytorch/pytorch/issues/148333
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,903,612,776 | on-pr docker build stuck with `user is not authorized to BatchGetImage` | malfet | open | [
"module: ci",
"triaged",
"module: regression",
"module: docker",
"security"
] | 5 | CONTRIBUTOR | ### 🐛 Describe the bug
Build https://github.com/pytorch/pytorch/actions/runs/13724713611/job/38388317099?pr=148740 is stuck at the `Calculate docker image` step, trying to check whether such an image already exists
```
+ [[ 1741362495 -lt 1741364292 ]]
+ docker manifest inspect 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.6-cudnn9-py3-gcc11:c097a94c03da3be2f692f9ff22e3963e933633cf
denied: User: arn:aws:sts::391835788720:assumed-role/ghci-lf-github-action-runners-runner-role/i-0e98877505f067739 is not authorized to perform: ecr:BatchGetImage on resource: arn:aws:ecr:us-east-1:308535385114:repository/pytorch/pytorch-linux-focal-cuda12.6-cudnn9-py3-gcc11 because no resource-based policy allows the ecr:BatchGetImage action
+ '[' false == true ']'
+ sleep 300
++ date +%s
```
This logic was added by https://github.com/pytorch/test-infra/pull/6013, but it looks like it does not work right now due to some sort of security restriction. (Though all runners should have read access to ECR, shouldn't they?)
### Versions
CI
cc @seemethere @pytorch/pytorch-dev-infra | true |
2,903,588,629 | the example program using libtorch is not linked against torch_cuda even when USE_CUDA is defined | mboedigh | open | [
"module: windows",
"module: cpp",
"module: cuda",
"triaged"
] | 0 | NONE | ### 🐛 Describe the bug
libtorch is not implicitly loading torch_cuda.dll.
```
#define USE_CUDA 1 // has no discernible effect. same behavior with and without.
#include <torch/torch.h>
#include <iostream>
#include <Windows.h>
int main() {
// LoadLibraryA("torch_cuda.dll"); // if this line is present, then cuda is used, otherwise it is not
torch::Device device = torch::cuda::is_available() ? torch::kCUDA : torch::kCPU;
torch::Tensor tensor = torch::randn({2, 3}).to(device);
std::cout << '\n'
<< device << " " << tensor << std::endl;
return 0;
}
```
example output without manual torch_cuda.dll:
cpu -0.2920 -1.2051 -0.2542
-0.7403 1.3832 -0.4595
[ CPUFloatType{2,3} ]
### Versions
Sorry, I'm not using python. I'm using the nightly preview build:
"libtorch-win-shared-with-deps-debug-latest.zip" from the cu128 directory.
windows 11 24H2.
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jbschlosser @ptrblck @msaroufim @eqy | true |
2,903,586,226 | [inductor] fix matmul w/ torch.bucketize epilogue | davidberard98 | closed | [
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 8 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148769
See https://github.com/pytorch/pytorch/issues/148764.
Inductor was codegen-ing wrong shapes for bucketize when it was fused as an epilogue: the binary search helper function requested the shape of the input tensor, and Inductor was generating `[XBLOCK]`, when `XBLOCK` doesn't exist.
As a workaround, this PR removes the `BLOCK_SHAPE` parameter from the helper function (and just uses `values.shape`) so that we don't even have to generate the shape.
This PR also introduces `torch._inductor.config.triton.disallow_failing_autotune_kernels_TESTING_ONLY` to test this behavior. This config is needed to enforce that _all_ autotune kernel candidates pass - otherwise, the fused-bucketize exception just gets caught and an `inf` latency is assigned to it.
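For reference, `torch.bucketize`'s indexing semantics match the standard-library `bisect` functions, so the epilogue's expected output can be sanity-checked in pure Python (assuming the default `right=False`):

```python
import bisect

boundaries = list(range(-100, 100, 10))  # example boundaries

def bucketize(x, right=False):
    # right=False: boundaries[i-1] < x <= boundaries[i]  -> bisect_left
    # right=True:  boundaries[i-1] <= x < boundaries[i]  -> bisect_right
    fn = bisect.bisect_right if right else bisect.bisect_left
    return fn(boundaries, x)

assert bucketize(-1000) == 0           # below every boundary
assert bucketize(0) == 10              # lands exactly on boundaries[10]
assert bucketize(0, right=True) == 11  # right=True steps past equal values
assert bucketize(95) == 20             # above every boundary
```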
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D70794563](https://our.internmc.facebook.com/intern/diff/D70794563) | true |
2,903,580,645 | [aarch64] install ninja for docker to build triton on arm | tinglvv | closed | [
"open source",
"Merged",
"topic: not user facing"
] | 3 | COLLABORATOR | cc @atalman
| true |
2,903,566,585 | fix #147170 Sorting the input node | urstrulyvishtan | open | [
"triaged",
"open source",
"release notes: fx",
"fx"
] | 2 | NONE | Fixes #147170
This pull request makes several improvements to the `make_partition` function in the `torch/fx/passes/utils/source_matcher_utils.py` file. The changes focus on code clarity and ensuring deterministic behavior when handling input nodes.
Key changes include:
* Initialization and population of `input_nodes`:
* Added comments to clarify the initialization and population of `input_nodes`.
* Sorted `input_nodes` deterministically after populating it.
* Code clarity:
* Improved comments to clarify the purpose of `filter_fn` in filtering partitions.
* Removed unnecessary blank lines to maintain code readability.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,903,553,061 | Implement `raise ... from ...` | guilhermeleobas | closed | [
"open source",
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 10 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150466
* #147990
* #146506
* #146501
* #146500
* __->__ #148766
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,903,552,852 | Set __context__/__cause__ when generator raise `StopIteration` | guilhermeleobas | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147990
* #146506
* #146501
* #146500
* #148766
* __->__ #148765
* #146505
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,903,472,585 | [inductor] torch.bucketize in fused epilogue throws `NameError('XBLOCK is not defined')` | davidberard98 | closed | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 1 | CONTRIBUTOR | ### 🐛 Describe the bug
Repro:
```python
import torch
import torch._inductor.config
from torch._inductor.utils import fresh_inductor_cache
torch._inductor.config.max_autotune_gemm_backends = "TRITON"
def fn(x: torch.Tensor, y: torch.Tensor, buckets: torch.Tensor) -> torch.Tensor:
z = torch.mm(x, y)
return torch.bucketize(z, buckets)
buckets = torch.arange(-100, 100, 10, device="cuda")
x = torch.randn(64, 64, device="cuda")
y = torch.randn(64, 64, device="cuda")
with fresh_inductor_cache():
torch.compile(fn, mode="max-autotune")(x, y, buckets)
```
Error:
```
/home/dberard/local/triton-env2/pytorch/torch/_inductor/compile_fx.py:244: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.
warnings.warn(
AUTOTUNE mm(64x64, 64x64)
triton_mm_1 0.0061 ms 100.0% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=32, BLOCK_N=32, EVEN_K=True, GROUP_M=8, num_stages=2, num_warps=4
triton_mm_4 0.0071 ms 86.1% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=64, BLOCK_N=32, EVEN_K=True, GROUP_M=8, num_stages=5, num_warps=4
triton_mm_2 0.0074 ms 83.5% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=32, BLOCK_N=64, EVEN_K=True, GROUP_M=8, num_stages=5, num_warps=8
triton_mm_3 0.0075 ms 82.1% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=64, BLOCK_N=32, EVEN_K=True, GROUP_M=8, num_stages=5, num_warps=8
triton_mm_7 0.0083 ms 73.8% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=64, BLOCK_N=64, EVEN_K=True, GROUP_M=8, num_stages=3, num_warps=8
triton_mm_9 0.0084 ms 73.6% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=64, BLOCK_N=64, EVEN_K=True, GROUP_M=8, num_stages=3, num_warps=4
triton_mm_14 0.0084 ms 73.0% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=64, BLOCK_M=64, BLOCK_N=64, EVEN_K=True, GROUP_M=8, num_stages=5, num_warps=8
triton_mm_0 0.0086 ms 71.1% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=16, BLOCK_M=32, BLOCK_N=32, EVEN_K=True, GROUP_M=8, num_stages=1, num_warps=2
triton_mm_10 0.0088 ms 70.1% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=64, BLOCK_N=64, EVEN_K=True, GROUP_M=8, num_stages=4, num_warps=8
triton_mm_6 0.0090 ms 68.1% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=64, BLOCK_N=64, EVEN_K=True, GROUP_M=8, num_stages=2, num_warps=4
SingleProcess AUTOTUNE benchmarking takes 0.2732 seconds and 0.2760 seconds precompiling for 15 choices
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] Triton compilation failed: Placeholder.DESCRIPTIVE_NAME
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] def triton_(arg_A, arg_B, in_ptr2, out_ptr1):
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] GROUP_M : tl.constexpr = 8
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] EVEN_K : tl.constexpr = True
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] ALLOW_TF32 : tl.constexpr = False
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] ACC_TYPE : tl.constexpr = tl.float32
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] BLOCK_M : tl.constexpr = 32
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] BLOCK_N : tl.constexpr = 32
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] BLOCK_K : tl.constexpr = 64
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] A = arg_A
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] B = arg_B
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533]
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] M = 64
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] N = 64
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] K = 64
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] if M * N == 0:
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] # early exit due to zero-size input(s)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] return
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] stride_am = 64
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] stride_ak = 1
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] stride_bk = 64
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] stride_bn = 1
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533]
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] # based on triton.ops.matmul
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] pid = tl.program_id(0)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] grid_m = (M + BLOCK_M - 1) // BLOCK_M
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] grid_n = (N + BLOCK_N - 1) // BLOCK_N
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533]
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] # re-order program ID for better L2 performance
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] width = GROUP_M * grid_n
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] group_id = pid // width
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] group_size = min(grid_m - group_id * GROUP_M, GROUP_M)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] pid_m = group_id * GROUP_M + (pid % group_size)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] pid_n = (pid % width) // (group_size)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533]
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] rm = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] rn = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] if ((stride_am == 1 and stride_ak == M) or (stride_am == K and stride_ak == 1)) and M >= BLOCK_M:
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] offs_a_m = tl.max_contiguous(tl.multiple_of(rm % M, BLOCK_M), BLOCK_M)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] else:
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] offs_a_m = rm % M
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] if ((stride_bk == 1 and stride_bn == K) or (stride_bk == N and stride_bn == 1)) and N >= BLOCK_N:
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] offs_b_n = tl.max_contiguous(tl.multiple_of(rn % N, BLOCK_N), BLOCK_N)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] else:
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] offs_b_n = rn % N
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] offs_k = tl.arange(0, BLOCK_K)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=ACC_TYPE)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533]
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] for k_idx in range(0, tl.cdiv(K, BLOCK_K)):
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533]
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] a_k_idx_vals = offs_k[None, :] + (k_idx * BLOCK_K)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] b_k_idx_vals = offs_k[:, None] + (k_idx * BLOCK_K)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533]
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] idx_m = offs_a_m[:, None]
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] idx_n = a_k_idx_vals
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] xindex = idx_n + 64*idx_m
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] a = tl.load(A + (xindex))
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533]
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] idx_m = b_k_idx_vals
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] idx_n = offs_b_n[None, :]
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] xindex = idx_n + 64*idx_m
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] b = tl.load(B + (xindex))
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] acc += tl.dot(a, b, allow_tf32=ALLOW_TF32)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533]
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] # rematerialize rm and rn to save registers
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] rm = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] rn = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] idx_m = rm[:, None]
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] idx_n = rn[None, :]
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] mask = (idx_m < M) & (idx_n < N)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533]
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] # inductor generates a suffix
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] xindex = idx_n + 64*idx_m
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] tmp0 = triton_helpers.bucketize_binary_search(acc, in_ptr2, 20, 20, 1, 0, tl.int64, False, None, None, None, [XBLOCK], )
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] tl.store(out_ptr1 + (tl.broadcast_to(xindex, acc.shape)), tmp0, mask)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533]
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] metadata: {'signature': {'arg_A': '*fp32', 'arg_B': '*fp32', 'in_ptr2': '*i64', 'out_ptr1': '*i64'}, 'device': 0, 'constants': {}, 'configs': [{(0,): [['tt.divisibility', 16]], (1,): [['tt.divisibility', 16]], (2,): [['tt.divisibility', 16]], (3,): [['tt.divisibility', 16]]}], 'device_type': 'cuda', 'num_warps': 4, 'num_stages': 2, 'debug': True, 'cc': 90}
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] Traceback (most recent call last):
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] File "/home/dberard/local/triton-env2/pytorch/torch/_inductor/runtime/triton_heuristics.py", line 531, in _precompile_config
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] binary = triton.compile(*compile_args, **compile_kwargs)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] File "/home/dberard/local/triton-env2/triton/python/triton/compiler/compiler.py", line 278, in compile
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] module = src.make_ir(options, codegen_fns, module_map, context)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] File "/home/dberard/local/triton-env2/triton/python/triton/compiler/compiler.py", line 81, in make_ir
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] return ast_to_ttir(self.fn, self, context=context, options=options, codegen_fns=codegen_fns,
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] triton.compiler.errors.CompilationError: at 73:114:
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] acc += tl.dot(a, b, allow_tf32=ALLOW_TF32)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533]
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] # rematerialize rm and rn to save registers
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] rm = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] rn = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] idx_m = rm[:, None]
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] idx_n = rn[None, :]
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] mask = (idx_m < M) & (idx_n < N)
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533]
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] # inductor generates a suffix
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] xindex = idx_n + 64*idx_m
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] tmp0 = triton_helpers.bucketize_binary_search(acc, in_ptr2, 20, 20, 1, 0, tl.int64, False, None, None, None, [XBLOCK], )
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] ^
E0307 08:45:35.023000 3871554 torch/_inductor/runtime/triton_heuristics.py:533] NameError('XBLOCK is not defined')
```
### Versions
viable/strict, Mar 7. H100.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov | true |
2,903,409,776 | Suppress build warnings when gcc-11 is used | malfet | closed | [
"oncall: jit",
"Merged",
"NNC",
"release notes: jit"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148740
* __->__ #148763
By decorating the header with `C10_DIAGNOSTIC_PUSH_AND_IGNORED_IF_DEFINED("-Wmismatched-new-delete")`
that will suppress the following (when building against ancient llvm-9):
```
In file included from /var/lib/jenkins/workspace/torch/csrc/jit/tensorexpr/llvm_codegen.cpp:24:
/opt/llvm/include/llvm/IR/IRBuilder.h: In member function 'llvm::LoadInst* llvm::IRBuilder<T, Inserter>::CreateLoad(llvm::Type*, llvm::Value*, const llvm::Twine&) [with T = llvm::ConstantFolder; Inserter = llvm::IRBuilderDefaultInserter]':
/opt/llvm/include/llvm/IR/IRBuilder.h:1581:19: error: 'static void llvm::User::operator delete(void*)' called on pointer returned from a mismatched allocation function [-Werror=mismatched-new-delete]
1581 | return Insert(new LoadInst(Ty, Ptr), Name);
| ^~~~~~~~~~~~~~~~~~~~~
/opt/llvm/include/llvm/IR/IRBuilder.h:1581:19: note: returned from 'static void* llvm::UnaryInstruction::operator new(size_t)'
```
A reasonable followup would probably be to disable NNC testing altogether, as the project has been in maintenance mode for a while now.
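For illustration, this is roughly what the push-and-ignore macro expands to under GCC (an assumption: the real `C10_DIAGNOSTIC_PUSH_AND_IGNORED_IF_DEFINED` also handles clang and checks whether the flag is known to the compiler, and the function here is hypothetical):

```cpp
#include <cassert>

// Suppress a specific warning only for the wrapped region; the pragma pop
// restores the previous diagnostic state afterwards.
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wmismatched-new-delete"
int* make_int() { return new int(42); }  // any -Wmismatched-new-delete report in this region is suppressed
#pragma GCC diagnostic pop
```

Scoping the suppression with push/pop is what keeps `-Werror` builds honest elsewhere in the translation unit.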
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | true |
2,903,322,985 | magma builds should be part of the docker image builds | malfet | open | [
"oncall: releng",
"module: ci",
"triaged",
"better-engineering",
"security"
] | 6 | CONTRIBUTOR | ### 🐛 Describe the bug
One of the followups from https://github.com/pytorch/pytorch/issues/148495
Currently, magma binaries are ingested into docker builds as unversioned artifacts.
Those artifacts are produced by the build-magma-linux.yml workflow, which neither tests the binaries before publishing nor versions them, making it impossible to roll back.
Considering the current ecosystem, I wonder if it would be better to work with the Magma team to publish versioned wheels.
Alternative: get rid of MAGMA, if nvidia stock libraries are finally performant enough in all cases.
cc @seemethere @pytorch/pytorch-dev-infra | true |
2,903,179,737 | Fix redistribution cost for all-reduce | fmassa | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3 | MEMBER | This issue seems to have been introduced in https://github.com/pytorch/pytorch/pull/119897. With the current cost model, a reduce_scatter followed by an all-gather can appear cheaper than a single all-reduce.
Thanks @lw for the helpful discussions on getting this PR out!
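The asymmetry is easiest to see with the textbook ring-collective bandwidth model (an illustration only; the function names are hypothetical and this is not the cost table the PR edits):

```python
# Per-rank traffic for n ranks and a tensor of S bytes under the standard
# ring-algorithm approximation.
def reduce_scatter_cost(S, n):
    return (n - 1) / n * S

def all_gather_cost(S, n):
    return (n - 1) / n * S

def all_reduce_cost(S, n):
    # A ring all-reduce is effectively a reduce-scatter followed by an
    # all-gather, so it moves the sum of the two volumes.
    return reduce_scatter_cost(S, n) + all_gather_cost(S, n)

S, n = 1 << 20, 8
assert reduce_scatter_cost(S, n) + all_gather_cost(S, n) == all_reduce_cost(S, n)
```

Under this model the two-step path costs exactly as much as the all-reduce, so a redistribution cost that strictly prefers reduce_scatter + all-gather points at a modeling bug.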
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,903,126,464 | Re-introduce -Wmaybe-uninitialized | cyyever | open | [
"triaged",
"open source",
"topic: not user facing",
"ciflow/periodic"
] | 2 | COLLABORATOR | See what fails. | true |
2,902,858,014 | remove redundant calls to inc_pending_event_queries() and dec_pending_event_queries() in cuda graph mode | taozhiwei | closed | [
"oncall: distributed",
"open source",
"release notes: distributed (c10d)"
] | 1 | CONTRIBUTOR | In CUDA graph mode, there is no need to call `inc_pending_event_queries()` and `dec_pending_event_queries()`.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,902,856,621 | Fix compile errors | cyyever | closed | [
"oncall: distributed",
"oncall: jit",
"open source",
"Merged",
"ciflow/trunk",
"release notes: jit",
"module: dynamo",
"ciflow/inductor"
] | 9 | COLLABORATOR | Fix
```
/usr/bin/../lib64/gcc/x86_64-pc-linux-gnu/14.2.1/../../../../include/c++/14.2.1/bits/unique_ptr.h:91:16: error: invalid application of 'sizeof' to an incomplete type 'torch::jit::AliasDb::WriteRegistry'
91 | static_assert(sizeof(_Tp)>0,
| ^~~~~~~~~~~
/usr/bin/../lib64/gcc/x86_64-pc-linux-gnu/14.2.1/../../../../include/c++/14.2.1/bits/unique_ptr.h:399:4: note: in instantiation of member function 'std::default_delete<torch::jit::AliasDb::WriteRegistry>::operator()' requested here
399 | get_deleter()(std::move(__ptr));
| ^
../torch/csrc/jit/ir/alias_analysis.cpp:200:10: note: in instantiation of member function 'std::unique_ptr<torch::jit::AliasDb::WriteRegistry>::~unique_ptr' requested here
200 | AliasDb::~AliasDb() = default;
| ^
../torch/csrc/jit/ir/alias_analysis.cpp:200:23: note: in defaulted destructor for 'torch::jit::AliasDb' first required here
200 | AliasDb::~AliasDb() = default;
| ^
../torch/csrc/jit/ir/alias_analysis.h:298:10: note: forward declaration of 'torch::jit::AliasDb::WriteRegistry'
298 | struct WriteRegistry;
| ^
1 error generated.
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,902,827,029 | Fix Wc++98-compat-extra-semi | cyyever | closed | [
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | true |
2,902,764,736 | FSDP ValueError: expected to be in states [<TrainingState.FORWARD_BACKWARD: 2>] but current state is TrainingState.IDLE | nikonikolov | open | [
"oncall: distributed",
"triaged",
"module: fsdp"
] | 19 | CONTRIBUTOR | I saw many reports of this issue online, but no 'minimal' reproduction. I have one below, which uses Gemma from the `transformers` library. An even more minimal version is likely possible (I have very limited capacity right now and couldn't reduce it further, so bear in mind that some parts might be unnecessary). My observation as to why the issue happens: there are tensors/parameters involved in the forward pass (the output projection of the last attention layer) that play no role in computing the backward pass. I believe that when the loss is computed on the output of the attention layer instead of on the key-value projections, the issue goes away.
I run this manually in two separate shells with `RANK=0 CUDA_VISIBLE_DEVICES=0 ipython3` and `RANK=1 CUDA_VISIBLE_DEVICES=1 ipython3`
```python
import os
import torch
import torch.distributed.fsdp
import functools
import transformers
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '42819'
os.environ['WORLD_SIZE'] = '2'
torch.distributed.init_process_group("nccl")
torch.cuda.set_device(0)
class TestModule(torch.nn.Module):
def __init__(self):
super().__init__()
config = transformers.models.gemma.modeling_gemma.GemmaConfig(
attention_dropout=0,
hidden_size=1024,
num_heads=8,
head_dim=128,
num_key_value_heads=1,
max_position_embeddings=256,
rope_theta=10000,
)
self.attn_1 = transformers.models.gemma.modeling_gemma.GemmaSdpaAttention(config, layer_idx=0)
self.attn_2 = transformers.models.gemma.modeling_gemma.GemmaSdpaAttention(config, layer_idx=1)
def forward(self, x):
output = []
position_ids = torch.arange(x.shape[1]).view(1, -1).cuda()
cache = transformers.DynamicCache()
output.append(self.attn_1(x, position_ids=position_ids, past_key_value=cache))
output.append(self.attn_2(x, position_ids=position_ids, past_key_value=cache))
return output
module = TestModule()
module.cuda()
fsdp_module = torch.distributed.fsdp.FullyShardedDataParallel(
module,
auto_wrap_policy=functools.partial(
transformer_auto_wrap_policy,
transformer_layer_cls={transformers.models.gemma.modeling_gemma.GemmaSdpaAttention}
),
mixed_precision=torch.distributed.fsdp.MixedPrecision(
param_dtype=torch.float32,
reduce_dtype=torch.float32,
buffer_dtype=torch.float32,
),
sharding_strategy = torch.distributed.fsdp.ShardingStrategy.FULL_SHARD,
backward_prefetch = torch.distributed.fsdp.BackwardPrefetch.BACKWARD_PRE,
device_id=torch.device('cuda:0'),
sync_module_states=True,
forward_prefetch=False,
limit_all_gathers=True,
use_orig_params=True,
)
x = torch.randn([4, 10, 1024]).cuda()
output = fsdp_module(x)
out = output[-1][-1][-1][0] + output[-1][-1][-1][1]
loss = torch.nn.functional.mse_loss(out, torch.randn(out.shape).cuda())
loss.backward()
```
Stack trace:
```
File /scratch/niko/.cache/bazel/_bazel_niko/bd560e0defb87030a1fbbd7d7fbab0b7/execroot/barrel/bazel-out/k8-opt/bin/tools/ipython3.runfiles/pip-core_torch/site-packages/torch/_tensor.py:581, in Tensor.backward(self, gradient, retain_graph
, create_graph, inputs)
571 if has_torch_function_unary(self):
572 return handle_torch_function(
573 Tensor.backward,
574 (self,),
(...)
579 inputs=inputs,
580 )
--> 581 torch.autograd.backward(
582 self, gradient, retain_graph, create_graph, inputs=inputs
583 )
File /scratch/niko/.cache/bazel/_bazel_niko/bd560e0defb87030a1fbbd7d7fbab0b7/execroot/barrel/bazel-out/k8-opt/bin/tools/ipython3.runfiles/pip-core_torch/site-packages/torch/autograd/__init__.py:347, in backward(tensors, grad_tensors, re
tain_graph, create_graph, grad_variables, inputs)
342 retain_graph = create_graph
344 # The reason we repeat the same comment below is that
345 # some Python versions print out the first line of a multi-line function
346 # calls in the traceback and some print out the last line
--> 347 _engine_run_backward(
348 tensors,
349 grad_tensors_,
350 retain_graph,
351 create_graph,
352 inputs,
353 allow_unreachable=True,
354 accumulate_grad=True,
355 )
File /scratch/niko/.cache/bazel/_bazel_niko/bd560e0defb87030a1fbbd7d7fbab0b7/execroot/barrel/bazel-out/k8-opt/bin/tools/ipython3.runfiles/pip-core_torch/site-packages/torch/autograd/graph.py:825, in _engine_run_backward(t_outputs, *args, **kwargs)
823 unregister_hooks = _register_logging_hooks_on_whole_graph(t_outputs)
824 try:
--> 825 return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
826 t_outputs, *args, **kwargs
827 ) # Calls into the C++ engine to run the backward pass
828 finally:
829 if attach_logging_hooks:
File /scratch/niko/.cache/bazel/_bazel_niko/bd560e0defb87030a1fbbd7d7fbab0b7/execroot/barrel/bazel-out/k8-opt/bin/tools/ipython3.runfiles/pip-core_torch/site-packages/torch/utils/_contextlib.py:116, in context_decorator.<locals>.decorate_context(*args, **kwargs)
113 @functools.wraps(func)
114 def decorate_context(*args, **kwargs):
115 with ctx_factory():
--> 116 return func(*args, **kwargs)
File /scratch/niko/.cache/bazel/_bazel_niko/bd560e0defb87030a1fbbd7d7fbab0b7/execroot/barrel/bazel-out/k8-opt/bin/tools/ipython3.runfiles/pip-core_torch/site-packages/torch/distributed/fsdp/_runtime_utils.py:714, in _post_backward_hook(state, handle, flat_param, *unused)
710 flat_param._post_backward_called = True
711 with torch.autograd.profiler.record_function(
712 "FullyShardedDataParallel._post_backward_hook"
713 ):
--> 714 _assert_in_training_states(state, [TrainingState.FORWARD_BACKWARD])
715 # For multiple applications of reentrant AC across submodules sharing
716 # the same `FlatParameter`, the post-backward hook may run multiple
717 # times in one backward, in which case we permit the state to already
718 # be in `BACKWARD_POST`.
719 _p_assert(
720 handle._training_state
721 in (HandleTrainingState.BACKWARD_PRE, HandleTrainingState.BACKWARD_POST),
722 f"Expects `BACKWARD_PRE` or `BACKWARD_POST` state but got {handle._training_state}",
723 )
File /scratch/niko/.cache/bazel/_bazel_niko/bd560e0defb87030a1fbbd7d7fbab0b7/execroot/barrel/bazel-out/k8-opt/bin/tools/ipython3.runfiles/pip-core_torch/site-packages/torch/distributed/fsdp/_common_utils.py:463, in _assert_in_training_states(state, training_states)
461 print(f"ERROR: {msg}")
462 traceback.print_stack()
--> 463 raise ValueError(msg)
ValueError: expected to be in states [<TrainingState.FORWARD_BACKWARD: 2>] but current state is TrainingState.IDLE
```
Environment:
```
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.13 (main, Oct 3 2023, 01:22:22) [Clang 17.0.1 ] (64-bit runtime)
Python platform: Linux-5.15.0-1048-oracle-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.161.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] mypy==1.7.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1+cu124
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] Could not collect
```
transformers version used is `transformers==4.47.0`
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360 @kwen2501 @c-p-i-o | true |
2,902,738,154 | [CD] Add triton xpu as dependency of torch xpu windows whl | chuanqi129 | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 17 | COLLABORATOR | Depends on PR #147637 land
| true |
2,902,709,749 | [Inductor] Compiling the model with the fold operator fails | Cookiee235 | closed | [
"oncall: cpu inductor"
] | 2 | CONTRIBUTOR | ### 🐛 Describe the bug
```python
import torch
class FoldModel(torch.nn.Module):
def __init__(self, output_size, kernel_size, stride, padding):
super(FoldModel, self).__init__()
self.output_size = output_size
self.kernel_size = kernel_size
self.stride = stride
self.padding = padding
def forward(self, x):
return torch.nn.functional.fold(x, self.output_size, self.kernel_size, stride=self.stride, padding=self.padding)
inputs = torch.randn(1, 4, 4)
model = FoldModel((4, 4), (2, 2), (2, 2), (0, 0))
res = model(*inputs)
compiled_model = torch.compile(model, backend='inductor') ### Crash here!!
compiled_out = compiled_model(*inputs)
```
#### StackTrace
```
C0307 18:53:32.175000 3905749 site-packages/torch/_inductor/scheduler.py:991] [0/0] Error in codegen for ComputedBuffer(name='buf1', layout=MutationLayoutSHOULDREMOVE('cpu', torch.float32, size=[1, 1, 4, 4], stride=[16, 16, 4, 1]), data=Scatter(device=device(type='cpu'), dtype=torch.float32, inner_fn=<function ReinterpretView.make_loader.<locals>.loader at 0x7fb77c10b420>, ranges=[1, 1, 2, 2, 2, 2], output_indexer=<function index_output_size_and_inner_fn.<locals>.fn at 0x7fb77c10afc0>, scatter_mode='atomic_add'))
Traceback (most recent call last):
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1446, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/repro/after_dynamo.py", line 129, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/__init__.py", line 2234, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1521, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/backends/common.py", line 72, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1071, in aot_module_simplified
compiled_fn = dispatch_and_compile()
^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1056, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 522, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 759, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 179, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1350, in fw_compiler_base
return _fw_compiler_base(model, example_inputs, is_inference)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1421, in _fw_compiler_base
return inner_compile(
^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 475, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/repro/after_aot.py", line 85, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 661, in _compile_fx_inner
compiled_graph = FxGraphCache.load(
^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/codecache.py", line 1334, in load
compiled_graph = compile_fx_fn(
^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 570, in codegen_and_compile
compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 878, in fx_codegen_and_compile
compiled_fn = graph.compile_to_fn()
^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1913, in compile_to_fn
return self.compile_to_module().call
^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1839, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1845, in _compile_to_module
self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1784, in codegen
self.scheduler.codegen()
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/scheduler.py", line 3383, in codegen
return self._codegen()
^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/scheduler.py", line 3461, in _codegen
self.get_backend(device).codegen_node(node)
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/codegen/cpp.py", line 4469, in codegen_node
cpp_kernel_proxy.codegen_nodes(nodes)
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/codegen/cpp.py", line 3852, in codegen_nodes
self.codegen_functions(fn_list, var_sizes_list)
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/codegen/cpp.py", line 3766, in codegen_functions
tile2d_kernel = codegen_kernel(
^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/codegen/cpp.py", line 3669, in codegen_kernel
run(kernel)
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/codegen/cpp.py", line 3681, in run
fn(vars, reduction_vars)
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/codegen/cpp.py", line 3833, in fn
return node.codegen(index_vars)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/scheduler.py", line 989, in codegen
self._body(*index_vars)
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/loop_body.py", line 373, in __call__
result = self.root_block()
^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/loop_body.py", line 573, in __call__
return InterpreterShim(graph, submodules).run(V.get_ops_handler())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/loop_body.py", line 45, in run
return super().run(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/fx/interpreter.py", line 146, in run
self.env[node] = self.run_node(node)
^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/loop_body.py", line 41, in run_node
return super().run_node(n)
^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/fx/interpreter.py", line 203, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/fx/interpreter.py", line 297, in call_method
return getattr(self_obj, target)(*args_tail, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/sizevars.py", line 883, in store
return self._inner.store(name, self._simplify(index), value, mode=mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/codegen/common.py", line 1941, in store
return self.store(name, index, value, mode=mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_inductor/codegen/cpp.py", line 3134, in store
assert mode is None
AssertionError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/data/qshenaf/remote_pc/LLM4Converter/bugs/torch.nn.functional.fold.py", line 21, in <module>
compiled_out = compiled_model(*inputs)
^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1269, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1064, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 526, in __call__
return _compile(
^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 924, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 666, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 699, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1322, in transform_code_object
transformations(instructions, code_options)
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 219, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 634, in transform
tracer.run()
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2796, in run
super().run()
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2987, in RETURN_VALUE
self._return(inst)
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2972, in _return
self.output.compile_subgraph(
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1117, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1369, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1416, in call_user_compiler
return self._call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/vllm/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1465, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AssertionError:
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
Process finished with exit code 1
```
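The fallback suggested at the bottom of the trace can be applied as a temporary workaround (a sketch; `suppress_errors` makes Dynamo run frames that fail backend compilation in eager mode instead of raising):

```python
import torch
import torch._dynamo

# Suppress backend compile errors so the failing frame falls back to eager.
torch._dynamo.config.suppress_errors = True
```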
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.11.0 (main, Mar 1 2023, 18:26:19) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 81%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.23
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Notaffected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] optree==0.14.0
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.14.0 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi | true |
2,902,707,217 | Trunk workflow for Windows Arm64 | iremyux | open | [
"triaged",
"open source",
"ciflow/trunk",
"release notes: releng"
] | 7 | COLLABORATOR | This PR introduces the trunk workflow for Windows Arm64. | true |
2,902,700,689 | Compiling Flex Attention on CPU: torch._inductor.exc.InductorError: IndexError: tuple index out of range | JCBrouwer | open | [
"module: intel",
"oncall: pt2",
"module: higher order operators",
"oncall: cpu inductor",
"module: pt2-dispatcher",
"module: flex attention"
] | 6 | NONE | ### 🐛 Describe the bug
I'm interested in training with attention over variable length sequences on CPU. I'm using document masking similar to what's described here: https://github.com/pytorch-labs/attention-gym/blob/main/examples/flex_attn.ipynb
With the following code I'm running into an inductor error when compiling flex attention (the same code seems to work fine without compilation).
I'm getting "IndexError: tuple index out of range" from the inductor backend on both versions 2.6.0 and 2.7.0.dev20250306.
```py
import torch
from torch.nn.attention.flex_attention import create_block_mask, flex_attention


def test_flex_compiled():
    batch_size = 16
    min_length, max_length, block_size = 2, 8, 64
    n_heads = 2
    d_head = 32

    x, ix = [], []
    for i in range(batch_size):
        n = int(torch.randint(min_length, max_length, ()) * block_size)
        x.append(torch.randn(n, n_heads, d_head))
        ix.append(torch.full((n,), i))
    x = torch.cat(x, dim=0).unsqueeze(0).transpose(1, 2)
    ix = torch.cat(ix)
    print(f"{x.shape=}, {ix.shape=}")

    def mask_mod(b: torch.Tensor, h: torch.Tensor, q_idx: torch.Tensor, kv_idx: torch.Tensor) -> torch.Tensor:
        return ix[q_idx] == ix[kv_idx]

    block_mask = create_block_mask(
        mask_mod=mask_mod, B=1, H=1, Q_LEN=ix.shape[0], KV_LEN=ix.shape[0], device=ix.device.type
    )
    print(block_mask)

    # first try without compilation (this succeeds)
    x1 = x.clone().requires_grad_(True)
    o = flex_attention(x1, x1, x1, block_mask=block_mask)
    print(f"{o.shape=}")
    loss = torch.cat(o.unbind(), dim=1).sum()
    loss.backward()
    print(f"{x1.grad.shape=}")

    # then with compilation (this fails)
    x2 = x.clone().requires_grad_(True)
    # need to clone to avoid LoweringException: NotImplementedError: Unsupported for now if query, key, value are the same buffer.
    o = torch.compile(flex_attention)(x2.clone(), x2.clone(), x2.clone(), block_mask=block_mask)
    print(f"{o.shape=}")
    loss = torch.cat(o.unbind(), dim=1).sum()
    loss.backward()
    print(f"{x2.grad.shape=}")


test_flex_compiled()
```
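For comparison while this is unresolved, the same document masking can be materialized as a dense boolean mask and run through SDPA as a non-compiled fallback (a sketch with made-up small shapes, not the author's code):

```python
import torch
import torch.nn.functional as F

# Two "documents" of lengths 3 and 2, analogous to the ix tensor above.
ix = torch.cat([torch.full((3,), 0), torch.full((2,), 1)])
attn_mask = ix[:, None] == ix[None, :]  # (L, L) block-diagonal boolean mask
q = k = v = torch.randn(1, 2, ix.shape[0], 8)  # (B, H, L, D)
out = F.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)
```

This trades the block-sparse efficiency of flex attention for a dense O(L^2) mask, so it is only practical at moderate sequence lengths.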
Logs with TORCH_LOGS="+dynamo" TORCHDYNAMO_VERBOSE=1:
[flex_cpu_compiled_2.6.0+cu124.txt](https://github.com/user-attachments/files/19125147/flex_cpu_compiled_2.6.0%2Bcu124.txt)
[flex_cpu_compiled_2.7.0.dev20250306+cu124.txt](https://github.com/user-attachments/files/19125148/flex_cpu_compiled_2.7.0.dev20250306%2Bcu124.txt)
### Versions
<details>
<summary>Environment 2.6.0</summary>
```
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.6
CMake version: version 3.20.0-rc4
Libc version: glibc-2.35
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce GTX 1080 Ti
GPU 1: NVIDIA GeForce GTX 1080 Ti
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 570.86.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 1920X 12-Core Processor
CPU family: 23
Model: 1
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3500.0000
CPU min MHz: 2200.0000
BogoMIPS: 6999.14
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid amd_dcm aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca sev
Virtualization: AMD-V
L1d cache: 384 KiB (12 instances)
L1i cache: 768 KiB (12 instances)
L2 cache: 6 MiB (12 instances)
L3 cache: 32 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] dctorch==0.1.2
[pip3] mypy-extensions==1.0.0
[pip3] natten==0.17.4+torch250cu124
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.6.0
[pip3] torch-geometric==2.6.1
[pip3] torchaudio==2.6.0
[pip3] torchdata==0.12.0.dev20250220
[pip3] torcheval==0.0.7
[pip3] torchist==0.2.3
[pip3] torchsde==0.2.6
[pip3] torchtext==0.18.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] dctorch 0.1.2 pypi_0 pypi
[conda] natten 0.17.4+torch250cu124 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torch-geometric 2.6.1 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchdata 0.12.0.dev20250220 pypi_0 pypi
[conda] torcheval 0.0.7 pypi_0 pypi
[conda] torchist 0.2.3 pypi_0 pypi
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] torchtext 0.18.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
</details>
<details>
<summary>Environment 2.7.0</summary>
```
PyTorch version: 2.7.0.dev20250306+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.6
CMake version: version 3.20.0-rc4
Libc version: glibc-2.35
Python version: 3.11.11 | packaged by conda-forge | (main, Mar 3 2025, 20:43:55) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce GTX 1080 Ti
GPU 1: NVIDIA GeForce GTX 1080 Ti
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 570.86.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 1920X 12-Core Processor
CPU family: 23
Model: 1
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3500.0000
CPU min MHz: 2200.0000
BogoMIPS: 6999.14
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid amd_dcm aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca sev
Virtualization: AMD-V
L1d cache: 384 KiB (12 instances)
L1i cache: 768 KiB (12 instances)
L2 cache: 6 MiB (12 instances)
L3 cache: 32 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] dctorch==0.1.2
[pip3] mypy-extensions==1.0.0
[pip3] natten==0.17.4+torch250cu124
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250306+cu124
[pip3] torch-geometric==2.6.1
[pip3] torcheval==0.0.7
[pip3] torchist==0.2.3
[pip3] torchsde==0.2.6
[pip3] triton==3.2.0
[conda] dctorch 0.1.2 pypi_0 pypi
[conda] natten 0.17.4+torch250cu124 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250306+cu124 pypi_0 pypi
[conda] torch-geometric 2.6.1 pypi_0 pypi
[conda] torcheval 0.0.7 pypi_0 pypi
[conda] torchist 0.2.3 pypi_0 pypi
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
</details>
cc @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng | true |
2,902,584,298 | [Inductor] Torch inductor pattern match breaks topological sort after replacing the pattern | liji-nv | open | [
"triaged",
"oncall: pt2",
"module: inductor",
"inductor_pattern_match"
] | 0 | NONE | ### 🐛 Describe the bug
```python
import torch
import os
from typing import List, Optional, Union, Tuple

from torch._functorch.aot_autograd import aot_module_simplified
from torch._inductor.compile_fx import compile_fx
from torch.fx import Graph, GraphModule
from torch._inductor.pattern_matcher import (MULTIPLE, CallFunction, KeywordArg,
                                             Match, MultiOutputPattern,
                                             PatternMatcherPass, fwd_only,
                                             register_replacement)

aten = torch.ops.aten


@torch.library.custom_op("bad_pattern::add_add", mutates_args=())
def add_add(A: torch.Tensor, B: torch.Tensor, C: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
    X = A + B
    return X, X + C


@add_add.register_fake
def add_add_fake(A, B, C):
    return torch.empty_like(A), torch.empty_like(A)


def register_bad_pattern(custom_pass: PatternMatcherPass):
    tensor_a = KeywordArg("A")
    tensor_b = KeywordArg("B")
    tensor_c = KeywordArg("C")
    add_Tensor_1 = CallFunction(aten.add.Tensor,
                                tensor_a,
                                tensor_b,
                                _users=2)
    add_Tensor_2 = CallFunction(aten.add.Tensor,
                                add_Tensor_1,
                                tensor_c)
    add_pattern = MultiOutputPattern([add_Tensor_1, add_Tensor_2])

    def empty_pattern(
        A: torch.Tensor,
        B: torch.Tensor,
        C: torch.Tensor,
    ):
        return

    def target_pattern(
        A: torch.Tensor,
        B: torch.Tensor,
        C: torch.Tensor,
    ):
        return torch.ops.bad_pattern.add_add(A, B, C)

    def extra_check(match: Match):
        return True

    register_replacement(
        empty_pattern,
        target_pattern,
        [],
        fwd_only,
        custom_pass,
        search_fn_pattern=add_pattern,
        extra_check=extra_check,
    )


class Backend:
    _custom_pass_instance: Optional[PatternMatcherPass] = None

    def __init__(self, enable_inductor=True) -> None:
        super().__init__()
        self.elapsed_time = 0
        self.module_inference_event = []
        self.module_inference_time = 0
        self.call_count = 0
        self.custom_pass = Backend.get_custom_pass()
        self.enable_inductor = enable_inductor
        self.match_count = []
        if enable_inductor:
            from torch._inductor import config
            self.inductor_config = config.get_config_copy()
            self.inductor_config["joint_custom_post_pass"] = self.optimize

    @classmethod
    def get_custom_pass(cls):
        if cls._custom_pass_instance is None:
            # Really naive pass manager here
            cls._custom_pass_instance = PatternMatcherPass()
            register_bad_pattern(cls._custom_pass_instance)
        return cls._custom_pass_instance

    def optimize(
        self,
        gm: Union[GraphModule, Graph],
        example_inputs: Optional[List[torch.Tensor]] = None,
    ):
        graph = gm.graph if isinstance(gm, GraphModule) else gm
        self.match_count.append(self.custom_pass.apply(graph))
while self.match_count[-1]:
self.match_count.append(self.custom_pass.apply(graph))
graph.eliminate_dead_code()
if isinstance(gm, GraphModule):
gm.recompile()
return gm
def __call__(self, gm: GraphModule,
example_inputs: List[torch.Tensor]) -> callable:
if self.enable_inductor:
return compile_fx(gm,
example_inputs,
config_patches=self.inductor_config)
else:
return aot_module_simplified(gm,
example_inputs,
fw_compiler=self.optimize)
# This model should work
class TestModel(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, A, B, C, D):
x = A + B
y = C * D
z = x + y
return torch.abs(z)
# This model should not replace A + B + y with add_add
# because fusing A + B + y would introduce a loop
class TestModel1(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, A, B, C, D):
x = A + B
y = x * D
z = x + y
return torch.abs(z)
class TestModel2(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, A, B, C, D):
x = A + B
y = x * C
z = x + D
t = y * z
return torch.abs(t)
model = TestModel().cuda()
model_opt = torch.compile(model, backend=Backend(False))
model_opt(torch.randn(10).cuda(), torch.randn(10).cuda(), torch.randn(10).cuda(), torch.randn(10).cuda())
print("model success")
model2 = TestModel2().cuda()
model2_opt = torch.compile(model2, backend=Backend(False))
model2_opt(torch.randn(10).cuda(), torch.randn(10).cuda(), torch.randn(10).cuda(), torch.randn(10).cuda())
print("model2 success")
model1 = TestModel1().cuda()
model1_opt = torch.compile(model1, backend=Backend(False))
model1_opt(torch.randn(10).cuda(), torch.randn(10).cuda(), torch.randn(10).cuda(), torch.randn(10).cuda())
print("Should not match the pattern")
```
```
(venv) liji@d4e167d18494:~$ python3 torch_compile_bug_repro.py
Traceback (most recent call last):
File "/home/liji/torch_compile_bug_repro.py", line 139, in <module>
model_opt(torch.randn(10).cuda(), torch.randn(10).cuda(), torch.randn(10).cuda(), torch.randn(10).cuda())
File "/home/liji/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/liji/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
File "/home/liji/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/liji/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1164, in __call__
result = self._inner_convert(
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/liji/venv/lib/python3.10/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3048, in RETURN_VALUE
self._return(inst)
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3033, in _return
self.output.compile_subgraph(
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1101, in compile_subgraph
self.compile_and_call_fx_graph(
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
return self._call_user_compiler(gm)
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/home/liji/venv/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/home/liji/venv/lib/python3.10/site-packages/torch/__init__.py", line 2385, in __call__
return self.compiler_fn(model_, inputs_, **self.kwargs)
File "/home/liji/torch_compile_bug_repro.py", line 121, in __call__
return aot_module_simplified(gm,
File "/home/liji/venv/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1155, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/home/liji/venv/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/home/liji/venv/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/home/liji/venv/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/home/liji/venv/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 203, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "/home/liji/torch_compile_bug_repro.py", line 107, in optimize
graph.eliminate_dead_code()
File "/home/liji/venv/lib/python3.10/site-packages/torch/fx/graph.py", line 1862, in eliminate_dead_code
self.lint()
File "/home/liji/venv/lib/python3.10/site-packages/torch/fx/graph.py", line 1748, in lint
map_arg(node.args, lambda arg: check_arg(arg, node))
File "/home/liji/venv/lib/python3.10/site-packages/torch/fx/node.py", line 896, in map_arg
return map_aggregate(a, lambda x: fn(x) if isinstance(x, Node) else x)
File "/home/liji/venv/lib/python3.10/site-packages/torch/fx/node.py", line 905, in map_aggregate
t = tuple([map_aggregate(elem, fn) for elem in a])
File "/home/liji/venv/lib/python3.10/site-packages/torch/fx/node.py", line 905, in <listcomp>
t = tuple([map_aggregate(elem, fn) for elem in a])
File "/home/liji/venv/lib/python3.10/site-packages/torch/fx/node.py", line 922, in map_aggregate
return fn(a)
File "/home/liji/venv/lib/python3.10/site-packages/torch/fx/node.py", line 896, in <lambda>
return map_aggregate(a, lambda x: fn(x) if isinstance(x, Node) else x)
File "/home/liji/venv/lib/python3.10/site-packages/torch/fx/graph.py", line 1748, in <lambda>
map_arg(node.args, lambda arg: check_arg(arg, node))
File "/home/liji/venv/lib/python3.10/site-packages/torch/fx/graph.py", line 1727, in check_arg
raise RuntimeError(
torch._dynamo.exc.BackendCompilerFailed: backend='<__main__.Backend object at 0x7f1765acac20>' raised:
RuntimeError: Argument 'mul' of Node 'add_add_default' was used before it has been defined! Please check that Nodes in the graph are topologically ordered
graph():
%arg0_1 : [num_users=1] = placeholder[target=arg0_1]
%arg1_1 : [num_users=1] = placeholder[target=arg1_1]
%arg2_1 : [num_users=1] = placeholder[target=arg2_1]
%arg3_1 : [num_users=1] = placeholder[target=arg3_1]
%add_add_default : [num_users=2] = call_function[target=torch.ops.bad_pattern.add_add.default](args = (%arg0_1, %arg1_1, %mul), kwargs = {})
%getitem : [num_users=0] = call_function[target=operator.getitem](args = (%add_add_default, 0), kwargs = {})
%getitem_1 : [num_users=1] = call_function[target=operator.getitem](args = (%add_add_default, 1), kwargs = {})
%mul : [num_users=1] = call_function[target=torch.ops.aten.mul.Tensor](args = (%arg2_1, %arg3_1), kwargs = {})
%abs_1 : [num_users=1] = call_function[target=torch.ops.aten.abs.default](args = (%getitem_1,), kwargs = {})
return (abs_1,)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
After applying a match, the pass should keep the graph topologically ordered.
Another case: if `y = C * D` in TestModel is replaced with `y = x * D`, the pattern match is expected to fail, because applying the replacement would make the graph no longer a DAG.
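To make the expected behavior concrete, here is a plain-Python sketch (node names are hypothetical, not the FX pass API) of the re-ordering the pass would need after a replacement: a Kahn's-algorithm topological sort that also detects the cycle case where the replacement is invalid.

```python
from collections import defaultdict, deque

def toposort(nodes, deps):
    """Kahn's algorithm: return nodes so every node follows its inputs.

    nodes: list of node names; deps: dict mapping node -> list of input nodes.
    Raises ValueError if the dependencies contain a cycle, i.e. the
    replacement would make the graph no longer a DAG.
    """
    indegree = {n: 0 for n in nodes}
    users = defaultdict(list)
    for n in nodes:
        for d in deps.get(n, []):
            indegree[n] += 1
            users[d].append(n)
    ready = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for u in users[n]:
            indegree[u] -= 1
            if indegree[u] == 0:
                ready.append(u)
    if len(order) != len(nodes):
        raise ValueError("cycle detected: replacement is not a DAG")
    return order

# The broken graph from the error above: add_add_default uses 'mul',
# but 'mul' was emitted after it. Re-sorting restores a legal order.
nodes = ["add_add_default", "getitem_1", "mul", "abs_1"]
deps = {"add_add_default": ["mul"],
        "getitem_1": ["add_add_default"],
        "abs_1": ["getitem_1"]}
print(toposort(nodes, deps))  # -> ['mul', 'add_add_default', 'getitem_1', 'abs_1']
```

The TestModel1 case corresponds to the `ValueError` branch: fusing the two adds while `mul` both consumes and feeds the fused node produces a cycle, so the match should be rejected rather than applied.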
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.27.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-112-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 PCIe
Nvidia driver version: 550.54.08
cuDNN version: Probably one of the following:
/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn.so.8.9.7
/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.7
/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.7
/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.7
/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.7
/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.7
/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7313P 16-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3729.4919
CPU min MHz: 1500.0000
BogoMIPS: 5999.94
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 128 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] torch==2.6.0+cu126
[pip3] torchaudio==2.6.0+cu126
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov | true |
2,902,574,839 | [pytree][easy] lock global registry containers properly for thread-safety | XuehaiPan | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: pytree"
] | 3 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148750
cc @zou3519 | true |
2,902,545,315 | Enable ASAN on inductor CUDA tests | cyyever | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,902,364,563 | export deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B failed | FlintWangacc | open | [
"triaged",
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 3 | NONE | ### 🐛 Describe the bug
```python
from transformers import AutoFeatureExtractor, AutoModelForImageClassification
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
from torch.export import export
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")
dummy_text = "This is a dummy input for testing."
inputs = tokenizer(dummy_text, return_tensors="pt")
dummy_input = inputs['input_ids']
exported_program: torch.export.ExportedProgram = export (
model, (dummy_input,)
)
```
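The export fails (error shown below) because `DynamicCache` in the model outputs is not pytree-flattenable. The fix the error message suggests, registering a flatten/unflatten pair, follows a simple contract that can be sketched torch-free. `DynamicCacheLike` here is a hypothetical stand-in class, not the real transformers type:

```python
class DynamicCacheLike:
    """Hypothetical stand-in for an output container export can't flatten."""
    def __init__(self, keys, values):
        self.keys = keys
        self.values = values

def flatten_cache(cache):
    # Returns (leaves, context): leaves are what the exporter treats as
    # graph outputs; context is static info needed to rebuild the object.
    return [cache.keys, cache.values], None

def unflatten_cache(leaves, context):
    return DynamicCacheLike(*leaves)

# In real code one would register this pair with something like:
#   torch.utils._pytree.register_pytree_node(
#       DynamicCache, flatten_cache, unflatten_cache)
# (the exact signature may differ across torch versions; check the docs).

c = DynamicCacheLike([1, 2], [3, 4])
leaves, ctx = flatten_cache(c)
rebuilt = unflatten_cache(leaves, ctx)
```

With the real `DynamicCache` registered this way, export can treat the cache as its tensor leaves instead of rejecting it as an unsupported output type.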
This script fails with:
```shell
torch._dynamo.exc.UserError: It looks like one of the outputs with type `<class
'transformers.cache_utils.DynamicCache'>` is not supported or pytree-flattenable.
Exported graphs outputs can only contain the following supported types: [<class 'torch.Tensor'>, <class 'torch.SymInt'>, <class 'torch.SymFloat'>, <class 'torch.SymBool'>, <class 'torch.ScriptObject'>, <class 'NoneType'>, <class 'float'>, <class 'torch.nn.attention._SDPBackend'>, <class 'torch.finfo'>, <class 'bytes'>, <class 'NotImplementedType'>, <class 'bool'>, <class 'complex'>, <class 'ellipsis'>, <class 'torch.memory_format'>, <class 'torch.dtype'>, <class 'torch.device'>, <class 'torch.layout'>, <class 'torch.iinfo'>, <class 'str'>, <class 'int'>, <class 'code'>, <class 'torch._C._CudaDeviceProperties'>].
If you are using a custom class object, please register a pytree_flatten/unflatten function using
`torch.utils._pytree.register_pytree_node` or `torch.export.register_dataclass`.
```
### Versions
```shell
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 19.1.7 (https://github.com/llvm/llvm-project.git cd708029e0b2869e80abe31ddb175f7c35361f90)
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:12:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] No relevant packages
[conda] magma-cuda121 2.6.1 1 pytorch
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,902,196,222 | How can I use inductor aot_compile to support a MoE network? | sujuyu | open | [
"oncall: pt2",
"export-triage-review",
"oncall: export",
"module: aotinductor"
] | 1 | NONE | ### 🚀 The feature, motivation and pitch
DeepSeek has sparked a wave of enthusiasm for MoE (Mixture of Experts) network architectures. I am often asked how to accelerate inference of an MoE network. Naturally, I thought of using Inductor's aot_compile to compile it into a dynamic library and then call it from C++ for acceleration.
Unfortunately, the expert-selection step in MoE differs from a typical dense network: it reads more like plain Python control flow than traceable PyTorch ops, so it cannot be traced. Below is a simple demo I wrote. I would like to know whether the Inductor developers have any plans to support MoE networks?
```Python
import torch
import torch.nn as nn
import torch.nn.functional as F
class Expert(nn.Module):
def __init__(self, input_dim, output_dim):
super(Expert, self).__init__()
self.linear = nn.Linear(input_dim, output_dim)
def forward(self, x):
return self.linear(x)
class MoE(nn.Module):
def __init__(self, input_dim, output_dim, num_experts=10, top_k=2):
super(MoE, self).__init__()
# Eight experts for gating
self.other_experts = nn.ModuleList([Expert(input_dim, output_dim) for _ in range(num_experts - 2)])
# Gate network to choose top_k experts
self.gate = nn.Linear(input_dim, num_experts - 2)
# Final output layer
self.final_linear = nn.Linear((top_k) * output_dim, output_dim)
def forward(self, x):
# Compute gating scores
gate_scores = self.gate(x)
topk_scores, topk_indices = torch.topk(gate_scores, 2, dim=-1)
# Collect outputs from selected experts based on gating
selected_expert_outputs = torch.stack(
[torch.stack([self.other_experts[i](x[idx]) for i in topk_indice], dim = 0) for idx, topk_indice in enumerate(topk_indices)], dim=0
)
# Flatten and pass through final linear layer
all_expert_outputs = selected_expert_outputs.view(x.size(0), -1)
output = self.final_linear(all_expert_outputs)
return output
if __name__ == "__main__":
# Example usage
input_dim = 128
output_dim = 64
moe = MoE(input_dim, output_dim)
x = torch.randn(32, input_dim) # Batch size of 32
output = moe(x)
print(output.shape) # Expected output shape: [32, 64]
export_model = torch.export.export(
mod=moe,
args=tuple([torch.randn(32, input_dim)]),
dynamic_shapes={"x": {0: torch.export.Dim("batch", min=1, max=1024)}},
)
```
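For what it's worth, one common workaround (a model-side restructuring, not an Inductor feature) is to make routing dense: compute every expert's output, then select per-sample outputs by index with a gather, so no data-dependent Python list comprehension over `topk_indices` remains to trace. A torch-free toy sketch of the idea, with hypothetical expert callables standing in for the `Expert` modules:

```python
def dense_moe(x_rows, experts, gate_scores, top_k=2):
    """Route each row to its top_k experts via dense compute + gather,
    instead of branching in Python on values a tracer cannot see.
    """
    # Step 1: run *all* experts on every row (dense, trace-friendly shape).
    all_outputs = [[e(x) for e in experts] for x in x_rows]
    # Step 2: select top_k outputs per row by index (the "gather").
    selected = []
    for row_scores, row_outputs in zip(gate_scores, all_outputs):
        topk = sorted(range(len(experts)),
                      key=lambda i: row_scores[i], reverse=True)[:top_k]
        selected.append([row_outputs[i] for i in topk])
    return selected

experts = [lambda v: v + 1, lambda v: v * 2, lambda v: v - 3]
out = dense_moe([10, 0], experts,
                gate_scores=[[0.1, 0.7, 0.2], [0.9, 0.05, 0.05]])
print(out)  # -> [[20, 7], [1, 0]]
```

In actual torch code the same two steps would be tensor ops (`torch.topk` plus `torch.gather`/advanced indexing over a stacked expert-output tensor), at the cost of computing all experts for all tokens.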
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi | true |
2,902,073,579 | Update torch-xpu-ops commit pin | xytintel | closed | [
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/binaries_wheel",
"ciflow/xpu",
"release notes: xpu"
] | 6 | CONTRIBUTOR | Update the torch-xpu-ops commit to [ae267a5f249748adbac75d43ee36fc11040e80e4](https://github.com/intel/torch-xpu-ops/commit/ae267a5f249748adbac75d43ee36fc11040e80e4), includes:
- Bugfixes of windows build
| true |
2,902,065,280 | Fix attempt https://github.com/pytorch/pytorch/issues/148498 | pradeepfn | open | [
"oncall: distributed",
"fb-exported",
"release notes: distributed (checkpoint)"
] | 3 | CONTRIBUTOR | Differential Revision: D70757404
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,901,995,570 | [FlexDecode] causes bogus assert w/ small seq-len and not sure what for | drisspg | closed | [
"release notes: nn",
"topic: bug fixes",
"module: inductor",
"ciflow/inductor",
"module: flex attention"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148744
fixes https://github.com/pytorch/pytorch/issues/148527
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @Chillee @yanboliang @BoyuanFeng | true |
2,901,989,222 | torch.unique(t, dim=1, return_counts=True) does not work | aifartist | open | [
"triaged",
"module: python frontend"
] | 5 | NONE | ### 🐛 Describe the bug
torch.unique(t, dim=1, return_counts=True) does not work
The "values" result still has duplicates within each row (dim=1), and only one set of counts is returned instead of one per row.
```
import torch
t = torch.tensor([[1, 1, 7, 3, 7, 7, 3], [2, 2, 2, 6, 6, 4, 3]])
print(torch.unique(t, dim=1, return_counts=True))
for tRow in t: # Show what the results should be
print(torch.unique(tRow, return_counts=True))
print(torch.unique_consecutive(t, dim=1, return_counts=True))
for tRow in t: # Show what the results should be
print(torch.unique_consecutive(tRow, return_counts=True))
```
The repro also shows the results when unique or unique_consecutive is called once per row with a for loop. As can be seen, the results are different.
I can't make any sense of the results; basically worthless for 2d. However, it is clear that having unique() work across multiple rows using dim=1 isn't straightforward, because the number of unique values might differ from row to row, so the resultant values or counts wouldn't be rectangular.
HOWEVER, one could 'pad' the results to return:
uniqvals = [[1, 3, 7, **_pad_**], [2, 3, 4, 6]]
counts = [[2, 2, 3, 0], [3, 1, 1, 2]]
**_pad_** could be any value of the dtype, and where count=0 is how we'd know a position was just padding.
IMO, it is an important use case to go through a large number of 1d vectors and compute unique on each without a for loop on a gpu.
Thinking ahead: to know how much to pad, one would need the maximum number of unique values on any given row. To avoid extra work handling padding after a first pass, one could instead let the caller specify a maximum length for the output "rows". To guarantee nothing is missed, just pass the length of that dimension from the input. Hopefully I made this part clear.
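The padded semantics proposed above can be sketched in plain Python (the `pad` and `width` parameters are my own illustrative naming, not a proposed API):

```python
def unique_per_row(rows, pad=0, width=None):
    """Per-row unique values and counts, padded to a rectangular shape.

    Positions where count == 0 are padding, so `pad` may be any value
    of the dtype. `width` defaults to the input row length, which is
    always enough to hold every unique value.
    """
    if width is None:
        width = max(len(r) for r in rows)
    values, counts = [], []
    for r in rows:
        uniq = sorted(set(r))
        cnt = [r.count(u) for u in uniq]
        values.append(uniq + [pad] * (width - len(uniq)))
        counts.append(cnt + [0] * (width - len(cnt)))
    return values, counts

t = [[1, 1, 7, 3, 7, 7, 3], [2, 2, 2, 6, 6, 4, 3]]
vals, cnts = unique_per_row(t, width=4)
# vals == [[1, 3, 7, 0], [2, 3, 4, 6]]
# cnts == [[2, 2, 3, 0], [3, 1, 1, 2]]
```

A vectorized GPU version would replace the Python loop with a sort along dim=1 plus segment boundaries, but this shows the proposed output contract on the example tensor from the report.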
NOTE: I disagree with: https://github.com/pytorch/pytorch/issues/130217
I find the simple 1d case understandable and usable. With additional dimension(s), the problem is not that the semantics couldn't be made understandable; it is that the current behavior simply does not work.
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: 18.1.3 (1ubuntu1)
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-53-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 570.86.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn.so.8.7.0
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.7.0
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.7.0
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.7.0
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.7.0
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.7.0
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-3,11,13-31
Off-line CPU(s) list: 4-10,12
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900K
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 21
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 56%
CPU max MHz: 5800.0000
CPU min MHz: 0.0000
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 752 KiB (21 instances)
L1i cache: 1.2 MiB (21 instances)
L2 cache: 26 MiB (9 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3,11,13-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] came-pytorch==0.1.3
[pip3] mypy-extensions==1.0.0
[pip3] nexfort==0.1.dev329+torch251cu124
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] oldest-supported-numpy==2023.12.21
[pip3] pytorch-fid==0.3.0
[pip3] torch==2.6.0
[pip3] torchaudio==2.5.1
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] Could not collect
cc @albanD | true |
2,901,988,936 | [DRAFT][Reshape] Guard-free reshape for contiguous tensors to avoid data dependent errors. | laithsakka | open | [
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149266
* #148899
* #148893
* #148872
* __->__ #148742
* #148815
* #148809
* #148430
The main reason for this refactor is to avoid the data-dependent errors that the default path has;
this is because the new path has no checks on sizes.
Does this deviate from previous behavior, or from torch eager, especially when it comes to strides?
I need to dig deeper on this; I do not have a clear answer. There was a situation where this used to diverge
from the previous behavior when we reshape the input to itself, but I addressed that.
In general we have three choices here:
1. use the new logic only for unbacked.
2. use it for all compile
3. use it for eager and compile
I do not want to spend time fixing the failing tests if we are going to go with (1) or if the idea is not accepted;
that's why I would like to have this discussed first. The failures do not seem risky (changed expected strings, runtime asserts, etc.).
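The core intuition for why the contiguous path needs no size guards can be sketched in plain Python (a hypothetical helper, not the actual implementation): for a contiguous input, the view's strides are fully determined by the target shape, so nothing data-dependent about the individual input sizes has to be checked beyond the total element count.

```python
from functools import reduce
import operator

def contiguous_strides(shape):
    # Row-major (C-contiguous) strides: each stride is the number of
    # elements spanned by one step along that dimension.
    strides = []
    running = 1
    for dim in reversed(shape):
        strides.append(running)
        running *= dim
    return list(reversed(strides))

def guard_free_reshape_strides(old_shape, new_shape):
    # For a contiguous input, reshape is always a view and the output
    # strides depend only on the target shape -- no per-size guards.
    assert reduce(operator.mul, old_shape, 1) == reduce(operator.mul, new_shape, 1)
    return contiguous_strides(new_shape)
```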
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,901,954,221 | Remove some Centos7 builds | cyyever | closed | [
"triaged",
"open source",
"topic: not user facing"
] | 4 | COLLABORATOR | It should be safe to remove them, and they distract from grepping keywords when debugging CI issues. | true |
2,901,946,897 | [BE] Move cuda12.6 builds to gcc11 | malfet | closed | [
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"no-runner-experiments"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148740
I.e. `s/pytorch-linux-focal-cuda12.6-cudnn9-py3-gcc9/pytorch-linux-focal-cuda12.6-cudnn9-py3-gcc11/`
This accidentally fixes undefined symbol reference errors, namely
```
/usr/bin/ld: /var/lib/jenkins/cpp-build/caffe2/build/lib/libtorch_cuda.so: undefined reference to `std::__throw_bad_array_new_length()'
```
This happens because `libmagma.a`, built with gcc-11 (after https://github.com/pytorch/pytorch/pull/148135), contains symbols that are defined in `/opt/rh/gcc-toolset-11/root/usr/lib/gcc/x86_64-redhat-linux/11/libstdc++_nonshared.a` but missing from the corresponding library bundled with `g++-9`.
Though I could not figure out what flags one must use to trigger generation of those symbols, see https://godbolt.org/z/E9KfdhzzY or
```
$ echo "int* foo(int x) { return new int[x];}"|g++ -std=c++17 -S -O3 -x c++ -o - -
.file ""
.text
.section .text.unlikely,"ax",@progbits
.LCOLDB0:
.text
.LHOTB0:
.p2align 4
.globl _Z3fooi
.type _Z3fooi, @function
_Z3fooi:
.LFB0:
.cfi_startproc
endbr64
movslq %edi, %rdi
subq $8, %rsp
.cfi_def_cfa_offset 16
movabsq $2305843009213693950, %rax
cmpq %rax, %rdi
ja .L2
salq $2, %rdi
addq $8, %rsp
.cfi_def_cfa_offset 8
jmp _Znam@PLT
.cfi_endproc
.section .text.unlikely
.cfi_startproc
.type _Z3fooi.cold, @function
_Z3fooi.cold:
.LFSB0:
.L2:
.cfi_def_cfa_offset 16
call __cxa_throw_bad_array_new_length@PLT
.cfi_endproc
```
Fixes https://github.com/pytorch/pytorch/issues/148728 and https://github.com/pytorch/pytorch/issues/148495 | true |
2,901,937,629 | [BE] Delete split builds | malfet | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148740
* __->__ #148739
They have been disabled since Oct 2024; perhaps it's time to remove them from the workflows
See https://github.com/pytorch/pytorch/issues/138750 | true |
2,901,912,506 | Reduce the binary size of Intel GPU wheel package | chunhuanMeng | closed | [
"open source",
"topic: not user facing",
"module: xpu"
] | 9 | CONTRIBUTOR | Update the torch-xpu-ops commit to [8d58bd6c0ac86191aa375075e09e2b47fa957d39](https://github.com/intel/torch-xpu-ops/commit/8d58bd6c0ac86191aa375075e09e2b47fa957d39), which includes:
- Bug fixes for the Windows build
cc @gujinghui @EikanWang @fengyuan14 @guangyey | true |
2,901,903,898 | gracefully handle `tokenize.TokenError` in funcname parser. Adds support for non-Python source | cat-state | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 5 | CONTRIBUTOR | This change allows defining Python functions in non-Python source and having them be compilable by torch.compile. The existing implementation already returns None when the file couldn't be read, so returning None (by making an empty funcname cache) makes sense for non-Python source code too.
Example [basilisp](https://github.com/basilisp-lang/basilisp):
```clojure
(import torch)
(import [torch.nn.functional :as F])
(torch/rand 10)
(defn f {:decorators [torch/compile]} [x]
(* (F/relu x) x))
(f (-> (torch/randn 100)
(.cuda)))
```
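The graceful-degradation path can be sketched in plain Python (a hypothetical helper, not the actual `torch._dynamo` code): tokenizing non-Python source raises `tokenize.TokenError`, which is caught and treated the same as an unreadable file.

```python
import io
import tokenize

def build_funcname_cache(source: str):
    """Map line numbers to def/class names, or None if the source is
    not tokenizable Python (e.g. Clojure compiled to Python functions)."""
    cache = {}
    prev = ""
    try:
        for tok in tokenize.generate_tokens(io.StringIO(source).readline):
            if tok.type == tokenize.NAME:
                if prev in ("def", "class"):
                    cache[tok.start[0]] = tok.string
                prev = tok.string
            else:
                prev = ""
    except tokenize.TokenError:
        # Non-Python source: behave as if the file couldn't be read.
        return None
    return cache
```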
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,901,896,354 | [dynamo] add recursive-only dont_skip_tracing (i.e. force_inline) | williamwen42 | closed | [
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"keep-going",
"module: compile ux"
] | 2 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148736
Implementation details:
- `external_utils._ignore_skip_function_variable` is a global that toggles whether we should ignore most skip rules.
- When active, `trace_rules.lookup_inner` will return `UserFunctionVariable` (i.e. trace normally) instead of `SkipFunctionVariable` (skip tracing), unless the function is located in `torch._dynamo` (since tracing into certain `torch._dynamo` modules, especially `torch._dynamo.eval_frame`, results in bad tracing behaviors).
- `SkipFunctionVariable.call_function` will also attempt to sourceless build its function and run `call_function` on that. This needs to be done since the `SkipFunctionVariable` may have been loaded before `_ignore_skip_function_variable` is active.
- `dont_skip_tracing(recursive=True)` wraps around a function and toggles `_ignore_skip_function_variable` around the function call.
- When Dynamo traces `_set_ignore_skip_function_variable`, it records the global side effect change to `_ignore_skip_function_variable` but also sets it for real in the current process.
Notes on correctness: We must ensure that our rules for setting `_ignore_skip_function_variable` are composable with graph breaks and eval_frame/caching behavior. In particular:
- `_ignore_skip_function_variable` should be set for real so that we do the correct thing in the case of non-inlined nested calls
- `_ignore_skip_function_variable` should be set when tracing so that we do the correct thing in the case of inlining
We follow these invariants in order to ensure correct behavior:
- `_ignore_skip_function_variable` should be set before and unset after the function call for real
- Dynamo-generated bytecode should produce the same results and have the same side effects as running the original code eagerly
- Dynamo can change global state in whatever way it wishes during tracing, as long as the global state is reset after tracing.
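The toggle pattern described above can be sketched in plain Python (a simplified model with illustrative names, not the actual Dynamo code): a module-level flag, a context manager that flips it for real and restores it afterwards, and a lookup that honors it.

```python
import contextlib

# Simplified model of the skip-rule toggle (names are illustrative).
_ignore_skip = False

def lookup(fn_name: str, skiplist: set) -> str:
    # Decide how the tracer should treat a function call.
    if fn_name in skiplist and not _ignore_skip:
        return "skip"
    return "inline"

@contextlib.contextmanager
def dont_skip_tracing():
    # Set the flag for real so non-inlined nested calls see it,
    # and always restore it so global state is reset after tracing.
    global _ignore_skip
    saved = _ignore_skip
    _ignore_skip = True
    try:
        yield
    finally:
        _ignore_skip = saved
```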
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,901,887,216 | Code Clean: Remove unnecessary code | FFFrog | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148735
As the title states.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,901,885,397 | [Inductor UT][XPU] Skip test case test_cat_max_autotune_triton for known issue. | etaf | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 10 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148734
The mm Triton templates/configs have not been tuned for XPU; we observe that epilogue fusion does not speed things up on XPU because of register spills. So XPU fails on the case `test_cat_max_autotune_triton`, which checks the fusion. We'll remove the skip after #146568 is resolved.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,901,872,170 | [Inductor][Windows] add env_var switch to turn all Windows inductor UTs. | xuhancn | closed | [
"module: windows",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: dynamo",
"ciflow/inductor",
"ciflow/xpu"
] | 12 | COLLABORATOR | For timeout reasons, we can't turn on all Windows Inductor UTs in CI: https://github.com/pytorch/pytorch/issues/135927
Without the UTs, we can't ensure Windows Inductor quality.
The Intel team will do some local testing for Windows Inductor, but we still need a switch to turn on the full Windows Inductor UT suite.
The switch is an environment variable:
```cmd
set TORCHINDUCTOR_WINDOWS_TESTS=1
```
After setting this environment variable, all Windows Inductor UTs are turned on; it will not affect PyTorch CI.
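How such a switch is typically read can be sketched as follows (a hedged illustration; the actual config plumbing in Inductor may differ):

```python
import os

def windows_inductor_tests_enabled(env=None):
    # Treat the switch as enabled only when explicitly set to "1".
    env = os.environ if env is None else env
    return env.get("TORCHINDUCTOR_WINDOWS_TESTS", "0") == "1"
```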
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |