| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,782,054,575 | remove allow-untyped-defs from torch/_functorch/utils.py | bobrenjc93 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144627
* __->__ #144626
| true |
2,782,054,510 | remove allow-untyped-defs from torch/jit/_pickle.py | bobrenjc93 | closed | [
"oncall: jit",
"Merged",
"ciflow/trunk",
"release notes: jit",
"topic: not user facing"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144627
* #144626
* __->__ #144625
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | true |
2,782,054,458 | remove allow-untyped-defs from torch/distributions/pareto.py | bobrenjc93 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144627
* #144626
* #144625
* __->__ #144624
| true |
2,782,054,417 | remove allow-untyped-defs from torch/distributed/_shard/sharded_tensor/shard.py | bobrenjc93 | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (sharded)",
"topic: not user facing"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144627
* #144626
* #144625
* #144624
* __->__ #144623
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,782,045,650 | [inductor] Enable docstring_linter on _inductor | rec | closed | [
"module: lint",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 14 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144622
* #144621
| true |
2,782,045,628 | [inductor] Add tests for new docstring_linter features (fix #142496) | rec | closed | [
"module: lint",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144622
* __->__ #144621
| true |
2,782,045,609 | [inductor] Fix issue with set_linter, improve linter framework | rec | closed | [
"module: lint",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"suppress-api-compatibility-check",
"suppress-bc-linter"
] | 20 | COLLABORATOR | ### `set_linter` only
* Fix gnarly [bug](https://github.com/pytorch/pytorch/blob/dbed747aae223d53ca4e22fe45c24d1d9a8b4432/tools/test/set_linter_testdata/python_code.py.txt.python#L42) which would have garbled Python files involving sets contained in sets.
* Better handling of new Python3.12 token types
### Both linters.
* Recover from and report on unparseable Python files
* Remove `ParseError.check()` (it made it harder to read the code)
* FileLinter is now generic on `PythonFile`
### Notes
As I started working on new docstring features, I found a nasty bug and an edge case bug in set linter, and realized both the linters crash when there is a badly-formed Python file in the repo.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144620
| true |
2,782,025,987 | [inductor] Enable docstring_lint on _inductor | rec | closed | [
"open source",
"topic: not user facing"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144619
* #144618
| true |
2,782,025,963 | Add features to docstring_linter (fix #142496) | rec | closed | [
"open source",
"topic: not user facing"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144619
* __->__ #144618
| true |
2,782,025,300 | Unified Pytorch for Nvidia (CUDA), Intel (XPU), AMD (ROCm) | Qubitium | closed | [
"module: build",
"feature",
"module: rocm",
"triaged"
] | 6 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
Allow a single PyTorch binary/package to support all major GPU platforms: one PyTorch env that can execute code on `cpu`, `mps`, `cuda`, `xpu`, and `rocm`, rather than three torch virtual envs that can't talk to each other.
Reasons why we need this:
* It is the natural solution for end-users.
* Pre-built consumer multi-device machines already exist: Arc iGPU (XPU) + Nvidia (CUDA) laptops.
* Custom builds: XPU + CUDA + ROCm in one machine. Nothing says you can only have a single device class in a system.
* LLM models can run on mixed GPUs in a more performant and/or cost-effective way.
* There is no technical reason I can think of that should prevent this natural state of a multi-device torch env.
* Developers on multi-device machines are forced into separate virtual envs where one device can't talk to another via the PyTorch API without some shm or RPC magic.
End-User Problems:
* Driver dependencies: Nvidia drivers are the easiest to install and use, with ROCm and Intel/XPU less friendly, in that order.
* Drivers have complex dependencies, and a single package for all platforms is hard for end-users, who currently have to do all the prep work themselves.
Current state:
* CUDA PyTorch can't access XPU or ROCm.
* Intel XPU-enabled PyTorch can't access CUDA or ROCm.
* AMD ROCm PyTorch can't access CUDA or XPU.
The current state of affairs is bad for PyTorch and bad for developers.
Ask yourself this one question:
Why can't PyTorch natively transfer `tensors` between any of [`cpu`, `mps`, `cuda`, `xpu`, `rocm`] in one environment?
### Alternatives
None. We don't want three separate environments. End-users should have the option of a unified env for all devices. Not all users will want this, but many will, given the choice. Currently there is no choice.
### Additional context
_No response_
cc @malfet @seemethere @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,781,986,956 | Collect packages with importlib in collect_env | AngryLoki | closed | [
"triaged",
"open source",
"Merged",
"Reverted",
"Stale",
"ciflow/trunk",
"release notes: python_frontend",
"topic: improvements",
"ci-no-td"
] | 22 | CONTRIBUTOR | If PyTorch is installed system-wide (via the OS package manager) or by an alternative package manager like `uv`, pip is not available, causing an error in `collect_env`.
However, it is still possible to collect exactly the same list using the `importlib` API, which is always available.
Fixes #144615
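For reference, a minimal sketch of the `importlib.metadata` approach (the function name and prefix list are illustrative, not the actual PR code):

```python
from importlib import metadata

def list_relevant_packages(prefixes=("torch", "numpy", "mypy")):
    # Enumerate installed distributions natively; no pip subprocess needed.
    found = {}
    for dist in metadata.distributions():
        name = dist.metadata["Name"]
        if name and name.lower().startswith(tuple(prefixes)):
            found[name] = dist.version
    return found

print(list_relevant_packages())
```

Because `importlib.metadata` reads installed distribution metadata directly, this works regardless of whether the package was installed by pip, `uv`, or an OS package manager.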
| true |
2,781,979,082 | `collect_env.py` fails with `'NoneType' object has no attribute 'splitlines'` if pytorch is installed without pip | AngryLoki | closed | [
"module: collect_env.py",
"triaged",
"module: devx"
] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
When a user tries to collect system information with `python -m torch.utils.collect_env` on a system where PyTorch is installed **from a system package manager** and **pip is not installed** (as expected for system-wide installations), the script fails with `'NoneType' object has no attribute 'splitlines'`.
This issue is observed in multiple bug reports: https://github.com/pytorch/pytorch/issues?q=%22object+has+no+attribute+%27splitlines%27%22 (sometimes from ArchLinux, Gentoo, Android, or external package managers like `uv`).
In reality, Python provides ways to enumerate installed packages natively, and there is no need to install or call pip to do this.
Please see the attached pull request with a fix.
### Versions
```
Collecting environment information...
Traceback (most recent call last):
File "/src/dockers/src/collect_env.py", line 692, in <module>
main()
File "/src/dockers/src/collect_env.py", line 675, in main
output = get_pretty_env_info()
^^^^^^^^^^^^^^^^^^^^^
File "/src/dockers/src/collect_env.py", line 670, in get_pretty_env_info
return pretty_str(get_env_info())
^^^^^^^^^^^^^^
File "/src/dockers/src/collect_env.py", line 495, in get_env_info
pip_version, pip_list_output = get_pip_packages(run_lambda)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/dockers/src/collect_env.py", line 450, in get_pip_packages
for line in out.splitlines()
^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'splitlines'
```
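The crash itself is easy to guard against: the pip subprocess output comes back as `None` when pip cannot be run, and is then passed to `.splitlines()` unchecked. A defensive sketch (the helper name is hypothetical, not the actual `collect_env` code):

```python
def split_output(out):
    # The subprocess helper returns None when pip is not installed;
    # return no lines instead of crashing on None.splitlines().
    if out is None:
        return []
    return out.splitlines()

print(split_output("torch==2.5.1\nnumpy==1.26.4"))
```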
cc @ZainRizvi @kit1980 @huydhn @clee2000 | true |
2,781,944,274 | Different Result with Different GPUs (A6000, A40) | iot2edge | open | [
"needs reproduction",
"module: cuda",
"triaged",
"module: determinism"
] | 1 | NONE | ### 🐛 Describe the bug
I set most of the parameters correctly but still get different results with different GPUs.
### Versions
```
def set_deterministic_pytorch(seed: int):
# Set CUBLAS workspace config
cublas_workspace_config = os.environ.get("CUBLAS_WORKSPACE_CONFIG")
if cublas_workspace_config is None:
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
# Set PyTorch deterministic settings
os.environ['PYTHONHASHSEED'] = str(seed)
torch.manual_seed(seed)
torch.use_deterministic_algorithms(True, warn_only=True) # alternative : torch.use_deterministic_algorithms(True)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
torch.utils.deterministic.fill_uninitialized_memory = True
# Disable TensorFloat32 for consistent precision
torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False
# If using CUDA
if torch.cuda.is_available():
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed) # If using multi-GPU
```
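As a side note, the `CUBLAS_WORKSPACE_CONFIG` guard at the top of the snippet can be written with `os.environ.setdefault`, which only writes the value when the variable is absent (a stdlib simplification, not part of the original report):

```python
import os

# Equivalent to the explicit None check above: only set the workspace
# config if the environment variable is not already present.
os.environ.setdefault("CUBLAS_WORKSPACE_CONFIG", ":4096:8")
print(os.environ["CUBLAS_WORKSPACE_CONFIG"])
```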
cc @ptrblck @msaroufim @eqy @mruberry @kurtamohler | true |
2,781,935,134 | The label marked by torch.profiler.profile.record_function() appears twice in the output | plorrrrrrr | open | [
"oncall: profiler"
] | 1 | NONE | ### 🐛 Describe the bug
I have followed the tutorials in [link](https://pytorch.org/tutorials/recipes/recipes/profiler_recipe.html)
I ran the code as follows
```python
import torch
import torchvision.models as models
from torch.profiler import profile, record_function, ProfilerActivity
if torch.cuda.is_available():
device = 'cuda:2'
elif torch.xpu.is_available():
device = 'xpu'
else:
print('Neither CUDA nor XPU devices are available to demonstrate profiling on acceleration devices')
import sys
sys.exit(0)
activities = [ProfilerActivity.CPU, ProfilerActivity.CUDA]
sort_by_keyword = "cuda" +"_time_total"
model = models.resnet18().to(device)
inputs = torch.randn(5, 3, 224, 224).to(device)
warmup = 5
for i in range(warmup):
model(inputs)
if __name__ == "__main__":
with profile(activities=activities,record_shapes=True) as prof:
with record_function("model_inference"):
model(inputs)
print(prof.key_averages().table(sort_by=sort_by_keyword, row_limit=10))
```
And I get the results shown in the picture. There is only one "model_inference" row in the tutorial, but there are two here.

I don't know why this happens. Also, the CUDA time reported by the first model_inference is longer than the actual runtime.
Thanks a lot.
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.8 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:31:09) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
Nvidia driver version: 550.127.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 112
On-line CPU(s) list: 0-111
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6348 CPU @ 2.60GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
Stepping: 6
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 2.6 MiB (56 instances)
L1i cache: 1.8 MiB (56 instances)
L2 cache: 70 MiB (56 instances)
L3 cache: 84 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-27,56-83
NUMA node1 CPU(s): 28-55,84-111
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.10
[pip3] onnx==1.17.0
[pip3] onnx-graphsurgeon==0.5.2
[pip3] onnxruntime-gpu==1.20.1
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.5.1
[pip3] torchinfo==1.8.0
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[pip3] tritonclient==2.53.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] nvtx 0.2.10 pypi_0 pypi
[conda] pytorch-quantization 2.1.2 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchinfo 1.8.0 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
[conda] tritonclient 2.53.0 pypi_0 pypi
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise | true |
2,781,920,096 | [CUDA] Illegal Memory Access with `torch.bmm` | jwnhy | open | [
"module: cuda",
"triaged",
"topic: fuzzer"
] | 5 | NONE | ### 🐛 Describe the bug
The following code causes an illegal memory access in PyTorch.
```python
import torch
m1 = torch.randn(2, 291105, 1).to_sparse().cuda()
m2 = torch.randn(2, 1, 1).cuda()
print([m1.size(), m2.size()])
torch.bmm(m1, m2)
```
The bug was detected with `compute-sanitizer`:
```bash
compute-sanitizer python3 poc2.py
```
### Versions
Environment:
```
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.11.0-1007-oem-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 PCIe
Nvidia driver version: 560.35.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) SILVER 4510
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
Stepping: 8
CPU(s) scaling MHz: 37%
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd sgx_lc fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.1 MiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 48 MiB (24 instances)
L3 cache: 60 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.4.127 0 nvidia
[conda] cuda-cupti 12.4.127 0 nvidia
[conda] cuda-libraries 12.4.1 0 nvidia
[conda] cuda-nvrtc 12.4.127 0 nvidia
[conda] cuda-nvtx 12.4.127 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.4.1 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.4.5.8 0 nvidia
[conda] libcufft 11.2.1.3 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.6.1.9 0 nvidia
[conda] libcusparse 12.3.1.170 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.4.127 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.11 py312h5eee18b_0
[conda] mkl_random 1.2.8 py312h526ad5a_0
[conda] numpy 2.1.3 py312hc5e2394_0
[conda] numpy-base 2.1.3 py312h0da6c21_0
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] pytorch 2.5.1 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_7 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.5.1 py312_cu124 pytorch
[conda] torchtriton 3.1.0 py312 pytorch
[conda] torchvision 0.20.1 py312_cu124 pytorch
```
cc @ptrblck @msaroufim @eqy | true |
2,781,918,475 | [CUDA] Illegal Memory Access with `ConvTranspose2d` | jwnhy | open | [
"module: cuda",
"triaged",
"topic: fuzzer"
] | 5 | NONE | ### 🐛 Describe the bug
The following code causes an illegal memory access in PyTorch.
```python
import torch
D = 40000
C = 10
m1 = torch.randn(C, D, 2).cuda()
model = torch.nn.ConvTranspose2d(C, 2, kernel_size=(1, 1), stride=(200, 200)).cuda()
model(m1)
```
The bug was detected with `compute-sanitizer`:
```bash
compute-sanitizer python3 poc1.py
```
### Versions
Environment:
```
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.11.0-1007-oem-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 PCIe
Nvidia driver version: 560.35.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) SILVER 4510
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
Stepping: 8
CPU(s) scaling MHz: 37%
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd sgx_lc fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.1 MiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 48 MiB (24 instances)
L3 cache: 60 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.4.127 0 nvidia
[conda] cuda-cupti 12.4.127 0 nvidia
[conda] cuda-libraries 12.4.1 0 nvidia
[conda] cuda-nvrtc 12.4.127 0 nvidia
[conda] cuda-nvtx 12.4.127 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.4.1 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.4.5.8 0 nvidia
[conda] libcufft 11.2.1.3 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.6.1.9 0 nvidia
[conda] libcusparse 12.3.1.170 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.4.127 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.11 py312h5eee18b_0
[conda] mkl_random 1.2.8 py312h526ad5a_0
[conda] numpy 2.1.3 py312hc5e2394_0
[conda] numpy-base 2.1.3 py312h0da6c21_0
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] pytorch 2.5.1 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_7 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.5.1 py312_cu124 pytorch
[conda] torchtriton 3.1.0 py312 pytorch
[conda] torchvision 0.20.1 py312_cu124 pytorch
```
cc @ptrblck @msaroufim @eqy | true |
2,781,848,795 | some errors in torch.compile(model,fullgraph=True,mode="reduce-overhead") on multi-gpu | zyxiyy | open | [
"needs reproduction",
"triaged",
"oncall: pt2"
] | 2 | NONE | ### 🐛 Describe the bug
code:
```python
import torch
from transformers import StaticCache

# Note: `model`, `tokenizer`, `inputs`, `layer_device_map`, and `compile_layer`
# are assumed to be defined elsewhere in the script.
NUM_TOKENS_TO_GENERATE = 40
torch_device = "cuda"
from torch.nn.attention import SDPBackend, sdpa_kernel
def decode_one_tokens(model, cur_token, input_pos, cache_position, past_key_values):
logits = model(
cur_token,
position_ids=input_pos,
cache_position=cache_position,
past_key_values=past_key_values,
return_dict=False,
use_cache=True
)[0]
new_token = torch.argmax(logits[:, -1], dim=-1)[:, None]
return new_token
from torch.nn.attention import SDPBackend, sdpa_kernel
batch_size, seq_length = inputs["input_ids"].shape
with torch.no_grad():
past_key_values = StaticCache(
config=model.config, batch_size=1, max_cache_len=4096, device=torch_device, dtype=model.dtype,layer_device_map=layer_device_map
)
cache_position = torch.arange(seq_length, device=torch_device)
generated_ids = torch.zeros(
batch_size, seq_length + NUM_TOKENS_TO_GENERATE + 1, dtype=torch.int, device=torch_device
)
generated_ids[:, cache_position] = inputs["input_ids"].to(torch_device).to(torch.int)
logits = model(
**inputs, cache_position=cache_position, past_key_values=past_key_values,return_dict=False, use_cache=True
)[0]
next_token = torch.argmax(logits[:, -1], dim=-1)[:, None]
generated_ids[:, seq_length] = next_token[:, 0]
print(next_token.device)
# decode_one_tokens = torch.compile(decode_one_tokens, mode="reduce-overhead", fullgraph=True)
compile_layer(model)
cache_position = torch.tensor([seq_length + 1], device=torch_device)
for _ in range(1, NUM_TOKENS_TO_GENERATE):
# with sdpa_kernel(SDPBackend.MATH):
next_token = decode_one_tokens(model, next_token.clone(), None, cache_position, past_key_values)
generated_ids[:, cache_position] = next_token.int()
cache_position += 1
text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```
error:
```
Unsupported: torch.* op returned non-Tensor device call_function <built-in function getitem>
from user code:
File "/home/bcds/.conda/envs/llm/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
File "/home/bcds/.conda/envs/llm/lib/python3.9/site-packages/accelerate/hooks.py", line 364, in pre_forward
return send_to_device(args, self.execution_device), send_to_device(
File "/home/bcds/.conda/envs/llm/lib/python3.9/site-packages/accelerate/utils/operations.py", line 184, in send_to_device
{
File "/home/bcds/.conda/envs/llm/lib/python3.9/site-packages/accelerate/utils/operations.py", line 185, in <dictcomp>
k: t if k in skip_keys else send_to_device(t, device, non_blocking=non_blocking, skip_keys=skip_keys)
File "/home/bcds/.conda/envs/llm/lib/python3.9/site-packages/accelerate/utils/operations.py", line 156, in send_to_device
return tensor.to(device, non_blocking=non_blocking)
File "/home/bcds/.conda/envs/llm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1299, in to
device, dtype, non_blocking, convert_to_format = torch._C._nn._parse_to(
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.216.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) PLATINUM 8558
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 2
CPU(s) scaling MHz: 35%
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 520 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23,96-119
NUMA node1 CPU(s): 24-47,120-143
NUMA node2 CPU(s): 48-71,144-167
NUMA node3 CPU(s): 72-95,168-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-nccl-cu11==2.21.5
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] torch==2.5.1+cu118
[pip3] torchaudio==2.5.1+cu118
[pip3] torchvision==0.20.1+cu118
[pip3] triton==3.1.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] nvidia-cublas-cu11 11.11.3.6 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu11 11.8.87 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cudnn-cu11 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
[conda] nvidia-curand-cu11 10.3.0.86 pypi_0 pypi
[conda] nvidia-cusolver-cu11 11.4.1.48 pypi_0 pypi
[conda] nvidia-cusparse-cu11 11.7.5.86 pypi_0 pypi
[conda] nvidia-nccl-cu11 2.21.5 pypi_0 pypi
[conda] nvidia-nvtx-cu11 11.8.86 pypi_0 pypi
[conda] torch 2.5.1+cu118 pypi_0 pypi
[conda] torchaudio 2.5.1+cu118 pypi_0 pypi
[conda] torchvision 0.20.1+cu118 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu | true |
2,781,781,611 | Inductor C++ Wrapper + autograd cause error in the second run because of FX graph cache | YouJiacheng | open | [
"triaged",
"module: fx",
"oncall: pt2",
"module: inductor",
"compile-cache"
] | 4 | CONTRIBUTOR | ### 🐛 Describe the bug
```python
import torch
import torch._inductor.config as config
from torch import Tensor
config.cpp_wrapper = True
@torch.compile
def foo(x: Tensor):
return x.sin()
x = torch.tensor(0.0, device="cuda", requires_grad=True)
foo(x).backward()
print(x.grad)
```
Running this code __TWICE__ produces an error on the second run:
```
Traceback (most recent call last):
File "/root/modded-nanogpt/custom_op_cache.py", line 15, in <module>
foo(x).backward()
File "/root/modded-nanogpt/.venv/lib/python3.12/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/root/modded-nanogpt/.venv/lib/python3.12/site-packages/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "/root/modded-nanogpt/.venv/lib/python3.12/site-packages/torch/autograd/graph.py", line 823, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/modded-nanogpt/.venv/lib/python3.12/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
^^^^^^^^^^^^^^^^^^^^
File "/root/modded-nanogpt/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1958, in backward
return impl_fn()
^^^^^^^^^
File "/root/modded-nanogpt/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1944, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/modded-nanogpt/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2079, in _backward_impl
out = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/modded-nanogpt/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
^^^^^^^
File "/root/modded-nanogpt/.venv/lib/python3.12/site-packages/torch/_inductor/output_code.py", line 464, in __call__
return self.current_callable(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/modded-nanogpt/.venv/lib/python3.12/site-packages/torch/_inductor/utils.py", line 2203, in run
return model(new_inputs)
^^^^^^^^^^^^^^^^^
File "/tmp/torchinductor_root/pw/cpwoz7xtew3ko7zejrn4bsrizhftvllcrykvty7vz5xn6v3zmkbp.py", line 262, in g
output_handles = f(input_handles)
^^^^^^^^^^^^^^^^
RuntimeError: CUDA driver error: invalid device context
```
Turning off the FX graph cache fixes it:
```python
import os
os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "0"
import torch
import torch._inductor.config as config
from torch import Tensor
config.cpp_wrapper = True
@torch.compile
def foo(x: Tensor):
return x.sin()
x = torch.tensor(0.0, device="cuda", requires_grad=True)
foo(x).backward()
print(x.grad)
```
This bug might be relevant to https://github.com/pytorch/pytorch/issues/144344
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250110+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.4
Libc version: glibc-2.35
Python version: 3.12.8 (main, Dec 19 2024, 14:33:20) [Clang 18.1.8 ] (64-bit runtime)
Python platform: Linux-5.4.250-2-velinux1u1-amd64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.129.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 168
On-line CPU(s) list: 0-161
Off-line CPU(s) list: 162-167
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8457C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 42
Socket(s): 2
Stepping: 8
BogoMIPS: 5199.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3.9 MiB (84 instances)
L1i cache: 2.6 MiB (84 instances)
L2 cache: 168 MiB (84 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-83
NUMA node1 CPU(s): 84-167
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250110+cu126
[conda] Could not collect
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng | true |
2,781,725,672 | Build breaks on FreeBSD on arm platforms: Unrecognized CMAKE_SYSTEM_NAME = FreeBSD | yurivict | open | [
"module: build",
"triaged"
] | 0 | NONE | ### 🐛 Describe the bug
```
-- The ASM compiler identification is Clang with GNU-like command-line
-- Found assembler: /usr/local/llvm15/bin/clang
CMake Error at aten/src/ATen/native/quantized/cpu/qnnpack/CMakeLists.txt:65 (message):
Unrecognized CMAKE_SYSTEM_NAME = FreeBSD
-- Configuring incomplete, errors occurred!
```
### Versions
2.5.1
cc @malfet @seemethere | true |
2,781,620,946 | broken `torch.compile` with `"meta"` device tensors | koute | closed | [
"good first issue",
"triaged",
"actionable",
"oncall: pt2",
"module: dynamic shapes",
"module: inductor"
] | 9 | NONE | ### 🐛 Describe the bug
Consider the following code:
```python
import torch
@torch.compile
def foobar(x):
return x * 2
def test(device):
foobar(torch.empty((1, 16, 128, 128), device = device))
foobar(torch.empty((1, 32, 64, 64), device = device))
# OK
test("cuda")
print("cuda ok")
# Fails
test("meta")
print("meta ok")
```
Running `test` with `"cuda"` works, but running `test` with the `"meta"` device fails with the following exception:
```
Traceback (most recent call last):
File ".venv/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1446, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_dynamo/repro/after_dynamo.py", line 129, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_dynamo/repro/after_dynamo.py", line 129, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/__init__.py", line 2234, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1521, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_dynamo/backends/common.py", line 72, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1071, in aot_module_simplified
compiled_fn = dispatch_and_compile()
^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1056, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 522, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 759, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 179, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1350, in fw_compiler_base
return _fw_compiler_base(model, example_inputs, is_inference)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1421, in _fw_compiler_base
return inner_compile(
^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 475, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_dynamo/repro/after_aot.py", line 85, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 661, in _compile_fx_inner
compiled_graph = FxGraphCache.load(
^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/codecache.py", line 1334, in load
compiled_graph = compile_fx_fn(
^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 570, in codegen_and_compile
compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 859, in fx_codegen_and_compile
graph.run(*example_inputs)
File ".venv/lib/python3.11/site-packages/torch/_inductor/graph.py", line 780, in run
return super().run(*args)
^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/fx/interpreter.py", line 146, in run
self.env[node] = self.run_node(node)
^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1319, in run_node
result = super().run_node(n)
^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/fx/interpreter.py", line 203, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1024, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
File ".venv/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1021, in call_function
out = lowerings[target](*args, **kwargs) # type: ignore[index]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/lowering.py", line 361, in wrapped
out = decomp_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/lowering.py", line 2844, in empty_strided
pointwise.realize()
File ".venv/lib/python3.11/site-packages/torch/_inductor/ir.py", line 6282, in realize
return self.data.realize()
^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/ir.py", line 6367, in realize
layout=FlexibleLayout(
^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/ir.py", line 3254, in __init__
super().__init__(device, dtype, size, strides)
File ".venv/lib/python3.11/site-packages/torch/_inductor/ir.py", line 2900, in __init__
assert all(isinstance(s, (Expr, int)) for s in size)
torch._inductor.exc.LoweringException: AssertionError:
target: aten.empty_strided.default
args[0]: (1, s0, s1, s2)
args[1]: (s0*s1*s2, s1*s2, s2, 1)
kwargs: {'dtype': torch.float32, 'device': device(type='meta')}
```
This only happens when `foobar` is called twice inside `test` *and* when the size of the tensor in the second call is different.
### Versions
(The `collect_env.py` script doesn't work for me so I'm pasting the versions manually)
```
torch 2.5.1
triton 3.1.0
python 3.11.8
```
cc @chauhang @penguinwu @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng | true |
2,781,582,154 | [mps/inductor] Add support for exp(). | dcci | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 12 | MEMBER | inductor/test_silu now passes after this change.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,781,567,984 | [inductor] Add unbacked symints binding in ShapeProp | yushangdi | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 124 | CONTRIBUTOR | Summary: ShapeProp doesn't know how to propagate unbacked. Patch it up to propagate unbacked symints like PropagateUnbackedSymInts.
Test Plan:
```
buck run mode/dev-nosan fbcode//caffe2/test:fx -- -r test_shape_prop_unbacked_sym
```
Differential Revision: D68050073
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,781,522,863 | Fix broken YAML template after #144574 | huydhn | closed | [
"Merged",
"topic: not user facing",
"test-config/default"
] | 3 | CONTRIBUTOR | The YAML syntax is wrong and GitHub complains about it https://github.com/pytorch/pytorch/blob/main/.github/ISSUE_TEMPLATE/pt2-bug-report.yml | true |
2,781,504,005 | Modernize C++ code | cyyever | closed | [
"module: cpu",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: mobile",
"release notes: quantization",
"ciflow/mps",
"module: dynamo",
"ciflow/inductor"
] | 3 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,781,499,720 | Fix mis-categorization of clang++ as gcc. | cptspacemanspiff | closed | [
"triaged",
"open source",
"Stale",
"module: inductor",
"release notes: inductor"
] | 3 | NONE | Fixes #144601
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng | true |
2,781,498,679 | Compiling with clang fails in torch inductor, miscategorized as gcc | cptspacemanspiff | open | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 0 | NONE | ### 🐛 Describe the bug
In torch inductor, if the clang compiler is used on Linux, it may be miscategorized as gcc.
Specifically, in the current code below the regex matches against `clang++` and then reports that the compiler is gcc.
```python
def _is_gcc(cpp_compiler: str) -> bool:
if sys.platform == "darwin" and _is_apple_clang(cpp_compiler):
return False
return bool(re.search(r"(gcc|g\+\+)", cpp_compiler))
```
---
This causes issues with runtime builds because of compile-flag variations; specifically, I ran into the fact that clang (clang++18) does not support `-fno-tree-loop-vectorize`.
I am not sure whether clang is explicitly supported on Linux, but given that it is used on macOS, it works as long as it is detected properly.
---
As a fix, in the associated pull request, I simply call the existing `_is_clang` and return `False` if the compiler is detected as clang.
### Versions
```
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Ti
Nvidia driver version: 565.57.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 5 5600X 6-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 0
CPU max MHz: 5278.7100
CPU min MHz: 2200.0000
BogoMIPS: 8400.51
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualization: AMD-V
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 3 MiB (6 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] executorch==0.5.0a0+68b0864
[pip3] numpy==1.21.3
[pip3] torch==2.6.0.dev20241218+cpu
[pip3] torchao==0.8.0+git2e032c6b
[pip3] torchaudio==2.6.0.dev20241218+cpu
[pip3] torchsr==1.0.4
[pip3] torchtune==0.5.0
[pip3] torchvision==0.22.0.dev20241218+cpu
[pip3] triton==3.1.0
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng | true |
2,781,494,889 | [device_mesh] improve device selection logic | wanchaol | closed | [
"oncall: distributed",
"open source",
"Stale",
"release notes: distributed (dtensor)"
] | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144600
* #144599
As titled, this PR improves the device selection logic when the user did not
set the device before calling the DeviceMesh constructor. As a device
manager, DeviceMesh should try to set the device for users in a sensible
way.
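The kind of heuristic such a device manager typically applies can be sketched as follows (a hypothetical helper, not the actual DeviceMesh code — a real implementation would also respect an already-set device and local-rank environment variables):

```python
def select_device_index(global_rank: int, num_devices: int) -> int:
    # Map each rank to a local accelerator index, e.g. rank 5 on a host
    # with 4 GPUs lands on device 1.
    if num_devices <= 0:
        raise ValueError("no accelerators available")
    return global_rank % num_devices


print(select_device_index(5, 4))  # 1
```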
cc @H-Huang @awgu @kwen2501 @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,781,494,866 | Fix DTensorTestBase to barrier with device ids | wanchaol | closed | [
"oncall: distributed",
"open source",
"Stale",
"topic: not user facing"
] | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144600
* __->__ #144599
Get rid of the below annoying warnings when running the unit tests
```
test/distributed/test_device_mesh.py [rank1]:[W106 17:08:03.158159859 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.
[rank2]:[W106 17:08:03.216576760 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.
[rank0]:[W106 17:08:04.766730880 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.
[rank3]:[W106 17:08:04.773544169 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.
NCCL version 2.21.5+cuda12.1
```
cc @H-Huang @awgu @kwen2501 @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,781,479,393 | Actually remove example inputs from aoti_compile_and_package API | angelayi | closed | [
"fb-exported",
"Stale",
"module: inductor",
"ciflow/inductor"
] | 4 | CONTRIBUTOR | Test Plan: CI
Differential Revision: D67998953
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,781,477,674 | [Dynamo] Supports autograd.Function forward returns constant | yanboliang | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144597
Fixes #144142
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,781,458,424 | [Pipelining] Refactor common utils from test_pp_dp | wconstab | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: pipelining"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144734
* __->__ #144596
* #144352
Split test_pp_dp into pp_ddp and pp_fsdp so it's a bit more
concise and easier to add CP to the FSDP one.
Realized that the 'use_new_runtime' parametrization was not even being used;
removing it saves a bunch of test time. We should migrate schedules to
the new runtime and have them be covered that way. (And
test_schedule*.py are testing the new runtime too.)
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o | true |
2,781,449,239 | [ca] raise error message on AOT Autograd caching | xmfan | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 12 | MEMBER | FIXES https://github.com/pytorch/pytorch/issues/144175, bandaid
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144595
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @yf225 | true |
2,781,403,978 | [ROCm] Enable inductor-periodic testing for MI300 | BLOrange-AMD | closed | [
"module: rocm",
"open source",
"Merged",
"topic: not user facing",
"skip-pr-sanity-checks",
"module: dynamo",
"ciflow/inductor",
"rocm",
"ciflow/rocm",
"ciflow/inductor-periodic"
] | 18 | CONTRIBUTOR | cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @albanD @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,781,386,973 | dynamo: Don't crash when tracing a missing attr on a constant. | c00w | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 9 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144593
dynamo: Don't crash when tracing a missing attr on a constant.
Previously this threw `InternalTorchDynamoError: AttributeError: 'NoneType' object has no attribute 'max'`;
with this change, the bad call is skipped when tracing and a
normal AttributeError is thrown instead.
There are two questions that I would love reviewer comment on.
1) Is throwing unimplemented the right thing here? or should I throw
something like ObservedAttributeError
2) Do we need to worry about performance with this code? In particular,
should we just catch the exception? Or maybe cache the lookup result?
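A stdlib sketch (hypothetical helper name, not the actual dynamo code) of the intended behavior — surface a plain AttributeError rather than an internal tracer crash:

```python
def trace_getattr(value, name):
    """Hypothetical sketch: resolve an attribute on a traced constant,
    surfacing a plain AttributeError instead of an internal tracer crash."""
    try:
        return getattr(value, name)
    except AttributeError:
        # Here the tracer would graph-break / re-raise a user-visible error
        # rather than wrapping it in InternalTorchDynamoError.
        raise AttributeError(
            f"{type(value).__name__!r} object has no attribute {name!r}"
        ) from None
```

The try/except form also answers the performance question in spirit: the lookup is only expensive on the failure path.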
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,781,363,704 | [CUDA][TF32] Add some missing TF32 decorators to `test_nn.py` | eqy | closed | [
"module: cuda",
"open source",
"Merged",
"module: tf32",
"ciflow/trunk",
"topic: not user facing"
] | 3 | COLLABORATOR | Originally authored by @bilal2vec
cc @ptrblck @msaroufim @zasdfgbnm | true |
2,781,359,048 | [BE] Enable test_public_bindings on MacOS | malfet | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | I've tried it locally and it works. (One more reason to xfail rather than skip.) | true |
2,781,283,368 | [docs] Add 32-bit complex to the list of dtypes | antoinebrl | closed | [
"triaged",
"open source",
"Merged",
"Stale",
"ciflow/trunk",
"release notes: python_frontend",
"topic: docs"
] | 25 | CONTRIBUTOR | null | true |
2,781,263,288 | Enable grep_linter to use -a | clee2000 | closed | [
"Merged",
"Reverted",
"topic: not user facing",
"ci-no-td"
] | 6 | CONTRIBUTOR | Lintrunner can only apply changes (-a) if only one suggestion is made per file. The grep_linter makes a suggestion for every line it finds incorrect, so it creates multiple suggestions per file if there are multiple lines that it wants to change.
This sets the `line` parameter of the LintMessage to None for all of grep_linter, but I'm not sure if that entry did anything.
I'm not sure if enabling -a is the best idea, since it's currently used for tabs and the tab width might differ each time. I had one instance where running with -a caused the spacing to change. On the other hand, -a would already have worked if only one line was bad. | true |
2,781,252,749 | Avoid data-dependent errors in NJT tests via capture_scalar_outputs=True | jbschlosser | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144889
* __->__ #144588
* #144587
* #144586
Part of my BE project addressing NJT bugs surfaced via OpInfo tests.
There are several xfails related to data-dependent errors in torch.compile. This PR sets `torch._dynamo.config.capture_scalar_outputs=True` to avoid these, which tends to exercise unbacked SymInt logic and will require `torch._check()`-related fixes. | true |
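For reference, the knob this PR sets in the tests — a config fragment, assuming the standard `torch._dynamo.config` module:

```python
import torch._dynamo.config

# Capture scalar outputs (item() / unbacked SymInts) instead of graph-breaking,
# which is what exercises the torch._check()-related fixes mentioned above.
torch._dynamo.config.capture_scalar_outputs = True
```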
2,781,252,666 | Implement backward for NJT matmul | jbschlosser | closed | [
"module: nestedtensor",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"release notes: nested tensor"
] | 9 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144889
* #144588
* __->__ #144587
* #144586
Part of my BE project addressing NJT bugs surfaced via OpInfo tests.
This PR implements missing backward support for NJT matmul. Notably, for dense tensors, matmul dispatches to bmm. However, due to historical reasons related to NST, NJT handles matmul directly, and thus can't rely on the CompositeImplicit impl of matmul to get the derivative formula.
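A minimal stdlib sketch (plain nested lists, not NJTs or the actual PyTorch code) of the dense matmul derivative the NJT backward has to reproduce per jagged component: `grad_A = grad_out @ B^T`, `grad_B = A^T @ grad_out`.

```python
def matmul(X, Y):
    # Naive (m, k) @ (k, n) matrix product on nested lists.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def matmul_backward(A, B, grad_out):
    # Standard matmul derivative: dA = dOut @ B^T, dB = A^T @ dOut.
    return matmul(grad_out, transpose(B)), matmul(transpose(A), grad_out)
```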
cc @cpuhrsch @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ | true |
2,781,252,607 | Fix NJT fill.Scalar for contiguous inputs | jbschlosser | closed | [
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"release notes: nested tensor"
] | 9 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144889
* #144588
* #144587
* __->__ #144586
Part of my BE project addressing NJT bugs surfaced via OpInfo tests.
This PR implements the missing `fill.Scalar` support, which works fine for contiguous inputs, but there is still some AOTAutograd debugging required to handle non-contiguous transposed NJTs. | true |
2,781,252,535 | Fix NJT frexp() to handle both outputs | jbschlosser | closed | [
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"release notes: nested tensor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144889
* #144588
* #144587
* #144586
* __->__ #144585
* #144584
* #144583
* #144582
Part of my BE project addressing NJT bugs surfaced via OpInfo tests.
Before this PR, `frexp()` for NJT was handled via the unary pointwise fallback. The op returns a tuple, however, and the fallback doesn't handle that. This PR defines an explicit impl for `frexp()` that wraps both returned `(mantissa, exponent)` as NJTs. | true |
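The `(mantissa, exponent)` contract being wrapped is the same one Python's stdlib `math.frexp` exposes, which is why a single-output pointwise fallback cannot handle it:

```python
import math

# frexp decomposes x into (mantissa, exponent) with x == mantissa * 2**exponent
# and mantissa in [0.5, 1) for positive x; both outputs must be wrapped.
mantissa, exponent = math.frexp(8.0)
assert (mantissa, exponent) == (0.5, 4)
assert mantissa * 2 ** exponent == 8.0
```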
2,781,252,483 | Support NJT chunk() backward on batch dim | jbschlosser | closed | [
"Merged",
"ciflow/trunk",
"topic: improvements",
"release notes: nested tensor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144889
* #144588
* #144587
* #144586
* #144585
* __->__ #144584
* #144583
* #144582
Part of my BE project addressing NJT bugs surfaced via OpInfo tests.
Implements `chunk()` backward on the batch dim, which was left out before. This PR unbinds the components and invokes `copy_()` on these to pass along the appropriate gradients. | true |
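A stdlib sketch of the idea (flat lists standing in for the batch dim; the real impl unbinds NJT components and uses `copy_()` into the gradient slices):

```python
def chunk(xs, n):
    # Split xs into n chunks of (ceil) equal size, like tensor.chunk(n, dim=0).
    k = (len(xs) + n - 1) // n
    return [xs[i:i + k] for i in range(0, len(xs), k)]

def chunk_backward(grad_chunks):
    # Backward of chunk: concatenate per-chunk gradients back into the
    # input-shaped gradient (mirrors copy_() into each slice).
    out = []
    for g in grad_chunks:
        out.extend(g)
    return out
```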
2,781,252,108 | Fix NJT min / max backward() for non-ragged reductions | jbschlosser | closed | [
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"release notes: nested tensor"
] | 7 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144889
* #144588
* #144587
* #144586
* #144585
* #144584
* __->__ #144583
* #144582
Part of my BE project addressing NJT bugs surfaced via OpInfo tests.
`value_selecting_reduction_backward()` is used in the backward for min / max, so this PR implements it for NJT. Notably, this isn't enough for reducing over the ragged dim, since that results in a dense tensor and thus NJT's torch_dispatch will not be called for this op. We need factory function support for nested ints to fix that case. | true |
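A stdlib sketch of what this backward computes for a max reduction over a flat list (the real op also handles dims, keepdim, and the jagged layout):

```python
def max_backward(xs, grad_out):
    # Route the upstream gradient to the argmax position, zeros elsewhere --
    # the essence of value_selecting_reduction_backward for max.
    idx = max(range(len(xs)), key=xs.__getitem__)
    return [grad_out if i == idx else 0.0 for i in range(len(xs))]
```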
2,781,251,996 | Fix NJT OpInfo entry for nn.functional.prelu | jbschlosser | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144889
* #144588
* #144587
* #144586
* #144585
* #144584
* #144583
* __->__ #144582
Part of my BE project addressing NJT bugs surfaced via OpInfo tests.
The OpInfo entry for prelu was wrong before this PR; `weight` needs to be passed as well. The op isn't fully implemented yet. | true |
2,781,204,499 | [MPSInductor] Speedup maximum/minumum ops | malfet | closed | [
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | By relying on the fact that if either `a` or `b` (or both) is NaN, then `a + b` will also be NaN.
I.e. it replaces
```metal
auto tmp2 = metal::any(metal::isnan(static_cast<decltype(tmp0+tmp1)>(tmp0))) | metal::any(metal::isnan(static_cast<decltype(tmp0+tmp1)>(tmp1))) ? static_cast<decltype(tmp0+tmp1)>(NAN) : metal::max(static_cast<decltype(tmp0+tmp1)>(tmp0), static_cast<decltype(tmp0+tmp1)>(tmp1));
```
with
```metal
auto tmp2 = metal::isnan(tmp0 + tmp1) ? tmp0 + tmp1 : metal::max(static_cast<decltype(tmp0+tmp1)>(tmp0), static_cast<decltype(tmp0+tmp1)>(tmp1));
```
which according to MetalProfiler takes fewer instructions:
<img width="520" alt="image" src="https://github.com/user-attachments/assets/54659392-012b-453e-9c02-c3c5f332074a" />
vs
<img width="1031" alt="image" src="https://github.com/user-attachments/assets/55fcfa78-1ea5-4b0a-8154-d79b3e3cc400" />
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,781,150,261 | `torch._foreach_mul` does not support autograd | ad8e | open | [
"module: autograd",
"triaged",
"actionable",
"module: mta"
] | 6 | CONTRIBUTOR | ### 📚 The doc issue
This is just a note for the eventual foreach docs. If someone has the same error, they can arrive here through search.
```
File "/usr/local/lib/python3.10/dist-packages/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "/usr/local/lib/python3.10/dist-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: inconsistent range for TensorList output
```
I don't expect foreach ops to support autograd.
(Or maybe I'm wrong and my code has an issue, and foreach is intended to support autograd?)
### Suggest a potential alternative/fix
Nothing to fix for now.
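Until/unless foreach ops differentiate, a hedged workaround is a plain per-tensor loop, which autograd does handle (sketched here on floats; with real tensors the comprehension builds autograd-tracked outputs):

```python
def foreach_mul_fallback(xs, ys):
    # Unfused equivalent of torch._foreach_mul(xs, ys): one mul per element.
    return [x * y for x, y in zip(xs, ys)]
```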
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @crcrpar @mcarilli @janeyx99 | true |
2,781,149,204 | [aotd] Guess tangents stride as output strides | IvanKobzarev | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144579
AOTDispatch, while preparing the AOT backward graph, does not know the real tangents that the user will pass when running backward.
AOTD guesses the tangents. Before, we guessed that the memory format of the tangents would match the memory format of the corresponding outputs; if the tangents specified at runtime were not in the memory format we guessed during compilation, AOTD coerced (copied) them to the guessed memory_format.
But as Horace found, there are popular use cases where the outputs of the compiled region are in a specific memory_format, e.g. a 4D tensor with dims 1 and 2 transposed:
https://github.com/karpathy/nanoGPT/blob/master/model.py#L57
This PR changes the logic so that AOTD expects the same "strideness" of tangents as outputs. As a result, it avoids coercion for the transposed-dims case.
Limitations:
We keep guessing memory_format for:
1/ Dynamic shapes (needs more changes)
2/ Tensor subclasses (needs more changes)
Other changes:
test_torchinductor was always creating contiguous tangents via `torch.randn()`, changing them to be `torch.randn_like()` to compare computation with the same strideness.
(E.g. for cuda float16 strideness affects numerics for fft ops).
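For reference, a stdlib sketch of what "strideness" means here — contiguous strides for a shape, and how transposing dims permutes them (illustrative helper, not AOTD code):

```python
def contiguous_strides(shape):
    # Row-major strides: last dim has stride 1, each earlier dim the product
    # of the sizes after it. Transposing dims permutes these strides, giving
    # the non-contiguous layout the new tangent guess preserves.
    strides, acc = [], 1
    for s in reversed(shape):
        strides.append(acc)
        acc *= s
    return tuple(reversed(strides))
```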
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,781,144,135 | [CI] Add Triton 3.13t build | pytorchbot | closed | [
"open source",
"topic: not user facing"
] | 1 | COLLABORATOR | By just extending the matrix and invoking script with appropriate cpython runtime | true |
2,781,044,098 | lintrunner has stale errors | bobrenjc93 | closed | [
"module: lint",
"triaged",
"module: devx"
] | 7 | CONTRIBUTOR | ### 🐛 Describe the bug
Sometimes lintrunner will have stale errors even though the errors no longer exist (e.g. if you switch back to a clean main commit). Here's a small repro:
```
ghstack checkout https://github.com/pytorch/pytorch/pull/144263
lintrunner -a
git checkout --detach origin/main
lintrunner -a
```
Notice the errors are still there even though we are on a clean main
```
(/home/bobren/local/b/pytorch-env) [15:02] devgpu009:/home/bobren/local/b/pytorch lintrunner -a
Warning: Could not find a lintrunner config at: '.lintrunner.private.toml'. Continuing without using configuration file.
FLAKE8 success!
CLANGFORMAT success!
MYPY failure
MYPYSTRICT success!
CLANGTIDY success!
TYPEIGNORE success!
NOQA success!
TYPENOSKIP success!
NATIVEFUNCTIONS success!
GHA success!
NEWLINE success!
SPACES success!
TABS success!
C10_UNUSED success!
INCLUDE success!
C10_NODISCARD success!
ERROR_PRONE_ISINSTANCE success!
PYBIND11_INCLUDE success!
PYBIND11_SPECIALIZATION success!
EXEC success!
PYPIDEP success!
CUBINCLUDE success!
ROOT_LOGGING success!
RAWCUDA success!
RAWCUDADEVICE success!
DEPLOY_DETECTION success!
CMAKE success!
ACTIONLINT success!
SHELLCHECK success!
TESTOWNERS success!
CALL_ONCE success!
TEST_HAS_MAIN success!
WORKFLOWSYNC success!
ONCE_FLAG success!
CONTEXT_DECORATOR success!
NO_WORKFLOWS_ON_FORK success!
PYFMT success!
BAZEL_LINTER success!
COPYRIGHT success!
LINTRUNNER_VERSION success!
RUFF success!
MERGE_CONFLICTLESS_CSV success!
META_NO_CREATE_UNBACKED success!
ATEN_CPU_GPU_AGNOSTIC success!
IMPORT_LINTER success!
SET_LINTER success!
DOCSTRING_LINTER success!
>>> Lint for torch/_functorch/_activation_checkpointing/graph_info_provider.py:
Error (MYPY) [attr-defined]
Module has no attribute "viridis"
276 | vmin=min(self.get_knapsack_memory_input()),
277 | vmax=max(self.get_knapsack_memory_input()),
278 | )
>>> 279 | cmap = cm.viridis
280 |
281 | # Assign colors based on memory
282 | node_colors = [
>>> Lint for torch/fx/experimental/proxy_tensor.py:
Error (MYPY) [attr-defined]
"Thunk[Proxy]" has no attribute "proxy"
1085 |
1086 | def unwrap_proxy(self, e: T) -> object:
1087 | if isinstance(e, Tensor):
>>> 1088 | return get_proxy_slot(e, self, e, lambda x: x.proxy)
1089 | elif isinstance(e, py_sym_types):
1090 | return get_proxy_slot(e, self, e, lambda e: e.force())
1091 | elif isinstance(e, _AnyScriptObject):
>>> Lint for torch/testing/_internal/common_utils.py:
Error (MYPY) [import-not-found]
Cannot find implementation or library stub for module named "pytest"
101 |import torch.utils._pytree as pytree
102 |from torch.utils import cpp_extension
103 |try:
>>> 104 | import pytest
105 | has_pytest = True
106 |except ImportError:
107 | has_pytest = False
Successfully applied all patches.
(/home/bobren/local/b/pytorch-env) [15:02] devgpu009:/home/bobren/local/b/pytorch git stash
Saved working directory and index state WIP on (no branch): 5c94ea34c52 Migrate from Tuple -> tuple in torch/_functorch
(/home/bobren/local/b/pytorch-env) [15:02] devgpu009:/home/bobren/local/b/pytorch git checkout --detach origin/main
Previous HEAD position was 5c94ea34c52 Migrate from Tuple -> tuple in torch/_functorch
HEAD is now at c7f12a4a7b8 [MPSInductor] Speedup maximum/minumum ops (#144581)
(/home/bobren/local/b/pytorch-env) [15:02] devgpu009:/home/bobren/local/b/pytorch lintrunner -a
Warning: Could not find a lintrunner config at: '.lintrunner.private.toml'. Continuing without using configuration file.
FLAKE8 success!
CLANGFORMAT success!
MYPY failure
MYPYSTRICT success!
CLANGTIDY success!
TYPENOSKIP success!
TYPEIGNORE success!
NOQA success!
NATIVEFUNCTIONS success!
NEWLINE success!
GHA success!
TABS success!
SPACES success!
C10_UNUSED success!
C10_NODISCARD success!
PYBIND11_INCLUDE success!
INCLUDE success!
PYBIND11_SPECIALIZATION success!
ERROR_PRONE_ISINSTANCE success!
EXEC success!
RAWCUDA success!
DEPLOY_DETECTION success!
RAWCUDADEVICE success!
CUBINCLUDE success!
PYPIDEP success!
ROOT_LOGGING success!
CMAKE success!
SHELLCHECK success!
ACTIONLINT success!
TESTOWNERS success!
CONTEXT_DECORATOR success!
TEST_HAS_MAIN success!
CALL_ONCE success!
ONCE_FLAG success!
WORKFLOWSYNC success!
NO_WORKFLOWS_ON_FORK success!
PYFMT success!
COPYRIGHT success!
BAZEL_LINTER success!
RUFF success!
LINTRUNNER_VERSION success!
MERGE_CONFLICTLESS_CSV success!
META_NO_CREATE_UNBACKED success!
ATEN_CPU_GPU_AGNOSTIC success!
DOCSTRING_LINTER success!
IMPORT_LINTER success!
SET_LINTER success!
>>> Lint for torch/_functorch/_activation_checkpointing/graph_info_provider.py:
Error (MYPY) [attr-defined]
Module has no attribute "viridis"
276 | vmin=min(self.get_knapsack_memory_input()),
277 | vmax=max(self.get_knapsack_memory_input()),
278 | )
>>> 279 | cmap = cm.viridis
280 |
281 | # Assign colors based on memory
282 | node_colors = [
>>> Lint for torch/fx/experimental/proxy_tensor.py:
Error (MYPY) [attr-defined]
"Thunk[Proxy]" has no attribute "proxy"
1085 |
1086 | def unwrap_proxy(self, e: T) -> object:
1087 | if isinstance(e, Tensor):
>>> 1088 | return get_proxy_slot(e, self, e, lambda x: x.proxy)
1089 | elif isinstance(e, py_sym_types):
1090 | return get_proxy_slot(e, self, e, lambda e: e.force())
1091 | elif isinstance(e, _AnyScriptObject):
>>> Lint for torch/testing/_internal/common_utils.py:
Error (MYPY) [import-not-found]
Cannot find implementation or library stub for module named "pytest"
100 |import torch.utils._pytree as pytree
101 |from torch.utils import cpp_extension
102 |try:
>>> 103 | import pytest
104 | has_pytest = True
105 |except ImportError:
106 | has_pytest = False
Successfully applied all patches.
```
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.34
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk15_zion_2630_gf27365f948db-x86_64-with-glibc2.34
Is CUDA available: N/A
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA H100
GPU 1: NVIDIA H100
GPU 2: NVIDIA H100
GPU 3: NVIDIA H100
GPU 4: NVIDIA H100
GPU 5: NVIDIA H100
GPU 6: NVIDIA H100
GPU 7: NVIDIA H100
Nvidia driver version: 535.154.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 76%
CPU max MHz: 3707.8120
CPU min MHz: 1500.0000
BogoMIPS: 4792.43
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 6 MiB (192 instances)
L1i cache: 6 MiB (192 instances)
L2 cache: 192 MiB (192 instances)
L3 cache: 768 MiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Vulnerable: eIBRS with unprivileged eBPF
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] torch==2.6.0a0+git2966fb3
[pip3] torchaudio==2.5.0a0+332760d
[pip3] torchdata==0.10.0a0+77bf3d1
[pip3] torchtext==0.17.0a0+1d4ce73
[pip3] torchvision==0.20.0a0+b33aef4
[conda] blas 1.0 mkl
[conda] magma-cuda116 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2023.1.0 h06a4308_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.11 py310h5eee18b_0
[conda] mkl_random 1.2.8 py310h1128e8f_0
[conda] numpy 1.26.4 py310h5f9d8c6_0
[conda] numpy-base 1.26.4 py310hb5e798b_0
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git35c6c7c6 pypi_0 pypi
[conda] torch 2.6.0a0+git2966fb3 dev_0 <develop>
[conda] torchaudio 2.5.0a0+332760d dev_0 <develop>
[conda] torchdata 0.10.0a0+77bf3d1 pypi_0 pypi
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchtext 0.17.0a0+1d4ce73 dev_0 <develop>
[conda] torchvision 0.20.0a0+b33aef4 dev_0 <develop>
cc @ZainRizvi @kit1980 @huydhn @clee2000 | true |
2,781,031,930 | [ez] add lint commits to .git-blame-ignore-revs | PaliC | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 19 | CONTRIBUTOR | Test Plan: Ran git blame on .lintrunner.toml and github's linter (+ manual testing) shows all commits exist | true |
2,780,949,402 | Binary builds Docker images - remove cuda 12.1 | atalman | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8 | CONTRIBUTOR | Remove cuda 12.1 from manylinux, libtoch and almalinux builds
| true |
2,780,922,516 | Request English for Issues | PaliC | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 14 | CONTRIBUTOR | null | true |
2,780,845,864 | [ROCm] Implemented dropout usage for RNN with MIOpen backend | iupaikov-amd | closed | [
"module: rocm",
"triaged",
"open source",
"Merged",
"Reverted",
"topic: not user facing",
"module: inductor",
"ciflow/rocm",
"ci-no-td"
] | 36 | CONTRIBUTOR | This PR fixes https://github.com/pytorch/pytorch/issues/107183 for ROCm.
Implemented the usage of the new RNN descriptor for the MIOpen backend, which takes the dropout rate into account via a dropout descriptor. This fixes the associated test_RNN_dropout_state test.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @ColinPeppler @desertfire | true |
2,780,837,772 | [AOTI] Support _int_mm | desertfire | closed | [
"Merged",
"ciflow/trunk",
"topic: improvements",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144571
Summary: Add _int_mm to the C shim, to resolve a torchao issue, https://github.com/pytorch/ao/pull/1531#issue-2776827015
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @chauhang @aakhundov @BoyuanFeng
Differential Revision: [D68030385](https://our.internmc.facebook.com/intern/diff/D68030385) | true |
2,780,833,864 | [MPS] Fix conv backward for channels last (cont) | pytorchbot | closed | [
"open source",
"release notes: mps",
"ciflow/mps"
] | 1 | COLLABORATOR | This is a continuation of https://github.com/pytorch/pytorch/issues/140902 but extends the same logic to the input.
Looks like the existing channels-last logic produced incorrect results on pre-MacOS-15 versions and fails on MacOS-15, so removing it seems like the right idea.
Fixes https://github.com/pytorch/pytorch/issues/142344 | true |
2,780,812,006 | [BE][CI] bump `ruff` to 0.9.0: string quote styles | XuehaiPan | closed | [
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: releng",
"topic: not user facing",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 4 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145606
* #144546
* __->__ #144569
* #146509
Reference: https://docs.astral.sh/ruff/formatter/#f-string-formatting
- Change the outer quotes to double quotes for nested f-strings
```diff
- f'{", ".join(args)}'
+ f"{', '.join(args)}"
```
- Change the inner quotes to double quotes for triple f-strings
```diff
string = """
- {', '.join(args)}
+ {", ".join(args)}
"""
```
- Join implicitly concatenated strings
```diff
- string = "short string " "short string " f"{var}"
+ string = f"short string short string {var}"
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,780,784,988 | Disable scuba logging for autotuning | masnesral | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144568
Summary: the compile IDs are currently null, which is confusing. Turn it off until we have a solution.
Test Plan: https://fburl.com/scuba/dynamo_compile/sandbox/g2d2g5xs
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,780,758,400 | torch.accelerator.is_available() raise RuntimeError if no available CUDA/XPU devices | guangyey | closed | [
"high priority",
"triaged",
"module: regression",
"bug",
"module: accelerator"
] | 5 | COLLABORATOR | ### 🐛 Describe the bug
```python
>>> import torch
>>> torch.accelerator.is_available()
/home/guangyey/repos/stock-pytorch/torch/xpu/__init__.py:120: UserWarning: XPU device count is zero! (Triggered internally at /home/guangyey/repos/stock-pytorch/c10/xpu/XPUFunctions.cpp:117.)
torch._C._xpu_init()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/guangyey/repos/stock-pytorch/torch/accelerator/__init__.py", line 46, in is_available
return device_count() > 0
File "/home/guangyey/repos/stock-pytorch/torch/accelerator/__init__.py", line 33, in device_count
return torch._C._accelerator_deviceCount()
File "/home/guangyey/repos/stock-pytorch/torch/xpu/__init__.py", line 120, in _lazy_init
torch._C._xpu_init()
RuntimeError: No XPU devices are available.
```
The root cause is that https://github.com/pytorch/pytorch/pull/144368 changed the current accelerator detection from runtime to compile time. The call stack now follows this flow `torch.accelerator.device_count` -> [device_lazy_init](https://github.com/pytorch/pytorch/blob/7a93a58b3c9bd528b86d76aaa924d7ad43be0864/torch/csrc/DeviceAccelerator.cpp#L16) -> [lazyInitDevice](https://github.com/pytorch/pytorch/blob/7a93a58b3c9bd528b86d76aaa924d7ad43be0864/torch/csrc/xpu/Module.cpp#L412) -> [device_count_ensure_non_zero](https://github.com/pytorch/pytorch/blob/7a93a58b3c9bd528b86d76aaa924d7ad43be0864/aten/src/ATen/xpu/detail/XPUHooks.cpp#L14)
As a result, a RuntimeError is raised if a user runs a PyTorch wheel built with XPU on a machine without any available XPU devices. The same issue applies to CUDA as well.
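A hypothetical user-side workaround (illustrative helper name) until the lazy-init behavior is fixed: treat initialization failure as "no devices" so availability checks never raise on device-free machines.

```python
def safe_is_available(device_count_fn):
    # Wrap torch.accelerator.device_count-style callables: a RuntimeError
    # from lazy device init on a device-free machine means "not available".
    try:
        return device_count_fn() > 0
    except RuntimeError:
        return False
```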
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+gitcfd08f8
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.35
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.19.0-32-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 42 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i9-12900
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU max MHz: 5100.0000
CPU min MHz: 800.0000
BogoMIPS: 4838.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 14 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] optree==0.13.0
[pip3] torch==2.7.0a0+gitcfd08f8
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.7.0a0+gitcfd08f8 dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @albanD @EikanWang | true |
2,780,745,267 | Add a docstring to build.sh | zxiiro | closed | [
"open source",
"topic: not user facing",
"test-config/default"
] | 2 | COLLABORATOR | Add a little blurb to explain what build.sh is doing.
| true |
2,780,730,518 | Release validations: MacOS Rc2.6 failing with PyTorch must be built with OpenMP support | atalman | closed | [] | 1 | CONTRIBUTOR | ### 🐛 Describe the bug
Here is the workflow:
https://github.com/pytorch/test-infra/actions/runs/12711717187/job/35435670774
A fix was merged: https://github.com/pytorch/pytorch/pull/143133 but somehow this error is still showing up on MacOS RC 2.6 (nightlies are fine).
Error Log:
```
+ eval pip3 install --force-reinstall torch --index-url https://download.pytorch.org/whl/test/cpu
+++ pip3 install --force-reinstall torch --index-url https://download.pytorch.org/whl/test/cpu
Looking in indexes: https://download.pytorch.org/whl/test/cpu
Collecting torch
Downloading https://download.pytorch.org/whl/test/cpu/torch-2.6.0-cp310-none-macosx_11_0_arm64.whl (66.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 66.3/66.3 MB 43.0 MB/s eta 0:00:00
Collecting filelock (from torch)
Downloading https://download.pytorch.org/whl/test/filelock-3.13.1-py3-none-any.whl (11 kB)
Collecting typing-extensions>=4.10.0 (from torch)
Downloading https://download.pytorch.org/whl/test/typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Collecting networkx (from torch)
Downloading https://download.pytorch.org/whl/test/networkx-3.3-py3-none-any.whl (1.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 53.6 MB/s eta 0:00:00
Collecting jinja2 (from torch)
Downloading https://download.pytorch.org/whl/test/Jinja2-3.1.4-py3-none-any.whl (133 kB)
Collecting fsspec (from torch)
Downloading https://download.pytorch.org/whl/test/fsspec-2024.6.1-py3-none-any.whl (177 kB)
Collecting sympy==1.13.1 (from torch)
Downloading https://download.pytorch.org/whl/test/sympy-1.13.1-py3-none-any.whl (6.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.2/6.2 MB 80.2 MB/s eta 0:00:00
Collecting mpmath<1.4,>=1.1.0 (from sympy==1.13.1->torch)
Downloading https://download.pytorch.org/whl/test/mpmath-1.3.0-py3-none-any.whl (536 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 536.2/536.2 kB 19.1 MB/s eta 0:00:00
Collecting MarkupSafe>=2.0 (from jinja2->torch)
Downloading https://download.pytorch.org/whl/test/MarkupSafe-2.1.5-cp310-cp310-macosx_10_9_universal2.whl (18 kB)
Installing collected packages: mpmath, typing-extensions, sympy, networkx, MarkupSafe, fsspec, filelock, jinja2, torch
Successfully installed MarkupSafe-2.1.5 filelock-3.13.1 fsspec-2024.6.1 jinja2-3.1.4 mpmath-1.3.0 networkx-3.3 sympy-1.13.1 torch-2.6.0 typing-extensions-4.12.2
++ pushd /Users/ec2-user/runner/_work/test-infra/test-infra/pytorch/pytorch/.ci/pytorch/
~/runner/_work/test-infra/test-infra/pytorch/pytorch/.ci/pytorch ~/runner/_work/test-infra/test-infra/pytorch/pytorch
++ [[ '' == \1\2\.\6 ]]
++ [[ cpu == \x\p\u ]]
++ [[ cpu == \r\o\c\m ]]
++ [[ macos-arm64 == \l\i\n\u\x ]]
++ [[ '' == \t\r\u\e ]]
++ python3 ./smoke_test/smoke_test.py --package torchonly
torch: 2.6.0
ATen/Parallel:
at::get_num_threads() : 4
at::get_num_interop_threads() : 8
OpenMP not found
MKL not found
MKLDNN not found
std::thread::hardware_concurrency() : 8
Environment variables:
OMP_NUM_THREADS : [not set]
MKL_NUM_THREADS : [not set]
ATen parallel backend: native thread pool
Traceback (most recent call last):
File "/Users/ec2-user/runner/_work/test-infra/test-infra/pytorch/pytorch/.ci/pytorch/./smoke_test/smoke_test.py", line 394, in <module>
main()
File "/Users/ec2-user/runner/_work/test-infra/test-infra/pytorch/pytorch/.ci/pytorch/./smoke_test/smoke_test.py", line 376, in main
raise RuntimeError("PyTorch must be built with OpenMP support")
RuntimeError: PyTorch must be built with OpenMP support
Error: Process completed with exit code 1.
```
### Versions
2.6.0 | true |
2,780,666,444 | Tabulate not official dependency of PyTorch but needed by features like FlopCounterMode | zou3519 | open | [
"triaged",
"dependency issue",
"module: flop counter"
] | 0 | CONTRIBUTOR | ```
Traceback (most recent call last):
File "/home/rzou/dev/ocu11/tutorials/recipes_source/torch_compile_user_defined_triton_kernel_tutorial.py", line 338, in <module>
with FlopCounterMode() as flop_counter:
File "/home/rzou/dev/ocu11/pt-ocu11/torch/utils/flop_counter.py", line 726, in __exit__
print(self.get_table(self.depth))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rzou/dev/ocu11/pt-ocu11/torch/utils/flop_counter.py", line 658, in get_table
import tabulate
ModuleNotFoundError: No module named 'tabulate'
``` | true |
2,780,604,811 | Multiple tests not run / run as no-ops by `run_test.py` | Flamefire | open | [
"high priority",
"module: tests",
"triaged"
] | 3 | COLLABORATOR | ### 🐛 Describe the bug
I noticed this while working on https://github.com/pytorch/pytorch/issues/126523
Basically the test suite runner `run_test.py` runs each test file separately or in parallel. It boils down to e.g. executing: `python -bb distributed/optim/test_apply_optimizer_in_backward.py --shard-id=1 --num-shards=1 -v -vv -rfEX -p no:xdist --use-pytest -x --reruns=2`
However, for some tests this effectively does nothing. For example, https://github.com/pytorch/pytorch/blob/main/test/distributed/optim/test_apply_optimizer_in_backward.py does not contain any code that is executed on import. The only way the tests would be executed is by running the file with `pytest` instead of `python`, or by calling `common_utils.run_tests` as is done in most tests.
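To illustrate, here is a minimal stdlib sketch of the missing pattern, using `unittest.main()` as a stand-in for `common_utils.run_tests()` (the effect under `python file.py` is the same):

```python
import unittest


class TestExample(unittest.TestCase):
    # Collected and run by pytest automatically; under plain
    # `python file.py` it only runs because of the guard below.
    def test_addition(self):
        self.assertEqual(1 + 1, 2)


if __name__ == "__main__":
    # Without this guard, `python file.py` merely imports the module and
    # exits without running any tests -- the no-op behavior described above.
    # PyTorch test files call torch.testing._internal.common_utils.run_tests()
    # here instead of unittest.main().
    unittest.main(argv=["prog"], exit=False)
```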
I can't imagine this is intentional, is it?
It also applies to e.g. https://github.com/pytorch/pytorch/blob/main/test/distributed/optim/test_named_optimizer, https://github.com/pytorch/pytorch/blob/main/tools/test/test_executorch_signatures.py and a few others
Are the tests intended to be run with pytest instead of `run_test.py` now? It looks like some tests are not compatible with pytest (judging from some code in `run_test.py`).
I also couldn't find how the tests are executed on CI, so I could replicate that on our side.
### Versions
PyTorch 2.3.0 - main
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @mruberry @ZainRizvi @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,780,595,585 | docs: get rid of copyright year | kuraga | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4 | CONTRIBUTOR | Fixes https://github.com/pytorch/pytorch/pull/144153#pullrequestreview-2540418083 | true |
2,780,545,749 | [MPS] Expose `MPSProfiler::start/stopCapture` to Python | malfet | closed | [
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps"
] | 5 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144561
I.e. when the `MTL_CAPTURE_ENABLED` environment variable is set to 1, one should be able to wrap the code with `torch.mps.profiler.metal_capture` to generate a gputrace for shaders invoked inside the context manager.
For example, code below:
```python
import torch
import os
def foo(x):
return x[:,::2].sin() + x[:, 1::2].cos()
if __name__ == "__main__":
os.environ["MTL_CAPTURE_ENABLED"] = "1"
x = torch.rand(32, 1024, device="mps")
with torch.mps.profiler.metal_capture("compiled_shader"):
torch.compile(foo)(x)
```
should capture the execution of a `torch.compile` generated shader
<img width="734" alt="image" src="https://github.com/user-attachments/assets/718ff64e-103b-4b11-b66c-c89cfc770b5d" />
| true |
2,780,545,596 | [MPS] Make MPSProfiler usable from C++ | malfet | closed | [
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144561
* __->__ #144560
* #144559
By moving `buildTensorString` implementation away from the header | true |
2,780,479,565 | [MPS] Make sure that MPSStream is usable from C++ | malfet | closed | [
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144561
* #144560
* __->__ #144559
It's intended to be, but this was never tested.
This change introduces no new functionality, just properly isolates ObjC implementation details from the potential C++ caller | true |
2,780,462,050 | Extend bmm tiling to work up to 2^32 elem in any single output dim | pytorchbot | closed | [
"open source",
"release notes: mps",
"ciflow/mps"
] | 1 | COLLABORATOR | The previous tiling implementation worked for up to 2^32 total elements per single batch entry. This extends the functionality to support the dimensions encountered in ComfyUI (output shape: 1,72250,72250).
Fixes #141909 | true |
2,780,349,466 | [BE][PYFMT] remove `black`: finish `black -> ruff format` migration | XuehaiPan | open | [
"open source",
"better-engineering",
"Stale",
"ciflow/trunk",
"topic: not user facing",
"no-stale",
"suppress-bc-linter"
] | 3 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144557
* #144556
* #148186
* #144555
* #144554
* #148185
* #144553
* #144552
* #144551
* #144548
| true |
2,780,348,717 | [BE][PYFMT] migrate PYFMT for `test/[i-z]*/` to `ruff format` | XuehaiPan | open | [
"oncall: jit",
"open source",
"release notes: quantization",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144556
* #148186
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,780,348,255 | [BE][PYFMT] migrate PYFMT for `test/[a-h]*/` to `ruff format` | XuehaiPan | open | [
"oncall: distributed",
"open source",
"topic: not user facing",
"fx",
"module: dynamo",
"ciflow/inductor",
"release notes: distributed (checkpoint)"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144555
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 | true |
2,780,347,541 | [BE][PYFMT] migrate PYFMT for `torch/[a-c]*/` to `ruff format` | XuehaiPan | open | [
"oncall: jit",
"open source",
"module: amp (automated mixed precision)",
"NNC",
"Stale",
"release notes: quantization",
"topic: not user facing",
"fx",
"release notes: AO frontend"
] | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144554
* #148185
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @mcarilli @ptrblck @leslie-fang-intel @ezyang @SherlockNoMad | true |
2,780,347,111 | [BE][PYFMT] migrate PYFMT for `torch/[e-n]*/` to `ruff format` | XuehaiPan | open | [
"oncall: jit",
"open source",
"topic: not user facing",
"fx",
"ciflow/inductor",
"release notes: export"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144553
* #144552
* #144548
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @SherlockNoMad | true |
2,780,346,695 | [BE][PYFMT] migrate PYFMT for `torch/[p-z]*/` to `ruff format` | XuehaiPan | open | [
"module: cpu",
"open source",
"release notes: quantization",
"topic: not user facing",
"fx"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144553
* __->__ #144552
* #144548
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ezyang @SherlockNoMad @EikanWang @wenzhe-nrv | true |
2,780,345,949 | [BE][PYFMT] migrate PYFMT for `torch/_[a-h]*/` to `ruff format` | XuehaiPan | open | [
"open source",
"Stale",
"topic: not user facing",
"ciflow/inductor",
"release notes: export"
] | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144551
| true |
2,780,345,542 | [BE][PYFMT] migrate PYFMT for `torch._inductor` to `ruff format` | XuehaiPan | closed | [
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps",
"skip-pr-sanity-checks",
"module: inductor",
"ciflow/inductor"
] | 15 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144550
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @albanD @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @ColinPeppler @desertfire | true |
2,780,345,131 | [BE][PYFMT] migrate PYFMT for `torch._dynamo` to `ruff format` | XuehaiPan | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 8 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144550
* __->__ #144549
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @xmfan @yf225 | true |
2,780,344,778 | [BE][PYFMT] migrate PYFMT for `{torch,test}/{nn,optim}/**` to `ruff format` | XuehaiPan | open | [
"oncall: distributed",
"open source",
"release notes: quantization",
"topic: not user facing",
"ciflow/inductor",
"release notes: distributed (checkpoint)"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144553
* #144552
* __->__ #144548
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 | true |
2,780,344,442 | [BE][PYFMT] migrate PYFMT for `torch.{distributed,distributions}` to `ruff format` | XuehaiPan | closed | [
"oncall: distributed",
"oncall: jit",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (sharded)",
"topic: not user facing",
"ciflow/inductor"
] | 9 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144548
* __->__ #144547
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 | true |
2,780,344,116 | [BE][CI] bump `ruff` to 0.9.2: multiline `assert` statements | XuehaiPan | closed | [
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: releng",
"topic: not user facing",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 14 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145606
* __->__ #144546
Reference: https://docs.astral.sh/ruff/formatter/black/#assert-statements
> Unlike Black, Ruff prefers breaking the message over breaking the assertion, similar to how both Ruff and Black prefer breaking the assignment value over breaking the assignment target:
>
> ```python
> # Input
> assert (
> len(policy_types) >= priority + num_duplicates
> ), f"This tests needs at least {priority+num_duplicates} many types."
>
>
> # Black
> assert (
> len(policy_types) >= priority + num_duplicates
> ), f"This tests needs at least {priority+num_duplicates} many types."
>
> # Ruff
> assert len(policy_types) >= priority + num_duplicates, (
> f"This tests needs at least {priority + num_duplicates} many types."
> )
> ```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,780,340,561 | [MPS] fix triangular for >3D tensors | Isalia20 | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: mps",
"ciflow/mps"
] | 3 | COLLABORATOR | Old implementation leads to incorrect output due to not handling the other batch sizes other than 3D tensors(B, M, N) | true |
2,780,126,583 | Avoid running helper functions as test | Flamefire | closed | [
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)"
] | 3 | COLLABORATOR | Pytest considers all symbols starting with `test_` as a test case/function and runs them.
`test_compiled_fsdp` is a decorator, but because it is imported into test modules it gets discovered by pytest as if it were a test.
Rename it to avoid.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,780,022,337 | fix typo: "assumbed" | crcrpar | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | COLLABORATOR | null | true |
2,779,814,277 | Fix clang-tidy warnings of performance from uncovered files | cyyever | open | [
"oncall: distributed",
"module: cpu",
"triaged",
"open source",
"release notes: quantization",
"release notes: sparse",
"module: dynamo",
"ciflow/inductor"
] | 8 | COLLABORATOR | Fixes clang-tidy warnings from performance* checks.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,779,764,071 | Fix poision child process issue when call getAccelerator() | pytorchbot | closed | [
"oncall: jit",
"open source"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144370
* __->__ #144368
# Motivation
fix https://github.com/pytorch/pytorch/issues/144152
# Solution
- Align `at::globalContext()::hasXXX` to determine if accelerator XXX is built with PyTorch or an extension already registered to PyTorch.
- Define `at::hasXXX` to determine if accelerator XXX is available at runtime.
- Use `at::globalContext()::hasXXX` in `getAccelerator` rather than `at::hasXXX` to avoid initializing the XXX runtime (which can poison child processes) while detecting the current accelerator.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @albanD | true |
2,779,711,473 | torch.compile does not work with Flash attention 3 | nighting0le01 | open | [
"high priority",
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher",
"dynamo-triage-jan2025"
] | 3 | NONE | ### 🐛 Describe the bug
`torch.compile` fails to compile models that call FlashAttention-3 (FA-3) kernels.
### Error logs
```
FA3 not working with torch.compile
[rank7]: torch._dynamo.exc.Unsupported: Graph break due to unsupported builtin flash_attn_3_cuda.PyCapsule.fwd. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
```
### Versions
2.7 nightly, 3.0 FA
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @bdhirsh @yf225 | true |
2,779,650,676 | `torch.index_put` raise error when `accumulate=True` | 0x45f | open | [
"module: cuda",
"triaged",
"module: advanced indexing"
] | 3 | NONE | ### 🐛 Describe the bug
I run the following code
```python
torch.set_default_device('cuda')
x = torch.arange(1, 61).reshape(5, 4, 3)
indices=[
# torch.tensor([1, 2, 0]),
torch.tensor([[0, 2], [1, 3]]),
# torch.tensor([0, 1, 2]),
# torch.tensor([0, 1, 2]),
]
values=torch.tensor([100, 200, 300])
out2 = torch.index_put(x, indices, values, accumulate=True)
print(out2)
```
With `accumulate=False`, this runs correctly. But with `accumulate=True`, it raises an error:
```
RuntimeError: The expanded size of the tensor (12) must match the existing size (3) at non-singleton dimension 2. Target sizes: [2, 3, 12]. Tensor sizes: [3]
```
Is this a bug in `index_put`?
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 470.129.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7742 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4491.72
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 64 MiB (128 instances)
L3 cache: 512 MiB (32 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-15,128-143
NUMA node1 CPU(s): 16-31,144-159
NUMA node2 CPU(s): 32-47,160-175
NUMA node3 CPU(s): 48-63,176-191
NUMA node4 CPU(s): 64-79,192-207
NUMA node5 CPU(s): 80-95,208-223
NUMA node6 CPU(s): 96-111,224-239
NUMA node7 CPU(s): 112-127,240-255
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] numpy 2.2.1 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @ptrblck @msaroufim @eqy | true |
2,779,619,116 | dynamically set the number of SMs in torch.distributed.all_reduce | Rainlin007 | closed | [
"oncall: distributed",
"triaged"
] | 3 | NONE | ### 🚀 The feature, motivation and pitch
I want to dynamically set the number of SMs used by torch.distributed.all_reduce. NCCL supports pinning this via the `NCCL_MAX_NCHANNELS` environment variable, but that cannot be changed dynamically from within the program. It is mentioned [here](https://github.com/NVIDIA/nccl/issues/1572) that `ncclCommInitRankConfig` can be used programmatically, but the corresponding setting is not exposed in torch. Can this capability be supported? It is useful in inference optimization scenarios.
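For reference, the only knob available today is the NCCL environment variables, which are read once at communicator creation; a sketch of that (static) workaround, which is exactly what this request wants to make dynamic:

```python
import os

# NCCL reads these when the communicator is created, i.e. during
# torch.distributed.init_process_group("nccl") -- they cannot be changed
# per-collective afterwards, which is the limitation described above.
# One NCCL channel corresponds roughly to one CUDA block occupying an SM.
os.environ["NCCL_MIN_NCHANNELS"] = "4"
os.environ["NCCL_MAX_NCHANNELS"] = "4"

# import torch.distributed as dist   # (requires a multi-GPU setup)
# dist.init_process_group("nccl")
# dist.all_reduce(tensor)  # now limited to at most 4 channels
```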
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,779,435,711 | FlexAttention uses much more GPU memory than FlashAttention-2 | ChenlongDeng | open | [
"module: memory usage",
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 9 | NONE | ### 🐛 Describe the bug
Thank you for the outstanding work on PyTorch FlexAttention! I am currently trying to integrate FlexAttention with the Hugging Face Transformers framework for training. However, I noticed that FlexAttention seems to consume more GPU memory compared to FlashAttention-2. The issue can be reproduced using the following demo scripts:
## Reproduction
You need two files to reproduce my observations, and these two files are in the same folder.
1. memory_test.py
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments, default_data_collator, TrainerCallback
import argparse
from transformers.models.llama.modeling_llama import LLAMA_ATTENTION_CLASSES
from datasets import Dataset
from flex_attention import LlamaFlexAttention, llama_model_forward
import os
class ProfilerCallback(TrainerCallback):
def __init__(self, prof):
self.prof = prof
def on_step_end(self, args, state, control, **kwargs):
self.prof.step()
def train_with_profiler(trainer, args=None):
with torch.profiler.profile(
activities=[torch.profiler.ProfilerActivity.CPU, torch.profiler.ProfilerActivity.CUDA],
schedule=torch.profiler.schedule(skip_first=1, wait=1, warmup=1, active=trainer.args.max_steps-3),
on_trace_ready=torch.profiler.tensorboard_trace_handler(f'{trainer.args.output_dir}/profiler_log'),
profile_memory=True,
with_stack=False,
record_shapes=True
) as prof:
trainer.add_callback(ProfilerCallback(prof))
trainer.train()
local_rank = int(os.environ.get("LOCAL_RANK", -1))
if local_rank == 0:
prof.export_memory_timeline(f"./{args.attention_type}.html", device="cuda:0")
parser = argparse.ArgumentParser()
parser.add_argument("--model_name_or_path", type=str, default="meta-llama/Llama-3.2-3B")
parser.add_argument("--attention_type", type=str, default="flex")
parser.add_argument("--train_length", type=int, default=2048)
parser.add_argument("--dataset_size", type=int, default=8192)
args = parser.parse_args()
if __name__ == "__main__":
assert args.attention_type in ["flash_attention_2", "flex", "sdpa", "eager"], "Invalid attention type"
torch.compiler.reset()
tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path)
if args.attention_type == "flex":
LLAMA_ATTENTION_CLASSES["flash_attention_2"] = LlamaFlexAttention
attn_implementation = "flash_attention_2"
else:
attn_implementation = args.attention_type
model = AutoModelForCausalLM.from_pretrained(args.model_name_or_path, torch_dtype=torch.bfloat16, attn_implementation=attn_implementation)
model.model.forward = llama_model_forward.__get__(model.model)
random_input_ids = torch.randint(low=0, high=tokenizer.vocab_size, size=(args.dataset_size, args.train_length))
train_dataset = Dataset.from_dict({"input_ids": random_input_ids.tolist(), "labels": random_input_ids.tolist()})
training_args = TrainingArguments(
output_dir=f"./tmp-{args.attention_type}",
overwrite_output_dir=True,
num_train_epochs=1,
per_device_train_batch_size=1,
save_steps=500,
save_total_limit=1,
max_steps=10,
logging_steps=1,
logging_dir="./logs",
logging_first_step=True,
report_to="none",
do_train=True,
gradient_checkpointing=True,
gradient_checkpointing_kwargs={"use_reentrant": False},
gradient_accumulation_steps=2,
deepspeed="../../config/deepspeed/stage2-offload.json",
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
tokenizer=tokenizer,
data_collator=default_data_collator,
)
# train_with_profiler(trainer, args)
trainer.train()
```
2. flex_attention.py
```python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask
from transformers.models.llama.modeling_llama import LlamaAttention, StaticCache, apply_rotary_pos_emb, repeat_kv, Cache, logger, DynamicCache, BaseModelOutputWithPast, FlashAttentionKwargs, Unpack, LlamaModel, add_start_docstrings_to_model_forward, LLAMA_INPUTS_DOCSTRING
from typing import Optional, Tuple, Union, List
from functools import lru_cache
def flex_causal_mask(b, h, q_idx, kv_idx):
return q_idx >= kv_idx
def score_mod(score, b, h, q_idx, kv_idx):
return score
flex_attention = torch.compile(flex_attention, mode="max-autotune")
@lru_cache
def create_block_mask_cached(mask_mod: Optional[torch.BoolTensor] = None, B: int = 1, H: int = 1, Q_LEN: int = 1, KV_LEN: int = 1, device: Optional[torch.device] = None):
return create_block_mask(mask_mod=mask_mod, B=B, H=H, Q_LEN=Q_LEN, KV_LEN=KV_LEN, device=device, BLOCK_SIZE=(128, 64))
class LlamaFlexAttention(LlamaAttention):
"""
Llama flex attention module. This module inherits from `LlamaAttention` as the weights of the module stays
untouched. The only required change would be on the forward pass where it needs to correctly call the public API of
flex attention and deal with padding tokens in case the input contains any of them.
"""
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_value: Optional[Cache] = None,
output_attentions: bool = False,
use_cache: bool = False,
cache_position: Optional[torch.LongTensor] = None,
position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, # will become mandatory in v4.45
**kwargs,
) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
if isinstance(past_key_value, StaticCache):
raise ValueError(
"`static` cache implementation is not compatible with `attn_implementation==flash_attention_2` "
"make sure to use `sdpa` in the mean time, and open an issue at https://github.com/huggingface/transformers"
)
output_attentions = False
bsz, q_len, _ = hidden_states.size()
query_states = self.q_proj(hidden_states)
key_states = self.k_proj(hidden_states)
value_states = self.v_proj(hidden_states)
# Flash attention requires the input to have the shape
# batch_size x seq_length x head_dim x hidden_dim
# therefore we just need to keep the original shape
query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
if position_embeddings is None:
logger.warning_once(
"The attention layers in this model are transitioning from computing the RoPE embeddings internally "
"through `position_ids` (2D tensor with the indexes of the tokens), to using externally computed "
"`position_embeddings` (Tuple of tensors, containing cos and sin). In v4.45 `position_ids` will be "
"removed and `position_embeddings` will be mandatory."
)
cos, sin = self.rotary_emb(value_states, position_ids)
else:
cos, sin = position_embeddings
query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
if past_key_value is not None:
# sin and cos are specific to RoPE models; cache_position needed for the static cache
cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
# TODO: These transpose are quite inefficient but Flash Attention requires the layout [batch_size, sequence_length, num_heads, head_dim]. We would need to refactor the KV cache
# to be able to avoid many of these transpose/reshape/view.
# query_states = query_states.transpose(1, 2)
# key_states = key_states.transpose(1, 2)
# value_states = value_states.transpose(1, 2)
key_states = repeat_kv(key_states, self.num_key_value_groups)
value_states = repeat_kv(value_states, self.num_key_value_groups)
dropout_rate = self.attention_dropout if self.training else 0.0
# In PEFT, usually we cast the layer norms in float32 for training stability reasons
# therefore the input hidden states gets silently casted in float32. Hence, we need
# cast them back in the correct dtype just to be sure everything works as expected.
# This might slowdown training & inference so it is recommended to not cast the LayerNorms
# in fp32. (LlamaRMSNorm handles it correctly)
input_dtype = query_states.dtype
if input_dtype == torch.float32:
if torch.is_autocast_enabled():
target_dtype = torch.get_autocast_gpu_dtype()
# Handle the case where the model is quantized
elif hasattr(self.config, "_pre_quantization_dtype"):
target_dtype = self.config._pre_quantization_dtype
else:
target_dtype = self.q_proj.weight.dtype
logger.warning_once(
f"The input hidden states seems to be silently casted in float32, this might be related to"
f" the fact you have upcasted embedding or layer norm layers in float32. We will cast back the input in"
f" {target_dtype}."
)
query_states = query_states.to(target_dtype)
key_states = key_states.to(target_dtype)
value_states = value_states.to(target_dtype)
attn_output = flex_attention(
query_states,
key_states,
value_states,
block_mask=kwargs["block_mask"] if "block_mask" in kwargs else None,
score_mod=None if "block_mask" in kwargs else score_mod,
)
attn_output = attn_output.transpose(1, 2).reshape(bsz, q_len, -1).contiguous()
attn_output = self.o_proj(attn_output)
if not output_attentions:
attn_weights = None
return attn_output, attn_weights, past_key_value
@add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING)
def llama_model_forward(
self,
input_ids: torch.LongTensor = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
use_cache: Optional[bool] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
cache_position: Optional[torch.LongTensor] = None,
**flash_attn_kwargs: Unpack[FlashAttentionKwargs],
) -> Union[Tuple, BaseModelOutputWithPast]:
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
output_hidden_states = (
output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
)
use_cache = use_cache if use_cache is not None else self.config.use_cache
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
if (input_ids is None) ^ (inputs_embeds is not None):
raise ValueError("You must specify exactly one of input_ids or inputs_embeds")
if self.gradient_checkpointing and self.training and use_cache:
logger.warning_once(
"`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`."
)
use_cache = False
if inputs_embeds is None:
inputs_embeds = self.embed_tokens(input_ids)
# kept for BC (non `Cache` `past_key_values` inputs)
return_legacy_cache = False
if use_cache and not isinstance(past_key_values, Cache):
return_legacy_cache = True
if past_key_values is None:
past_key_values = DynamicCache()
else:
past_key_values = DynamicCache.from_legacy_cache(past_key_values)
logger.warning_once(
"We detected that you are passing `past_key_values` as a tuple of tuples. This is deprecated and "
"will be removed in v4.47. Please convert your cache or use an appropriate `Cache` class "
"(https://huggingface.co/docs/transformers/kv_cache#legacy-cache-format)"
)
if cache_position is None:
past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
cache_position = torch.arange(
past_seen_tokens, past_seen_tokens + inputs_embeds.shape[1], device=inputs_embeds.device
)
if position_ids is None:
position_ids = cache_position.unsqueeze(0)
causal_mask = self._update_causal_mask(
attention_mask, inputs_embeds, cache_position, past_key_values, output_attentions
)
hidden_states = inputs_embeds
# create position embeddings to be shared across the decoder layers
position_embeddings = self.rotary_emb(hidden_states, position_ids)
# decoder layers
all_hidden_states = () if output_hidden_states else None
all_self_attns = () if output_attentions else None
next_decoder_cache = None
# block_mask
if isinstance(self.layers[0].self_attn, LlamaFlexAttention):
block_mask = create_block_mask_cached(mask_mod=flex_causal_mask, B=1, H=1, Q_LEN=hidden_states.size(1), KV_LEN=hidden_states.size(1), device=hidden_states.device)
flash_attn_kwargs["block_mask"] = block_mask
if "num_items_in_batch" in flash_attn_kwargs:
flash_attn_kwargs.pop("num_items_in_batch")
for decoder_layer in self.layers[: self.config.num_hidden_layers]:
if output_hidden_states:
all_hidden_states += (hidden_states,)
if self.gradient_checkpointing and self.training:
layer_outputs = self._gradient_checkpointing_func(
decoder_layer.__call__,
hidden_states,
causal_mask,
position_ids,
past_key_values,
output_attentions,
use_cache,
cache_position,
position_embeddings,
**flash_attn_kwargs,
)
else:
layer_outputs = decoder_layer(
hidden_states,
attention_mask=causal_mask,
position_ids=position_ids,
past_key_value=past_key_values,
output_attentions=output_attentions,
use_cache=use_cache,
cache_position=cache_position,
position_embeddings=position_embeddings,
**flash_attn_kwargs,
)
hidden_states = layer_outputs[0]
if use_cache:
next_decoder_cache = layer_outputs[2 if output_attentions else 1]
if output_attentions:
all_self_attns += (layer_outputs[1],)
hidden_states = self.norm(hidden_states)
# add hidden states from the last decoder layer
if output_hidden_states:
all_hidden_states += (hidden_states,)
next_cache = next_decoder_cache if use_cache else None
if return_legacy_cache:
next_cache = next_cache.to_legacy_cache()
if not return_dict:
return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
return BaseModelOutputWithPast(
last_hidden_state=hidden_states,
past_key_values=next_cache,
hidden_states=all_hidden_states,
attentions=all_self_attns,
)
```
3. stage2-offload.json
```json
{
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"zero_allow_untested_optimizer": true,
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 5e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 5e8,
"contiguous_gradients": true,
"round_robin_gradients": true
}
}
```
## Usage
```shell
torchrun --nproc_per_node=8 memory_test.py --attention_type flex # FlexAttention
torchrun --nproc_per_node=8 memory_test.py --attention_type flash_attention_2 # FlashAttention-2
```
The experiments are conducted on 8×A100-40G GPUs.
## Observations
I have noticed that FlexAttention uses approximately 28GB of GPU memory across 8 devices, whereas FlashAttention-2 requires only around 23GB. I'm currently unsure whether this discrepancy arises from the internal implementation of FlexAttention or the block mask. Changing the block mask to score_mod did not resolve the issue either.
I would appreciate any insights or explanations regarding this matter! Thank you!
### Versions
```shell
torch==2.6.0.dev20241218+cu118
transformers==4.47.1
```
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng | true |
2,779,342,607 | [bug report template format] Simplify version information with HTML tags | shaoyuyoung | open | [
"module: collect_env.py",
"triaged",
"needs design"
] | 4 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
When I looked at the bug report, I found the version information **too long and redundant**.
Many reporters are following the instructions here:

Reporters run the downloaded script and get the environment information. They then paste the information into the bug report.
Unfortunately, I think the information is **too redundant**, as shown below:
```
PyTorch version: 2.6.0.dev20241230+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-204-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241205
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.6.0.dev20241230+cu126
[pip3] torchaudio==2.6.0.dev20241230+cu126
[pip3] torchvision==0.22.0.dev20241230+cu126
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.6.0.dev20241230+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241230+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241230+cu126 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
Actually, most of the time, the **PyTorch version**, **OS**, **CPU**, and **GPU** information is enough! The rest of the information **can be folded** and viewed when needed, as below, using some **HTML tags** (i.e., `<details>` and `<summary>`). That way, the version information doesn't take up too much space on the browser page. Refer to #144183
PyTorch version: 20241230
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: V100
<details>
<summary>click for detailed env</summary>
```
PyTorch version: 2.6.0.dev20241230+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-204-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241205
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.6.0.dev20241230+cu126
[pip3] torchaudio==2.6.0.dev20241230+cu126
[pip3] torchvision==0.22.0.dev20241230+cu126
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.6.0.dev20241230+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241230+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241230+cu126 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
I think there are two possible solutions:
**solution1**: We can modify the [issue format here](https://github.com/pytorch/pytorch/tree/main/.github/ISSUE_TEMPLATE), preconfiguring these HTML tags.
**solution2**: But I think a more efficient way for bug reporters is to modify the [collect_env script](https://github.com/pytorch/pytorch/blob/main/torch/utils/collect_env.py). We can wrap the redundant information with some HTML tags.
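A minimal sketch of solution2 (the helper name and key-field list are assumptions for illustration, not the actual `collect_env.py` API): keep a handful of key lines visible and fold the full dump into a `<details>` block.

```python
# Hypothetical post-processing sketch for solution2: show only a few key
# fields, and fold the complete collect_env output into a collapsible
# <details> block. fold_env_info and KEY_PREFIXES are made-up names.

KEY_PREFIXES = ("PyTorch version:", "OS:", "CPU:", "GPU")

def fold_env_info(full_output: str) -> str:
    lines = full_output.splitlines()
    # Lines worth showing up front in the issue body.
    summary = [line for line in lines if line.startswith(KEY_PREFIXES)]
    # Everything, including the summary lines, stays inside the fold.
    return "\n".join(
        summary
        + ["", "<details>", "<summary>click for detailed env</summary>", "", "```"]
        + lines
        + ["```", "</details>"]
    )

raw = "PyTorch version: 2.6.0\nIs debug build: False\nOS: Ubuntu 20.04.6 LTS"
print(fold_env_info(raw))
```

With this shape, the GitHub page renders only the summary lines plus a one-line expander, and the full dump is still one click away.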
### Alternatives
_No response_
### Additional context
_No response_ | true |
2,779,334,146 | [Pipelining] Fix FSDP+PP stream sync bug | wconstab | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (pipeline)",
"module: pipelining"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144596
* #144352
* __->__ #144535
* #144534
This bug could cause gradient corruption as a race condition exists
between FSDP's reduce-scatter and any operations reading .grad on the
main stream. The root cause is that the pipelining stage's .backward implementation
was modified to support zero-bubble and, in doing so, invoked .grad()
instead of .backward(), and performed manual gradient accumulation and
manually called into hooks for FSDP. But one key hook was missed for
FSDP, the '_root_post_backward_final_callback' hook, which is
responsible for syncing the grad reduction ops after the last layer's
backward completes.
Note: this fix applies to both zero-bubble and non-zero-bubble schedules. This caused some confusion initially, as non-zero-bubble schedules do use torch.autograd.backward() which would have called into fsdp's hooks and synced, unlike zero-bubble which uses .grad() which does not invoke hooks. However, this difference was already taken into consideration as FSDP's hooks are manually disabled before invoking either type of backward, and then the hooks are manually triggered.
A better fix as a follow up PR would be to invoke .backward() for the
weight grad, so that we never have to disable or manually invoke hooks.
Modified test_pp_dp to intentionally race against FSDP's reduce by
modifying the parameters inplace in a mathematically identical way, and
confirmed it fails intermittently when the FSDP sync is not applied and
passes with the FSDP sync added.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o | true |
2,779,334,060 | [Pipelining] Improve test_pp_dp | wconstab | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144596
* #144352
* #144535
* __->__ #144534
Some refactoring, but important changes include
- initializing the weights properly so there are more nonzero gradients
flowing, which helped catch the DDP+PP+ZB bug
- make the DDP+ZB+PP bug skip for now and file an issue
- tighten the tolerances to defaults
- use separate targets instead of same inputs
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o | true |
2,779,305,290 | [dynamo][hop] Introduce FlexAttentionBackwardHighOrderVariable | xmfan | closed | [
"Merged",
"ciflow/trunk",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"module: higher order operators",
"module: compiled autograd",
"module: flex attention"
] | 8 | MEMBER | FIXES https://github.com/pytorch/pytorch/issues/143180
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144533
This PR adds a new variable mapping to SourcelessBuilder to represent the flex attention intermediates. The variable proxies a call to the HOP, and carries over the graph state (subgraphs represented as UnspecializedNNModuleVariable) to the dynamo output graph. This is safe to do because the nn modules used in flex attention have either been speculated on before, or are outputs of make_fx of the forward.
tlparse of `TestCompiledAutograd.test_flex_attention`: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpiWendk/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100
```python
class GraphModule(torch.nn.Module):
def forward(self, L_inputs_ : list):
...
# File: /data/users/xmfan/core/b/pytorch/torch/_dynamo/compiled_autograd.py:832 in set_node_origin, code: CompiledFunctionBackward0 (NodeCall 1)
...
fw_graph0_0 = self.fw_graph0_0
joint_graph0_0 = self.joint_graph0_0
mask_graph0_0 = self.mask_graph0_0
flex_attention_backward = torch.ops.higher_order.flex_attention_backward(aot0_primals_1, aot0_primals_1, aot0_primals_1, aot0_detach_3, aot0_detach_5, aot0_expand_5, aot0_zeros_1, fw_graph0_0, joint_graph0_0, (1, 1, aot0_ones, aot0_zeros, None, None, aot0__to_copy_1, aot0__to_copy_2, None, None, 1073741824, 1073741824, mask_graph0_0), 0.125, {'PRESCALE_QK': False, 'ROWS_GUARANTEED_SAFE': False, 'BLOCKS_ARE_CONTIGUOUS': False, 'WRITE_DQ': True, 'OUTPUT_LOGSUMEXP': True}, (), ()); aot0_primals_1 = aot0_detach_3 = aot0_detach_5 = aot0_expand_5 = aot0_zeros_1 = fw_graph0_0 = joint_graph0_0 = aot0_ones = aot0_zeros = aot0__to_copy_1 = aot0__to_copy_2 = mask_graph0_0 = None
aot0_getitem_4: "bf16[1, 1, s0, s1][s0*s1, s0*s1, s1, 1]cuda:0" = flex_attention_backward[0]
aot0_getitem_5: "bf16[1, 1, s0, s1][s0*s1, s0*s1, s1, 1]cuda:0" = flex_attention_backward[1]
aot0_getitem_6: "bf16[1, 1, s0, s1][s0*s1, s0*s1, s1, 1]cuda:0" = flex_attention_backward[2]; flex_attention_backward = None
...
class fw_graph0_0(torch.nn.Module):
def forward(self, arg0_1: "bf16[][]cuda:0", arg1_1: "i32[][]cuda:0", arg2_1: "i32[][]cuda:0", arg3_1: "i32[][]cuda:0", arg4_1: "i32[][]cuda:0"):
return arg0_1
class joint_graph0_0(torch.nn.Module):
def forward(self, arg0_1: "bf16[][]cuda:0", arg1_1: "i32[][]cuda:0", arg2_1: "i32[][]cuda:0", arg3_1: "i32[][]cuda:0", arg4_1: "i32[][]cuda:0", arg5_1: "bf16[][]cuda:0"):
return [arg5_1, None, None, None, None]
class mask_graph0_0(torch.nn.Module):
def forward(self, arg0_1: "i32[][]cuda:0", arg1_1: "i32[][]cuda:0", arg2_1: "i32[][]cuda:0", arg3_1: "i32[][]cuda:0"):
# File: /data/users/xmfan/core/b/pytorch/torch/_dynamo/compiled_autograd.py:832 in set_node_origin, code: CompiledFunctionBackward0 (NodeCall 1)
new_ones: "b8[][]cuda:0" = torch.ops.aten.new_ones.default(arg0_1, [], dtype = torch.bool, device = device(type='cuda', index=0), pin_memory = False); arg0_1 = None
return new_ones
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @zou3519 @ydwu4 @Chillee @drisspg @yanboliang @BoyuanFeng | true |
2,779,302,861 | Something wrong with the torch.triu function on mps device | Matzohe | closed | [
"triaged",
"module: NaNs and Infs",
"module: correctness (silent)",
"module: mps"
] | 3 | NONE | ### 🐛 Describe the bug
When using the torch.triu function after torch.full, as below:
```python
mask = torch.full(
(10, 10), float("-inf"), device="mps"
)
print(mask)
mask = torch.triu(mask, diagonal=1)
print(mask)
```
The lower-triangle area should be 0.0; however, it ends up as nan.
<img width="1265" alt="Screenshot 2025-01-10 at 13 07 52" src="https://github.com/user-attachments/assets/ad60055e-0bef-483b-8a4b-18f36bbf3363" />
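A plausible explanation (an assumption about the MPS kernel, not confirmed from its source): if the backend implements triu by multiplying the input with a 0/1 mask rather than selecting values, the `-inf` entries in the lower triangle become `-inf * 0 = nan`. Plain Python arithmetic shows the same effect:

```python
import math

# IEEE-754: infinity times zero is undefined and yields nan, which would
# explain nan (rather than 0.0) in the zeroed-out triangle if the kernel
# masks via multiplication.
masked = float("-inf") * 0.0
print(masked)  # nan
assert math.isnan(masked)

# A select/where-style implementation avoids the problem entirely:
keep = False  # stand-in for "this element is below the diagonal"
selected = float("-inf") if keep else 0.0
assert selected == 0.0
```

As a workaround until the kernel is fixed, building a boolean triangular mask first and filling `-inf` with `masked_fill` sidesteps the multiplication.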
### Versions
PyTorch version: 2.2.2
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.1.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.30.0
Libc version: N/A
Python version: 3.9.19 (main, May 6 2024, 14:39:30) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.1.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M4 Pro
Versions of relevant libraries:
[pip3] facenet-pytorch==2.6.0
[pip3] numpy==1.26.4
[pip3] onnx==1.16.1
[pip3] onnxruntime==1.18.1
[pip3] torch==2.2.2
[pip3] torch-cka==0.21
[pip3] torchaudio==2.3.1
[pip3] torchextractor==0.3.0
[pip3] torchvision==0.17.2
[conda] facenet-pytorch 2.6.0 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.2.2 pypi_0 pypi
[conda] torch-cka 0.21 pypi_0 pypi
[conda] torchaudio 2.3.1 pypi_0 pypi
[conda] torchextractor 0.3.0 pypi_0 pypi
[conda] torchvision 0.17.2 pypi_0 pypi
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | true |
2,779,271,034 | Fix deepcopy hooks | yushangdi | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"fx"
] | 4 | CONTRIBUTOR | Summary: As title, fix bug when a GraphModule doesn't have _deepcopy_hooks attribute
Test Plan:
```
buck2 test 'fbcode//mode/opt' fbcode//torchmultimodal/tests:tests -- --exact 'torchmultimodal/tests:tests - test_albef.py::test_dequeue_and_enqueue'
```
Differential Revision: D68002767
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,779,260,369 | [Pipelining] PP+DDP does not work for Zero Bubble | wconstab | open | [
"oncall: distributed",
"triaged",
"bug",
"module: pipelining"
] | 0 | CONTRIBUTOR | Due to zero-bubble's implementation for backward bypassing torch.autograd.backward() in favor of calling .grad() directly, this skips hooks used by DDP for gradient reduction.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o | true |
2,779,218,693 | Update torch-xpu-ops commit pin | xytintel | closed | [
"triaged",
"open source",
"topic: not user facing"
] | 2 | CONTRIBUTOR | Update the torch-xpu-ops commit to [a868a2e621e792c4393d86da9ccecd42a5bdfb84](https://github.com/intel/torch-xpu-ops/commit/a868a2e621e792c4393d86da9ccecd42a5bdfb84), includes:
- Enable device code compression on Windows and Linux
- Aten operator coverage improvement
- NestedTensorXPU backend support
| true |
2,779,213,245 | [canary] List -> list | bobrenjc93 | closed | [
"oncall: distributed",
"oncall: jit",
"module: rocm",
"module: cpu",
"module: amp (automated mixed precision)",
"release notes: quantization",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: export",
"module: compiled autograd"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144528
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @mingfeima @XiaobingSuper @ashokei @jingxu10 @mcarilli @ptrblck @leslie-fang-intel @ezyang @SherlockNoMad @voznesenskym @penguinwu @Guobing-Chen @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 @xmfan | true |
2,779,211,881 | [canary] Dict -> dict | bobrenjc93 | closed | [
"oncall: distributed",
"oncall: jit",
"module: rocm",
"module: cpu",
"module: amp (automated mixed precision)",
"release notes: quantization",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: export",
"module: compiled autograd"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144527
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @mingfeima @XiaobingSuper @ashokei @jingxu10 @mcarilli @ptrblck @leslie-fang-intel @ezyang @SherlockNoMad @voznesenskym @penguinwu @Guobing-Chen @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 @xmfan | true |
2,779,209,108 | [canary] Tuple -> tuple | bobrenjc93 | closed | [
"oncall: distributed",
"oncall: jit",
"module: cpu",
"module: amp (automated mixed precision)",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend",
"module: compiled autograd"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144526
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @mingfeima @XiaobingSuper @ashokei @jingxu10 @mcarilli @ptrblck @leslie-fang-intel @ezyang @SherlockNoMad @voznesenskym @penguinwu @Guobing-Chen @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 @xmfan | true |