id | title | user | state | labels | comments | author_association | body | is_title
|---|---|---|---|---|---|---|---|---|
2,776,226,574 | Link to transformer tutorial in transformer docs | mikaylagawarecki | closed | [
"Merged",
"ciflow/trunk",
"release notes: nn",
"topic: docs"
] | 5 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144425
<img width="1045" alt="Screenshot 2025-01-08 at 4 50 20 PM" src="https://github.com/user-attachments/assets/05adfecb-8a23-4c48-9a2c-50c5b3f886b0" />
| true |
2,776,181,971 | Implement `generator.throw(exception)` | guilhermeleobas | closed | [
"open source",
"Merged",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #142513
* #145223
* #144420
* __->__ #144424
* #144423
* #144422
* #144421
* #141055
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,776,181,757 | Implement `generator.close()` | guilhermeleobas | closed | [
"open source",
"Merged",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #142513
* #145223
* #144420
* #144424
* __->__ #144423
* #144422
* #144421
* #141055
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,776,181,493 | Implement `generator.send(..)` | guilhermeleobas | closed | [
"open source",
"Merged",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #142513
* #145223
* #144420
* #144424
* #144423
* __->__ #144422
* #144421
* #141055
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,776,181,363 | Implement `generator.__iter__()` | guilhermeleobas | closed | [
"open source",
"Merged",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #142513
* #145223
* #144420
* #144424
* #144423
* #144422
* __->__ #144421
* #141055
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,776,181,212 | Add `CLEANUP_THROW` bytecode | guilhermeleobas | closed | [
"open source",
"Merged",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #142513
* #145223
* __->__ #144420
* #144424
* #144423
* #144422
* #144421
* #141055
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,776,172,823 | [dynamo] Avoid graph break on updates to `obj.__dict__` | StrongerXi | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 10 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144419
`obj.__dict__` is handled specially in Dynamo; prior to this patch we only supported reads and membership checks on that dictionary object. This patch adds support for writes, plus some documentation.
Fixes #143756.
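Outside Dynamo, the plain-Python semantics being traced here look like the following (a minimal sketch of the user-code pattern, not of the Dynamo handling itself):

```python
class Config:
    pass

cfg = Config()

# Reads and membership checks were already supported when tracing:
cfg.__dict__["lr"] = 0.1           # a write through __dict__ ...
assert "lr" in cfg.__dict__        # ... a membership check
assert cfg.lr == 0.1               # ... visible as a normal attribute

cfg.__dict__.update(momentum=0.9)  # a bulk update is also a write
assert cfg.momentum == 0.9
```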
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,776,109,172 | [ONNX] Avoid overwriting overlapped decomposed functions | pytorchbot | closed | [
"open source",
"release notes: onnx"
] | 1 | COLLABORATOR | Fixes #141770
The decomposed function in `torch.export.default_decompositions().items()` is overwritten by `torch._decomp.decomposition_table`. From the `torch.onnx.export()` perspective, we should respect the table of decompositions in `torch.export.default_decompositions().items()` rather than overwriting it with `torch._decomp.decomposition_table`. | true |
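The overwrite hazard is ordinary dict-merge ordering; with hypothetical stand-in tables (names illustrative, not the real decomposition keys), the later table clobbers the earlier one:

```python
# Hypothetical stand-ins for the export-specific and core decomposition tables.
export_decomps = {"aten.addmm": "export_rule"}
core_decomps = {"aten.addmm": "core_rule", "aten.relu": "core_rule"}

# Wrong merge order: the core table overwrites the export-specific rule.
merged_bad = {**export_decomps, **core_decomps}
assert merged_bad["aten.addmm"] == "core_rule"

# Fixed order: export-specific rules take precedence, core fills the gaps.
merged_good = {**core_decomps, **export_decomps}
assert merged_good["aten.addmm"] == "export_rule"
assert merged_good["aten.relu"] == "core_rule"
```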
2,776,100,913 | [ONNX] Handle list values as 0d inputs | pytorchbot | closed | [
"open source",
"release notes: onnx"
] | 1 | COLLABORATOR | Handle list values as 0d inputs instead of 1d, as the `SymInt`s are expected to be 0d tensors in ONNX.
This PR reshapes int64 values into 1D tensors in a list, assuming they are 0D tensors initially. | true |
2,776,099,826 | Allows pep658 metadata uploader script to backfill for prefix | clee2000 | closed | [
"topic: not user facing"
] | 1 | CONTRIBUTOR |
Test:
```bash
uv run scripts/release/upload_metadata_file.py --use-s3-prefix --bucket pytorch --key-prefix whl/nightly/cpu-cxx11-abi --dry-run
```
I also did the upload of one file without dry run and checked that metadata uploaded looked sane.
I wonder if this would be better put in test-infra's s3 index manager script to be run periodically instead | true |
2,776,015,857 | [BE] fix ruff rule E226: add missing whitespace around operator in f-strings | XuehaiPan | closed | [
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: releng",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144415
The fixes are generated by:
```bash
ruff check --fix --preview --unsafe-fixes --select=E226 .
lintrunner -a --take "RUFF,PYFMT" --all-files
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,776,012,778 | [do not land] Test warm start compile latency with fx graph caching | masnesral | closed | [
"module: inductor",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144414
* #144413
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,776,009,884 | [do not land] Test warm start compile latency with triton caching | masnesral | closed | [
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144414
* __->__ #144413
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,776,000,671 | [do not land] Test warm start compile latency with fx graph caching | masnesral | closed | [
"module: inductor",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144412
* #144411
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,776,000,451 | [do not land] Test warm start compile latency with triton caching | masnesral | closed | [
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,775,999,184 | [do not land] Test warm start compile latency with triton caching | masnesral | closed | [
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144410
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,775,965,131 | Set maximum supported version of Python as 3.13 | pytorchbot | closed | [
"open source",
"topic: not user facing"
] | 1 | COLLABORATOR | Same as https://github.com/pytorch/pytorch/pull/119743 Required for Release 2.6.0 | true |
2,775,927,307 | torchgen: sharded_keys should be immutable | swolchok | closed | [
"fb-exported",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144408
* #144364
* #144363
Per @Skylion007.
Differential Revision: [D67943449](https://our.internmc.facebook.com/intern/diff/D67943449/) | true |
2,775,900,157 | Remove extra copy torch/_prims | LlamaFarm | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | CONTRIBUTOR | updated _reshape_aten
| true |
2,775,886,087 | [inductor][cpu] Fix accuracy error in BMM benchmarking for input weight with offset | frost-intel | closed | [
"open source",
"Stale",
"topic: not user facing",
"module: inductor"
] | 3 | COLLABORATOR | Fixes #143770
When an input weight tensor has an offset (i.e., it is a slice of another, larger tensor at a non-zero dim), the test/benchmarking process was changing the benchmarking argument to be only that slice instead of the entire tensor. This resulted in an accuracy error, and potentially a crash in `VERIFY` mode in `select_algorithm.py`.
As a solution, we check if the input weight is a slice of a larger node, and if so, we use the larger node for the call to `as_strided` when preprocessing the benchmarking arguments.
* Why wasn't this happening before with GEMM?
- Since current GEMM code only supports constant weights, the blocking/packing process changed the input weight tensor so no offset was used. This is not the case for BMM.
The new UT here tests both the BMM and GEMM cases, where the GEMM input is a slice and a constant weight, and the BMM input is not constant.
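The storage-offset hazard can be sketched without torch: a strided view rebuilt against the slice alone (with the offset dropped) reads the wrong elements of the underlying storage. Names here are illustrative, not the `select_algorithm.py` code:

```python
# A flat storage of 8 elements; the "weight" is a slice starting at offset 4.
storage = list(range(8))

def strided_view(storage, offset, shape, stride):
    # 1-D as_strided-like read: element i comes from storage[offset + i*stride].
    (n,), (s,) = shape, stride
    return [storage[offset + i * s] for i in range(n)]

correct = strided_view(storage, offset=4, shape=(4,), stride=(1,))
assert correct == [4, 5, 6, 7]

# Rebuilding the view with the offset forgotten reads the wrong data:
wrong = strided_view(storage, offset=0, shape=(4,), stride=(1,))
assert wrong == [0, 1, 2, 3]  # an accuracy mismatch, as in the bug
```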
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,775,829,268 | [BE][pytree][Easy] change imports `torch.utils._pytree` -> `torch.utils.pytree.python` | XuehaiPan | open | [
"oncall: distributed",
"open source",
"Stale",
"release notes: quantization",
"release notes: distributed (fsdp)",
"topic: not user facing",
"module: pytree",
"fx",
"ciflow/mps",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd",
"oncall: distributed chec... | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144332
* #130141
* __->__ #144405
* #137400
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zou3519 @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 @xmfan @ColinPeppler | true |
2,775,800,091 | [DTensor] Add `aten.view.dtype` op support | awgu | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: distributed (dtensor)"
] | 6 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144404
Fixes https://github.com/pytorch/pytorch/issues/144286
Viewing a tensor to a different dtype does not require any redistribution and can use the default strategy.
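The key property is that a dtype view reinterprets the same bytes in place, so no data moves; a plain-Python sketch of that reinterpretation with `struct`:

```python
import struct

raw = struct.pack("<f", 1.0)           # the 4 bytes of float32 1.0
as_int = struct.unpack("<i", raw)[0]   # reinterpret those bytes as int32
assert as_int == 0x3F800000            # same storage, different dtype

# The reinterpretation is lossless: viewing back recovers the float.
assert struct.unpack("<f", struct.pack("<i", as_int))[0] == 1.0
```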
cc @H-Huang @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,775,791,527 | Extended functionality for torch.quantization.fuse_modules | Kautenja | open | [
"oncall: quantization",
"triaged"
] | 4 | NONE | ### 🚀 The feature, motivation and pitch
The method `torch.quantization.fuse_modules` supports many of the common fusion strategies, i.e., conv+bn, conv+bn+relu, etc. However, there are additional fusion operations that are useful in practice and could be worth supporting. Specifically, cascades of bn+linear layers can be fused trivially using the following. The docstring contains the algebraic derivation of the fusion.
```python
@torch.no_grad()
def fuse_batch_norm_1d_into_linear(norm: nn.BatchNorm1d, linear: nn.Linear, epsilon: float=1e-12) -> None:
"""
Fuse a batch norm module into the linear layer that follows it.
Args:
norm: The batch norm layer that occurs before the convolution layer.
linear: The linear layer to fuse the batch norm into.
epsilon: A small value for numerical stability.
Returns:
None
Details:
This function decomposes the fusion into four simple steps. Assume that the
cascade of a 1d batch normalization into a linear layer is formulated
as follows where \f$x\f$ is the input vector, \f$\mu, \sigma\f$ are the
moving statistics of the batch norm, \f$\gamma, \beta\f$ are the learned
affine parameters of the batch norm, and \f$W, b\f$ are the weights and
biases of the linear layer.
\f$y = \Big[ \frac{x - \mu}{\sigma} \odot \gamma + \beta \Big] \cdot W + b\f$
1. Apply the distributive property to group \f$\beta\f$ with the bias \f$b\f$.
This allows \f$\beta\f$ to be absorbed by the bias of the linear layer:
\f$y = \Big[ \frac{x - \mu}{\sigma} \odot \gamma \Big] \cdot W + \beta \cdot W + b\f$
Update: \f$b \gets \beta \cdot W + b\f$
2. Apply the associative law for scalar and dot product to group \f$\gamma\f$
with the weight \f$W\f$. This allows \f$\gamma\f$ to be absorbed by the weight:
\f$y = \Big[ \frac{x - \mu}{\sigma} \Big] \cdot \big[ W \odot \gamma \big] + b\f$
Update: \f$W \gets W \odot \gamma\f$
3. Apply the associative law for scalar and dot product to group \f$\sigma\f$
with the weight \f$W\f$. This allows \f$\sigma\f$ to be absorbed by the weight:
\f$y = \big[ x - \mu \big] \cdot \Big[ W \odot \frac{1}{\sigma} \Big] + b\f$
Update: \f$W \gets W \odot \frac{1}{\sigma}\f$
4. Apply the distributive property to group \f$\mu\f$ with the bias \f$b\f$.
This allows \f$\mu\f$ to be absorbed by the bias:
\f$y = x \cdot W - \mu \cdot W + b\f$
Update: \f$b \gets b - \mu \cdot W\f$
This leaves the final simplified linear form with the batch norm analytically
integrated into the calculation. The batch norm can now be replaced by the
fused linear layer:
\f$y = x \cdot W + b\f$
"""
# 1. Apply distributive property to group β with the bias.
offset = norm.bias @ linear.weight.T
if linear.bias is None:
linear.bias = nn.Parameter(offset)
else:
linear.bias[:] = linear.bias + offset
norm.bias.fill_(0.0) # Reset β to identity.
# 2. Apply associative law for scalar and dot product to group γ with weight.
linear.weight[:] = linear.weight * norm.weight
norm.weight.fill_(1.0) # Reset γ to identity.
# 3. Apply associative law for scalar and dot product to group Var[x] with weight.
linear.weight[:] = linear.weight / norm.running_var.add(epsilon).sqrt()
norm.running_var[:] = 1.0 # reset Var[x] to identity.
# 4. Apply distributive property to group E[x] with bias.
offset = norm.running_mean @ linear.weight.T
linear.bias[:] = linear.bias - offset
norm.running_mean[:] = 0.0 # reset E[x] to identity.
```
This same concept can be applied to bn+conv, though the derivation is less straightforward when supporting strided convolution, group convolution, etc. Happy to provide the derivation and code for that if these are features the PyTorch community would be interested in adding to the library directly. I certainly find them useful in practice!
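The four update steps above can be checked numerically in plain Python (no torch), with arbitrary example values:

```python
import math

# BN: z_i = (x_i - mu_i) / sqrt(var_i + eps) * gamma_i + beta_i
# Linear: y = z . w + b
def bn_then_linear(x, mu, var, gamma, beta, w, b, eps=1e-12):
    z = [(xi - mi) / math.sqrt(vi + eps) * gi + bti
         for xi, mi, vi, gi, bti in zip(x, mu, var, gamma, beta)]
    return sum(zi * wi for zi, wi in zip(z, w)) + b

# Fused form from the derivation: W <- W * gamma / sqrt(var + eps),
# b <- b + beta.W_old - mu.W_new
def fused_linear(x, mu, var, gamma, beta, w, b, eps=1e-12):
    w2 = [wi * gi / math.sqrt(vi + eps) for wi, gi, vi in zip(w, gamma, var)]
    b2 = (b + sum(bti * wi for bti, wi in zip(beta, w))
            - sum(mi * w2i for mi, w2i in zip(mu, w2)))
    return sum(xi * w2i for xi, w2i in zip(x, w2)) + b2

args = ([1.0, 2.0], [0.5, -1.0], [0.25, 4.0],
        [2.0, 0.5], [0.1, -0.2], [1.0, -3.0], 0.7)
assert abs(bn_then_linear(*args) - fused_linear(*args)) < 1e-9
```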
### Alternatives
I'm aware that `torch.quantization.fuse_modules` can be augmented using `fuse_custom_config_dict`, but perhaps directly integrating these fusion policies into PyTorch could be helpful.
### Additional context
_No response_
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim | true |
2,775,791,001 | `Dirichlet.mode`: use `dim=` instead of `axis=` | randolf-scholz | closed | [
"module: distributions",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5 | CONTRIBUTOR | `axis=` is undocumented and will raise typing errors when #144197 is merged.
See: https://github.com/pytorch/pytorch/pull/144197#pullrequestreview-2537398866
cc @fritzo @neerajprad @alicanb @nikitaved | true |
2,775,770,361 | ReshapeTransform: added missing argument in docstring | randolf-scholz | closed | [
"module: distributions",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend"
] | 7 | CONTRIBUTOR | See https://github.com/pytorch/pytorch/pull/144197#discussion_r1907336339
cc @fritzo @neerajprad @alicanb @nikitaved | true |
2,775,760,331 | Fix `AffineTransform.sign` | randolf-scholz | closed | [
"module: distributions",
"open source",
"release notes: python_frontend"
] | 4 | CONTRIBUTOR | Fixes a bug where `AffineTransform.sign` could return a `Tensor` instead of `int`.
`AffineTransform` is applied element-wise, so the jacobian is diagonal and the sign of the determinant is the product of the signs of the diagonal entries.
See: https://github.com/pytorch/pytorch/pull/144197#discussion_r1907328379
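Since the element-wise Jacobian is diagonal, the sign of its determinant is just the product of the per-element scale signs. A minimal sketch of the intended `int`-valued computation (a standalone illustration, not the `AffineTransform` code):

```python
def affine_sign(scales):
    # Product of the signs of the diagonal Jacobian entries, returned as an
    # int (the bug was returning a Tensor when the scale was a Tensor).
    sign = 1
    for a in scales:
        if a == 0:
            return 0
        sign *= 1 if a > 0 else -1
    return sign

assert affine_sign([2.0, -3.0]) == -1
assert affine_sign([1.0, 2.0, 3.0]) == 1
assert isinstance(affine_sign([-1.0]), int)
```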
cc @fritzo @neerajprad @alicanb @nikitaved | true |
2,775,751,767 | Update the Triton DeviceInterface in test/inductor/extension_backends/triton/device_interface.py | GeorgeWigley | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 14 | CONTRIBUTOR | Following the changes to how `DeviceInterface` is used in this [PR](https://github.com/pytorch/pytorch/pull/142033), the `DeviceInterface` in `extension_backend/triton/device_interface.py` should by updated to return the `DeviceProperties` instead of raising a NotImplementedError.
This PR mirrors the [changes](https://github.com/pytorch/pytorch/pull/142033/files#diff-06553e25e48e1d60f3030458bc46d52067d3d0c3eef2d5fcea29f7e8126bd7c9L112-R114) made in Dynamo when the PR landed.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,775,687,940 | ROCm SDPA: Ensure attn_mask has the same dtype with q | pytorchbot | closed | [
"module: rocm",
"open source",
"ciflow/rocm"
] | 1 | COLLABORATOR | This is required by current AOTriton's backend.
Fixes NaN when calling SDPA ME backend with `q.dtype() != attn_mask.dtype()` when training llama2 using transformers+deepspeed+pytorch
Corresponding CUDA check seems to be here:
https://github.com/pytorch/pytorch/blob/708ce3c0082d670d9eaff84bc3c43cad4554a75d/aten/src/ATen/native/transformers/cuda/attention.cu#L1331-L1336
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,775,679,163 | Optimizer state cannot get offloaded to CPU | fingertap | closed | [
"triaged",
"module: fsdp"
] | 7 | NONE | ### 🐛 Describe the bug
When I try to offload the FSDP optimizer state to CPU, most states get left on GPU. This only happens with FSDP; it is fine when I use a normal nn.Module.
nn.Module (using `main`):

FSDP (using `fsdp_main`):

Code to reproduce:
```python
from __future__ import annotations
import gc
import time
import pynvml
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
INIT_MEMORY_USED = None
NDIM = 1024 * 1024 * 1024 // 4 # 1GB (4 bytes per element)
def get_memory_stats():
pynvml.nvmlInit()
device_count = pynvml.nvmlDeviceGetCount()
memory_used, total_memory = 0, 0
for i in range(device_count):
handle = pynvml.nvmlDeviceGetHandleByIndex(i)
memory_info = pynvml.nvmlDeviceGetMemoryInfo(handle)
memory_used += memory_info.used
total_memory += memory_info.total
return memory_used / 1024 ** 2, total_memory / 1024 ** 2
def print_memory_used(prefix: str | None = None):
gc.collect()
torch.cuda.empty_cache()
gc.collect()
torch.cuda.empty_cache()
time.sleep(1)
if dist.is_initialized() and dist.get_rank() != 0:
return
global INIT_MEMORY_USED
torch.cuda.synchronize()
prefix = prefix or "Total memory used"
memory_used, total_memory = get_memory_stats()
if INIT_MEMORY_USED is None:
INIT_MEMORY_USED = memory_used
print(
f" {prefix}: \033[93m{memory_used - INIT_MEMORY_USED} MB\033[0m"
f" / \033[92m{total_memory} MB\033[0m"
)
class MemoryTest(torch.nn.Module):
def __init__(self):
super().__init__()
self.layer = torch.nn.Linear(NDIM, 1, bias=False)
def forward(self, x):
return self.layer(x)
def offload_model(model: torch.nn.Module):
for _, param in model.named_parameters():
if hasattr(param, "_local_shard"):
param._local_shard = param._local_shard.to("cpu", non_blocking=True)
param.data = param.data.to("cpu", non_blocking=True)
if param.grad is not None:
param.grad = param.grad.to("cpu", non_blocking=True)
torch.cuda.empty_cache()
def reload_model(model: torch.nn.Module):
for _, param in model.named_parameters():
if hasattr(param, "_local_shard"):
param._local_shard = param._local_shard.to("cuda", non_blocking=True)
param.data = param.data.to("cuda", non_blocking=True)
if param.grad is not None:
param.grad = param.grad.to("cuda", non_blocking=True)
torch.cuda.empty_cache()
def offload_optimizer(optimizer: torch.optim.Optimizer):
optimizer.zero_grad()
for param_group in optimizer.param_groups:
for param in param_group['params']:
state = optimizer.state[param]
for value in state.values():
if isinstance(value, torch.Tensor):
value.data = value.data.to("cpu", non_blocking=True)
torch.cuda.empty_cache()
def reload_optimizer(optimizer: torch.optim.Optimizer):
for param_group in optimizer.param_groups:
for param in param_group['params']:
state = optimizer.state[param]
for value in state.values():
if isinstance(value, torch.Tensor):
value.data = value.data.to("cuda", non_blocking=True)
torch.cuda.empty_cache()
def backward(model: torch.nn.Module, optimizer: torch.optim.Optimizer):
x = torch.randn(1, NDIM).cuda()
y = model(x)
y.backward()
optimizer.step()
del x, y
torch.cuda.empty_cache()
def main():
print_memory_used("Initial")
model = MemoryTest().cuda()
print_memory_used("After allocating model")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
backward(model, optimizer)
print_memory_used("After allocating optimizer and back pass")
offload_model(model)
print_memory_used("After offloading model")
offload_optimizer(optimizer)
print_memory_used("After offloading optimizer")
def fsdp_main():
dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank())
print_memory_used("Initial")
model = FSDP(MemoryTest().cuda())
print_memory_used("After allocating model")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
backward(model, optimizer)
print_memory_used("After allocating optimizer and back pass")
offload_model(model)
print_memory_used("After offloading model")
offload_optimizer(optimizer)
print_memory_used("After offloading optimizer")
dist.destroy_process_group()
if __name__ == "__main__":
fsdp_main()
```
### Versions
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB
Nvidia driver version: 525.147.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-95
Off-line CPU(s) list: 96-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
Frequency boost: enabled
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 96 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.4.0+cu121
[pip3] torchaudio==2.4.0+cu121
[pip3] torchvision==0.19.0+cu121
[pip3] triton==3.0.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 2.4.0+cu121 pypi_0 pypi
[conda] torchaudio 2.4.0+cu121 pypi_0 pypi
[conda] torchvision 0.19.0+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @kwen2501 @chauhang | true |
2,775,657,894 | Set maximum supported version of Python as 3.13 | atalman | closed | [
"Merged",
"topic: not user facing"
] | 5 | CONTRIBUTOR | Same as https://github.com/pytorch/pytorch/pull/119743 Required for Release 2.6.0 | true |
2,775,634,966 | Fix fractional_max_pool lowering in inductor | isuruf | closed | [
"open source",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 9 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144395
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
Fixes https://github.com/pytorch/pytorch/issues/141538 | true |
2,775,602,574 | fix a bug for constant_pad_nd | ywq880611 | open | [
"triaged",
"open source",
"Stale"
] | 13 | CONTRIBUTOR | Fixes #144187
This PR syncs the C++ implementation of `constant_pad_nd` with its Python implementation; please see the issue for details.
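The issue itself isn't restated here, but the 1-d semantics of `constant_pad_nd` (including negative padding, which trims elements) can be sketched in plain Python:

```python
def constant_pad_1d(xs, pad_left, pad_right, value=0.0):
    # Positive padding prepends/appends `value`; negative padding removes
    # elements from that side, mirroring torch's constant_pad_nd semantics.
    start = max(-pad_left, 0)
    end = len(xs) - max(-pad_right, 0)
    core = xs[start:end]
    return [value] * max(pad_left, 0) + core + [value] * max(pad_right, 0)

assert constant_pad_1d([1, 2, 3], 2, 1, 0) == [0, 0, 1, 2, 3, 0]
assert constant_pad_1d([1, 2, 3], -1, 0) == [2, 3]   # negative pad trims
assert constant_pad_1d([1, 2, 3], 0, -2) == [1]
```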
| true |
2,775,356,940 | [3.13t] use sysconfig to check for Python nogil builds | pytorchbot | closed | [
"open source",
"module: dynamo",
"ciflow/inductor"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144361
`sys._is_gil_enabled()` wasn't working in certain cases, according to @atalman
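A minimal sketch of a `sysconfig`-based check (`Py_GIL_DISABLED` is the real config var on free-threaded builds; whether this mirrors the PR's exact logic is an assumption):

```python
import sysconfig

def is_free_threaded_build() -> bool:
    # Py_GIL_DISABLED is 1 on free-threaded (nogil) CPython builds and
    # 0/None on regular builds; unlike sys._is_gil_enabled(), it reports
    # the build flavor rather than the runtime GIL state.
    return bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

assert isinstance(is_free_threaded_build(), bool)
```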
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,775,298,714 | EP_FAIL : Non-zero status code returned while running Conv node. Name:'/features/features.0/Conv' Status Message: Failed to initialize CUDNN Frontend | m0hammadjaan | closed | [
"module: cudnn",
"module: convolution",
"triaged"
] | 2 | NONE | ### 🐛 Describe the bug
I have an EC2 instance of type g5g.xlarge. I have installed the following:
```
CUDA-Toolit: Cuda compilation tools, release 12.4, V12.4.131
CUDNN Version: 9.6.0
Python: 3.12
PyTorch: Compiled from source, as v2.5 is not available for aarch64.
Onnxruntime: Compiled from source, as the distribution package is not available for the architecture
Architecture: aarch64
OS: Amazon Linux 2023
```
On the following code:
```
def to_numpy(tensor):
    return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
# compute ONNX Runtime output prediction
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(input_batch)}
ort_outs = ort_session.run(None, ort_inputs)
```
I am getting the following Error:
```
EP Error: [ONNXRuntimeError] : 11 : EP_FAIL : Non-zero status code returned while running Conv node. Name:'/features/features.0/Conv' Status Message: Failed to initialize CUDNN Frontend/home/ec2-user/onnxruntime/onnxruntime/core/providers/cuda/cudnn_fe_call.cc:99 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, SUCCTYPE, const char*, const char*, int) [with ERRTYPE = cudnn_frontend::error_object; bool THRW = true; SUCCTYPE = cudnn_frontend::error_code_t; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] /home/ec2-user/onnxruntime/onnxruntime/core/providers/cuda/cudnn_fe_call.cc:91 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, SUCCTYPE, const char*, const char*, int) [with ERRTYPE = cudnn_frontend::error_object; bool THRW = true; SUCCTYPE = cudnn_frontend::error_code_t; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] CUDNN_FE failure 11: CUDNN_BACKEND_API_FAILED ; GPU=0 ; hostname=sg-gpu-1 ; file=/home/ec2-user/onnxruntime/onnxruntime/core/providers/cuda/nn/conv.cc ; line=224 ; expr=s_.cudnn_fe_graph->build_operation_graph(handle);
with the cudnn frontend json:
{"context":{"compute_data_type":"FLOAT","intermediate_data_type":"FLOAT","io_data_type":"FLOAT","name":"","sm_count":-1},"cudnn_backend_version":"9.6.0","cudnn_frontend_version":10700,"json_version":"1.0","nodes":[{"compute_data_type":"FLOAT","dilation":[1,1],"inputs":{"W":"w","X":"x"},"math_mode":"CROSS_CORRELATION","name":"","outputs":{"Y":"::Y"},"post_padding":[2,2],"pre_padding":[2,2],"stride":[4,4],"tag":"CONV_FPROP"}],"tensors":{"::Y":{"data_type":"FLOAT","dim":[1,64,55,55],"is_pass_by_value":false,"is_virtual":false,"name":"::Y","pass_by_value":null,"reordering_type":"NONE","stride":[193600,3025,55,1],"uid":0,"uid_assigned":false},"w":{"data_type":"FLOAT","dim":[64,3,11,11],"is_pass_by_value":false,"is_virtual":false,"name":"w","pass_by_value":null,"reordering_type":"NONE","stride":[363,121,11,1],"uid":1,"uid_assigned":true},"x":{"data_type":"FLOAT","dim":[1,3,224,224],"is_pass_by_value":false,"is_virtual":false,"name":"x","pass_by_value":null,"reordering_type":"NONE","stride":[150528,50176,224,1],"uid":0,"uid_assigned":false}}} using ['CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CPUExecutionProvider'] and retrying.
2025-01-08 12:06:10.797719929 [E:onnxruntime:Default, cudnn_fe_call.cc:33 CudaErrString<cudnn_frontend::error_object>] CUDNN_BACKEND_TENSOR_DESCRIPTOR cudnnFinalize failed cudnn_status: CUDNN_STATUS_SUBLIBRARY_LOADING_FAILED
2025-01-08 12:06:10.797924540 [E:onnxruntime:, sequential_executor.cc:516 ExecuteKernel] Non-zero status code returned while running Conv node. Name:'/features/features.0/Conv' Status Message: Failed to initialize CUDNN Frontend/home/ec2-user/onnxruntime/onnxruntime/core/providers/cuda/cudnn_fe_call.cc:99 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, SUCCTYPE, const char*, const char*, int) [with ERRTYPE = cudnn_frontend::error_object; bool THRW = true; SUCCTYPE = cudnn_frontend::error_code_t; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] /home/ec2-user/onnxruntime/onnxruntime/core/providers/cuda/cudnn_fe_call.cc:91 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, SUCCTYPE, const char*, const char*, int) [with ERRTYPE = cudnn_frontend::error_object; bool THRW = true; SUCCTYPE = cudnn_frontend::error_code_t; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] CUDNN_FE failure 11: CUDNN_BACKEND_API_FAILED ; GPU=0 ; hostname=sg-gpu-1 ; file=/home/ec2-user/onnxruntime/onnxruntime/core/providers/cuda/nn/conv.cc ; line=224 ; expr=s_.cudnn_fe_graph->build_operation_graph(handle);
with the cudnn frontend json:
{"context":{"compute_data_type":"FLOAT","intermediate_data_type":"FLOAT","io_data_type":"FLOAT","name":"","sm_count":-1},"cudnn_backend_version":"9.6.0","cudnn_frontend_version":10700,"json_version":"1.0","nodes":[{"compute_data_type":"FLOAT","dilation":[1,1],"inputs":{"W":"w","X":"x"},"math_mode":"CROSS_CORRELATION","name":"","outputs":{"Y":"::Y"},"post_padding":[2,2],"pre_padding":[2,2],"stride":[4,4],"tag":"CONV_FPROP"}],"tensors":{"::Y":{"data_type":"FLOAT","dim":[1,64,55,55],"is_pass_by_value":false,"is_virtual":false,"name":"::Y","pass_by_value":null,"reordering_type":"NONE","stride":[193600,3025,55,1],"uid":0,"uid_assigned":false},"w":{"data_type":"FLOAT","dim":[64,3,11,11],"is_pass_by_value":false,"is_virtual":false,"name":"w","pass_by_value":null,"reordering_type":"NONE","stride":[363,121,11,1],"uid":1,"uid_assigned":true},"x":{"data_type":"FLOAT","dim":[1,3,224,224],"is_pass_by_value":false,"is_virtual":false,"name":"x","pass_by_value":null,"reordering_type":"NONE","stride":[150528,50176,224,1],"uid":0,"uid_assigned":false}}}
```
However, the output of the code below suggests that the installation itself is fine:
```
print("Pytorch CUDA:", torch.cuda.is_available())
print("Available Providers:", onnxruntime.get_available_providers())
print("Active Providers for this session:", ort_session.get_providers())
```
Output:
```
Pytorch CUDA: True
Available Providers: ['CUDAExecutionProvider', 'CPUExecutionProvider']
Active Providers for this session: ['CUDAExecutionProvider', 'CPUExecutionProvider']
```
In order to resolve this, I have installed [nvidia_cudnn_frontend](https://github.com/NVIDIA/cudnn-frontend) v1.9.0 from source, but the issue is still not resolved.
nvidia-smi is working. Its version is: **NVIDIA-SMI 550.127.08**
nvcc is also working fine.
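One extra sanity check worth running (a suggestion added here, not part of the original report) is whether PyTorch itself can see cuDNN:

```python
import torch

# Queries the cuDNN backend that PyTorch was built against.
print("cuDNN available:", torch.backends.cudnn.is_available())
print("cuDNN version:", torch.backends.cudnn.version())  # e.g. 90600 for cuDNN 9.6.0
```

If `version()` reports something other than the installed 9.6.0, PyTorch may be loading a different cuDNN than ONNX Runtime does.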
```
nvidia-cudnn-frontend==1.9.0
nvtx==0.2.10
onnx==1.17.0
onnxruntime-gpu==1.20.1
optree==0.13.1
torch==2.5.0a0+gita8d6afb
torchaudio==2.5.1
torchvision==0.20.1
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.0a0+gita8d6afb
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Amazon Linux 2023.6.20241212 (aarch64)
GCC version: (GCC) 11.4.1 20230605 (Red Hat 11.4.1-2)
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.34
Python version: 3.12.0 (main, Jan 5 2025, 18:22:01) [GCC 11.4.1 20230605 (Red Hat 11.4.1-2)] (64-bit runtime)
Python platform: Linux-6.1.119-129.201.amzn2023.aarch64-aarch64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA T4G
Nvidia driver version: 550.127.08
cuDNN version: Probably one of the following:
/usr/local/cuda-12.4/targets/sbsa-linux/lib/libcudnn.so.9
/usr/local/cuda-12.4/targets/sbsa-linux/lib/libcudnn_adv.so.9
/usr/local/cuda-12.4/targets/sbsa-linux/lib/libcudnn_cnn.so.9
/usr/local/cuda-12.4/targets/sbsa-linux/lib/libcudnn_engines_precompiled.so.9
/usr/local/cuda-12.4/targets/sbsa-linux/lib/libcudnn_engines_runtime_compiled.so.9
/usr/local/cuda-12.4/targets/sbsa-linux/lib/libcudnn_graph.so.9
/usr/local/cuda-12.4/targets/sbsa-linux/lib/libcudnn_heuristic.so.9
/usr/local/cuda-12.4/targets/sbsa-linux/lib/libcudnn_ops.so.9
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: ARM
Model name: Neoverse-N1
Model: 1
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: r3p1
BogoMIPS: 243.75
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
L1d cache: 256 KiB (4 instances)
L1i cache: 256 KiB (4 instances)
L2 cache: 4 MiB (4 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cudnn-frontend==1.9.0
[pip3] nvtx==0.2.10
[pip3] onnx==1.17.0
[pip3] onnxruntime-gpu==1.20.1
[pip3] optree==0.13.1
[pip3] torch==2.5.0a0+gita8d6afb
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[conda] Could not collect
```
cc @csarofeen @ptrblck @xwang233 @eqy | true |
2,775,261,339 | Fix a bug for conj_physical | ywq880611 | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Fixes #141426
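For context, `conj_physical` materializes the conjugate in memory; a minimal sketch of the invariant at stake (written for this note, not taken from the PR's test suite): the result should flip the imaginary sign without changing the dtype.

```python
import torch

x = torch.tensor([1 + 2j, 3 - 4j], dtype=torch.complex64)
y = torch.conj_physical(x)

# The conjugate is materialized (no lazy conj bit) and the input dtype is preserved.
print(y)        # tensor([1.-2.j, 3.+4.j])
print(y.dtype)  # torch.complex64
```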
Fixes a bug in the previous [PR](https://github.com/pytorch/pytorch/pull/141427): it should not convert the data type for conj. | true |
2,775,172,890 | `torch.linalg.solve`: doc update on dealing with rank-deficient systems which admit a solution | nikitaved | closed | [
"triaged",
"open source",
"module: linear algebra",
"Stale",
"release notes: linalg_frontend",
"topic: docs"
] | 6 | COLLABORATOR | As per title.
cc @jianyuh @pearu @mruberry @walterddr @xwang233 @Lezcano | true |
2,775,122,642 | Fix lowering to inductor IR for triton CPU | kundaMwiza | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 15 | CONTRIBUTOR | Example failing test:
`pytest -s test_torchinductor_opinfo.py -k test_comprehensive_special_polygamma_special_polygamma_n_0_cpu_float32` when using triton CPU.
Failure:
```shell
triton.compiler.errors.CompilationError: at 10:11:
def triton_poi_fused_polygamma_0(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):
xnumel = 25
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:]
xmask = xindex < xnumel
x0 = xindex
tmp0 = tl.load(in_ptr0 + (x0), xmask)
tmp1 = 1.0
tl.static_assert(tmp1.dtype == tl.float32)
tmp2 = ops.polygamma(tmp1, tmp0)
^
NameError('ops is not defined')
```
This occurs because the registered triton fallbacks are not used during the lowering to inductor IR.
The problematic check is marked in the excerpt below, from https://github.com/pytorch/pytorch/blob/6bc17b0725f8adc1b7293dd44c90e8a6c495ea03/torch/_inductor/lowering.py#L572
```python
def make_pointwise(
fn,
override_return_dtype=None,
override_device=None,
override_fn_when_input_bool=None,
override_fn_when_gpu_float64=None,
allow_alpha=False,
triton_fallback=None,
):
def inner(*inputs: TensorBox, alpha=None):
if triton_fallback is not None and any(
isinstance(inp, IRNode) and is_triton(inp) for inp in inputs <--- is_triton should return True when using triton CPU
):
assert not allow_alpha # not implemented
return triton_fallback(*inputs)
inputs = promote_constants(inputs, override_return_dtype)
if allow_alpha:
if alpha is not None and alpha != 1:
inputs = list(inputs)
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,775,080,085 | Add test cases of fp8 datatypes in pt2e | yintong-lu | closed | [
"triaged",
"open source",
"Stale",
"release notes: quantization"
] | 2 | CONTRIBUTOR | As fp8 datatypes have been added to torch export serialization, this PR aims to add test cases of fp8 datatypes in pt2e quantization.
| true |
2,775,037,908 | Adapt Dynamo tests to HPUs using instantiate_device_type_tests | amathewc | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 26 | CONTRIBUTOR | **MOTIVATION**
We recently integrated support for Intel Gaudi devices (identified as 'hpu') into the common_device_type framework via the pull request at https://github.com/pytorch/pytorch/pull/126970. This integration allows tests to be automatically instantiated for Gaudi devices upon loading the relevant library. Building on this development, the current pull request extends the utility of these hooks by adapting selected CUDA tests to operate on Gaudi devices. Additionally, we have confirmed that these modifications do not interfere with the existing tests on CUDA devices.
Other accelerators can also extend the functionality by adding the device in the devices list. ( For eg: xpu )
**CHANGES**
- Create a separate class for test functions running on CUDA devices
- Extend the functionality of these tests to include HPUs
- Use instantiate_device_type_tests with targeted attributes to generate device-specific test instances within the new classes
- Apply the skipIfHPU decorator to bypass tests that are not yet compatible with HPU devices
We previously submitted these changes in https://github.com/pytorch/pytorch/pull/140131, but deleted that PR due to merge conflicts and other issues.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @ankurneog
| true |
2,775,015,191 | cudagraph trees support handling live tensors from a previous run? | wbigat | closed | [
"triaged",
"module: cuda graphs",
"oncall: pt2"
] | 3 | CONTRIBUTOR | ### 🐛 Describe the bug
Hello, when I try CUDAGraph Trees, I find the following example in `https://pytorch.org/docs/2.4/torch.compiler_cudagraph_trees.html#cudagraph-trees`:
```
import torch
@torch.compile(mode="reduce-overhead")
def my_model(x):
y = torch.matmul(x, x)
return y
x = torch.randn(10, 10)
y1 = my_model(x)
y2 = my_model(x)
print(y1)
# RuntimeError: Error: accessing tensor output of CUDAGraphs that has been overwritten by a subsequent run.
```
According to the description in the document, a RuntimeError is expected when this code is executed, but when I actually ran it, it completed successfully. Could you please confirm whether the documentation is wrong? Thanks a lot.
cc @mcarilli @ezyang @eellison @penguinwu @BoyuanFeng @chauhang
### Versions
torch 2.4.1
NVIDIA-SMI 560.35.05 Driver Version: 560.35.05 CUDA Version: 12.6 | true |
2,774,927,870 | [Intel GPU] fix memory leak in deconv backward | jianyizh | closed | [
"module: cpu",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"ciflow/xpu",
"release notes: xpu",
"module: xpu"
] | 13 | CONTRIBUTOR | Fixes #143807
We need to manage the oneDNN scratchpad in PyTorch; otherwise oneDNN will always allocate scratchpad memory during primitive execution, causing a memory leak.
cc @gujinghui @EikanWang @fengyuan14 @guangyey @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | true |
2,774,913,306 | ``torch.linalg.eigh`` produces significant errors compared to ``numpy.linalg.eigh`` | vuonghy2442 | closed | [
"triaged",
"module: linear algebra"
] | 8 | NONE | ### ``torch.linalg.eigh`` producing inaccurate eigenvalues/eigenvectors compared to NumPy
### Steps to Reproduce
1. Use the matrix A provided in the code below.
2. Compute the eigenvalues and eigenvectors using both torch.linalg.eigh and np.linalg.eigh.
3. Compare the results.
```python
import torch
import numpy as np
A = np.array([[ 1.60782897e+00, 2.28964731e-01, 5.37528796e-03, -3.68031830e-01, 7.76133314e-02, 1.95910275e-01, -3.12402956e-02, 3.67419720e-01, 1.22131474e-01, -2.53661489e+00, 2.17903289e-06, 5.94051089e-05, 2.33369647e-05, -3.33767384e-04, 7.98902474e-05, 9.64686275e-04, -4.88465303e-04, 2.36044801e-03, 4.49522515e-04, -4.29443026e+00],
[ 2.28964731e-01, 2.41090322e+00, -1.88907310e-02, -1.11321688e+00, 2.24941388e-01, 5.75562716e-01, -1.19260252e-01, 1.07684803e+00, 3.36590528e-01, -5.06800270e+00, -2.24896939e-06, -4.84753400e-06, -7.50925392e-06, 1.13487244e-04, -3.26065347e-05, -4.61697578e-04, 3.01518885e-04, -1.26068108e-03, 9.04885674e-05, 1.70146358e+00],
[ 5.37528796e-03, -1.88907310e-02, 1.62140822e+00, -1.66513994e-02, -1.20639233e-02, 2.17823274e-02, 2.43900251e-03, 2.59594470e-02, -1.06401583e-02, 1.67924047e-01, 2.05849938e-06, 2.44657276e-05, 1.31483248e-05, -1.48024876e-04, 6.39846548e-05, 5.71987592e-04, 3.61403363e-06, 1.01836876e-03, 1.68582925e-03, -3.52031112e+00],
[-3.68031830e-01, -1.11321688e+00, -1.66513994e-02, 3.52388382e+00, -3.89591247e-01, -8.86485994e-01, 3.69597822e-01, -2.19188643e+00, -4.24265265e-01, 3.27274895e+00, -2.42434326e-05, -2.98958272e-04, -1.23632257e-04, 1.87904201e-03, -5.15639316e-04, -5.35614789e-03, 2.87756487e-03, -1.53830992e-02, -4.53177467e-03, 2.52128124e+01],
[ 7.76133314e-02, 2.24941388e-01, -1.20639233e-02, -3.89591247e-01, 1.93465853e+00, 2.48911351e-01, -2.28518508e-02, 4.19600159e-01, 1.34637073e-01, 4.95270640e-01, 2.64251721e-05, 3.15882266e-04, 1.41064636e-04, -1.91451237e-03, 6.46093860e-04, 6.21308386e-03, -2.34547607e-03, 1.57597680e-02, 8.13580025e-03, -3.23177299e+01],
[ 1.95910275e-01, 5.75562716e-01, 2.17823274e-02, -8.86485994e-01, 2.48911351e-01, 2.98399925e+00, -4.25876319e-01, 9.31742251e-01, -4.73557204e-01, 9.36427712e-01, 1.56900787e-05, 1.95616623e-04, 9.17701400e-05, -1.15976110e-03, 4.29244712e-04, 4.25980613e-03, -8.00579088e-04, 8.80736019e-03, 8.03760253e-03, -2.24972534e+01],
[-3.12402956e-02, -1.19260252e-01, 2.43900251e-03, 3.69597822e-01, -2.28518508e-02, -4.25876319e-01, 3.06580067e+00, -6.48983538e-01, 4.06311929e-01, 1.37230349e+00, 7.44865829e-05, 8.81816493e-04, 3.89039924e-04, -5.36584575e-03, 1.68655277e-03, 1.71863157e-02, -5.95929101e-03, 4.48325910e-02, 2.24911068e-02, -8.79636688e+01],
[ 3.67419720e-01, 1.07684803e+00, 2.59594470e-02, -2.19188643e+00, 4.19600159e-01, 9.31742251e-01, -6.48983538e-01, 5.48731613e+00, -6.58562034e-02, -2.43317747e+00, -2.27659766e-05, -2.60235975e-04, -1.24252401e-04, 1.58591196e-03, -5.84547408e-04, -5.63996658e-03, 1.13803300e-03, -1.05895922e-02, -1.21539282e-02, 2.72431316e+01],
[ 1.22131474e-01, 3.36590528e-01, -1.06401583e-02, -4.24265265e-01, 1.34637073e-01, -4.73557204e-01, 4.06311929e-01, -6.58562034e-02, 5.43583727e+00, -2.92779040e+00, 2.11733277e-06, 3.67360190e-05, 3.78659461e-05, -1.60590280e-04, 2.01643910e-04, 1.61331519e-03, 1.96514279e-03, -1.50358269e-03, 1.71575230e-02, -1.18784990e+01],
[-2.53661489e+00, -5.06800270e+00, 1.67924047e-01, 3.27274895e+00, 4.95270640e-01, 9.36427712e-01, 1.37230349e+00, -2.43317747e+00, -2.92779040e+00, 8.76316345e+02, -1.22874253e-03, -1.46462396e-02, -6.36728108e-03, 8.91621411e-02, -2.74890810e-02, -2.80071974e-01, 1.21408194e-01, -7.62682676e-01, -3.09633642e-01, 1.52729504e+03],
[ 2.17903289e-06, -2.24896939e-06, 2.05849938e-06, -2.42434326e-05, 2.64251721e-05, 1.56900787e-05, 7.44865829e-05, -2.27659766e-05, 2.11733277e-06, -1.22874253e-03, 1.33819203e-03, 1.58352833e-02, 6.75189588e-03, -9.70604271e-02, 2.87030358e-02, 2.97664732e-01, -1.43668085e-01, 8.35517287e-01, 2.84892887e-01, -1.46552661e+03],
[ 5.94051089e-05, -4.84753400e-06, 2.44657276e-05, -2.98958272e-04, 3.15882266e-04, 1.95616623e-04, 8.81816493e-04, -2.60235975e-04, 3.67360190e-05, -1.46462396e-02, 1.58352833e-02, 1.87388241e-01, 7.99007788e-02, -1.14855564e+00, 3.39666307e-01, 3.52251792e+00, -1.69990075e+00, 9.88676643e+00, 3.37287593e+00, -1.73432988e+04],
[ 2.33369647e-05, -7.50925392e-06, 1.31483248e-05, -1.23632257e-04, 1.41064636e-04, 9.17701400e-05, 3.89039924e-04, -1.24252401e-04, 3.78659461e-05, -6.36728108e-03, 6.75189588e-03, 7.99007788e-02, 3.40821631e-02, -4.89710629e-01, 1.44927666e-01, 1.50249171e+00, -7.23849893e-01, 4.21417427e+00, 1.44301021e+00, -7.40001318e+03],
[-3.33767384e-04, 1.13487244e-04, -1.48024876e-04, 1.87904201e-03, -1.91451237e-03, -1.15976110e-03, -5.36584575e-03, 1.58591196e-03, -1.60590280e-04, 8.91621411e-02, -9.70604271e-02, -1.14855564e+00, -4.89710629e-01, 7.04013824e+00, -2.08170390e+00, -2.15894566e+01, 1.04222698e+01, -6.06031189e+01, -2.06613407e+01, 1.06286359e+05],
[ 7.98902474e-05, -3.26065347e-05, 6.39846548e-05, -5.15639316e-04, 6.46093860e-04, 4.29244712e-04, 1.68655277e-03, -5.84547408e-04, 2.01643910e-04, -2.74890810e-02, 2.87030358e-02, 3.39666307e-01, 1.44927666e-01, -2.08170390e+00, 6.16610885e-01, 6.38967180e+00, -3.07360172e+00, 1.79079399e+01, 6.14985657e+00, -3.14770039e+04],
[ 9.64686275e-04, -4.61697578e-04, 5.71987592e-04, -5.35614789e-03, 6.21308386e-03, 4.25980613e-03, 1.71863157e-02, -5.63996658e-03, 1.61331519e-03, -2.80071974e-01, 2.97664732e-01, 3.52251792e+00, 1.50249171e+00, -2.15894566e+01, 6.38967180e+00, 6.62452698e+01, -3.19060764e+01, 1.85773941e+02, 6.36354866e+01, -3.26240062e+05],
[-4.86891717e-04, 3.01372260e-04, 3.62051651e-06, 2.87755951e-03, -2.34574080e-03, -8.00948590e-04, -5.95919322e-03, 1.13837048e-03, 1.96490344e-03, 1.21410340e-01, -1.43668100e-01, -1.69990110e+00, -7.23848820e-01, 1.04222832e+01, -3.07360816e+00, -3.19060802e+01, 1.55375633e+01, -8.98448792e+01, -3.00951061e+01, 1.56913156e+05],
[ 2.37344205e-03, -1.26130879e-03, 1.01752952e-03, -1.53824687e-02, 1.57596469e-02, 8.80961865e-03, 4.48332652e-02, -1.05940849e-02, -1.50594860e-03, -7.62646675e-01, 8.35519135e-01, 9.88673401e+00, 4.21416092e+00, -6.06031990e+01, 1.79079304e+01, 1.85773956e+02, -8.98448792e+01, 5.21939880e+02, 1.77149094e+02, -9.14563250e+05],
[ 4.47247177e-04, 9.06437635e-05, 1.68623496e-03, -4.53158468e-03, 8.13601911e-03, 8.03825632e-03, 2.24906951e-02, -1.21534169e-02, 1.71582494e-02, -3.09652269e-01, 2.84893215e-01, 3.37288260e+00, 1.44301367e+00, -2.06612988e+01, 6.14986134e+00, 6.36355400e+01, -3.00951061e+01, 1.77149094e+02, 6.36062050e+01, -3.14022406e+05],
[-4.30882263e+00, 1.70046997e+00, -3.52318573e+00, 2.52160645e+01, -3.23214111e+01, -2.25007477e+01, -8.79653473e+01, 2.72448120e+01, -1.18774796e+01, 1.52729688e+03, -1.46553210e+03, -1.73432891e+04, -7.40001074e+03, 1.06286719e+05, -3.14769004e+04, -3.26239531e+05, 1.56913156e+05, -9.14563250e+05, -3.14022406e+05, 1.60835072e+09]], dtype=np.float32)
A = A + A.T # symmetrize
L, Q = np.linalg.eigh(A)
meo = Q @ np.diag(L) @ Q.T
print('numpy:', np.max(np.abs(Q @ np.diag(L) @ Q.T - A) / A)) # 1e-5 GOOD
L, Q = torch.linalg.eigh(torch.from_numpy(A))
print('torch cpu:', torch.max(torch.abs(Q @ torch.diag(L) @ Q.T - A) / A).item()) # 1584 BAD
L, Q = torch.linalg.eigh(torch.from_numpy(A), UPLO="U")
print('torch cpu upper:', torch.max(torch.abs(Q @ torch.diag(L) @ Q.T - A) / A).item()) # 0.11 OKAY
A_cuda = torch.from_numpy(A).to("cuda:0")
L, Q = torch.linalg.eigh(A_cuda)
print('torch gpu:', torch.max(torch.abs((Q @ torch.diag(L) @ Q.T) - A_cuda) / A_cuda).item()) # 18295 BAD
L, Q = torch.linalg.eigh(A_cuda, UPLO="U")
print('torch gpu upper:', torch.max(torch.abs((Q @ torch.diag(L) @ Q.T) - A_cuda) / A_cuda).item()) # 4687 BAD
```
### Observed Behavior:
The relative error of torch.linalg.eigh results is significantly larger than that of numpy.linalg.eigh.
Using UPLO="U" improves results, but does not resolve issues on the GPU.
Some eigenvalues returned by torch are negative
### Expected Behavior:
Results from torch.linalg.eigh should match the accuracy of numpy.linalg.eigh for symmetric matrices.
The eigenvalues shouldn't be negative
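Two things worth noting when interpreting these numbers (observations added here, not a resolution of the report): A's entries span roughly nine orders of magnitude, so float32 has little headroom for such conditioning, and dividing the residual elementwise by A amplifies error at the near-zero entries. Computing in float64 usually shrinks the reconstruction error dramatically; a quick illustration on a random symmetric matrix (not the matrix above):

```python
import torch

torch.manual_seed(0)
a = torch.randn(20, 20)
a = a + a.T  # symmetrize

l32, q32 = torch.linalg.eigh(a)
l64, q64 = torch.linalg.eigh(a.double())

err32 = (q32 @ torch.diag(l32) @ q32.T - a).abs().max().item()
err64 = (q64 @ torch.diag(l64) @ q64.T - a.double()).abs().max().item()
print(f"float32 reconstruction error: {err32:.1e}")  # ~1e-6
print(f"float64 reconstruction error: {err64:.1e}")  # ~1e-14
```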
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 19.1.0 (https://github.com/llvm/llvm-project.git a4bf6cd7cfb1a1421ba92bca9d017b49936c55e4)
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-102-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7643 48-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3640.9170
CPU min MHz: 1500.0000
BogoMIPS: 4600.14
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm sme sev sev_es
Virtualization: AMD-V
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 48 MiB (96 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-11,96-107
NUMA node1 CPU(s): 12-23,108-119
NUMA node2 CPU(s): 24-35,120-131
NUMA node3 CPU(s): 36-47,132-143
NUMA node4 CPU(s): 48-59,144-155
NUMA node5 CPU(s): 60-71,156-167
NUMA node6 CPU(s): 72-83,168-179
NUMA node7 CPU(s): 84-95,180-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.4
[pip3] numpy-groupies==0.11.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.10
[pip3] onnx==1.16.1
[pip3] onnx2torch==1.5.13
[pip3] onnxruntime-gpu==1.18.0
[pip3] pynvjitlink-cu12==0.4.0
[pip3] torch==2.5.1
[pip3] torch-summary==1.4.5
[pip3] torch-tb-profiler==0.4.3
[pip3] torchaudio==2.5.1
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.3.0.post0
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] numpy 2.1.3 pypi_0 pypi
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | true |
2,774,838,127 | ToTensor seems to have a memory leak | angel-yi | closed | [] | 0 | NONE | ### 🐛 Describe the bug
```python
tensor = transforms.ToTensor()(image)
tensor = transforms.Normalize(mean=self.cfg['MEAN'], std=self.cfg['STD'], inplace=True)(tensor)
tensor = tensor.unsqueeze_(0)
tensor = tensor.to(self.device)
```
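One way to separate Python-level growth from native-allocator behaviour (a sketch written for this note; assumes Linux and reads RSS from /proc) is to watch the process RSS across a loop of comparable tensor allocations:

```python
import gc
import torch

def rss_kb():
    # Resident set size of the current process in kB (Linux-specific).
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    raise RuntimeError("VmRSS not found")

baseline = None
for i in range(200):
    t = torch.empty(3, 1080, 1920)  # stand-in for a ToTensor output
    del t
    gc.collect()
    if i == 10:
        baseline = rss_kb()  # measure after the allocator has warmed up

growth = rss_kb() - baseline
print(f"RSS growth after warm-up: {growth} kB")
```

If RSS stays flat here but climbs with the real pipeline, the growth is more likely in image decoding or references kept alive than in the tensor conversion itself.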
Profiling with memory_profiler shows memory accumulating continuously across calls:
```
Line # Mem usage Increment Occurrences Line Contents
=============================================================
2690 3370.6 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3373.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3374.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3374.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3374.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3374.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3374.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3374.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3374.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3374.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3379.9 MiB 5.8 MiB 1 tensor = transforms.ToTensor()(image)
2690 3379.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3379.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3379.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3379.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3379.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3379.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3379.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3379.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3379.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3379.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3379.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3392.6 MiB 12.8 MiB 1 tensor = transforms.ToTensor()(image)
2690 3397.1 MiB 4.5 MiB 1 tensor = transforms.ToTensor()(image)
2690 3403.9 MiB 6.8 MiB 1 tensor = transforms.ToTensor()(image)
2690 3403.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3403.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3403.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3403.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3403.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3410.4 MiB 6.5 MiB 1 tensor = transforms.ToTensor()(image)
2690 3410.4 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3417.9 MiB 7.5 MiB 1 tensor = transforms.ToTensor()(image)
2690 3424.4 MiB 6.5 MiB 1 tensor = transforms.ToTensor()(image)
2690 3424.4 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3424.4 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3431.6 MiB 7.2 MiB 1 tensor = transforms.ToTensor()(image)
2690 3438.1 MiB 6.5 MiB 1 tensor = transforms.ToTensor()(image)
2690 3438.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3443.6 MiB 5.5 MiB 1 tensor = transforms.ToTensor()(image)
2690 3443.6 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3443.6 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3448.9 MiB 5.2 MiB 1 tensor = transforms.ToTensor()(image)
2690 3448.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3456.1 MiB 7.2 MiB 1 tensor = transforms.ToTensor()(image)
2690 3456.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3461.9 MiB 5.8 MiB 1 tensor = transforms.ToTensor()(image)
2690 3468.4 MiB 6.5 MiB 1 tensor = transforms.ToTensor()(image)
2690 3468.4 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3474.1 MiB 5.8 MiB 1 tensor = transforms.ToTensor()(image)
2690 3474.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3474.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3474.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3474.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3481.4 MiB 7.2 MiB 1 tensor = transforms.ToTensor()(image)
2690 3481.4 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3481.4 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3481.4 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3481.4 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3488.9 MiB 7.5 MiB 1 tensor = transforms.ToTensor()(image)
2690 3488.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3488.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3488.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3488.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3488.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3488.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3489.1 MiB 0.2 MiB 1 tensor = transforms.ToTensor()(image)
2690 3496.6 MiB 7.2 MiB 1 tensor = transforms.ToTensor()(image)
2690 3496.6 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3496.6 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3496.6 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3503.4 MiB 6.8 MiB 1 tensor = transforms.ToTensor()(image)
2690 3503.4 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3503.4 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3503.4 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3513.4 MiB 10.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3513.4 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3521.1 MiB 7.5 MiB 1 tensor = transforms.ToTensor()(image)
2690 3521.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3528.6 MiB 7.5 MiB 1 tensor = transforms.ToTensor()(image)
2690 3528.6 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3528.6 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3535.9 MiB 7.2 MiB 1 tensor = transforms.ToTensor()(image)
2690 3535.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3536.1 MiB 0.2 MiB 1 tensor = transforms.ToTensor()(image)
2690 3543.9 MiB 7.5 MiB 1 tensor = transforms.ToTensor()(image)
2690 3543.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3543.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3543.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3543.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3543.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3549.1 MiB 5.2 MiB 1 tensor = transforms.ToTensor()(image)
2690 3555.4 MiB 6.2 MiB 1 tensor = transforms.ToTensor()(image)
2690 3555.4 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3555.4 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3555.4 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3560.1 MiB 4.8 MiB 1 tensor = transforms.ToTensor()(image)
2690 3560.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3560.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3567.9 MiB 7.8 MiB 1 tensor = transforms.ToTensor()(image)
2690 3568.1 MiB 0.2 MiB 1 tensor = transforms.ToTensor()(image)
2690 3575.9 MiB 7.5 MiB 1 tensor = transforms.ToTensor()(image)
2690 3575.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3575.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3575.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3575.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3575.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3575.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3575.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3575.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3576.9 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3584.1 MiB 7.2 MiB 1 tensor = transforms.ToTensor()(image)
2690 3584.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3591.1 MiB 7.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3591.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3591.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3593.4 MiB 2.2 MiB 1 tensor = transforms.ToTensor()(image)
2690 3597.6 MiB 4.2 MiB 1 tensor = transforms.ToTensor()(image)
2690 3597.6 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3602.4 MiB 4.8 MiB 1 tensor = transforms.ToTensor()(image)
2690 3603.1 MiB 0.8 MiB 1 tensor = transforms.ToTensor()(image)
2690 3603.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3603.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3603.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3604.1 MiB 1.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3604.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3604.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3604.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3604.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3604.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3604.6 MiB 0.5 MiB 1 tensor = transforms.ToTensor()(image)
2690 3604.9 MiB 0.2 MiB 1 tensor = transforms.ToTensor()(image)
2690 3609.9 MiB 5.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3611.4 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3611.4 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3618.4 MiB 7.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3619.6 MiB 1.2 MiB 1 tensor = transforms.ToTensor()(image)
2690 3623.1 MiB 3.5 MiB 1 tensor = transforms.ToTensor()(image)
2690 3623.1 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
2690 3632.6 MiB 9.5 MiB 1 tensor = transforms.ToTensor()(image)
2690 3632.6 MiB 0.0 MiB 1 tensor = transforms.ToTensor()(image)
```
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.35
Python version: 3.8.19 (default, Mar 20 2024, 19:58:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-47-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla M40 24GB
GPU 1: NVIDIA GeForce GTX 1080 Ti
Nvidia driver version: 535.216.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 1
CPU max MHz: 3000.0000
CPU min MHz: 1200.0000
BogoMIPS: 4190.07
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts vnmi md_clear flush_l1d
Virtualization: VT-x
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 4 MiB (16 instances)
L3 cache: 40 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] nvidia-cublas-cu11==11.10.3.66
[pip3] nvidia-cuda-cupti-cu11==11.7.101
[pip3] nvidia-cuda-nvrtc-cu11==11.7.99
[pip3] nvidia-cuda-runtime-cu11==11.7.99
[pip3] nvidia-cudnn-cu11==8.5.0.96
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-curand-cu11==10.2.10.91
[pip3] nvidia-cusolver-cu11==11.4.0.1
[pip3] nvidia-cusparse-cu11==11.7.4.91
[pip3] nvidia-nccl-cu11==2.14.3
[pip3] nvidia-nvtx-cu11==11.7.91
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 11.8.89 0 nvidia
[conda] cuda-cupti 11.8.87 0 nvidia
[conda] cuda-libraries 11.8.0 0 nvidia
[conda] cuda-nvrtc 11.8.89 0 nvidia
[conda] cuda-nvtx 11.8.86 0 nvidia
[conda] cuda-runtime 11.8.0 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
[conda] libcublas 11.11.3.6 0 nvidia
[conda] libcufft 10.9.0.58 0 nvidia
[conda] libcurand 10.3.5.147 0 nvidia
[conda] libcusolver 11.4.1.48 0 nvidia
[conda] libcusparse 11.7.5.86 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py38h5eee18b_1
[conda] mkl_fft 1.3.8 py38h5eee18b_0
[conda] mkl_random 1.2.4 py38hdb19cb5_0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] nvidia-cublas-cu11 11.10.3.66 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu11 11.7.101 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu11 11.7.99 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu11 11.7.99 pypi_0 pypi
[conda] nvidia-cudnn-cu11 8.5.0.96 pypi_0 pypi
[conda] nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
[conda] nvidia-curand-cu11 10.2.10.91 pypi_0 pypi
[conda] nvidia-cusolver-cu11 11.4.0.1 pypi_0 pypi
[conda] nvidia-cusparse-cu11 11.7.4.91 pypi_0 pypi
[conda] nvidia-nccl-cu11 2.14.3 pypi_0 pypi
[conda] nvidia-nvtx-cu11 11.7.91 pypi_0 pypi
[conda] pytorch 2.0.1 py3.8_cuda11.8_cudnn8.7.0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
[conda] pytorch-mutex 1.0 cuda https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
[conda] torchaudio 2.0.2 py38_cu118 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
[conda] torchtriton 2.0.0 py38 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
[conda] torchvision 0.15.2 py38_cu118 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
| true |
2,774,788,159 | [ONNX] MelSpectrogram results in "Pads has incorrect number of values" | WangHHY19931001 | closed | [
"module: onnx",
"triaged"
] | 10 | NONE | ### 🐛 Describe the bug
``` python
class DataCov(nn.Module):
def __init__(self):
super(DataCov, self).__init__()
self.transform = nn.Sequential(
torchaudio.transforms.MelSpectrogram(sample_rate=48000, n_fft=1536, hop_length=768, f_min=20, f_max=20000)
)
def forward(self, x1):
return self.transform(x1)
def export_datacov_onnx(path):
model = DataCov()
model.eval()
src_wav = torch.randn((1, 1, 48000 * 12), requires_grad=True)
input_names = ["wav_data"]
output_names = ["ans"]
args = (src_wav,)
torch.onnx.export(
model,
args,
path,
export_params=True,
opset_version=19,
do_constant_folding=True,
verbose=False,
input_names=input_names,
output_names=output_names,
dynamo=True,
report=True
)
onnx_model = onnx.load(path)
onnx.checker.check_model(onnx_model)
def test_data_cov_onnx(onnx_path):
sess_options = ort.SessionOptions()
sess_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
providers = [
'CUDAExecutionProvider',
'DmlExecutionProvider',
'CPUExecutionProvider'
]
session = ort.InferenceSession(onnx_path, sess_options,
providers=providers)
src_wav = torch.randn((1, 1, 48000 * 12))
ort_inputs = {session.get_inputs()[0].name: src_wav.numpy(), }
ort_outs = session.run(None, ort_inputs)
ort_outs = ort_outs[0]
ort_outs = torch.from_numpy(ort_outs)
model = DataCov()
model.eval()
deal_1 = model(src_wav)
print(f'Torch Output Shape: {deal_1.shape}, ONNX Output Shape: {ort_outs.shape}')
print(f'Torch Output Min/Max: {torch.min(deal_1)}, {torch.max(deal_1)}')
print(f'ONNX Output Min/Max: {torch.min(ort_outs)}, {torch.max(ort_outs)}')
print(f'Torch Output Mean/Std: {torch.mean(deal_1)}, {torch.std(deal_1)}')
print(f'ONNX Output Mean/Std: {torch.mean(ort_outs)}, {torch.std(ort_outs)}')
np.testing.assert_allclose(deal_1.detach().numpy(), ort_outs.detach().numpy(), rtol=1e-02, atol=1e-04)
if __name__ == '__main__':
export_datacov_onnx("DataCov.onnx")
test_data_cov_onnx("DataCov.onnx")
```
error code:
``` shell
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Node (_inlfunc_aten_reflection_pad1d_n11) Op (Pad) [ShapeInferenceError] Pads has incorrect number of values. Expected 2 * 3 values. Got 4 values.
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250107+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.8 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:31:09) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i7-11700 @ 2.50GHz
CPU family: 6
Model: 167
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 53%
CPU max MHz: 4900.0000
CPU min MHz: 800.0000
BogoMIPS: 4992.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap avx512ifma clflushopt intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 384 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20250108
[pip3] onnxsim==0.4.36
[pip3] onnxslim==0.1.46
[pip3] torch==2.7.0.dev20250107+cpu
[pip3] torchaudio==2.6.0.dev20250107+cpu
[pip3] torchvision==0.22.0.dev20250107+cpu
[pip3] triton==3.1.0
[conda] numpy 2.2.1 pypi_0 pypi
[conda] torch 2.7.0.dev20250107+cpu pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250107+cpu pypi_0 pypi
[conda] torchvision 0.22.0.dev20250107+cpu pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi | true |
2,774,786,956 | onnx export error | WangHHY19931001 | closed | [] | 1 | NONE | ### 🐛 Describe the bug
``` python
class DataCov(nn.Module):
def __init__(self):
super(DataCov, self).__init__()
self.transform = nn.Sequential(
torchaudio.transforms.MelSpectrogram(sample_rate=48000, n_fft=1536, hop_length=768, f_min=20, f_max=20000)
)
def forward(self, x1):
return self.transform(x1)
def export_datacov_onnx(path):
model = DataCov()
model.eval()
src_wav = torch.randn((1, 1, 48000 * 12), requires_grad=True)
input_names = ["wav_data"]
output_names = ["ans"]
args = (src_wav,)
torch.onnx.export(
model,
args,
path,
export_params=True,
opset_version=19,
do_constant_folding=True,
verbose=False,
input_names=input_names,
output_names=output_names,
dynamo=True,
report=True
)
onnx_model = onnx.load(path)
onnx.checker.check_model(onnx_model)
def test_data_cov_onnx(onnx_path):
sess_options = ort.SessionOptions()
sess_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
providers = [
'CUDAExecutionProvider',
'DmlExecutionProvider',
'CPUExecutionProvider'
]
session = ort.InferenceSession(onnx_path, sess_options,
providers=providers)
src_wav = torch.randn((1, 1, 48000 * 12))
ort_inputs = {session.get_inputs()[0].name: src_wav.numpy(), }
ort_outs = session.run(None, ort_inputs)
ort_outs = ort_outs[0]
ort_outs = torch.from_numpy(ort_outs)
model = DataCov()
model.eval()
deal_1 = model(src_wav)
print(f'Torch Output Shape: {deal_1.shape}, ONNX Output Shape: {ort_outs.shape}')
print(f'Torch Output Min/Max: {torch.min(deal_1)}, {torch.max(deal_1)}')
print(f'ONNX Output Min/Max: {torch.min(ort_outs)}, {torch.max(ort_outs)}')
print(f'Torch Output Mean/Std: {torch.mean(deal_1)}, {torch.std(deal_1)}')
print(f'ONNX Output Mean/Std: {torch.mean(ort_outs)}, {torch.std(ort_outs)}')
np.testing.assert_allclose(deal_1.detach().numpy(), ort_outs.detach().numpy(), rtol=1e-02, atol=1e-04)
if __name__ == '__main__':
export_datacov_onnx("DataCov.onnx")
test_data_cov_onnx("DataCov.onnx")
```
error code:
``` shell
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Node (_inlfunc_aten_reflection_pad1d_n11) Op (Pad) [ShapeInferenceError] Pads has incorrect number of values. Expected 2 * 3 values. Got 4 values.
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250107+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.8 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:31:09) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i7-11700 @ 2.50GHz
CPU family: 6
Model: 167
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 53%
CPU max MHz: 4900.0000
CPU min MHz: 800.0000
BogoMIPS: 4992.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap avx512ifma clflushopt intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 384 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20250108
[pip3] onnxsim==0.4.36
[pip3] onnxslim==0.1.46
[pip3] torch==2.7.0.dev20250107+cpu
[pip3] torchaudio==2.6.0.dev20250107+cpu
[pip3] torchvision==0.22.0.dev20250107+cpu
[pip3] triton==3.1.0
[conda] numpy 2.2.1 pypi_0 pypi
[conda] torch 2.7.0.dev20250107+cpu pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250107+cpu pypi_0 pypi
[conda] torchvision 0.22.0.dev20250107+cpu pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi | true |
2,774,751,007 | Update readme | Lonely523 | closed | [
"topic: not user facing"
] | 2 | NONE | add dependency
| true |
2,774,680,653 | Refine torch.xpu.get_device_properties API error message | guangyey | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu",
"release notes: xpu"
] | 6 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144379
# Motivation
Remove the redundant error message.
Without this PR:
```python
>>> import torch
>>> torch.xpu.get_device_name(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/guangyey/repos/stock-pytorch/torch/xpu/__init__.py", line 215, in get_device_name
return get_device_properties(device).name
File "/home/guangyey/repos/stock-pytorch/torch/xpu/__init__.py", line 258, in get_device_properties
raise AssertionError("Invalid device index")
AssertionError: Invalid device index
```
With this PR:
```python
>>> import torch
>>> torch.xpu.get_device_name(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/guangyey/repos/stock-pytorch/torch/xpu/__init__.py", line 215, in get_device_name
return get_device_properties(device).name
File "/home/guangyey/repos/stock-pytorch/torch/xpu/__init__.py", line 257, in get_device_properties
return _get_device_properties(device) # type: ignore[name-defined] # noqa: F821
RuntimeError: The device index is out of range. It must be in [0, 1), but got 1.
``` | true |
2,774,670,017 | Filter out iGPU if dGPU is found on XPU | guangyey | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu",
"release notes: xpu"
] | 9 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144378
# Motivation
for https://github.com/pytorch/pytorch/issues/143914
On Windows, there are two separate SYCL platforms for iGPU and dGPU. To simplify the logic, we will exclude iGPUs when a dGPU is present. This ensures that all XPU devices enumerated by PyTorch share the same SYCL context.
Now I generalize the logic as below:
1. We find the first L0 platform containing at least one dGPU and enumerate all dGPUs of that platform.
2. If no dGPU is found, we find the first L0 platform containing iGPU and enumerate all iGPUs of that platform.
3. No GPU is found (neither iGPU nor dGPU). | true |
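The three rules above can be sketched as follows. This is a minimal illustration only, using a hypothetical `platforms` structure (a list of per-platform device lists with a `"type"` key) rather than the actual SYCL/Level Zero enumeration code:

```python
def select_xpu_devices(platforms):
    """Pick the devices PyTorch should enumerate.

    platforms: list of platforms, each a list of device dicts with a
    "type" key ("dgpu" or "igpu"). Hypothetical structure for illustration.
    """
    # Rule 1: the first platform containing at least one dGPU wins;
    # enumerate only the dGPUs of that platform (iGPUs are filtered out).
    for devices in platforms:
        dgpus = [d for d in devices if d["type"] == "dgpu"]
        if dgpus:
            return dgpus
    # Rule 2: no dGPU anywhere, so fall back to the first platform
    # containing an iGPU and enumerate its iGPUs.
    for devices in platforms:
        igpus = [d for d in devices if d["type"] == "igpu"]
        if igpus:
            return igpus
    # Rule 3: no GPU found (neither iGPU nor dGPU).
    return []
```

Because all returned devices come from a single platform, every XPU device enumerated this way can share one SYCL context.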
2,774,425,008 | error in RMSNorm documentation | yuanyao-nv | closed | [] | 1 | NONE | ### 📚 The doc issue
The formula in the RMSNorm [documentation](https://pytorch.org/docs/stable/generated/torch.nn.modules.normalization.RMSNorm.html) should have MS (mean square) instead of RMS in the denominator. Writing RMS inside the square root implies two square-root operations.

### Suggest a potential alternative/fix
_No response_ | true |
2,774,404,478 | torch.compile post_accumulate_grad_hook ordering is wrong for tiebreakers | xmfan | open | [
"triaged",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 0 | MEMBER | ### 🐛 Describe the bug
```python
import torch
import torch.nn as nn
import functools
model = nn.Sequential(
nn.Linear(10, 10, bias=False), # i=0
nn.Linear(10, 10, bias=False), # i=1
nn.Linear(10, 10, bias=False), # i=2
)
hook_ordering = []
def hook(param, i):
global hook_ordering
hook_ordering.append(i)
for i, param in enumerate(model.parameters()):
param.register_post_accumulate_grad_hook(functools.partial(hook, i=i))
x = torch.randn(10, 10)
out = model(x)
out.sum().backward()
print(f"eager hook ordering: {hook_ordering}")
# eager hook ordering: [2, 1, 0]
model.zero_grad()
hook_ordering = []
out = torch.compile(model, backend="eager")(x)
out.sum().backward()
print(f"compiled backend=eager hook ordering: {hook_ordering}")
# compiled backend=eager hook ordering: [2, 1, 0]
model.zero_grad()
hook_ordering = []
out = torch.compile(model, backend="aot_eager")(x)
out.sum().backward()
print(f"compiled backend=aot_eager hook ordering: {hook_ordering}")
# compiled backend=aot_eager hook ordering: [0, 1, 2]
```
We found this while working on Functional Autograd + Compiled Autograd. This is a consequence of implementing CompiledFunction as an autograd.Function. `CompiledFunction.backward` gradient return order must match the input order to `CompiledFunction.forward` i.e. [0, 1, 2].
While autograd does schedule AccumulateGrad nodes (and their post hook) ASAP, it can't peek into the autograd node, so there is a tiebreaker scenario when the autograd node returns multiple grads. The current autograd engine implementation just follows the output order.
One possible solution is to have the partitioner tell the autograd engine the desired ordering of outputs.
### Versions
main
cc @chauhang @penguinwu @zou3519 @bdhirsh @yf225 | true |
2,774,361,882 | Unable to compile models using tensorrt backend: CUDNN_STATUS_BAD_PARAM_STREAM_MISMATCH | deo-abhijit | open | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 3 | NONE | ### 🐛 Describe the bug
When I use torch.compile with the TensorRT backend, I get the following error.
Apparently the trace for the conv2d operation receives too many values (my guess)?
```bash
convolution = torch.ops.aten.convolution.default(slice_1, arg3_1, None, [2, 2], [3, 3], [1, 1], False, [0, 0], 1); slice_1 = arg3_1 = None
```
The convolution operation receives only 7 arguments, but while tracing it has received 9.
Following is the trace log.
The error only pops up when I test my library with pytest; I am not sure how to write a reproducible example here.
```
--------------------------------------------------------------------------------------------------------------------------- Captured log call ---------------------------------------------------------------------------------------------------------------------------
WARNING torch_tensorrt.dynamo._compiler:_compiler.py:354 Node linear_default of op type call_function does not have metadata. This could sometimes lead to undefined behavior.
WARNING torch_tensorrt.dynamo._compiler:_compiler.py:363 Some nodes do not have metadata (shape and dtype information). This could lead to problems sometimes if the graph has PyTorch and TensorRT segments.
WARNING torch_tensorrt.dynamo.backend.backends:backends.py:123 TRT conversion failed on the subgraph. See trace above. Returning GraphModule forward instead.
Traceback (most recent call last):
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch_tensorrt/dynamo/backend/backends.py", line 114, in _pretraced_backend
trt_compiled = compile_module(
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch_tensorrt/dynamo/_compiler.py", line 464, in compile_module
trt_module = convert_module(
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_conversion.py", line 142, in convert_module
interpreter_result = interpret_module_to_result(
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_conversion.py", line 105, in interpret_module_to_result
output_dtypes = infer_module_output_dtypes(
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_conversion.py", line 49, in infer_module_output_dtypes
module_outputs = module(*torch_inputs, **torch_kwarg_inputs)
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch/fx/graph_module.py", line 784, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch/fx/graph_module.py", line 361, in __call__
raise e
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch/fx/graph_module.py", line 348, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.8", line 9, in forward
convolution = torch.ops.aten.convolution.default(slice_1, arg3_1, None, [2, 2], [3, 3], [1, 1], False, [0, 0], 1); slice_1 = arg3_1 = None
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch/_ops.py", line 717, in __call__
return self._op(*args, **kwargs)
RuntimeError: cuDNN error: CUDNN_STATUS_BAD_PARAM_STREAM_MISMATCH
```
### Versions
Collecting environment information...
PyTorch version: 2.5.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.120
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU max MHz: 5881.0000
CPU min MHz: 400.0000
BogoMIPS: 8983.44
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-nccl-cu11==2.21.5
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] onnx==1.17.0
[pip3] onnx_tensorrt==10.5.0
[pip3] onnxruntime-gpu==1.19.2
[pip3] torch==2.5.0+cu118
[pip3] torch_tensorrt==2.5.0+cu118
[pip3] torchvision==0.20.0+cu118
[pip3] triton==3.1.0
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu11 11.11.3.6 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu11 11.8.87 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cudnn-cu11 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
[conda] nvidia-curand-cu11 10.3.0.86 pypi_0 pypi
[conda] nvidia-cusolver-cu11 11.4.1.48 pypi_0 pypi
[conda] nvidia-cusparse-cu11 11.7.5.86 pypi_0 pypi
[conda] nvidia-nccl-cu11 2.21.5 pypi_0 pypi
[conda] nvidia-nvtx-cu11 11.8.86 pypi_0 pypi
[conda] torch 2.5.0+cu118 pypi_0 pypi
[conda] torch-tensorrt 2.5.0+cu118 pypi_0 pypi
[conda] torchvision 0.20.0+cu118 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng | true |
2,774,339,345 | [mps/inductor] Add support for rsqrt(). | dcci | closed | [
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3 | MEMBER | cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,774,324,177 | [Windows] Experimental `torch.compile` support for Windows on XPU | Stonepia | closed | [
"module: windows",
"oncall: pt2",
"module: inductor",
"module: xpu"
] | 6 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
BKC for Experimental Support on `torch.compile` for Windows on XPU
This document provides early experimental support for `torch.compile` on Windows with XPU. It tracks the status and known issues.
- [1. Overall Branch](#1-overall-branch)
- [2. Build Steps](#2-build-steps)
- [2.0.1. Windows Environment Setup](#201-windows-environment-setup)
- [2.0.2. Build PyTorch](#202-build-pytorch)
- [2.0.3. Build Triton](#203-build-triton)
- [3. Running Setup](#3-running-setup)
# 1. Overall Branch
Refer to the PyTorch PR: https://github.com/pytorch/pytorch/pull/144303. Use the branch specified in that PR.
For Triton, use the branch: https://github.com/intel/intel-xpu-backend-for-triton/tree/hot-fixes-for-pytorch.
# 2. Build Steps
Currently, Triton needs to be built from source and installed. The PyTorch build process remains unchanged.
### 2.0.1. Windows Environment Setup
For more details about the environment setup, please refer to [this discussion](https://github.com/intel/torch-xpu-ops/discussions/1205).
1. Enable Long Path
```PowerShell
# Enable long path for the system (Need admin)
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name "LongPathsEnabled" -Value 1 -PropertyType DWORD -Force
# Enable long path for git (Need admin)
git config --system core.longpaths true
git config --global core.longpaths true
```
2. Enable Symlink Creation
Activate [developer mode](https://learn.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development#activate-developer-mode). This allows a normal (non-admin) user to create symlinks; without it, failed symlink creation can cause build failures during the Triton build.
### 2.0.2. Build PyTorch
Use the branch in https://github.com/pytorch/pytorch/pull/144303 . All the steps are the same as the existing BKC.
### 2.0.3. Build Triton
Use the pinned commit in the above PR.
1. **Download Level Zero SDK**
Please download `level-zero-win-sdk-*.zip` from https://github.com/oneapi-src/level-zero/releases. We tried with `v1.19.2`.
Unzip the file and place the folder at a path such as `C:\level_zero`.
2. **Build Triton**
Open the `Intel oneAPI command prompt for Intel 64 for Visual Studio 2022` or activate oneAPI env by:
```CMD
"C:\Program Files (x86)\Intel\oneAPI\<toolkit-version>\oneapi-vars.bat"
```
Set the following environment variables for the Triton build:
```CMD
set VS2022INSTALLDIR="C:\Program Files\Microsoft Visual Studio\2022\Community"
set ZE_PATH=C:\level_zero
set CL=/D_CRT_SECURE_NO_WARNINGS
"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvars64.bat"
"C:\Program Files (x86)\Intel\oneAPI\2025.0\oneapi-vars.bat"
```
Build Triton. Please keep the triton folder at a shallow path (e.g., `C:\triton`):
```CMD
git clone https://github.com/intel/intel-xpu-backend-for-triton triton
cd triton
git checkout c23ff25775780cc4bb1ca530fd3ae33b0cf3b56e
cd python
pip install -U wheel pybind11 certifi cython cmake setuptools>=65.6.1
python -m certifi
pip install -v --no-build-isolation '.[build,tests,tutorials]'
```
One can also run `python setup.py bdist_wheel` in `triton\python` to build a wheel, then install it with `pip install dist\*.whl`.
# 3. Running Setup
The overall running setup is the same. One additional step: make sure `ZE_PATH` points to the Level Zero SDK:
```CMD
set ZE_PATH=C:\level_zero
```
Then you can run the tests. Before running tests (especially the PyTorch unit tests), please clean the TEMP folder to reduce disk usage.
Due to limitations of the Windows OS, TEMP may not be cleaned automatically.
```CMD
del /q %TEMP%\* & rd /s /q %TEMP%
```
Then you can run the tests, for example:
```
cd pytorch\test\inductor
pytest -v -k xpu test_torchinductor.py
```
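To sanity-check the setup, a minimal smoke test such as the following can be used (an illustrative sketch, not part of the official BKC; it falls back to CPU eager mode when no XPU is present):

```python
import torch

# Use the XPU when the build/runtime supports it; otherwise fall back to CPU.
device = "xpu" if getattr(torch, "xpu", None) and torch.xpu.is_available() else "cpu"

def fn(x):
    return torch.sin(x) + x

x = torch.randn(8, device=device)
expected = fn(x)

if device == "xpu":
    out = torch.compile(fn)(x)  # exercises the Triton XPU backend
else:
    out = expected  # no XPU present; skip the compile path in this sketch

print(torch.allclose(out, expected))
```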
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @gujinghui @fengyuan14 @guangyey | true |
2,774,313,667 | [dynamo] log compiler collective duration to tlparse chromium trace | xmfan | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144372
To show wall time in tlparse for the synchronous compiler collective. Can eliminate the leading hypothesis from https://fb.workplace.com/groups/1075192433118967/permalink/1578670289437843.
<img width="1296" alt="image" src="https://github.com/user-attachments/assets/b17d4efb-8573-43e5-af58-c51af05acb54" />
sample: https://gist.github.com/xmfan/19eeaa80d55a4e7c168e150355ec7392
rank 0: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpr5WNMt/rank_0/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10
rank 1: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpr5WNMt/rank_1/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,774,248,961 | [codemod] Remove unused-variable in caffe2/aten/src/ATen/native/quantized/cpu/fbgemm_utils.cpp +2 | r-barnes | closed | [
"oncall: distributed",
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing"
] | 4 | CONTRIBUTOR | Summary:
LLVM-15 has a warning `-Wunused-variable` which we treat as an error because it's so often diagnostic of a code issue. Unused variables can compromise readability or, worse, performance.
This diff either (a) removes an unused variable and, possibly, its associated code or (b) qualifies the variable with `[[maybe_unused]]`.
- If you approve of this diff, please use the "Accept & Ship" button :-)
Test Plan: Sandcastle
Reviewed By: palmje
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | true |
2,774,199,471 | [RELAND] Generalize at::manual_seed for all accelerators | guangyey | closed | [
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: improvements",
"topic: not user facing",
"ciflow/mps",
"ciflow/rocm",
"ciflow/xpu",
"ci-no-td",
"module: accelerator"
] | 7 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144370
# Additional Context
This is a reland PR originating from eeb57394f93d720bca498c3fa9d167fc7b9cca46
cc @albanD @EikanWang | true |
2,774,164,162 | Migrate from Tuple -> tuple in torch/utils/data | bobrenjc93 | closed | [
"release notes: dataloader"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144369
Pull Request resolved: #144255 | true |
2,774,161,250 | [Don't Merge] Fix poision child process issue when call getAccelerator() | guangyey | closed | [
"oncall: jit",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: improvements",
"topic: not user facing",
"ciflow/xpu",
"ci-no-td",
"module: accelerator"
] | 13 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144664
* __->__ #144368
# Motivation
fix https://github.com/pytorch/pytorch/issues/144152
# Solution
- Align `at::globalContext().hasXXX` to determine if accelerator XXX is built with PyTorch or an extension is already registered to PyTorch.
- Define `at::hasXXX` to determine if accelerator XXX is available at runtime.
- Use `at::globalContext().hasXXX` in `getAccelerator` rather than `at::hasXXX` to avoid initializing the XXX runtime (which can poison child processes) while detecting the current accelerator.
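The same distinction exists on the Python side, which may help illustrate it (a minimal sketch using public APIs; the PR itself works at the C++ `at::` level):

```python
import torch

# A "built with" check does not initialize the device runtime, so it is
# safe to call before fork():
cuda_built = torch.backends.cuda.is_built()

# An availability check initializes the runtime and can poison forked
# child processes, which is exactly what accelerator detection must avoid:
# cuda_available = torch.cuda.is_available()  # initializes the CUDA runtime

print(cuda_built)
```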
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @albanD | true |
2,774,087,022 | [XPU] quantile related tests failed with Assertion failed: helper.isSupportedLayout() && "Unexpected srcLayout in ReduceOpConversion" | Stonepia | closed | [
"triaged",
"module: xpu"
] | 4 | CONTRIBUTOR | ### 🐛 Describe the bug
When running the UT on Windows/Linux:
```Python
pytest -k test_comprehensive_nanquantile_xpu_float32 -v test_torchinductor_opinfo.py
pytest -k test_comprehensive_quantile_xpu_float32 -v test_torchinductor_opinfo.py
```
The test failed with the following:
```Python
Assertion failed: helper.isSupportedLayout() && "Unexpected srcLayout in ReduceOpConversion"
```
### Versions
PyTorch: d0f5df83a50d9bb630764c92ac63fcb2640b1f94
Triton (for intel xpu): c23ff25775780cc4bb1ca530fd3ae33b0cf3b56e
Platform: Ubuntu 24.10 / Windows 11
cc @gujinghui @EikanWang @fengyuan14 @guangyey | true |
2,774,077,428 | disable experimental benchmarker | nmacchioni | open | [
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144366
* #144507
* #144505
* #144501
* #144353
* #133287
* #144365
* #133121
* #133058
* #144315
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,774,077,329 | implement LazyInductorBenchmarker | nmacchioni | closed | [
"module: rocm",
"Stale",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144507
* #144505
* #144501
* #144353
* #133287
* __->__ #144365
* #133121
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,774,035,904 | Shard RegisterDispatchKey | swolchok | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: build"
] | 13 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144364
* #144363
Should fix https://github.com/pytorch/pytorch/issues/143952 .
Testing: built PyTorch on Raspberry Pi 5; this seemed to alleviate the high peak memory requirement. (I did increase shard counts for other generated files along the way, but I need to go back and figure out how much of that was strictly necessary vs. needing to use -j1 or -j2.)
Differential Revision: [D67925496](https://our.internmc.facebook.com/intern/diff/D67925496/) | true |
2,774,035,714 | torchgen: move dispatch_helpers out of RegisterDispatchDefinitions.ini | swolchok | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: build"
] | 9 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144364
* __->__ #144363
The dispatch_helpers should be generated once, not once per kernel namespace.
Differential Revision: [D67925497](https://our.internmc.facebook.com/intern/diff/D67925497/) | true |
2,774,032,221 | Some operators miss dtype check when using `torch.compile` | maybeLee | open | [
"module: error checking",
"triaged",
"module: structured kernels",
"oncall: pt2",
"module: inductor"
] | 5 | CONTRIBUTOR | ### 🐛 Describe the bug
As reported here (https://github.com/pytorch/pytorch/issues/144314#issuecomment-2574508557), I noticed that some operators are missing dtype checks when executed under `torch.compile`. The specific symptom is as follows:
- Eager mode: raises a `not implemented for [specific dtype]` error
- torch.compile mode: yields regular outputs (presumably implicit dtype casting happens under `torch.compile`)
Some related issues: https://github.com/pytorch/pytorch/issues/144314, https://github.com/pytorch/pytorch/issues/144310, https://github.com/pytorch/pytorch/issues/144247.
Although this missing-dtype-check issue may not be severe, in case you are interested, I cherry-picked a few operators where dtype checks are missing in the CPU and CUDA backends. Here's a breakdown:
| Operator Name | CPU Backend Missing Check | CUDA Backend Missing Check | Expected Behavior (Eager Behavior) |
| -------- | ------- | ------- | ------- |
| torch.nn.functional.{log_softmax,softmax,logsigmoid} | uint, int8, int16, int32, int64 | uint, int8, int16, int32, int64 | Raise `not implemented for xxx` error |
| torch.nn.functional.{gelu,celu,hardsigmoid,hardswish}/torch.nextafter | uint, bool, int8, int16, int32, int64 | uint, bool, int8, int16, int32, int64 | Raise `not implemented for xxx` error |
| torch.nn.functional.prelu | bool, int8, int16, int32, int64 | uint, bool, int8, int16, int32, int64 | Raise `not implemented for xxx` error |
| torch.Tensor.mm | uint, bool | N/A | Raise `not implemented for xxx` error |
| torch.trace | uint, bfloat16 , half, bool | N/A | Raise `not implemented for xxx` error |
| torch.fmax | complex32, complex64 | N/A | Raise `not implemented for xxx` error |
| torch.xlogy/torch.nn.functional.mse_loss | complex64, complex32 | complex64, complex32 | Raise `not implemented for xxx` error |
Since these cases seem to share the same root cause, I am wondering if they can be fixed in a general way?
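One generic direction (a hypothetical sketch, not an actual PyTorch helper) would be a shared dtype guard that both the eager and compiled paths call, so they fail with the same error:

```python
import torch

def require_supported_dtype(op_name, tensor, disallowed):
    # Hypothetical helper mirroring eager's error message, so the compiled
    # and eager paths would fail identically for unsupported dtypes.
    if tensor.dtype in disallowed:
        raise RuntimeError(f'"{op_name}" not implemented for {tensor.dtype}')
    return tensor

x = torch.ones(2, dtype=torch.int64)
try:
    require_supported_dtype(
        "log_softmax", x, {torch.int8, torch.int16, torch.int32, torch.int64}
    )
    raised = False
except RuntimeError:
    raised = True
print(raised)
```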
Below is detailed code that reproduces the reported case for each operator.
<details>
<summary>log_softmax/softmax</summary>
```
import torch
from torch import nn
torch._dynamo.config.recompile_limit = 100

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
    def forward(self, input, dim):
        return torch.nn.functional.log_softmax(input, dim)  # replace `log_softmax` with `softmax` to reproduce the issue in softmax

f = MyModel()
cf = torch.compile(f)
input = torch.randn((2))
dim = -1
for device in ['cpu', 'cuda']:
    for dtype in [torch.uint16, torch.int8, torch.int16, torch.int32, torch.int64, torch.float16, torch.float32, torch.float64]:
        input = input.to(dtype).to(device)
        eager_pass, compile_pass = "passed", "passed"
        try:
            f(input, dim)
            eager_pass = "passed"
        except Exception as e:
            print(f"Eager Error: {e}")
            eager_pass = "failed"
        try:
            cf(input, dim)
            compile_pass = "passed"
        except Exception as e:
            compile_pass = "failed"
        if eager_pass != compile_pass:
            print(f"Inconsistent behavior on: {dtype}, {device}\n Eager: {eager_pass}\n Compile: {compile_pass}")
```
</details>
<details>
<summary>logsigmoid/gelu/celu/hardsigmoid/hardswish</summary>
```
import torch
from torch import nn
torch._dynamo.config.recompile_limit = 100

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
    def forward(self, input):
        return torch.nn.functional.logsigmoid(input)  # change logsigmoid to gelu/celu/hardsigmoid/hardswish to reproduce the related inconsistent behaviors

f = MyModel()
cf = torch.compile(f)
input = torch.randn((2))
for device in ['cpu', 'cuda']:
    for dtype in [torch.uint16, torch.bool, torch.int8, torch.int16, torch.int32, torch.int64, torch.float16, torch.float32, torch.float64]:
        input = input.to(dtype).to(device)
        eager_pass, compile_pass = "passed", "passed"
        try:
            f(input)
            eager_pass = "passed"
        except Exception as e:
            print(f"Eager Error: {e}")
            eager_pass = "failed"
        try:
            cf(input)
            compile_pass = "passed"
        except Exception as e:
            compile_pass = "failed"
        if eager_pass != compile_pass:
            print(f"Inconsistent behavior on: {dtype}, {device}\n Eager: {eager_pass}\n Compile: {compile_pass}")
```
</details>
<details>
<summary>prelu</summary>
```
import torch
from torch import nn
import numpy as np
torch._dynamo.config.recompile_limit = 100

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
    def forward(self, input, weight):
        return torch.nn.functional.prelu(input, weight)

f = MyModel()
cf = torch.compile(f)
input = torch.tensor(np.random.randint(-10, 10, (1, 1, 1)))
weight = torch.tensor(np.random.randint(-10, 10, (1)))
for device in ['cpu', 'cuda']:
    for dtype in [torch.uint16, torch.bool, torch.int8, torch.int16, torch.int32, torch.int64, torch.float16, torch.float32, torch.float64]:
        input = input.to(dtype).to(device)
        weight = weight.to(dtype).to(device)
        eager_pass, compile_pass = "passed", "passed"
        try:
            f(input, weight)
            eager_pass = "passed"
        except Exception as e:
            print(f"Eager Error: {e}")
            eager_pass = "failed"
        try:
            cf(input, weight)
            compile_pass = "passed"
        except Exception as e:
            compile_pass = "failed"
        if eager_pass != compile_pass:
            print(f"Inconsistent behavior on: {dtype}, {device}\n Eager: {eager_pass}\n Compile: {compile_pass}")
```
</details>
<details>
<summary>torch.nextafter</summary>
```
import torch
from torch import nn
import numpy as np
torch._dynamo.config.recompile_limit = 100

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
    def forward(self, input, other):
        return torch.nextafter(input, other)

f = MyModel()
cf = torch.compile(f)
input = torch.tensor(np.random.randint(-10, 10, ()), dtype=torch.int64)
other = torch.tensor(np.random.randint(-10, 10, ()), dtype=torch.int64)
for device in ['cpu', 'cuda']:
    for dtype in [torch.uint16, torch.bool, torch.int8, torch.int16, torch.int32, torch.int64, torch.float16, torch.float32, torch.float64]:
        input = input.to(dtype).to(device)
        other = other.to(dtype).to(device)
        eager_pass, compile_pass = "passed", "passed"
        try:
            f(input, other)
            eager_pass = "passed"
        except Exception as e:
            print(f"Eager Error: {e}")
            eager_pass = "failed"
        try:
            cf(input, other)
            compile_pass = "passed"
        except Exception as e:
            compile_pass = "failed"
        if eager_pass != compile_pass:
            print(f"Inconsistent behavior on: {dtype}, {device}\n Eager: {eager_pass}\n Compile: {compile_pass}")
```
</details>
<details>
<summary>torch.Tensor.mm</summary>
```
import torch
from torch import nn
torch._dynamo.config.recompile_limit = 100

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
    def forward(self, input, mat2):
        return torch.Tensor.mm(input, mat2)

f = MyModel()
cf = torch.compile(f)
input = torch.randn(1, 1)
mat2 = torch.randn(1, 1)
for device in ['cpu', 'cuda']:
    for dtype in [torch.uint16, torch.bool, torch.int8, torch.int16, torch.int32, torch.int64, torch.float16, torch.float32, torch.float64]:
        input = input.to(dtype).to(device)
        mat2 = mat2.to(dtype).to(device)
        eager_pass, compile_pass = "passed", "passed"
        try:
            f(input, mat2)
            eager_pass = "passed"
        except Exception as e:
            print(f"Eager Error: {e}")
            eager_pass = "failed"
        try:
            cf(input, mat2)
            compile_pass = "passed"
        except Exception as e:
            compile_pass = "failed"
        if eager_pass != compile_pass:
            print(f"Inconsistent behavior on: {dtype}, {device}\n Eager: {eager_pass}\n Compile: {compile_pass}")
```
</details>
<details>
<summary>torch.trace</summary>
```
import torch
from torch import nn
torch._dynamo.config.recompile_limit = 100

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
    def forward(self, input):
        return torch.trace(input)

f = MyModel()
cf = torch.compile(f)
input = torch.randn(0, 1)
for device in ['cpu', 'cuda']:
    for dtype in [torch.uint16, torch.bool, torch.bfloat16, torch.half]:
        input = input.to(dtype).to(device)
        eager_pass, compile_pass = "passed", "passed"
        try:
            f(input)
            eager_pass = "passed"
        except Exception as e:
            print(f"Eager Error: {e}")
            eager_pass = "failed"
        try:
            cf(input)
            compile_pass = "passed"
        except Exception as e:
            compile_pass = "failed"
        if eager_pass != compile_pass:
            print(f"Inconsistent behavior on: {dtype}, {device}\n Eager: {eager_pass}\n Compile: {compile_pass}")
```
</details>
<details>
<summary>torch.fmax</summary>
```
import torch
from torch import nn
import numpy as np
torch._dynamo.config.recompile_limit = 100

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
    def forward(self, input, other):
        return torch.fmax(input, other)

f = MyModel()
cf = torch.compile(f)
input = torch.tensor(np.random.randn(1, 1, 1), dtype=torch.complex128)
other = torch.tensor(np.random.randn(0), dtype=torch.double)
for device in ['cpu', 'cuda']:
    for dtype in [torch.uint16, torch.bool, torch.bfloat16, torch.half, torch.complex64, torch.complex128]:
        input = input.to(dtype).to(device)
        try:
            f(input, other)
            eager_pass = "passed"
        except Exception as e:
            print(f"Eager Error: {e}")
            eager_pass = "failed"
        try:
            cf(input, other)
            compile_pass = "passed"
        except Exception as e:
            compile_pass = "failed"
        if eager_pass != compile_pass:
            print(f"Inconsistent behavior on: {dtype}, {device}\n Eager: {eager_pass}\n Compile: {compile_pass}")
```
</details>
<details>
<summary>torch.xlogy/mse_loss</summary>
```
import torch
from torch import nn
import numpy as np
torch._dynamo.config.recompile_limit = 100

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
    def forward(self, input, other):
        return torch.xlogy(input, other)  # change torch.xlogy to torch.nn.functional.mse_loss to reproduce mse_loss's inconsistent behavior

f = MyModel()
cf = torch.compile(f)
input = torch.tensor(np.random.randn(1))
other = torch.tensor(np.random.randn(1, 1))
for device in ['cpu', 'cuda']:
    for dtype in [torch.uint16, torch.bool, torch.bfloat16, torch.half, torch.complex64, torch.complex128]:
        input = input.to(dtype).to(device)
        other = other.to(dtype).to(device)
        try:
            f(input, other)
            eager_pass = "passed"
        except Exception as e:
            print(f"Eager Error: {e}")
            eager_pass = "failed"
        try:
            cf(input, other)
            compile_pass = "passed"
        except Exception as e:
            compile_pass = "failed"
        if eager_pass != compile_pass:
            print(f"Inconsistent behavior on: {dtype}, {device}\n Eager: {eager_pass}\n Compile: {compile_pass}")
```
</details>
To the best of my knowledge, other related issues are tracked here.
cc @malfet @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng
```[tasklist]
### Tasks
- [ ] https://github.com/pytorch/pytorch/issues/147256
- [ ] https://github.com/pytorch/pytorch/issues/144314
- [ ] https://github.com/pytorch/pytorch/issues/144310
- [ ] https://github.com/pytorch/pytorch/issues/144247
- [ ] https://github.com/pytorch/pytorch/issues/143779
- [ ] https://github.com/pytorch/pytorch/issues/143801
- [ ] https://github.com/pytorch/pytorch/issues/143752
- [ ] https://github.com/pytorch/pytorch/issues/143729
```
| true |
2,774,027,067 | [3.13t] use sysconfig to check for Python nogil builds | williamwen42 | closed | [
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144361
`sys._is_gil_enabled()` wasn't working in certain cases, according to @atalman
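A minimal sketch of the sysconfig-based check (the function names here are illustrative, not necessarily the ones used in the PR):

```python
import sys
import sysconfig

def is_free_threaded_build() -> bool:
    # Py_GIL_DISABLED is 1 on free-threaded (nogil) builds, 0 on regular
    # 3.13+ builds, and None on older Pythons that predate the flag.
    return bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

def gil_enabled() -> bool:
    # sys._is_gil_enabled() only exists on free-threaded builds, hence the
    # sysconfig-based fallback above is more portable.
    if hasattr(sys, "_is_gil_enabled"):
        return sys._is_gil_enabled()
    return True

print(is_free_threaded_build(), gil_enabled())
```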
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,774,010,872 | Skip empty frames recursively when top-level is empty | ydwu4 | open | [
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1 | CONTRIBUTOR | ### 🐛 Describe the bug
```python
import torch

def k(x):
    return x

def g(x):
    return k(x)

def f(x):
    return g(x)

a = torch.ones(2, 2)
c = torch.compile(f, fullgraph=True)(a)
```
The above compiles 3 times (f, g, and k), with the following log:
```
I0107 16:55:09.455000 1702873 torch/_dynamo/utils.py:1403] [0/0] ChromiumEventLogger initialized with id 50c41bbc-3619-4642-a30a-ca5562f3b129
V0107 16:55:09.456000 1702873 torch/_dynamo/convert_frame.py:941] [0/0] torchdynamo start compiling f /data/users/yidi/pytorch/test_while_loop.py:9, stack (elided 4 frames):
V0107 16:55:09.456000 1702873 torch/_dynamo/convert_frame.py:941] [0/0] File "/data/users/yidi/pytorch/test_while_loop.py", line 12, in <module>
V0107 16:55:09.456000 1702873 torch/_dynamo/convert_frame.py:941] [0/0] c = torch.compile(f, fullgraph=True)(a)
V0107 16:55:09.456000 1702873 torch/_dynamo/convert_frame.py:941] [0/0]
I0107 16:55:10.342000 1702873 torch/_dynamo/symbolic_convert.py:2744] [0/0] Step 1: torchdynamo start tracing f /data/users/yidi/pytorch/test_while_loop.py:9
I0107 16:55:10.343000 1702873 torch/fx/experimental/symbolic_shapes.py:3221] [0/0] create_env
V0107 16:55:10.347000 1702873 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_source] TRACE starts_line /data/users/yidi/pytorch/test_while_loop.py:10 in f (f)
V0107 16:55:10.347000 1702873 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_source] return g(x)
V0107 16:55:10.348000 1702873 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE LOAD_GLOBAL g []
V0107 16:55:10.351000 1702873 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE LOAD_FAST x [UserFunctionVariable()]
V0107 16:55:10.351000 1702873 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION 1 [UserFunctionVariable(), LazyVariableTracker()]
V0107 16:55:10.351000 1702873 torch/_dynamo/symbolic_convert.py:3204] [0/0] INLINING <code object g at 0x7f4599c97260, file "/data/users/yidi/pytorch/test_while_loop.py", line 6>, inlined according trace_rules.lookup inlined by default
V0107 16:55:10.352000 1702873 torch/_dynamo/variables/builder.py:2869] [0/0] wrap_to_fake L['x'] (2, 2) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], dynamic_strides=[<DimDynamic.INFER_STRIDE: 4>, <DimDynamic.INFER_STRIDE: 4>], constraint_sizes=[None, None], constraint_strides=[None, None], view_base_context=None, tensor_source=LocalSource(local_name='x', is_input=True, is_derefed_cell_contents=False), shape_env_to_source_to_symbol_cache={}) <class 'torch.Tensor'>
V0107 16:55:10.354000 1702873 torch/_dynamo/output_graph.py:2201] [0/0] create_graph_input L_x_ L['x'] FakeTensor(..., size=(2, 2)) at debug_level 0 before=False
V0107 16:55:10.355000 1702873 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_source] TRACE starts_line /data/users/yidi/pytorch/test_while_loop.py:7 in g (g) (inline depth: 1)
V0107 16:55:10.355000 1702873 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_source] return k(x)
V0107 16:55:10.355000 1702873 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE LOAD_GLOBAL k []
V0107 16:55:10.355000 1702873 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE LOAD_FAST x [UserFunctionVariable()]
V0107 16:55:10.355000 1702873 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION 1 [UserFunctionVariable(), TensorVariable()]
V0107 16:55:10.356000 1702873 torch/_dynamo/symbolic_convert.py:3204] [0/0] INLINING <code object k at 0x7f4599d3b3c0, file "/data/users/yidi/pytorch/test_while_loop.py", line 3>, inlined according trace_rules.lookup inlined by default
V0107 16:55:10.356000 1702873 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_source] TRACE starts_line /data/users/yidi/pytorch/test_while_loop.py:4 in k (k) (inline depth: 2)
V0107 16:55:10.356000 1702873 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_source] return x
V0107 16:55:10.356000 1702873 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE LOAD_FAST x []
V0107 16:55:10.356000 1702873 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE RETURN_VALUE None [TensorVariable()]
V0107 16:55:10.356000 1702873 torch/_dynamo/symbolic_convert.py:3272] [0/0] DONE INLINING <code object k at 0x7f4599d3b3c0, file "/data/users/yidi/pytorch/test_while_loop.py", line 3>
V0107 16:55:10.357000 1702873 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE RETURN_VALUE None [TensorVariable()]
V0107 16:55:10.357000 1702873 torch/_dynamo/symbolic_convert.py:3272] [0/0] DONE INLINING <code object g at 0x7f4599c97260, file "/data/users/yidi/pytorch/test_while_loop.py", line 6>
V0107 16:55:10.357000 1702873 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE RETURN_VALUE None [TensorVariable()]
V0107 16:55:10.357000 1702873 torch/_dynamo/convert_frame.py:778] [0/0] Skipping frame because no content in function call f /data/users/yidi/pytorch/test_while_loop.py 9
V0107 16:55:10.357000 1702873 torch/_dynamo/convert_frame.py:787] [0/0] No graph captured with one_graph=True
I0107 16:55:10.358000 1702873 torch/_dynamo/pgo.py:639] [0/0] put_code_state: no cache key, skipping
I0107 16:55:10.358000 1702873 torch/_dynamo/convert_frame.py:1059] [0/0] run_gc_after_compile: running gc
V0107 16:55:10.365000 1702873 torch/_dynamo/convert_frame.py:941] [1/0] torchdynamo start compiling g /data/users/yidi/pytorch/test_while_loop.py:6, stack (elided 4 frames):
V0107 16:55:10.365000 1702873 torch/_dynamo/convert_frame.py:941] [1/0] File "/data/users/yidi/pytorch/test_while_loop.py", line 12, in <module>
V0107 16:55:10.365000 1702873 torch/_dynamo/convert_frame.py:941] [1/0] c = torch.compile(f, fullgraph=True)(a)
V0107 16:55:10.365000 1702873 torch/_dynamo/convert_frame.py:941] [1/0] File "/data/users/yidi/pytorch/torch/_dynamo/eval_frame.py", line 576, in _fn
V0107 16:55:10.365000 1702873 torch/_dynamo/convert_frame.py:941] [1/0] return fn(*args, **kwargs)
V0107 16:55:10.365000 1702873 torch/_dynamo/convert_frame.py:941] [1/0]
I0107 16:55:10.365000 1702873 torch/_dynamo/symbolic_convert.py:2744] [1/0] Step 1: torchdynamo start tracing g /data/users/yidi/pytorch/test_while_loop.py:6
I0107 16:55:10.365000 1702873 torch/fx/experimental/symbolic_shapes.py:3221] [1/0] create_env
V0107 16:55:10.366000 1702873 torch/_dynamo/symbolic_convert.py:956] [1/0] [__trace_source] TRACE starts_line /data/users/yidi/pytorch/test_while_loop.py:7 in g (g)
V0107 16:55:10.366000 1702873 torch/_dynamo/symbolic_convert.py:956] [1/0] [__trace_source] return k(x)
V0107 16:55:10.366000 1702873 torch/_dynamo/symbolic_convert.py:979] [1/0] [__trace_bytecode] TRACE LOAD_GLOBAL k []
V0107 16:55:10.367000 1702873 torch/_dynamo/symbolic_convert.py:979] [1/0] [__trace_bytecode] TRACE LOAD_FAST x [UserFunctionVariable()]
V0107 16:55:10.367000 1702873 torch/_dynamo/symbolic_convert.py:979] [1/0] [__trace_bytecode] TRACE CALL_FUNCTION 1 [UserFunctionVariable(), LazyVariableTracker()]
V0107 16:55:10.367000 1702873 torch/_dynamo/symbolic_convert.py:3204] [1/0] INLINING <code object k at 0x7f4599d3b3c0, file "/data/users/yidi/pytorch/test_while_loop.py", line 3>, inlined according trace_rules.lookup inlined by default
V0107 16:55:10.367000 1702873 torch/_dynamo/variables/builder.py:2869] [1/0] wrap_to_fake L['x'] (2, 2) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], dynamic_strides=[<DimDynamic.INFER_STRIDE: 4>, <DimDynamic.INFER_STRIDE: 4>], constraint_sizes=[None, None], constraint_strides=[None, None], view_base_context=None, tensor_source=LocalSource(local_name='x', is_input=True, is_derefed_cell_contents=False), shape_env_to_source_to_symbol_cache={}) <class 'torch.Tensor'>
V0107 16:55:10.368000 1702873 torch/_dynamo/output_graph.py:2201] [1/0] create_graph_input L_x_ L['x'] FakeTensor(..., size=(2, 2)) at debug_level 0 before=False
V0107 16:55:10.369000 1702873 torch/_dynamo/symbolic_convert.py:956] [1/0] [__trace_source] TRACE starts_line /data/users/yidi/pytorch/test_while_loop.py:4 in k (k) (inline depth: 1)
V0107 16:55:10.369000 1702873 torch/_dynamo/symbolic_convert.py:956] [1/0] [__trace_source] return x
V0107 16:55:10.369000 1702873 torch/_dynamo/symbolic_convert.py:979] [1/0] [__trace_bytecode] TRACE LOAD_FAST x []
V0107 16:55:10.369000 1702873 torch/_dynamo/symbolic_convert.py:979] [1/0] [__trace_bytecode] TRACE RETURN_VALUE None [TensorVariable()]
V0107 16:55:10.369000 1702873 torch/_dynamo/symbolic_convert.py:3272] [1/0] DONE INLINING <code object k at 0x7f4599d3b3c0, file "/data/users/yidi/pytorch/test_while_loop.py", line 3>
V0107 16:55:10.369000 1702873 torch/_dynamo/symbolic_convert.py:979] [1/0] [__trace_bytecode] TRACE RETURN_VALUE None [TensorVariable()]
V0107 16:55:10.370000 1702873 torch/_dynamo/convert_frame.py:778] [1/0] Skipping frame because no content in function call g /data/users/yidi/pytorch/test_while_loop.py 6
V0107 16:55:10.370000 1702873 torch/_dynamo/convert_frame.py:787] [1/0] No graph captured with one_graph=True
I0107 16:55:10.370000 1702873 torch/_dynamo/pgo.py:639] [1/0] put_code_state: no cache key, skipping
I0107 16:55:10.370000 1702873 torch/_dynamo/convert_frame.py:1059] [1/0] run_gc_after_compile: running gc
V0107 16:55:10.374000 1702873 torch/_dynamo/convert_frame.py:941] [2/0] torchdynamo start compiling k /data/users/yidi/pytorch/test_while_loop.py:3, stack (elided 4 frames):
V0107 16:55:10.374000 1702873 torch/_dynamo/convert_frame.py:941] [2/0] File "/data/users/yidi/pytorch/test_while_loop.py", line 12, in <module>
V0107 16:55:10.374000 1702873 torch/_dynamo/convert_frame.py:941] [2/0] c = torch.compile(f, fullgraph=True)(a)
V0107 16:55:10.374000 1702873 torch/_dynamo/convert_frame.py:941] [2/0] File "/data/users/yidi/pytorch/torch/_dynamo/eval_frame.py", line 576, in _fn
V0107 16:55:10.374000 1702873 torch/_dynamo/convert_frame.py:941] [2/0] return fn(*args, **kwargs)
V0107 16:55:10.374000 1702873 torch/_dynamo/convert_frame.py:941] [2/0] File "/data/users/yidi/pytorch/test_while_loop.py", line 10, in f
V0107 16:55:10.374000 1702873 torch/_dynamo/convert_frame.py:941] [2/0] return g(x)
V0107 16:55:10.374000 1702873 torch/_dynamo/convert_frame.py:941] [2/0]
I0107 16:55:10.374000 1702873 torch/_dynamo/symbolic_convert.py:2744] [2/0] Step 1: torchdynamo start tracing k /data/users/yidi/pytorch/test_while_loop.py:3
I0107 16:55:10.375000 1702873 torch/fx/experimental/symbolic_shapes.py:3221] [2/0] create_env
V0107 16:55:10.375000 1702873 torch/_dynamo/symbolic_convert.py:956] [2/0] [__trace_source] TRACE starts_line /data/users/yidi/pytorch/test_while_loop.py:4 in k (k)
V0107 16:55:10.375000 1702873 torch/_dynamo/symbolic_convert.py:956] [2/0] [__trace_source] return x
V0107 16:55:10.375000 1702873 torch/_dynamo/symbolic_convert.py:979] [2/0] [__trace_bytecode] TRACE LOAD_FAST x []
V0107 16:55:10.375000 1702873 torch/_dynamo/symbolic_convert.py:979] [2/0] [__trace_bytecode] TRACE RETURN_VALUE None [LazyVariableTracker()]
V0107 16:55:10.376000 1702873 torch/_dynamo/variables/builder.py:2869] [2/0] wrap_to_fake L['x'] (2, 2) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], dynamic_strides=[<DimDynamic.INFER_STRIDE: 4>, <DimDynamic.INFER_STRIDE: 4>], constraint_sizes=[None, None], constraint_strides=[None, None], view_base_context=None, tensor_source=LocalSource(local_name='x', is_input=True, is_derefed_cell_contents=False), shape_env_to_source_to_symbol_cache={}) <class 'torch.Tensor'>
V0107 16:55:10.376000 1702873 torch/_dynamo/output_graph.py:2201] [2/0] create_graph_input L_x_ L['x'] FakeTensor(..., size=(2, 2)) at debug_level 0 before=False
V0107 16:55:10.377000 1702873 torch/_dynamo/convert_frame.py:778] [2/0] Skipping frame because no content in function call k /data/users/yidi/pytorch/test_while_loop.py 3
V0107 16:55:10.377000 1702873 torch/_dynamo/convert_frame.py:787] [2/0] No graph captured with one_graph=True
I0107 16:55:10.377000 1702873 torch/_dynamo/pgo.py:639] [2/0] put_code_state: no cache key, skipping
I0107 16:55:10.377000 1702873 torch/_dynamo/convert_frame.py:1059] [2/0] run_gc_after_compile: running gc
I0107 16:55:12.533000 1703243 torch/_dynamo/eval_frame.py:398] TorchDynamo attempted to trace the following frames: [
I0107 16:55:12.533000 1703243 torch/_dynamo/eval_frame.py:398]
I0107 16:55:12.533000 1703243 torch/_dynamo/eval_frame.py:398] ]
I0107 16:55:12.538000 1703243 torch/_dynamo/utils.py:636] TorchDynamo compilation metrics:
I0107 16:55:12.538000 1703243 torch/_dynamo/utils.py:636] Function Runtimes (s)
I0107 16:55:12.538000 1703243 torch/_dynamo/utils.py:636] ---------- --------------
V0107 16:55:12.538000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats constrain_symbol_range: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0107 16:55:12.539000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats defer_runtime_assert: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0107 16:55:12.539000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats evaluate_expr: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0107 16:55:12.539000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats _simplify_floor_div: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0107 16:55:12.539000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats _maybe_guard_rel: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0107 16:55:12.539000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats _find: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0107 16:55:12.539000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats has_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0107 16:55:12.539000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats size_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0107 16:55:12.539000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats simplify: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0107 16:55:12.539000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats _update_divisible: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0107 16:55:12.540000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats replace: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0107 16:55:12.540000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats _maybe_evaluate_static: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0107 16:55:12.540000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats get_implications: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0107 16:55:12.540000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats get_axioms: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0107 16:55:12.540000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats _maybe_evaluate_static_worker: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0107 16:55:12.540000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats safe_expand: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0107 16:55:12.540000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats uninteresting_files: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
I0107 16:55:13.045000 1702873 torch/_dynamo/eval_frame.py:398] TorchDynamo attempted to trace the following frames: [
I0107 16:55:13.045000 1702873 torch/_dynamo/eval_frame.py:398] * f /data/users/yidi/pytorch/test_while_loop.py:9
I0107 16:55:13.045000 1702873 torch/_dynamo/eval_frame.py:398] * g /data/users/yidi/pytorch/test_while_loop.py:6
I0107 16:55:13.045000 1702873 torch/_dynamo/eval_frame.py:398] * k /data/users/yidi/pytorch/test_while_loop.py:3
I0107 16:55:13.045000 1702873 torch/_dynamo/eval_frame.py:398] ]
I0107 16:55:13.050000 1702873 torch/_dynamo/utils.py:636] TorchDynamo compilation metrics:
I0107 16:55:13.050000 1702873 torch/_dynamo/utils.py:636] Function Runtimes (s)
I0107 16:55:13.050000 1702873 torch/_dynamo/utils.py:636] ---------------------- --------------
I0107 16:55:13.050000 1702873 torch/_dynamo/utils.py:636] _compile.compile_inner 0.9094
I0107 16:55:13.050000 1702873 torch/_dynamo/utils.py:636] gc 0.0024
```
Ideally, we should be able to skip compilation of the function calls to `g` and `k`.
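For reference, a minimal standalone repro matching the frames in the trace above (reconstructed from the log, so treat file and line details as approximate):

```python
import torch

def k(x):
    return x

def g(x):
    return k(x)

def f(x):
    return g(x)

# None of these frames captures any tensor ops, so dynamo logs
# "Skipping frame because no content" once per frame instead of
# skipping the inner calls after analyzing the outermost one.
a = torch.randn(2, 2)
c = torch.compile(f, fullgraph=True)(a)
```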
### Versions
main
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,773,999,079 | Incorrect Results with Tensor Parallelism | amogkam | open | [
"oncall: distributed"
] | 3 | NONE | ### 🐛 Describe the bug
I am trying a basic Tensor Parallel implementation on a 2 layer MLP using `ColwiseParallel` followed by a `RowwiseParallel`. I would expect the final output of the MLP to be the same in the Tensor Parallel version compared to the non-parallelized version. However, the output tensors are different.
```python
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.distributed.tensor.parallel import parallelize_module, ColwiseParallel, RowwiseParallel
from torch.distributed.tensor.placement_types import Replicate, Shard
class MLP(nn.Module):
def __init__(
self,
dim: int,
expand_ratio: int,
mp_mesh,
_parallelize=True
):
super().__init__()
self.mp_mesh = mp_mesh
self.proj_in = nn.Linear(dim, dim * expand_ratio)
self.act = nn.GELU("tanh")
self.proj_out = nn.Linear(dim * expand_ratio, dim)
def forward(self, x: torch.FloatTensor) -> torch.FloatTensor:
x = self.proj_in(x)
x = self.act(x)
x = self.proj_out(x)
return x
if __name__ == "__main__":
import os
from torch.distributed.device_mesh import init_device_mesh
import torch.distributed.tensor as dtensor
torch.manual_seed(0)
local_rank = int(os.environ["LOCAL_RANK"])
device = torch.device(f'cuda:{local_rank}')
mesh = init_device_mesh("cuda", (8,))
head_dim = 80
num_heads = 24
d_model = head_dim * num_heads
text_seq_len = 10
    model = MLP(d_model, expand_ratio=4, mp_mesh=mesh, _parallelize=True).to(device).to(torch.bfloat16)
dtext = dtensor.randn((text_seq_len, d_model), dtype=torch.bfloat16, device_mesh=mesh, placements=[Replicate()])
text = dtext.full_tensor()
text_output = model(text)
model = parallelize_module(model, device_mesh=mesh, parallelize_plan={
"proj_in": ColwiseParallel(use_local_output=True),
"proj_out": RowwiseParallel(use_local_output=True)})
parallel_text_out = model(dtext)
if local_rank == 0:
print("Text output: ", text_output)
print("Parallel text output: ", parallel_text_out)
assert text_output.size() == parallel_text_out.size()
assert torch.allclose(text_output, parallel_text_out) # This assertion fails
```
I run this on a single node with 8 GPUs via `torchrun --nproc_per_node=8 torch_tp_test.py`.
But the assertion fails with
```
Text output: tensor([[-0.1299, -0.1758, -0.0344, ..., 0.1128, -0.2178, -0.0466],
[-0.0226, 0.1167, 0.1768, ..., -0.0160, -0.0405, -0.2656],
[-0.1641, -0.0554, 0.2715, ..., 0.1475, 0.0967, 0.1309],
...,
[-0.0820, -0.0391, 0.2480, ..., -0.0525, -0.0962, 0.0903],
[-0.0179, -0.0850, -0.1641, ..., -0.2451, 0.0364, -0.0962],
[-0.2676, 0.0332, -0.2500, ..., -0.0410, -0.2412, 0.2930]],
device='cuda:0', dtype=torch.bfloat16, grad_fn=<AddmmBackward0>)
Parallel text output: AsyncCollectiveTensor(tensor([[-0.1309, -0.1758, -0.0334, ..., 0.1108, -0.2188, -0.0471],
[-0.0234, 0.1162, 0.1758, ..., -0.0176, -0.0381, -0.2676],
[-0.1621, -0.0549, 0.2695, ..., 0.1455, 0.0967, 0.1318],
...,
[-0.0825, -0.0366, 0.2480, ..., -0.0537, -0.0977, 0.0898],
[-0.0181, -0.0830, -0.1621, ..., -0.2451, 0.0361, -0.0977],
[-0.2676, 0.0325, -0.2490, ..., -0.0410, -0.2402, 0.2930]],
device='cuda:0', dtype=torch.bfloat16))
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/amogkamsetty/torch_tp_test.py", line 88, in <module>
[rank0]: assert torch.allclose(text_output, parallel_text_out)
[rank0]: AssertionError
```
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (conda-forge gcc 12.1.0-17) 12.1.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 208
On-line CPU(s) list: 0-207
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8481C CPU @ 2.70GHz
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 52
Socket(s): 2
Stepping: 8
BogoMIPS: 5399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b fsrm md_clear serialize amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 4.9 MiB (104 instances)
L1i cache: 3.3 MiB (104 instances)
L2 cache: 208 MiB (104 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-51,104-155
NUMA node1 CPU(s): 52-103,156-207
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torch-tb-profiler==0.3.1
[pip3] torchaudio==2.0.1+3b40834
[pip3] torchmetrics==1.4.0.post0
[pip3] torchtyping==0.1.4
[pip3] triton==3.1.0
[pip3] vllm_nccl_cu12==2.18.1.0.4.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torch-tb-profiler 0.3.1 pypi_0 pypi
[conda] torchaudio 2.0.1+3b40834 pypi_0 pypi
[conda] torchmetrics 1.4.0.post0 pypi_0 pypi
[conda] torchtyping 0.1.4 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
[conda] vllm-nccl-cu12 2.18.1.0.4.0 pypi_0 pypi
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,773,991,895 | [ONNX] Update images and APIs to onnx_dynamo.rst | titaiwangms | closed | [
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: docs",
"suppress-bc-linter"
] | 15 | COLLABORATOR | Update the result image of exporting, and delete the functions/class that belongs to `torch.onnx.dynamo_export` | true |
2,773,985,315 | python-3.13t binaries are only available for Linux x86 | malfet | closed | [
"module: binaries",
"oncall: releng",
"triaged"
] | 7 | CONTRIBUTOR | ### 🐛 Describe the bug
Looking at https://download.pytorch.org/whl/test/torch/, I've noticed that 3.13t binaries are only available for Linux x86; neither linux-aarch64, Windows, nor Mac supports them.
### Versions
2.6/CI
cc @seemethere @osalpekar @atalman | true |
2,773,943,148 | [ONNX] Use torch.export.Dim.AUTO in dynamo_export | titaiwangms | closed | [
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: improvements"
] | 3 | COLLABORATOR | Align to the changes in https://github.com/pytorch/pytorch/pull/143158 | true |
2,773,939,907 | Add `is_dtype_supported` predicate to DeviceInterface | malfet | closed | [
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | The predicate returns true by default, unless dtype is bf16.
For the MPS device it will return false if dtype is double.
Check that it works by refactoring `test_inf` to expect a TypeError when invoked with an unsupported dtype.
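A rough sketch of the semantics described, with names and structure assumed for illustration (this is not the actual DeviceInterface code):

```python
import torch

class DeviceInterface:
    @classmethod
    def is_bf16_supported(cls) -> bool:
        return True

    @classmethod
    def is_dtype_supported(cls, dtype: torch.dtype) -> bool:
        # Default: every dtype passes except bf16, which defers to the
        # device's bf16 capability check.
        return dtype != torch.bfloat16 or cls.is_bf16_supported()

class MpsInterface(DeviceInterface):
    @classmethod
    def is_dtype_supported(cls, dtype: torch.dtype) -> bool:
        # The MPS backend has no float64 support, so reject double outright.
        if dtype == torch.float64:
            return False
        return super().is_dtype_supported(dtype)
```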
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,773,885,857 | Improve torchrun documentation | fepegar | closed | [
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 13 | CONTRIBUTOR | Fixes #142042:
- #142042
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,773,874,438 | implement pruning for GroupedInductorBenchmarker | nmacchioni | closed | [
"Stale",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144507
* #144505
* #144501
* __->__ #144353
* #133287
* #144365
* #133121
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,773,865,542 | [Pipelining] Fix PP grad scaling | wconstab | closed | [
"oncall: distributed",
"Merged",
"release notes: distributed (pipeline)"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144734
* #144596
* __->__ #144352
Adds a grad-scaling method `perform_pp_grad_scaling()` which divides grads by num_microbatches.
Enables grad scaling by default, unless disabled due to using a loss function that sums instead of averaging losses.
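A minimal sketch of that scaling under assumed names (illustrative only, not the actual schedule code):

```python
import torch

def perform_pp_grad_scaling(params, num_microbatches: int) -> None:
    # Summing per-microbatch grads of a mean-reduction loss overcounts by a
    # factor of num_microbatches; divide once after the step to correct it.
    for p in params:
        if p.grad is not None:
            p.grad.div_(num_microbatches)
```

A schedule whose loss function sums (rather than averages) over microbatches would skip this call, matching the opt-out described above.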
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o | true |
2,773,814,912 | Add fp8 content for hipify | PoodleWang | closed | [
"fb-exported",
"topic: not user facing"
] | 12 | NONE | Summary:
Add fp8 hipify content.
Test plan:
Internal test for NV and AMD GPUs.
Internal usage at Meta. [D67305195]
| true |
2,773,745,007 | Remove tests for linux-focal-py3_9-clang10-build | zxiiro | closed | [
"triaged",
"open source",
"topic: not user facing"
] | 4 | COLLABORATOR | The 2 test suites seem to run the same tests.
* linux-focal-py3_9-clang10-build
* linux-focal-py3_13-clang10-build
Perhaps we can reduce redundancy and only run the test suites with one of the builds?
```
{ include: [
{ config: "default", shard: 1, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.4xlarge" },
{ config: "default", shard: 2, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.4xlarge" },
{ config: "default", shard: 3, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.4xlarge" },
{ config: "default", shard: 4, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.4xlarge" },
{ config: "default", shard: 5, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.4xlarge" },
{ config: "crossref", shard: 1, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge" },
{ config: "crossref", shard: 2, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge" },
{ config: "dynamo_wrapped", shard: 1, num_shards: 3, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge" },
{ config: "dynamo_wrapped", shard: 2, num_shards: 3, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge" },
{ config: "dynamo_wrapped", shard: 3, num_shards: 3, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge" },
]}
```
Issue: pytorch/pytorch#67352
| true |
2,773,693,492 | codecache.py: Utilize precompiled headers for CPP python bindings | benjaminglass1 | closed | [
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144349
* #144293
* #146928
Significantly increase default inductor OpInfo testing speed by precompiling a complex header included in CPU tests.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,773,671,121 | Add SM89 support for f8f8bf16_rowwise() | alexsamardzic | closed | [
"module: cuda",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"no-runner-experiments"
] | 12 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144348
cc @ptrblck @msaroufim @eqy @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,773,596,696 | [CD] Aarch64 builds should not override `OVERRIDE_PACKAGE_VERSION` envvar | pytorchbot | closed | [
"open source"
] | 1 | COLLABORATOR | Currently our nightly aarch64 binaries have the correct suffixes (+cpu or +cu126), but release binaries are missing these suffixes. To correct this and keep nightly and release binaries consistent, I propose this change.
I see that the override is already set correctly in the release workflow:
https://github.com/pytorch/pytorch/actions/runs/12383179841/job/34565381200
For CPU:
```
OVERRIDE_PACKAGE_VERSION="2.6.0+cpu"
```
For CUDA:
```
OVERRIDE_PACKAGE_VERSION="2.6.0+cu126"
```
The removed code will set : OVERRIDE_PACKAGE_VERSION="2.6.0" for both cuda and cpu builds for release binaries.
cc @tinglvv | true |
2,773,558,920 | Eliminate c10::optional usage in PyTorch | houseroad | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4 | MEMBER | Differential Revision: D67907427
| true |
2,773,554,047 | [Pipelining] Refactor pp composability test to use faster MPCT | wconstab | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144426
* #144352
* __->__ #144345
* Using the MultiProcessContinuousTest base class is faster (60s vs 279s for
the full run of `test_manual_with_data_parallel` and all its
parametrizations)
* Have to move to a new file to use MPCT since it requires a different
launcher style in `__main__`
* Propose to reorganize the composability tests anyway, since
`test/_composable/test_composability/test_pp_composability` is an
annoyingly long path
* rename `test_manual_with_data_parallel` to `test_pp_dp` for
simplicity/consistency with newer test names. (manual refers to not
using tracer frontend, but that's not so important to call out in the
test name)
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o | true |
2,773,541,254 | custom_op's backward changes can't invalidate `torch.compile` cache for backward | YouJiacheng | open | [
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher"
] | 7 | CONTRIBUTOR | ### 🐛 Describe the bug
(clean cache: `rm -r /tmp/torchinductor_root/*`)
First, run the following code
```python
import torch
from torch import Tensor
@torch.library.custom_op("mylib::foo", mutates_args=())
def foo(x: Tensor) -> Tensor:
return x.clone()
@foo.register_fake
def _(x):
return torch.empty_like(x)
def backward(ctx, grad):
return 1.0 * grad
foo.register_autograd(backward)
x = torch.tensor(0., requires_grad=True)
@torch.compile
def bar(x):
return torch.ops.mylib.foo(x)
bar(x).backward()
print(x.grad) # tensor(1.)
```
Then, change the code to
```python
import torch
from torch import Tensor
@torch.library.custom_op("mylib::foo", mutates_args=())
def foo(x: Tensor) -> Tensor:
return x.clone()
@foo.register_fake
def _(x):
return torch.empty_like(x)
def backward(ctx, grad):
return 2.0 * grad
foo.register_autograd(backward)
x = torch.tensor(0., requires_grad=True)
@torch.compile
def bar(x):
return torch.ops.mylib.foo(x)
bar(x).backward()
print(x.grad) # tensor(1.)
```
It will still print `tensor(1.)`.
Interestingly, if the "sequence" of backwards is
(clean cache: `rm -r /tmp/torchinductor_root/*`)
```python
def backward(ctx, grad):
return grad
# tensor(1.)
```
```python
def backward(ctx, grad):
return 2.0 * grad
# tensor(2.)
```
```python
def backward(ctx, grad):
return grad
# tensor(2.)
```
It will print `tensor(1.)`, `tensor(2.)`, `tensor(2.)`.
I inspected the code generated by inductor, and found that it didn't change after `1.0` was changed to `2.0`:
```python
# /tmp/torchinductor_root/5g/c5gahzddocrqqegxwc4iud6jjufbxmvx6rwvify7r4bkdc5tec6v.py
# other lines omitted
cpp_fused_mul_0 = async_compile.cpp_pybinding(['const float*', 'float*'], '''
#include "/tmp/torchinductor_root/db/cdb7hyptwxpzukwd42x4ajfjlgrpum4a4htdd6lhb65apclsmno4.h"
extern "C" void kernel(const float* in_ptr0,
float* out_ptr0)
{
{
{
{
auto tmp0 = in_ptr0[static_cast<int64_t>(0L)];
auto tmp1 = static_cast<float>(1.0);
auto tmp2 = decltype(tmp0)(tmp0 * tmp1);
out_ptr0[static_cast<int64_t>(0L)] = tmp2;
}
}
}
}
''')
async_compile.wait(globals())
del async_compile
def call(args):
tangents_1, = args
args.clear()
assert_size_stride(tangents_1, (), ())
buf0 = empty_strided_cpu((), (), torch.float32)
cpp_fused_mul_0(tangents_1, buf0)
del tangents_1
return (buf0, )
```
And deleting this generated file (`/tmp/torchinductor_root/5g/c5gahzddocrqqegxwc4iud6jjufbxmvx6rwvify7r4bkdc5tec6v.py`) can't solve the problem -- an identical file will be generated.
### Versions
PyTorch version: 2.7.0.dev20250107+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.4
Libc version: glibc-2.35
Python version: 3.12.8 (main, Dec 19 2024, 14:33:20) [Clang 18.1.8 ] (64-bit runtime)
Python platform: Linux-5.4.250-2-velinux1u1-amd64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.129.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 168
On-line CPU(s) list: 0-161
Off-line CPU(s) list: 162-167
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8457C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 42
Socket(s): 2
Stepping: 8
BogoMIPS: 5199.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3.9 MiB (84 instances)
L1i cache: 2.6 MiB (84 instances)
L2 cache: 168 MiB (84 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-83
NUMA node1 CPU(s): 84-167
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250107+cu126
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng @zou3519 @bdhirsh | true |
2,773,487,918 | [ONNX] Handle list values as 0d inputs | justinchuby | closed | [
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: bug fixes"
] | 11 | COLLABORATOR | Handle list values as 0d inputs instead of 1d, as the `SymInt`s are expected to be 0d tensors in ONNX.
This PR reshapes int64 values into 1D tensors in a list, assuming they are 0D tensors initially. | true |
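The core transformation can be sketched as follows (illustrative only, not the PR's actual code): a Python int bound for an ONNX graph is traced as a 0-D int64 tensor, and reshaping it to 1-D lets it be concatenated into a sequence input.

```python
import torch

# A SymInt-like value is traced as a 0-D int64 tensor;
# reshaping to 1-D lets it join a list/sequence of dims.
scalar = torch.tensor(3, dtype=torch.int64)   # 0-D, shape ()
as_1d = scalar.reshape(1)                     # 1-D, shape (1,)
dims = torch.cat([as_1d, torch.tensor(4, dtype=torch.int64).reshape(1)])
print(dims.tolist())  # [3, 4]
```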
2,773,485,767 | [dynamo][dicts] Consolidate dict(..) construction | anijain2305 | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 11 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144342
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,773,432,144 | torchgen: support exception boundary for ExecuTorch functions | swolchok | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144341
Needed for ExecuTorch diff D67904052.
Differential Revision: [D67906411](https://our.internmc.facebook.com/intern/diff/D67906411/) | true |
2,773,424,409 | c10::optional -> std::optional in a few places | r-barnes | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"topic: improvements"
] | 26 | CONTRIBUTOR | Test Plan: Sandcastle
| true |
2,773,415,953 | `logsumexp` parameter `dim` is optional according to the doc, but the code errors out if it's not provided | kit1980 | closed | [
"module: docs",
"triaged",
"actionable",
"module: python frontend"
] | 5 | CONTRIBUTOR | ### 🐛 Describe the bug
```python
import torch
a = torch.randn(3, 3)
torch.logsumexp(a)
```
According to the documentation (https://pytorch.org/docs/stable/generated/torch.logsumexp.html), omitting `dim` should mean "all dimensions are reduced"; instead there is an error:
```
TypeError: logsumexp() received an invalid combination of arguments - got (Tensor), but expected one of:
* (Tensor input, tuple of ints dim, bool keepdim, *, Tensor out)
* (Tensor input, tuple of names dim, bool keepdim, *, Tensor out)
```
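As a workaround until the implementation matches the docs, all dimensions can be passed explicitly (a sketch of the equivalent call, not a fix for the signature):

```python
import torch

a = torch.randn(3, 3)

# Equivalent of the documented "all dimensions are reduced" behavior:
# pass every dim explicitly, or flatten so one dim covers all elements.
full = torch.logsumexp(a, dim=tuple(range(a.dim())))
flat = a.flatten().logsumexp(dim=0)

assert torch.allclose(full, flat)
```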
### Versions
```
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) W-2255 CPU @ 3.70GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 7
CPU max MHz: 4700.0000
CPU min MHz: 1200.0000
BogoMIPS: 7399.70
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 320 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 10 MiB (10 instances)
L3 cache: 19.3 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @albanD | true |
2,773,382,684 | [TorchInductor] Add ALiBi (Attention with Linear Biases) Fused Attention Pattern | vyom1611 | open | [
"triaged",
"open source",
"Stale",
"topic: not user facing",
"module: inductor"
] | 4 | NONE | ## Summary
This PR adds support for ALiBi (Attention with Linear Biases) in TorchInductor’s fused-attention. ALiBi applies a position-based bias to attention scores, improving extrapolation for language modeling tasks. With this addition, ALiBi-based attention can leverage PyTorch’s optimized `_scaled_dot_product_attention` kernel.
## Changes
- **New ALiBi Pattern & Replacement**
- `_sfdp_pattern_alibi(...)`: Recognizes \[Q @ Kᵀ / √d + alibi_bias\] → softmax → dropout → matmul(V).
- `_sfdp_replacement_alibi(...)`: Fuses into `_scaled_dot_product_attention` using `attn_mask=alibi_bias`.
- **Test**
- Added `_test_sdpa_rewriter_alibi` in `TestSDPAPatternRewriterTemplate`.
- Confirms forward/backward correctness under dropout.
- If you get the error `torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: RuntimeError: Duplicate pattern: expand_default = CallFunction(aten.expand.default, KeywordArg('query'), Ignored())`, run `export PYTORCH_GEN_PATTERNS=1` in the terminal to generate the attention pattern.
## Notes
- If FlashAttention does not support ALiBi directly, PyTorch gracefully falls back to MATH or MEM-EFFICIENT kernels.
- Combining ALiBi with a causal mask can be done by summing the bias and mask if needed.
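For reference, the bias the pattern matches can be computed as in the ALiBi paper; this is an illustrative pure-Python sketch (head slopes assume a power-of-two head count), not code from this PR:

```python
def alibi_slopes(n_heads):
    # Geometric head slopes from the ALiBi paper: 2**(-8*i/n) for i = 1..n
    # (assumes n_heads is a power of two; the paper interpolates otherwise).
    return [2.0 ** (-8.0 * (i + 1) / n_heads) for i in range(n_heads)]

def alibi_bias(n_heads, seq_len):
    # bias[h][q][k] = -slope_h * (q - k); added to Q @ K^T / sqrt(d) before softmax.
    return [[[-s * (q - k) for k in range(seq_len)] for q in range(seq_len)]
            for s in alibi_slopes(n_heads)]

bias = alibi_bias(n_heads=4, seq_len=5)
print(bias[0][4][0])  # head 0, query 4, key 0 -> -0.25 * 4 = -1.0
```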
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,773,376,588 | Testing new triton llvm commit | jataylo | closed | [
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"ciflow/rocm",
"ciflow/inductor-micro-benchmark",
"ciflow/inductor-rocm",
"ciflow/inductor-periodic"
] | 3 | COLLABORATOR | Previous triton llvm commit (https://github.com/pytorch/pytorch/pull/140698) broke A100 in resnet models, retesting CI to see if this is resolved. | true |
2,773,374,580 | Fix int8 mm V.ops.mul dispatching | pytorchbot | closed | [
"open source",
"module: inductor",
"ciflow/inductor"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #142350
* __->__ #143127
This is sort of subtle: because we were doing `V.ops.mul` at binding time, we don't redispatch later when we invoke the epilogue, and then we run into the assertion checking in the PR above.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,773,371,675 | Fix PythonMod printing | isuruf | closed | [
"module: cpu",
"open source",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3 | COLLABORATOR | Cherry pick #144078 and its dependency #143197
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,773,355,897 | Implement `Size.__radd__` (currently `tuple + Size` upcasts to `tuple`) | randolf-scholz | open | [
"triaged",
"actionable",
"module: python frontend"
] | 4 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
`torch.Size`, just like `tuple` which it subclasses from, does not implement an `__radd__` function. This has the consequence that `Size + tuple` returns a `Size`, whereas `tuple + Size` returns a `tuple`, since it falls back to `tuple.__add__(left, right)`:
```py
>>> import torch
>>> torch.Size([1,2,3]) + (4,5,6)
torch.Size([1, 2, 3, 4, 5, 6])
>>> (4,5,6) + torch.Size([1,2,3])
(4, 5, 6, 1, 2, 3)
```
This can be unexpected, so it would be useful if `Size` implemented
```py
def __radd__(self, other: tuple[int, ...]) -> Size: ...
```
Since in most cases, upcasting to `tuple` is likely not the desired outcome.
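A minimal sketch of the proposed fix, using a plain `tuple` subclass to stand in for `torch.Size` (hypothetical names, not PyTorch code): because the right operand is a `tuple` subclass that overrides the reflected method, Python tries its `__radd__` before `tuple.__add__`, so the result stays a `Size`.

```python
class Size(tuple):  # stand-in for torch.Size
    def __add__(self, other):
        return Size(tuple(self) + tuple(other))

    def __radd__(self, other):
        # Called for `tuple + Size` because Size is a tuple subclass
        # overriding the reflected method, so no upcast to plain tuple.
        return Size(tuple(other) + tuple(self))

s = Size((1, 2, 3))
print(type(s + (4, 5, 6)).__name__)  # Size
print(type((4, 5, 6) + s).__name__)  # Size
```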
cc @albanD | true |
2,773,272,071 | [pytree][2/N] change pytree usages to implementation agnostic | XuehaiPan | open | [
"oncall: distributed",
"oncall: jit",
"open source",
"Stale",
"topic: not user facing",
"fx",
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend"
] | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #138056
* __->__ #144333
* #144332
* #130141
* #137884
* #144405
* #137400
* #130140
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @SherlockNoMad @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,773,271,744 | [pytree][1/N] change pytree usages to implementation agnostic: `torch.distributed` | XuehaiPan | open | [
"oncall: distributed",
"open source",
"Stale",
"release notes: distributed (sharded)",
"module: dynamo",
"ciflow/inductor",
"release notes: distributed (checkpoint)"
] | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144332
* #130141
* #144405
* #137400
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 | true |
2,773,188,988 | [Export] UserWarning: Attempted to insert a get_attr Node with no underlying reference in the owning GraphModule | bhack | open | [
"oncall: pt2",
"oncall: export"
] | 4 | CONTRIBUTOR | ### 🐛 Describe the bug
Using `torch.export` on https://github.com/MCG-NJU/VFIMamba
I got
```python
/opt/conda/lib/python3.11/site-packages/torch/export/_unlift.py:75: UserWarning: Attempted to insert a get_attr Node with no underlying reference in the owning GraphModule! Call GraphModule.add_submodule to add the necessary submodule, GraphModule.add_parameter to add the necessary Parameter, or nn.Module.register_buffer to add the necessary buffer
getattr_node = gm.graph.get_attr(lifted_node)
/opt/conda/lib/python3.11/site-packages/torch/fx/graph.py:1801: UserWarning: Node lifted_tensor_0 target lifted_tensor_0 lifted_tensor_0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
```
I think the problem could be in
https://github.com/MCG-NJU/VFIMamba/blob/main/model/warplayer.py
Is this warning safe to ignore, or does it require a workaround? In any case, can we improve the message?
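One common way to avoid lifted-tensor `get_attr` warnings is to register constant tensors as buffers on the module instead of constructing them inside `forward`; a generic sketch under that assumption (hypothetical module, not the actual `warplayer.py` fix):

```python
import torch
from torch import nn

class WarpGrid(nn.Module):  # hypothetical stand-in for a warp layer
    def __init__(self, h, w):
        super().__init__()
        grid = torch.stack(
            torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        ).float()
        # A registered buffer gives export a named attribute to reference,
        # instead of lifting an anonymous constant tensor.
        self.register_buffer("grid", grid)

    def forward(self, flow):
        return self.grid + flow

m = WarpGrid(2, 3)
print(sorted(name for name, _ in m.named_buffers()))  # ['grid']
```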
### Versions
nightly
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,773,174,371 | Fix batch-specific attention mod for NJT + Flex | pytorchbot | closed | [
"open source"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143866
Fixes #143788 | true |
2,773,108,317 | [BE]: Remove unnecessary copy of gradients in util | Skylion007 | closed | [
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | COLLABORATOR | No need to copy gradients to CPU too
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,773,070,647 | Debug build fails to compile on x86 with WERROR=1 | robert-hardwick | open | [
"module: build",
"triaged"
] | 1 | COLLABORATOR | ### 🐛 Describe the bug
Attempted to build a debug whl on x86 machine in ubuntu docker image 'pytorch-linux-jammy-py3.9-gcc11'
Build passes when DEBUG=0 OR with DEBUG=1 and WERROR=0
`In file included from /var/lib/jenkins/workspace/torch/csrc/jit/tensorexpr/llvm_codegen.cpp:24:
/opt/llvm/include/llvm/IR/IRBuilder.h: In member function ‘llvm::LoadInst* llvm::IRBuilder<T, Inserter>::CreateLoad(llvm::Type*, llvm::Value*, const llvm::Twine&) [with T = llvm::ConstantFolder; Inserter = llvm::IRBuilderDefaultInserter]’:
/opt/llvm/include/llvm/IR/IRBuilder.h:1581:19: error: ‘static void llvm::User::operator delete(void*)’ called on pointer returned from a mismatched allocation function [-Werror=mismatched-new-delete]
1581 | return Insert(new LoadInst(Ty, Ptr), Name);
| ^~~~~~~~~~~~~~~~~~~~~
/opt/llvm/include/llvm/IR/IRBuilder.h:1581:19: note: returned from ‘static void* llvm::UnaryInstruction::operator new(size_t)’
/opt/llvm/include/llvm/IR/IRBuilder.h: In member function ‘llvm::Value* llvm::IRBuilder<T, Inserter>::CreateFCmp(llvm::CmpInst::Predicate, llvm::Value*, llvm::Value*, const llvm::Twine&, llvm::MDNode*) [with T = llvm::ConstantFolder; Inserter = llvm::IRBuilderDefaultInserter]’:
/opt/llvm/include/llvm/IR/IRBuilder.h:2181:30: error: ‘static void llvm::User::operator delete(void*)’ called on pointer returned from a mismatched allocation function [-Werror=mismatched-new-delete]
2181 | return Insert(setFPAttrs(new FCmpInst(P, LHS, RHS), FPMathTag, FMF), Name);
| ^~~~~~~~~~~~~~~~~~~~~~~~~
/opt/llvm/include/llvm/IR/IRBuilder.h:2181:30: note: returned from ‘static void* llvm::CmpInst::operator new(size_t)’
/opt/llvm/include/llvm/IR/IRBuilder.h: In member function ‘llvm::Value* llvm::IRBuilder<T, Inserter>::CreateICmp(llvm::CmpInst::Predicate, llvm::Value*, llvm::Value*, const llvm::Twine&) [with T = llvm::ConstantFolder; Inserter = llvm::IRBuilderDefaultInserter]’:
/opt/llvm/include/llvm/IR/IRBuilder.h:2173:19: error: ‘static void llvm::User::operator delete(void*)’ called on pointer returned from a mismatched allocation function [-Werror=mismatched-new-delete]
2173 | return Insert(new ICmpInst(P, LHS, RHS), Name);
| ^~~~~~~~~~~~~~~~~~~~~~~~~
/opt/llvm/include/llvm/IR/IRBuilder.h:2173:19: note: returned from ‘static void* llvm::CmpInst::operator new(size_t)’
/opt/llvm/include/llvm/IR/IRBuilder.h: In member function ‘llvm::AllocaInst* llvm::IRBuilder<T, Inserter>::CreateAlloca(llvm::Type*, llvm::Value*, const llvm::Twine&) [with T = llvm::ConstantFolder; Inserter = llvm::IRBuilderDefaultInserter]’:
/opt/llvm/include/llvm/IR/IRBuilder.h:1571:19: error: ‘static void llvm::User::operator delete(void*)’ called on pointer returned from a mismatched allocation function [-Werror=mismatched-new-delete]
1571 | return Insert(new AllocaInst(Ty, DL.getAllocaAddrSpace(), ArraySize), Name);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/opt/llvm/include/llvm/IR/IRBuilder.h:1571:19: note: returned from ‘static void* llvm::UnaryInstruction::operator new(size_t)’
/opt/llvm/include/llvm/IR/IRBuilder.h: In member function ‘llvm::StoreInst* llvm::IRBuilder<T, Inserter>::CreateStore(llvm::Value*, llvm::Value*, bool) [with T = llvm::ConstantFolder; Inserter = llvm::IRBuilderDefaultInserter]’:
/opt/llvm/include/llvm/IR/IRBuilder.h:1606:19: error: ‘static void llvm::User::operator delete(void*)’ called on pointer returned from a mismatched allocation function [-Werror=mismatched-new-delete]
1606 | return Insert(new StoreInst(Val, Ptr, isVolatile));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/opt/llvm/include/llvm/IR/IRBuilder.h:1606:19: note: returned from ‘static void* llvm::StoreInst::operator new(size_t)’
/opt/llvm/include/llvm/IR/IRBuilder.h: In member function ‘llvm::Value* llvm::IRBuilder<T, Inserter>::CreateShuffleVector(llvm::Value*, llvm::Value*, llvm::Value*, const llvm::Twine&) [with T = llvm::ConstantFolder; Inserter = llvm::IRBuilderDefaultInserter]’:
/opt/llvm/include/llvm/IR/IRBuilder.h:2296:19: error: ‘static void llvm::User::operator delete(void*)’ called on pointer returned from a mismatched allocation function [-Werror=mismatched-new-delete]
2296 | return Insert(new ShuffleVectorInst(V1, V2, Mask), Name);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/opt/llvm/include/llvm/IR/IRBuilder.h:2296:19: note: returned from ‘static void* llvm::ShuffleVectorInst::operator new(size_t)’`
### Versions
PyTorch Version = 8d35333498e9433a379611746c177285fa51c8c5
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8488C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 8
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmp
erf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_
enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16
wbnoinvd ida arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear serialize amx_bf16 avx512_fp16 am
x_tile amx_int8 flush_l1d arch_capabilities
cc @malfet @seemethere | true |
2,772,977,781 | [Fix]: Enable support for Arm Neon & SVE support for FP32 Gemm Wrapper | nikhil-arm | closed | [
"open source",
"module: arm",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"ciflow/linux-aarch64"
] | 12 | COLLABORATOR | **Performance Improvements**:
Linear Layer [ 1x512 * 512x512 ] -> 2x - 4x
Linear Layer [ 3x512 * 512x512 ] -> 2x - 4x
cc @malfet @snadampal @milpuz01 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng | true |
2,772,858,969 | Add batch_add function and test case for simplifying tensor operations | namezz | closed | [
"open source",
"release notes: nn"
] | 3 | NONE | Fixes #ISSUE_NUMBER
| true |