| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/accelerate | 3,748 | How to pass two layer classes using --fsdp_transformer_layer_cls_to_wrap? | https://github.com/huggingface/accelerate/issues/3748 | closed | [] | 2025-08-26T08:56:32Z | 2025-08-26T09:14:18Z | null | sunjian2015 | |
huggingface/diffusers | 12,239 | Support for InfiniteTalk | ### Model/Pipeline/Scheduler description
https://huggingface.co/MeiGen-AI/InfiniteTalk is a wonderful audio-driven video generation model that also supports infinite frames; it is based on Wan2.1. The demo and user workflows are also awesome. Some examples: https://www.runninghub.cn/ai-detail/195843862495620301... | https://github.com/huggingface/diffusers/issues/12239 | open | [
"help wanted",
"New pipeline/model",
"contributions-welcome"
] | 2025-08-26T06:57:43Z | 2025-09-05T00:18:46Z | 1 | supermeng |
huggingface/transformers | 40,406 | Cache tokenizer | ### Feature request
I am using Grounding DINO, which makes use of the `bert-base-uncased` tokenizer. Unfortunately, this model is never downloaded to the cache, forcing a remote call to the API. Please allow the tokenizer to be cached locally.
### Motivation
I want to use my software offline.
### Your contribution
... | https://github.com/huggingface/transformers/issues/40406 | open | [
"Feature request"
] | 2025-08-24T08:36:14Z | 2025-09-10T11:49:06Z | 5 | axymeus |
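The fix this issue asks for amounts to a cache-first lookup: check a local copy before making any remote call. A minimal pure-Python sketch of that pattern — the `fetch` callable and on-disk layout here are stand-ins for illustration, not the real Hub API:

```python
import json
import os
import tempfile

def load_tokenizer_config(cache_dir, name, fetch):
    # Cache-first load: return the local copy if present, otherwise fetch
    # remotely once and persist the result for future offline use.
    path = os.path.join(cache_dir, name, "tokenizer_config.json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f), "cache"
    cfg = fetch(name)                      # remote call only on cache miss
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump(cfg, f)
    return cfg, "remote"

calls = []
def fake_fetch(name):
    # Stand-in for a network download; records how often it is invoked.
    calls.append(name)
    return {"model": name}

with tempfile.TemporaryDirectory() as d:
    _, src1 = load_tokenizer_config(d, "bert-base-uncased", fake_fetch)
    _, src2 = load_tokenizer_config(d, "bert-base-uncased", fake_fetch)
print(src1, src2, len(calls))  # remote cache 1
```

The second load never touches the network, which is the offline behavior the issue requests.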
huggingface/tokenizers | 1,851 | SentencePieceBPE + Unicode NFD preprocessing leads to noise ? | Hi,
I have had the issue multiple times, so I assume I am doing something wrong.
**Versions:**
- tokenizers==0.21.4
- transformers==4.55.4
**Training script**
```py
from transformers import PreTrainedTokenizerFast
from pathlib import Path
from read import get_texts_iter_for_tokenizer
from tokenizers import SentenceP... | https://github.com/huggingface/tokenizers/issues/1851 | open | [] | 2025-08-24T08:28:08Z | 2025-09-17T09:33:11Z | 3 | PonteIneptique |
huggingface/coreml-examples | 17 | How to get absolute depth in meters? | How can I get absolute depth in meters? | https://github.com/huggingface/coreml-examples/issues/17 | open | [] | 2025-08-24T03:20:58Z | 2025-08-24T03:20:58Z | null | jay25208 |
huggingface/transformers | 40,398 | NVIDIA RADIO-L | ### Model description
While exploring, I came across [nvidia/RADIO-L](https://huggingface.co/nvidia/RADIO-L) and was wondering about its current support.
1. May I ask if RADIO-L is already supported in Transformers?
2. If not, would it be considered suitable to add?
3. If a model requires trust_remote_code=True, what... | https://github.com/huggingface/transformers/issues/40398 | open | [
"New model"
] | 2025-08-23T11:14:42Z | 2025-08-26T14:44:11Z | 4 | Uvi-12 |
pytorch/ao | 2,862 | Duplicated tests in test_mx_tensor.py and test_nvfp4_tensor.py? | seems like there are some duplicated tests, e.g. https://github.com/pytorch/ao/blob/27f4d7581f8fc6bab4ef37d54b09b6fa76c1ffe6/test/prototype/mx_formats/test_mx_tensor.py#L610 and https://github.com/pytorch/ao/blob/27f4d7581f8fc6bab4ef37d54b09b6fa76c1ffe6/test/prototype/mx_formats/test_nvfp4_tensor.py#L47 | https://github.com/pytorch/ao/issues/2862 | open | [] | 2025-08-23T03:26:13Z | 2025-08-23T03:26:25Z | 0 | jerryzh168 |
huggingface/diffusers | 12,222 | [Contribution welcome] adding a fast test for Qwen-Image Controlnet Pipeline | We are looking for help from the community to add a fast test for this PR
https://github.com/huggingface/diffusers/pull/12215
You can add a file under this folder:
https://github.com/huggingface/diffusers/tree/main/tests/pipelines/qwenimage
You can reference other tests we added for qwen pipelines [example](https://git... | https://github.com/huggingface/diffusers/issues/12222 | closed | [
"good first issue",
"help wanted",
"contributions-welcome"
] | 2025-08-22T21:04:50Z | 2025-08-25T01:58:59Z | 6 | yiyixuxu |
pytorch/executorch | 13,607 | "How to Support a Custom Model in HTP Backend" example code is out of date | ### 📚 The doc issue
In the "How to Support a Custom Model in HTP Backend" section of the QNN backend docs, there are a few imports that do not work. It looks like they might have moved in code, but missed in the docs. Specifically, the imports under `executorch.backends.qualcomm.compiler` and for `to_edge_transform_a... | https://github.com/pytorch/executorch/issues/13607 | closed | [
"module: doc",
"module: qnn"
] | 2025-08-22T20:53:38Z | 2025-09-30T22:34:54Z | null | GregoryComer |
huggingface/diffusers | 12,221 | [Looking for community contribution] support DiffSynth Controlnet in diffusers | ### Model/Pipeline/Scheduler description
Hi!
We want to add first party support for DiffSynth controlnet in diffusers, and we are looking for some help from the community!
Let me know if you're interested!
### Open source status
- [x] The model implementation is available.
- [x] The model weights are available (... | https://github.com/huggingface/diffusers/issues/12221 | open | [
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2025-08-22T20:49:18Z | 2025-09-11T10:01:08Z | 5 | yiyixuxu |
pytorch/xla | 9,578 | API for disabling SPMD? | The side effects of use_spmd() do not seem reversible through any obvious APIs.
https://github.com/pytorch/xla/blob/6b6ef5c7d757f955565b2083c48d936bfd758dcd/torch_xla/runtime.py#L191-L231
Is there some mechanism to do this?
| https://github.com/pytorch/xla/issues/9578 | open | [
"enhancement",
"distributed"
] | 2025-08-22T19:28:18Z | 2025-08-23T13:49:31Z | 1 | jameszianxuTT |
huggingface/safetensors | 649 | How to determine if a file is a safetensor file | Is there a good and fast way to determine if a file is a safetensors file. We would like to avoid reading the whole header.
Background we are currently trying to add safetensors as a datatype to the Galaxy project: https://github.com/galaxyproject/galaxy/pull/20754 | https://github.com/huggingface/safetensors/issues/649 | open | [] | 2025-08-22T09:17:49Z | 2025-09-03T11:08:30Z | null | bernt-matthias |
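A safetensors file begins with an 8-byte little-endian header length followed by a JSON header, so a cheap check can inspect just the first 9 bytes without reading the header itself. A sketch of that idea — the size cap is an arbitrary sanity bound chosen here, not part of the format:

```python
import json
import os
import struct
import tempfile

def looks_like_safetensors(path, max_header=100_000_000):
    # safetensors layout: 8-byte little-endian header length, then a JSON
    # header. Reading 9 bytes is enough for a plausibility check.
    with open(path, "rb") as f:
        prefix = f.read(9)
    if len(prefix) < 9:
        return False
    (n,) = struct.unpack("<Q", prefix[:8])
    # Require a sane header length and a JSON object opening brace.
    return 0 < n <= max_header and prefix[8:9] == b"{"

tmp = tempfile.mkdtemp()
good = os.path.join(tmp, "m.safetensors")
header = json.dumps({"__metadata__": {"format": "pt"}}).encode()
with open(good, "wb") as f:      # synthetic header-only file, no tensor data
    f.write(struct.pack("<Q", len(header)) + header)
bad = os.path.join(tmp, "notes.txt")
with open(bad, "wb") as f:
    f.write(b"just some text, definitely not safetensors")
print(looks_like_safetensors(good), looks_like_safetensors(bad))  # True False
```

This is only a heuristic: a fully reliable check would still need to parse and validate the JSON header.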
huggingface/lerobot | 1,775 | What's the finetuning method? Is it all full-finetuning? | I couldn't find anything about LoRA finetuning; is the default method full finetuning for now? | https://github.com/huggingface/lerobot/issues/1775 | closed | [
"question",
"policies"
] | 2025-08-22T06:48:25Z | 2025-10-07T20:55:10Z | null | lin-whale |
huggingface/lerobot | 1,774 | Finetune smolvla with vision encoder | ### System Info
```Shell
- `lerobot` version: 0.1.0
- Platform: Linux-6.8.0-65-generic-x86_64-with-glibc2.35
- Python version: 3.10.18
- Huggingface_hub version: 0.33.4
- Dataset version: 3.6.0
- Numpy version: 2.2.6
- PyTorch version (GPU?): 2.7.1+cu126 (True)
- Cuda version: 12060
- Using GPU in script?: <fill in>
`... | https://github.com/huggingface/lerobot/issues/1774 | open | [
"question",
"policies",
"good first issue"
] | 2025-08-22T05:20:58Z | 2025-10-08T11:31:02Z | null | THU-yancow |
huggingface/transformers | 40,366 | [Feature] Support fromjson in jinja2 chat template rendering | ### Feature request
GLM45 requires `fromjson` in jinja2 to deserialize str typed `tool_calls.function.arguments` to dict within chat template so it can iterate over `arguments`'s k-v within jinja2 chat template.
```
{% for tc in m.tool_calls %}
{%- if tc.function %}
{%- set tc = tc.function %}
{%- endif %}
{{ '\n<to... | https://github.com/huggingface/transformers/issues/40366 | open | [
"Feature request"
] | 2025-08-22T05:11:06Z | 2025-08-22T05:18:45Z | 1 | byjiang1996 |
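Jinja2 has no built-in `fromjson`, but the behavior the issue requests can be approximated by registering `json.loads` as a custom filter on the environment used to render the template. This sketch shows the filter in isolation, outside the transformers rendering path:

```python
import json
from jinja2 import Environment  # chat templates are jinja2-based

env = Environment()
# Register a "fromjson" filter (the name requested in the issue); jinja2
# ships no such built-in, so plain json.loads is supplied here.
env.filters["fromjson"] = json.loads

# Iterate over the key-value pairs of a str-typed arguments field, as the
# GLM45 chat template needs to do for tool_calls.function.arguments.
tmpl = env.from_string(
    "{% for k, v in (args | fromjson).items() %}{{ k }}={{ v }};{% endfor %}"
)
out = tmpl.render(args='{"city": "Paris", "unit": "C"}')
print(out)  # city=Paris;unit=C;
```

Supporting this in transformers would mean registering the filter on the environment transformers uses internally, which is what the feature request is about.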
huggingface/peft | 2,749 | Set multiple adapters actively when training | Hi! In incremental scenarios, I want to train a new adapter while keeping some old adapters actively. Notice that PeftModel can set active adapter by "model.set_adapter()". But every time can set only one adapter, where the type of args "adapter_name" is "str" rather than "List[str]". I also notice that class "PeftMixe... | https://github.com/huggingface/peft/issues/2749 | closed | [] | 2025-08-21T09:59:25Z | 2025-09-29T15:04:15Z | 4 | Yongyi-Liao |
pytorch/torchtitan | 1,612 | PP doesn't work with FlexAttention | Today PP doesn't work with FlexAttention block causal masking, because PP can't receive `eos_id` as a non-Tensor input (nor can it receive a mask function).
https://github.com/pytorch/torchtitan/blob/main/torchtitan/train.py#L433
This regression is coming from a recent refactor https://github.com/pytorch/torchtitan/pu... | https://github.com/pytorch/torchtitan/issues/1612 | closed | [
"module: pipelining",
"high priority",
"module: flex attention",
"triage review"
] | 2025-08-21T07:25:15Z | 2025-08-22T15:35:06Z | 0 | tianyu-l |
huggingface/lerobot | 1,765 | Questions about using LIBERO dataset (loss starts extremely high) | Hello,
I am training on the "**IPEC-COMMUNITY/libero_spatial_no_noops_1.0.0_lerobot**" dataset, but I encountered an issue (here is the dataset: https://huggingface.co/datasets/IPEC-COMMUNITY/libero_spatial_no_noops_1.0.0_lerobot):
At the very beginning of training, the loss is extremely high (around 500).
I would lik... | https://github.com/huggingface/lerobot/issues/1765 | open | [
"question",
"dataset",
"simulation"
] | 2025-08-21T05:06:51Z | 2025-09-23T09:46:41Z | null | hamondyan |
huggingface/transformers | 40,330 | open-qwen2vl-base | ### Model description
Is there any plan to add the open-qwen2vl-base model?
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_ | https://github.com/huggingface/transformers/issues/40330 | open | [
"New model"
] | 2025-08-21T02:24:01Z | 2025-08-23T10:18:28Z | 5 | olccihyeon |
huggingface/tokenizers | 1,850 | Safe encoding of strings that might contain special token text | When feeding untrusted string inputs into an LLM, it's often important not convert any of the input into special tokens, which might indicate message boundaries or other syntax. Among other reasons, this is important for guarding against prompt injection attacks.
tiktoken provides a way to control how the encoding dea... | https://github.com/huggingface/tokenizers/issues/1850 | closed | [] | 2025-08-21T00:53:17Z | 2025-09-01T18:03:59Z | 5 | joschu |
pytorch/ao | 2,828 | [fp8 blockwise training] add benchmarking scripts comparing triton quantization kernels vs torch.compile | ## Summary
- We currently have benchmarking scripts comparing bf16 GEMMs vs Triton fp8 groupwise/blockwise GEMMs vs torch.compile generated fp8 groupwise/blockwise GEMMs [here](https://github.com/pytorch/ao/tree/main/benchmarks/prototype/blockwise_fp8_training)
- However, we have no benchmarks mentioning the quantizati... | https://github.com/pytorch/ao/issues/2828 | open | [] | 2025-08-21T00:35:55Z | 2025-08-21T00:37:13Z | 0 | danielvegamyhre |
pytorch/torchtitan | 1,605 | How could I run the DeepSpeed-Megatron gpt_model in TorchTitan ? | Here is the model I would like to run with TorchTitan
https://github.com/deepspeedai/Megatron-DeepSpeed/blob/main/megatron/model/gpt_model.py#L188 .
Any recommendation will be appreciated.
| https://github.com/pytorch/torchtitan/issues/1605 | closed | [
"question"
] | 2025-08-20T18:55:50Z | 2025-08-21T02:34:22Z | null | githubsgi |
huggingface/peft | 2,746 | Gemma 2/3 Attention: Expected a single attention mask, got 2 instead | Hi! I'm getting this error `ValueError: Expected a single attention mask, got 2 instead` at inference (after prompt tuning)--I've only had this happen with the Gemma 2 and 3 models, so it might have something to do with their specific attention mechanism. Is there a workaround (or am I maybe missing something)?
I'm ru... | https://github.com/huggingface/peft/issues/2746 | closed | [] | 2025-08-20T18:08:02Z | 2025-08-27T02:43:22Z | 8 | michelleezhang |
huggingface/transformers | 40,323 | Is there a plan to add DINOv3 into AutoBackbone? | ### Feature request
Is there a plan to add DINOv3 to AutoBackbone? At present, DINOv2 is already included, and I think DINOv3 should be able to inherit from it directly. Appreciate it a lot.
### Motivation
For the convenience of use
### Your contribution
DINOv3 should be able to inherit from DINOv2 directly. | https://github.com/huggingface/transformers/issues/40323 | closed | [
"Feature request",
"Vision"
] | 2025-08-20T16:02:45Z | 2025-11-11T16:22:08Z | 4 | Farenweh |
pytorch/pytorch | 161,060 | [Question] How to robustly prevent operator fusion in Inductor to workaround a compilation bug? | ### 🐛 Describe the bug
I've encountered a Triton compilation failure when using torch.compile with the AOT Inductor backend. The issue appears in a model that uses a computation pattern similar to Rotary Position Embeddings (RoPE).
I'm opening this issue in advance while I work on creating a minimal, self-contained ... | https://github.com/pytorch/pytorch/issues/161060 | closed | [
"oncall: pt2"
] | 2025-08-20T15:44:48Z | 2025-08-21T10:10:23Z | null | sujuyu |
pytorch/torchrec | 3,298 | apply 2d parallel but how to save and restore weights | How do I save and restore weights when applying 2D parallelism? | https://github.com/meta-pytorch/torchrec/issues/3298 | closed | [] | 2025-08-20T10:42:19Z | 2025-08-21T01:21:42Z | 0 | zxr888 |
pytorch/ao | 2,811 | NVFP4Tensor to_copy is wrong? | ```
>>> from torchao.prototype.mx_formats.nvfp4_tensor import NVFP4Tensor
>>> import torch
>>> torch.ops.aten._to_copy(NVFP4Tensor.to_nvfp4(torch.randn((32, 128))), dtype=torch.bfloat16)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/andrewor/local/pytorch/torch/_ops.py", line 1... | https://github.com/pytorch/ao/issues/2811 | closed | [] | 2025-08-19T23:39:31Z | 2025-08-22T21:59:15Z | 0 | andrewor14 |
pytorch/xla | 9,569 | Remove excessive warn message in maybe_get_jax as it creates too many log lines during training | ## 🐛 Bug
The maybe_get_jax() function in torch_xla/_internal/jax_workarounds.py merged in #9521 currently emits a warning message when JAX is not installed. While informative, this warning results in an excessive number of log lines during training workloads, cluttering the logs and making it difficult to spot genui... | https://github.com/pytorch/xla/issues/9569 | open | [
"performance",
"usability",
"2.8 release"
] | 2025-08-19T20:27:24Z | 2025-10-11T02:52:17Z | 10 | rajkthakur |
pytorch/TensorRT | 3,786 | How to convert a AMP trained model to get best performance and speed? | According to the doc: https://docs.pytorch.org/TensorRT/user_guide/mixed_precision.html We can convert model with this project where the param precision are explicitly said in the code. But when I train a model with torch AMP GradScaler where no value precision tagged in model code, Can we use this method to get a c... | https://github.com/pytorch/TensorRT/issues/3786 | open | [] | 2025-08-19T07:30:31Z | 2025-10-23T00:20:02Z | null | JohnHerry |
huggingface/transformers | 40,263 | [VLMs] How to process a batch that contains samples with and without images? | Is there a **standard** way to process a batch that contains samples with and without images?
For example:
```python
from transformers import AutoProcessor
from PIL import Image
import numpy as np
model_id = ... # tested are "google/gemma-3-4b-it", "HuggingFaceM4/idefics2-8b", "HuggingFaceM4/Idefics3-8B-Llama3", "H... | https://github.com/huggingface/transformers/issues/40263 | closed | [] | 2025-08-19T05:09:36Z | 2025-09-18T08:08:51Z | null | qgallouedec |
huggingface/diffusers | 12,185 | What's the difference between DreamBooth LoRA and traditional LoRA? | I see a lot of examples using DreamBooth LoRA training code. What's the difference between this and traditional LoRA training? Can this DreamBooth LoRA training code be adapted to standard SFT LoRA code? Does disabling with_prior_preservation return normal LoRA training? | https://github.com/huggingface/diffusers/issues/12185 | open | [] | 2025-08-19T03:32:30Z | 2025-08-19T15:04:22Z | 3 | MetaInsight7 |
huggingface/trl | 3,918 | How to use trl-SFTTrainer to train Qwen-30B-A3B? | Has anyone tried using TRL to train Qwen-30B-A3B-Instruct-2507? | https://github.com/huggingface/trl/issues/3918 | open | [
"❓ question"
] | 2025-08-19T03:04:36Z | 2025-08-19T03:11:30Z | null | JeffWb |
huggingface/datasets | 7,739 | Replacement of "Sequence" feature with "List" breaks backward compatibility | PR #7634 replaced the Sequence feature with List in 4.0.0, so datasets saved with version 4.0.0 with that feature cannot be loaded by earlier versions. There is no clear option in 4.0.0 to use the legacy feature type to preserve backward compatibility.
Why is this a problem? I have a complex preprocessing and training... | https://github.com/huggingface/datasets/issues/7739 | open | [] | 2025-08-18T17:28:38Z | 2025-09-10T14:17:50Z | 1 | evmaki |
huggingface/gsplat.js | 119 | How to 4DGS (.splatv) | How can I generate the .splatv file and get it running on my local server? | https://github.com/huggingface/gsplat.js/issues/119 | open | [] | 2025-08-18T07:35:04Z | 2025-08-18T07:35:04Z | null | CetosEdit |
huggingface/diffusers | 12,165 | Failed to finetune the pre-trained model of 'stable-diffusion-v1-4' on image inpainting task | I finetuned the pre-trained model of 'stable-diffusion-inpainting' on image inpainting task, and all work well as the model is trained on image inpainting. But when I finetuned with the pre-trained model of 'stable-diffusion-v1-4' which is trained on text-to-image, the loss is NaN and the result is pure black.
As the... | https://github.com/huggingface/diffusers/issues/12165 | closed | [] | 2025-08-17T07:15:36Z | 2025-09-07T09:35:38Z | 7 | micklexqg |
pytorch/pytorch | 160,833 | How to address the bug 'unwaited collective calls' when using DTensor? | ### 🐛 Describe the bug
I have called .wait() like this:
```
def custom_wait(_dtensor):
_local_t = _dtensor.to_local()
if isinstance(_local_t, AsyncCollectiveTensor):
_local_t.wait()
```
But it still has a BUG:
```
[W817 11:39:12.975673267 ProcessGroup.cpp:266] Warning: At the time of p... | https://github.com/pytorch/pytorch/issues/160833 | open | [
"high priority",
"triage review",
"needs reproduction",
"oncall: distributed"
] | 2025-08-17T03:54:50Z | 2026-01-03T06:31:42Z | null | arminzhu |
pytorch/data | 1,506 | v0.12.0 (or 0.11.1?) release timeline | Hi!
Is there a timeline for the next stable release? | https://github.com/meta-pytorch/data/issues/1506 | open | [] | 2025-08-16T21:39:05Z | 2026-01-02T22:27:59Z | 3 | mirceamironenco |
huggingface/gym-hil | 27 | How to close the gripper in gym-hill-sim? | Hello all.
I'm using macOS to practice with the gym-hil sim tutorial.
I figured out how to move the robot along x, y, z, but it seems impossible to close the gripper.
Could you all please share the correct key?
ChatGPT answered ctrl-key, but it's not working!
Thanks in advance. | https://github.com/huggingface/gym-hil/issues/27 | open | [] | 2025-08-15T13:46:12Z | 2025-08-15T13:57:26Z | null | cory0619 |
huggingface/peft | 2,742 | RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn | Hello, I am fine-tuning the LLaMA-2 7B model on an A100 40 GB GPU. Initially, I was getting a CUDA out-of-memory error. I tried various methods, such as reducing batch size, but none worked. Then I enabled:
model.gradient_checkpointing_enable()
After doing this, the OOM issue was resolved, but now I get the following... | https://github.com/huggingface/peft/issues/2742 | closed | [] | 2025-08-15T06:21:50Z | 2025-09-23T15:04:07Z | 4 | Mishajain1110 |
pytorch/torchtitan | 1,576 | API for custom metric reporting? | It would be nice if it were easier to report custom metrics for particular models more easily, but currently this seems to require changing `train.py` and/or modifying `MetricsProcessor` in some invasive way.
Could we introduce an easier mechanism for reporting additional metrics for specific models? A specific use ca... | https://github.com/pytorch/torchtitan/issues/1576 | open | [] | 2025-08-15T01:30:37Z | 2025-08-16T00:32:18Z | 4 | garrett361 |
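One possible shape for such a mechanism is a callback registry that the metrics processor merges into its own dict each step. The class and method names below are hypothetical illustrations, not torchtitan API:

```python
class ExtensibleMetrics:
    """Hypothetical metrics processor with a pluggable callback registry."""

    def __init__(self):
        self.extra_fns = []

    def register(self, fn):
        # fn: step -> dict of extra metrics for a specific model.
        self.extra_fns.append(fn)

    def collect(self, step):
        # Base metrics plus whatever each registered callback contributes.
        metrics = {"step": step}
        for fn in self.extra_fns:
            metrics.update(fn(step))
        return metrics

proc = ExtensibleMetrics()
proc.register(lambda step: {"custom/score": step * 2})
print(proc.collect(3))  # {'step': 3, 'custom/score': 6}
```

A registry like this would let model code report extra metrics without touching `train.py`.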
pytorch/torchtitan | 1,574 | Will Dinov3 be included as a model in torchtitan? | Newly released models from Meta dropped for Dino, will this be included for torchtitan?
https://github.com/facebookresearch/dinov3 | https://github.com/pytorch/torchtitan/issues/1574 | open | [] | 2025-08-14T21:08:05Z | 2025-08-21T03:23:59Z | 1 | kmccaffr2023 |
pytorch/TensorRT | 3,779 | Announcement: PyTorch org (and TensorRt) will be offered to PyTorch Foundation | Hey folks, heads up that as part of PyTorch [moving to the PyTorch Foundation](https://pytorch.org/blog/PyTorchfoundation/). Meta will be handing ownership of the PyTorch github organization over to the PyTorch Foundation, along with all the repos in it.
**What's the impact?**
Technical ownership of the repos given... | https://github.com/pytorch/TensorRT/issues/3779 | open | [
"question"
] | 2025-08-14T17:15:12Z | 2025-08-14T19:53:22Z | null | ZainRizvi |
pytorch/pytorch | 160,648 | How to Use Pipeline Parallelism in Multi-input Models | ### 🚀 The feature, motivation and pitch
I am developing a multimodal model and would like to use the pipeline feature of torch. However, I found that the samples in the introductory docs are rather simple, and they all have only single-input, single-output scenarios. I would like to know how to use the pipeline funct... | https://github.com/pytorch/pytorch/issues/160648 | open | [
"oncall: distributed",
"module: pipelining"
] | 2025-08-14T15:41:44Z | 2025-08-20T03:05:32Z | null | Bin1024 |
huggingface/trl | 3,896 | How to gather completions before computing rewards in GRPOTrainer | Hi,
I found that the `reward_funcs` passed to GRPOTrainer are used per-device.
That is, if I set `num_generation=16`, `per_device_train_batch_size=4`, my customized reward function can only receive `4` completions.
However, my customized reward function calculates rewards depending on a global view over all `16` comple... | https://github.com/huggingface/trl/issues/3896 | closed | [
"❓ question",
"🏋 Reward",
"🏋 GRPO"
] | 2025-08-14T14:41:42Z | 2025-09-03T14:09:16Z | null | rubickkcibur |
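Once the completions are gathered across devices (e.g. with a collective such as gather_object, assumed here rather than shown), a reward needing a global view can be computed on the full list and then sliced back to the local shard. A toy sketch with a length-based reward and an assumed rank-major shard layout:

```python
def global_then_local(all_completions, device_rank, per_device):
    # Toy global reward: each completion's length relative to the group mean,
    # which requires seeing every completion in the group at once.
    mean_len = sum(map(len, all_completions)) / len(all_completions)
    rewards = [len(c) - mean_len for c in all_completions]   # global view
    # Slice back to this device's shard (assumed rank-major ordering).
    start = device_rank * per_device
    return rewards[start:start + per_device]

gathered = ["a", "bb", "ccc", "dddd"]   # pretend completions from 2 devices
local = global_then_local(gathered, device_rank=1, per_device=2)
print(local)  # [0.5, 1.5]
```

The key point is that the reward function runs on the gathered list, while only the local slice is returned to the per-device training loop.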
huggingface/peft | 2,738 | Which base model weights are getting frozen after applying LoRA? | I have finetuned LLaVA-v1.5-7B with peft LoRA, and I have found out that after adding the LoRA adapters, all the weights are getting frozen except for the newly added LoRA layers and mm_projector weights (non-LoRA). I will be glad to know the freezing logic implemented by peft since not all the base model weights are g... | https://github.com/huggingface/peft/issues/2738 | closed | [] | 2025-08-13T17:35:10Z | 2025-08-14T04:20:42Z | 1 | srbh-dl |
pytorch/tutorials | 3,518 | [BUG] - Why is C++ LibTorch performance slower than PyTorch? (full code shown) | ### Add Link
none ...
### Describe the bug
I use the same .pt model and test on the same computer, but LibTorch is 30~40% slower than PyTorch.
In Python, 30 inference runs average only 18 ms, but C++ LibTorch needs 24 ms on average.
I am using CUDA 12.8, cuDNN 9.5.1, and libtorch 2.8.
My code is below.
`
#include <chrono>
... | https://github.com/pytorch/tutorials/issues/3518 | closed | [
"bug",
"question"
] | 2025-08-13T13:25:44Z | 2025-09-03T21:32:30Z | null | Sukidesyo |
pytorch/xla | 9,558 | Performance of Torchax | ## ❓ Questions and Help
Hello Community,
Will using torchax be slower than native PyTorch? Is there a tensor transformation layer that makes it slower? | https://github.com/pytorch/xla/issues/9558 | open | [
"question",
"torchxla2"
] | 2025-08-13T10:05:08Z | 2025-08-15T21:29:25Z | null | yuanfz98 |
huggingface/diffusers | 12,136 | How to use Diffusers to Convert Safetensors SDXL 1.0 to Onnx? | Hello,
I'm trying to convert a safetensors checkpoint for SDXL to ONNX format.
I've tried Optimum already, but it fails every time.
Please help. | https://github.com/huggingface/diffusers/issues/12136 | closed | [] | 2025-08-13T06:33:22Z | 2025-10-31T03:13:28Z | null | CypherpunkSamurai |
huggingface/lerobot | 1,712 | Why hasn't the pi0 model learned the ability to place something in the specified positions? Is it because the number of datasets is insufficient? | I am creating a tic-tac-toe board and using yellow and green sandbags as pieces. I have collected a dataset of "the entire process of a robotic arm picking up yellow sandbags and placing them in nine different positions on the board". This dataset is used to train the pi0 model to achieve autonomous playing. The collec... | https://github.com/huggingface/lerobot/issues/1712 | open | [
"question",
"policies"
] | 2025-08-12T10:15:26Z | 2025-12-22T08:10:47Z | null | Alex-Wlog |
pytorch/pytorch | 160,405 | [Expandable block] how to get the best-fit free block | To get free expandable block, the algorithm selects a locally optimal solution instead of the globally best-fit block, since the expandable sizes are not sorted. The best-fit block is the block that meets the requirements and has the smallest expandable size. The original code is
```
auto expandable_size = []... | https://github.com/pytorch/pytorch/issues/160405 | open | [
"triaged",
"module: CUDACachingAllocator"
] | 2025-08-12T08:38:38Z | 2025-08-14T05:24:01Z | null | HU-qingqing |
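The globally best-fit selection the issue describes can be expressed directly: filter the candidates that satisfy the request, then take the minimum by expandable size. A language-agnostic sketch in Python — the field names are illustrative, not the allocator's real structs:

```python
def best_fit(blocks, needed):
    # Globally best fit: among all blocks large enough for the request,
    # pick the one with the smallest expandable size (None if none fit).
    candidates = [b for b in blocks if b["expandable_size"] >= needed]
    return min(candidates, key=lambda b: b["expandable_size"], default=None)

free = [
    {"id": "a", "expandable_size": 512},
    {"id": "b", "expandable_size": 128},
    {"id": "c", "expandable_size": 256},
]
print(best_fit(free, 100)["id"], best_fit(free, 1024))  # b None
```

A first-fit scan would have returned block "a" here; best-fit keeps the larger blocks available for larger future requests.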
pytorch/torchtitan | 1,554 | The ordering of fsdp, ac, tp, pp, compile, etc. | Based on the code, the ordering of parallelization and optimization appears to be: PP → TP → AC → Compile → FSDP/DDP.
Is it possible to modify this ordering? If not, could you explain the rationale for this specific sequence? | https://github.com/pytorch/torchtitan/issues/1554 | open | [
"documentation",
"question"
] | 2025-08-12T04:35:02Z | 2025-12-12T10:56:00Z | null | aoyulong |
pytorch/torchtitan | 1,553 | Inquiry about torchtitan v0.1.0 compatibility with CUDA 12.3 | Hello,
I would like to inquire about the compatibility of torchtitan with CUDA 12.3.
I am trying to use torchtitan v0.1.0, but I am facing some challenges due to my environment constraints. My computing resources are equipped with CUDA 12.3, and I am unable to upgrade the CUDA version at this moment.
When I attempte... | https://github.com/pytorch/torchtitan/issues/1553 | closed | [
"question"
] | 2025-08-12T04:17:26Z | 2025-08-15T14:34:55Z | null | Sun2018421 |
pytorch/torchtitan | 1,552 | Any example for vpp scheduler for Deepseek/llama | I've been learning VPP 1F1B recently and want to figure out the implementation differences between torchtitan and Megatron, but I don't know how to build a VPP-1F1B schedule, so I cannot figure out how it works in titan. Is there any example to help me build a VPP-1F1B schedule? | https://github.com/pytorch/torchtitan/issues/1552 | closed | [
"question"
] | 2025-08-12T01:40:12Z | 2025-08-28T22:33:58Z | null | YingLaiLin |
huggingface/transformers | 40,089 | Could not import module 'AutoTokenizer'. Are this object's requirements defined correctly? | ### System Info
- torch @ https://download.pytorch.org/whl/cu124/torch-2.6.0%2Bcu124-cp310-cp310-linux_x86_64.whl
- torchaudio @ https://download.pytorch.org/whl/cu124/torchaudio-2.6.0%2Bcu124-cp310-cp310-linux_x86_64.whl
- torchvision @ https://download.pytorch.org/whl/cu124/torchvision-0.21.0%2Bcu124-cp310-cp310-lin... | https://github.com/huggingface/transformers/issues/40089 | closed | [
"bug"
] | 2025-08-11T21:44:05Z | 2025-09-08T03:09:11Z | 3 | octavianBordeanu |
huggingface/candle | 3,052 | Candle vs. PyTorch performance | I'm running https://github.com/huggingface/candle/tree/main/candle-examples/examples/llava vs. https://github.com/fpgaminer/joycaption/blob/main/scripts/batch-caption.py on a Mac m1.
Seeing significant performance difference, Candle seems much slower.
I enabled accelerate and metal features.
Would love some pointers ... | https://github.com/huggingface/candle/issues/3052 | open | [] | 2025-08-11T16:14:17Z | 2025-11-14T20:05:16Z | 8 | ohaddahan |
huggingface/diffusers | 12,124 | For qwen-image training file, Maybe "shuffle" of dataloader should be "False" when custom_instance_prompts is not None and cache_latents is False? | ### Describe the bug
I think the dataloader's "shuffle" should be "False" when custom_instance_prompts is not None and cache_latents is False. Otherwise, it will lead to errors in the correspondence between prompt embeddings and images during training, and the prompt will not be followed when performing the T2I task.
### R... | https://github.com/huggingface/diffusers/issues/12124 | open | [
"bug"
] | 2025-08-11T13:15:21Z | 2025-08-30T01:57:02Z | 2 | yinguoweiOvO |
huggingface/diffusers | 12,120 | How to train a lora with distilled flux model, such as flux-schnell??? | **Is your feature request related to a problem? Please describe.**
I can use flux as the base model to train a LoRA, but it needs 20 steps, which costs a lot of time. I want to train a LoRA based on a distilled model so that fewer steps make a better image; for example, a LoRA trained on top of the flux-schnell model would only nee...
huggingface/diffusers | 12,108 | Qwen Image and Chroma pipeline breaks using schedulers that enable flow matching by parameter. | ### Describe the bug
Several Schedulers support flow matching by using the prediction_type='flow_prediction" e.g.
```
pipe.scheduler = UniPCMultistepScheduler(prediction_type="flow_prediction", flow_shift=3.16, timestep_spacing='trailing', use_flow_sigmas=True)
```
However Chroma and Qwen Image will not work with th... | https://github.com/huggingface/diffusers/issues/12108 | open | [
"bug"
] | 2025-08-09T21:34:28Z | 2025-08-09T21:39:30Z | 0 | Vargol |
huggingface/transformers | 40,056 | Question: How to write a custom tokenizer from scratch | In this guide you introduced how to write a custom model and custom model configuration: [here](https://huggingface.co/docs/transformers/main/en/custom_models). In addition, I want to create a custom tokenizer from scratch. Why?
I have a problem of multilevel transcription: the model takes an input utterance and output... | https://github.com/huggingface/transformers/issues/40056 | closed | [] | 2025-08-09T16:39:19Z | 2025-09-24T08:03:02Z | null | obadx |
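Independent of the transformers plumbing, the core of a from-scratch tokenizer is just an encode/decode pair over a vocabulary. A minimal character-level sketch of that core (not the PreTrainedTokenizer API, which additionally requires special-token and serialization hooks):

```python
class CharTokenizer:
    """Toy character-level tokenizer: vocabulary built from a seed text."""

    def __init__(self, text):
        vocab = sorted(set(text))
        self.stoi = {ch: i for i, ch in enumerate(vocab)}  # string -> id
        self.itos = {i: ch for ch, i in self.stoi.items()}  # id -> string

    def encode(self, s):
        return [self.stoi[ch] for ch in s]

    def decode(self, ids):
        return "".join(self.itos[i] for i in ids)

tok = CharTokenizer("abc")
print(tok.decode(tok.encode("cab")))  # round-trips to "cab"
```

A multilevel-transcription tokenizer would replace the character table with its own unit inventory, but the encode/decode contract stays the same.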
huggingface/diffusers | 12,107 | accelerator.init_trackers error when try with a custom object such as list | ### Describe the bug
I set multiple prompts with nargs for argument "--validation_prompt " in "train_dreambooth.py":
` parser.add_argument(
"--validation_prompt",
type=str,
default=["A photo of sks dog in a bucket", "A sks cat wearing a coat"],
nargs="*",
help="A prompt that... | https://github.com/huggingface/diffusers/issues/12107 | open | [
"bug"
] | 2025-08-09T10:04:06Z | 2025-08-09T10:04:06Z | 0 | micklexqg |
huggingface/diffusers | 12,104 | IndexError: index 0 is out of bounds for dimension 0 with size 0 | ### Describe the bug
When I test the mit-han-lab/nunchaku-flux.1-kontext-dev model, it runs normally in a non-concurrent scenario, but throws an error when I try to run it with concurrent requests.
My GPU is a single RTX 4090D.
How can I enable multi-concurrency support on a single GPU?
Thank you in advance for yo... | https://github.com/huggingface/diffusers/issues/12104 | closed | [
"bug"
] | 2025-08-08T09:20:52Z | 2025-08-17T22:22:37Z | 1 | liushiton |
pytorch/TensorRT | 3,766 | ❓ [Question] C++ Windows runtime error | ## ❓ Question
How can I fix this error?
```
Unknown type name '__torch__.torch.classes.tensorrt.Engine':
File "code/__torch__/torch_tensorrt/dynamo/runtime/_TorchTensorRTModule.py", line 6
training : bool
_is_full_backward_hook : Optional[bool]
engine : __torch__.torch.classes.tensorrt.Engine
~~~~~~~... | https://github.com/pytorch/TensorRT/issues/3766 | open | [
"question"
] | 2025-08-08T07:56:17Z | 2025-08-15T14:32:30Z | null | zsef123 |
pytorch/ao | 2,713 | [fp8 blockwise training] try using torch._scaled_mm instead of Triton kernels for fp8 gemms | We have an initial prototype of DeepSeekV3 style fp8 blockwise training done [here](https://github.com/pytorch/ao/blob/main/torchao/prototype/blockwise_fp8_training/linear.py). Numerics are accurate but performance has not been optimized yet.
Initial tests with a local torchtitan integration on my H100 devgpu show the... | https://github.com/pytorch/ao/issues/2713 | open | [
"good first issue",
"float8"
] | 2025-08-07T20:15:10Z | 2025-08-07T20:26:11Z | 0 | danielvegamyhre |
huggingface/datasets | 7,729 | OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory | > Hi, is there any solution for that error? I tried to install this one:
pip install torch==1.12.1+cpu torchaudio==0.12.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
This works fine, but how do I install a PyTorch version that supports GPU? | https://github.com/huggingface/datasets/issues/7729 | open | [] | 2025-08-07T14:07:23Z | 2025-09-24T02:17:15Z | 1 | SaleemMalikAI |
huggingface/transformers | 39,992 | [gpt-oss] Transform checkpoint from safetensors to state dict | Yesterday I was working on gpt-oss. However, loading the weights gives me trouble.
For models like Qwen, I did things like this:
1. Create model on meta device
2. FSDP2 shard it, so it can fit in memory
3. On each GPU, it read weights from safetensors in a generator style, to save memory.
4. Chunk the weights and cop... | https://github.com/huggingface/transformers/issues/39992 | closed | [] | 2025-08-07T13:24:06Z | 2025-09-15T08:02:55Z | 1 | fingertap |
huggingface/diffusers | 12,094 | [Wan2.2] pipeline_wan misses the 'shift' parameter used by Wan2.2-A14B-diffusers. | **Firstly, I found that the quality of output using diffusers is poor**
Later, I found that the pipeline_wan in diffusers[0.34.0] did not support two-stage processing. I noticed that the community had already updated it, so I installed diffusers[0.35.0-dev] from source and it worked.
Then I found that the scheduler... | https://github.com/huggingface/diffusers/issues/12094 | closed | [] | 2025-08-07T11:37:36Z | 2025-08-10T08:43:27Z | 7 | yvmilir |
pytorch/torchtitan | 1,543 | Minimum number of GPUs needed to pretrain llama4_17bx16e - 8 ? | Going by the config files, it would be 8 H100-class GPUs. Is 8 a reasonable number? | https://github.com/pytorch/torchtitan/issues/1543 | closed | [] | 2025-08-06T23:54:05Z | 2025-08-07T20:32:35Z | 3 | githubsgi |
pytorch/tutorials | 3,512 | Redirect for prototype/ -> unstable/ | ### 🚀 Describe the improvement or the new tutorial
When I search ["flight recorder pytorch" on Google](https://www.google.com/search?q=pytorch+flight+recorder&sca_esv=56a8724cb68766c6&ei=_7yTaKLqN4ra5NoP38nhqAg&oq=pytorch+flight+recorder&gs_lp=Egxnd3Mtd2l6LXNlcnAiF3B5dG9yY2ggZmxpZ2h0IHJlY29yZGVyKgIIADIIEAAYgAQYsAMyCR... | https://github.com/pytorch/tutorials/issues/3512 | closed | [] | 2025-08-06T20:40:07Z | 2025-08-07T18:07:35Z | 2 | H-Huang |
huggingface/lerobot | 1,687 | When using AMP to train a model, why are the saved model weights still in fp32? | <img width="1668" height="95" alt="Image" src="https://github.com/user-attachments/assets/406a1879-f2f2-43c6-8341-8733873ee911" /> | https://github.com/huggingface/lerobot/issues/1687 | open | [
"question",
"policies"
] | 2025-08-06T12:42:40Z | 2025-08-12T08:52:00Z | null | Hukongtao |
huggingface/diffusers | 12,084 | Will `cosmos-transfer1` be supported in diffusers in the future? |
Hi @a-r-r-o-w and @yiyixuxu :)
First of all, thank you for recently enabling cosmos-predict1 models (text2world and video2world) in the diffusers library — it's super exciting to see them integrated!
I was wondering if there are any plans to also support [cosmos-transfer1](https://github.com/nvidia-cosmos/cosmos-tr... | https://github.com/huggingface/diffusers/issues/12084 | open | [] | 2025-08-06T11:22:28Z | 2025-08-19T12:11:33Z | 3 | rebel-shshin |
huggingface/lerobot | 1,683 | SmolVLMWithExpertModel | Excuse me, I would like to know about each module in this class, and how to define its inputs. | https://github.com/huggingface/lerobot/issues/1683 | open | [
"question",
"policies"
] | 2025-08-06T10:30:21Z | 2025-08-12T08:52:21Z | null | xjushengjie |
huggingface/lerobot | 1,674 | How to train smolvla for multi-task | I have trained smolvla for aloha_sim_transfer_cube and aloha_sim_insertion, and smolvla performs well in each single task. Now I'd like to train smolvla for multi-task: one model that can complete the two tasks above. What should I do now? | https://github.com/huggingface/lerobot/issues/1674 | closed | [] | 2025-08-06T02:40:01Z | 2025-10-15T02:52:29Z | null | w673 |
huggingface/diffusers | 12,079 | API Suggestion: Expose Methods to Convert to Sample Prediction in Schedulers | **What API design would you like to have changed or added to the library? Why?**
My proposal is for schedulers to expose `convert_to_sample_prediction` and `convert_to_prediction_type` methods, which would do the following:
1. `convert_to_sample_prediction`: Converts from a given `prediction_type` to `sample_predicti... | https://github.com/huggingface/diffusers/issues/12079 | open | [] | 2025-08-06T02:24:46Z | 2025-08-06T02:24:46Z | 0 | dg845 |
huggingface/candle | 3,047 | Can the safetensor files from OpenAI's new gpt-oss-20b work with any existing setup? | Is the new gpt-oss-20b a totally different architecture or can I use an existing candle setup, swap out the files and start playing around with gpt-oss-20b?
| https://github.com/huggingface/candle/issues/3047 | open | [] | 2025-08-06T01:59:59Z | 2025-08-06T02:01:52Z | 1 | zcourts |
huggingface/diffusers | 12,078 | Problem with provided example validation input in the Flux Control finetuning example | ### Describe the bug
The help page for the Flux control finetuning example, https://github.com/huggingface/diffusers/blob/main/examples/flux-control/README.md, provides a sample validation input, a pose condition image
[<img src="https://huggingface.co/api/resolve-cache/models/Adapter/t2iadapter/3c291e0547a1b17bed9342... | https://github.com/huggingface/diffusers/issues/12078 | open | [
"bug"
] | 2025-08-05T22:29:35Z | 2025-08-07T08:47:45Z | 1 | kzhang2 |
huggingface/lerobot | 1,672 | How to resume training? | My old setting of training:
```
# batch_size: 64
steps: 20000
# output_dir: outputs/train
```
in outputs/train/ there are 020000 folder and last folder,eash has pretrained_model and training_state
When I want to resume training, I read configs/train.py
so I set
```
resume: true
output_dir: outputs/train/
# or output... | https://github.com/huggingface/lerobot/issues/1672 | closed | [] | 2025-08-05T14:57:32Z | 2025-08-06T03:04:28Z | null | milong26 |
huggingface/transformers | 39,921 | [Gemma3N] Not able to add new special tokens to model/tokenizer due to projection error | ### System Info
```
- transformers==4.54.1
- Platform: Linux-5.15.0-1084-aws-x86_64-with-glibc2.31
- Python version: 3.13
- TRL version: 0.19.1
- Huggingface_hub version: 0.33.4
- Safetensors version: 0.5.3
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch versio... | https://github.com/huggingface/transformers/issues/39921 | open | [
"Usage",
"Good Second Issue",
"bug"
] | 2025-08-05T14:43:37Z | 2025-08-19T19:37:39Z | 14 | debasisdwivedy |
huggingface/transformers | 39,910 | Question: Llama4 weight reshaping | Hi all
I am trying to extract the original Llama4 MoE weights, specifically:
- `experts.w1` (aka `experts.moe_w_in_eD_F`)
- `experts.w3` (aka `experts.moe_w_swiglu_eD_F`)
I need both of these in the shape `[E, D, N]`, where:
- E is the number of experts (16 for Scout)
- D is the embedding dimension (5120)
- N is th... | https://github.com/huggingface/transformers/issues/39910 | closed | [] | 2025-08-05T10:19:25Z | 2025-08-13T09:35:52Z | 0 | gskorokhod |
huggingface/datasets | 7,724 | Cannot step into load_dataset.py? | I set a breakpoint in "load_dataset.py" and tried to debug my data loading code, but it does not stop at any breakpoint. Can "load_dataset.py" not be stepped into?
<!-- Failed to upload "截图 2025-08-05 17-25-18.png" --> | https://github.com/huggingface/datasets/issues/7724 | open | [] | 2025-08-05T09:28:51Z | 2025-08-05T09:28:51Z | 0 | micklexqg |
huggingface/lerobot | 1,670 | How does leroBot address the issue of training heterogeneous datasets? | Specifically, suppose I have a dataset A and dataset B. In dataset A, both the state and action are represented as (x, y, z, gripper), where x, y, and z denote the distances moved along the x, y, and z axes, respectively, and gripper represents the on/off state of the gripper. In dataset B, both the state and action ar... | https://github.com/huggingface/lerobot/issues/1670 | open | [
"question",
"processor"
] | 2025-08-05T08:20:08Z | 2025-08-12T09:01:57Z | null | mahao18cm |
huggingface/lerobot | 1,667 | How many episodes to get a good result with SmolVLA | ### System Info
```Shell
Hello, I'm trying to do a simple task, like a dual-hand pick of a banana into a basket, using SmolVLA. May I know how many episodes to train on to get a good result?
Many thanks
Julien
```
### Reproduction
I've used 100 episodes for training; it looks like the arm cannot pick the banana accurately, sometim... | https://github.com/huggingface/lerobot/issues/1667 | closed | [
"question",
"policies"
] | 2025-08-05T05:12:12Z | 2025-10-17T11:27:14Z | null | chejulien |
pytorch/torchtitan | 1,527 | Any model fp8 training | ### Bug description
Do you have a further plan to extend training beyond Llama and DeepSeek to any model from the huggingface transformers library? I've seen an issue where a user asked about Qwen, but in recent days other companies have announced their excellent MoE models with weights and configs on huggingface, and...
"question"
] | 2025-08-05T00:21:01Z | 2025-08-05T22:46:15Z | null | pizzaball |
pytorch/torchtitan | 1,525 | Transformer is running with float32 instead of bfloat16! | ### Bug description
Modified the Llama3 model.py to print dtype as follows and ran just 1 rank. The
```
def forward(
self,
tokens: torch.Tensor,
eos_id: int | None = None,
input_batch: torch.Tensor | None = None,
):
"""
Perform a forward pass through the Trans... | https://github.com/pytorch/torchtitan/issues/1525 | open | [
"question"
] | 2025-08-04T22:37:20Z | 2025-08-14T21:25:04Z | null | githubsgi |
huggingface/lerobot | 1,666 | Please add multi gpu training support | MultiGPU training currently does not work with lerobot as mentioned here https://github.com/huggingface/lerobot/issues/1377
Please add this support. | https://github.com/huggingface/lerobot/issues/1666 | closed | [
"enhancement",
"question",
"policies"
] | 2025-08-04T18:06:40Z | 2025-10-17T09:53:59Z | null | nahidalam |
huggingface/lerobot | 1,663 | No way to train on subset of features | Currently, when loading a policy from a config.json, the input_features seem to be ignored and re-generated from the dataset provided. However, it may not always be desirable to train on all features, perhaps if I have multiple camera views but I only want to train on one.
I would prefer that config.json features are ... | https://github.com/huggingface/lerobot/issues/1663 | open | [
"question",
"policies",
"processor"
] | 2025-08-04T15:19:35Z | 2025-08-12T09:03:47Z | null | atyshka |
pytorch/tutorials | 3,507 | Feedback about Optimizing Model Parameters Page | There is the following issue on this page: https://docs.pytorch.org/tutorials/beginner/basics/optimization_tutorial.html
Within the section [Full implementation](https://docs.pytorch.org/tutorials/beginner/basics/optimization_tutorial.html#full-implementation), the loop does not contain the `zero_grad` function on top... | https://github.com/pytorch/tutorials/issues/3507 | open | [] | 2025-08-04T14:50:13Z | 2025-08-04T14:50:13Z | 0 | madhaven |
huggingface/diffusers | 12,060 | Is there any DiT block defined in the huggingface/diffusers OR huggingface/transformers project? | **Is your feature request related to a problem? Please describe.**
I want to run some experiments on a DiT-based flow-matching model and need an implementation of the common DiT block, but I could not find it in either huggingface/diffusers or huggingface/transformers. Is there any implementation about it with just some ... | https://github.com/huggingface/diffusers/issues/12060 | open | [] | 2025-08-04T09:40:43Z | 2025-08-04T10:19:00Z | 2 | JohnHerry |
pytorch/xla | 9,537 | What are some large model use cases for torch-xla? | ## ❓ Questions and Help
I’ve observed that torch-xla has been actively developed for GPU support recently. Are there any benchmark comparisons between torch-xla and standard PyTorch, particularly for large-scale model training? Additionally, regarding frameworks such as Megatron-LM, is there any plan for official suppo... | https://github.com/pytorch/xla/issues/9537 | closed | [
"question",
"xla:gpu"
] | 2025-08-04T09:04:32Z | 2025-08-06T08:24:30Z | null | south-ocean |
huggingface/diffusers | 12,052 | Wan 2.2 with LightX2V offloading tries to multiply tensors from different devices and fails | ### Describe the bug
After @sayakpaul great work in https://github.com/huggingface/diffusers/pull/12040 LightX2V now works. However what doesn't work is adding both a lora and offloading to the transformer_2. I can get away with either (i.e. offload both transformers but add a lora only to transformer and NOT to trans... | https://github.com/huggingface/diffusers/issues/12052 | closed | [
"bug"
] | 2025-08-03T12:43:13Z | 2025-08-11T15:53:41Z | 4 | luke14free |
pytorch/tutorials | 3,506 | Feedback about Running Tutorials in Google Colab | There is the following issue on this page: https://docs.pytorch.org/tutorials/beginner/colab.html
The content on this page clearly shows how to upload or download your dataset to your Google Drive or your desktop | https://github.com/pytorch/tutorials/issues/3506 | open | [] | 2025-08-03T03:51:11Z | 2025-12-09T19:11:27Z | 1 | KevinAllen66 |
huggingface/peft | 2,699 | UserWarning: Found missing adapter keys while loading the checkpoint | I have been fine-tuning different LLM models (mainly Llama family) since last year and use peft with lora config all the time with no issues.
Just recently I was fine-tuning the llama 70B on multiple GPU using accelerate then saving the adapter once training is done. (This was always my setup since last year)
Howeve... | https://github.com/huggingface/peft/issues/2699 | closed | [] | 2025-08-02T20:49:31Z | 2025-11-09T15:03:46Z | 41 | manitadayon |
pytorch/tutorials | 3,505 | Why is my 2:4 sparse model slower than dense in the decode stage of LLaMA2‑7B? | ## Description
Hi
<img width="1000" height="800" alt="Image" src="https://github.com/user-attachments/assets/0e08ab66-423a-4ef0-a876-8e6e735affad" />
As shown in the figure, during the decoding phase, the 2:4 sparsity model is about 12% slower than the dense model. My questions are as follows:
- Is the decode phase... | https://github.com/pytorch/tutorials/issues/3505 | closed | [
"question"
] | 2025-08-02T03:44:06Z | 2025-08-09T03:14:49Z | null | wang-qitong |
huggingface/diffusers | 12,044 | AttributeError: 'bool' object has no attribute '__module__'. Did you mean: '__mod__'? | I am training the Flux.1-dev model and get this error. I found a solution suggesting downgrading diffusers to version 0.21.0, but then it would conflict with some other libraries. Is there any solution for this?
```
Traceback (most recent call last):
File "/home/quyetnv/t2i/ai-toolkit/run.py", line 120, in <module>
main()
Fi... | https://github.com/huggingface/diffusers/issues/12044 | closed | [] | 2025-08-02T01:37:30Z | 2025-08-21T01:27:19Z | 3 | qngv |
pytorch/torchtitan | 1,515 | MiCS (Mixture of Communicators for Scaling) | Wondering if MiCS (Mixture of Communicators for Scaling) has been considered as a feature in TorchTitan. Would appreciate thoughts on the topic. | https://github.com/pytorch/torchtitan/issues/1515 | closed | [
"question"
] | 2025-08-01T22:15:13Z | 2025-08-05T19:55:36Z | null | githubsgi |
huggingface/optimum | 2,333 | Support for exporting t5gemma-2b-2b-prefixlm-it to onnx | ### Feature request
I’ve tried to export t5gemma-2b-2b-prefixlm-it to onnx using optimum. But it outputs: ValueError: Trying to export a t5gemma model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum... | https://github.com/huggingface/optimum/issues/2333 | closed | [
"Stale"
] | 2025-08-01T16:39:52Z | 2026-01-03T02:51:13Z | 2 | botan-r |
huggingface/transformers | 39,842 | Expected behavior of `compute_result` is hard to predict and inconsistent | In the Trainer there exists a parameter `compute_result` given to `compute_metrics` when `batch_eval_metrics` is set to True.
https://github.com/huggingface/transformers/blob/1e0665a191f73f6b002209c3dfcda478baac6bac/src/transformers/trainer.py#L370-L375
I think there are several problems with `compute_result`:
1. User c... | https://github.com/huggingface/transformers/issues/39842 | closed | [] | 2025-08-01T11:43:28Z | 2025-10-04T08:02:41Z | 3 | MilkClouds |
huggingface/transformers | 39,841 | MistralCommonTokenizer does not match PreTrainedTokenizer | ### System Info
on docker
os: ubuntu 24.04
transformers: 4.55.0.dev0
mistral_common: 1.8.3
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own ... | https://github.com/huggingface/transformers/issues/39841 | closed | [
"bug"
] | 2025-08-01T09:16:24Z | 2025-11-23T08:03:33Z | 3 | Fhrozen |
huggingface/transformers | 39,839 | pack_image_features RuntimeError when vision_feature_select_strategy="full" | ### System Info
transformers 4.54.0
### Who can help?
@zucchini-nlp
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproductio... | https://github.com/huggingface/transformers/issues/39839 | closed | [
"bug"
] | 2025-08-01T07:55:40Z | 2025-09-08T08:02:56Z | 2 | llnnnnnn |
huggingface/gsplat.js | 117 | How to generate a Mesh? | I need a scene where Gaussian Splatting and a Mesh are mixed, and I don't know whether GSPLAT generates a Mesh or not. | https://github.com/huggingface/gsplat.js/issues/117 | open | [] | 2025-08-01T03:29:22Z | 2025-08-01T03:29:22Z | null | ZXStudio |
pytorch/ao | 2,649 | Deprecation for Float8DynamicActivationFloat8WeightConfig (version 1) and Float8WeightOnlyConfig (version 1) and the models | This issue is tracking the deprecation of the (1) configs (2) model checkpoints quantized with these configs.
What is deprecated:
1. We added version 2 config in https://github.com/pytorch/ao/pull/2463, and switched the default version to 2 in https://github.com/pytorch/ao/pull/2650, the version 1 config is now deprec... | https://github.com/pytorch/ao/issues/2649 | open | [
"tracker"
] | 2025-07-31T22:45:07Z | 2025-10-02T20:48:54Z | 0 | jerryzh168 |