| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/candle | 3,096 | [Question] Minimal documentation/example on including weights in compiled executable | Just what the title says: is there a minimal code example of including weights in the compiled executable using include_bytes? I'm nervous to implement this without understanding best practices and end up with a suboptimal solution. | https://github.com/huggingface/candle/issues/3096 | closed | [] | 2025-09-24T02:47:28Z | 2025-10-07T04:49:26Z | 1 | bitanath |
huggingface/optimum-executorch | 149 | Add documentation for how to run each type of exported model on ExecuTorch | Blocked on runner / multimodal runner work in ExecuTorch | https://github.com/huggingface/optimum-executorch/issues/149 | open | [] | 2025-09-23T18:53:55Z | 2025-09-23T18:54:00Z | null | jackzhxng |
huggingface/safetensors | 653 | `get_slice` is slow because it uses `tensors()` method instead of `info()` | ### Feature request
Replace
```rust
self.metadata.tensors().get(name)
```
with
```rust
self.metadata.info(name)
```
in `get_slice` method
### Motivation
I noticed that the `get_slice` method of `Open` [does](https://github.com/huggingface/safetensors/blob/0816a1ae1d6b731cefd67f061d80d1cadd0dd7bb/bindings/python/src... | https://github.com/huggingface/safetensors/issues/653 | closed | [] | 2025-09-23T15:09:51Z | 2025-09-28T16:42:45Z | 1 | PgLoLo |
huggingface/diffusers | 12,375 | What kernels should we integrate in Diffusers? | Now that we have an [integration](https://github.com/huggingface/diffusers/pull/12236) with the `kernels` lib to use Flash Attention 3 (FA3), it'd be nice to gather community interest about which kernels we should try to incorporate in the library through the [`kernels` lib](https://github.com/huggingface/kernels/). FA... | https://github.com/huggingface/diffusers/issues/12375 | open | [
"performance"
] | 2025-09-23T09:03:13Z | 2025-09-30T06:56:39Z | 8 | sayakpaul |
huggingface/peft | 2,798 | Add stricter type checking in LoraConfig for support with HfArgumentParser | ### System Info
System Info
transformers version: 4.57.0.dev0
Platform: Linux-5.14.0-284.73.1.el9_2.x86_64-x86_64-with-glibc2.39
Python version: 3.12.3
Huggingface_hub version: 0.34.4
Safetensors version: 0.5.2
Accelerate version: 1.10.1
Accelerate config: not found
DeepSpeed version: not installed
PyTorch version (ac... | https://github.com/huggingface/peft/issues/2798 | closed | [] | 2025-09-23T05:19:34Z | 2025-09-23T12:37:47Z | 3 | romitjain |
huggingface/lerobot | 1,995 | Questions about SmolVLA design | Hi! I am looking into the details of the SmolVLA implementation, and I have some questions.
I wonder whether the following points are necessary, or beneficial for performance.
1.
https://github.com/huggingface/lerobot/blob/f7283193ea9ae932423e3a1e27524a27fa5c0fe5/src/lerobot/policies/smolvla/smolvlm_with_expert.py#L354C63-L354... | https://github.com/huggingface/lerobot/issues/1995 | open | [
"question",
"policies"
] | 2025-09-22T11:53:01Z | 2025-10-17T01:58:12Z | null | gliese581gg |
huggingface/lerobot | 1,994 | How to improve success rate and generalization | Hi, I have one question regarding the success rate: if I ensure the object appears in the frame of the wrist camera at the beginning of dataset collection/inference, will this lead to a higher success rate for pick-and-place tasks?
My initial attempt was object appears in the side view camera but does not appear in the wrist... | https://github.com/huggingface/lerobot/issues/1994 | closed | [
"question",
"policies"
] | 2025-09-22T09:55:53Z | 2025-09-23T09:26:16Z | null | Liu9999ai |
huggingface/smol-course | 248 | [QUESTION] About applying chat template for base model via `clone_chat_template` from trl | In the course [Supervised Fine-Tuning](https://huggingface.co/learn/smol-course/unit1/3), author uses base model `HuggingFaceTB/SmolLM3-3B-Base` but I choose `HuggingFaceTB/SmolLM2-135M` because it is lighter. However, I found that the base model `SmolLM2-135M` does not have its own chat template but it already had spe... | https://github.com/huggingface/smol-course/issues/248 | open | [
"question"
] | 2025-09-22T03:03:56Z | 2025-09-22T19:13:17Z | null | binhere |
huggingface/transformers.js | 1,419 | Why is `token-classification` with T5 not available? (`T5ForTokenClassification`) | ### Question
In Python `transformers` I can do:
```python
model = AutoModelForTokenClassification.from_pretrained("google-t5/t5-base")
```
and use it with `Trainer` to train it (quite successfully).
Or
```python
classifier = pipeline("token-classification", model="google-t5/t5-base")
```
and use it for token classifica... | https://github.com/huggingface/transformers.js/issues/1419 | open | [
"question"
] | 2025-09-21T23:30:22Z | 2025-09-24T21:42:56Z | null | debevv |
huggingface/transformers.js | 1,418 | EmbeddingGemma usage | ### Question
I'm new to transformers.js
I want to use embeddinggemma in my web app and I've looked at the example on its usage at this link:
https://huggingface.co/blog/embeddinggemma#transformersjs
At the same time I've seen a different code, using pipeline, regarding embeddings:
https://huggingface.co/docs/tran... | https://github.com/huggingface/transformers.js/issues/1418 | open | [
"question",
"v4"
] | 2025-09-21T10:26:22Z | 2025-11-08T15:33:16Z | null | MithrilMan |
huggingface/diffusers | 12,359 | Chroma pipeline documentation bug regarding the `guidance_scale` parameter | ### Describe the bug
From my understanding, Chroma is a retrained and dedistilled version of the Flux architecture, so it uses true CFG, unlike Flux. I can indeed confirm that this is true by tracing through the source code.
However, currently the documentation for the `guidance_scale` parameter in the `ChromaPipelin... | https://github.com/huggingface/diffusers/issues/12359 | closed | [
"bug"
] | 2025-09-21T08:34:15Z | 2025-09-22T20:04:15Z | 1 | mingyi456 |
huggingface/trl | 4,110 | How does `trl` know what part of dataset is prompt and completion in the following situation? | ### Reproduction
```python
import torch
import trl as r
import peft as p
import datasets as d
import accelerate as a
import transformers as t
allowed_entities = ['AGE', 'EYECOLOR', 'GENDER', 'HEIGHT', 'WEIGHT', 'SEX']
entity_mapping = {
"ACCOUNTNAME": "account_name",
"ACCOUNTNUMBER": "account_number",
"AG... | https://github.com/huggingface/trl/issues/4110 | closed | [
"🐛 bug",
"📚 documentation"
] | 2025-09-19T17:42:26Z | 2025-09-19T20:02:16Z | null | bminesh-shah |
huggingface/transformers | 41,005 | Do we have an official Qwen3VL model published by Alibaba? | ### Model description
Reference - https://huggingface.co/docs/transformers/main/en/model_doc/qwen3_vl#transformers.Qwen3VLForConditionalGeneration
If not, when can we expect it? Any guess? | https://github.com/huggingface/transformers/issues/41005 | closed | [
"New model"
] | 2025-09-19T13:59:34Z | 2025-09-20T10:00:04Z | 1 | Dineshkumar-Anandan-ZS0367 |
huggingface/transformers | 40,993 | HfArgumentParser cannot parse TRL Config | ### System Info
transformers==4.56.1
trl==0.17.0
I used to apply code below
```python
from transformers import HfArgumentParser
from trl import (
ScriptArguments, ModelConfig, SFTConfig
)
parser = HfArgumentParser((ScriptArguments, SFTConfig, ModelConfig))
script_arguments, trainer_config, model_config = parser.par... | https://github.com/huggingface/transformers/issues/40993 | closed | [
"bug"
] | 2025-09-19T08:29:48Z | 2025-09-19T09:06:20Z | 5 | caoyang-sufe |
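The failure mode in the HfArgumentParser issues above comes from how the parser maps dataclass fields to CLI flags. A minimal stdlib sketch of that mechanism, not the real implementation — `ToyConfig` and its fields are hypothetical stand-ins for a TRL config such as SFTConfig:

```python
import argparse
from dataclasses import dataclass, fields

@dataclass
class ToyConfig:
    # Hypothetical stand-in fields; a real TRL config has many more.
    learning_rate: float = 5e-5
    output_dir: str = "out"

def build_parser(dc_type):
    # Sketch of what HfArgumentParser does: one --flag per dataclass field,
    # typed from the annotation. Fields annotated with Union/Optional types
    # have no single callable type, which is where parsing complex configs
    # can break.
    parser = argparse.ArgumentParser()
    for f in fields(dc_type):
        parser.add_argument(f"--{f.name}", type=f.type, default=f.default)
    return parser

args = build_parser(ToyConfig).parse_args(["--learning_rate", "1e-4"])
config = ToyConfig(**vars(args))
```

This is why stricter (simpler) type annotations on config dataclasses make them parseable.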
huggingface/lerobot | 1,978 | Is there a best-fit model for each sim env? | I tried to train diffusion, smolvla, and even pi0 on aloha with 200k steps, and found that they all perform much worse (less than 10% success rate) than the act policy. Why? Does each env task have a best-fit policy, or is there a problem with my training strategy? | https://github.com/huggingface/lerobot/issues/1978 | closed | [
"question",
"policies",
"simulation"
] | 2025-09-19T02:45:14Z | 2025-10-17T11:25:27Z | null | shs822 |
huggingface/accelerate | 3,784 | AttributeError: 'Accelerator' object has no attribute 'deepspeed_config'. Did you mean: 'deepspeed_plugin'? | ### System Info
```Shell
- Name: accelerate Version: 1.10.1
- Name: transformers Version: 4.54.0
- Name: deepspeed Version: 0.17.5
- Name: torch Version: 2.8.0
- Name: wandb Version: 0.21.4
```
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] One of the scripts in th... | https://github.com/huggingface/accelerate/issues/3784 | closed | [] | 2025-09-18T17:07:54Z | 2025-10-27T15:08:19Z | 1 | alexge233 |
huggingface/lerobot | 1,969 | how to record a multi-task dataset on so101? | I found that I can only use "dataset.single_task" to record, but I need to record a dataset containing more than 3 tasks. How can I solve this? | https://github.com/huggingface/lerobot/issues/1969 | closed | [] | 2025-09-18T10:18:00Z | 2025-09-21T02:50:59Z | null | Temmp1e |
huggingface/lerobot | 1,966 | SO101FollowerEndEffector? | I am trying to get inverse kinematics to work on my SO-101, and I found SO100FollowerEndEffector but there is no SO101FollowerEndEffector?
I suspect they are interchangeable, but when I use SO100FollowerEndEffector on my SO-101, it wants me to recalibrate, so I just want to make sure before I break anything. | https://github.com/huggingface/lerobot/issues/1966 | open | [
"question",
"robots"
] | 2025-09-17T23:56:38Z | 2025-10-30T08:56:22Z | null | cashlo |
huggingface/lighteval | 970 | How to use a configuration file? | The documentation makes references to using configuration yaml files like [here](https://huggingface.co/docs/lighteval/main/en/use-litellm-as-backend) but it doesn't give the name of the file or which option to feed the config to lighteval. I tried making a `config.yaml`, `config.yml` in the current directory and tryin... | https://github.com/huggingface/lighteval/issues/970 | closed | [] | 2025-09-16T20:13:48Z | 2025-09-24T22:08:32Z | null | oluwandabira |
huggingface/transformers | 40,915 | HfArgumentParser does not support peft.LoraConfig | ### System Info
- `transformers` version: 4.57.0.dev0
- Platform: Linux-5.14.0-284.73.1.el9_2.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch ... | https://github.com/huggingface/transformers/issues/40915 | closed | [
"bug"
] | 2025-09-16T16:23:56Z | 2025-09-23T05:16:14Z | 5 | romitjain |
huggingface/diffusers | 12,338 | `AutoencoderDC` bug with `pipe.enable_vae_slicing()` and decoding multiple images | ### Describe the bug
When using the Sana_Sprint_1.6B_1024px and the SANA1.5_4.8B_1024px models, I cannot enable VAE slicing when generating multiple images. I guess this issue will affect the rest of the Sana model and pipeline configurations because they all use the same `AutoencoderDC` model.
I traced the issue to ... | https://github.com/huggingface/diffusers/issues/12338 | closed | [
"bug"
] | 2025-09-16T12:23:29Z | 2025-09-22T06:55:35Z | 0 | mingyi456 |
huggingface/optimum | 2,355 | Support exporting text-ranking for BERT models | ### Feature request
Currently, `optimum-cli export onnx --model cross-encoder/ms-marco-MiniLM-L-12-v2 cross-encoder--ms-marco-MiniLM-L-12-v2-onnx` says:
```
ValueError: Asked to export a bert model for the task text-ranking (auto-detected), but the Optimum ONNX exporter only supports the tasks feature-extraction, fi... | https://github.com/huggingface/optimum/issues/2355 | closed | [
"Stale"
] | 2025-09-15T21:23:35Z | 2025-10-21T02:10:29Z | 1 | kshitijl |
huggingface/lerobot | 1,923 | Deploying SmolVLA with a simulator | Has anyone been able to deploy the SmolVLA model to control, say, the SO-100 on a simulator like IsaacSim?
Even if the fine-tuning reliably converges, the observed performance on the simulator seems erratic. Do we apply the predicted actions from SmolVLA directly into the Articulation controller as positions? | https://github.com/huggingface/lerobot/issues/1923 | closed | [
"question",
"policies",
"simulation"
] | 2025-09-12T21:06:40Z | 2025-12-11T22:07:02Z | null | aditya1709 |
huggingface/swift-transformers | 237 | Please help. Seeing issues with Hub when integrating | Hello, I'm trying to integrate WhisperKit via https://github.com/argmaxinc/WhisperKit/blob/main/Package.swift but that seems to bring in [swift-transformers](https://github.com/huggingface/swift-transformers) and Hub. I'm seeing issues as below
Hub.package.swiftinterface:34:32: warning: 'BinaryDistinctCharacter' is n... | https://github.com/huggingface/swift-transformers/issues/237 | closed | [
"question"
] | 2025-09-12T17:06:28Z | 2025-09-17T15:36:52Z | null | rpatnayakuni22 |
huggingface/transformers | 40,815 | get_decoder feature regression in 4.56.0 | ### System Info
In the release of transformers v4.56.0, this PR https://github.com/huggingface/transformers/pull/39509 introduced a refactor of the public `get_decoder` method, which previously existed on models, by moving it to the PreTrainedModel class.
Unfortunately this introduced a significant behavior change in th... | https://github.com/huggingface/transformers/issues/40815 | closed | [
"bug"
] | 2025-09-11T09:25:12Z | 2025-09-16T08:57:14Z | 4 | KyleMylonakisProtopia |
huggingface/transformers | 40,813 | Incorrect sharding configuration for Starcoder2 model | ### System Info
Transformers main branch (commit [0f1b128](https://github.com/huggingface/transformers/commit/0f1b128d3359a26bd18be99c26d7f04fb3cba914) )
- `transformers` version: 4.57.0.dev0
- Platform: Linux-5.15.0-1030-nvidia-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.34.4
- Safeten... | https://github.com/huggingface/transformers/issues/40813 | closed | [
"bug"
] | 2025-09-11T09:02:53Z | 2025-09-15T08:46:33Z | 1 | greg-kwasniewski1 |
huggingface/lerobot | 1,911 | How to avoid re-write cache data from pyarrow into parquet everytime? | Hi Authors,
When using a lerobot dataset in a pytorch dataloader, the lerobot dataset writes a huge cache which is converted from pyarrow to Apache Parquet. How can I avoid that?
I can think of two options:
1. Avoid converting to Parquet data and directly read from parquet data. But this may lose reading performanc...
huggingface/transformers | 40,767 | 3D Object Detection Models | ### Model description
Hi everyone,
is there a reason, or any other thread, where implementing 3D models like those in mmdet3d is discussed? I have not found any discussion.
Thanks
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links fo... | https://github.com/huggingface/transformers/issues/40767 | open | [
"New model"
] | 2025-09-09T13:16:33Z | 2025-11-13T21:18:40Z | 3 | SeucheAchat9115 |
huggingface/lerobot | 1,899 | Has anyone tried to export the smolvla as onnx model for deployment? | I have tried to test the trained smolvla model on my PC, it works. I want now to deploy the smolvla on our target board.
I looked into the model structure of smolvla, for the vision-encoder and language embedding parts I can refer to the smolvlm and export them as tow onnx models. I think the robot state embedding al... | https://github.com/huggingface/lerobot/issues/1899 | open | [
"question",
"policies",
"performance"
] | 2025-09-09T10:41:14Z | 2025-10-07T20:50:12Z | null | TankerLee |
huggingface/huggingface_hub | 3,339 | What is the best replacement for HfFileSystem.glob using HfApi? | In some of our code, we were using something like
```python
hf_fs = HfFileSystem()
files = hf_fs.glob('my/repo/*/model.onnx')
```
But I found that HfFileSystem is much less stable than HfApi, especially in edge cases (e.g. an unstable network).
So what is the best replacement for HfFileSystem.glob using HfApi? Any s... | https://github.com/huggingface/huggingface_hub/issues/3339 | closed | [] | 2025-09-09T09:02:07Z | 2025-09-15T09:12:04Z | null | narugo1992 |
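One hedged sketch of a replacement for the question above: list the repo's files once with HfApi and glob client-side with `fnmatch`. `HfApi().list_repo_files` is the real huggingface_hub call; the hard-coded listing below is a stand-in so the snippet runs offline:

```python
from fnmatch import fnmatch

def glob_repo_files(all_files, pattern):
    # Client-side glob over a flat file listing.
    return [f for f in all_files if fnmatch(f, pattern)]

# Against the Hub, the listing would come from HfApi, e.g.
#   from huggingface_hub import HfApi
#   all_files = HfApi().list_repo_files("my/repo")
# (list_repo_files returns paths relative to the repo root).
all_files = ["a/model.onnx", "b/model.onnx", "b/config.json"]  # stand-in listing
matches = glob_repo_files(all_files, "*/model.onnx")
```

Doing one listing call and filtering locally also avoids the repeated directory walks that make fsspec-style glob fragile on an unstable network.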
huggingface/transformers | 40,754 | Potentially incorrect value assignment of Llama4TextModel's output in Llama4ForCausalLM's output? | ### System Info
**System Info**
- `transformers` version: 4.55.4
- Platform: Linux-6.15.9-201.fc42.x86_64-x86_64-with-glibc2.41
- Python version: 3.13.5
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTo... | https://github.com/huggingface/transformers/issues/40754 | closed | [
"Usage",
"bug"
] | 2025-09-08T12:31:39Z | 2025-09-16T19:25:03Z | 3 | st143575 |
huggingface/transformers | 40,752 | How to extract attention weights for the first generated token? | **Title:** Request for clarification: How to extract attention weights for the first generated token?
**Description:**
Hi, I'm trying to extract the attention weights **of the first generated token** (i.e., the first new token produced by `generate()`) with respect to the input prompt. However, I'm observing inconsis... | https://github.com/huggingface/transformers/issues/40752 | closed | [] | 2025-09-08T09:53:16Z | 2025-09-08T11:41:22Z | null | VincentLHH |
huggingface/transformers.js | 1,407 | Expected time to load a super-resolution model locally | ### Question
Loading an image super-resolution model locally can take more than 10 seconds on my MacBook Pro (M1 Max). Is this expected behavior?
```javascript
env.allowRemoteModels = false;
env.allowLocalModels = true;
env.backends.onnx.wasm.wasmPaths = `/wasm/`;
const upscaler = ref(null);
onMounted(async () => {
... | https://github.com/huggingface/transformers.js/issues/1407 | closed | [
"question"
] | 2025-09-08T06:26:49Z | 2025-09-30T19:22:34Z | null | ymtoo |
huggingface/lerobot | 1,891 | How to checkout a commit id? | The underlying datasets supports a "revision" flag. Does lerobot? | https://github.com/huggingface/lerobot/issues/1891 | closed | [] | 2025-09-08T04:39:37Z | 2025-09-10T22:53:18Z | null | richardrl |
huggingface/transformers | 40,743 | Support for 4D attention mask for T5 | ### Feature request
Currently, T5 cannot take 4D attention masks (batch_size, num_heads, seq_len, seq_len) as inputs. Passing a 4D attention_mask and a 4D decoder_attention_mask like so leads to a shape-related exception :
```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
toke... | https://github.com/huggingface/transformers/issues/40743 | open | [
"Feature request"
] | 2025-09-07T07:18:05Z | 2025-09-09T11:43:33Z | 5 | Aethor |
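The shape the feature request above asks about can be illustrated without torch: a (batch, seq_len) padding mask expanded to the (batch, num_heads, seq_len, seq_len) layout a 4D-mask-aware model expects. A pure-Python sketch of the semantics, not T5's implementation:

```python
def expand_to_4d(mask_2d, num_heads):
    # Expand a (batch, seq_len) padding mask to
    # (batch, num_heads, seq_len, seq_len): query position i may attend
    # to key position j only if j is a real (non-padding) token.
    out = []
    for row in mask_2d:
        seq_len = len(row)
        square = [list(row) for _ in range(seq_len)]  # same key mask per query
        out.append([square for _ in range(num_heads)])
    return out

mask4d = expand_to_4d([[1, 1, 0]], num_heads=2)
```

A custom 4D mask generalizes this square: each (i, j) entry can differ, which is exactly what a 2D mask cannot express.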
huggingface/lerobot | 1,882 | Pretrain - Code for pretraining smolvla | ## Guidance on Replicating the Pre-training Process with Community Datasets
Hi team,
First off, thank you for the fantastic work on SmolVLA and for open-sourcing the model and code. It's a great contribution to the community.
I am trying to replicate the pre-training process as described in the original paper. I ha... | https://github.com/huggingface/lerobot/issues/1882 | closed | [
"question",
"dataset"
] | 2025-09-07T03:18:04Z | 2025-09-23T09:06:13Z | null | ruiheng123 |
huggingface/transformers | 40,708 | When using a custom model, it copies the code into Hugging Face’s cache directory. | ```
model = AutoModel.from_pretrained(
model_args.model_name_or_path,
trust_remote_code=True,
torch_dtype=compute_dtype,
device_map=device_map,
# init_vision=True,
# init_audio=False,
# init_tts=False,
)
```
`model_args.model_name_or_path=/mnt/241hdd/wzr/M... | https://github.com/huggingface/transformers/issues/40708 | closed | [] | 2025-09-05T07:21:40Z | 2025-11-15T08:03:16Z | 4 | wzr0108 |
huggingface/transformers | 40,690 | Batches loaded from wrong epoch when resuming from second epoch | ### System Info
**Required system information**
```text
- `transformers` version: 4.57.0.dev0
- Platform: Linux-5.15.0-133-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed ve... | https://github.com/huggingface/transformers/issues/40690 | closed | [
"bug"
] | 2025-09-04T11:48:41Z | 2025-12-03T13:14:04Z | 6 | ngazagna-qc |
huggingface/optimum | 2,347 | Gemma3n convert to onnx format | Hello,
How do I convert the Gemma3n model to the ONNX format using the OptimumCLI command?
Thanks in advance. | https://github.com/huggingface/optimum/issues/2347 | closed | [
"Stale"
] | 2025-09-04T09:13:19Z | 2025-10-15T02:09:55Z | 2 | shahizat |
huggingface/transformers | 40,680 | Idea: Exploring Mathematical Extensions for GPT-style Models (teaser) | Hi Transformers team 👋,
I’ve been experimenting with a conceptual enhancement to GPT-style architectures—introducing mathematical mechanisms for memory and adaptive learning—while keeping the overall transformer backbone intact.
I’ve documented the approach in Markdown (README + comparison notes), but haven’t publis... | https://github.com/huggingface/transformers/issues/40680 | closed | [] | 2025-09-04T07:23:29Z | 2025-10-12T08:02:38Z | 3 | muzamil-ashiq |
huggingface/transformers | 40,647 | how to get response text during training | I want to obtain the inferred output text during the evaluation step in the training process, not just the eval loss.
<img width="1264" height="211" alt="Image" src="https://github.com/user-attachments/assets/9dd432c5-74ea-4290-adff-7865cf3ea481" /> | https://github.com/huggingface/transformers/issues/40647 | closed | [] | 2025-09-03T10:37:51Z | 2025-10-12T08:02:43Z | null | zyandtom |
huggingface/diffusers | 12,276 | The image is blurry. | How to solve image blurriness during fine-tuning? | https://github.com/huggingface/diffusers/issues/12276 | open | [] | 2025-09-03T08:29:38Z | 2025-09-03T08:29:38Z | 0 | sucessfullys |
huggingface/gym-hil | 32 | how to perform hil in sim | https://github.com/huggingface/gym-hil/issues/32 | closed | [] | 2025-09-02T17:10:05Z | 2025-09-16T14:02:32Z | null | prathamv0811 | |
huggingface/transformers | 40,606 | GPT-OSS attention backends available for SM120 other than Eager? | I was wondering which attention backend we can use for long context on an SM120 GPU. The "eager_attention_forward" uses the naive implementation that computes the full attention in one go, which can lead to OOM for large contexts, but I couldn't use other implementations since they either do not support sinks or ... | https://github.com/huggingface/transformers/issues/40606 | closed | [] | 2025-09-02T03:21:16Z | 2025-10-12T08:02:48Z | 4 | TheTinyTeddy |
huggingface/peft | 2,764 | merge_and_unload returns the base (prior to fine-tuning) back!!!! | I have fine-tune a model using PEFT and now I want to merge the base model to adapter. This is what I am doing:
```
base_model = AutoModelForCausalLM.from_pretrained(model_id, device_map='auto')
model_finetuned = PeftModel.from_pretrained(base_model, adapter_path)
```
Now the size of `model_finetuned `is roughly 42GB but when I... | https://github.com/huggingface/peft/issues/2764 | closed | [] | 2025-09-01T04:07:36Z | 2025-10-09T15:26:15Z | 12 | manitadayon |
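For the merge issue above: per target module, merging computes W' = W + (alpha / r) · B · A, so if the merged model behaves exactly like the base, the adapter delta was effectively zero (e.g. the adapter weights never matched or loaded). A tiny pure-Python sketch of the merge math — illustrative shapes, not the PEFT code:

```python
def lora_merge(W, A, B, alpha, r):
    # LoRA merge: W' = W + (alpha / r) * B @ A
    # (shapes: W (out, in), B (out, r), A (r, in)).
    # If B @ A is zero, the "merged" model is numerically the base model.
    scale = alpha / r
    out_dim, in_dim = len(W), len(W[0])
    return [
        [W[i][j] + scale * sum(B[i][k] * A[k][j] for k in range(r))
         for j in range(in_dim)]
        for i in range(out_dim)
    ]

merged = lora_merge(W=[[1.0, 0.0], [0.0, 1.0]],
                    A=[[1.0, 1.0]], B=[[1.0], [0.0]],
                    alpha=2, r=1)
```

Comparing a few merged weight tensors against the base before saving is a quick way to confirm the adapter actually changed anything.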
huggingface/lerobot | 1,822 | As of 08/31/2025, how do you create a v2.1 dataset from raw data? | My search is cursory, but I can't find any tutorial or example on creating a v2.1 dataset on the main branch. So, how do you create a Lerobot dataset in the current version? Should I refer to older commits? | https://github.com/huggingface/lerobot/issues/1822 | open | [
"question",
"dataset"
] | 2025-08-31T18:29:34Z | 2025-10-08T13:02:44Z | null | IrvingF7 |
huggingface/text-generation-inference | 3,318 | Infinite tool call loop: `HuggingFaceModel` and `text-generation-inference` | ## Description
Hello. Needless to say, amazing library. Please let me know if you'd like me to try something or if you need more info.
I've been going through various local model providers trying to find one that works well, when I came across a rather shocking bug when running against Huggingface's TGI model host.
T... | https://github.com/huggingface/text-generation-inference/issues/3318 | open | [] | 2025-08-31T08:23:46Z | 2025-08-31T08:58:13Z | 1 | baughmann |
huggingface/diffusers | 12,257 | [Looking for community contribution] support Wan 2.2 S2V: an audio-driven cinematic video generation model | We're super excited about the Wan 2.2 S2V (Speech-to-Video) model and want to get it integrated into Diffusers! This would be an amazing addition, and we're looking for experienced community contributors to help make this happen.
- **Project Page**: https://humanaigc.github.io/wan-s2v-webpage/
- **Source Code**: htt... | https://github.com/huggingface/diffusers/issues/12257 | open | [
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2025-08-29T08:04:43Z | 2025-08-29T10:23:52Z | 0 | yiyixuxu |
huggingface/optimum-onnx | 44 | How to use streaming inference for onnx models exported from QWEN3-4B models | How to use streaming inference for onnx models exported from QWEN3-4B models | https://github.com/huggingface/optimum-onnx/issues/44 | closed | [] | 2025-08-29T01:48:07Z | 2025-10-06T12:29:34Z | null | williamlzw |
huggingface/diffusers | 12,255 | [BUG] Misleading ValueError when subclassing StableDiffusionImg2ImgPipeline with a mismatched __init__ signature | ### Describe the bug
When subclassing diffusers.StableDiffusionImg2ImgPipeline, if the subclass's __init__ signature does not include the requires_safety_checker: bool = True argument, the default .from_pretrained() loader raises a confusing and indirect ValueError.
The official documentation for StableDiffusionImg2I... | https://github.com/huggingface/diffusers/issues/12255 | closed | [
"bug"
] | 2025-08-28T18:31:14Z | 2025-08-30T07:41:16Z | 2 | BoostZhu |
huggingface/peft | 2,759 | PeftModel trainable parameters with multiple adapters | ### System Info
peft-0.17.1
python 3.9
### Who can help?
@BenjaminBossan
### Reproduction
**1) modules_to_save gradient true even when is_trainable=False**
The adapters has both modules_to_save and target_modules
```
peft_backbone = PeftModel.from_pretrained(
target_backbone,
... | https://github.com/huggingface/peft/issues/2759 | closed | [] | 2025-08-28T16:36:25Z | 2025-10-06T15:04:09Z | 8 | NguyenRichard |
huggingface/transformers | 40,462 | Question about RoPE Implementation in modeling_llama: Should torch.cat be repeat_interleave? | Hi,
I was going through the code for `modeling_llama` and the RoPE implementation. I came across the following function:
```
def forward(self, x, position_ids):
inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1).to(x.device)
position_ids_expanded = position_id... | https://github.com/huggingface/transformers/issues/40462 | closed | [] | 2025-08-26T16:32:41Z | 2025-08-27T10:01:11Z | 2 | abhidipbhattacharyya |
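On the RoPE question above: as I understand it, the cat-based layout is intentional — the checkpoint-conversion script permutes the q/k projection weights so that pairing dimension i with i + d/2 (what `torch.cat` implements) is equivalent to the original interleaved formulation (what `repeat_interleave` would give). A pure-Python sketch of the cat-style rotation, illustrative rather than the transformers code:

```python
import math

def apply_rope_half_layout(x, pos, theta=10000.0):
    # "rotate_half" layout: dimension i is paired with i + d/2.
    # Each pair is rotated by pos * theta**(-2i/d), so the vector norm
    # is preserved and pos = 0 is the identity.
    d = len(x)
    half = d // 2
    out = [0.0] * d
    for i in range(half):
        angle = pos * theta ** (-2.0 * i / d)
        c, s = math.cos(angle), math.sin(angle)
        out[i] = x[i] * c - x[i + half] * s
        out[i + half] = x[i + half] * c + x[i] * s
    return out

rot0 = apply_rope_half_layout([1.0, 2.0, 3.0, 4.0], pos=0)  # identity
rot5 = apply_rope_half_layout([1.0, 2.0, 3.0, 4.0], pos=5)
norm_in = sum(v * v for v in [1.0, 2.0, 3.0, 4.0])
norm_out = sum(v * v for v in rot5)  # rotation preserves the norm
```

Which dimensions are paired is a pure relabeling, which is why the two implementations agree once the weights are permuted consistently.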
huggingface/transformers | 40,459 | `use_kernels=True` does not invoke custom kernels | ### System Info
- `transformers` version: 4.56.0.dev0
- Platform: Linux-5.4.0-216-generic-x86_64-with-glibc2.31
- Python version: 3.12.7
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: 1.10.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (ac... | https://github.com/huggingface/transformers/issues/40459 | closed | [
"bug"
] | 2025-08-26T13:32:35Z | 2025-09-16T08:50:55Z | 1 | ariG23498 |
huggingface/diffusers | 12,241 | WAN2.1 FLF2V: Incorrect MASK Creation? | Hello! I think this may be an error. (Or not; please explain it to me!)
In the **WanImageToVideoPipeline** class in `pipeline_wan_i2v.py`,
<img width="868" height="243" alt="Image" src="https://github.com/user-attachments/assets/8108a9e9-8632-44a1-93b8-abd9ae6a22cd" />
(the code is the part of `prepare_latents` funct... | https://github.com/huggingface/diffusers/issues/12241 | open | [] | 2025-08-26T12:23:09Z | 2025-08-27T02:10:49Z | 1 | KyujinHan |
huggingface/lerobot | 1,792 | how to train lerobot model offline with offline data? | Hi, I'm trying to configure lerobot to train with pre-downloaded models and datasets. I'm stuck, however, on how to organize the model cache and dataset cache, and how to tell the train script to use everything offline.
I tried to download the model and dataset:
```
$ hf download lerobot/pi0 --cache-dir ~/lerobot... | https://github.com/huggingface/lerobot/issues/1792 | closed | [] | 2025-08-26T10:20:56Z | 2025-09-03T10:48:37Z | null | dalishi |
huggingface/accelerate | 3,748 | How to pass two layer classes using --fsdp_transformer_layer_cls_to_wrap? | https://github.com/huggingface/accelerate/issues/3748 | closed | [] | 2025-08-26T08:56:32Z | 2025-08-26T09:14:18Z | null | sunjian2015 | 
huggingface/diffusers | 12,239 | Support for InfiniteTalk | ### Model/Pipeline/Scheduler description
https://huggingface.co/MeiGen-AI/InfiniteTalk is a wonderful audio-driven video generation model that can also support infinite frames; it is based on wan2.1. The demo and users' workflows are also awesome. Some examples: https://www.runninghub.cn/ai-detail/195843862495620301... | https://github.com/huggingface/diffusers/issues/12239 | open | [
"help wanted",
"New pipeline/model",
"contributions-welcome"
] | 2025-08-26T06:57:43Z | 2025-09-05T00:18:46Z | 1 | supermeng |
huggingface/transformers | 40,406 | Cache tokenizer | ### Feature request
I am using Grounding DINO, which makes use of the `bert-base-uncased` tokenizer. Unfortunately, this model is never downloaded to the cache, forcing a remote call to the API. Please allow the tokenizer to be cached locally.
### Motivation
I want to use my software offline.
### Your contribution
... | https://github.com/huggingface/transformers/issues/40406 | open | [
"Feature request"
] | 2025-08-24T08:36:14Z | 2025-09-10T11:49:06Z | 5 | axymeus |
huggingface/tokenizers | 1,851 | SentencePieceBPE + Unicode NFD preprocessing leads to noise ? | Hi,
I have had the issue multiple times, so I assume I am doing something wrong.
**Versions:**
- tokenizers==0.21.4
- transformers==4.55.4
**Training script**
```py
from transformers import PreTrainedTokenizerFast
from pathlib import Path
from read import get_texts_iter_for_tokenizer
from tokenizers import SentenceP... | https://github.com/huggingface/tokenizers/issues/1851 | open | [] | 2025-08-24T08:28:08Z | 2025-09-17T09:33:11Z | 3 | PonteIneptique |
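On the NFD issue above: the "noise" is consistent with what NFD does — it decomposes precomposed characters into a base letter plus combining marks, so a tokenizer trained on NFD text learns tokens containing bare combining characters, which render as garbage when detached from a base letter. A stdlib illustration:

```python
import unicodedata

# NFD splits a precomposed character into base + combining mark;
# NFC recomposes it. Training on NFC input (or normalizing model
# output back to NFC) avoids stray combining characters in tokens.
nfd = unicodedata.normalize("NFD", "\u00e9")  # "é" -> "e" + U+0301
nfc = unicodedata.normalize("NFC", nfd)      # back to one code point
```

Matching the normalizer used at training time with the one used at inference time is the usual fix.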
huggingface/coreml-examples | 17 | how to get absolute depth, in meters? | how to get absolute depth, in meters? | https://github.com/huggingface/coreml-examples/issues/17 | open | [] | 2025-08-24T03:20:58Z | 2025-08-24T03:20:58Z | null | jay25208 |
huggingface/transformers | 40,398 | NVIDIA RADIO-L | ### Model description
While exploring, I came across [nvidia/RADIO-L](https://huggingface.co/nvidia/RADIO-L) and was wondering about its current support.
1. May I ask if RADIO-L is already supported in Transformers?
2. If not, would it be considered suitable to add?
3. If a model requires trust_remote_code=True, what... | https://github.com/huggingface/transformers/issues/40398 | open | [
"New model"
] | 2025-08-23T11:14:42Z | 2025-08-26T14:44:11Z | 4 | Uvi-12 |
huggingface/diffusers | 12,222 | [Contribution welcome] adding a fast test for Qwen-Image Controlnet Pipeline | We are looking for help from community to add a fast time for this PR
https://github.com/huggingface/diffusers/pull/12215
You can add a file under this folder:
https://github.com/huggingface/diffusers/tree/main/tests/pipelines/qwenimage
You can reference other tests we added for qwen pipelines [example](https://git... | https://github.com/huggingface/diffusers/issues/12222 | closed | [
"good first issue",
"help wanted",
"contributions-welcome"
] | 2025-08-22T21:04:50Z | 2025-08-25T01:58:59Z | 6 | yiyixuxu |
huggingface/diffusers | 12,221 | [Looking for community contribution] support DiffSynth Controlnet in diffusers | ### Model/Pipeline/Scheduler description
Hi!
We want to add first party support for DiffSynth controlnet in diffusers, and we are looking for some help from the community!
Let me know if you're interested!
### Open source status
- [x] The model implementation is available.
- [x] The model weights are available (... | https://github.com/huggingface/diffusers/issues/12221 | open | [
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2025-08-22T20:49:18Z | 2025-09-11T10:01:08Z | 5 | yiyixuxu |
huggingface/safetensors | 649 | How to determine if a file is a safetensor file | Is there a good and fast way to determine if a file is a safetensors file? We would like to avoid reading the whole header.
Background we are currently trying to add safetensors as a datatype to the Galaxy project: https://github.com/galaxyproject/galaxy/pull/20754 | https://github.com/huggingface/safetensors/issues/649 | open | [] | 2025-08-22T09:17:49Z | 2025-09-03T11:08:30Z | null | bernt-matthias |
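For reference, the safetensors layout starts with an 8-byte little-endian header length followed by a JSON header object, so a cheap check only needs a few bytes. A minimal sketch (the function name is ours, and a real loader should still validate the full header):

```python
import io
import json
import struct

def looks_like_safetensors(f, peek=256):
    """Cheap structural check: safetensors files start with an 8-byte
    little-endian header length, followed by a JSON header object."""
    prefix = f.read(8)
    if len(prefix) != 8:
        return False
    (header_len,) = struct.unpack("<Q", prefix)
    f.seek(0, 2)  # seek to end to get the total size
    size = f.tell()
    if header_len == 0 or 8 + header_len > size:
        return False
    f.seek(8)
    head = f.read(min(peek, header_len)).lstrip()
    return head.startswith(b"{")  # a real header is a JSON object

# Demo with an in-memory file shaped like a (metadata-only) safetensors file.
header = json.dumps({"__metadata__": {}}).encode()
good = io.BytesIO(struct.pack("<Q", len(header)) + header)
bad = io.BytesIO(b"definitely not a safetensors file")
print(looks_like_safetensors(good), looks_like_safetensors(bad))  # True False
```

This rejects files whose claimed header length exceeds the file size without ever reading the header body, which also catches most random binaries whose first 8 bytes decode to a huge length.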
huggingface/lerobot | 1,775 | What's the finetuning method? Is it all full-finetuning? | I couldn't find anything about LoRA fine-tuning; is the default method full fine-tuning for now? | https://github.com/huggingface/lerobot/issues/1775 | closed | [
"question",
"policies"
] | 2025-08-22T06:48:25Z | 2025-10-07T20:55:10Z | null | lin-whale |
huggingface/lerobot | 1,774 | Finetune smolvla with vision encoder | ### System Info
```Shell
- `lerobot` version: 0.1.0
- Platform: Linux-6.8.0-65-generic-x86_64-with-glibc2.35
- Python version: 3.10.18
- Huggingface_hub version: 0.33.4
- Dataset version: 3.6.0
- Numpy version: 2.2.6
- PyTorch version (GPU?): 2.7.1+cu126 (True)
- Cuda version: 12060
- Using GPU in script?: <fill in>
`... | https://github.com/huggingface/lerobot/issues/1774 | open | [
"question",
"policies",
"good first issue"
] | 2025-08-22T05:20:58Z | 2025-10-08T11:31:02Z | null | THU-yancow |
huggingface/transformers | 40,366 | [Feature] Support fromjson in jinja2 chat template rendering | ### Feature request
GLM-4.5 requires `fromjson` in Jinja2 to deserialize the string-typed `tool_calls.function.arguments` into a dict within the chat template, so it can iterate over the key-value pairs of `arguments`.
```
{% for tc in m.tool_calls %}
{%- if tc.function %}
{%- set tc = tc.function %}
{%- endif %}
{{ '\n<to... | https://github.com/huggingface/transformers/issues/40366 | open | [
"Feature request"
] | 2025-08-22T05:11:06Z | 2025-08-22T05:18:45Z | 1 | byjiang1996 |
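For context, a `fromjson` filter like the one requested can be registered on a plain Jinja2 environment in one line. This is only a sketch of the filter itself, using the name from the issue:

```python
import json
from jinja2 import Environment

env = Environment()
env.filters["fromjson"] = json.loads  # one-line filter registration

template = env.from_string(
    "{% set args = arguments | fromjson %}"
    "{% for k, v in args.items() %}{{ k }}={{ v }};{% endfor %}"
)
rendered = template.render(arguments='{"city": "Paris", "unit": "C"}')
print(rendered)  # city=Paris;unit=C;
```

Note that transformers renders chat templates in a sandboxed Jinja2 environment, so actually supporting `fromjson` in chat templates means registering the filter on that environment rather than on a plain `Environment`.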
huggingface/peft | 2,749 | Set multiple adapters actively when training | Hi! In incremental scenarios, I want to train a new adapter while keeping some old adapters active. Note that PeftModel can set the active adapter via `model.set_adapter()`, but it can only set one adapter at a time, since the `adapter_name` argument is typed as `str` rather than `List[str]`. I also notice that class "PeftMixe... | https://github.com/huggingface/peft/issues/2749 | closed | [] | 2025-08-21T09:59:25Z | 2025-09-29T15:04:15Z | 4 | Yongyi-Liao |
huggingface/lerobot | 1,765 | Questions about using LIBERO dataset (loss starts extremely high) | Hello,
I am training on the "**IPEC-COMMUNITY/libero_spatial_no_noops_1.0.0_lerobot**" dataset, but I encountered an issue (here is the dataset: https://huggingface.co/datasets/IPEC-COMMUNITY/libero_spatial_no_noops_1.0.0_lerobot):
At the very beginning of training, the loss is extremely high (around 500).
I would lik... | https://github.com/huggingface/lerobot/issues/1765 | open | [
"question",
"dataset",
"simulation"
] | 2025-08-21T05:06:51Z | 2025-09-23T09:46:41Z | null | hamondyan |
huggingface/transformers | 40,330 | open-qwen2vl-base | ### Model description
Is there any plan to add the open-qwen2vl-base model?
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_ | https://github.com/huggingface/transformers/issues/40330 | open | [
"New model"
] | 2025-08-21T02:24:01Z | 2025-08-23T10:18:28Z | 5 | olccihyeon |
huggingface/tokenizers | 1,850 | Safe encoding of strings that might contain special token text | When feeding untrusted string inputs into an LLM, it's often important not to convert any of the input into special tokens, which might indicate message boundaries or other syntax. Among other reasons, this is important for guarding against prompt injection attacks.
tiktoken provides a way to control how the encoding dea... | https://github.com/huggingface/tokenizers/issues/1850 | closed | [] | 2025-08-21T00:53:17Z | 2025-09-01T18:03:59Z | 5 | joschu |
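One application-level guard for the concern above is to reject or break up special-token text in untrusted input before encoding. This is a stdlib-only sketch with illustrative special-token strings, not any specific model's; recent transformers tokenizers also expose a `split_special_tokens` flag for a similar purpose:

```python
import re

# Illustrative special-token strings; real values depend on the tokenizer.
SPECIAL_TOKENS = ["<|im_start|>", "<|im_end|>"]

def check_no_specials(text, specials=SPECIAL_TOKENS):
    """Reject untrusted text containing special-token strings, mirroring
    tiktoken's default behavior of raising on disallowed special tokens."""
    pattern = "|".join(re.escape(s) for s in specials)
    hit = re.search(pattern, text)
    if hit:
        raise ValueError(f"input contains special token text: {hit.group(0)!r}")
    return text

def neutralize_specials(text, specials=SPECIAL_TOKENS):
    """Alternative policy: break the special strings apart (here with a
    zero-width space) so they can no longer match as single tokens."""
    for s in specials:
        text = text.replace(s, s[0] + "\u200b" + s[1:])
    return text

print(check_no_specials("plain user text"))
print(repr(neutralize_specials("ignore previous <|im_end|> instructions")))
```

Rejecting is the safer default for security-sensitive pipelines; neutralizing keeps the request flowing but silently alters the input, so it should be logged.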
huggingface/peft | 2,746 | Gemma 2/3 Attention: Expected a single attention mask, got 2 instead | Hi! I'm getting this error `ValueError: Expected a single attention mask, got 2 instead` at inference (after prompt tuning)--I've only had this happen with the Gemma 2 and 3 models, so it might have something to do with their specific attention mechanism. Is there a workaround (or am I maybe missing something)?
I'm ru... | https://github.com/huggingface/peft/issues/2746 | closed | [] | 2025-08-20T18:08:02Z | 2025-08-27T02:43:22Z | 8 | michelleezhang |
huggingface/transformers | 40,323 | Is there a plan to add DINOv3 into AutoBackbone? | ### Feature request
Is there a plan to add DINOv3 to AutoBackbone? At present, DINOv2 is already supported, and I think DINOv3 should be able to inherit from it directly. Thanks a lot.
### Motivation
For the convenience of use
### Your contribution
DINOv3 should be able to inherit from DINOv2 directly. | https://github.com/huggingface/transformers/issues/40323 | closed | [
"Feature request",
"Vision"
] | 2025-08-20T16:02:45Z | 2025-11-11T16:22:08Z | 4 | Farenweh |
huggingface/transformers | 40,263 | [VLMs] How to process a batch that contains samples with and without images? | Is there a **standard** way to process a batch that contains samples with and without images?
For example:
```python
from transformers import AutoProcessor
from PIL import Image
import numpy as np
model_id = ... # tested are "google/gemma-3-4b-it", "HuggingFaceM4/idefics2-8b", "HuggingFaceM4/Idefics3-8B-Llama3", "H... | https://github.com/huggingface/transformers/issues/40263 | closed | [] | 2025-08-19T05:09:36Z | 2025-09-18T08:08:51Z | null | qgallouedec |
huggingface/diffusers | 12,185 | What's the difference between DreamBooth LoRa and traditional LoRa? | I see a lot of examples using DreamBooth LoRa training code. What's the difference between this and traditional LoRa training? Can this DreamBooth LoRa training code be adapted to standard SFT LoRa code? Does disabling with_prior_preservation return normal LoRa training? | https://github.com/huggingface/diffusers/issues/12185 | open | [] | 2025-08-19T03:32:30Z | 2025-08-19T15:04:22Z | 3 | MetaInsight7 |
huggingface/trl | 3,918 | How to use trl-SFTTrainer to train Qwen-30B-A3B? | Has anyone tried using TRL to train Qwen-30B-A3B-Instruct-2507? | https://github.com/huggingface/trl/issues/3918 | open | [
"❓ question"
] | 2025-08-19T03:04:36Z | 2025-08-19T03:11:30Z | null | JeffWb |
huggingface/datasets | 7,739 | Replacement of "Sequence" feature with "List" breaks backward compatibility | PR #7634 replaced the Sequence feature with List in 4.0.0, so datasets saved with version 4.0.0 with that feature cannot be loaded by earlier versions. There is no clear option in 4.0.0 to use the legacy feature type to preserve backward compatibility.
Why is this a problem? I have a complex preprocessing and training... | https://github.com/huggingface/datasets/issues/7739 | open | [] | 2025-08-18T17:28:38Z | 2025-09-10T14:17:50Z | 1 | evmaki |
huggingface/gsplat.js | 119 | How to 4DGS (.splatv) | How can I generate the .splatv file and get it running on my local server? | https://github.com/huggingface/gsplat.js/issues/119 | open | [] | 2025-08-18T07:35:04Z | 2025-08-18T07:35:04Z | null | CetosEdit |
huggingface/diffusers | 12,165 | Failed to finetune the pre-trained model of 'stable-diffusion-v1-4' on image inpainting task | I fine-tuned the pre-trained 'stable-diffusion-inpainting' model on the image inpainting task, and everything works well, since that model was trained for inpainting. But when I fine-tuned from the pre-trained 'stable-diffusion-v1-4' model, which was trained for text-to-image, the loss is NaN and the result is pure black.
As the... | https://github.com/huggingface/diffusers/issues/12165 | closed | [] | 2025-08-17T07:15:36Z | 2025-09-07T09:35:38Z | 7 | micklexqg |
huggingface/gym-hil | 27 | How to close the gripper in gym-hill-sim? | Hello all.
I'm using macOS to practice with the gym-hil sim tutorial.
I figured out how to move the robot along x, y, z, but it seems impossible to close the gripper.
Could you all please share the correct key?
ChatGPT suggested the Ctrl key, but it's not working!
Thanks in advance. | https://github.com/huggingface/gym-hil/issues/27 | open | [] | 2025-08-15T13:46:12Z | 2025-08-15T13:57:26Z | null | cory0619 |
huggingface/peft | 2,742 | RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn | Hello, I am fine-tuning the LLaMA-2 7B model on an A100 40 GB GPU. Initially, I was getting a CUDA out-of-memory error. I tried various methods, such as reducing batch size, but none worked. Then I enabled:
model.gradient_checkpointing_enable()
After doing this, the OOM issue was resolved, but now I get the following... | https://github.com/huggingface/peft/issues/2742 | closed | [] | 2025-08-15T06:21:50Z | 2025-09-23T15:04:07Z | 4 | Mishajain1110 |
huggingface/trl | 3,896 | How to gather completions before computing rewards in GRPOTrainer | Hi,
I found that the `reward_funcs` passed to GRPOTrainer are applied per device.
That is, if I set `num_generations=16` and `per_device_train_batch_size=4`, my custom reward function only receives `4` completions.
However, my customized reward function calculates rewards depending on a global view over all `16` comple... | https://github.com/huggingface/trl/issues/3896 | closed | [
"❓ question",
"🏋 Reward",
"🏋 GRPO"
] | 2025-08-14T14:41:42Z | 2025-09-03T14:09:16Z | null | rubickkcibur |
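As a single-process illustration of the bookkeeping such a gather step needs (in a real run this would sit on top of `torch.distributed.all_gather_object` or accelerate's gather utilities; the function and reward below are toy examples, not TRL's API):

```python
def global_rewards(per_device_completions, reward_fn):
    """Flatten per-device completion lists, score them with a reward function
    that needs a global view, then split rewards back per device."""
    flat = [c for device in per_device_completions for c in device]
    rewards = reward_fn(flat)  # sees all completions for the prompt group
    out, i = [], 0
    for device in per_device_completions:
        out.append(rewards[i : i + len(device)])
        i += len(device)
    return out

def rank_reward(completions):
    """A reward that ranks completions relative to the whole group, which
    cannot be computed correctly from a 4-completion shard alone."""
    order = sorted(range(len(completions)), key=lambda j: len(completions[j]))
    ranks = [0.0] * len(completions)
    for r, j in enumerate(order):
        ranks[j] = r / max(len(completions) - 1, 1)
    return ranks

shards = [["a", "bbb"], ["cc", "dddd"]]  # 2 "devices" x 2 completions each
print(global_rewards(shards, rank_reward))
```

The key property is that `reward_fn` runs once over the concatenation of all shards, so any group-relative statistic (rank, softmax, best-of-n) is computed over the full `num_generations` before the rewards are scattered back.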
huggingface/peft | 2,738 | Which base model weights are getting frozen after applying LoRA? | I have finetuned LLaVA-v1.5-7B with peft LoRA, and I have found out that after adding the LoRA adapters, all the weights are getting frozen except for the newly added LoRA layers and mm_projector weights (non-LoRA). I will be glad to know the freezing logic implemented by peft since not all the base model weights are g... | https://github.com/huggingface/peft/issues/2738 | closed | [] | 2025-08-13T17:35:10Z | 2025-08-14T04:20:42Z | 1 | srbh-dl |
huggingface/diffusers | 12,136 | How to use Diffusers to Convert Safetensors SDXL 1.0 to Onnx? | Hello,
I'm trying to convert a safetensors checkpoint for SDXL to onnx format.
I've tried Optimum already, but it fails every time.
Please help. | https://github.com/huggingface/diffusers/issues/12136 | closed | [] | 2025-08-13T06:33:22Z | 2025-10-31T03:13:28Z | null | CypherpunkSamurai |
huggingface/lerobot | 1,712 | Why hasn't the pi0 model learned the ability to place something in the specified positions? Is it because the number of datasets is insufficient? | I am creating a tic-tac-toe board and using yellow and green sandbags as pieces. I have collected a dataset of "the entire process of a robotic arm picking up yellow sandbags and placing them in nine different positions on the board". This dataset is used to train the pi0 model to achieve autonomous playing. The collec... | https://github.com/huggingface/lerobot/issues/1712 | open | [
"question",
"policies"
] | 2025-08-12T10:15:26Z | 2025-12-22T08:10:47Z | null | Alex-Wlog |
huggingface/transformers | 40,089 | Could not import module 'AutoTokenizer'. Are this object's requirements defined correctly? | ### System Info
- torch @ https://download.pytorch.org/whl/cu124/torch-2.6.0%2Bcu124-cp310-cp310-linux_x86_64.whl
- torchaudio @ https://download.pytorch.org/whl/cu124/torchaudio-2.6.0%2Bcu124-cp310-cp310-linux_x86_64.whl
- torchvision @ https://download.pytorch.org/whl/cu124/torchvision-0.21.0%2Bcu124-cp310-cp310-lin... | https://github.com/huggingface/transformers/issues/40089 | closed | [
"bug"
] | 2025-08-11T21:44:05Z | 2025-09-08T03:09:11Z | 3 | octavianBordeanu |
huggingface/candle | 3,052 | Candle vs. PyTorch performance | I'm running https://github.com/huggingface/candle/tree/main/candle-examples/examples/llava vs. https://github.com/fpgaminer/joycaption/blob/main/scripts/batch-caption.py on a Mac m1.
I'm seeing a significant performance difference; Candle seems much slower.
I enabled the `accelerate` and `metal` features.
Would love some pointers ... | https://github.com/huggingface/candle/issues/3052 | open | [] | 2025-08-11T16:14:17Z | 2025-11-14T20:05:16Z | 8 | ohaddahan |
huggingface/diffusers | 12,124 | For qwen-image training file, Maybe "shuffle" of dataloader should be "False" when custom_instance_prompts is not None and cache_latents is False? | ### Describe the bug
I think the dataloader's `shuffle` should be `False` when `custom_instance_prompts` is not None and `cache_latents` is False. Otherwise it breaks the correspondence between prompt embeddings and images during training, and the prompt will not be followed for the T2I task. ### R...
### R... | https://github.com/huggingface/diffusers/issues/12124 | open | [
"bug"
] | 2025-08-11T13:15:21Z | 2025-08-30T01:57:02Z | 2 | yinguoweiOvO |
huggingface/diffusers | 12,120 | How to train a lora with distilled flux model, such as flux-schnell??? | **Is your feature request related to a problem? Please describe.**
I can use FLUX as the base model to train a LoRA, but it needs 20 steps, which costs a lot of time. I want to train a LoRA on a distilled model so that fewer steps produce a better image, e.g. a LoRA based on the flux-schnell model that only nee... | https://github.com/huggingface/diffusers/issues/12120 | open | [] | 2025-08-11T03:07:42Z | 2025-08-11T06:01:45Z | null | Johnson-yue |
huggingface/diffusers | 12,108 | Qwen Image and Chroma pipeline breaks using schedulers that enable flow matching by parameter. | ### Describe the bug
Several Schedulers support flow matching by using the prediction_type='flow_prediction" e.g.
```
pipe.scheduler = UniPCMultistepScheduler(prediction_type="flow_prediction", flow_shift=3.16, timestep_spacing='trailing', use_flow_sigmas=True)
```
However Chroma and Qwen Image will not work with th... | https://github.com/huggingface/diffusers/issues/12108 | open | [
"bug"
] | 2025-08-09T21:34:28Z | 2025-08-09T21:39:30Z | 0 | Vargol |
huggingface/transformers | 40,056 | Question: How to write a custom tokenizer from scratch | In this guide you introduced how to write a custom model and custom model configuration: [here](https://huggingface.co/docs/transformers/main/en/custom_models). In addition, I want to create a custom tokenizer from scratch. Why?
I have a multilevel transcription problem: the model takes an input utterance and output... | https://github.com/huggingface/transformers/issues/40056 | closed | [] | 2025-08-09T16:39:19Z | 2025-09-24T08:03:02Z | null | obadx |
huggingface/diffusers | 12,107 | accelerator.init_trackers error when try with a custom object such as list | ### Describe the bug
I set multiple prompts with nargs for argument "--validation_prompt " in "train_dreambooth.py":
` parser.add_argument(
"--validation_prompt",
type=str,
default=["A photo of sks dog in a bucket", "A sks cat wearing a coat"],
nargs="*",
help="A prompt that... | https://github.com/huggingface/diffusers/issues/12107 | open | [
"bug"
] | 2025-08-09T10:04:06Z | 2025-08-09T10:04:06Z | 0 | micklexqg |
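One workaround for this class of error (our assumption, not the library's own fix) is to flatten non-scalar config values to strings before handing them to `accelerator.init_trackers`, since tracker backends such as TensorBoard hparams only accept scalar values:

```python
def sanitize_tracker_config(config):
    """Convert values trackers typically can't log (lists, dicts, None)
    to strings, keeping int/float/str/bool scalars as-is."""
    clean = {}
    for k, v in config.items():
        if isinstance(v, (int, float, str, bool)):
            clean[k] = v
        else:
            clean[k] = str(v)
    return clean

args = {
    "validation_prompt": ["A photo of sks dog in a bucket", "A sks cat wearing a coat"],
    "learning_rate": 1e-4,
    "max_train_steps": 1000,
}
print(sanitize_tracker_config(args))
```

Passing `sanitize_tracker_config(vars(args))` instead of `vars(args)` keeps the list-valued `--validation_prompt` usable for validation while giving the trackers a config they can serialize.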
huggingface/diffusers | 12,104 | IndexError: index 0 is out of bounds for dimension 0 with size 0 | ### Describe the bug
When I test the mit-han-lab/nunchaku-flux.1-kontext-dev model, it runs normally in a non-concurrent scenario, but throws an error when I try to run it with concurrent requests.
My GPU is a single RTX 4090D.
How can I enable multi-concurrency support on a single GPU?
Thank you in advance for yo... | https://github.com/huggingface/diffusers/issues/12104 | closed | [
"bug"
] | 2025-08-08T09:20:52Z | 2025-08-17T22:22:37Z | 1 | liushiton |
huggingface/datasets | 7,729 | OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory | > Hi is there any solution for that eror i try to install this one
pip install torch==1.12.1+cpu torchaudio==0.12.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
This works fine, but please tell me how to install a PyTorch version that fits the GPU. | https://github.com/huggingface/datasets/issues/7729 | open | [] | 2025-08-07T14:07:23Z | 2025-09-24T02:17:15Z | 1 | SaleemMalikAI |
huggingface/transformers | 39,992 | [gpt-oss] Transform checkpoint from safetensors to state dict | Yesterday I was working on gpt-oss. However, loading the weights gave me trouble.
For models like Qwen, I did things like this:
1. Create model on meta device
2. FSDP2 shard it, so it can fit in memory
3. On each GPU, it reads weights from safetensors in a generator style, to save memory.
4. Chunk the weights and cop... | https://github.com/huggingface/transformers/issues/39992 | closed | [] | 2025-08-07T13:24:06Z | 2025-09-15T08:02:55Z | 1 | fingertap |
huggingface/diffusers | 12,094 | [Wan2.2] pipeline_wan miss the 'shift' parameter which used by Wan2.2-A14B-diffusers. | **Firstly, I found that the quality of output using diffusers is poor**
Later, I found that the pipeline_wan in diffusers[0.34.0] did not support two-stage processing. I noticed that the community had already updated it, so I installed diffusers[0.35.0-dev] from source and it worked.
Then I found that the scheduler... | https://github.com/huggingface/diffusers/issues/12094 | closed | [] | 2025-08-07T11:37:36Z | 2025-08-10T08:43:27Z | 7 | yvmilir |
huggingface/lerobot | 1,687 | When using AMP to train a model, why are the saved model weights still in fp32? | <img width="1668" height="95" alt="Image" src="https://github.com/user-attachments/assets/406a1879-f2f2-43c6-8341-8733873ee911" /> | https://github.com/huggingface/lerobot/issues/1687 | open | [
"question",
"policies"
] | 2025-08-06T12:42:40Z | 2025-08-12T08:52:00Z | null | Hukongtao |
huggingface/diffusers | 12,084 | Will `cosmos-transfer1` be supported in diffusers in the future? |
Hi @a-r-r-o-w and @yiyixuxu :)
First of all, thank you for recently enabling cosmos-predict1 models (text2world and video2world) in the diffusers library — it's super exciting to see them integrated!
I was wondering if there are any plans to also support [cosmos-transfer1](https://github.com/nvidia-cosmos/cosmos-tr... | https://github.com/huggingface/diffusers/issues/12084 | open | [] | 2025-08-06T11:22:28Z | 2025-08-19T12:11:33Z | 3 | rebel-shshin |
huggingface/lerobot | 1,683 | SmolVLMWithExpertModel | Excuse me, I would like to learn about each module in this class, and how its inputs should be defined. | https://github.com/huggingface/lerobot/issues/1683 | open | [
"question",
"policies"
] | 2025-08-06T10:30:21Z | 2025-08-12T08:52:21Z | null | xjushengjie |
huggingface/lerobot | 1,674 | How to train smolvla for multi-task | I have trained smolvla for aloha_sim_transfer_cube and aloha_sim_insertion, and smolvla performs well on each single task. Now I'd like to train smolvla for multi-task use: one model that can complete both tasks above. What should I do now? | https://github.com/huggingface/lerobot/issues/1674 | closed | [] | 2025-08-06T02:40:01Z | 2025-10-15T02:52:29Z | null | w673 |