repo stringclasses 147 values | number int64 1 172k | title stringlengths 2 476 | body stringlengths 0 5k | url stringlengths 39 70 | state stringclasses 2 values | labels listlengths 0 9 | created_at timestamp[ns, tz=UTC]date 2017-01-18 18:50:08 2026-01-06 07:33:18 | updated_at timestamp[ns, tz=UTC]date 2017-01-18 19:20:07 2026-01-06 08:03:39 | comments int64 0 58 | user stringlengths 2 28 |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/candle | 3,096 | [Question] Minimal documentation/example on including weights in compiled executable | Just what the title says: Is there a minimal code example on including weights in the compiled executable using include_bytes. Nervous to implement this without understanding best practices and end up with a suboptimal solution. | https://github.com/huggingface/candle/issues/3096 | closed | [] | 2025-09-24T02:47:28Z | 2025-10-07T04:49:26Z | 1 | bitanath |
huggingface/optimum-executorch | 149 | Add documentation for how to run each type of exported model on ExecuTorch | Blocked on runner / multimodal runner work in ExecuTorch | https://github.com/huggingface/optimum-executorch/issues/149 | open | [] | 2025-09-23T18:53:55Z | 2025-09-23T18:54:00Z | null | jackzhxng |
huggingface/safetensors | 653 | `get_slice` is slow because it uses `tensors()` method instead of `info()` | ### Feature request
Replace
```rust
self.metadata.tensors().get(name)
```
with
```rust
self.metadata.info(name)
```
in `get_slice` method
### Motivation
I noticed that the `get_slice` method of `Open` [does](https://github.com/huggingface/safetensors/blob/0816a1ae1d6b731cefd67f061d80d1cadd0dd7bb/bindings/python/src/lib.rs#L851)
```rust
self.metadata.tensors().get(name)
```
instead of
```rust
self.metadata.info(name)
```
like `get_tensor()` [does](https://github.com/huggingface/safetensors/blob/0816a1ae1d6b731cefd67f061d80d1cadd0dd7bb/bindings/python/src/lib.rs#L638) when retrieving `TensorInfo` by name.
Because of this, `get_slice` is much slower, since the `tensors()` method [reconstructs](https://github.com/huggingface/safetensors/blob/0816a1ae1d6b731cefd67f061d80d1cadd0dd7bb/safetensors/src/tensor.rs#L633) a new `HashMap` on each call.
Is there any particular reason for this approach? Would it be possible to replace it with `self.metadata.info(name)` to improve performance?
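To make the cost difference concrete, here is a self-contained Rust sketch (illustrative stand-in types, not the actual safetensors `Metadata`):

```rust
use std::collections::HashMap;

// Stand-in for the metadata object: `tensors()` rebuilds a fresh map on
// every call, while `info()` is a single lookup into the existing one.
struct Metadata {
    infos: HashMap<String, usize>, // name -> stand-in for TensorInfo
}

impl Metadata {
    // Like `tensors()`: clones every entry into a new HashMap per call.
    fn tensors(&self) -> HashMap<String, usize> {
        self.infos.iter().map(|(k, v)| (k.clone(), *v)).collect()
    }
    // Like `info()`: O(1) hash lookup, no allocation.
    fn info(&self, name: &str) -> Option<&usize> {
        self.infos.get(name)
    }
}

fn paths_agree() -> bool {
    let metadata = Metadata {
        infos: (0..10_000).map(|i| (format!("tensor_{}", i), i)).collect(),
    };
    // Slow path pays an O(n) rebuild just to read one entry.
    let slow = metadata.tensors().get("tensor_42").copied();
    // Fast path is a single hash lookup.
    let fast = metadata.info("tensor_42").copied();
    slow == fast && fast == Some(42)
}

fn main() {
    assert!(paths_agree());
    println!("both lookup paths return the same TensorInfo");
}
```

Both paths return the same result; only the per-call allocation differs.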
### Your contribution
I do not mind doing a PR | https://github.com/huggingface/safetensors/issues/653 | closed | [] | 2025-09-23T15:09:51Z | 2025-09-28T16:42:45Z | 1 | PgLoLo |
huggingface/diffusers | 12,375 | What kernels should we integrate in Diffusers? | Now that we have an [integration](https://github.com/huggingface/diffusers/pull/12236) with the `kernels` lib to use Flash Attention 3 (FA3), it'd be nice to gather community interest about which kernels we should try to incorporate in the library through the [`kernels` lib](https://github.com/huggingface/kernels/). FA3 delivers a significant speedup on Hopper GPUs.
I have done some work in the `kernelize` branch to see if replacing `GELU`, `SiLU`, and `RMSNorm` with their optimized kernels would have any speedups on Flux. So far, it hasn't had any. Benchmarking script: https://gist.github.com/sayakpaul/35236dd96e15d9f7d658a7ad11918411. One can compare the changes here: https://github.com/huggingface/diffusers/compare/kernelize?expand=1.
> [!NOTE]
> The changes in the `kernelize` branch are quite hacky as we're still evaluating things.
Please use this issue to let us know which kernels we should try to support in Diffusers. Some notes to keep in mind:
* Layers where the `forward()` method is easily replaceable with the `kernelize()` [mechanism](https://github.com/huggingface/kernels/blob/main/docs/source/layers.md#kernelizing-a-model) would be prioritized. A reference is here: https://github.com/huggingface/transformers/pull/38205.
* Even if a kernel isn't directly compatible with `kernels`, we can try to make it so, like we have for https://huggingface.co/kernels-community/flash-attn3.
* Not all kernels contribute non-trivial gains in terms of speedup. So, please bear that in mind when proposing a kernel.
Cc: @MekkCyber | https://github.com/huggingface/diffusers/issues/12375 | open | [
"performance"
] | 2025-09-23T09:03:13Z | 2025-09-30T06:56:39Z | 8 | sayakpaul |
huggingface/peft | 2,798 | Add stricter type checking in LoraConfig for support with HfArgumentParser | ### System Info
System Info
transformers version: 4.57.0.dev0
Platform: Linux-5.14.0-284.73.1.el9_2.x86_64-x86_64-with-glibc2.39
Python version: 3.12.3
Huggingface_hub version: 0.34.4
Safetensors version: 0.5.2
Accelerate version: 1.10.1
Accelerate config: not found
DeepSpeed version: not installed
PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using distributed or parallel set-up in script?: No
Using GPU in script?: No
GPU type: NVIDIA A100-SXM4-80GB
peft version: 0.17.1
### Who can help?
@benjaminbossan @githubnemo
### Reproduction
```
from peft import LoraConfig
from transformers import HfArgumentParser
p = HfArgumentParser(dataclass_types=LoraConfig) # fails
```
### Expected behavior
I would expect LoraConfig to be supported by HfArgumentParser.
As I understand, this fails because HfArgumentParser does not support fields of type `Optional[Union[list[str], str]]`.
I had raised this in transformers as well, please refer [here](https://github.com/huggingface/transformers/issues/40915).
Can we add stricter type checking for such fields so it can be easily integrated with other libraries and argument parsers? | https://github.com/huggingface/peft/issues/2798 | closed | [] | 2025-09-23T05:19:34Z | 2025-09-23T12:37:47Z | 3 | romitjain |
huggingface/lerobot | 1,995 | Questions about SmolVLA design | Hi! I am looking into the details of SmolVLA implementation, and got some questions.
I wonder the following points are necessary, or beneficial for the performance.
1.
https://github.com/huggingface/lerobot/blob/f7283193ea9ae932423e3a1e27524a27fa5c0fe5/src/lerobot/policies/smolvla/smolvlm_with_expert.py#L354C63-L354C74
In the cross-attention layer, the VLM keys and values are linearly projected before the attention interface.
They already have a compatible shape without the projection, and ROPE is not applied after the projection (although ROPE is applied in the VLM part, the interaction between the ROPE'd queries and the projected keys might no longer behave as a rotation?)
2.
https://github.com/huggingface/lerobot/blob/f7283193ea9ae932423e3a1e27524a27fa5c0fe5/src/lerobot/policies/smolvla/modeling_smolvla.py#L566
https://github.com/huggingface/lerobot/blob/f7283193ea9ae932423e3a1e27524a27fa5c0fe5/src/lerobot/policies/smolvla/modeling_smolvla.py#L592C1-L593C1
image and text embeddings are multiplied by `sqrt(dim)` before they are fed to the llm and expert layers.
I could not find the same multiplication in SmolVLM modeling (https://github.com/huggingface/transformers/blob/main/src/transformers/models/smolvlm/modeling_smolvlm.py)
I guess that this multiplication might change the distribution of image-text features.
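On point 2, here is a tiny stdlib-only illustration of what a `sqrt(dim)` multiplier does to feature magnitude (illustrative numbers, not SmolVLA's actual activations); whether the downstream layers expect that scale is exactly the question:

```python
import math
import random

dim = 16
random.seed(0)
emb = [random.gauss(0, 1) for _ in range(dim)]

def rms(v):
    # Root-mean-square magnitude of a vector.
    return math.sqrt(sum(x * x for x in v) / len(v))

# Multiplying by sqrt(dim) before the llm/expert layers scales every
# feature, so the overall magnitude grows by a factor of sqrt(dim).
scaled = [x * math.sqrt(dim) for x in emb]
print(f"rms before: {rms(emb):.3f}, after: {rms(scaled):.3f}")
```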
3.
SmolVLM and SmolVLA are trained with different ROPE max frequency.
It seems like SmolVLM is trained with 100_000, and SmolVLA is trained with 10_000.
4.
It seems like SmolVLM uses causal mask for all LLM layers. (no bidirectional attention for images)
SmolVLA uses similar mask with PI0 (paligemma).
| https://github.com/huggingface/lerobot/issues/1995 | open | [
"question",
"policies"
] | 2025-09-22T11:53:01Z | 2025-10-17T01:58:12Z | null | gliese581gg |
huggingface/lerobot | 1,994 | How to improve success rate and generalization | Hi, I have a question regarding the success rate: if I ensure the object appears in the frame of the wrist camera at the beginning of dataset collection/inference, will this lead to a higher success rate for the pick-and-place task?
In my initial attempt, the object appeared in the side-view camera but did not appear in the wrist camera at the beginning of dataset collection/inference.
**Should I ensure the object appears in both the side-view camera and the wrist camera at the start of the program?** | https://github.com/huggingface/lerobot/issues/1994 | closed | [
"question",
"policies"
] | 2025-09-22T09:55:53Z | 2025-09-23T09:26:16Z | null | Liu9999ai |
huggingface/smol-course | 248 | [QUESTION] About applying chat template for base model via `clone_chat_template` from trl | In the course [Supervised Fine-Tuning](https://huggingface.co/learn/smol-course/unit1/3), the author uses the base model `HuggingFaceTB/SmolLM3-3B-Base`, but I chose `HuggingFaceTB/SmolLM2-135M` because it is lighter. However, I found that the base model `SmolLM2-135M` does not have its own chat template but already has special tokens. However, the special tokens may be incorrect; for example, bos_token and eos_token share the same token `<|endoftext|>`
<img width="654" height="305" alt="Image" src="https://github.com/user-attachments/assets/87a4cea8-c372-4540-b617-9c41825f5a7e" />
I also refer to course [LLM Course, Fine-Tuning with SFTTrainer](https://huggingface.co/learn/llm-course/en/chapter11/3?fw=pt#implementation-with-trl) and author uses `setup_chat_format` to create the chat template for base model's tokenizer which does not have its own chat template
However, [`setup_chat_format`](https://github.com/huggingface/trl/blob/86f74b486fda475e5530a451d06b835361d959ac/trl/models/utils.py#L87) only supports `chatml` format and will be deprecated in trl version 0.26.0. That is why I use [`clone_chat_template`](https://github.com/huggingface/trl/blob/86f74b486fda475e5530a451d06b835361d959ac/trl/models/utils.py#L165) instead.
But another issue appears here: while `clone_chat_template` only overwrites eos from source tokenizer to target tokenizer, the `setup_chat_format` overwrites all bos, eos, and pad tokens. After I try to clone `Llama-3.2-Instruct`'s chat template, only eos changes to `<|eot_id|>`
`model, tokenizer, added_tokens = clone_chat_template(model=model, tokenizer=tokenizer, source_tokenizer_path='meta-llama/Llama-3.2-1B-Instruct')`
<img width="633" height="186" alt="Image" src="https://github.com/user-attachments/assets/4428af4d-b8d8-4974-893f-af4033d516ed" />
Questions:
1. Why does the base model's tokenizer already have special tokens even though it has no chat template?
2. `clone_chat_template` does not overwrite all special tokens (bos, eos, pad, ...), so does this impact SFT training, and what is the solution?
I am new to SFT and would appreciate any support. Thank you.
| https://github.com/huggingface/smol-course/issues/248 | open | [
"question"
] | 2025-09-22T03:03:56Z | 2025-09-22T19:13:17Z | null | binhere |
huggingface/transformers.js | 1,419 | Why is `token-classification` with T5 not available? (`T5ForTokenClassification`) | ### Question
In Python `transformers` I can do:
```python
model = AutoModelForTokenClassification.from_pretrained("google-t5/t5-base")
```
and use it with `Trainer` to train it (quite successfully).
Or
```python
classifier = pipeline("token-classification", model="google-t5/t5-base")
```
and use it for token classification.
Instead, if I try to use it in `transformers.js` (web, 3.7.3):
```js
classifier = await pipeline('token-classification', "google-t5/t5-base")
```
I receive this error:
```
Unsupported model type: t5
```
How come? Or is there another way to use T5 for token classification in JavaScript?
| https://github.com/huggingface/transformers.js/issues/1419 | open | [
"question"
] | 2025-09-21T23:30:22Z | 2025-09-24T21:42:56Z | null | debevv |
huggingface/transformers.js | 1,418 | EmbeddingGemma usage | ### Question
I'm new to transformers.js
I want to use embeddinggemma in my web app and I've looked at the usage example at this link:
https://huggingface.co/blog/embeddinggemma#transformersjs
At the same time I've seen different code, using pipeline, regarding embeddings:
https://huggingface.co/docs/transformers.js/api/pipelines#pipelinesfeatureextractionpipeline
I'm trying to create a custom pipeline, and in TypeScript I'm building the pipeline like
```ts
class EmbeddingPipeline {
private static instance: Promise<FeatureExtractionPipeline> | null = null;
private static model = 'onnx-community/embeddinggemma-300m-ONNX';
private static readonly task = 'feature-extraction';
// Detected device (default wasm)
private static device: 'webgpu' | 'wasm' = 'wasm';
private static deviceInitPromise: Promise<void> | null = null;
private static async detectDeviceOnce(): Promise<void> {
if (this.deviceInitPromise) return this.deviceInitPromise;
this.deviceInitPromise = (async () => {
if (typeof navigator !== 'undefined' && 'gpu' in navigator) {
try {
const adapter = await (navigator as any).gpu.requestAdapter();
if (adapter) {
this.device = 'webgpu';
return;
}
} catch {
// ignore, fallback to wasm
}
}
this.device = 'wasm';
})();
return this.deviceInitPromise;
}
static getSelectedDevice(): 'webgpu' | 'wasm' {
return this.device;
}
static async getInstance(progress_callback?: ProgressCallback): Promise<FeatureExtractionPipeline> {
if (this.instance) return this.instance;
// Detect the device only once
await this.detectDeviceOnce();
const build = async (device: 'webgpu' | 'wasm') =>
pipeline(
this.task,
this.model,
{
progress_callback,
dtype: 'q8',
device
}
) as Promise<FeatureExtractionPipeline>;
this.instance = (async (): Promise<FeatureExtractionPipeline> => {
try {
return await build(this.device);
} catch (e) {
if (this.device === 'webgpu') {
// Automatic fallback to wasm
this.device = 'wasm';
return await build('wasm');
}
throw e;
}
})();
return this.instance;
}
}
const getEmbeddingDevice = () => EmbeddingPipeline.getSelectedDevice();
const embedding_prefixes_per_task: Record<EmbeddingTask, string> = {
'query': "task: search result | query: ",
'document': "title: none | text: ",
};
export type EmbeddingTask = 'query' | 'document';
export const getEmbedding = async (task: EmbeddingTask, text: string): Promise<Float32Array> => {
const extractor = await EmbeddingPipeline.getInstance();
const prefix = embedding_prefixes_per_task[task];
const result = await extractor(`${prefix}${text}`, { pooling: 'mean', normalize: true });
return result.data as Float32Array;
};
```
I'm using the same sentences (with prefixes) used by your example (I'm running both my class and your code to check whether they match), and the embedding results are different.
What am I doing wrong? Do you have a pointer to docs that properly explain how this works?
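One sanity check worth doing before assuming a bug: compare the two outputs by cosine similarity rather than element-wise equality, since tiny float differences (e.g. from webgpu vs wasm) are expected. A plain TypeScript helper with no library assumptions:

```typescript
// Cosine similarity between two embedding vectors; values very close to 1
// mean the two pipelines produced effectively the same embedding.
function cosineSimilarity(a: Float32Array | number[], b: Float32Array | number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

console.log(cosineSimilarity([1, 0, 1], [1, 0, 1])); // 1 (identical direction)
```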
Thanks | https://github.com/huggingface/transformers.js/issues/1418 | open | [
"question",
"v4"
] | 2025-09-21T10:26:22Z | 2025-11-08T15:33:16Z | null | MithrilMan |
huggingface/diffusers | 12,359 | Chroma pipeline documentation bug regarding the `guidance_scale` parameter | ### Describe the bug
From my understanding, Chroma is a retrained and dedistilled version of the Flux architecture, so it uses true CFG, unlike Flux. I can indeed confirm that this is true by tracing through the source code.
However, currently the documentation for the `guidance_scale` parameter in the `ChromaPipeline.__call__()` method mentions otherwise, presumably because it was copied over from the `FluxPipeline` documentation.
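For background, true CFG combines two forward passes per step, while Flux-style embedded guidance feeds the scale into a single pass. A toy stdlib-only illustration of the true-CFG combination (illustrative numbers only):

```python
def true_cfg(noise_uncond: float, noise_cond: float, guidance_scale: float) -> float:
    # Classifier-free guidance: extrapolate from the unconditional
    # prediction toward the conditional one.
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)

# With guidance_scale = 1.0 this collapses to the conditional prediction,
# which is why CFG is effectively "off" at 1.
print(true_cfg(0.25, 0.75, 1.0))  # 0.75
print(true_cfg(0.25, 0.75, 4.0))  # 2.25
```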
### Reproduction
The current documentation for the `guidance_scale` parameter in the `ChromaPipeline.__call__()` method:
```python
'''
guidance_scale (float, optional, defaults to 3.5) — Embedded guidance scale is enabled by setting guidance_scale > 1. Higher guidance_scale encourages a model to generate images more aligned with prompt at the expense of lower image quality.
Guidance-distilled models approximate true classifier-free guidance for guidance_scale > 1. Refer to the [paper](https://huggingface.co/papers/2210.03142) to learn more.
'''
```
### Logs
```shell
```
### System Info
- 🤗 Diffusers version: 0.36.0.dev0
- Platform: Windows-10-10.0.26100-SP0
- Running on Google Colab?: No
- Python version: 3.11.9
- PyTorch version (GPU?): 2.7.1+cu128 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.34.4
- Transformers version: 4.55.0
- Accelerate version: 1.10.0
- PEFT version: 0.17.0
- Bitsandbytes version: 0.47.0
- Safetensors version: 0.6.2
- xFormers version: 0.0.31.post1
- Accelerator: NVIDIA GeForce RTX 4090, 24564 MiB
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@stevhliu | https://github.com/huggingface/diffusers/issues/12359 | closed | [
"bug"
] | 2025-09-21T08:34:15Z | 2025-09-22T20:04:15Z | 1 | mingyi456 |
huggingface/trl | 4,110 | How does `trl` know what part of dataset is prompt and completion in the following situation? | ### Reproduction
```python
import torch
import trl as r
import peft as p
import datasets as d
import accelerate as a
import transformers as t
allowed_entities = ['AGE', 'EYECOLOR', 'GENDER', 'HEIGHT', 'WEIGHT', 'SEX']
entity_mapping = {
"ACCOUNTNAME": "account_name",
"ACCOUNTNUMBER": "account_number",
"AGE": "age",
"AMOUNT": "amount",
"BIC": "bic",
"BITCOINADDRESS": "bitcoin_address",
"BUILDINGNUMBER": "building_number",
"CITY": "city",
"COMPANYNAME": "company_name",
"COUNTY": "county",
"CREDITCARDCVV": "credit_card_cvv",
"CREDITCARDISSUER": "credit_card_issuer",
"CREDITCARDNUMBER": "credit_card_number",
"CURRENCY": "currency",
"CURRENCYCODE": "currency_code",
"CURRENCYNAME": "currency_name",
"CURRENCYSYMBOL": "currency_symbol",
"DATE": "date",
"DOB": "dob",
"EMAIL": "email",
"ETHEREUMADDRESS": "ethereum_address",
"EYECOLOR": "eye_color",
"FIRSTNAME": "first_name",
"GENDER": "gender",
"HEIGHT": "height",
"IBAN": "iban",
"IP": "ip",
"IPV4": "ipv4",
"IPV6": "ipv6",
"JOBAREA": "job_area",
"JOBTITLE": "job_title",
"JOBTYPE": "job_type",
"LASTNAME": "last_name",
"LITECOINADDRESS": "litecoin_address",
"MAC": "mac",
"MASKEDNUMBER": "masked_number",
"MIDDLENAME": "middle_name",
"NEARBYGPSCOORDINATE": "nearby_gps_coordinate",
"ORDINALDIRECTION": "ordinal_direction",
"PASSWORD": "password",
"PHONEIMEI": "phone_imei",
"PHONENUMBER": "phone_number",
"PIN": "pin",
"PREFIX": "prefix",
"SECONDARYADDRESS": "secondary_address",
"SEX": "sex",
"SSN": "ssn",
"STATE": "state",
"STREET": "street",
"TIME": "time",
"URL": "url",
"USERAGENT": "user_agent",
"USERNAME": "username",
"VEHICLEVIN": "vehicle_vin",
"VEHICLEVRM": "vehicle_vrm",
"ZIPCODE": "zip_code"
}
def formatting_function(x):
entities = []
for entity in x['privacy_mask']:
if entity['label'] not in allowed_entities:
entities.append({'value': entity['value'], 'label': entity_mapping[entity['label']]})
prompt = f"Extract all the personal information from the following text and classify it: {x['source_text']}"
completion = str(entities)
return {"text": f"### PROMPT\n{prompt}\n\n### COMPLETION\n{completion}"}
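# Editor's aside (hedged sketch, not part of the original report): as I
# understand TRL, `completion_only_loss` can only take effect when the
# dataset exposes explicit "prompt"/"completion" columns (TRL's
# prompt-completion format); a single fused "text" column is treated as
# plain language-modeling text, marker strings included. A hypothetical
# variant of formatting_function that emits those columns instead:
def formatting_function_split(x, allowed_entities, entity_mapping):
    entities = [
        {'value': e['value'], 'label': entity_mapping[e['label']]}
        for e in x['privacy_mask']
        if e['label'] not in allowed_entities
    ]
    prompt = f"Extract all the personal information from the following text and classify it: {x['source_text']}"
    return {"prompt": prompt, "completion": str(entities)}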
def main():
model_name = "Qwen/Qwen3-0.6B"
dataset_name = "ai4privacy/pii-masking-200k"
quantization = False
quantization_bits = "8"
lora = True
lora_rank = 8
lora_alpha = 16
lora_dropout = 0.05
use_mixed_precision = True
# Training parameters
completion_only_loss = True
output_dir = f"/scratch/bminesh-shah/phi-ner/{model_name.replace('/', '-')}_pii_finetuned_prompt_completion"
learning_rate = 1e-4
num_train_epochs = 10
per_device_train_batch_size = 2
gradient_accumulation_steps = 8
accelerator = a.Accelerator()
dataset = d.load_dataset(dataset_name)
dataset = dataset.filter(lambda x: x['language'] == 'en')
dataset = dataset.remove_columns(['target_text', 'span_labels', 'mbert_text_tokens', 'mbert_bio_labels', 'id', 'language', 'set'])
dataset = dataset['train']
dataset = dataset.train_test_split(test_size=0.2, seed=24, shuffle=True)
print(dataset)
if accelerator.is_main_process:
dataset = dataset.map(formatting_function, remove_columns=['source_text', 'privacy_mask'])
print(dataset)
print(dataset['train'][0])
tokenizer = t.AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
bnb_config = None
if quantization and quantization_bits == "4":
bnb_config = t.BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True)
elif quantization and quantization_bits == "8":
bnb_config = t.BitsAndBytesConfig(load_in_8bit=True)
model = t.AutoModelForCausalLM.from_pretrained(
model_name,
quantization_config=bnb_config,
device_map={"": accelerator.process_index},
dtype=torch.bfloat16 if use_mixed_precision else torch.float32,
trust_remote_code=True
)
if quantization:
model = p.prepare_model_for_kbit_training(model)
model.config.use_cache = False
model.config.pretraining_tp = 1
model.config.pad_token_id = model.config.eos_token_id
if lora:
lora_config = p.LoraConfig(r=lora_rank, lora_alpha=lora_alpha, lora_dropout=lora_dropout, bias="none", task_type="CAUSAL_LM")
model = p.get_peft_model(model, lora_config)
model.train()
sft_config = r.SFTConfig(
learning_rate=learning_rate,
num_train_epochs=num_train_epochs,
per_device_train_batch_size=per_device_train_batch_size,
gradient_accumulation_steps=gradient_accumulation_steps,
output_dir=output_dir,
eval_s | https://github.com/huggingface/trl/issues/4110 | closed | [
"🐛 bug",
"📚 documentation"
] | 2025-09-19T17:42:26Z | 2025-09-19T20:02:16Z | null | bminesh-shah |
huggingface/transformers | 41,005 | Do we have an official Qwen3VL model published by Alibaba? | ### Model description
Reference - https://huggingface.co/docs/transformers/main/en/model_doc/qwen3_vl#transformers.Qwen3VLForConditionalGeneration
If not, when can we expect it? Any guess? | https://github.com/huggingface/transformers/issues/41005 | closed | [
"New model"
] | 2025-09-19T13:59:34Z | 2025-09-20T10:00:04Z | 1 | Dineshkumar-Anandan-ZS0367 |
huggingface/transformers | 40,993 | HfArgumentParser cannot parse TRL Config | ### System Info
transformers==4.56.1
trl==0.17.0
I used to run the code below
```python
from transformers import HfArgumentParser
from trl import (
ScriptArguments, ModelConfig, SFTConfig
)
parser = HfArgumentParser((ScriptArguments, SFTConfig, ModelConfig))
script_arguments, trainer_config, model_config = parser.parse_args_into_dataclasses()
```
to parse training args, but after updating transformers to 4.56, it does not work:
```
Traceback (most recent call last):
File "D:\mytest.py", line 5, in <module>
parser = HfArgumentParser((ScriptArguments, SFTConfig, ModelConfig))
File "E:\Anaconda3\envs\myopenai\lib\site-packages\transformers\hf_argparser.py", line 143, in __init__
self._add_dataclass_arguments(dtype)
File "E:\Anaconda3\envs\myopenai\lib\site-packages\transformers\hf_argparser.py", line 260, in _add_dataclass_arguments
raise RuntimeError(
RuntimeError: Type resolution failed for <class 'trl.trainer.sft_config.SFTConfig'>. Try declaring the class in global scope or removing line of `from __future__ import annotations` which opts in Postponed Evaluation of Annotations (PEP 563)
```
How to fix it?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Run
```python
from transformers import HfArgumentParser
from trl import (
ScriptArguments, ModelConfig, SFTConfig
)
parser = HfArgumentParser((ScriptArguments, SFTConfig, ModelConfig))
script_arguments, trainer_config, model_config = parser.parse_args_into_dataclasses()
```
### Expected behavior
It should work. | https://github.com/huggingface/transformers/issues/40993 | closed | [
"bug"
] | 2025-09-19T08:29:48Z | 2025-09-19T09:06:20Z | 5 | caoyang-sufe |
huggingface/lerobot | 1,978 | Is there a best-fit model for each sim env? | I tried to train diffusion, smolvla, and even pi0 on aloha with 200k steps, and found that they all perform much worse (less than a 10% success rate) than the ACT policy. Why? Does each env task have a best-fit policy, or is there a problem with my training strategy? | https://github.com/huggingface/lerobot/issues/1978 | closed | [
"question",
"policies",
"simulation"
] | 2025-09-19T02:45:14Z | 2025-10-17T11:25:27Z | null | shs822 |
huggingface/accelerate | 3,784 | AttributeError: 'Accelerator' object has no attribute 'deepspeed_config'. Did you mean: 'deepspeed_plugin'? | ### System Info
```Shell
- Name: accelerate Version: 1.10.1
- Name: transformers Version: 4.54.0
- Name: deepspeed Version: 0.17.5
- Name: torch Version: 2.8.0
- Name: wandb Version: 0.21.4
```
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [x] My own task or dataset (give details below)
### Reproduction
This is a deepspeed stage 2 config which is in json:
```
json = {
"fp16": {
"enabled": false,
"auto_cast": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": true
},
"amp": {
"enabled": false
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 0.0003,
"betas": [0.9, 0.999],
"eps": 1e-08,
"weight_decay": 0.001
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 0.0003,
"warmup_num_steps": 0
}
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 5.000000e+08,
"overlap_comm": false,
"reduce_scatter": true,
"reduce_bucket_size": 9.000000e+05,
"contiguous_gradients": true,
"use_multi_rank_bucket_allreduce": false
},
"zero_state": 2,
"gradient_accumulation_steps": 1,
"gradient_clipping": 1,
"train_micro_batch_size_per_gpu": 4,
"mixed_precision": "bf16",
"communication_data_type": "bf16",
"steps_per_print": inf
}
```
I use `accelerate` to spin up 8 workers on an AWS EC2 instance:
```bash
accelerate launch --config_file configs/deepspeed.yaml scripts/main.py
```
The following error is raised when the `trainer` runs `train`:
```
File "/home/ubuntu/llm-classifiier/scripts/main.py", line 88, in <module>
train_qwen_any(cli_args, run_args)
File "/home/ubuntu/llm-classifiier/scripts/train_qwen.py", line 138, in train_qwen_any
trainer.train()
File "/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/transformers/trainer.py", line 2237, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/transformers/trainer.py", line 2758, in _inner_training_loop
self.control = self.callback_handler.on_train_end(args, self.state, self.control)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/transformers/trainer_callback.py", line 509, in on_train_end
return self.call_event("on_train_end", args, state, control)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/transformers/trainer_callback.py", line 556, in call_event
result = getattr(callback, event)(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/transformers/integrations/integration_utils.py", line 958, in on_train_end
fake_trainer.save_model(temp_dir)
File "/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/transformers/trainer.py", line 3965, in save_model
state_dict = self.accelerator.get_state_dict(self.deepspeed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/accelerate/accelerator.py", line 3903, in get_state_dict
zero3_sharding = self.deepspeed_config["zero_optimization"]["stage"] == 3
^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'Accelerator' object has no attribute 'deepspeed_config'. Did you mean: 'deepspeed_plugin'?
```
I am not using zero3 sharding, so I don't know why this is an issue at all!
My deepspeed.yaml looks like this
```
compute_environment: LOCAL_MACHINE
debug: true
deepspeed_config:
deepspeed_config_file: configs/deepspeed_stg2.json
distributed_type: DEEPSPEED
enable_cpu_affinity: false
machine_rank: 0
main_training_function: main
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
And the actual json file is above.
Because of this I cannot save my models or state_dicts.
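For what it's worth, a stdlib-only sketch of the guard I would expect `get_state_dict` to use (toy classes; attribute names mirror the traceback, and this is a guess at a fix, not accelerate's actual code):

```python
class AcceleratorSketch:
    def __init__(self, deepspeed_plugin=None):
        self.deepspeed_plugin = deepspeed_plugin

    def is_zero3(self):
        # Read the stage from the plugin's config instead of assuming a
        # `deepspeed_config` attribute exists on the accelerator itself.
        plugin = getattr(self, "deepspeed_plugin", None)
        if plugin is None:
            return False
        cfg = getattr(plugin, "deepspeed_config", {}) or {}
        return cfg.get("zero_optimization", {}).get("stage") == 3

class PluginSketch:
    def __init__(self, cfg):
        self.deepspeed_config = cfg

acc = AcceleratorSketch(PluginSketch({"zero_optimization": {"stage": 2}}))
print(acc.is_zero3())  # False: stage 2, so no zero3 sharding involved
```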
### Expected behavior
Unless I am missing something profound, this really shouldn't be happening. | https://github.com/huggingface/accelerate/issues/3784 | closed | [] | 2025-09-18T17:07:54Z | 2025-10-27T15:08:19Z | 1 | alexge233 |
huggingface/lerobot | 1,969 | How to record a multi-task dataset on SO101? | I found that I can only use `dataset.single_task` to record, but I need to record a dataset containing more than 3 tasks. How can I solve this? | https://github.com/huggingface/lerobot/issues/1969 | closed | [] | 2025-09-18T10:18:00Z | 2025-09-21T02:50:59Z | null | Temmp1e |
huggingface/lerobot | 1,966 | SO101FollowerEndEffector? | I am trying to get inverse kinematics to work on my SO-101, and I found SO100FollowerEndEffector but there is no SO101FollowerEndEffector?
I suspect they are interchangeable, but when I use SO100FollowerEndEffector on my SO-101, it wants me to recalibrate it, so I just want to make sure before I break anything. | https://github.com/huggingface/lerobot/issues/1966 | open | [
"question",
"robots"
] | 2025-09-17T23:56:38Z | 2025-10-30T08:56:22Z | null | cashlo |
huggingface/lighteval | 970 | How to use a configuration file? | The documentation references using configuration YAML files, e.g. [here](https://huggingface.co/docs/lighteval/main/en/use-litellm-as-backend), but it doesn't give the name of the file or which option passes the config to lighteval. I tried creating a `config.yaml` / `config.yml` in the current directory and tried a `--config` option (it doesn't exist). | https://github.com/huggingface/lighteval/issues/970 | closed | [] | 2025-09-16T20:13:48Z | 2025-09-24T22:08:32Z | null | oluwandabira |
huggingface/transformers | 40,915 | HfArgumentParser does not support peft.LoraConfig | ### System Info
- `transformers` version: 4.57.0.dev0
- Platform: Linux-5.14.0-284.73.1.el9_2.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: No
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
@ydshieh (I am not really sure who to tag here)
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from peft import LoraConfig # v0.17.1
from transformers import HfArgumentParser # Built from source
p = HfArgumentParser(dataclass_types=LoraConfig) # fails
```
### Expected behavior
I would expect LoraConfig to be supported by HfArgumentParser.
As I understand, this fails because HfArgumentParser does not support fields of type `Optional[Union[list[str], str]]`.
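To make the failure concrete, here is a stdlib-only sketch of why such a union is awkward for an argparse-based parser (`DemoConfig` is a hypothetical stand-in, not the real LoraConfig):

```python
import typing
from dataclasses import dataclass, fields

@dataclass
class DemoConfig:
    # A field typed like LoraConfig's target_modules.
    target_modules: typing.Optional[typing.Union[typing.List[str], str]] = None

(fld,) = fields(DemoConfig)
args = typing.get_args(fld.type)
print(args)
# argparse wants a single `type=` callable per option; with both
# List[str] and str in the union there is no unambiguous choice, which
# is why HfArgumentParser rejects the field.
```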
Is there a plan to support such fields? | https://github.com/huggingface/transformers/issues/40915 | closed | [
"bug"
] | 2025-09-16T16:23:56Z | 2025-09-23T05:16:14Z | 5 | romitjain |
huggingface/diffusers | 12,338 | `AutoencoderDC` bug with `pipe.enable_vae_slicing()` and decoding multiple images | ### Describe the bug
When using the Sana_Sprint_1.6B_1024px and the SANA1.5_4.8B_1024px models, I cannot enable VAE slicing when generating multiple images. I guess this issue will affect the rest of the Sana model and pipeline configurations because they all use the same `AutoencoderDC` model.
I traced the issue to the following [line of code](https://github.com/huggingface/diffusers/blob/751e250f70cf446ae342c8a860d92f6a8b78261a/src/diffusers/models/autoencoders/autoencoder_dc.py#L620), and if I remove the `.sample` part the issue seems to be fixed.
I intend to submit a PR for my proposed fix. Can I confirm that this is supposed to be the correct solution?
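To illustrate the failure mode with plain-Python stand-ins (hypothetical classes, not the real diffusers types): `_decode` here already returns the raw value, so tacking `.sample` onto it raises the same `AttributeError`, while using the return value directly works:

```python
# Stand-in sketch of the bug: `_decode` returns the raw "tensor" (a list
# here), so there is no `.sample` attribute left to access.

def _decode(z):
    # pretend decode: double every value and return the raw result
    return [v * 2 for v in z]

slices = [[1, 2], [3, 4]]

# buggy pattern (analogous to `self._decode(z_slice).sample`)
try:
    decoded = [_decode(s).sample for s in slices]
except AttributeError as err:
    print("AttributeError:", err)

# proposed fix: drop `.sample` and use the return value directly
decoded = [_decode(s) for s in slices]
print(decoded)  # [[2, 4], [6, 8]]
```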
### Reproduction
```python
from diffusers import SanaSprintPipeline
import torch
pipe = SanaSprintPipeline.from_pretrained("Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers", torch_dtype=torch.bfloat16)
pipe.to("cuda")
pipe.enable_vae_slicing()
prompt = "A girl"
num_images_per_prompt = 8
output = pipe(
prompt=prompt,
height=1024,
width=1024,
num_inference_steps=2,
num_images_per_prompt=num_images_per_prompt,
intermediate_timesteps=1.3,
max_timesteps=1.56830,
timesteps=None
).images
```
### Logs
```shell
Traceback (most recent call last):
File "F:\AI setups\Diffusers\scripts\inference sana-sprint.py", line 24, in <module>
output = pipe(
^^^^^
File "F:\AI setups\Diffusers\diffusers-venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "F:\AI setups\Diffusers\diffusers-venv\Lib\site-packages\diffusers\pipelines\sana\pipeline_sana_sprint.py", line 874, in __call__
image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\AI setups\Diffusers\diffusers-venv\Lib\site-packages\diffusers\utils\accelerate_utils.py", line 46, in wrapper
return method(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\AI setups\Diffusers\diffusers-venv\Lib\site-packages\diffusers\models\autoencoders\autoencoder_dc.py", line 620, in decode
decoded_slices = [self._decode(z_slice).sample for z_slice in z.split(1)]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\AI setups\Diffusers\diffusers-venv\Lib\site-packages\diffusers\models\autoencoders\autoencoder_dc.py", line 620, in <listcomp>
decoded_slices = [self._decode(z_slice).sample for z_slice in z.split(1)]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'Tensor' object has no attribute 'sample'
```
### System Info
- ๐ค Diffusers version: 0.36.0.dev0
- Platform: Windows-10-10.0.26100-SP0
- Running on Google Colab?: No
- Python version: 3.11.9
- PyTorch version (GPU?): 2.7.1+cu128 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.34.4
- Transformers version: 4.55.0
- Accelerate version: 1.10.0
- PEFT version: 0.17.0
- Bitsandbytes version: 0.47.0
- Safetensors version: 0.6.2
- xFormers version: 0.0.31.post1
- Accelerator: NVIDIA GeForce RTX 4090, 24564 MiB
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@yiyixuxu @DN6 | https://github.com/huggingface/diffusers/issues/12338 | closed | [
"bug"
] | 2025-09-16T12:23:29Z | 2025-09-22T06:55:35Z | 0 | mingyi456 |
huggingface/optimum | 2,355 | Support exporting text-ranking for BERT models | ### Feature request
Currently, `optimum-cli export onnx --model cross-encoder/ms-marco-MiniLM-L-12-v2 cross-encoder--ms-marco-MiniLM-L-12-v2-onnx` says:
```
ValueError: Asked to export a bert model for the task text-ranking (auto-detected), but the Optimum ONNX exporter only supports the tasks feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification for bert. Please use a supported task. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the task text-ranking to be supported in the ONNX export for bert.
```
### Motivation
I'm working on a tool that I intend to distribute to others, for example via `brew install`. It's difficult to package and ship Python, and I also want to prioritize the speed of many filesystem and related operations, so I'm writing in Rust, using candle.
It can be a lot of work to implement every single model type by hand in candle. candle-transformers doesn't implement BertForSequenceClassification. Moreover, as model architectures change, I don't want to have to implement each one. It's great to be able to have the entire computation graph stored as data, as in ONNX.
### Your contribution
I'm willing to take a stab at this! If you think it would be helpful, and if you could give a couple pointers how to start! | https://github.com/huggingface/optimum/issues/2355 | closed | [
"Stale"
] | 2025-09-15T21:23:35Z | 2025-10-21T02:10:29Z | 1 | kshitijl |
huggingface/lerobot | 1,923 | Deploying SmolVLA with a simulator | Has anyone been able to deploy the SmolVLA model to control say the SO-100 on a simulator like IsaacSim?
Even if the fine-tuning reliably converges, the observed performance on the simulator seems erratic. Do we apply the predicted actions from SmolVLA directly to the Articulation controller as positions?
"question",
"policies",
"simulation"
] | 2025-09-12T21:06:40Z | 2025-12-11T22:07:02Z | null | aditya1709 |
huggingface/swift-transformers | 237 | Please help. Seeing issues with Hub when integrating | Hello, I'm trying to integrate WhisperKit via https://github.com/argmaxinc/WhisperKit/blob/main/Package.swift but that seems to bring in [swift-transformers](https://github.com/huggingface/swift-transformers) and Hub. I'm seeing issues as below
Hub.package.swiftinterface:34:32: warning: 'BinaryDistinctCharacter' is not a member type of struct 'Hub.Hub'
23:54:09 32 | public init(_ str: Foundation.NSString)
23:54:09 33 | public init(_ str: Swift.String)
23:54:09 34 | public init(_ character: Hub.BinaryDistinctCharacter)
23:54:09 | `- warning: 'BinaryDistinctCharacter' is not a member type of struct 'Hub.Hub'
23:54:09 35 | public init(_ characters: [Hub.BinaryDistinctCharacter])
23:54:09 36 | public init(stringLiteral value: Swift.String)
I'm on Xcode 16.4 and using Swift 5.10. Please help!! Thanks in advance!
"question"
] | 2025-09-12T17:06:28Z | 2025-09-17T15:36:52Z | null | rpatnayakuni22 |
huggingface/transformers | 40,815 | get_decoder feature regression in 4.56.0 | ### System Info
In the release of transformers v4.56.0, this PR https://github.com/huggingface/transformers/pull/39509 introduced a refactor of the public `get_decoder` method, which previously existed on models, by moving it to the `PreTrainedModel` class.
Unfortunately this introduced a significant behavior change in that `*CausalForLM` models no longer have the same behavior of having `get_decoder()` return the underlying base model.
For example a `MistralForCausalLM` model named `model` returns `None` when `model.get_decoder()` is called.
The logic for why this is occurring is obvious when looking at the offending PR:
```python
def get_decoder(self):
"""
Best-effort lookup of the *decoder* module.
Order of attempts (covers ~85 % of current usages):
1. `self.decoder`
2. `self.model` (many wrappers store the decoder here)
3. `self.model.get_decoder()` (nested wrappers)
4. fallback: raise for the few exotic models that need a bespoke rule
"""
if hasattr(self, "decoder"):
return self.decoder
if hasattr(self, "model"):
inner = self.model
if hasattr(inner, "get_decoder"):
return inner.get_decoder()
return inner
return None
```
In these cases the `if hasattr(self, "model"):` conditional block is entered, and the underlying model has a `get_decoder` method, as it is a `PreTrainedModel`, like all transformers models. This block will always be entered. At this point we are in the decoder itself, calling its `get_decoder` method. The decoder has no `decoder` or `model` attribute, so the function returns `None`, which is then passed back to the parent caller.
There are a couple of ways this could be fixed, but I don't know what their current impact would be on other parts of the code. I may open a PR, but I am quite busy at the moment. @molbap @ArthurZucker since you were the authors and reviewers here, do you mind taking another look at this?
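A minimal stand-in sketch (hypothetical classes, not the real transformers hierarchy) reproduces the `None` result and shows one possible guard: fall back to the inner module when its own lookup finds nothing.

```python
# BaseModel plays the role of the decoder (e.g. MistralModel); it inherits
# the same best-effort lookup logic as its wrapper.
class BaseModel:
    def get_decoder(self):
        if hasattr(self, "decoder"):
            return self.decoder
        if hasattr(self, "model"):
            inner = self.model
            if hasattr(inner, "get_decoder"):
                return inner.get_decoder()
            return inner
        return None

# CausalLM plays the role of the wrapper (e.g. MistralForCausalLM).
class CausalLM(BaseModel):
    def __init__(self):
        self.model = BaseModel()

lm = CausalLM()
print(lm.get_decoder())  # None -- the regression described above

# One possible guard: only trust the inner lookup when it finds something.
class PatchedCausalLM(CausalLM):
    def get_decoder(self):
        if hasattr(self, "decoder"):
            return self.decoder
        if hasattr(self, "model"):
            inner = self.model
            if hasattr(inner, "get_decoder"):
                found = inner.get_decoder()
                return found if found is not None else inner
            return inner
        return None

patched = PatchedCausalLM()
print(patched.get_decoder() is patched.model)  # True
```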
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Use `get_decoder` on say a `MistralForCausalLM` model.
### Expected behavior
The underlying `model` attribute should be returned for `*ForCausalLM` models, not None, as these models are decoder only models by transformers convention. | https://github.com/huggingface/transformers/issues/40815 | closed | [
"bug"
] | 2025-09-11T09:25:12Z | 2025-09-16T08:57:14Z | 4 | KyleMylonakisProtopia |
huggingface/transformers | 40,813 | Incorrect sharding configuration for Starcoder2 model | ### System Info
Transformers main branch (commit [0f1b128](https://github.com/huggingface/transformers/commit/0f1b128d3359a26bd18be99c26d7f04fb3cba914) )
- `transformers` version: 4.57.0.dev0
- Platform: Linux-5.15.0-1030-nvidia-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.3
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0a0+5228986c39.nv25.06 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: tensor-parallel
- Using GPU in script?: yes
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running TP inference on `bigcode/starcoder2-7b` throws an error with incorrect tensor shapes due to a `base_model_tp_plan` misconfiguration.
`demo.py`:
```
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "bigcode/starcoder2-7b"
model = AutoModelForCausalLM.from_pretrained(model_id, tp_plan="auto")
model._tp_plan['model.layers.*.mlp.c_proj'] = 'rowwise'
print(f"TP plan: {model._tp_plan}, class: {type(model._tp_plan)}")
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Can I help"
inputs = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
# distributed run
outputs = model(inputs)
# print the output
print(outputs)
```
run with
```
torchrun --nproc_per_node=2 demo.py
```
The correct `base_model_tp_plan` should replace:
```
['model.layers.*.mlp.c_proj'] = 'colwise'
```
with
```
['model.layers.*.mlp.c_proj'] = 'rowwise'
```
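As a sanity check, here is a hedged, dimensions-only sketch (plain Python, no torch; toy sizes, not the real Starcoder2 ones) of why the down-projection must be `rowwise`: with the up-projection sharded `colwise`, each rank holds only `intermediate // tp` activations, so `c_proj`'s weight must be split along its input dimension.

```python
hidden, intermediate, tp = 8, 32, 2  # toy sizes; tp = tensor-parallel ranks

# colwise up-projection: each rank produces a slice of the intermediate dim
c_fc_local_out = intermediate // tp        # activations per rank: 16

# rowwise down-projection: weight (hidden, intermediate) split on input dim
c_proj_rowwise_in = intermediate // tp     # 16 -- matches the activations

# colwise down-projection would split the *output* dim and still expect the
# full intermediate input, which the shards cannot provide
c_proj_colwise_in = intermediate           # 32 -- mismatch

print(c_fc_local_out == c_proj_rowwise_in)  # True
print(c_fc_local_out == c_proj_colwise_in)  # False
```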
### Expected behavior
Throws:
```
(...)
[rank0]: File "/lustre/fs1/portfolios/coreai/users/gkwasniewski/hf-repo/transformers/src/transformers/models/starcoder2/modeling_starcoder2.py", line 65, in forward
[rank0]: hidden_states = self.c_proj(hidden_states)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1857, in _call_impl
[rank0]: return inner()
[rank0]: ^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1805, in inner
[rank0]: result = forward_call(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/linear.py", line 125, in forward
[rank0]: return F.linear(input, self.weight, self.bias)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_compile.py", line 51, in inner
[rank0]: return disable_fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 850, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_api.py", line 350, in __torch_dispatch__
[rank0]: return DTensor._op_dispatcher.dispatch(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_dispatch.py", line 160, in dispatch
[rank0]: self.sharding_propagator.propagate(op_info)
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_sharding_prop.py", line 266, in propagate
[rank0]: OutputSharding, self.propagate_op_sharding(op_info.schema)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_sharding_prop.py", line 45, in __call__
[rank0]: return self.cache(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_sharding_prop.py", line 279, in propagate_op_sharding_non_cached
[rank0]: out_tensor_meta = self._propagate_tensor_meta_non_cached(op_schema)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_sharding_prop.py", line 126, in _propagate_tensor_meta_non_cached
[rank0]: fake_out = op_schema.op(*fake_args, **fake_kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[ra | https://github.com/huggingface/transformers/issues/40813 | closed | [
"bug"
] | 2025-09-11T09:02:53Z | 2025-09-15T08:46:33Z | 1 | greg-kwasniewski1 |
huggingface/lerobot | 1,911 | How to avoid re-write cache data from pyarrow into parquet everytime? | Hi Authors,
When using a lerobot dataset in a PyTorch dataloader, the lerobot dataset writes a huge cache, converting the data from pyarrow to Apache Parquet. How can I avoid that?
I can think of two options:
1. Avoid the conversion and read the data directly. But this may lose reading performance.
2. Can we instead store the Parquet data?
Thanks.
Songlin | https://github.com/huggingface/lerobot/issues/1911 | open | [] | 2025-09-10T22:19:25Z | 2025-09-10T22:19:25Z | null | songlinwei-we |
huggingface/transformers | 40,767 | 3D Object Detection Models | ### Model description
Hi together,
is there a reason or any other thread where 3D models like those at mmdet3d are discussed to be implemented. I have not found any discussion.
Thanks
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
BEVFormer:
https://github.com/fundamentalvision/BEVFormer | https://github.com/huggingface/transformers/issues/40767 | open | [
"New model"
] | 2025-09-09T13:16:33Z | 2025-11-13T21:18:40Z | 3 | SeucheAchat9115 |
huggingface/lerobot | 1,899 | Has anyone tried to export the smolvla as onnx model for deployment? | I have tried testing the trained SmolVLA model on my PC, and it works. Now I want to deploy SmolVLA on our target board.
I looked into the model structure of SmolVLA; for the vision-encoder and language-embedding parts I can refer to SmolVLM and export them as two ONNX models. I think the robot state embedding also needs to be exported as a separate ONNX model.
For the most important part of SmolVLA inference, I ran into several issues and have no good idea how to export it as an ONNX model.
Has anyone tried and successfully exported SmolVLA as ONNX models for deployment? Thanks! | https://github.com/huggingface/lerobot/issues/1899 | open | [
"question",
"policies",
"performance"
] | 2025-09-09T10:41:14Z | 2025-10-07T20:50:12Z | null | TankerLee |
huggingface/huggingface_hub | 3,339 | What is the best replacement of HfFileSystem.glob with HfApi | In some of our code, we were using something like
```python
hf_fs = HfFileSystem()
files = hf_fs.glob('my/repo/*/model.onnx')
```
But I found that HfFileSystem is much less stable than HfApi, especially in edge cases (e.g. when the network is unstable).
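One candidate replacement (a hedged sketch, not an official equivalent) is to fetch the flat file list once with `HfApi.list_repo_files` and filter it client-side with `fnmatch`. The filtering step alone, on a stand-in list, looks like:

```python
from fnmatch import fnmatch

def glob_filter(paths, pattern):
    # Note: fnmatch's `*` also matches across `/`, unlike a path-aware glob,
    # so e.g. "x/y/model.onnx" would match "*/model.onnx" too.
    return [p for p in paths if fnmatch(p, pattern)]

# In practice `paths` would come from HfApi().list_repo_files("my/repo")
# (a single network call); here we use a stand-in list.
paths = ["a/model.onnx", "b/model.onnx", "a/config.json"]
print(glob_filter(paths, "*/model.onnx"))  # ['a/model.onnx', 'b/model.onnx']
```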
So what is the best replacement of HfFileSystem.glob with HfApi? Any suggestions? | https://github.com/huggingface/huggingface_hub/issues/3339 | closed | [] | 2025-09-09T09:02:07Z | 2025-09-15T09:12:04Z | null | narugo1992 |
huggingface/transformers | 40,754 | Potentially incorrect value assignment of Llama4TextModel's output in Llama4ForCausalLM's output? | ### System Info
**System Info**
- `transformers` version: 4.55.4
- Platform: Linux-6.15.9-201.fc42.x86_64-x86_64-with-glibc2.41
- Python version: 3.13.5
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA RTX A6000
### Who can help?
@ArthurZucker
@amyeroberts
@qubvel
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
**Task Detail**
Obtaining hidden_states from the outputs of Llama4ForCausalLM
**Problem**
In the source code [modeling_llama4.py](https://github.com/huggingface/transformers/blob/v4.55.4/src/transformers/models/llama4/modeling_llama4.py), the outputs of Llama4ForCausalLM contains a *hidden_states* (See [line 642](https://github.com/huggingface/transformers/blob/d79b2d981f28b2730d402244ac3c2e9a8c054eee/src/transformers/models/llama4/modeling_llama4.py#L642)), which is assigned with *outputs.hidden_states*. Here, the *outputs* is the output of Llama4TextModel (See [line 619](https://github.com/huggingface/transformers/blob/d79b2d981f28b2730d402244ac3c2e9a8c054eee/src/transformers/models/llama4/modeling_llama4.py#L619C9-L619C16)). However, the output of Llama4TextModel consists of a *last_hidden_state* (assigned the value of *hidden_states*) and a *past_key_values*, but no *hidden_states* (See [line 554-557](https://github.com/huggingface/transformers/blob/d79b2d981f28b2730d402244ac3c2e9a8c054eee/src/transformers/models/llama4/modeling_llama4.py#L554-L557)).
Thus, I'm wondering if there is either a typo in [line 642](https://github.com/huggingface/transformers/blob/d79b2d981f28b2730d402244ac3c2e9a8c054eee/src/transformers/models/llama4/modeling_llama4.py#L642) where the *hidden_states=outputs.hidden_states* should be replaced by *hidden_states=outputs.last_hidden_state*, or a typo in [line 555](https://github.com/huggingface/transformers/blob/d79b2d981f28b2730d402244ac3c2e9a8c054eee/src/transformers/models/llama4/modeling_llama4.py#L555C13-L555C45) where the *last_hidden_state=hidden_states* should be replaced by *hidden_states=hidden_states*?
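To make the concern concrete with a plain dataclass stand-in (not the real transformers `ModelOutput` classes): if the inner model populates only `last_hidden_state`, then forwarding its `hidden_states` attribute forwards `None`.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ToyBaseModelOutput:
    last_hidden_state: Optional[list] = None
    hidden_states: Optional[Tuple] = None  # only populated when requested

# inner model's output: only last_hidden_state is filled in
inner_out = ToyBaseModelOutput(last_hidden_state=[0.1, 0.2])
print(inner_out.last_hidden_state)  # [0.1, 0.2]
print(inner_out.hidden_states)      # None -- what line 642 would forward
```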
Thank you for your patience!
### Expected behavior
An explanation or a correction of the source code in [modeling_llama4.py](https://github.com/huggingface/transformers/blob/v4.55.4/src/transformers/models/llama4/modeling_llama4.py) | https://github.com/huggingface/transformers/issues/40754 | closed | [
"Usage",
"bug"
] | 2025-09-08T12:31:39Z | 2025-09-16T19:25:03Z | 3 | st143575 |
huggingface/transformers | 40,752 | How to extract attention weights for the first generated token? | **Title:** Request for clarification: How to extract attention weights for the first generated token?
**Description:**
Hi, I'm trying to extract the attention weights **of the first generated token** (i.e., the first new token produced by `generate()`) with respect to the input prompt. However, I'm observing inconsistent behavior in the shape of `attentions` returned by `model.generate(..., output_attentions=True)`.
Here's what I found:
- For `step 0` (the first generation step), `attentions[0][layer].shape` is `(batch, heads, seq_len, seq_len)` โ e.g., `[1, 16, 1178, 1178]`, where `seq_len` equals the input prompt length.
- This appears to be the **full self-attention matrix of the prompt context**, not the attention of the newly generated token.
- Starting from `step 1`, the shape becomes `(batch, heads, 1, ctx_len)`, which correctly represents the attention of a single generated token.
**Question:**
- Is there a way to directly extract the attention weights **from the first generated token** (i.e., the query of the first new token attending to the prompt keys)?
- Or is the intended behavior to use the last position of the context attention (i.e., `attentions[0][layer][..., -1, :]`) as a proxy for the generation decision?
**Use Case:**
I want to interpret which parts of the input prompt the model attends to when generating the first output token, for interpretability and analysis purposes.
**Environment:**
- Transformers version: [4.51.3]
- Model: [Qwen3]
- Code snippet:
```python
outputs = model.generate(
input_ids,
output_attentions=True,
return_dict_in_generate=True
)
# outputs.attentions[0][layer] has shape (1, 16, 1178, 1178)
``` | https://github.com/huggingface/transformers/issues/40752 | closed | [] | 2025-09-08T09:53:16Z | 2025-09-08T11:41:22Z | null | VincentLHH
huggingface/transformers.js | 1,407 | Expected time to load a super-resolution model locally | ### Question
Loading an image super-resolution model locally can take more than 10 seconds on my MacBook Pro (M1 Max). Is this expected behavior?
```javascript
env.allowRemoteModels = false;
env.allowLocalModels = true;
env.backends.onnx.wasm.wasmPaths = `/wasm/`;
const upscaler = ref(null);
onMounted(async () => {
upscaler.value = await pipeline('image-to-image', 'Xenova/swin2SR-realworld-sr-x4-64-bsrgan-psnr', {
dtype: 'fp32',
device: 'webgpu',
})
});
```
Warnings observed during the model loading:
```
ort-wasm-simd-threaded.jsep.mjs:100
2025-09-08 13:58:52.881399 [W:onnxruntime:, session_state.cc:1280 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
ort-wasm-simd-threaded.jsep.mjs:100
2025-09-08 13:58:52.882499 [W:onnxruntime:, session_state.cc:1282 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
```
### System Info
npm: @huggingface/transformers@3.7.2
OS: macOS Sequoia 15.6.1
model: Xenova/swin2SR-realworld-sr-x4-64-bsrgan-psnr | https://github.com/huggingface/transformers.js/issues/1407 | closed | [
"question"
] | 2025-09-08T06:26:49Z | 2025-09-30T19:22:34Z | null | ymtoo |
huggingface/lerobot | 1,891 | How to checkout a commit id? | The underlying datasets library supports a "revision" flag. Does lerobot? | https://github.com/huggingface/lerobot/issues/1891 | closed | [] | 2025-09-08T04:39:37Z | 2025-09-10T22:53:18Z | null | richardrl
huggingface/transformers | 40,743 | Support for 4D attention mask for T5 | ### Feature request
Currently, T5 cannot take 4D attention masks of shape (batch_size, num_heads, seq_len, seq_len) as input. Passing a 4D attention_mask and a 4D decoder_attention_mask like so leads to a shape-related exception:
```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")
input_ids = tokenizer("Where is", return_tensors="pt").input_ids
decoder_input_ids = tokenizer("<pad>", return_tensors="pt").input_ids
batch_size, seq_len = input_ids.shape
tgt_len = decoder_input_ids.shape[1]
num_heads = model.config.num_heads
attention_mask = torch.ones(batch_size, num_heads, seq_len, seq_len)
decoder_attention_mask = torch.ones(batch_size, num_heads, tgt_len, tgt_len).tril(0)
model(
input_ids,
decoder_input_ids=decoder_input_ids,
attention_mask=attention_mask,
decoder_attention_mask=decoder_attention_mask,
)
```
One of the problems in the current code is in the handling of the cross-attention mask. Currently, it is created using the 1D encoder attention mask when supplied. However, in the case of a 4D mask, it seems unclear how to correctly use the encoder mask; therefore, the best solution might be to introduce a new 4D mask argument `cross_attention_mask` of shape `(batch_size, num_heads, tgt_len, seq_len)`. This lets the user control all attention masks if necessary.
### Motivation
4D masks are useful for many purposes, as outlined by #27539 and [this blog post](https://huggingface.co/blog/poedator/4d-masks), but not all models support them.
### Your contribution
I propose to fix the code to handle 4D attention masks, and to add a new `cross_attention_mask` argument to add the possibility to control the cross attention mask manually. I wrote a version of that code in [this fork](https://github.com/Aethor/transformers/tree/t5-4d-attention-mask).
I'm happy to create a PR with my code, but:
1. This is my first transformers contribution, I need help with some things such as handling the "Copy" code duplication mechanism of transformers. Should other similar models with copied functions from T5 be changed as well?
2. Although I wrote a [first test with trivial masks](https://github.com/Aethor/transformers/blob/22dc62edbdbc3f2afeb90a31c75047711c1afc5c/tests/models/t5/test_modeling_t5.py#L1876), I am not entirely sure how to test this
3. I want to be sure that adding the new `cross_attention` mask parameter is the right way to do this and will be approved | https://github.com/huggingface/transformers/issues/40743 | open | [
"Feature request"
] | 2025-09-07T07:18:05Z | 2025-09-09T11:43:33Z | 5 | Aethor |
huggingface/lerobot | 1,882 | Pretrain - Code for pretraining smolvla | ## Guidance on Replicating the Pre-training Process with Community Datasets
Hi team,
First off, thank you for the fantastic work on SmolVLA and for open-sourcing the model and code. It's a great contribution to the community.
I am trying to replicate the pre-training process as described in the original paper. I have located the pre-training data on the Hugging Face Hub, specifically:
- `HuggingFaceVLA/community_dataset_v1`
- `HuggingFaceVLA/community_dataset_v2`
My plan is to download both datasets and merge them into a single directory, for example `/path/to/my/pretrain_data/`, to serve as the input for the pre-training script.
To ensure I am on the right track, I would be grateful if you could provide some guidance on the following points:
1: **Data Preparation & Merging**: Regarding the two datasets (community_dataset_v1 and v2), what is the correct procedure for using them together? Should I manually download and merge their contents into a single local directory? I also noticed the data is in a multi-directory (sharded) format, unlike many simpler single-folder datasets. Does the training code handle this structure automatically once the data is prepared locally?
2: **Dataset Configuration**: How should the combined dataset be specified in the configuration file? My main confusion is that the parameter dataset.repo_id appears to be a required field that accepts a single repository ID. How can I configure the training script to use the merged data from both v1 and v2, which I have stored locally?
3: **Training Script & Execution**: Once the data is correctly prepared and configured, could you please point me to the exact script and provide an example command to launch the pre-training? Since the weight of VLM is initialized, so what I need is the script after initializing VLM weight and then train on large-scale community dataset. In particular, I'd like to ask the `dataset.repo_id` if I store v1 and v2 under the same folder? Since I discovered this param cannot be None.
Any help or pointers to the relevant documentation would be greatly appreciated. I believe a short tutorial or a section in the README on pre-training would also be immensely helpful for others in the community looking to build upon your work.
Thank you for your time and consideration! | https://github.com/huggingface/lerobot/issues/1882 | closed | [
"question",
"dataset"
] | 2025-09-07T03:18:04Z | 2025-09-23T09:06:13Z | null | ruiheng123 |
huggingface/transformers | 40,708 | When using a custom model, it copies the code into Hugging Face's cache directory. | ```python
model = AutoModel.from_pretrained(
model_args.model_name_or_path,
trust_remote_code=True,
torch_dtype=compute_dtype,
device_map=device_map,
# init_vision=True,
# init_audio=False,
# init_tts=False,
)
```
`model_args.model_name_or_path=/mnt/241hdd/wzr/MiniCPM-V-CookBook/MiniCPM-V-4_5`
The code actually runs in `/root/.cache/huggingface/modules/transformers_modules/MiniCPM-V-4_5`.
This makes my debugging difficult.
Is there a way to run the code directly? | https://github.com/huggingface/transformers/issues/40708 | closed | [] | 2025-09-05T07:21:40Z | 2025-11-15T08:03:16Z | 4 | wzr0108 |
huggingface/transformers | 40,690 | Batches loaded from wrong epoch when resuming from second epoch | ### System Info
**Required system information**
```text
- `transformers` version: 4.57.0.dev0
- Platform: Linux-5.15.0-133-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): 2.15.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: no
- GPU type: GRID A100D-16C
```
### Who can help?
@zach-huggingface @SunMarc as it concerns `transfomers`' `Trainer`
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
### **1. Bug description**
Let's take the example of the provided script:
- number of data points: 10
- batch size: 2
So 1 epoch = 5 steps.
If we launch a training until the end and monitor the data order:
- epoch 0: 4, 1, 7, 5, 3, 9, 0, 8, 6, 2
- epoch 1: 5, 6, **|| 1, 2, 0, 8, 9, 3, 7, 4**
- epoch 2: 8, 7, 1, 5, 6, 9, 0, 4, 2, 3
But if we stop the training at step 6 and resume it (from the `||` marker) to the end, we get the following data order:
- epoch 0: 4, 1, _7, 5, 3, 9, 0, 8, 6, 2_
- epoch 1: 5, 6 **|| 7, 5, 3, 9, 0, 8, 6, 2**
- epoch 2: 8, 7, 1, 5, 6, 9, 0, 4, 2, 3
We spotted that `epoch_dataloader.iteration` is not properly set for the first epoch after resuming. It is initially set to 0, which is why it loads the same order as epoch 0 (cf. the italicized data order of the last 4 batches of epoch 0).
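A minimal sketch of this failure mode (plain Python; a rotation stands in for the per-epoch seeded shuffle): when the resumed loader's epoch counter stays at 0, skipping the first batches replays epoch 0's tail instead of epoch 1's.

```python
def epoch_order(epoch, n=10):
    # stand-in for "shuffle seeded by the epoch": rotate the indices
    return [(i + epoch) % n for i in range(n)]

steps_done = 2  # training stopped after 2 samples of epoch 1

correct_tail = epoch_order(1)[steps_done:]  # what resuming should yield
buggy_tail = epoch_order(0)[steps_done:]    # what iteration=0 yields

print(correct_tail)  # [3, 4, 5, 6, 7, 8, 9, 0]
print(buggy_tail)    # [2, 3, 4, 5, 6, 7, 8, 9]
```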
### **2. Reproducing the error**
The script to run is available at https://github.com/ngazagna-qc/transformers/blob/fix-data-order-resumed-epoch/reproduce_wrong_resumed_epoch.py.
Run:
```shell
python reproduce_wrong_resumed_epoch.py --trainer-class Trainer
```
### Expected behavior
### **3. Bug fix**
We provide the fixed `Trainer` here: https://github.com/ngazagna-qc/transformers/blob/fix-data-order-resumed-epoch/src/transformers/trainer_fixed.py#L56
The fix consists only of adding one line to the `_inner_training_loop` method:
```python
if steps_trained_in_current_epoch > 0:
epoch_dataloader = skip_first_batches(epoch_dataloader, steps_trained_in_current_epoch)
#### BEGINNING OF THE FIX ####
epoch_dataloader.iteration = epochs_trained # FIX: set dataloader to correct epoch
#### END OF THE FIX ####
steps_skipped = steps_trained_in_current_epoch
steps_trained_in_current_epoch = 0
rng_to_sync = True
```
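The mechanics can be illustrated with a minimal stand-in for an epoch-seeded dataloader (a hedged sketch: Accelerate's real dataloader is more involved, but it reshuffles from an internal `iteration` counter in the same spirit):

```python
import random

class EpochSeededLoader:
    """Minimal stand-in for a dataloader that reshuffles each epoch from an
    internal `iteration` counter (in the spirit of Accelerate's dataloaders)."""
    def __init__(self, n):
        self.n, self.iteration = n, 0

    def __iter__(self):
        order = list(range(self.n))
        random.Random(self.iteration).shuffle(order)
        self.iteration += 1
        return iter(order)

loader = EpochSeededLoader(10)
epoch0, epoch1 = list(loader), list(loader)

# Resuming without restoring `iteration` replays epoch 0's order in epoch 1:
resumed = EpochSeededLoader(10)
assert list(resumed) == epoch0
# The reported fix: set `iteration` to the number of epochs already trained.
fixed = EpochSeededLoader(10)
fixed.iteration = 1
assert list(fixed) == epoch1
```

Without the fix, the resumed loader starts from `iteration = 0`, which reproduces the repeated epoch-0 order reported in the bug description.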
It can be tested that this solves the order by running:
```shell
python reproduce_wrong_resumed_epoch.py --trainer-class TrainerFixed
``` | https://github.com/huggingface/transformers/issues/40690 | closed | [
"bug"
] | 2025-09-04T11:48:41Z | 2025-12-03T13:14:04Z | 6 | ngazagna-qc |
huggingface/optimum | 2,347 | Gemma3n convert to onnx format | Hello,
How do I convert the Gemma3n model to ONNX format using the Optimum CLI?
Thanks in advance. | https://github.com/huggingface/optimum/issues/2347 | closed | [
"Stale"
] | 2025-09-04T09:13:19Z | 2025-10-15T02:09:55Z | 2 | shahizat |
huggingface/transformers | 40,680 | Idea: Exploring Mathematical Extensions for GPT-style Models (teaser) | Hi Transformers team,
I've been experimenting with a conceptual enhancement to GPT-style architectures: introducing mathematical mechanisms for memory and adaptive learning, while keeping the overall transformer backbone intact.
I've documented the approach in Markdown (README + comparison notes), but haven't published it yet. Before I share more, I'd love your input:
- Does this kind of experimental idea fit within the scope of Transformers?
- Would you be open to viewing or discussing the draft privately?
Looking forward to hearing your thoughts! | https://github.com/huggingface/transformers/issues/40680 | closed | [] | 2025-09-04T07:23:29Z | 2025-10-12T08:02:38Z | 3 | muzamil-ashiq |
huggingface/transformers | 40,647 | how to get response text during training | I want to obtain the inferred output text during the evaluation step in the training process, not just the eval loss.
<img width="1264" height="211" alt="Image" src="https://github.com/user-attachments/assets/9dd432c5-74ea-4290-adff-7865cf3ea481" /> | https://github.com/huggingface/transformers/issues/40647 | closed | [] | 2025-09-03T10:37:51Z | 2025-10-12T08:02:43Z | null | zyandtom |
huggingface/diffusers | 12,276 | The image is blurry. | How to solve image blurriness during fine-tuning? | https://github.com/huggingface/diffusers/issues/12276 | open | [] | 2025-09-03T08:29:38Z | 2025-09-03T08:29:38Z | 0 | sucessfullys |
huggingface/gym-hil | 32 | how to perform hil in sim | https://github.com/huggingface/gym-hil/issues/32 | closed | [] | 2025-09-02T17:10:05Z | 2025-09-16T14:02:32Z | null | prathamv0811 | |
huggingface/transformers | 40,606 | GPT-OSS attention backends available for SM120 other than Eager? | I was wondering which attention backend we can use for long context on an SM120 GPU. The "eager_attention_forward" path uses the naive implementation that computes the full attention in one go, which can lead to OOM for large contexts, but I couldn't use other implementations since they either do not support sinks or do not support SM120.
Many thanks! | https://github.com/huggingface/transformers/issues/40606 | closed | [] | 2025-09-02T03:21:16Z | 2025-10-12T08:02:48Z | 4 | TheTinyTeddy |
huggingface/peft | 2,764 | merge_and_unload returns the base (prior to fine-tuning) back!!!! | I have fine-tuned a model using PEFT and now I want to merge the adapter into the base model. This is what I am doing:
```
base_model = AutoModelForCausalLM.from_pretrained(model_id, device_map='auto')
model_finetuned = PeftModel.from_pretrained(base_model, adapter_path)
```
Now the size of `model_finetuned` is roughly 42GB, but when I do the following to merge the adapter into the base:
`merged_model = model_finetuned.merge_and_unload()`
the size of `merged_model` is 36GB and its performance is like the base model; it seems the adapter's effect is gone.
I remember I used this feature in the past to get a merged model; has anything changed?
Here is a related post, where the last comment says this is normal; can someone elaborate?
https://github.com/huggingface/peft/issues/868
Can I just save `model_finetuned` as my merged model? Can someone explain what is going on and why `merge_and_unload()` is doing the opposite of what it is supposed to do?
| https://github.com/huggingface/peft/issues/2764 | closed | [] | 2025-09-01T04:07:36Z | 2025-10-09T15:26:15Z | 12 | manitadayon |
huggingface/lerobot | 1,822 | As of 08/31/2025, how do you create a v2.1 dataset from raw data? | My search is cursory, but I can't find any tutorial or example on creating a v2.1 dataset on the main branch. So, how do you create a LeRobot dataset in the current version? Should I refer to older commits? | https://github.com/huggingface/lerobot/issues/1822 | open | [
"question",
"dataset"
] | 2025-08-31T18:29:34Z | 2025-10-08T13:02:44Z | null | IrvingF7 |
huggingface/text-generation-inference | 3,318 | Infinite tool call loop: `HuggingFaceModel` and `text-generation-inference` | ## Description
Hello. Needless to say, amazing library. Please let me know if you'd like me to try something or if you need more info.
I've been going through various local model providers trying to find one that works well, when I came across a rather shocking bug when running against Hugging Face's TGI model host.
The problem appears whether using the OpenAI "compatible" endpoints or the `HuggingfaceModel` with custom `AsyncInferenceClient` and `HuggingFaceProvider`. The latter probably being the official approach, the code included here will be using that.
## System Info
`curl 127.0.0.1:8080/info | jq`:
```json
{
"model_id": "/models/meta-llama/Meta-Llama-3-8B-Instruct",
"model_sha": null,
"model_pipeline_tag": null,
"max_concurrent_requests": 128,
"max_best_of": 2,
"max_stop_sequences": 4,
"max_input_tokens": 8191,
"max_total_tokens": 8192,
"validation_workers": 2,
"max_client_batch_size": 4,
"router": "text-generation-router",
"version": "3.3.4-dev0",
"sha": "9f38d9305168f4b47c8c46b573f5b2c07881281d",
"docker_label": "sha-9f38d93"
}
```
`nvidia-smi`:
```shell
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 575.64.05 Driver Version: 575.64.05 CUDA Version: 12.9 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4090 Off | 00000000:01:00.0 On | Off |
| 40% 54C P2 61W / 450W | 21499MiB / 24564MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA GeForce RTX 4090 Off | 00000000:48:00.0 Off | Off |
| 30% 43C P2 52W / 450W | 21394MiB / 24564MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
```
### Information
- [x] Docker
- [ ] The CLI directly
### Tasks
- [x] An officially supported command
- [ ] My own modifications
## Reproduction
### Setup
Here's the `docker-compose.yaml` I'm using to start TGI:
```yaml
services:
text-generation-inference:
image: ghcr.io/huggingface/text-generation-inference:latest
container_name: tgi
ports:
- "8081:80"
volumes:
- ../../../models:/models:ro
- tgi-data:/data
environment:
- RUST_LOG=info
# I have also tested with 3.1-8B and 3.2-3B with the same end results
command: >
--model-id /models/meta-llama/Meta-Llama-3-8B-Instruct
--hostname 0.0.0.0
--port 80
--trust-remote-code
deploy:
resources:
reservations:
devices:
- driver: nvidia
device_ids: ["0", "1"]
capabilities: [gpu]
shm_size: "64g"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:80/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
volumes:
tgi-data:
driver: local
```
### Code
All code is running in a Jupyter notebook.
Here's the common setup cell:
```python
from huggingface_hub import AsyncInferenceClient
from pydantic_ai.models.huggingface import HuggingFaceModel
from pydantic_ai.providers.huggingface import HuggingFaceProvider
from pydantic_ai.providers.openai import OpenAIProvider
provider = OpenAIProvider(base_url="http://localhost:8081/v1") # Just used to get the model slug
models = await provider.client.models.list()
client = AsyncInferenceClient(base_url="http://localhost:8081/")
print(f"Connected to TGI. Available models: {len(models.data)}")
for model in models.data:
print(f" - {model.id}")
# Create the model instance
agent_model = HuggingFaceModel(
models.data[0].id,
provider=HuggingFaceProvider(hf_client=client, api_key="None"),
# Annoyingly, despite this being basically the default profile, Llama 3's tool calls often fall through to the response without this
profile=ModelProfile(
supports_tools=True,
json_schema_transformer=InlineDefsJsonSchemaTransformer
)
)
```
### Working: Basic requests and history
1. Create the basic agent
```python
from pydantic_ai import Agent
simple_agent = Agent(model=agent_model)
```
2. Make a simple request
```python
simple_result = await simple_agent.run("Tell me a joke.")
simple_result.output # "Why couldn't the bicycle stand up by itself?\n\nBecau | https://github.com/huggingface/text-generation-inference/issues/3318 | open | [] | 2025-08-31T08:23:46Z | 2025-08-31T08:58:13Z | 1 | baughmann |
huggingface/diffusers | 12,257 | [Looking for community contribution] support Wan 2.2 S2V: an audio-driven cinematic video generation model | We're super excited about the Wan 2.2 S2V (Speech-to-Video) model and want to get it integrated into Diffusers! This would be an amazing addition, and we're looking for experienced community contributors to help make this happen.
- **Project Page**: https://humanaigc.github.io/wan-s2v-webpage/
- **Source Code**: https://github.com/Wan-Video/Wan2.2#run-speech-to-video-generation
- **Model Weights**: https://huggingface.co/Wan-AI/Wan2.2-S2V-14B
This is a priority for us, so we will try to review fast and actively collaborate with you throughout the process :)
| https://github.com/huggingface/diffusers/issues/12257 | open | [
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2025-08-29T08:04:43Z | 2025-08-29T10:23:52Z | 0 | yiyixuxu |
huggingface/optimum-onnx | 44 | How to use streaming inference for onnx models exported from QWEN3-4B models | How to use streaming inference for onnx models exported from QWEN3-4B models | https://github.com/huggingface/optimum-onnx/issues/44 | closed | [] | 2025-08-29T01:48:07Z | 2025-10-06T12:29:34Z | null | williamlzw |
huggingface/diffusers | 12,255 | [BUG] Misleading ValueError when subclassing StableDiffusionImg2ImgPipeline with a mismatched __init__ signature | ### Describe the bug
When subclassing diffusers.StableDiffusionImg2ImgPipeline, if the subclass's __init__ signature does not include the requires_safety_checker: bool = True argument, the default .from_pretrained() loader raises a confusing and indirect ValueError.
The official documentation for StableDiffusionImg2ImgPipeline confirms that requires_safety_checker is an explicit keyword argument in its __init__ signature.
The current ValueError (pasted below) reports a component list mismatch between 'kwargs' and 'requires_safety_checker'. This error message hides the true root cause (a TypeError from the signature mismatch), making the problem very difficult to debug.
### Reproduction
The following minimal script reliably reproduces the error.
```
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.models import AutoencoderKL, UNet2DConditionModel
from diffusers.schedulers import KarrasDiffusionSchedulers
from transformers import CLIPTextModel, CLIPTokenizer
from typing import Optional, Any
# A custom pipeline inheriting from StableDiffusionImg2ImgPipeline,
# but with an incorrect __init__ signature. It incorrectly tries
# to catch `requires_safety_checker` with **kwargs.
class MyCustomPipeline(StableDiffusionImg2ImgPipeline):
def __init__(
self,
vae: AutoencoderKL,
text_encoder: CLIPTextModel,
tokenizer: CLIPTokenizer,
unet: UNet2DConditionModel,
scheduler: KarrasDiffusionSchedulers,
safety_checker: Optional[Any] = None,
feature_extractor: Optional[Any] = None,
image_encoder: Optional[Any] = None,
**kwargs,
):
super().__init__(
vae=vae,
text_encoder=text_encoder,
tokenizer=tokenizer,
unet=unet,
scheduler=scheduler,
safety_checker=safety_checker,
feature_extractor=feature_extractor,
image_encoder=image_encoder,
**kwargs,
)
# This line will fail and raise the misleading ValueError.
# It can be copy-pasted directly to reproduce the bug.
pipe = MyCustomPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
```
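A plausible stdlib illustration of why the error lists 'kwargs' as an expected component: loaders that enumerate components via `inspect.signature` see a `**kwargs` catch-all as a literal parameter named `kwargs`. This is a sketch of the likely mechanism, not diffusers' actual code:

```python
import inspect

class GoodInit:
    def __init__(self, vae, scheduler, requires_safety_checker: bool = True):
        pass

class BadInit:
    def __init__(self, vae, scheduler, **kwargs):
        pass

def expected_components(cls):
    # Roughly how a loader can enumerate expected components from __init__.
    return sorted(p for p in inspect.signature(cls.__init__).parameters if p != "self")

assert expected_components(GoodInit) == ["requires_safety_checker", "scheduler", "vae"]
# The `**kwargs` catch-all leaks into the list as the literal name 'kwargs':
assert expected_components(BadInit) == ["kwargs", "scheduler", "vae"]
```

If this is indeed the mechanism, declaring `requires_safety_checker: bool = True` explicitly in the subclass's `__init__` (instead of absorbing it with `**kwargs`) should resolve the mismatch.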
### Logs
```shell
ValueError: MyCustomPipeline {
"_class_name": "MyCustomPipeline",
"_diffusers_version": "0.29.0.dev0", # Replace with your version
"feature_extractor": [
"transformers",
"CLIPImageProcessor"
],
"image_encoder": [
null,
null
],
"requires_safety_checker": true,
"safety_checker": [
"stable_diffusion",
"StableDiffusionSafetyChecker"
],
"scheduler": [
"diffusers",
"PNDMScheduler"
],
"text_encoder": [
"transformers",
"CLIPTextModel"
],
"tokenizer": [
"transformers",
"CLIPTokenizer"
],
"unet": [
"diffusers",
"UNet2DConditionModel"
],
"vae": [
"diffusers",
"AutoencoderKL"
]
}
has been incorrectly initialized or <class '__main__.MyCustomPipeline'> is incorrectly implemented. Expected ['feature_extractor', 'image_encoder', 'kwargs', 'safety_checker', 'scheduler', 'text_encoder', 'tokenizer', 'unet', 'vae'] to be defined, but ['feature_extractor', 'image_encoder', 'requires_safety_checker', 'safety_checker', 'scheduler', 'text_encoder', 'tokenizer', 'unet', 'vae'] are defined.
```
### System Info
diffusers version: 0.34.0
Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35
Python version: 3.12.11 | [GCC 11.2.0]
PyTorch version: 2.5.1+cu121
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/12255 | closed | [
"bug"
] | 2025-08-28T18:31:14Z | 2025-08-30T07:41:16Z | 2 | BoostZhu |
huggingface/peft | 2,759 | PeftModel trainable parameters with multiple adapters | ### System Info
peft-0.17.1
python 3.9
### Who can help?
@BenjaminBossan
### Reproduction
**1) modules_to_save gradient true even when is_trainable=False**
The adapters has both modules_to_save and target_modules
```
peft_backbone = PeftModel.from_pretrained(
target_backbone,
safe_encoder_adapter_path1,
adapter_name=adapter_name1,
is_trainable=False
)
status = peft_backbone.get_model_status()
check_trainable_params(target_backbone)
```
```
def check_trainable_params(model, print_layers=True):
total_params = 0
trainable_params = 0
for name, param in model.named_parameters():
num_params = param.numel()
total_params += num_params
if param.requires_grad:
trainable_params += num_params
if print_layers:
print(f"[TRAINABLE] {name} - shape: {tuple(param.shape)}")
elif print_layers:
print(f"[FROZEN] {name} - shape: {tuple(param.shape)}")
print(f"\nTotal parameters: {total_params:,}")
print(f"Trainable parameters: {trainable_params:,}")
print(f"Frozen parameters: {total_params - trainable_params:,}")
print(f"Trainable ratio: {100 * trainable_params / total_params:.2f}%")
return trainable_params, total_params
```
example of printed trainable params
[TRAINABLE] blocks.0.modules_to_save.adapter1.norm1.weight - shape: (1408,)
[FROZEN] blocks.2.attn.qkv.lora_A.adapter1.weight - shape: (32, 1408)
**2) Loading an adapter after using from_pretrained**
```
peft_backbone = PeftModel.from_pretrained(
target_backbone,
safe_encoder_adapter_path1,
adapter_name=modality_name,
is_trainable=False
)
status = peft_backbone.get_model_status()
target_backbone.load_adapter(safe_encoder_adapter_path2, is_trainable=False, adapter_name=adapter2)
status = peft_backbone.get_model_status()
```
status before load_adapter shows {'adapter1': False} while after the load_adapter {'adapter2': False, 'adapter1': True}
I think the issue comes from `BaseTunerLayer.set_adapter`, which sets all my adapter1 LoRA layers' gradients to True while properly setting the adapter2 LoRA layers' gradients to False.
`BaseTunerLayer.set_adapter` is called when doing `self.add_adapter` in `PeftModel.load_adapter`.
### Expected behavior
**1) modules_to_save gradient true even when is_trainable=False**
Expecting the gradients for modules_to_save layers to be false. It's working properly for lora layers.
**2) Loading an adapter after using from_pretrained**
Expecting adapter1 to remain gradient false (is_trainable=False during from_pretrained loading) even after loading another adapter.
**Other information:**
Regarding issue 1), in the code of 2), the modules_to_save for adapter2 were properly set to false when using load_adapter with is_trainable=false.
[TRAINABLE] base_model.model.blocks.39.modules_to_save.adapter1.mlp.fc2.bias - shape: (1408,)
[FROZEN] base_model.model.blocks.39.modules_to_save.adapter2.norm1.weight - shape: (1408,)
More generally, is there any reason PeftModel has to change the `requires_grad` of adapters when calling `set_adapter`? (https://github.com/huggingface/peft/issues/2749)
I assume it might be related to the fact that there could be a problem with having a non-activated adapter whose parameters have `requires_grad=True`?
When using the library, I was expecting to be able to set which params needed to be trained across all my adapters upon loading them with `from_pretrained` and `load_adapter` (or manually), then simply switch between adapters during training with `set_adapter`.
| https://github.com/huggingface/peft/issues/2759 | closed | [] | 2025-08-28T16:36:25Z | 2025-10-06T15:04:09Z | 8 | NguyenRichard |
huggingface/transformers | 40,462 | Question about RoPE Implementation in modeling_llama: Should torch.cat be repeat_interleave? | Hi,
I was going through the code for `modeling_llama` and the RoPE implementation. I came across the following function:
```
def forward(self, x, position_ids):
inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1).to(x.device)
position_ids_expanded = position_ids[:, None, :].float()
device_type = x.device.type if isinstance(x.device.type, str) and x.device.type != "mps" else "cpu"
with torch.autocast(device_type=device_type, enabled=False): # Force float32
freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
emb = torch.cat((freqs, freqs), dim=-1)
cos = emb.cos() * self.attention_scaling
sin = emb.sin() * self.attention_scaling
return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
```
I believe the line `emb = torch.cat((freqs, freqs), dim=-1)` should be replaced with `repeat_interleave`. This is because the cosine/sine angles used in the element-wise multiplication should be structured like:
```
[cos(θ₀), cos(θ₀), cos(θ₁), cos(θ₁), cos(θ₂), cos(θ₂), ...]
```
This way, further down the stream when we compute:
```
q_embed = (q * cos) + (rotate_half(q) * sin)
```
...the values are aligned properly for pairwise rotation. However, the current `torch.cat((freqs, freqs), dim=-1)` produces:
```
[cos(θ₀), cos(θ₁), cos(θ₂), ..., cos(θ₀), cos(θ₁), cos(θ₂), ...]
```
which seems incorrect. Am I missing something?
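For what it's worth, a way to sanity-check this: with `torch.cat`, `rotate_half` pairs dimension `i` with `i + d/2`, whereas `repeat_interleave` pairs adjacent dimensions. The two conventions give the same rotation once the head dimensions are permuted, and my understanding (hedged) is that the HF checkpoint conversion scripts permute the q/k projection weights to match the `cat` layout, which is why the code is consistent as written. A small pure-Python check of that equivalence:

```python
import math

def rope_interleaved(q, theta, pos):
    # Paper convention: rotate adjacent pairs (q0, q1), (q2, q3), ...
    out = []
    for i in range(0, len(q), 2):
        c = math.cos(pos * theta[i // 2])
        s = math.sin(pos * theta[i // 2])
        out += [q[i] * c - q[i + 1] * s, q[i] * s + q[i + 1] * c]
    return out

def rope_hf(q, theta, pos):
    # HF convention: cos/sin tiled via cat, pairs (i, i + d/2) via rotate_half.
    half = len(q) // 2
    cos = [math.cos(pos * t) for t in theta] * 2
    sin = [math.sin(pos * t) for t in theta] * 2
    rot = [-x for x in q[half:]] + list(q[:half])
    return [qi * c + ri * s for qi, c, ri, s in zip(q, cos, rot, sin)]

d = 8
theta = [0.5 ** i for i in range(d // 2)]
q = [float(i + 1) for i in range(d)]
pos = 3.0

# Permutation sending the interleaved layout to the "two halves" layout.
perm = list(range(0, d, 2)) + list(range(1, d, 2))

a = rope_interleaved(q, theta, pos)
b = rope_hf([q[p] for p in perm], theta, pos)
assert all(abs(a[p] - v) < 1e-9 for p, v in zip(perm, b))
```

So the `cat` layout is not wrong per se; it just assumes the weights were stored in the permuted order.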
Thanks,
Abhidip | https://github.com/huggingface/transformers/issues/40462 | closed | [] | 2025-08-26T16:32:41Z | 2025-08-27T10:01:11Z | 2 | abhidipbhattacharyya |
huggingface/transformers | 40,459 | `use_kernels=True` does not invoke custom kernels | ### System Info
- `transformers` version: 4.56.0.dev0
- Platform: Linux-5.4.0-216-generic-x86_64-with-glibc2.31
- Python version: 3.12.7
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: 1.10.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
@ArthurZucker
### Reproduction
```python
import logging
logging.basicConfig(level=logging.INFO)
import torch
from transformers import (
AutoTokenizer, AutoModelForCausalLM,
)
model_id = "openai/gpt-oss-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
device_map="auto",
use_kernels=True,
).eval()
messages = [
{"role": "system", "content": "What is Tensor Parallelism?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="low",
).to(model.device)
with torch.inference_mode():
generated = model.generate(
**inputs,
do_sample=False,
temperature=None,
max_new_tokens=64,
disable_compile=True,
)
decoded_generation = tokenizer.batch_decode(generated, skip_special_tokens=True)[0]
print(decoded_generation)
```
### Expected behavior
Noting that I have activated logging, I should be able to see the logs for all the custom kernels being invoked. While the `LigerRMSNorm` is being invoked I do not see the `MegaBlocksMoeMLP` as it should be (as [stated in the modelling file here](https://github.com/huggingface/transformers/blob/263d06fedc17bb28f70dabe2acae562bc617ef9b/src/transformers/models/gpt_oss/modeling_gpt_oss.py#L156)).
I also note that while the `LigerRMSNorm` is invoked but it complains that it cannot be used due to not being compatible with compile:
```
INFO:root:Using layer `LigerRMSNorm` from repo `kernels-community/liger_kernels` (revision: main) for layer `LigerRMSNorm`
INFO:root:Layer does not support torch.compile, using fallback
```
I have used `disable_compile=True,` in the `.generate()` method, which should have taken care of the issue.
### Solution
The way I could invoke the custom kernels was to swap out these lines:
https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L5241-L5243
With the following
```py
from kernels import Device, Mode, kernelize
kernelize(model, device=Device(type=model.device.type), mode=Mode.INFERENCE)
```
While this is not the solution, and we should infer what mode the model is in, I thought of listing the current personal solution down for ease of ideation. | https://github.com/huggingface/transformers/issues/40459 | closed | [
"bug"
] | 2025-08-26T13:32:35Z | 2025-09-16T08:50:55Z | 1 | ariG23498 |
huggingface/diffusers | 12,241 | WAN2.1 FLF2V: Incorrect MASK Creation???? | Hello! I think this may be an error. (Or not, please explain it to me!!)
In the **WanImageToVideoPipeline** class in `pipeline_wan_i2v.py`,
<img width="868" height="243" alt="Image" src="https://github.com/user-attachments/assets/8108a9e9-8632-44a1-93b8-abd9ae6a22cd" />
(the code is part of the `prepare_latents` function)
**For I2V**, the mask looks like the following:
```
[[1, 0, 0, ... , 0]
[1, 0, 0, ... , 0]
[1, 0, 0, ... , 0]
[1, 0, 0, ... , 0]]
```
I understand that when the mask is 1, the input video frame does not change.
(*Mask shape: [1, 4, 21, 60, 104] = [B, C, F, H, W])
**But in the FLF2V case,** the mask looks like the following:
```
[[1, 0, 0, ... , 0]
[1, 0, 0, ... , 0]
[1, 0, 0, ... , 0]
[1, 0, 0, ... , 1]]
```
Here, **why does the last frame's mask have a 1 only in the last channel??**
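My current reading of `prepare_latents` (hedged): the per-pixel-frame mask is `[1, 0, ..., 0, 1]`, the first frame is repeated `vae_scale_factor_temporal` (4) times to align with the temporal compression, and the 84 values are then viewed as 21 latent frames x 4, with that 4-axis becoming the channel-like dim. The lone trailing 1 of the last frame therefore lands in the last of those 4 slots. A pure-Python sketch:

```python
num_frames, stride = 81, 4                   # pixel frames, temporal VAE stride
mask = [1] + [0] * (num_frames - 2) + [1]    # condition on first and last frame
mask = [mask[0]] * stride + mask[1:]         # repeat the first frame 4x
assert len(mask) == stride + num_frames - 1  # 84 values -> 21 x 4
latent = [mask[i * stride:(i + 1) * stride] for i in range(len(mask) // stride)]
assert latent[0] == [1, 1, 1, 1]             # first latent frame fully masked
assert latent[-1] == [0, 0, 0, 1]            # last frame: 1 only in the last slot
```

This matches the matrices shown above: channel rows 0 to 2 end in 0, and only channel row 3 ends in 1.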
Is there anyone who can explain this part? | https://github.com/huggingface/diffusers/issues/12241 | open | [] | 2025-08-26T12:23:09Z | 2025-08-27T02:10:49Z | 1 | KyujinHan |
huggingface/lerobot | 1,792 | how to train lerobot model offline with offline data? | Hi, I'm trying to configure lerobot to train with pre-downloaded models and datasets. I'm stuck, however, on how to organize the model cache and dataset cache, and how to tell the train script that everything should be loaded offline.
I tried to download the model and dataset:
```
$ hf download lerobot/pi0 --cache-dir ~/lerobot_download/hf_models/lerobot/pi0/
$ hf download lerobot/aloha_sim_transfer_cube_human --repo-type dataset --cache-dir ~/lerobot_download/hf_datasets/lerobot/aloha_sim_transfer_cube_human/
```
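As a hedged sketch (exact lerobot flags vary by version, but the Hub side is standard `huggingface_hub` behavior): point the cache at one shared directory and force offline mode before launching training, assuming everything was downloaded into that same cache:

```shell
# Use one shared cache for models and datasets, then forbid network access.
export HF_HOME="$HOME/lerobot_download/hf_cache"
export HF_HUB_OFFLINE=1
```

Note that passing a different `--cache-dir` per repo, as above, creates independent caches; downloading everything into the same cache (or into `HF_HOME`) lets the training script resolve repo IDs locally without hitting the Hub.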
| https://github.com/huggingface/lerobot/issues/1792 | closed | [] | 2025-08-26T10:20:56Z | 2025-09-03T10:48:37Z | null | dalishi |
huggingface/accelerate | 3,748 | How to pass two layer classes using --fsdp_transformer_layer_cls_to_wrap? | https://github.com/huggingface/accelerate/issues/3748 | closed | [] | 2025-08-26T08:56:32Z | 2025-08-26T09:14:18Z | null | sunjian2015 |
huggingface/diffusers | 12,239 | Support for InfiniteTalk | ### Model/Pipeline/Scheduler description
https://huggingface.co/MeiGen-AI/InfiniteTalk is a wonderful audio-driven video generation model, based on Wan 2.1, that can also support infinite frames. The demos and users' workflows are also awesome. Some examples: https://www.runninghub.cn/ai-detail/1958438624956203010
### Open source status
- [x] The model implementation is available.
- [x] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
https://huggingface.co/MeiGen-AI/InfiniteTalk
https://github.com/MeiGen-AI/InfiniteTalk | https://github.com/huggingface/diffusers/issues/12239 | open | [
"help wanted",
"New pipeline/model",
"contributions-welcome"
] | 2025-08-26T06:57:43Z | 2025-09-05T00:18:46Z | 1 | supermeng |
huggingface/transformers | 40,406 | Cache tokenizer | ### Feature request
I am using Grounding DINO, which makes use of the `bert-base-uncased` tokenizer. Unfortunately, this tokenizer is never downloaded to the cache, forcing a remote call to the API. Please allow the tokenizer to be cached locally.
### Motivation
I want to use my software offline.
### Your contribution
I'm trying to find a way to download it manually as a workaround. | https://github.com/huggingface/transformers/issues/40406 | open | [
"Feature request"
] | 2025-08-24T08:36:14Z | 2025-09-10T11:49:06Z | 5 | axymeus |
huggingface/tokenizers | 1,851 | SentencePieceBPE + Unicode NFD preprocessing leads to noise ? | Hi,
I have had the issue multiple times, so I assume I am doing something wrong.
**Versions:**
- tokenizers==0.21.4
- transformers==4.55.4
**Training script**
```py
from transformers import PreTrainedTokenizerFast
from pathlib import Path
from read import get_texts_iter_for_tokenizer
from tokenizers import SentencePieceBPETokenizer, normalizers, pre_tokenizers
def main():
output_dir = Path("hf_tokenizer")
output_dir.mkdir(parents=True, exist_ok=True)
# Dump texts to a file
texts = get_texts_iter_for_tokenizer()
# Train SentencePiece model
tokenizer = SentencePieceBPETokenizer()
# Adding normalization and pre_tokenizer
tokenizer.normalizer = normalizers.Sequence([normalizers.NFD()])
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()
# Adding special tokens and creating trainer instance
special_tokens = ["<unk>", "<pad>", "<cls>", "<sep>", "<mask>"]
# Training from iterator REMEMBER it's training on test set...
tokenizer.train_from_iterator(texts, special_tokens=special_tokens, show_progress=True)
fast_tokenizer = PreTrainedTokenizerFast(
tokenizer_object=tokenizer,
unk_token="<unk>",
pad_token="<pad>",
cls_token="<cls>",
sep_token="<sep>",
mask_token="<mask>"
)
fast_tokenizer.save_pretrained(str(output_dir))
```
Script to reproduce bug:
```py
from transformers import PreTrainedTokenizerFast
hf_tokenizer = PreTrainedTokenizerFast.from_pretrained("hf_tokenizer")
# Test
print(hf_tokenizer.tokenize("โiฬ reฬ dnฬi uฬพsum"))
# ['รขฤฃฤฌ', 'i', 'รฤฅ', 'ฤ re', 'รฤฅ', 'ฤ dn', 'รฤฅ', 'i', 'ฤ u', 'รยพ', 'sum']
print(hf_tokenizer.decode(hf_tokenizer.encode("โiฬ reฬ dnฬi uฬพsum")))
# รขฤฃฤฌiรฤฅฤ reรฤฅฤ dnรฤฅiฤ uรยพsum
```
I assume I am doing something wrong around preprocessing / postprocessing ?
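For what it's worth, the "noise" in the tokens is most likely the ByteLevel pre-tokenizer's byte-to-unicode alphabet (the same table GPT-2 uses), not corruption: NFD splits accented characters into base + combining marks, and each UTF-8 byte then gets a printable stand-in character. A pure-Python sketch of that mapping:

```python
def bytes_to_unicode():
    # GPT-2 style byte -> printable-character table, as used by ByteLevel.
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("\u00a1"), ord("\u00ac") + 1))
          + list(range(ord("\u00ae"), ord("\u00ff") + 1)))
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, map(chr, cs)))

table = bytes_to_unicode()
inverse = {v: k for k, v in table.items()}

text = "i\u0300"  # NFD: base 'i' + combining grave accent (U+0300)
visible = "".join(table[b] for b in text.encode("utf-8"))
restored = bytes(inverse[c] for c in visible).decode("utf-8")
assert restored == text  # the mapping is lossless; only the decoder is missing
```

The decode side shows the stand-ins verbatim because no matching decoder is configured; I believe setting `tokenizer.decoder = decoders.ByteLevel()` (from `tokenizers import decoders`) before saving makes `decode` map the bytes back to real text.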
| https://github.com/huggingface/tokenizers/issues/1851 | open | [] | 2025-08-24T08:28:08Z | 2025-09-17T09:33:11Z | 3 | PonteIneptique |
huggingface/coreml-examples | 17 | how to get absolute depth๏ผmeters๏ผ | how to get absolute depth๏ผmeters๏ผ | https://github.com/huggingface/coreml-examples/issues/17 | open | [] | 2025-08-24T03:20:58Z | 2025-08-24T03:20:58Z | null | jay25208 |
huggingface/transformers | 40,398 | NVIDIA RADIO-L | ### Model description
While exploring, I came across [nvidia/RADIO-L](https://huggingface.co/nvidia/RADIO-L) and was wondering about its current support.
1. May I ask if RADIO-L is already supported in Transformers?
2. If not, would it be considered suitable to add?
3. If a model requires trust_remote_code=True, what does that signify regarding its suitability for addition to Transformers?
Please share the general criteria for models to be added to Transformers.
Thank you very much for your guidance
cc: @zucchini-nlp @Rocketknight1
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_ | https://github.com/huggingface/transformers/issues/40398 | open | [
"New model"
] | 2025-08-23T11:14:42Z | 2025-08-26T14:44:11Z | 4 | Uvi-12 |
huggingface/diffusers | 12,222 | [Contribution welcome] adding a fast test for Qwen-Image Controlnet Pipeline | We are looking for help from the community to add a fast test for this PR
https://github.com/huggingface/diffusers/pull/12215
You can add a file under this folder:
https://github.com/huggingface/diffusers/tree/main/tests/pipelines/qwenimage
You can reference other tests we added for qwen pipelines [example](https://github.com/huggingface/diffusers/blob/main/tests/pipelines/qwenimage/test_qwenimage.py), as well as controlnet fast tests [example](https://github.com/huggingface/diffusers/tree/main/tests/pipelines/controlnet_flux) | https://github.com/huggingface/diffusers/issues/12222 | closed | [
"good first issue",
"help wanted",
"contributions-welcome"
] | 2025-08-22T21:04:50Z | 2025-08-25T01:58:59Z | 6 | yiyixuxu |
huggingface/diffusers | 12,221 | [Looking for community contribution] support DiffSynth Controlnet in diffusers | ### Model/Pipeline/Scheduler description
Hi!
We want to add first-party support for DiffSynth ControlNet in diffusers, and we are looking for some help from the community!
Let me know if you're interested!
### Open source status
- [x] The model implementation is available.
- [x] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
https://huggingface.co/SahilCarterr/Qwen-Image-Blockwise-ControlNet-Canny
https://huggingface.co/SahilCarterr/Qwen-Image-Blockwise-ControlNet-Depth | https://github.com/huggingface/diffusers/issues/12221 | open | [
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2025-08-22T20:49:18Z | 2025-09-11T10:01:08Z | 5 | yiyixuxu |
huggingface/safetensors | 649 | How to determine if a file is a safetensors file | Is there a good and fast way to determine if a file is a safetensors file? We would like to avoid reading the whole header.
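For what it's worth, the documented on-disk layout (an 8-byte little-endian length prefix followed by a JSON header) makes a cheap sniff possible without parsing the full header. A hedged stdlib sketch (the `looks_like_safetensors` name and size cap are my own), demonstrated on synthetic files following the spec:

```python
import json
import struct
import tempfile

def looks_like_safetensors(path, max_header=100_000_000):
    # Layout: 8-byte little-endian header length N, then N bytes of JSON metadata.
    with open(path, "rb") as f:
        prefix = f.read(8)
        if len(prefix) < 8:
            return False
        (n,) = struct.unpack("<Q", prefix)
        if n == 0 or n > max_header:
            return False
        return f.read(1) == b"{"  # peek one byte; a stricter check would JSON-parse the header

# Demonstrate on a synthetic file following the spec (not a real checkpoint).
header = json.dumps({"t": {"dtype": "F32", "shape": [1], "data_offsets": [0, 4]}}).encode()
with tempfile.NamedTemporaryFile(suffix=".safetensors", delete=False) as f:
    f.write(struct.pack("<Q", len(header)) + header + b"\x00" * 4)
    sample = f.name
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"definitely not a safetensors file")
    other = f.name
```

The length sanity check rejects most non-safetensors files immediately, since 8 arbitrary bytes rarely decode to a plausible header size.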
Background: we are currently trying to add safetensors as a datatype to the Galaxy project: https://github.com/galaxyproject/galaxy/pull/20754 | https://github.com/huggingface/safetensors/issues/649 | open | [] | 2025-08-22T09:17:49Z | 2025-09-03T11:08:30Z | null | bernt-matthias |
huggingface/lerobot | 1,775 | What's the finetuning method? Is it all full-finetuning? | I couldn't find anything about LoRA finetuning; is the default method full finetuning for now? | https://github.com/huggingface/lerobot/issues/1775 | closed | [
"question",
"policies"
] | 2025-08-22T06:48:25Z | 2025-10-07T20:55:10Z | null | lin-whale |
huggingface/lerobot | 1,774 | Finetune smolvla with vision encoder | ### System Info
```Shell
- `lerobot` version: 0.1.0
- Platform: Linux-6.8.0-65-generic-x86_64-with-glibc2.35
- Python version: 3.10.18
- Huggingface_hub version: 0.33.4
- Dataset version: 3.6.0
- Numpy version: 2.2.6
- PyTorch version (GPU?): 2.7.1+cu126 (True)
- Cuda version: 12060
- Using GPU in script?: <fill in>
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [x] My own task or dataset (give details below)
### Reproduction
nothing
### Expected behavior
I found that when attempting to fine-tune the model to grasp objects of different colors but identical shapes, it consistently grasped the wrong object. I also found that the output feature differences from the VLM for the same image, with prompts such as "grasp the green duck into the box" versus "grasp the yellow duck into the box", were nearly zero. Is it possible that the VLM has weak color differentiation capabilities? Could official support be added for fine-tuning the vision encoder together with the policy?
"question",
"policies",
"good first issue"
] | 2025-08-22T05:20:58Z | 2025-10-08T11:31:02Z | null | THU-yancow |
huggingface/transformers | 40,366 | [Feature] Support fromjson in jinja2 chat template rendering | ### Feature request
GLM-4.5 requires `fromjson` in Jinja2 to deserialize the string-typed `tool_calls.function.arguments` into a dict within the chat template, so it can iterate over the arguments' key-value pairs inside the Jinja2 chat template.
```
{% for tc in m.tool_calls %}
{%- if tc.function %}
{%- set tc = tc.function %}
{%- endif %}
{{ '\n<tool_call>' + tc.name }}
{% set _args = tc.arguments | fromjson %}
{% for k, v in _args.items() %}
<arg_key>{{ k }}</arg_key>
<arg_value>{{ v \| tojson(ensure_ascii=False) if v is not string else v }}</arg_value>
{% endfor %}
</tool_call>{% endfor %}
{% endif %}
```
https://huggingface.co/zai-org/GLM-4.5/blob/main/chat_template.jinja#L75
### Motivation
Same as the feature request above.
### Your contribution
I will submit a PR | https://github.com/huggingface/transformers/issues/40366 | open | [
"Feature request"
] | 2025-08-22T05:11:06Z | 2025-08-22T05:18:45Z | 1 | byjiang1996 |
huggingface/peft | 2,749 | Set multiple adapters active when training | Hi! In incremental scenarios, I want to train a new adapter while keeping some old adapters active. Note that PeftModel can set the active adapter via `model.set_adapter()`, but only one adapter at a time, since the `adapter_name` argument is typed `str` rather than `List[str]`. I also notice that the `PeftMixedModel` class can set multiple adapters active, but it only supports inference; that class uses `model.base_model.set_adapter()` to achieve it. So I am not sure whether I can also set multiple adapters active when training. My code is as follows:
```python
model = AutoModelForCausalLM.from_pretrained()
peft_config = LoraConfig()
model = get_peft_model(model, peft_config, adapter_name="new")
model.load_adapter(adapter_path, adapter_name="old")
model.base_model.set_adapter(["new", "old"])
for name, param in model.named_parameters():
    if "lora_A.old" in name or "lora_B.old" in name:
        param.requires_grad = False
training_args = TrainingArguments()
trainer = Trainer()
trainer.train()
```
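As a sanity check for a setup like the one above (a generic sketch, not a PEFT API), counting which parameters would actually receive gradients, grouped by adapter name, can confirm that only the intended adapter trains:

```python
from collections import Counter

def adapter_grad_summary(model):
    """Count trainable parameter elements per adapter tag ('new', 'old',
    or base), to verify that only the intended adapter will be updated.
    Works with anything exposing named_parameters()."""
    summary = Counter()
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        if ".new." in name:
            summary["new"] += param.numel()
        elif ".old." in name:
            summary["old"] += param.numel()
        else:
            summary["base"] += param.numel()
    return summary
```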
| https://github.com/huggingface/peft/issues/2749 | closed | [] | 2025-08-21T09:59:25Z | 2025-09-29T15:04:15Z | 4 | Yongyi-Liao |
huggingface/lerobot | 1,765 | Questions about using LIBERO dataset (loss starts extremely high) | Hello,
I am training on the "**IPEC-COMMUNITY/libero_spatial_no_noops_1.0.0_lerobot**" dataset, but I encountered an issue (here is the dataset: https://huggingface.co/datasets/IPEC-COMMUNITY/libero_spatial_no_noops_1.0.0_lerobot):
At the very beginning of training, the loss is extremely high (around 500).
I would like to clarify a few points:
1. Is the policy output expected to be relative actions or absolute actions?
2. Do I need to perform any preprocessing on the dataset? For example:
   - Normalizing the gripper action to the range [-1, 1]?
   - Any other scaling or transformation?
3. What is the exact relationship between the action and state in the dataset? I noticed that trajectories sometimes look different than expected (shown in the figure below). Do we need to process either the action or state to align them?
Any guidance on the correct usage of the dataset would be greatly appreciated. Thanks!
<img width="1229" height="592" alt="Image" src="https://github.com/user-attachments/assets/b1102728-4916-405f-9a87-ab190b07f58b" />
| https://github.com/huggingface/lerobot/issues/1765 | open | [
"question",
"dataset",
"simulation"
] | 2025-08-21T05:06:51Z | 2025-09-23T09:46:41Z | null | hamondyan |
huggingface/transformers | 40,330 | open-qwen2vl-base | ### Model description
Is there any plan to add the open-qwen2vl-base model?
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_ | https://github.com/huggingface/transformers/issues/40330 | open | [
"New model"
] | 2025-08-21T02:24:01Z | 2025-08-23T10:18:28Z | 5 | olccihyeon |
huggingface/tokenizers | 1,850 | Safe encoding of strings that might contain special token text | When feeding untrusted string inputs into an LLM, it's often important not to convert any of the input into special tokens, which might indicate message boundaries or other syntax. Among other reasons, this is important for guarding against prompt injection attacks.
tiktoken provides a way to control how the encoding deals with special tokens, using the `allowed_special` and `disallowed_special` arguments. For example:
```python
enc = tiktoken.get_encoding("o200k_base")
enc.encode("<|endoftext|>", disallowed_special=[]) # => [27, 91, 419, 1440, 919, 91, 29]
enc.encode("<|endoftext|>") # => ValueError
enc.encode("<|endoftext|>", allowed_special=set(["<|endoftext|>"]) # => [199999]
```
However, I can't figure out how to avoid tokenizing strings like <|im_start|> into special tokens, when using the tokenizers library. Note that I want to be able to *decode* the special token to its string representation for visualization. However, I want to make sure that when I call `encode`, I don't get a special token -- I tokenize the string representation as if there was no <|im_start|> special token.
Maybe the easiest way to do this is to create two separate tokenizers, by creating new json files, but this is pretty inconvenient. | https://github.com/huggingface/tokenizers/issues/1850 | closed | [] | 2025-08-21T00:53:17Z | 2025-09-01T18:03:59Z | 5 | joschu |
huggingface/peft | 2,746 | Gemma 2/3 Attention: Expected a single attention mask, got 2 instead | Hi! I'm getting this error, `ValueError: Expected a single attention mask, got 2 instead`, at inference (after prompt tuning). I've only had this happen with the Gemma 2 and 3 models, so it might have something to do with their specific attention mechanism. Is there a workaround (or am I maybe missing something)?
I'm running the following:
```
model_name = "google/gemma-2-2b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
soft_model = get_peft_model(model, prompt_config)
inputs = tokenizer(model_instruction, return_tensors="pt")
outputs = soft_model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_new_tokens=num_gen_tokens,
    eos_token_id=tokenizer.eos_token_id,
)
``` | https://github.com/huggingface/peft/issues/2746 | closed | [] | 2025-08-20T18:08:02Z | 2025-08-27T02:43:22Z | 8 | michelleezhang |
huggingface/transformers | 40,323 | Is there a plan to add DINOv3 into AutoBackbone? | ### Feature request
Is there a plan to add DINOv3 to AutoBackbone? At present, DINOv2 is already supported, and I think DINOv3 should be able to inherit from it directly. Thanks a lot.
### Motivation
For the convenience of use
### Your contribution
DINOv3 should be able to inherit from DINOv2 directly. | https://github.com/huggingface/transformers/issues/40323 | closed | [
"Feature request",
"Vision"
] | 2025-08-20T16:02:45Z | 2025-11-11T16:22:08Z | 4 | Farenweh |
huggingface/transformers | 40,263 | [VLMs] How to process a batch that contains samples with and without images? | Is there a **standard** way to process a batch that contains samples with and without images?
For example:
```python
from transformers import AutoProcessor
from PIL import Image
import numpy as np
model_id = ... # tested are "google/gemma-3-4b-it", "HuggingFaceM4/idefics2-8b", "HuggingFaceM4/Idefics3-8B-Llama3", "HuggingFaceTB/SmolVLM2-2.2B-Instruct", "llava-hf/llava-1.5-7b-hf", "llava-hf/llava-v1.6-mistral-7b-hf", "OpenGVLab/InternVL3-8B-hf", "Qwen/Qwen2-VL-2B-Instruct","Qwen/Qwen2.5-VL-3B-Instruct"]
processor = AutoProcessor.from_pretrained(model_id)
messages = [
    [{"role": "user", "content": [{"type": "text", "text": "What's the capital of France?"}]}],
    [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "What is it?"}]}],
]
texts = processor.apply_chat_template(messages)
image = Image.fromarray(
    np.random.uniform(low=0.0, high=255.0, size=(32, 48, 3)).astype(np.uint8)
)
images = [[], [image]]
processor(images=images, text=texts)
```
This fails for all models I tested.
```python
images=[image] # The only syntax I found that works for some models: llava-hf/llava-1.5-7b-hf, llava-hf/llava-v1.6-mistral-7b-hf, OpenGVLab/InternVL3-8B-hf, Qwen/Qwen2-VL-2B-Instruct, Qwen/Qwen2.5-VL-3B-Instruct
images = [None, [image]] # always fails
images = [None, image] # always fails
images = [[], [image]] # always fails
```
### Expected behavior
There should be a standard / documented way to batch process mixed inputs (some samples with images, some without).
| https://github.com/huggingface/transformers/issues/40263 | closed | [] | 2025-08-19T05:09:36Z | 2025-09-18T08:08:51Z | null | qgallouedec |
huggingface/diffusers | 12,185 | What's the difference between DreamBooth LoRA and traditional LoRA? | I see a lot of examples using DreamBooth LoRA training code. What's the difference between this and traditional LoRA training? Can this DreamBooth LoRA training code be adapted to standard SFT LoRA code? Does disabling with_prior_preservation return normal LoRA training? | https://github.com/huggingface/diffusers/issues/12185 | open | [] | 2025-08-19T03:32:30Z | 2025-08-19T15:04:22Z | 3 | MetaInsight7 |
huggingface/trl | 3,918 | How to use trl-SFTTrainer to train Qwen-30B-A3B? | Has anyone tried using TRL to train Qwen-30B-A3B-Instruct-2507? | https://github.com/huggingface/trl/issues/3918 | open | [
"โ question"
] | 2025-08-19T03:04:36Z | 2025-08-19T03:11:30Z | null | JeffWb |
huggingface/datasets | 7,739 | Replacement of "Sequence" feature with "List" breaks backward compatibility | PR #7634 replaced the Sequence feature with List in 4.0.0, so datasets saved with version 4.0.0 with that feature cannot be loaded by earlier versions. There is no clear option in 4.0.0 to use the legacy feature type to preserve backward compatibility.
Why is this a problem? I have a complex preprocessing and training pipeline dependent on 3.6.0; we manage a very large number of separate datasets that get concatenated during training. If just one of those datasets is saved with 4.0.0, they become unusable, and we have no way of "fixing" them. I can load them in 4.0.0 but I can't re-save with the legacy feature type, and I can't load it in 3.6.0 for obvious reasons.
Perhaps I'm missing something here, since the PR says that backward compatibility is preserved; if so, it's not obvious to me how. | https://github.com/huggingface/datasets/issues/7739 | open | [] | 2025-08-18T17:28:38Z | 2025-09-10T14:17:50Z | 1 | evmaki |
huggingface/gsplat.js | 119 | How to 4DGS (.splatv) | How can I generate the .splatv file and get it running on my local server? | https://github.com/huggingface/gsplat.js/issues/119 | open | [] | 2025-08-18T07:35:04Z | 2025-08-18T07:35:04Z | null | CetosEdit |
huggingface/diffusers | 12,165 | Failed to finetune the pre-trained model of 'stable-diffusion-v1-4' on image inpainting task | I finetuned the pre-trained model of 'stable-diffusion-inpainting' on image inpainting task, and all work well as the model is trained on image inpainting. But when I finetuned with the pre-trained model of 'stable-diffusion-v1-4' which is trained on text-to-image, the loss is NaN and the result is pure black.
As the two models have different input channel counts for the UNet, I changed the UNet input channels of 'stable-diffusion-v1-4' to fit the image inpainting task. So far the code runs, but the loss is NaN. I do not know where the problem is. How can I fine-tune the pre-trained 'stable-diffusion-v1-4' model on an image inpainting task? Should I change some hyperparameters? Any help will be appreciated, thanks! | https://github.com/huggingface/diffusers/issues/12165 | closed | [] | 2025-08-17T07:15:36Z | 2025-09-07T09:35:38Z | 7 | micklexqg |
huggingface/gym-hil | 27 | How to close the gripper in gym-hil sim? | Hello all.
I'm using macOS to practice with the gym-hil sim tutorial.
I figured out how to move the robot along x, y, z, but it seems impossible to close the gripper...
Could you all please share the correct key?
ChatGPT suggested the Ctrl key, but it's not working!
Thanks in advance. | https://github.com/huggingface/gym-hil/issues/27 | open | [] | 2025-08-15T13:46:12Z | 2025-08-15T13:57:26Z | null | cory0619 |
huggingface/peft | 2,742 | RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn | Hello, I am fine-tuning the LLaMA-2 7B model on an A100 40 GB GPU. Initially, I was getting a CUDA out-of-memory error. I tried various methods, such as reducing batch size, but none worked. Then I enabled:
model.gradient_checkpointing_enable()
After doing this, the OOM issue was resolved, but now I get the following error during backpropagation:
torch.autograd.backward(
File ".../torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File ".../torch/autograd/graph.py", line 829, in _engine_run_backward
return Variable._execution_engine.run_backward(
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I also tried:
model.enable_input_require_grads()
but the error still persists. I suspect the issue is related to enabling gradient checkpointing.
# In model_init()
reft_model.gradient_checkpointing_enable()
reft_model.enable_input_require_grads()
Is there something I am missing when using gradient checkpointing in this setup? | https://github.com/huggingface/peft/issues/2742 | closed | [] | 2025-08-15T06:21:50Z | 2025-09-23T15:04:07Z | 4 | Mishajain1110 |
huggingface/trl | 3,896 | How to gather completions before computing rewards in GRPOTrainer | Hi,
I found that the `reward_funcs` passed to GRPOTrainer is used per-device.
That is, if I set `num_generation=16`, `per_device_train_batch_size=4`, my customized reward function can only receive `4` completions.
However, my customized reward function calculates rewards depending on a global view over all `16` completions for each question.
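For concreteness, a toy sketch (hypothetical scoring) of the kind of reward that needs the global view: each completion's reward is the rank of its score among all completions for the question, normalized to [0, 1]. Computed on a per-device slice of 4 out of 16, the ranks would come out wrong.

```python
def rank_normalized_rewards(scores):
    """Map each score to its rank among ALL completions for the question,
    scaled to [0, 1]. This cannot be computed correctly from a 4-item
    per-device slice of a 16-completion group."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i])
    rewards = [0.0] * n
    for rank, i in enumerate(order):
        rewards[i] = rank / (n - 1) if n > 1 else 0.0
    return rewards
```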
How can I implement this? | https://github.com/huggingface/trl/issues/3896 | closed | [
"โ question",
"๐ Reward",
"๐ GRPO"
] | 2025-08-14T14:41:42Z | 2025-09-03T14:09:16Z | null | rubickkcibur |
huggingface/peft | 2,738 | Which base model weights are getting frozen after applying LoRA? | I have finetuned LLaVA-v1.5-7B with PEFT LoRA, and I found that after adding the LoRA adapters, all the weights get frozen except for the newly added LoRA layers and the mm_projector weights (non-LoRA). I would be glad to know the freezing logic implemented by PEFT, since not all the base model weights are getting frozen after applying LoRA.
Also, I have not added the mm_projector weights to `modules_to_save`. | https://github.com/huggingface/peft/issues/2738 | closed | [] | 2025-08-13T17:35:10Z | 2025-08-14T04:20:42Z | 1 | srbh-dl |
huggingface/diffusers | 12,136 | How to use Diffusers to Convert Safetensors SDXL 1.0 to Onnx? | Hello,
I'm trying to convert a safetensors checkpoint for SDXL to ONNX format.
I've tried Optimum already, but it fails every time.
Please help. | https://github.com/huggingface/diffusers/issues/12136 | closed | [] | 2025-08-13T06:33:22Z | 2025-10-31T03:13:28Z | null | CypherpunkSamurai |
huggingface/lerobot | 1,712 | Why hasn't the pi0 model learned the ability to place something in the specified positions? Is it because the number of datasets is insufficient? | I am creating a tic-tac-toe board and using yellow and green sandbags as pieces. I have collected a dataset of "the entire process of a robotic arm picking up yellow sandbags and placing them in nine different positions on the board". This dataset is used to train the pi0 model to achieve autonomous playing. The collection scope includes: changes in the board scene, motor action status, visual images, and text task instructions. However, when testing the trained pi0 model by giving tasks of placing sandbags in different positions on the board, it turns out that the so101 robotic arm has a poor understanding of position information. It can grab the sandbags just like in the recorded dataset, but most of the time it cannot place them in the specified positions. | https://github.com/huggingface/lerobot/issues/1712 | open | [
"question",
"policies"
] | 2025-08-12T10:15:26Z | 2025-12-22T08:10:47Z | null | Alex-Wlog |
huggingface/transformers | 40,089 | Could not import module 'AutoTokenizer'. Are this object's requirements defined correctly? | ### System Info
- torch @ https://download.pytorch.org/whl/cu124/torch-2.6.0%2Bcu124-cp310-cp310-linux_x86_64.whl
- torchaudio @ https://download.pytorch.org/whl/cu124/torchaudio-2.6.0%2Bcu124-cp310-cp310-linux_x86_64.whl
- torchvision @ https://download.pytorch.org/whl/cu124/torchvision-0.21.0%2Bcu124-cp310-cp310-linux_x86_64.whl
- unsloth==2025.6.12
- unsloth_zoo==2025.6.8
- accelerate==1.8.1
- bitsandbytes==0.46.0
- pydantic==2.11.7
- pydantic_core==2.33.2
- tokenizers==0.21.2
- transformers==4.52.4
- treelite==4.4.1
- treescope==0.1.9
- triton==3.2.0
- trl==0.19.0
- xformers==0.0.29.post3
- sympy==1.13.1
- cut-cross-entropy==25.1.1
- Python 3.10.16
- NVIDIA A10G (CUDA Version: 12.5)
- Ubuntu 24.04.2 LTS
### Who can help?
@ArthurZucker @itazap
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoTokenizer
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
File ~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2045, in _LazyModule.__getattr__(self, name)
2044 try:
-> 2045 module = self._get_module(self._class_to_module[name])
2046 value = getattr(module, name)
File ~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2075, in _LazyModule._get_module(self, module_name)
2074 except Exception as e:
-> 2075 raise e
File ~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2073, in _LazyModule._get_module(self, module_name)
2072 try:
-> 2073 return importlib.import_module("." + module_name, self.__name__)
2074 except Exception as e:
File /usr/local/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:992, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)
File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1004, in _find_and_load_unlocked(name, import_)
ModuleNotFoundError: No module named 'transformers.models.ipynb_checkpoints'
The above exception was the direct cause of the following exception:
ModuleNotFoundError Traceback (most recent call last)
File ~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2045, in _LazyModule.__getattr__(self, name)
2044 try:
-> 2045 module = self._get_module(self._class_to_module[name])
2046 value = getattr(module, name)
File ~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2075, in _LazyModule._get_module(self, module_name)
2074 except Exception as e:
-> 2075 raise e
File ~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2073, in _LazyModule._get_module(self, module_name)
2072 try:
-> 2073 return importlib.import_module("." + module_name, self.__name__)
2074 except Exception as e:
File /usr/local/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1006, in _find_and_load_unlocked(name, i | https://github.com/huggingface/transformers/issues/40089 | closed | [
"bug"
] | 2025-08-11T21:44:05Z | 2025-09-08T03:09:11Z | 3 | octavianBordeanu |
huggingface/candle | 3,052 | Candle vs. PyTorch performance | I'm running https://github.com/huggingface/candle/tree/main/candle-examples/examples/llava vs. https://github.com/fpgaminer/joycaption/blob/main/scripts/batch-caption.py on a Mac m1.
I'm seeing a significant performance difference; Candle seems much slower.
I enabled accelerate and metal features.
Would love some pointers how to improve it. | https://github.com/huggingface/candle/issues/3052 | open | [] | 2025-08-11T16:14:17Z | 2025-11-14T20:05:16Z | 8 | ohaddahan |
huggingface/diffusers | 12,124 | For qwen-image training file, Maybe "shuffle" of dataloader should be "False" when custom_instance_prompts is not None and cache_latents is False? | ### Describe the bug
I think "shuffle" of dataloader should be "False" when custom_instance_prompts is not None and cache_latents is False. Otherwise, it will lead to errors in the correspondence between prompt embedding and image during training, and prompt will not be followed when performing the task of T2I.
### Reproduction
None
### Logs
```shell
```
### System Info
None
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/12124 | open | [
"bug"
] | 2025-08-11T13:15:21Z | 2025-08-30T01:57:02Z | 2 | yinguoweiOvO |
huggingface/diffusers | 12,120 | How to train a LoRA with a distilled Flux model, such as flux-schnell? | **Is your feature request related to a problem? Please describe.**
I can use Flux as the base model to train a LoRA, but it needs 20 steps, which costs a lot of time. I want to train a LoRA based on a distilled model so that fewer steps still produce a good image. For example, a LoRA based on the flux-schnell model would only need 4 steps to generate a good image, and I could train many LoRAs like this, each needing only 4 generation steps.
**Describe the solution you'd like.**
I need a script, maybe located at examples/dreambooth/train_dreambooth_lora_flux_schnell.py.
I want to know how to train a LoRA based on a distilled model and get a good result.
**Describe alternatives you've considered.**
I want to train many LoRAs for a base model (flux or flux-schnell), not only one LoRA, and I want to generate with fewer steps. So I want to train LoRAs with a distilled model... how do I implement it? I tested the script [train_dreambooth_lora_flux.py](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_flux.py) by changing the base model from flux to flux-schnell, but the result is bad...
**Additional context.**
Any other implementation method is OK. | https://github.com/huggingface/diffusers/issues/12120 | open | [] | 2025-08-11T03:07:42Z | 2025-08-11T06:01:45Z | null | Johnson-yue |
huggingface/diffusers | 12,108 | Qwen Image and Chroma pipelines break when using schedulers that enable flow matching by parameter | ### Describe the bug
Several Schedulers support flow matching by using the prediction_type='flow_prediction" e.g.
```
pipe.scheduler = UniPCMultistepScheduler(prediction_type="flow_prediction", flow_shift=3.16, timestep_spacing='trailing', use_flow_sigmas=True)
```
However Chroma and Qwen Image will not work with these schedulers failing with the error
```
ValueError: The current scheduler class <class 'diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler'>'s `set_timesteps` does not support custom sigmas schedules. Please check whether you are using the correct scheduler.
```
Can we have this fixed, either by giving these schedulers the missing attributes and using them, or by rethinking the way these pipelines handle the timesteps?
### Reproduction
```py
import torch
from diffusers import QwenImagePipeline, UniPCMultistepScheduler
pipe = QwenImagePipeline.from_pretrained("Qwen/Qwen-Image",
                                         torch_dtype=torch.bfloat16)
#pipe.scheduler = FlowMatchEulerDiscreteScheduler(shift=3.16, use_beta_sigmas=True)
pipe.scheduler = UniPCMultistepScheduler(prediction_type="flow_prediction", flow_shift=3.16, timestep_spacing='trailing', use_flow_sigmas=True)
pipe.to("mps")
pipe("a nice picture of an rainbow")
```
### Logs
```shell
File "/Volumes/SSD2TB/AI/Diffusers/qwenimagelowmem.py", line 84, in <module>
    image = pipe(prompt_embeds=prompt_embeds, prompt_embeds_mask=prompt_embeds_mask,
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/SSD2TB/AI/Diffusers/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/SSD2TB/AI/Diffusers/lib/python3.11/site-packages/diffusers/pipelines/qwenimage/pipeline_qwenimage.py", line 619, in __call__
    timesteps, num_inference_steps = retrieve_timesteps(
                                     ^^^^^^^^^^^^^^^^^^^
File "/Volumes/SSD2TB/AI/Diffusers/lib/python3.11/site-packages/diffusers/pipelines/qwenimage/pipeline_qwenimage.py", line 119, in retrieve_timesteps
    raise ValueError(
ValueError: The current scheduler class <class 'diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler'>'s `set_timesteps` does not support custom sigmas schedules. Please check whether you are using the correct scheduler.
```
### System Info
- ๐ค Diffusers version: 0.35.0.dev0
- Platform: macOS-15.5-arm64-arm-64bit
- Running on Google Colab?: No
- Python version: 3.11.13
- PyTorch version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.34.3
- Transformers version: 4.52.4
- Accelerate version: 1.7.0
- PEFT version: 0.17.0
- Bitsandbytes version: not installed
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: Apple M3
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/12108 | open | [
"bug"
] | 2025-08-09T21:34:28Z | 2025-08-09T21:39:30Z | 0 | Vargol |
huggingface/transformers | 40,056 | Question: How to write a custom tokenizer from scratch | This guide introduces how to write a custom model and a custom model configuration: [here](https://huggingface.co/docs/transformers/main/en/custom_models). In addition, I want to create a custom tokenizer from scratch. Why?
I have a multilevel transcription problem: the model takes an input utterance and outputs 12 multilingual transcripts simultaneously. So I want to design a tokenizer such that it takes all 12 languages as a dict:
```python
{
"lang1": "text text",
"lang2": "text text",
"lang3": "text text",
}
```
and after tokenization
```python
{
"input_ids":
{
"lang1": "ids of lang 1",
"lang2": "ids of lang 2",
"lang3": "ids of lang 2",
}
}
```
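A minimal, framework-agnostic sketch of that interface (hypothetical; a real Hugging Face tokenizer would subclass `PreTrainedTokenizer` and implement its abstract methods, but the per-language fan-out could look like this):

```python
class MultiLangTokenizer:
    """Wraps one tokenizer per language; each inner tokenizer is any
    callable mapping text -> list of token ids (toy stand-ins here)."""

    def __init__(self, tokenizers: dict):
        self.tokenizers = tokenizers

    def __call__(self, batch: dict) -> dict:
        # Tokenize each language's text with its own tokenizer and keep
        # the per-language structure in the output.
        return {
            "input_ids": {
                lang: self.tokenizers[lang](text)
                for lang, text in batch.items()
            }
        }
```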
How can I do so? I cannot find docs on building such a custom tokenizer from scratch. | https://github.com/huggingface/transformers/issues/40056 | closed | [] | 2025-08-09T16:39:19Z | 2025-09-24T08:03:02Z | null | obadx |
huggingface/diffusers | 12,107 | accelerator.init_trackers error when trying with a custom object such as a list | ### Describe the bug
I set multiple prompts with nargs for the argument "--validation_prompt" in "train_dreambooth.py":
` parser.add_argument(
"--validation_prompt",
type=str,
default=["A photo of sks dog in a bucket", "A sks cat wearing a coat"],
nargs="*",
help="A prompt that is used during validation to verify that the model is learning.",
)`
but an error occurred at ` if accelerator.is_main_process:
tracker_name = "dreambooth-lora"
accelerator.init_trackers(tracker_name, config=vars(args))` :
"ValueError: value should be one of int, float, str, bool, or torch.Tensor"
Is it because TensorBoard only supports basic Python types and PyTorch tensors, but not custom objects such as lists?
So how can I visualize the run when the config has a custom object such as a list (e.g., an argument with nargs)?
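One common workaround (a sketch, not the only possible fix) is to sanitize the config dict before passing it to `init_trackers`, since TensorBoard hparams only accept int, float, str, and bool; everything else can be coerced to a string:

```python
def sanitize_tracker_config(config: dict) -> dict:
    """Keep values TensorBoard can log as-is; stringify the rest
    (lists from nargs, None, nested objects, ...)."""
    allowed = (int, float, str, bool)
    return {
        key: value if isinstance(value, allowed) else str(value)
        for key, value in config.items()
    }
```

Then `accelerator.init_trackers(tracker_name, config=sanitize_tracker_config(vars(args)))` should no longer raise, at the cost of the prompt list showing up as a single string in the tracker UI.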
### Reproduction
set the follow argument in "train_dreambooth.py" or other similar demos such as "train_amused.py":
` parser.add_argument(
"--validation_prompt",
type=str,
default=["A photo of sks dog in a bucket", "A sks cat wearing a coat"],
nargs="*",
help="A prompt that is used during validation to verify that the model is learning.",
)`
error occured at ` if accelerator.is_main_process:
tracker_name = "dreambooth-lora"
accelerator.init_trackers(tracker_name, config=vars(args))` with
"ValueError: value should be one of int, float, str, bool, or torch.Tensor"
### Logs
```shell
```
### System Info
- ๐ค Diffusers version: 0.33.0.dev0
- Platform: Linux-6.8.0-55-generic-x86_64-with-glibc2.39
- Running on Google Colab?: No
- Python version: 3.10.11
- PyTorch version (GPU?): 2.7.1+cu126 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.30.1
- Transformers version: 4.52.4
- Accelerate version: 1.8.1
- PEFT version: 0.15.2
- Bitsandbytes version: 0.45.4
- Safetensors version: 0.5.3
- xFormers version: 0.0.27.post2
- Accelerator: NVIDIA GeForce RTX 3090, 24576 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/12107 | open | [
"bug"
] | 2025-08-09T10:04:06Z | 2025-08-09T10:04:06Z | 0 | micklexqg |
huggingface/diffusers | 12,104 | IndexError: index 0 is out of bounds for dimension 0 with size 0 | ### Describe the bug
When I test the mit-han-lab/nunchaku-flux.1-kontext-dev model, it runs normally in a non-concurrent scenario, but throws an error when I try to run it with concurrent requests.
My GPU is a single RTX 4090D.
How can I enable multi-concurrency support on a single GPU?
Thank you in advance for your help.
Here is my error message:
[2025-08-08 17:14:50.242] [info] Initializing QuantizedFluxModel on device 0
[2025-08-08 17:14:50.382] [info] Loading partial weights from pytorch
[2025-08-08 17:14:51.445] [info] Done.
Injecting quantized module
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 99.47it/s]
Loading pipeline components...:  57%|█████▋    | 4/7 [00:00<00:00, 28.54it/s]You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
Loading pipeline components...: 100%|██████████| 7/7 [00:00<00:00, 19.02it/s]
Generation `height` and `width` have been adjusted to 752 and 1360 to fit the model requirements.
Generation `height` and `width` have been adjusted to 880 and 1168 to fit the model requirements.
 43%|████▎     | 12/28 [00:17<00:23,  1.45s/it]
 57%|█████▋    | 16/28 [00:18<00:13,  1.17s/it]
Error while processing image: index 29 is out of bounds for dimension 0 with size 29
Error while processing image: index 29 is out of bounds for dimension 0 with size 29
### Reproduction
```
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image
from concurrent.futures import ThreadPoolExecutor
from nunchaku import NunchakuFluxTransformer2dModel
from nunchaku.utils import get_precision
import time


def get_result(image_path, pipeline):
    time_begin = time.time()
    image = load_image(image_path).convert("RGB")
    size = image.size
    large_now = 1440
    small_now = round(1440 * (min(size) / max(size)) / 32) * 32
    width, height = (large_now, small_now) if size[0] > size[1] else (small_now, large_now)
    prompt = "Remove the watermark from the picture"
    image = pipeline(
        image=image,
        prompt=prompt,
        guidance_scale=2.5,
        num_inference_steps=28,
        height=height,
        width=width,
    ).images[0]
    image.save(image_path[:-4] + "_result.png")


def nunchaku_test(concurrency, pipeline):
    test_images = ["ๆฟๅๅพๆฐดๅฐ.jpg", "ๅงๅฎคๆฐดๅฐ.png"] * concurrency
    test_images = test_images[:concurrency]
    overall_start = time.time()
    with ThreadPoolExecutor(max_workers=concurrency) as executor:
        futures = [executor.submit(get_result, img_path, pipeline) for img_path in test_images]
        results = []
        for future in futures:
            try:
                results.append(future.result())
            except Exception as e:
                print(f"Error while processing image: {e}")
    overall_time = time.time() - overall_start


if __name__ == "__main__":
    transformer = NunchakuFluxTransformer2dModel.from_pretrained(
        f"/root/autodl-tmp/nunchaku-flux.1-kontext-dev/svdq-{get_precision()}_r32-flux.1-kontext-dev.safetensors"
    )
    pipeline = FluxKontextPipeline.from_pretrained(
        "/root/autodl-tmp/FLUX.1-Kontext-dev", transformer=transformer, torch_dtype=torch.bfloat16
    ).to("cuda")
    nunchaku_test(2, pipeline)
    nunchaku_test(4, pipeline)
```
### Logs
```shell
```
### System Info
~/FLUX.1-Kontext-Dev-nunchaku# diffusers-cli env
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- 🤗 Diffusers version: 0.35.0.dev0
- Platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.12.3
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.33.1
- Transformers version: 4.53.0
- Accelerate version: 1.8.1
- PEFT version: not installed
- Bitsandbytes version: not installed
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA GeForce RTX 4090 D, 24564 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/12104 | closed | [
"bug"
] | 2025-08-08T09:20:52Z | 2025-08-17T22:22:37Z | 1 | liushiton |
huggingface/datasets | 7,729 | OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory | > Hi is there any solution for that eror i try to install this one
pip install torch==1.12.1+cpu torchaudio==0.12.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
This works fine, but please tell me how to install a PyTorch version that fits the GPU. | https://github.com/huggingface/datasets/issues/7729 | open | [] | 2025-08-07T14:07:23Z | 2025-09-24T02:17:15Z | 1 | SaleemMalikAI |
huggingface/transformers | 39,992 | [gpt-oss] Transform checkpoint from safetensors to state dict | Yesterday I was working on gpt-oss. However, loading the weights gave me trouble.
For models like Qwen, I did things like this:
1. Create model on meta device
2. FSDP2 shard it, so it can fit in memory
3. On each GPU, it reads weights from safetensors in a generator style, to save memory.
4. Chunk the weights and copy them to the FSDP's `DTensor`.
This routine does not work for GPT-oss. Within `from_pretrained`, the mxfp4 quantizer somehow dequantizes the weights, yet I cannot find a clean way to utilize this capability. I had to modify the process and initialize a CPU version of the model in CPU memory.
How can we transform the safetensors to state dict directly? | https://github.com/huggingface/transformers/issues/39992 | closed | [] | 2025-08-07T13:24:06Z | 2025-09-15T08:02:55Z | 1 | fingertap |
huggingface/diffusers | 12,094 | [Wan2.2] pipeline_wan misses the 'shift' parameter used by Wan2.2-A14B-diffusers | **Firstly, I found that the quality of output using diffusers is poor.**
Later, I found that pipeline_wan in diffusers [0.34.0] did not support two-stage processing. I noticed that the community had already updated it, so I installed diffusers [0.35.0-dev] from source and it worked.
Then I found that the scheduler in diffusers does not support the parameter "shift", but "sample_shift" is an important generation parameter in Wan2.2, so this may also cause differences from the official Wan2.2 inference code. Therefore, the video quality may still be inferior to the original inference code.
https://github.com/Wan-Video/Wan2.2/issues/69
**What I need**
Can the community provide versions of UniPCMultistepScheduler and DPMSolverMultistepScheduler that support the 'shift' parameter? Or can pipeline_wan be adapted so that the shift parameter can be used?
Or is there something wrong with my understanding? How can I correctly use the shift parameter when using diffusers?
Thanks!!
cc @yiyixuxu @a-r-r-o-w
| https://github.com/huggingface/diffusers/issues/12094 | closed | [] | 2025-08-07T11:37:36Z | 2025-08-10T08:43:27Z | 7 | yvmilir |
huggingface/lerobot | 1,687 | When using AMP to train a model, why are the saved model weights still in fp32? | <img width="1668" height="95" alt="Image" src="https://github.com/user-attachments/assets/406a1879-f2f2-43c6-8341-8733873ee911" /> | https://github.com/huggingface/lerobot/issues/1687 | open | [
"question",
"policies"
] | 2025-08-06T12:42:40Z | 2025-08-12T08:52:00Z | null | Hukongtao |
huggingface/diffusers | 12,084 | Will `cosmos-transfer1` be supported in diffusers in the future? |
Hi @a-r-r-o-w and @yiyixuxu :)
First of all, thank you for recently enabling cosmos-predict1 models (text2world and video2world) in the diffusers library โ it's super exciting to see them integrated!
I was wondering if there are any plans to also support [cosmos-transfer1](https://github.com/nvidia-cosmos/cosmos-transfer1) in diffusers in the future?
Thanks again for your great work! ๐ | https://github.com/huggingface/diffusers/issues/12084 | open | [] | 2025-08-06T11:22:28Z | 2025-08-19T12:11:33Z | 3 | rebel-shshin |
huggingface/lerobot | 1,683 | SmolVLMWithExpertModel | Excuse me, I would like to understand each module in this class, and how its inputs are defined. | https://github.com/huggingface/lerobot/issues/1683 | open | [
"question",
"policies"
] | 2025-08-06T10:30:21Z | 2025-08-12T08:52:21Z | null | xjushengjie |
huggingface/lerobot | 1,674 | How to train smolvla for multi-task | I have trained smolvla for aloha_sim_transfer_cube and aloha_sim_insertion, and smolvla performs well in each single task. Now I'd like to train smolvla for multi-task: one model that can complete the two tasks above. What should I do now? | https://github.com/huggingface/lerobot/issues/1674 | closed | [] | 2025-08-06T02:40:01Z | 2025-10-15T02:52:29Z | null | w673 |