repo stringclasses 147 values | number int64 1 172k | title stringlengths 2 476 | body stringlengths 0 5k | url stringlengths 39 70 | state stringclasses 2 values | labels listlengths 0 9 | created_at timestamp[ns, tz=UTC]date 2017-01-18 18:50:08 2026-01-06 07:33:18 | updated_at timestamp[ns, tz=UTC]date 2017-01-18 19:20:07 2026-01-06 08:03:39 | comments int64 0 58 | user stringlengths 2 28 |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers | 11,914 | Loading multiple LoRAs to 1 pipeline in parallel, 1 LoRA to 2 pipelines on 2 GPUs | Hi everyone,
I have the following scenario.
I have a machine with 2 GPUs and a running service that keeps two pipelines loaded on their corresponding devices. I also have a list of LoRAs (say 10). On each request I split the batch into 2 parts (the request also carries the corresponding LoRA information), load the LoRAs, and run the forward pass.
The problem I encounter is that with every parallelization method I have tried (threading, multiprocessing), the best I have achieved is pre-loading the LoRAs on the CPU, then moving them to the GPU, and only after that calling `load_lora_weights` with the state_dict.
Whenever I attempt to run the loading chunk in parallel threads, the pipe starts to produce either complete noise or a black image.
Where I would really appreciate help is:
1. Advice on elegantly loading multiple LoRAs at once into one pipe (all examples in the documentation indicate that one needs to load them one by one).
2. Given 2 pipes on 2 different devices, how to parallelize the process of loading 1 LoRA into the pipes on their corresponding devices.
```
def apply_multiple_loras_from_cache(pipes, adapter_names, lora_cache, lora_names, lora_strengths, devices):
    for device_index, pipe in enumerate(pipes):
        logger.info(f"Starting setup for device {devices[device_index]}")

        # Step 1: Unload LoRAs
        start = time.time()
        pipe.unload_lora_weights(reset_to_overwritten_params=False)
        logger.info(f"[Device {device_index}] Unload time: {time.time() - start:.3f}s")

        # Step 2: Parallelize CPU → GPU state_dict move
        def move_to_device(name):
            return name, {
                k: v.to(devices[device_index], non_blocking=True).to(pipe.dtype)
                for k, v in lora_cache[name]['state_dict'].items()
            }

        start = time.time()
        with ThreadPoolExecutor() as executor:
            future_to_name = {executor.submit(move_to_device, name): name for name in adapter_names}
            results = [future.result() for future in as_completed(future_to_name)]
        logger.info(f"[Device {device_index}] State dict move + dtype conversion time: {time.time() - start:.3f}s")

        # Step 3: Load adapters
        start = time.time()
        for adapter_name, state_dict in results:
            pipe.load_lora_weights(
                pretrained_model_name_or_path_or_dict=state_dict,
                adapter_name=adapter_name
            )
        logger.info(f"[Device {device_index}] Load adapter weights time: {time.time() - start:.3f}s")

        # Step 4: Set adapter weights
        start = time.time()
        pipe.set_adapters(lora_names, adapter_weights=lora_strengths)
        logger.info(f"[Device {device_index}] Set adapter weights time: {time.time() - start:.3f}s")

    torch.cuda.empty_cache()
    logger.info("All LoRAs applied and GPU cache cleared.")
``` | https://github.com/huggingface/diffusers/issues/11914 | closed | [] | 2025-07-12T15:54:44Z | 2025-07-15T19:40:11Z | 5 | vahe-toffee |
huggingface/lerobot | 1,494 | Release the code for reproducing the performance on the LIBERO dataset reported in the SmolVLA paper? | Has anyone been able to reproduce the performance on the LIBERO dataset reported in the SmolVLA paper? I'd appreciate any guidelines or tips to help with reproducing the results. | https://github.com/huggingface/lerobot/issues/1494 | closed | [
"question",
"policies",
"simulation"
] | 2025-07-12T09:35:00Z | 2025-09-23T09:44:59Z | null | JustinKai0527 |
huggingface/datasets | 7,680 | Question about iterable dataset and streaming | In the doc, I found the following example: https://github.com/huggingface/datasets/blob/611f5a592359ebac6f858f515c776aa7d99838b2/docs/source/stream.mdx?plain=1#L65-L78
I am confused,
1. If we have already loaded the dataset, why doing `to_iterable_dataset`? Does it go through the dataset faster than map-style dataset?
2. `load_dataset(streaming=True)` is useful for huge dataset, but the speed is slow. How to make it comparable to `to_iterable_dataset` without loading the whole dataset into RAM? | https://github.com/huggingface/datasets/issues/7680 | open | [] | 2025-07-12T04:48:30Z | 2025-08-01T13:01:48Z | 8 | Tavish9 |
huggingface/transformers | 39,377 | FlashAttention2 support for GSAI-ML / LLaDA-8B-Instruct? | Hi there,
I attempted to use flash attention 2 with this model but it seems like it isn't supported, based on this error:
```
ValueError: LLaDAModelLM does not support Flash Attention 2.0 yet. Please request to add support where the model is hosted, on its model hub page: https://huggingface.co/GSAI-ML/LLaDA-8B-Instruct/discussions/new or in the Transformers GitHub repo: https://github.com/huggingface/transformers/issues/new
```
Would it be possible to add support for this kind of model?
Thank you for your time! | https://github.com/huggingface/transformers/issues/39377 | closed | [] | 2025-07-12T02:48:36Z | 2025-08-19T08:03:26Z | 2 | lbertge |
huggingface/lerobot | 1,492 | Is there any plan to add a validation loss in the training pipeline, which is not dependent on simulation env. | Can we have a dataset split in the training code to run the model on a holdout validation episode to check loss on it? | https://github.com/huggingface/lerobot/issues/1492 | open | [
"enhancement",
"question",
"policies"
] | 2025-07-11T20:43:04Z | 2025-12-30T07:12:20Z | null | mohitydv09 |
huggingface/peft | 2,642 | Prompt_Tuning.ipynb example doesn't seem to train the model | Hello! I am running the Prompt-Tuning notebook example from the PEFT lib examples [here](https://github.com/huggingface/peft/blob/main/examples/sequence_classification/Prompt_Tuning.ipynb). I did **not** change any line of code, and I ran the code blocks sequentially.
However, the performance metrics remain exactly the **same** for each epoch, which is very weird. From the [original notebook](https://github.com/huggingface/peft/blob/main/examples/sequence_classification/Prompt_Tuning.ipynb), we can see accuracy fluctuates and can increase to 0.70.
I checked that the output logits for the training data are changing every epoch (I set shuffle=False, and this is the only change, for debugging). Now I am very confused; any suggestions would be very much welcome. Please let me know if I am doing something very wrong, thanks in advance!
Here's the performance log:
```
100%|██████████| 115/115 [00:20<00:00, 5.74it/s]
100%|██████████| 13/13 [00:01<00:00, 10.36it/s]
epoch 0: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.72it/s]
100%|██████████| 13/13 [00:01<00:00, 10.49it/s]
epoch 1: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.74it/s]
100%|██████████| 13/13 [00:01<00:00, 10.34it/s]
epoch 2: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.72it/s]
100%|██████████| 13/13 [00:01<00:00, 10.35it/s]
epoch 3: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.74it/s]
100%|██████████| 13/13 [00:01<00:00, 10.47it/s]
epoch 4: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.69it/s]
100%|██████████| 13/13 [00:01<00:00, 10.63it/s]
epoch 5: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.75it/s]
100%|██████████| 13/13 [00:01<00:00, 10.45it/s]
epoch 6: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.74it/s]
100%|██████████| 13/13 [00:01<00:00, 10.40it/s]
epoch 7: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.74it/s]
100%|██████████| 13/13 [00:01<00:00, 10.53it/s]
epoch 8: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:19<00:00, 5.76it/s]
100%|██████████| 13/13 [00:01<00:00, 10.27it/s]
epoch 9: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.75it/s]
100%|██████████| 13/13 [00:01<00:00, 10.50it/s]
epoch 10: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.74it/s]
100%|██████████| 13/13 [00:01<00:00, 10.63it/s]
epoch 11: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:19<00:00, 5.77it/s]
100%|██████████| 13/13 [00:01<00:00, 10.50it/s]
epoch 12: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:19<00:00, 5.78it/s]
100%|██████████| 13/13 [00:01<00:00, 10.60it/s]
epoch 13: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
100%|██████████| 115/115 [00:20<00:00, 5.74it/s]
100%|██████████| 13/13 [00:01<00:00, 10.54it/s]
epoch 14: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
```
Besides, my environment info is here if it helps debugging:
```
python 3.10
transformers 4.52.4
peft 0.16.0
torch 2.7.0
jupyterlab 4.4.3
OS Ubuntu 22.04 LTS
GPU NVIDIA RTX 5880
``` | https://github.com/huggingface/peft/issues/2642 | closed | [] | 2025-07-11T18:26:58Z | 2025-08-23T15:03:47Z | 8 | ruixing76 |
huggingface/transformers | 39,366 | RuntimeError when loading llmcompressor W8A8 quantized model: int8 dtype in weight initialization | I'm trying to load the quantized model `RedHatAI/Qwen2.5-VL-7B-Instruct-quantized.w8a8` but encountering a dtype compatibility issue during model initialization. The model appears to be quantized using `llmcompressor` with W8A8 quantization scheme.
**Note**: I need to load this model without vLLM because I may need to add custom hooks for my research, so I'm looking for a direct loading method using transformers/llmcompressor.
## Error Message
```python
RuntimeError: expected a floating-point or complex dtype, but got dtype=torch.int8
```
**Full Stack Trace:**
```python
  File "/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 366, in _init_weights
    module.weight.data.normal_(mean=0.0, std=std)
  File "/torch/_refs/__init__.py", line 6214, in normal_
    return normal(mean, std, self.shape, out=self, generator=generator)
  ...
RuntimeError: expected a floating-point or complex dtype, but got dtype=torch.int8
```
## Traceback
The error occurs during model weight initialization where transformers tries to call `normal_()` on int8 tensors. The `normal_()` function in PyTorch only works with floating-point tensors, but the quantized model contains int8 weights.
**Specific failure point:**
- File: `modeling_qwen2_5_vl.py`, line 366
- Function: `_init_weights()`
- Operation: `module.weight.data.normal_(mean=0.0, std=std)`
- Issue: Trying to apply normal distribution to int8 tensors
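For illustration, the crash could be avoided by an init path that skips non-floating-point parameters entirely. The sketch below is my own illustration of that guard; the dtype names and helper are hypothetical, not transformers internals:

```python
# Conceptual sketch of the dtype guard that would avoid the crash: skip
# random init for non-floating-point (quantized) weights. Illustrative
# only; this is not the actual transformers code path.
FLOATING_DTYPES = {"float16", "bfloat16", "float32", "float64"}

def init_weight_names(param_dtypes: dict) -> list:
    """Return the names of params that a normal_() style init could safely touch."""
    names = []
    for name, dtype in param_dtypes.items():
        if dtype not in FLOATING_DTYPES:
            continue  # int8 W8A8 weights: normal_() would raise RuntimeError here
        names.append(name)
    return names

params = {"lm_head.weight": "bfloat16", "model.layers.0.q_proj.weight": "int8"}
print(init_weight_names(params))  # ['lm_head.weight']
```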
## Model Information
Based on the model's `config.json`:
- **Quantization method**: `compressed-tensors`
- **Format**: `int-quantized`
- **Scheme**: W8A8 (8-bit weights and activations)
- **Base model**: `Qwen/Qwen2.5-VL-7B-Instruct`
- **Compression ratio**: ~1.2x
- **Ignored layers**: All visual layers (`visual.blocks.*`, `visual.merger.*`, `lm_head`)
## What I've Tried
### 1. llmcompressor methods:
```python
# Method 1: TraceableQwen2_5_VLForConditionalGeneration
from llmcompressor.transformers.tracing import TraceableQwen2_5_VLForConditionalGeneration

model = TraceableQwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path, device_map="auto", torch_dtype="auto", trust_remote_code=True
)

# Method 2: SparseAutoModelForCausalLM
from llmcompressor.transformers import SparseAutoModelForCausalLM

model = SparseAutoModelForCausalLM.from_pretrained(
    model_path, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
```
### 2. Standard transformers methods:
```python
# Method 3: Various dtype configurations
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # Also tried: torch.float16, "auto", None
    trust_remote_code=True,
    device_map="auto"
)

# Method 4: AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
    model_path, trust_remote_code=True, torch_dtype="auto"
)
```
**All methods fail at the same weight initialization step, so I wonder whether the model should be loaded with `_fast_init=False` or other special parameters.**
## Additional Observations
1. **Warning about ignored layers**: The loader warns about missing visual layers, but this seems expected since they were ignored during quantization
2. **Model files exist**: The quantized model directory contains the expected `.safetensors` files and configuration
3. **Original model works**: The base `Qwen/Qwen2.5-VL-7B-Instruct` loads and works perfectly
## Environment
- **Python**: 3.10
- **PyTorch**: 2.7.0+cu126
- **Transformers**: 4.52.4
- **LLMCompressor**: 0.6.0
- **Compressed-tensors**: 0.10.2
This model was likely created using llmcompressor's oneshot quantization:
```python
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot
recipe = [
    GPTQModifier(
        targets="Linear",
        scheme="W8A8",
        sequential_targets=["Qwen2_5_VLDecoderLayer"],
        ignore=["lm_head", "re:visual.*"],
    ),
]
```
If this is more of an llmcompressor-specific model loading issue rather than a transformers compatibility issue, please let me know and I'll file this issue in the llmcompressor repository instead.
| https://github.com/huggingface/transformers/issues/39366 | closed | [
"Good First Issue"
] | 2025-07-11T15:15:09Z | 2025-12-08T13:30:10Z | 10 | AdelineXinyi |
huggingface/lerobot | 1,483 | How can I set `max_relative_target` to get safe action? | I saw this in function `send_action` in `src/lerobot/robots/so100_follower/so100_follower.py`
```python
def send_action(self, action: dict[str, Any]) -> dict[str, Any]:
    """Command arm to move to a target joint configuration.

    The relative action magnitude may be clipped depending on the configuration parameter
    `max_relative_target`. In this case, the action sent differs from original action.
    Thus, this function always returns the action actually sent.

    Raises:
        RobotDeviceNotConnectedError: if robot is not connected.

    Returns:
        the action sent to the motors, potentially clipped.
    """
    if not self.is_connected:
        raise DeviceNotConnectedError(f"{self} is not connected.")

    goal_pos = {key.removesuffix(".pos"): val for key, val in action.items() if key.endswith(".pos")}

    # Cap goal position when too far away from present position.
    # /!\ Slower fps expected due to reading from the follower.
    if self.config.max_relative_target is not None:
        present_pos = self.bus.sync_read("Present_Position")
        goal_present_pos = {key: (g_pos, present_pos[key]) for key, g_pos in goal_pos.items()}
        goal_pos = ensure_safe_goal_position(goal_present_pos, self.config.max_relative_target)

    # Send goal position to the arm
    self.bus.sync_write("Goal_Position", goal_pos)
    return {f"{motor}.pos": val for motor, val in goal_pos.items()}
```
But in `SO100FollowerConfig` it defaults to `None`:
```python
class SO100FollowerConfig(RobotConfig):
    # Port to connect to the arm
    port: str

    disable_torque_on_disconnect: bool = True
    # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
    # Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
    # the number of motors in your follower arms.
    max_relative_target: int | None = None
    # cameras
    cameras: dict[str, CameraConfig] = field(default_factory=dict)
    # sensors
    sensors: dict[str, ForceSensorConfig] = field(default_factory=dict)
    # Set to `True` for backward compatibility with previous policies/dataset
    use_degrees: bool = False
```
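For intuition, here is a minimal sketch of what the cap does per motor: the goal is clamped to within `max_relative_target` units of the present position (my own illustration; the real logic lives in `ensure_safe_goal_position`).

```python
# Conceptual sketch of the relative-target cap (illustrative only; see
# `ensure_safe_goal_position` in lerobot for the real implementation).
def cap_goal(goal: float, present: float, max_relative_target: float) -> float:
    """Clamp `goal` so it is at most `max_relative_target` away from `present`."""
    delta = max(-max_relative_target, min(max_relative_target, goal - present))
    return present + delta

print(cap_goal(goal=150.0, present=100.0, max_relative_target=30.0))  # 130.0 (clipped)
print(cap_goal(goal=110.0, present=100.0, max_relative_target=30.0))  # 110.0 (unchanged)
```

So one way to tune it is to start with a small value relative to your motors' position range and raise it until commanded motions no longer feel clipped.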
I don't know what value I should set `max_relative_target` to; is there any instruction? Thanks!! | https://github.com/huggingface/lerobot/issues/1483 | open | [
"question",
"robots"
] | 2025-07-11T02:46:02Z | 2025-08-12T09:34:51Z | null | milong26 |
huggingface/peft | 2,640 | Why does peft.utils.other.fsdp_auto_wrap_policy not wrap modules that do not require grad? | In https://github.com/huggingface/peft/blob/main/src/peft/utils/other.py#L977,
```
def fsdp_auto_wrap_policy(model):
    if hasattr(FullyShardedDataParallelPlugin, "get_module_class_from_name"):
        get_module_class_from_name = FullyShardedDataParallelPlugin.get_module_class_from_name
    else:
        from accelerate.utils.dataclasses import get_module_class_from_name
    from torch.distributed.fsdp.wrap import _or_policy, lambda_auto_wrap_policy, transformer_auto_wrap_policy

    from ..tuners import PrefixEncoder, PromptEmbedding, PromptEncoder

    default_transformer_cls_names_to_wrap = ",".join(_get_no_split_modules(model))
    transformer_cls_names_to_wrap = os.environ.get(
        "FSDP_TRANSFORMER_CLS_TO_WRAP", default_transformer_cls_names_to_wrap
    ).split(",")
    transformer_cls_to_wrap = {PrefixEncoder, PromptEncoder, PromptEmbedding}
    for layer_class in transformer_cls_names_to_wrap:
        if len(layer_class) == 0:
            continue
        transformer_cls = get_module_class_from_name(model, layer_class)
        if transformer_cls is None:
            raise Exception("Could not find the transformer layer class to wrap in the model.")
        else:
            transformer_cls_to_wrap.add(transformer_cls)

    def lambda_policy_fn(module):
        if (
            len(list(module.named_children())) == 0
            and getattr(module, "weight", None) is not None
            and module.weight.requires_grad
        ):
            return True
        return False

    lambda_policy = functools.partial(lambda_auto_wrap_policy, lambda_fn=lambda_policy_fn)
    transformer_wrap_policy = functools.partial(
        transformer_auto_wrap_policy,
        transformer_layer_cls=transformer_cls_to_wrap,
    )
    auto_wrap_policy = functools.partial(_or_policy, policies=[lambda_policy, transformer_wrap_policy])
    return auto_wrap_policy
```
`fsdp_auto_wrap_policy` uses a `lambda_policy_fn` that does not wrap modules that do not require grad.
But in regular LoRA training, the original network does not require grad.
That may cause every GPU to still keep a full copy of the network, even with FSDP FULL_SHARD.
Why is the policy designed this way? | https://github.com/huggingface/peft/issues/2640 | closed | [] | 2025-07-10T12:07:13Z | 2025-08-18T15:05:03Z | 4 | Changlin-Lee |
huggingface/transformers | 39,336 | TypeError: GenerationMixin._extract_past_from_model_output() got an unexpected keyword argument 'standardize_cache_format' | I am using CogVLM2 video captioning model
It works up to transformers==4.43.4.
With transformers==4.44.0 and later I get the error below,
but I need to use the latest version of transformers, since 4-bit quantization currently fails on some GPUs and platforms.
How can I fix this issue?
`TypeError: GenerationMixin._extract_past_from_model_output() got an unexpected keyword argument 'standardize_cache_format'`
```
14:23:32 - INFO - Final video tensor shape for CogVLM processing: torch.Size([3, 24, 720, 1280])
14:23:35 - ERROR - Error during auto-captioning: GenerationMixin._extract_past_from_model_output() got an unexpected keyword argument 'standardize_cache_format'
Traceback (most recent call last):
  File "E:\Ultimate_Video_Processing_v1\STAR\logic\cogvlm_utils.py", line 679, in auto_caption
    outputs_tensor = local_model_ref.generate(**inputs_on_device, **gen_kwargs)
  File "E:\Ultimate_Video_Processing_v1\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "E:\Ultimate_Video_Processing_v1\venv\lib\site-packages\transformers\generation\utils.py", line 2024, in generate
    result = self._sample(
  File "E:\Ultimate_Video_Processing_v1\venv\lib\site-packages\transformers\generation\utils.py", line 3032, in _sample
    model_kwargs = self._update_model_kwargs_for_generation(
  File "E:\Ultimate_Video_Processing_v1\STAR\models\modules\transformers_modules\cogvlm2-video-llama3-chat\modeling_cogvlm.py", line 726, in _update_model_kwargs_for_generation
    cache_name, cache = self._extract_past_from_model_output(
TypeError: GenerationMixin._extract_past_from_model_output() got an unexpected keyword argument 'standardize_cache_format'
```
@amyeroberts, @qubvel @SunMarc @MekkCyber
The error I am getting with 4.43.1 on a B200 when doing 4-bit quantization is below. Interestingly, the same code and the same libraries work without errors on my RTX 5090 on Windows.
FP16 has no issues.
```
11:45:10 - INFO - Preparing to load model from: /workspace/STAR/models/cogvlm2-video-llama3-chat with quant: 4, dtype: torch.bfloat16, device: cuda, device_map: auto, low_cpu_mem: True
11:45:10 - INFO - Starting model loading - this operation cannot be interrupted once started
/workspace/venv/lib/python3.10/site-packages/torchvision/transforms/_functional_video.py:6: UserWarning: The 'torchvision.transforms._functional_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms.functional' module instead.
warnings.warn(
/workspace/venv/lib/python3.10/site-packages/torchvision/transforms/_transforms_video.py:22: UserWarning: The 'torchvision.transforms._transforms_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms' module instead.
warnings.warn(
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████| 6/6 [01:18<00:00, 13.07s/steps]
11:46:30 - ERROR - Failed to load CogVLM2 model from path: /workspace/STAR/models/cogvlm2-video-llama3-chat
11:46:30 - ERROR - Exception type: ValueError
11:46:30 - ERROR - Exception details: `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
Traceback (most recent call last):
  File "/workspace/STAR/logic/cogvlm_utils.py", line 160, in load_cogvlm_model
    raise model_loading_result["error"]
  File "/workspace/STAR/logic/cogvlm_utils.py", line 122, in load_model_thread
    model = AutoModelForCausalLM.from_pretrained(
  File "/workspace/venv/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 559, in from_pretrained
    return model_class.from_pretrained(
  File "/workspace/venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4000, in from_pretrained
    dispatch_model(model, **device_map_kwargs)
  File "/workspace/venv/lib/python3.10/site-packages/accelerate/big_modeling.py", line 502, in dispatch_model
    model.to(device)
  File "/workspace/venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2849, in to
    raise ValueError(
ValueError: `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
11:46:30 - ERROR - Error during auto-captioning: 'Could not load CogVLM2 model (check logs for details): `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.'
Traceback (most recent call last):
  File "/workspace/STAR/logic/cogvlm_utils.py", line 160, in load_cogvlm_model
    raise model_loading_result["error"]
  File "/workspace/STAR/logic/cogvlm_utils.py", line 122, in load_model_thread
model = AutoMode | https://github.com/huggingface/transformers/issues/39336 | closed | [
"bug"
] | 2025-07-10T11:49:02Z | 2025-08-18T08:03:13Z | 4 | FurkanGozukara |
huggingface/lerobot | 1,476 | Here is an interactive gym to play with the robot (I still need some help) | ### First the good news:
This is an interactive gym where you can experiment with pre-trained policies to control the robot in real time.
Here is how to use it:
- `Double-click` on a body to select it.
- `Ctrl + left` drag applies a torque to the selected object, resulting in rotation.
- `Ctrl + right` drag applies a force to the selected object in the (x,z) plane, resulting in translation.
- `Ctrl + Shift + right` drag applies a force to the selected object in the (x,y) plane.
### However, there are a few limitations:
- When you move the cubes, the robot doesn't seem to register the new positions and instead attempts to pick them up from their original locations.
- **Only** the environment `lerobot/act_aloha_sim_insertion_human` appears to work occasionally. The others either don't function at all or cause the program to crash due to missing attributes that haven't been implemented in the gym.
I'd really appreciate feedback/guidance from the repo maintainers on how to improve this snippet to support more environments and tasks.
file `interactive_gym.py`:
```python
import gymnasium as gym
import mujoco
import mujoco.viewer
import torch
import importlib

from lerobot.policies.utils import get_device_from_parameters
from lerobot.configs import parser
from lerobot.configs.eval import EvalPipelineConfig
from lerobot.policies.factory import make_policy
from lerobot.envs.utils import preprocess_observation
from lerobot.utils.utils import get_safe_torch_device

# $ python interactive_gym.py --policy.path=lerobot/act_aloha_sim_insertion_human --env.type=aloha
# $ python interactive_gym.py --policy.path=lerobot/act_aloha_sim_transfer_cube_human --env.type=aloha

@parser.wrap()
def make_env_and_policy(cfg: EvalPipelineConfig):
    package_name = f"gym_{cfg.env.type}"
    try:
        importlib.import_module(package_name)
    except ModuleNotFoundError as e:
        print(f"{package_name} is not installed. Please install it with `pip install 'lerobot[{cfg.env.type}]'`")
        raise e

    gym_handle = f"{package_name}/{cfg.env.task}"
    env = gym.make(gym_handle, disable_env_checker=True, **cfg.env.gym_kwargs)

    policy = make_policy(cfg=cfg.policy, env_cfg=cfg.env)
    policy.eval()
    policy.reset()
    return env, policy

def main(env, policy):
    device = get_device_from_parameters(policy)

    viewer = mujoco.viewer.launch_passive(env.unwrapped.model, env.unwrapped.data)
    observation, info = env.reset(seed=42)
    viewer.sync()

    for i in range(40000):
        observation = preprocess_observation(observation)
        observation = {
            key: observation[key].to(device, non_blocking=device.type == "cuda") for key in observation
        }

        # Infer "task" from attributes of environments.
        # TODO: works with SyncVectorEnv but not AsyncVectorEnv
        if hasattr(env, "task_description"):
            observation["task"] = env.unwrapped.task_description
        elif hasattr(env, "task"):
            observation["task"] = env.unwrapped.task
        else:  # For envs without language instructions, e.g. aloha transfer cube and etc.
            observation["task"] = ""

        with torch.inference_mode():
            action = policy.select_action(observation)

        # Convert to CPU / numpy.
        action = action.to("cpu").numpy()
        assert action.ndim == 2, "Action dimensions should be (batch, action_dim)"

        # Apply the next action.
        # observation, reward, terminated, truncated, info = env.step(action)
        observation, reward, terminated, truncated, info = env.step(action[0])
        viewer.sync()

        if terminated or truncated:
            observation, info = env.reset()
            viewer.sync()

        if i % 100 == 0:
            print(i)

    viewer.close()
    env.close()

torch.backends.cudnn.benchmark = True
torch.backends.cuda.matmul.allow_tf32 = True

env, policy = make_env_and_policy()
main(env, policy)
```
| https://github.com/huggingface/lerobot/issues/1476 | open | [
"question",
"simulation"
] | 2025-07-09T14:59:22Z | 2025-12-16T13:41:00Z | null | raul-machine-learning |
huggingface/lerobot | 1,475 | [Question] What does each number in predicted action(SmolVLA) stand for? | Hi, I'm trying to load the SmolVLA and test on my simulation env.
After passing the observations to the model using `policy.select_action(obs)`, I got a 6-dimensional action, but I'm quite confused about what exactly the values are. And if three are for position translation and three for rotation, how could I control opening and closing the gripper?
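For what it's worth, if the checkpoint was trained on SO100-style data, the 6 values are typically per-joint position targets with the gripper as the last dimension (so the gripper is opened and closed through that last scalar). This is an assumption to verify against the training dataset's feature metadata, not a guarantee. A sketch of mapping the vector to named motors:

```python
# Hypothetical mapping for an SO100-style 6-DoF action vector; verify the
# actual ordering against the training dataset's `features` metadata.
SO100_JOINTS = ["shoulder_pan", "shoulder_lift", "elbow_flex",
                "wrist_flex", "wrist_roll", "gripper"]

def action_to_dict(action):
    """Turn a flat 6-dim action into a {motor_name}.pos dict."""
    assert len(action) == len(SO100_JOINTS)
    return {f"{name}.pos": float(v) for name, v in zip(SO100_JOINTS, action)}

print(action_to_dict([0.1, 0.2, 0.3, 0.4, 0.5, 0.9]))
```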
Thanks. | https://github.com/huggingface/lerobot/issues/1475 | open | [
"question",
"policies"
] | 2025-07-09T13:39:25Z | 2025-08-12T10:08:26Z | null | Calvert0921 |
huggingface/lerobot | 1,471 | where is 7_get_started_with_real_robot.md? | I didn't find 7_get_started_with_real_robot.md | https://github.com/huggingface/lerobot/issues/1471 | closed | [
"documentation",
"question"
] | 2025-07-09T08:02:32Z | 2025-10-08T08:42:21Z | null | von63 |
huggingface/alignment-handbook | 218 | Will you release SmolLM 3 recipe? | First off, thank you so much for sharing these training resources.
I was wondering if, with the recent release of SmolLM3, you have plans to also share its training recipe.
Have a nice day! | https://github.com/huggingface/alignment-handbook/issues/218 | closed | [] | 2025-07-08T19:47:20Z | 2025-07-15T14:16:11Z | 1 | ouhenio |
huggingface/sentence-transformers | 3,433 | How to use a custom batch sampler? | `SentenceTransformerTrainer.__init__` checks the type of `args`, so I have to write a class inheriting from `SentenceTransformerTrainingArgs` rather than `TransformerTrainingArgs`. The problem is that `SentenceTransformerTrainingArgs.__post_init__` forces the use of `BatchSampler` to initialize a batch sampler. Is there any workaround for this? | https://github.com/huggingface/sentence-transformers/issues/3433 | open | [] | 2025-07-08T09:35:24Z | 2025-07-08T12:36:33Z | null | Hypothesis-Z |
huggingface/transformers | 39,266 | Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. | ### System Info
```bash
Traceback (most recent call last):
File "/home/cx/miniconda3/envs/demo/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 767, in convert_to_tensors
tensor = as_tensor(value)
File "/home/cx/miniconda3/envs/demo/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 729, in as_tensor
return torch.tensor(value)
ValueError: expected sequence of length 15757 at dim 1 (got 16242)
```
*DataCollatorForLanguageModeling* seems to only pad the input ids and ignore the labels, resulting in different label lengths within a batch. Why is this?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
def _process_fn(samples, tokenizer: PreTrainedTokenizerFast, config):
    samples = [[{"role": "user", "content": x[0]}, {"role": "assistant", "content": x[1]}]
               for x in zip(samples["input"], samples["output"])]

    # tokenized_data = tokenizer.apply_chat_template(samples,
    #                                                return_tensors="pt",
    #                                                return_dict=True,
    #                                                padding="max_length",
    #                                                truncation=True,
    #                                                max_length=8000)
    tokenized_data = tokenizer.apply_chat_template(samples,
                                                   return_tensors="pt",
                                                   return_dict=True,
                                                   padding=True)

    samples_ids = tokenized_data["input_ids"]
    attention_mask = tokenized_data["attention_mask"]

    output_ids = []
    for i, seq in enumerate(samples_ids):
        output_index = torch.where(seq == SPECIAL_GENERATE_TOKEN_ID)[0]
        mask = attention_mask[i]
        if len(output_index) == 1:
            output_index = output_index[0].item()
        else:
            continue
        temp = torch.full_like(seq, -100)
        temp[output_index:] = seq[output_index:]
        temp[mask == 0] = -100
        output_ids.append(temp)

    labels = torch.stack(output_ids)
    return {"input_ids": samples_ids,
            "labels": labels,
            "attention_mask": attention_mask}

trainer = Trainer(
    model=peft_model,
    args=train_config,
    train_dataset=train_data,
    eval_dataset=eval_data,
    data_collator=DataCollatorForLanguageModeling(
        tokenizer=tokenizer,
        mlm=False,
        pad_to_multiple_of=8 if torch.cuda.is_available() else None,
        return_tensors="pt"
    )
)
```
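One workaround I am considering is padding the precomputed `labels` myself to a common length with -100 (the ignore index) in a custom collator. A hand-rolled sketch of just the padding step (illustrative; not part of the official API):

```python
# Pad a batch of precomputed label sequences to a common length with the
# ignore index (-100), optionally rounding up to a multiple (to mirror
# `pad_to_multiple_of`). Sketch of a workaround, not transformers code.
def pad_labels(batch_labels, pad_to_multiple_of=8, ignore_index=-100):
    max_len = max(len(labels) for labels in batch_labels)
    if pad_to_multiple_of:
        max_len = ((max_len + pad_to_multiple_of - 1) // pad_to_multiple_of) * pad_to_multiple_of
    return [labels + [ignore_index] * (max_len - len(labels)) for labels in batch_labels]

print(pad_labels([[1, 2, 3], [4, 5]], pad_to_multiple_of=4))
# [[1, 2, 3, -100], [4, 5, -100, -100]]
```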
### Expected behavior
run code | https://github.com/huggingface/transformers/issues/39266 | closed | [
"bug"
] | 2025-07-08T05:19:35Z | 2025-07-08T06:50:47Z | 0 | mumu029 |
huggingface/lerobot | 1,460 | How to support dataloading with historical cue? | As I see it, the `__getitem__` function of `LeRobotDataset` currently returns single-frame data. How can I stack historical frames and make use of batched data with historical information, as UniVLA does?
| https://github.com/huggingface/lerobot/issues/1460 | open | [
"question",
"dataset"
] | 2025-07-08T01:49:11Z | 2025-08-12T09:44:02Z | null | joeyxin-del |
huggingface/lerobot | 1,458 | how to control a real robot arm-101 with my own pretrained model? | I don't see an instruction or script example in this repository.
Please help
Thanks,
| https://github.com/huggingface/lerobot/issues/1458 | open | [
"question",
"policies"
] | 2025-07-08T01:19:50Z | 2025-08-12T09:45:13Z | null | jcl2023 |
huggingface/candle | 3,016 | Build fails on Maxwell GPU due to __dp4a undefined in quantized.cu | I'm trying to build a Rust project locally that depends on candle-kernels on my laptop with an NVIDIA GeForce 940MX (Maxwell, compute capability 5.0). The build fails with errors like:
```
src/quantized.cu(1997): error: identifier "__dp4a" is undefined
...
18 errors detected in the compilation of "src/quantized.cu".
```
GPU: NVIDIA GeForce 940MX (GM107, compute capability 5.0)
OS: Kali Linux (rolling)
CUDA toolkit: 12.3
NVIDIA driver: 550.163.01
candle-kernels: v0.7.2
The error is caused by the use of the CUDA intrinsic __dp4a, which is only available on GPUs with compute capability 6.1+ (Pascal and newer).
My GPU is compute 5.0, so this intrinsic is not available.
**Questions:**
Is there a way to disable quantized kernels or the use of __dp4a for older GPUs?
If not, could a feature flag or build option be added to support older hardware, or at least skip building quantized kernels on unsupported GPUs?
| https://github.com/huggingface/candle/issues/3016 | open | [] | 2025-07-07T14:41:53Z | 2025-07-07T14:41:53Z | 0 | fishonamos |
huggingface/text-generation-inference | 3,289 | How to detect watermark? | Hi,
Thanks for the great work.
I saw that the KGW watermark is implemented in the current code. But it seems to lack code to evaluate and detect whether generated text contains a watermark.
Could anyone suggest whether this code exists? It would be very helpful.
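For reference, detection in the original KGW paper ("A Watermark for Large Language Models") is just a z-test on the green-token count. A minimal sketch of that test (not TGI code) looks like:

```python
import math

# Minimal sketch of KGW-style detection: count tokens that fall in the
# "green list" and z-test the count against chance.
def kgw_z_score(num_green, num_tokens, gamma=0.5):
    # gamma is the green-list fraction used at generation time.
    expected = gamma * num_tokens
    variance = num_tokens * gamma * (1.0 - gamma)
    return (num_green - expected) / math.sqrt(variance)

# 180 green tokens out of 250 is far above the 125 expected by chance:
z = kgw_z_score(180, 250)
print(round(z, 2))  # 6.96
```

Full detection additionally needs the same hashing scheme and seed used at generation time to recompute each token's green list, which is the part I could not find in the repo either.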
Thanks | https://github.com/huggingface/text-generation-inference/issues/3289 | open | [] | 2025-07-07T11:42:54Z | 2025-07-07T11:42:54Z | null | Allencheng97 |
huggingface/lerobot | 1,448 | How to specify both policy.type and pretrained path at the same time? | Hi, I am adding custom configs to a PreTrainedConfig, and I also want to load it from a pretrained path. However, if I specify the pretrained path (with policy.path), I won't be able to modify the fields inside the new PreTrainedConfig subclass. If I use policy.type="myNewModel" instead, I am able to access the fields (such as `policy.new_field_in_myNewModel`) when I run `lerobot/scripts/train.py`, but I am unable to specify the pretrained path.
What is a good solution to this problem?
Thanks! | https://github.com/huggingface/lerobot/issues/1448 | open | [
"enhancement",
"configuration"
] | 2025-07-07T03:33:15Z | 2025-08-12T09:45:58Z | null | branyang02 |
huggingface/lerobot | 1,447 | SmolVLA input/output clarification | I'm now trying to load SmolVLA to control the Franka arm in simulation. I found that there can be three image inputs (observation.image, 1 and 2), and I have top, wrist and side views. Is there a fixed order for those camera views?
And the predicted action has 6 dimensions; does that mean it doesn't include the gripper state? What do those values represent? Thanks in advance!
"question",
"policies"
] | 2025-07-06T21:56:43Z | 2025-10-09T21:59:17Z | null | Calvert0921 |
huggingface/lerobot | 1,446 | How to evaluate finetuned SmolVLA model | Dear authors, thank you for your wonderful work.
I have fine-tuned the SmolVLA model on a customized LeRobot-format dataset. My dataset is picking up a banana and placing it on a box. How can I evaluate the performance of the model? I tried eval.py in the scripts directory, but env_type=pusht doesn't work. I think this env_type may be why eval.py fails to run.
I hope someone can help me. Thanks in advance.
| https://github.com/huggingface/lerobot/issues/1446 | closed | [
"question",
"policies"
] | 2025-07-06T15:27:22Z | 2025-10-17T11:57:49Z | null | BintaoBryant |
huggingface/diffusers | 11,865 | AttributeError: type object 'CosmosTransformer3DModel' has no attribute 'from_single_file' | ### Describe the bug
I would like to run the Cosmos-Predict2-14B-Text2Image model, but it is too large to fit in 24GB of VRAM normally, so I tried to load a Q8_0 GGUF quantization. I copied some code from the [HiDreamImageTransformer2DModel](https://huggingface.co/docs/diffusers/en/api/models/hidream_image_transformer#loading-gguf-quantized-checkpoints-for-hidream-i1) page and tried to adapt it, but I get the following error:
`AttributeError: type object 'CosmosTransformer3DModel' has no attribute 'from_single_file'`
Is there supposed to be another way to load a 8 bit quantization? From what I have seen, Q8_0 typically produces results that are much closer to full precision compared to FP8.
### Reproduction
```
transformer = CosmosTransformer3DModel.from_single_file(
rf"{model_14b_id}\cosmos-predict2-14b-text2image-Q8_0.gguf",
quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
torch_dtype=torch.bfloat16
)
pipe_14b = Cosmos2TextToImagePipeline.from_pretrained(
model_14b_id,
torch_dtype=torch.bfloat16,
transformer = transformer
)
```
### Logs
```shell
transformer = CosmosTransformer3DModel.from_single_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: type object 'CosmosTransformer3DModel' has no attribute 'from_single_file'
```
### System Info
- π€ Diffusers version: 0.35.0.dev0
- Platform: Windows-10-10.0.26100-SP0
- Running on Google Colab?: No
- Python version: 3.11.9
- PyTorch version (GPU?): 2.7.1+cu128 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.33.1
- Transformers version: 4.53.0
- Accelerate version: 1.8.1
- PEFT version: 0.15.2
- Bitsandbytes version: 0.46.1
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA GeForce RTX 4090, 24564 MiB
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@DN6 | https://github.com/huggingface/diffusers/issues/11865 | closed | [
"bug"
] | 2025-07-05T12:14:50Z | 2025-07-11T07:15:23Z | 9 | mingyi456 |
huggingface/diffusers | 11,864 | AutoencoderDC.encode fails with torch.compile(fullgraph=True) - "name 'torch' is not defined" | ### Describe the bug
I'm trying to optimize my data preprocessing pipeline for the Sana model by using `torch.compile` on the DC-AE encoder. Following PyTorch's best practices, I attempted to compile only the `encode` method with `fullgraph=True` for better performance, but I'm encountering an error.
When I try:
```python
dae.encode = torch.compile(dae.encode, fullgraph=True)
```
The code fails with `NameError: name 'torch' is not defined` when calling `dae.encode(x)`.
However, compiling the entire model works:
```python
dae = torch.compile(dae, fullgraph=True)
```
I'm unsure if this is expected behavior or if I'm doing something wrong. Is there a recommended way to compile just the encode method for `AutoencoderDC`?
I was advised to use the more targeted approach of compiling only the encode method for better performance, but it seems like the DC-AE model might have some internal structure that prevents this optimization pattern.
Any guidance on the correct way to apply `torch.compile` optimizations to `AutoencoderDC` would be greatly appreciated. Should I stick with compiling the entire model, or is there a way to make method-level compilation work?
### Reproduction
```
import torch
from diffusers import AutoencoderDC
# Load model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
dae = AutoencoderDC.from_pretrained(
"mit-han-lab/dc-ae-f32c32-sana-1.1-diffusers",
torch_dtype=torch.bfloat16
).to(device).eval()
# This fails with "name 'torch' is not defined"
dae.encode = torch.compile(dae.encode, fullgraph=True)
# Test
x = torch.randn(1, 3, 512, 512, device=device, dtype=torch.bfloat16)
out = dae.encode(x) # Error occurs here
# This works fine
dae = torch.compile(dae, fullgraph=True)
```
### Logs
```shell
Testing torch.compile(dae.encode, fullgraph=True)
/data1/tzz/anaconda_dir/envs/Sana/lib/python3.10/site-packages/torch/_inductor/compile_fx.py:150: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.
warnings.warn(
Error: name 'torch' is not defined
```
### System Info
- π€ Diffusers version: 0.34.0.dev0
- Platform: Linux-5.15.0-142-generic-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.18
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.33.0
- Transformers version: 4.45.2
- Accelerate version: 1.7.0
- PEFT version: 0.15.2
- Bitsandbytes version: 0.46.0
- Safetensors version: 0.5.3
- xFormers version: 0.0.27.post2
- Accelerator: NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11864 | closed | [
"bug"
] | 2025-07-05T06:15:11Z | 2025-07-09T01:32:39Z | 6 | SingleBicycle |
huggingface/datasets | 7,669 | How can I add my custom data to huggingface datasets | I want to add my custom dataset to Hugging Face Datasets. Please guide me on how to achieve that. | https://github.com/huggingface/datasets/issues/7669 | open | [] | 2025-07-04T19:19:54Z | 2025-07-05T18:19:37Z | null | xiagod |
huggingface/lerobot | 1,442 | Trained pi0 policy ignores visual cues | I am having an issue in which my trained pi0 policy looks smooth but it completely ignores the camera input. I have tried covering up a camera and the policy still looks smooth! This seems very wrong. I wonder if it is because my images are not normalized correctly? Has anyone else seen this?
Do I need to change the visual "NormalizationMode" for pi0? It seems like this may be a repeat of https://github.com/huggingface/lerobot/issues/1065?
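One quick sanity check worth running on the observation images (illustrative; the [0, 1] expectation is an assumption about the pipeline, so verify it against your config):

```python
# Illustrative check: a policy trained on float images in [0, 1] will largely
# ignore uint8 [0, 255] inputs, which can look exactly like "smooth but blind".
def looks_normalized(pixel_values):
    return 0.0 <= min(pixel_values) and max(pixel_values) <= 1.0

print(looks_normalized([0.0, 0.5, 1.0]))  # True
print(looks_normalized([0, 128, 255]))    # False
```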
| https://github.com/huggingface/lerobot/issues/1442 | open | [
"question",
"policies"
] | 2025-07-03T20:13:08Z | 2025-08-12T09:47:09Z | null | kumarhans |
huggingface/lerobot | 1,439 | [QUESTION] run a policy on a real robot | Hi there! In the documentation, scripts to teleoperate, record, replay, or evaluate a policy are provided, **but how do you run a policy for inference only on a real robot**? I did not find such a script.
Besides, it may be interesting to add such a script to the documentation as well.
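In the meantime, the shape of an inference-only script is roughly the loop below (everything is stubbed out here so it runs standalone; the real lerobot class and method names may differ):

```python
# Stubbed sketch of an inference-only control loop. StubPolicy/StubRobot stand
# in for a loaded pretrained policy and a connected robot (names illustrative).
class StubPolicy:
    def select_action(self, obs):
        return [0.0] * 6  # would be the policy's predicted joint action

class StubRobot:
    def capture_observation(self):
        return {"observation.state": [0.0] * 6}
    def send_action(self, action):
        return action

policy, robot = StubPolicy(), StubRobot()
for _ in range(3):  # a real script would loop at the robot's control frequency
    obs = robot.capture_observation()
    action = policy.select_action(obs)
    robot.send_action(action)
print("ran 3 control steps")
```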
Thank you very much for your help
| https://github.com/huggingface/lerobot/issues/1439 | open | [
"question",
"policies"
] | 2025-07-03T18:09:10Z | 2025-08-12T09:47:27Z | null | FaboNo |
huggingface/smolagents | 1,512 | How can we use this benchmark to evaluate local models? | examples/smolagents_benchmark/run.py
| https://github.com/huggingface/smolagents/issues/1512 | closed | [
"enhancement"
] | 2025-07-03T06:17:58Z | 2025-07-03T08:07:26Z | null | OoOPenN |
huggingface/diffusers | 11,849 | Can not load fusionx_lora into original wan2.1-14b | Hello, I am adding the FusionX LoRA to the original Wan2.1-14B-I2V; my code is as follows:
> pipe = WanImageToVideoPipeline.from_pretrained(my_local_path + "Wan2.1-I2V-14B-480P-Diffusers", vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16)
> pipe.load_lora_weights(
> my_local_path + "Wan14BT2VFusioniX/FusionX_LoRa/Wan2.1_I2V_14B_FusionX_LoRA.safetensors"
> )
But i got some errors:
> File "/mmu_mllm_hdd_2/zuofei/infer_test/lora_infer_multi.py", line 60, in process_image
> pipe.load_lora_weights(
> File "/hetu_group/zuofei/env/wan_infer/lib/python3.12/site-packages/diffusers/loaders/lora_pipeline.py", line 4869, in load_lora_weights
> state_dict = self.lora_state_dict(pretrained_model_name_or_path_or_dict, **kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/hetu_group/zuofei/env/wan_infer/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
> return fn(*args, **kwargs)
> ^^^^^^^^^^^^^^^^^^^
> File "/hetu_group/zuofei/env/wan_infer/lib/python3.12/site-packages/diffusers/loaders/lora_pipeline.py", line 4796, in lora_state_dict
> state_dict = _convert_non_diffusers_wan_lora_to_diffusers(state_dict)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/hetu_group/zuofei/env/wan_infer/lib/python3.12/site-packages/diffusers/loaders/lora_conversion_utils.py", line 1564, in _convert_non_diffusers_wan_lora_to_diffusers
> num_blocks = len({k.split("blocks.")[1].split(".")[0] for k in original_state_dict})
> ~~~~~~~~~~~~~~~~~~^^^
> IndexError: list index out of range
Can you tell me how to fix it? Thank you so much! | https://github.com/huggingface/diffusers/issues/11849 | open | [] | 2025-07-02T13:48:17Z | 2025-07-02T13:48:17Z | 0 | fzuo1230 |
huggingface/transformers | 39,169 | Using Gemma3n with text-only generation requires image dependencies | ### System Info
- `transformers` version: 4.53.0
- Platform: macOS-15.5-arm64-arm-64bit
- Python version: 3.12.8
- Huggingface_hub version: 0.33.2
- Safetensors version: 0.5.3
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1 (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@zucchini-nlp
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I want to use the Gemma3n model in a text-only generation pipeline (without any multimodal inputs). I'm using the Gemma3nForCausalLM because it has only a language modeling head. But when running the script, it fails with an ImportError stating that `AutoImageProcessor` requires the PIL and timm libraries to work. How can I run Gemma3n for text-generation without those image-related dependencies?
```python
from transformers import AutoTokenizer, Gemma3nForCausalLM
import torch
model_id = "google/gemma-3n-e4b"
model = Gemma3nForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_length=30)
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
```
### Expected behavior
I expect the script to run successfully without installing `pillow` and `timm`. | https://github.com/huggingface/transformers/issues/39169 | closed | [
"bug"
] | 2025-07-02T07:46:43Z | 2025-08-01T08:14:26Z | 6 | marianheinsen |
huggingface/lerobot | 1,429 | When will release the SmolVLA(2.25B & 0.24b) | Hi dear authors,
Thanks for all your wonderful work on SmolVLA!
I wonder: will you release **SmolVLA (2.25B)**? I want to compare its performance with your released version (0.45B).
"question",
"policies"
] | 2025-07-02T03:39:06Z | 2025-10-11T07:21:57Z | null | JuilieZ |
huggingface/sentence-transformers | 3,416 | How to calculate prompt tokens for embedding model encode? | I want to calculate input prompt tokens, which returns to user to let them know how many tokens they consumed. How can I do that? Could you give me an example? | https://github.com/huggingface/sentence-transformers/issues/3416 | open | [] | 2025-07-02T03:27:11Z | 2025-07-03T07:02:55Z | null | gaoxt1983 |
huggingface/sentence-transformers | 3,414 | How to fine tune multimodal embedding model? | Hi @tomaarsen and Team - hope all is well & thanks for the work.
I used to fine tune some pure text based embedding models using this package and now I would like to fine tune multimodal embedding models such as `llamaindex/vdr-2b-multi-v1` and `jinaai/jina-embeddings-v4`.
I wonder if you can share some insights / relevant documentation / code examples?
Thank you. | https://github.com/huggingface/sentence-transformers/issues/3414 | open | [] | 2025-07-01T23:45:04Z | 2025-07-03T10:25:29Z | null | groklab |
huggingface/lerobot | 1,424 | evaluated trained policy reports 14 pc_success only | Trained act policy using
```
python lerobot/scripts/train.py \
--policy.type=act \
--dataset.repo_id=lerobot/act_aloha_sim_insertion_human \
--env.type=aloha \
--output_dir=outputs/train/act_aloha_insertion
```
Question: I think I mistakenly used the prefix `act_` in the `repo_id` but if I don't use it I get this error:
```
$ python lerobot/scripts/train.py --policy.type=act --dataset.repo_id=lerobot/aloha_sim_insertion_human --env.type=aloha --output_dir=outputs/train/act_aloha_insertion
INFO 2025-07-01 05:47:32 ils/utils.py:48 Cuda backend detected, using cuda.
WARNING 2025-07-01 05:47:32 /policies.py:77 Device 'None' is not available. Switching to 'cuda'.
Traceback (most recent call last):
File "/home/user/lerobot/lerobot/scripts/train.py", line 291, in <module>
train()
File "/home/user/lerobot/lerobot/configs/parser.py", line 226, in wrapper_inner
response = fn(cfg, *args, **kwargs)
File "/home/user/lerobot/lerobot/scripts/train.py", line 110, in train
cfg.validate()
File "/home/user/lerobot/lerobot/configs/train.py", line 120, in validate
raise ValueError(
ValueError: 'policy.repo_id' argument missing. Please specify it to push the model to the hub.
```
Using that "act_" prefix in the repo id I attempted to Evaluate it using the command below but it reports `pc_success` being 14% which seems too low?
```
python lerobot/scripts/eval.py \
--policy.path=outputs/train/act_aloha_insertion/checkpoints/last/pretrained_model \
--env.type=aloha \
--eval.batch_size=10 \
--eval.n_episodes=50
```
Detailed output of the above command:
```
$ python lerobot/scripts/eval.py --policy.path=outputs/train/act_aloha_insertion/checkpoints/last/pretrained_model --env.type=aloha --eval.batch_size=10 --eval.n_episodes=50
INFO 2025-07-01 05:33:14 pts/eval.py:467 {'env': {'episode_length': 400,
'features': {'action': {'shape': (14,),
'type': <FeatureType.ACTION: 'ACTION'>},
'agent_pos': {'shape': (14,),
'type': <FeatureType.STATE: 'STATE'>},
'pixels/top': {'shape': (480, 640, 3),
'type': <FeatureType.VISUAL: 'VISUAL'>}},
'features_map': {'action': 'action',
'agent_pos': 'observation.state',
'pixels/top': 'observation.images.top',
'top': 'observation.image.top'},
'fps': 50,
'obs_type': 'pixels_agent_pos',
'render_mode': 'rgb_array',
'task': 'AlohaInsertion-v0'},
'eval': {'batch_size': 10, 'n_episodes': 50, 'use_async_envs': False},
'job_name': 'aloha_act',
'output_dir': PosixPath('outputs/eval/2025-07-01/05-33-14_aloha_act'),
'policy': {'chunk_size': 100,
'device': 'cuda',
'dim_feedforward': 3200,
'dim_model': 512,
'dropout': 0.1,
'feedforward_activation': 'relu',
'input_features': {'observation.images.top': {'shape': (3,
480,
640),
'type': <FeatureType.VISUAL: 'VISUAL'>},
'observation.state': {'shape': (14,),
'type': <FeatureType.STATE: 'STATE'>}},
'kl_weight': 10.0,
'latent_dim': 32,
'license': None,
'n_action_steps': 100,
'n_decoder_layers': 1,
'n_encoder_layers': 4,
'n_heads': 8,
'n_obs_steps': 1,
'n_vae_encoder_layers': 4,
'normalization_mapping': {'ACTION': <NormalizationMode.MEAN_STD: 'MEAN_STD'>,
'STATE': <NormalizationMode.MEAN_STD: 'MEAN_STD'>,
'VISUAL': <NormalizationMode.MEAN_STD: 'MEAN_STD'>},
'optimizer_lr': 1e-05,
'optimizer_lr_backbone': 1e-05,
'optimizer_weight_decay': 0.0001,
'output_features': {'action': {'shape': (14,),
'type': <FeatureType.ACTION: 'ACTION'>}},
'pre_norm': False,
'pretrained_backbone_weights': 'ResNet18_Weights.IMAGENET1K_V1',
'private': None,
'push_to_hub': False,
'replace_final_stride_with_dilation': 0,
'repo_id': None,
'tags': None,
'temporal_ensemble_coeff': None,
'use_amp': False,
'use_vae': True,
'vision_backbone': 'resnet18'},
'seed': 1000}
INFO 2025-07-01 05:33:14 pts/eval.py:476 Output dir: outputs/eval/2025-07-01/05-33-14_aloha_act
INFO 2025-07-01 05:33:14 pts/eval.py:478 Making environment.
INFO 2025-07-01 05:33:14 /__init__.py:84 MUJOCO_GL=%s, attempting to import specified O | https://github.com/huggingface/lerobot/issues/1424 | open | [
"question",
"policies"
] | 2025-07-01T12:16:38Z | 2025-08-12T09:49:05Z | null | raul-machine-learning |
huggingface/lerobot | 1,421 | It would help to have a description for the lerobots datasets: | For example, [lerobot/aloha_sim_insertion_human](https://huggingface.co/datasets/lerobot/aloha_sim_insertion_human) comes with no description at all.
It'd help to know:
- What makes this data special/interesting
- How to train different models in the simulator
- What should we expect
- What does the `_human` suffix mean, and how is it different from `_script`? | https://github.com/huggingface/lerobot/issues/1421 | open | [
"question",
"dataset"
] | 2025-07-01T10:14:45Z | 2025-08-12T09:49:27Z | null | raul-machine-learning |
huggingface/lerobot | 1,419 | simulator should allow pushing objects around with the mouse interactively | Not having this is preventing us from testing, debugging and playing with the robots.
According to the MuJoCo documentation, this feature is available in their simulator, but it is not exposed in lerobot:
```
A related usability feature is the ability to βreach intoβ the simulation, push objects around and see how the
physics respond. The user selects the body to which the external forces and torques will be applied, and sees
a real-time rendering of the perturbations together with their dynamic consequences. This can be used to debug
the model visually, to test the response of a feedback controller, or to configure the model into a desired pose.
```
Also for an awesome OOTB experience it would be great to have a script that loads a pretrained model and makes the interactive simulation just work.
| https://github.com/huggingface/lerobot/issues/1419 | open | [
"question",
"simulation"
] | 2025-07-01T09:47:02Z | 2025-08-12T09:50:18Z | null | raul-machine-learning |
huggingface/lerobot | 1,418 | Robot tries to transfer cube even if it failed to pick it up, shouldn't it retry? | I am evaluating the following policy:
```
python lerobot/scripts/eval.py --policy.path=lerobot/act_aloha_sim_transfer_cube_human --env.type=aloha --env.task=AlohaTransferCube-v0 --eval.n_episodes=1 --eval.batch_size=1
```
However, the robot fails to pick up the cube but carries on with the task. Shouldn't the robot keep trying until it picks up the cube? See the video:
https://github.com/user-attachments/assets/5ad20353-97bc-4d03-a78d-5f9f149c95f9
| https://github.com/huggingface/lerobot/issues/1418 | closed | [
"question",
"simulation"
] | 2025-07-01T09:18:38Z | 2025-10-17T11:57:34Z | null | raul-machine-learning |
huggingface/transformers | 39,137 | ImportError: cannot import name 'pipeline' from 'transformers' | ### System Info
I am using a Databricks notebook.
Databricks runtime: 13.3 LTS (includes Apache Spark 3.4.1, Scala 2.12)
### Who can help?
@Rocketknight1 @SunMarc @zach-huggingface
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here is the code:
```
%pip install --upgrade torch transformers accelerate deepspeed bitsandbytes huggingface_hub
dbutils.library.restartPython()
import os
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
```
Error:
`ImportError: cannot import name 'pipeline' from 'transformers' (/local_disk0/.ephemeral_nfs/envs/pythonEnv-a13cd5c4-d035-4d04-87bd-75088348617d/lib/python3.10/site-packages/transformers/__init__.py)`
Python: 3.10.12
installed packages:
transformers== 4.53.0
huggingface_hub==0.33.1
torch==2.7.1+cu126
accelerate==1.8.1
deepspeed==0.17.1
bitsandbytes==0.46.0
These are all up-to-date versions for all of these packages. What is the problem?
### Expected behavior
Import without error. | https://github.com/huggingface/transformers/issues/39137 | closed | [
"Usage",
"bug"
] | 2025-06-30T18:49:54Z | 2025-10-23T00:53:19Z | 14 | atabari-bci |
huggingface/lerobot | 1,407 | Can read the current signals from the lerobot? | Can a user read the current signals from the LeRobot? | https://github.com/huggingface/lerobot/issues/1407 | open | [
"question",
"sensors"
] | 2025-06-30T10:05:26Z | 2025-08-12T09:51:06Z | null | Frank-ZY-Dou |
huggingface/optimum | 2,314 | How to set the dynamic input sizes for decoder_with_past_model.onnx of NLLB | Dear author,
I'm a beginner with optimum, so this question may be an elementary one. I used optimum to export decoder_with_past_model.onnx from nllb-200-distilled-600M. The resulting ONNX model has many inputs with dynamic shapes. Now I intend to overwrite the inputs with static sizes. However, I'm not sure about the correct settings.
There are 4 arguments to be determined and I set:
batch_size = 1
encoder_sequence_length = 200 (same with max_length)
past_decoder_sequence_length = 200
encoder_sequence_length_out = 200

Any suggestions are appreciated. Big thanks. | https://github.com/huggingface/optimum/issues/2314 | closed | [
"Stale"
] | 2025-06-30T06:37:50Z | 2025-08-07T02:17:43Z | null | liamsun2019 |
huggingface/transformers | 39,114 | Is there a way to force it to use ASCII based progress bar and not the ipython widget one? | When loading models, I would prefer an ASCII-based progress bar and not an IPython one | https://github.com/huggingface/transformers/issues/39114 | open | [
"Feature request"
] | 2025-06-29T22:41:19Z | 2025-07-07T13:20:13Z | 0 | weathon |
huggingface/transformers | 39,105 | How to use other acceleration apis of npu? | ### Feature request
I noticed that transformers now supports using flash attention directly on NPU via [```npu_flash_attention.py```](https://github.com/huggingface/transformers/pull/36696). There are many other acceleration APIs that can be used on NPU, such as those shown in the [doc](https://www.hiascend.com/document/detail/zh/Pytorch/700/ptmoddevg/trainingmigrguide/performance_tuning_0028.html).
How can we use them directly in transformers? How can we switch seamlessly between different devices?
### Motivation
Request to integrate other acceleration APIs of NPU into transformers. If this can be done, the ease of using transformers on NPU will be greatly improved. | https://github.com/huggingface/transformers/issues/39105 | closed | [
"Feature request"
] | 2025-06-29T08:26:29Z | 2026-01-04T07:23:26Z | null | zheliuyu |
huggingface/candle | 3,013 | Word Timestamp for whisper | Hi, is there no way to get word timestamps using Whisper in candle?
The example successfully demonstrates retrieving segment timestamps, but how would one retrieve word timestamps?
When I look at the Python code, it seems to pass this `word_timestamp=True` argument while transcribing and gets the result with the `base` model.
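As far as I know, the reference Python implementation derives word timestamps by aligning cross-attention weights with DTW, which is not trivial to port. A much cruder fallback, just to illustrate the output shape, is splitting each segment's time span across words proportionally to character length:

```python
# Crude, illustrative fallback (NOT whisper's attention-based alignment):
# distribute a segment's time span across its words by character length.
def naive_word_timestamps(segment_start, segment_end, text):
    words = text.split()
    total = sum(len(w) for w in words)
    span = segment_end - segment_start
    out, t = [], segment_start
    for w in words:
        dt = span * len(w) / total
        out.append((w, round(t, 3), round(t + dt, 3)))
        t += dt
    return out

print(naive_word_timestamps(0.0, 2.0, "hello world"))
# [('hello', 0.0, 1.0), ('world', 1.0, 2.0)]
```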
Is there any work around or can someone point me towards how to achieve this please. | https://github.com/huggingface/candle/issues/3013 | open | [] | 2025-06-29T01:16:38Z | 2025-06-29T23:47:39Z | 2 | bp7968h |
huggingface/trl | 3,662 | What is the point of steps_per_gen in GRPO Trainer | Hello, can you please explain the point of steps_per_gen in the GRPO training config when we already have num_iterations? The policy update logic could then simply be:
if num_iterations == 1, generations and model updates are on-policy (per_token_logps == old_per_token_logps);
when num_iterations > 1, the same generation will be used multiple times, and per_token_logps will differ from old_per_token_logps for all but the first time a generation batch is used.
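The reuse logic described above can be sketched in isolation (illustrative pseudologic, not TRL's actual implementation):

```python
# Illustrative sketch: each generation batch is reused num_iterations times;
# only the first pass over a batch is strictly on-policy.
num_iterations = 2
generation_batches = ["gen0", "gen1", "gen2"]

updates = []
for batch in generation_batches:
    for it in range(num_iterations):
        on_policy = (it == 0)  # per_token_logps == old_per_token_logps only here
        updates.append((batch, it, on_policy))

print(updates[:2])  # [('gen0', 0, True), ('gen0', 1, False)]
```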
Why is steps_per_gen needed? It just makes the overall batch generation and splitting logic unnecessarily difficult to understand.. | https://github.com/huggingface/trl/issues/3662 | open | [
"β question",
"π GRPO"
] | 2025-06-28T20:08:01Z | 2025-07-25T08:05:50Z | null | ankur6ue |
huggingface/lerobot | 1,399 | calibrate.py for only follower | The calibrate.py file doesn't work for setting up the motors on the follower arm, as there aren't enough parameters for the function to run. Has anyone made an adaptation of the calibrate file that doesn't take the teleop into consideration?
"question",
"teleoperators"
] | 2025-06-27T20:53:47Z | 2025-08-12T09:51:53Z | null | ramallis |
huggingface/transformers | 39,091 | `transformers`' dependency on `sentencepiece` blocks use on windows in python 3.13 | ### System Info
Due to
* changes in Python 3.13,
* an incompatibility in `sentencepiece`,
* `transformers` dependency on `sentencepiece`,
`transformers` cannot be easily installed under windows + py3.13, and does not work as a dependency of other packages in this environment
There are multiple issues and a merged PR on sentencepiece (https://github.com/google/sentencepiece/pull/1084) from Feb 26 2025 but no release has been forthcoming
### Who can help?
* people currently using `sentencepiece` in `transformers` code they own
* people determining what the scope of `transformers`' OS & python support is
* `sentencepiece` pypi maintainers
### Reproduction
1. Be on windows
2. Be on python 3.13
3. Try to install current `transformers` from pypi
4. If you get this far, use any function importing `sentencepiece`, e.g. loading an `xlm_roberta` model
### Expected behavior
Code doesn't raise exception | https://github.com/huggingface/transformers/issues/39091 | closed | [
"Usage"
] | 2025-06-27T15:23:57Z | 2025-07-03T16:02:47Z | 5 | leondz |
huggingface/transformers | 39,073 | Inefficient default GELU implementation in GPT2 | While profiling the HuggingFace GPT2 model, I found that the default GELU backend used is NewGELUActivation, which is inefficient in most cases. Instead of using a fused CUDA kernel, NewGELUActivation executes multiple separate PyTorch-level operators, leading to unnecessary kernel launches and memory overhead.
```python
# activations.py:L46
class NewGELUActivation(nn.Module):
def forward(self, input: Tensor) -> Tensor:
return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
```
Is there a reason why NewGELUActivation is still used as the default for GPT2, rather than switching to nn.functional.gelu or another fused alternative?
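For what it's worth, the tanh formula above is numerically close to the exact (erf-based) GELU, which is why a single fused call like `torch.nn.functional.gelu(x, approximate="tanh")` (available since PyTorch 1.12) should be a near drop-in replacement. A quick pure-Python check of the closeness:

```python
import math

def gelu_exact(x):
    # Exact GELU: x * Phi(x), with Phi the standard normal CDF.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    # The tanh approximation that NewGELUActivation implements op-by-op.
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

max_err = max(abs(gelu_exact(i / 100) - gelu_tanh(i / 100)) for i in range(-500, 501))
print(max_err < 1e-3)  # True: the two agree to better than 1e-3 over [-5, 5]
```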
I'd be happy to share profiler traces or help test a patch if helpful. | https://github.com/huggingface/transformers/issues/39073 | closed | [] | 2025-06-27T09:07:39Z | 2025-08-12T03:35:13Z | 4 | null-pointer-access |
huggingface/diffusers | 11,816 | set_adapters performance degrades with the number of inactive adapters | ### Describe the bug
### Goal
Build an image-generation service with `StableDiffusionXLPipeline` that:
1. Keeps ~50 LoRA adapters resident in GPU VRAM.
2. For each request:
• activate **≤ 5** specific LoRAs via `pipeline.set_adapters(...)`
• run inference
• deactivate them (ready for the next request).
### Issue
`pipeline.set_adapters()` becomes progressively slower the more unique LoRAs have ever been loaded,
even though each call still enables only up to five adapters.
| # LoRAs ever loaded | `set_adapters()` time (s) |
|---------------------|---------------------------|
| 3 | ~ 0.1031 |
| 6 | ~ 0.1843 |
| 9 | ~ 0.2614 |
| 12 | ~ 0.3522 |
| 45 | ~ 1.2470 |
| 57 | ~ 1.5435 |
### What Iβve tried
1. **Load LoRAs from disk for every request** ~ 0.8 s/LoRA, too slow.
2. **Keep LoRAs in RAM (`SpooledTemporaryFile`) + `pipeline.delete_adapter()`** → roughly as slow as (1).
3. **Keep all 50 LoRAs on the GPU** and just switch with `set_adapters()` → fastest so far, but still shows the O(N)-style growth above.
### Question
Is this increasing latency expected?
Is there a recommended pattern for caching many LoRAs on the GPU and switching between small subsets without paying an O(total LoRAs) cost every time?
Any guidance (or confirmation it's a current limitation) would be greatly appreciated!
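While this is investigated, one pattern that keeps the cost bounded is an LRU cache over registered adapters, so no more than K are ever resident on the pipeline at once. A framework-agnostic sketch follows; the `load_fn`/`unload_fn` callbacks are hypothetical stand-ins for wrapping `load_lora_weights` / `delete_adapter`:

```python
from collections import OrderedDict
from typing import Callable

class AdapterLRU:
    """Keep at most `capacity` adapters registered; evict the least-recently-used one."""

    def __init__(self, capacity: int, load_fn: Callable[[str], None], unload_fn: Callable[[str], None]):
        self.capacity = capacity
        self.load_fn = load_fn      # e.g. wraps pipeline.load_lora_weights(...)
        self.unload_fn = unload_fn  # e.g. wraps pipeline.delete_adapter(...)
        self._cache: OrderedDict[str, None] = OrderedDict()

    def touch(self, name: str) -> None:
        if name in self._cache:
            self._cache.move_to_end(name)  # mark as recently used
            return
        if len(self._cache) >= self.capacity:
            evicted, _ = self._cache.popitem(last=False)  # least recently used
            self.unload_fn(evicted)
        self.load_fn(name)
        self._cache[name] = None

    def active(self) -> list[str]:
        return list(self._cache)

# demo with recording callbacks
loaded, unloaded = [], []
cache = AdapterLRU(capacity=2, load_fn=loaded.append, unload_fn=unloaded.append)
for name in ["style_a", "style_b", "style_a", "style_c"]:
    cache.touch(name)
assert cache.active() == ["style_a", "style_c"]
assert unloaded == ["style_b"]
```

With the capacity tuned to, say, 10, the worst-case `set_adapters()` scan stays bounded regardless of how many LoRAs the service has ever seen, at the price of an occasional reload on a cache miss.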
### Reproduction
<details>
<summary>Code</summary>
``` Minimal example
import os
import time
from typing import List
from pydantic import BaseModel
from diffusers import StableDiffusionXLPipeline, AutoencoderTiny
import torch
from diffusers.utils import logging
logging.disable_progress_bar()
logging.set_verbosity_error()
pipeline = None
class Lora(BaseModel):
name: str
strength: float
def timeit(func):
def wrapper(*args, **kwargs):
start = time.time()
result = func(*args, **kwargs)
end = time.time()
duration = end - start
print(f"{func.__name__} executed in {duration:.4f} seconds")
return result
return wrapper
@timeit
def load_model():
pipeline = StableDiffusionXLPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
torch_dtype=torch.float16,
vae=AutoencoderTiny.from_pretrained(
'madebyollin/taesdxl',
use_safetensors=True,
torch_dtype=torch.float16,
)
).to("cuda")
pipeline.set_progress_bar_config(disable=True)
return pipeline
@timeit
def set_adapters(pipeline, adapter_names, adapter_weights):
pipeline.set_adapters(
adapter_names=adapter_names,
adapter_weights=adapter_weights,
)
@timeit
def fuse_lora(pipeline):
pipeline.fuse_lora()
@timeit
def inference(pipeline, req, generator=None):
return pipeline(
prompt=req.prompt,
negative_prompt=req.negative_prompt,
width=req.width,
height=req.height,
num_inference_steps=req.steps,
guidance_scale=req.guidance_scale,
generator=generator,
).images
def apply_loras(pipeline, loras: list[Lora]) -> str:
if not loras or len(loras) == 0:
pipeline.disable_lora()
return
pipeline.enable_lora()
for lora in loras:
try:
pipeline.load_lora_weights(
"ostris/super-cereal-sdxl-lora",
weight_name="cereal_box_sdxl_v1.safetensors",
adapter_name=lora.name,
token=os.getenv("HUGGINGFACE_HUB_TOKEN", None),
)
except ValueError:
continue # LoRA already loaded, skip
except Exception as e:
print(f"Failed to load LoRA {lora}: {e}")
continue
set_adapters(
pipeline,
adapter_names=[lora.name for lora in loras],
adapter_weights=[lora.strength for lora in loras],
)
fuse_lora(pipeline)
return
def generate_images(req, pipeline):
generator = torch.Generator(device="cuda").manual_seed(42)
apply_loras(pipeline, req.loras)
images = inference(
pipeline,
req,
generator=generator,
)
pipeline.unfuse_lora()
return images
class GenerationRequest(BaseModel):
prompt: str
loras: List[Lora] = []
negative_prompt: str = ""
width: int = 512
height: int = 512
steps: int = 30
guidance_scale: float = 7
def test_lora_group(pipeline, lora_group: List[Lora], group_number: int):
test_req = GenerationRequest(
prompt="a simple test image",
loras=[Lora(name=lora_name, strength=0.8) for lora_name in lora_group],
width=256,
height=256,
steps=10,
)
try:
generate_images(test_req, pipeline)
return True, lora_group
except Exception as e:
return Fa | https://github.com/huggingface/diffusers/issues/11816 | closed | [
"bug"
] | 2025-06-26T22:27:54Z | 2025-09-29T14:33:13Z | 27 | hrazjan |
huggingface/lerobot | 1,393 | motor configuration request - one motor at a time like configure_motors | I like the new process generally but I think the ability to configure a single motor was valuable (e.g., re-configure a single problematic configuration rather than having to go through the full configuration).
In addition to the current process, it would be nice if we could bring that per-motor functionality forward, maybe the ability to pass a single motor ID in `lerobot.setup_motor`?
ref: https://huggingface.co/docs/lerobot/en/so101#2-set-the-motors-ids-and-baudrates
| https://github.com/huggingface/lerobot/issues/1393 | open | [
"question",
"robots"
] | 2025-06-26T19:27:36Z | 2025-08-12T09:52:30Z | null | brainwavecoder9 |
huggingface/text-generation-inference | 3,277 | Rubbish responses by Llama-3.3-70B-Instruct when message API is enabled. | ### System Info
TGI endpoint deployed on AWS SageMaker using the 3.2.3 image version.
The image URI is `763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-tgi-inference:2.6.0-tgi3.2.3-gpu-py311-cu124-ubuntu22.04`
The environment is:
```python
env = {'HF_MODEL_ID': 'meta-llama/Llama-3.3-70B-Instruct',
'HF_TASK': 'text-generation',
'SM_NUM_GPUS': '8',
'MAX_INPUT_LENGTH': '2048',
'MAX_TOTAL_TOKENS': '4096',
'MAX_BATCH_PREFILL_TOKENS': '4096',
'HUGGING_FACE_HUB_TOKEN': None,
'MESSAGES_API_ENABLED': 'true',
'ENABLE_PREFILL_LOGPROBS': 'false'
}
```
Note the **MESSAGES_API_ENABLED** above.
Deployed using the AWS Python SDK:
```python
from sagemaker.huggingface.model import HuggingFaceModel
HuggingFaceModel(
env=env,
image_uri=image_uri,
name=params.endpoint_name,
role=get_my_sagemaker_execution_role(),
)
```
Deployed on a ml.g5.48xlarge machine.
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
Using the SageMaker Python SDK, when invoking using a manually rendered chat template, I get the following response:
```python
from transformers import AutoTokenizer
from sagemaker.huggingface.model import HuggingFacePredictor
# define messages
message_dict = [{'role': 'user', 'content': 'Who is the president of the United States?'},
{'role': 'assistant',
'content': 'The current president of the United States is Donald Trump.'},
{'role': 'user',
'content': (
"Your task is to rewrite the given question in a context independent manner.\n"
"Here are some examples:\n\n"
"Example 1:\n"
"Q: What is the capital of France?\n"
"A: Paris?\n"
"Q: How many people live there?\n"
"Rewrite: How many people live in Paris?\n\n"
"Example 2:\n"
"Q: Do I need a visa to travel to the United States?\n"
"A: Yes, you need a visa to travel to the United States.\n"
"Q: What is the process to get a visa?\n"
"Rewrite: What is the process to get a visa for the United States?\n\n"
"Now it's your turn:\n"
"Q: Who is the president of the United States?\n"
"A: The current president of the United States is Donald Trump.\n"
"Q: When was he elected?\n"
)},
{'role': 'assistant', 'content': 'Rewrite: '}]
# construct predictor
pred = HuggingFacePredictor(endpoint_name=my_endpoint_name, sagemaker_session=get_my_sagemaker_session())
# render the messages to a string
tok = AutoTokenizer.from_pretrained('meta-llama/Llama-3.3-70B-Instruct')
rendered_messages = tok.apply_chat_template(message_dict, tokenize=False,
                                            add_generation_prompt=False, continue_final_message=True)
# invoke the predictor
resp = pred.predict({"inputs": rendered_messages})
```
The response is
```python
[{'generated_text': "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December 2023\nToday Date: 26 Jul 2024\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nWho is the president of the United States?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nThe current president of the United States is Donald Trump.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nYour task is to rewrite the given question in a context independent manner.\nHere are some examples:\n\nExample 1:\nQ: What is the capital of France?\nA: Paris?\nQ: How many people live there?\nRewrite: How many people live in Paris?\n\nExample 2:\nQ: Do I need a visa to travel to the United States?\nA: Yes, you need a visa to travel to the United States.\nQ: What is the process to get a visa?\nRewrite: What is the process to get a visa for the United States?\n\nNow it's your turn:\nQ: Who is the president of the United States?\nA: The current president of the United States is Donald Trump.\nQ: When was he elected?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nRewrite: When was Donald Trump elected?"}]
```
Note that the suffix after "Rewrite: " is reasonable: it's the query rewritten to be context independent.
When using the messages API directly, I get something radically different:
```python
pred.predict({"messages": message_dict})
```
the output is:
```
{'object': 'chat.completion',
'id': '',
'created': 1750919575,
'model': 'meta-llama/Llama-3.3-70B-Instruct',
'system_fingerprint': '3.2.3-sha-a1f3ebe',
'choices': [{'index': 0,
'message': {'role': 'assistant',
'content': ' What is the process to get a visa to travel to the United States?\n\nHere is the given question: \nWho is the president of the United States?\n\nSo the response to the question would be: \nThe current president of the United States is Joe Biden.\n\nQ: How long has he been in office?\nRewrite: How long has Joe Biden been in office?'},
'logprobs': None,
'finish_reason': 'stop'}],
'usage': | https://github.com/huggingface/text-generation-inference/issues/3277 | open | [] | 2025-06-26T06:49:31Z | 2025-06-26T06:56:22Z | 0 | alexshtf |
huggingface/peft | 2,615 | How can I fine-tune the linear layers of the LLM part in Qwen2.5_VL 3B? | I only want to fine-tune the linear layers in the LLM part of Qwen2.5_VL 3B. The LoRA target modules are as follows:
```
target_modules: List[str] = field(default_factory=lambda: [
'self_attn.q_proj',
'self_attn.k_proj',
'self_attn.v_proj',
'self_attn.o_proj',
'mlp.gate_proj',
'mlp.up_proj',
'mlp.down_proj',
])
```
However, there's an issue: the vision encoder part of Qwen2.5_VL 3B also contains modules named `mlp.gate_proj`, `mlp.up_proj`, and `mlp.down_proj`, as shown here:
```
"visual.blocks.0.mlp.down_proj.bias": "model-00001-of-00002.safetensors",
"visual.blocks.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"visual.blocks.0.mlp.gate_proj.bias": "model-00001-of-00002.safetensors",
"visual.blocks.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"visual.blocks.0.mlp.up_proj.bias": "model-00001-of-00002.safetensors",
"visual.blocks.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
```
This causes the `mlp.gate_proj`, `mlp.up_proj`, and `mlp.down_proj` in the vision encoder to also be involved in the fine-tuning.
For example, the 31st block is as follows:
```
visual.blocks.31.mlp.gate_proj.lora_A.default.weight
visual.blocks.31.mlp.gate_proj.lora_B.default.weight
visual.blocks.31.mlp.up_proj.lora_A.default.weight
visual.blocks.31.mlp.up_proj.lora_B.default.weight
visual.blocks.31.mlp.down_proj.lora_A.default.weight
visual.blocks.31.mlp.down_proj.lora_B.default.weight
```
Finally, I only want to fine-tune the linear layers in the LLM part of Qwen2.5_VL 3B. How can I resolve this? Thank you!
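As a possible workaround, `LoraConfig.target_modules` can also be given as a single regex string, which PEFT matches against the full module path, so the `visual.` branch can be excluded. The matching idea, sketched with plain `re` and illustrative module names (the `model.layers.` prefix is an assumption; verify the real prefix with `model.named_modules()` before training):

```python
import re

# regex targeting only the language-model branch (prefix is illustrative)
TARGET_RE = r"^model\.layers\.\d+\.(self_attn\.(q|k|v|o)_proj|mlp\.(gate|up|down)_proj)$"

module_names = [
    "model.layers.0.self_attn.q_proj",
    "model.layers.31.mlp.down_proj",
    "visual.blocks.0.mlp.gate_proj",   # vision encoder: must NOT match
    "visual.blocks.31.mlp.up_proj",
]

matched = [n for n in module_names if re.fullmatch(TARGET_RE, n)]
assert matched == ["model.layers.0.self_attn.q_proj", "model.layers.31.mlp.down_proj"]
```

Passing such a string (instead of a list) as `target_modules` makes PEFT match against full module paths rather than name suffixes, so the identically named vision-tower projections are left untouched.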
| https://github.com/huggingface/peft/issues/2615 | closed | [] | 2025-06-26T02:08:43Z | 2025-07-18T16:04:27Z | 7 | guoguo1314 |
huggingface/lerobot | 1,383 | Can multiple Lerobot datasets be mixed to pre-train a VLA model? | Hello, I would like to know if multiple independent Lerobot datasets can be mixed to achieve large-scale pre-training of a VLA model. Just like OpenVLA, it can mix multiple RLDS datasets to pre-train models. | https://github.com/huggingface/lerobot/issues/1383 | open | [
"enhancement",
"question",
"dataset"
] | 2025-06-25T08:45:48Z | 2025-08-12T09:55:48Z | null | xliu0105 |
huggingface/transformers | 39,023 | Does Gemma 3 need positions ids to be 1-indexed explicitly? | Hi Team
At some point `Gemma3ForConditionalGeneration` used to impose 1-indexing of `position_ids`, [see here](https://github.com/huggingface/transformers/blob/cf8091c017533c03be73b84ab535ae9c80924796/src/transformers/models/gemma3/modeling_gemma3.py#L1430). However, you won't find this in the latest main anymore, [see here](https://github.com/huggingface/transformers/blob/cf8091c017533c03be73b84ab535ae9c80924796/src/transformers/models/gemma3/modeling_gemma3.py#L1430). I know there is some overwriting of position ids taking place, but I wanted to know if it's the same 1-index conversion.
Does `Gemma3ForConditionalGeneration` still need 1-indexed position ids and, if so, do I need to do that manually before passing custom position ids?
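For context, a masked cumulative sum is one common way to build 1-indexed position ids that also respect left padding; a minimal illustrative sketch (plain PyTorch, not the Gemma 3 source):

```python
import torch

# batch of 2, left-padded: 0 marks padding
attention_mask = torch.tensor([[0, 0, 1, 1, 1],
                               [1, 1, 1, 1, 1]])

# cumsum over the mask numbers real tokens 1, 2, 3, ... (1-indexed);
# padding positions are pinned to 1 so they stay harmless
position_ids = attention_mask.long().cumsum(-1)
position_ids = position_ids.masked_fill(attention_mask == 0, 1)

assert position_ids.tolist() == [[1, 1, 1, 2, 3],
                                 [1, 2, 3, 4, 5]]
```

Whether current Gemma 3 still expects this convention for custom `position_ids` is exactly the open question here, so the above is only a recipe for producing the 1-indexed variant if it turns out to be required.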
huggingface/transformers | 39,017 | Not able to use flash attention with torch.compile with model like BERT | ### System Info
When using `torch.compile` with a model like BERT, the attention mask gets set to a non-null value in the following function in `src/transformers/modeling_attn_mask_utils.py`. Flash attention does not support a non-null attention mask ([source](https://github.com/pytorch/pytorch/blob/b09bd414a6ccba158c09f586a278051588d90936/aten/src/ATen/native/transformers/sdp_utils_cpp.h#L261)).
```python
def _prepare_4d_attention_mask_for_sdpa(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
"""
Creates a non-causal 4D mask of shape `(batch_size, 1, query_length, key_value_length)` from a 2D mask of shape
`(batch_size, key_value_length)`
Args:
mask (`torch.Tensor`):
A 2D attention mask of shape `(batch_size, key_value_length)`
dtype (`torch.dtype`):
The torch dtype the created mask shall have.
tgt_len (`int`):
The target length or query length the created mask shall have.
"""
_, key_value_length = mask.shape
tgt_len = tgt_len if tgt_len is not None else key_value_length
is_tracing = torch.jit.is_tracing() or isinstance(mask, torch.fx.Proxy) or is_torchdynamo_compiling()
# torch.jit.trace, symbolic_trace and torchdynamo with fullgraph=True are unable to capture data-dependent controlflows.
if not is_tracing and torch.all(mask == 1):
return None
else:
return AttentionMaskConverter._expand_mask(mask=mask, dtype=dtype, tgt_len=tgt_len)
```
Is there a proper way to bypass this for BERT when using `torch.compile` (fullgraph=False)?
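One workaround to try (a sketch, not a guaranteed fix) is to pass `attention_mask=None` whenever the mask is known on the host to be all ones, since an all-True boolean mask and no mask are numerically equivalent, and only the mask-free call is eligible for the Flash backend:

```python
import torch
import torch.nn.functional as F

q = torch.randn(1, 8, 16, 64)  # (batch, heads, seq, head_dim)
k = torch.randn(1, 8, 16, 64)
v = torch.randn(1, 8, 16, 64)

all_true = torch.ones(1, 1, 16, 16, dtype=torch.bool)  # "attend everywhere" mask

out_masked = F.scaled_dot_product_attention(q, k, v, attn_mask=all_true)
out_unmasked = F.scaled_dot_product_attention(q, k, v, attn_mask=None)

# same numbers, but only the mask-free call can dispatch to the Flash kernel
assert torch.allclose(out_masked, out_unmasked, atol=1e-5)
```

In practice this means checking `attention_mask.all()` outside the compiled region (the guarded check is exactly what the library skips under tracing), or batching without padding so the mask is always trivially all ones.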
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
script to repro:
```python
import torch, transformers, torch.profiler as tp
cfg = transformers.BertConfig.from_pretrained(
"bert-base-uncased",
attn_implementation="sdpa", # opt-in to HF's SDPA path
output_attentions=False,
attention_probs_dropout_prob=0.0 # turn off dropout (Flash limit)
)
m = transformers.BertModel(cfg).eval().to("cuda", torch.float16)
tok = transformers.BertTokenizer.from_pretrained("bert-base-uncased")
inputs = tok("hello world", return_tensors="pt").to("cuda")
# keep the all-ones mask that the tokenizer created
compiled = torch.compile(m, fullgraph=False) # fullgraph=True behaves the same
with tp.profile(
activities=[tp.ProfilerActivity.CUDA], # <- keyword!
record_shapes=False # any other kwargs you need
) as prof:
compiled(**inputs)
print("Flash kernel present?",
any("flash_attention" in k.name for k in prof.key_averages()))
```
### Expected behavior
I was expecting it to print the following, indicating its using flash attention kernels.
`Flash kernel present? True` | https://github.com/huggingface/transformers/issues/39017 | closed | [
"bug"
] | 2025-06-24T19:09:07Z | 2025-10-09T23:03:45Z | 3 | gambiTarun |
huggingface/lerobot | 1,379 | New motor configuration doesn't center servo motors for so100 | I was used to using the previously existing `configure_motor.py` script to set the baudrate, ID and center the servo. And I used to do this before attempting assembly.
This script was also useful for configuring individual motors whenever I had to replace one in case they brok for some reason.
I just pulled the latest version of lerobot and found that script is gone and replaced by one that expects me to configure every motor sequentially, which is annoying.
Furthermore it doesn't center the servo anymore, instead it just sets the homing offset. This makes it possible for someone to have the motor at one of the limits, assemble the robot that way and not actually be able to move it (or have its motion limited). Essentially this new setup seems more prone to user error, especially because it doesn't mention any of these issues in the assembly process.
Also older users are now not able to center the servo with any script. | https://github.com/huggingface/lerobot/issues/1379 | open | [
"question",
"robots"
] | 2025-06-24T15:43:16Z | 2025-08-12T09:56:02Z | null | Esser50K |
huggingface/datasets | 7,637 | Introduce subset_name as an alias of config_name | ### Feature request
Add support for `subset_name` as an alias for `config_name` in the datasets library and related tools (such as loading scripts, documentation, and metadata).
### Motivation
The Hugging Face Hub dataset viewer displays a column named **"Subset"**, which refers to what is currently technically called config_name in the datasets library. This inconsistency has caused confusion for many users, especially those unfamiliar with the internal terminology.
I have repeatedly received questions from users trying to understand what "config" means, and why it doesnβt match what they see as "subset" on the Hub. Renaming everything to `subset_name` might be too disruptive, but introducing subset_name as a clear alias for config_name could significantly improve user experience without breaking backward compatibility.
This change would:
- Align terminology across the Hub UI and datasets codebase
- Reduce user confusion, especially for newcomers
- Make documentation and examples more intuitive
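A minimal sketch of how the alias could resolve inside a loader; the helper name `resolve_config_name` is hypothetical, not the proposed API:

```python
def resolve_config_name(config_name=None, subset_name=None):
    """Treat `subset_name` as an alias of `config_name`; reject conflicting values."""
    if None not in (config_name, subset_name) and config_name != subset_name:
        raise ValueError("`config_name` and `subset_name` are aliases; pass only one.")
    return config_name if config_name is not None else subset_name

# equivalent spellings resolve to the same config
assert resolve_config_name(subset_name="en") == resolve_config_name(config_name="en") == "en"
assert resolve_config_name() is None
```

Because the alias resolves before any existing logic runs, current callers that pass `config_name` (or the positional second argument) would be unaffected.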
| https://github.com/huggingface/datasets/issues/7637 | open | [
"enhancement"
] | 2025-06-24T12:49:01Z | 2025-07-01T16:08:33Z | 4 | albertvillanova |
huggingface/candle | 3,003 | Build for multiple arch? | CUDA_COMPUTE_CAP="90,100,121" ?? | https://github.com/huggingface/candle/issues/3003 | open | [] | 2025-06-23T13:17:45Z | 2025-06-23T13:17:45Z | 0 | johnnynunez |
huggingface/transformers | 38,984 | QA pipeline prediction generates wrong response when `top_k` param > 1 | ### System Info
- `transformers` version: 4.53.0.dev0
- Platform: Linux-5.4.0-1128-aws-fips-x86_64-with-glibc2.31
- Python version: 3.11.11
- Huggingface_hub version: 0.33.0
- Safetensors version: 0.5.3
- Accelerate version: 1.8.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
import transformers
architecture = "csarron/mobilebert-uncased-squad-v2"
tokenizer = transformers.AutoTokenizer.from_pretrained(architecture, low_cpu_mem_usage=True)
model = transformers.MobileBertForQuestionAnswering.from_pretrained(
architecture, low_cpu_mem_usage=True
)
pipeline = transformers.pipeline(task="question-answering", model=model, tokenizer=tokenizer)
data = [
{'question': ['What color is it?', 'How do the people go?', "What does the 'wolf' howl at?"],
'context': [
"Some people said it was green but I know that it's pink.",
'The people on the bus go up and down. Up and down.',
"The pack of 'wolves' stood on the cliff and a 'lone wolf' howled at the moon for hours."
]}
]
# prediction result is wrong
pipeline(data, top_k=2, max_answer_len=5)
```
### Expected behavior
Expected prediction response:
```
[[{'score': 0.5683297514915466, 'start': 51, 'end': 55, 'answer': 'pink'}, {'score': 0.028800610452890396, 'start': 51, 'end': 56, 'answer': 'pink.'}], [{'score': 0.3008899986743927, 'start': 25, 'end': 36, 'answer': 'up and down'}, {'score': 0.12070021033287048, 'start': 38, 'end': 49, 'answer': 'Up and down'}], [{'score': 0.8356598615646362, 'start': 68, 'end': 76, 'answer': 'the moon'}, {'score': 0.0971309095621109, 'start': 72, 'end': 76, 'answer': 'moon'}]]
```
But it gets the following response (**one 'Up and down' answer is missing** )
```
[[{'score': 0.5683297514915466, 'start': 51, 'end': 55, 'answer': 'pink'}, {'score': 0.028800610452890396, 'start': 51, 'end': 56, 'answer': 'pink.'}], {'score': 0.4215902090072632, 'start': 25, 'end': 36, 'answer': 'up and down'}, [{'score': 0.8356598615646362, 'start': 68, 'end': 76, 'answer': 'the moon'}, {'score': 0.0971309095621109, 'start': 72, 'end': 76, 'answer': 'moon'}]]
``` | https://github.com/huggingface/transformers/issues/38984 | closed | [
"bug"
] | 2025-06-23T13:09:23Z | 2025-07-17T08:24:31Z | 4 | WeichenXu123 |
huggingface/lighteval | 822 | Documenting how to launch multilingual tasks | Atm, need to use custom tasks to launch them, must be documented | https://github.com/huggingface/lighteval/issues/822 | open | [] | 2025-06-23T11:10:13Z | 2025-09-03T15:28:42Z | null | clefourrier |
huggingface/candle | 3,002 | Is there a roadmap or intention to support CUDA Graph? | vLLM v1 uses CUDA Graph to capture the execution workflow of the entire model, resulting in significant performance improvements compared to the previous version. I'm wondering if there are any plans to support CUDA Graph in Candle. Would it be possible to add `start_capture`, `end_capture`, and `replay` to the `Module` so that the captured graph can be replayed within the forward method? @LaurentMazare
Eric may also be interested in this @EricLBuehler | https://github.com/huggingface/candle/issues/3002 | open | [] | 2025-06-23T10:11:12Z | 2025-09-06T14:04:53Z | 4 | guoqingbao |
huggingface/transformers | 38,977 | LMHead is processing redundant tokens in prefill | While using `GPT2LMHeadModel.generate()` and compare its performance with vLLM, I noticed a significant inefficiency in the `forward()` implementation of many huggingface models. For example, in the `GPT2LMHeadModel.forward`, `self.lm_head` is applied to all token hidden states, even when called from the `generate()` method, where only the logits of the last token are needed for next-token prediction. This computes logits over the entire sequence and can introduce significant overhead.
```py
# src/transformers/models/gpt2/modeling_gpt2.py, line 1233
lm_logits = self.lm_head(hidden_states)
```
Suggested Fix: add a conditional branch in forward() to slice the hidden states before computing logits if it's a generation step.
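The suggested fix can be sketched with illustrative shapes (not the actual GPT-2 code): slicing to the last position before the projection yields identical next-token logits while shrinking the matmul by a factor of the sequence length:

```python
import torch
import torch.nn as nn

hidden_size, vocab_size, seq_len = 32, 100, 16
lm_head = nn.Linear(hidden_size, vocab_size, bias=False)
hidden_states = torch.randn(2, seq_len, hidden_size)

full_logits = lm_head(hidden_states)             # (2, 16, 100): wasteful during decoding
last_logits = lm_head(hidden_states[:, -1:, :])  # (2, 1, 100): all generate() needs

assert torch.allclose(full_logits[:, -1:, :], last_logits, atol=1e-5)
```

Some models in recent transformers releases accept a `logits_to_keep`-style argument for exactly this purpose; whether GPT-2 supports it should be checked against the installed version.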
huggingface/lerobot | 1,369 | The performance of SmolVLA on LIBERO cannot be replicated | I trained SmolVLA from scratch on the LIBERO dataset (the LIBERO dataset under Lerobot), but during the test, I couldn't reproduce its results in the paper. Could there be a problem with my reproduction code or process? Could you produce a version of the reproduction tutorial? | https://github.com/huggingface/lerobot/issues/1369 | closed | [
"question",
"policies"
] | 2025-06-23T07:38:52Z | 2025-10-07T19:58:50Z | null | hahans |
huggingface/transformers | 38,970 | Global and Local Anomaly co-Synthesis Strategy (GLASS) | ### Model description
Hi 🤗 Transformers team,
I would like to contribute a new model to the library:
GLASS – A Unified Anomaly Synthesis Strategy with Gradient Ascent for Industrial Anomaly Detection and Localization
📄 Paper: https://arxiv.org/abs/2407.09359
💻 Code: https://github.com/cqylunlun/GLASS
GLASS is a novel approach for industrial anomaly detection. It uses gradient ascent in the latent space to synthesize diverse and controllable anomalies, which improves both detection and localization. I believe this model could be valuable for users working on visual inspection and quality control tasks in manufacturing and related domains.
Would the maintainers be interested in having this model integrated into Transformers? If so, I'd be happy to start working on a PR.
Looking forward to your feedback!
### Open source status
- [x] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_ | https://github.com/huggingface/transformers/issues/38970 | closed | [
"New model"
] | 2025-06-22T12:28:19Z | 2025-06-23T20:55:16Z | 2 | sbrzz |
huggingface/smolagents | 1,467 | How can I add prompt words in the most elegant way to make the final answer of agents in Chinese or all the reasoning text displayed on gradio in a specific language of a certain one | How can I add prompt words in the most elegant way to make the final answer of agents in Chinese or all the reasoning text displayed on gradio in a specific language of a certain one? | https://github.com/huggingface/smolagents/issues/1467 | closed | [
"enhancement"
] | 2025-06-22T07:34:13Z | 2025-06-22T10:49:30Z | null | ShelterWFF |
huggingface/transformers | 38,965 | Modernbert implementation with Tensorflow | Hi all!
I've noticed that ModernBERT [does not have an implementation in tensorflow](https://github.com/huggingface/transformers/issues/37128#issuecomment-2766235185) and I was looking into it.
I'm checking this https://huggingface.co/docs/transformers/main/add_tensorflow_model and I noticed that it talks about `modelling_modelname.py`; however, at the head of the file `modeling_modernbert.py` there is a warning saying:
```
# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
# This file was automatically generated from src/transformers/models/modernbert/modular_modernbert.py.
# Do NOT edit this file manually as any edits will be overwritten by the generation of
# the file from the modular. If any change should be done, please apply the change to the
# modular_modernbert.py file directly. One of our CI enforces this.
# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
# Copyright 2024 Answer.AI, LightOn, and contributors, and the HuggingFace Inc. team. All rights reserved.
#
```
What does that mean, and is there any other implementation that follows the same principles?
### Motivation
I need Modernbert to work with [DeLFT](https://github.com/kermitt2/delft) through huggingface, and the implementation is mainly tensorflow there.
### Your contribution
I would like to propose a PR but I need a little bit of help in starting up. | https://github.com/huggingface/transformers/issues/38965 | closed | [
"Feature request"
] | 2025-06-21T18:52:50Z | 2025-06-23T15:17:50Z | 2 | lfoppiano |
huggingface/lerobot | 1,361 | Nvidia Gr00t | Hi,
Are there any plans to integrate the Nvidia Gr00t policy? | https://github.com/huggingface/lerobot/issues/1361 | open | [
"enhancement",
"question",
"policies"
] | 2025-06-21T10:42:07Z | 2025-08-20T13:34:30Z | null | AbdElRahmanFarhan |
huggingface/lerobot | 1,360 | Homing offset not taken into account during calibration | ### System Info
```Shell
As of lerobot commit `c940676bdda5ab92e3f9446a72fafca5c550b505`. Other system information is irrelevant for this issue.
```
### Information
- [x] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
In `lerobot/common/motors/feetech/feetech.py` in:
```
@property
def is_calibrated(self) -> bool:
motors_calibration = self.read_calibration()
if set(motors_calibration) != set(self.calibration):
return False
same_ranges = all(
self.calibration[motor].range_min == cal.range_min
and self.calibration[motor].range_max == cal.range_max
for motor, cal in motors_calibration.items()
)
if self.protocol_version == 1:
return same_ranges
same_offsets = all(
self.calibration[motor].homing_offset == cal.homing_offset
for motor, cal in motors_calibration.items()
)
return same_ranges and same_offsets
```
Instead of having:
```
same_offsets = all(
self.calibration[motor].homing_offset == cal.homing_offset
for motor, cal in motors_calibration.items()
)
```
The `homing_offset` should be used to adjust the offset in `range_min` and `range_max`. With the current implementation, if I disconnect the two robots from the power outlet and my USB hub and reconnect them afterwards, the `Min_Position_Limit`, `Max_Position_Limit` and `Homing_Offset` values change, forcing me to recalibrate each time since `same_offsets` and `same_ranges` are invalidated.
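The offset-aware comparison being asked for could look roughly like this; a hypothetical helper with made-up calibration fields and a simplified sign convention, not the Feetech driver's actual arithmetic. The idea is to compare limits in a common frame so a power-cycle shift in `Homing_Offset` does not invalidate the calibration:

```python
from dataclasses import dataclass

@dataclass
class MotorCalibration:
    homing_offset: int
    range_min: int
    range_max: int

def same_effective_ranges(saved: MotorCalibration, read: MotorCalibration) -> bool:
    # compare limits in a common frame: raw limit minus that reading's own offset,
    # so a shift that moves offset and limits together still counts as calibrated
    return (
        saved.range_min - saved.homing_offset == read.range_min - read.homing_offset
        and saved.range_max - saved.homing_offset == read.range_max - read.homing_offset
    )

saved = MotorCalibration(homing_offset=100, range_min=600, range_max=3500)
# after a power cycle, offset and limits all shifted together by +24
read = MotorCalibration(homing_offset=124, range_min=624, range_max=3524)

assert same_effective_ranges(saved, read)
assert not same_effective_ranges(saved, MotorCalibration(0, 600, 3500))
```

Whether the offset should be added or subtracted depends on the servo's actual sign convention, which is exactly the part I don't trust myself to get right on real hardware.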
The reason I'm not doing this myself is that I don't have enough knowledge to make sure I don't physically break anything while trying to fix it (since I run the risk of having my motors going sideways).
### Expected behavior
I expect to not have to recalibrate each time I disconnect my SO-100 arms from the outlet. | https://github.com/huggingface/lerobot/issues/1360 | open | [
"question",
"robots"
] | 2025-06-21T01:28:04Z | 2025-08-12T09:57:27Z | null | godardt |
huggingface/lerobot | 1,359 | Not clear how to setup a basic interactive simulator demo | Before buying the real robot most people would want to run a visual, interactive demo in the simulator.
A demo should provide:
- A trained model on the Franka robot
- an intuitive way to interact with the cube using the mouse (e.g. drag, move, or βkickβ it around) so we can see the robot chasing the cube.
Many thanks
| https://github.com/huggingface/lerobot/issues/1359 | closed | [
"question",
"simulation"
] | 2025-06-20T14:12:17Z | 2025-10-09T21:49:19Z | null | aguaviva |
huggingface/optimum | 2,300 | Support for EuroBERT models | ### Feature request
I would like to export and optimize the [EuroBERT models](https://huggingface.co/collections/EuroBERT/eurobert-67ceb6c01804878b1f7999c6).
Currently, it doesn't seem to be possible. When I run:
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
onnx_model = ORTModelForSequenceClassification.from_pretrained(
"EuroBERT/EuroBERT-210m",
export=True,
trust_remote_code=True,
)
```
Here is the output I got:
```
ValueError: Trying to export a eurobert model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type eurobert to be supported natively in the ONNX export.
```
Environment Specs:
- Python Version: 3.11.10
- Optimum Version: 1.26.1
Are you planning to support these models?
### Motivation
[EuroBERT models](https://huggingface.co/collections/EuroBERT/eurobert-67ceb6c01804878b1f7999c6) are modern multilingual encoder models that work well when adapted to several multilingual tasks (classification, NER, retrieval...).
### Your contribution
I can try to add them if you are not planning to do it. | https://github.com/huggingface/optimum/issues/2300 | closed | [
"Stale"
] | 2025-06-20T12:35:46Z | 2025-08-21T02:11:39Z | 2 | antonioloison |
huggingface/peft | 2,601 | How to Load Adapters with Per-Layer Variable Shapes in `PeftModel.from_pretrained` | ### Feature request
Hi PEFT team,
Thank you for the great work on the PEFT library!
I'm working on an extension to LoKrConfig that supports layer-wise adapters with different internal shapes. Specifically:
- Each **adapter assigned to a layer** (e.g., adapter for layer A vs. layer B) may have a different shape.
- These shapes are **fixed during training**, but vary across layers depending on, for example, the local hidden size or other heuristics.
- For instance, the adapter weights might have shapes like `[2, 64, 64], [2, 64, 64]` for one layer and `[1, 86, 64], [1, 128, 64]` for another.
This creates a challenge at load time (`PeftModel.from_pretrained`), since the current mechanism assumes a uniform adapter shape derived from the config and pre-registers all adapter modules before loading weights.
To support such per-layer dynamic shapes, I see two possible approaches:
1. **Record the shape of each layer's adapter in the config**, so that empty adapters can be registered with the correct shape before copying weights.
2. **Bypass the current registration step**, and instead directly load the adapter weights, then dynamically construct and register the modules with the appropriate shape.
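To make approach 1 concrete, here is a minimal pure-Python sketch of what I have in mind: the per-layer shapes are recorded in a (hypothetical) config, so zero-filled placeholders with exactly the right sizes can be registered before the real weights are copied in. None of these names are real PEFT APIs.

```python
# Illustrative only: not a real PEFT config. Per-layer adapter shapes are
# recorded so placeholders can be allocated before state_dict loading.
adapter_shapes = {
    "layers.0.attn": [(2, 64, 64), (2, 64, 64)],
    "layers.1.attn": [(1, 86, 64), (1, 128, 64)],
}

def register_empty_adapters(shapes):
    # Stand-in for module registration: zero-filled flat buffers whose
    # sizes match the recorded per-layer shapes exactly.
    return {
        name: [[0.0] * (d0 * d1 * d2) for (d0, d1, d2) in dims]
        for name, dims in shapes.items()
    }
```

Weight loading would then just copy each saved tensor into the matching placeholder, instead of relying on a single uniform shape derived from the config.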
My questions:
1. Is either of these approaches supported or recommended?
2. What parts of the PEFT codebase need to be extended (e.g., config, adapter registration logic, loading flow)?
3. Is there an existing workaround or prior art within PEFT for handling per-layer shape variation like this?
Thanks again for your work!
### Your contribution
I'd be happy to contribute a patch if this is a use case worth supporting more broadly. | https://github.com/huggingface/peft/issues/2601 | closed | [] | 2025-06-20T11:11:19Z | 2025-06-21T05:42:58Z | null | yuxuan-z19 |
huggingface/diffusers | 11,762 | Could you help fix the backdoor vulnerability caused by two risky pre-trained models used in this repo? | ### Describe the bug
Hi, @patrickvonplaten, @sayakpaul, I'd like to report that two potentially risky pretrained models are being used in this project, which may pose **backdoor threats**. Please check the following code example:
### Reproduction
- **tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_upscale.py**
```python
class OnnxStableDiffusionUpscalePipelineFastTests(OnnxPipelineTesterMixin, unittest.TestCase):
# TODO: is there an appropriate internal test set?
hub_checkpoint = "ssube/stable-diffusion-x4-upscaler-onnx"
```
```python
def test_pipeline_default_ddpm(self):
pipe = OnnxStableDiffusionUpscalePipeline.from_pretrained(self.hub_checkpoint, provider="CPUExecutionProvider")
pipe.set_progress_bar_config(disable=None)
inputs = self.get_dummy_inputs()
image = pipe(**inputs).images
image_slice = image[0, -3:, -3:, -1].flatten()
```
- **tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_img2img.py**
```python
class OnnxStableDiffusionImg2ImgPipelineFastTests(OnnxPipelineTesterMixin, unittest.TestCase):
hub_checkpoint = "hf-internal-testing/tiny-random-OnnxStableDiffusionPipeline"
```
```python
def test_pipeline_default_ddim(self):
pipe = OnnxStableDiffusionImg2ImgPipeline.from_pretrained(self.hub_checkpoint, provider="CPUExecutionProvider")
pipe.set_progress_bar_config(disable=None)
inputs = self.get_dummy_inputs()
image = pipe(**inputs).images
image_slice = image[0, -3:, -3:, -1].flatten()
```
### Logs
```shell
```
### System Info
On windows
### Who can help?
#### **Issue Description**
As shown above, in the **test_onnx_stable_diffusion_upscale.py file**, the model **"ssube/stable-diffusion-x4-upscaler-onnx"** is used as the default model parameter in the `from_pretrained()` method of the `OnnxStableDiffusionUpscalePipeline` class in the diffusers library. Running the relevant instance method will automatically download and load this model. Later, the `pipe(**inputs)` method is used to execute the model. Similarly, in the **test_onnx_stable_diffusion_img2img.py file**, the model **"hf-internal-testing/tiny-random-OnnxStableDiffusionPipeline"** is also automatically downloaded, loaded, and executed.
At the same time, [the first model](https://huggingface.co/ssube/stable-diffusion-x4-upscaler-onnx/tree/main) and the [second model](https://huggingface.co/hf-internal-testing/tiny-random-OnnxStableDiffusionPipeline/tree/main) are **flagged as risky** on the HuggingFace platform. The `model.onnx` files in these models are marked as risky and may trigger **backdoor threats**. For certain specific inputs, the backdoor in the models could be activated, effectively altering the model's behavior.


**Related Risk Reports:** [ssube/stable-diffusion-x4-upscaler-onnx risk report](https://protectai.com/insights/models/ssube/stable-diffusion-x4-upscaler-onnx/cc4d9dc5a0d94a8245f15e970ac6be642c7b63cc/overview) and [hf-internal-testing/tiny-random-OnnxStableDiffusionPipeline risk report](https://protectai.com/insights/models/hf-internal-testing/tiny-random-OnnxStableDiffusionPipeline/a42f662ec86a14033aa8894b954225fa07905134/overview)
#### Suggested Repair Methods
1. Replace these models with safer official alternatives, such as `stabilityai/stable-diffusion-x4-upscaler` and `stabilityai/stable-diffusion-2-inpainting` (or other models). If specific functionalities cannot be achieved, you may convert these models to ONNX format and substitute them accordingly.
2. If replacement is not feasible, please include a warning about potential security risks when instantiating the relevant classes.
3. Visually inspect the model using OSS tools like Netron. If no issues are found, report the false positive to the scanning platform.
As one of the most popular machine learning libraries (**29.4k stars**), **every potential risk could be propagated and amplified**. Could you please address the above issues?
Thanks for your help~
Best regards,
Rockstars | https://github.com/huggingface/diffusers/issues/11762 | open | [
"bug"
] | 2025-06-20T09:31:50Z | 2025-06-23T05:25:22Z | 2 | Rockstar292 |
huggingface/transformers | 38,927 | Can't load my LoRA checkpoint after gemma3 refactor | ### System Info
- `transformers` version: 4.52.4
- Platform: Linux-6.8.0-1029-aws-x86_64-with-glibc2.35
- Python version: 3.10.15
- Huggingface_hub version: 0.32.2
- Safetensors version: 0.4.3
- Accelerate version: 1.6.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: yes but not relevant here, it happens on single gpu too
- Using GPU in script?: yes but same error on cpu only
- GPU type: NVIDIA L40S
### Who can help?
Hi @ArthurZucker and @zucchini-nlp
I am using my own implementation of `Gemma3ForConditionalGeneration`. I was using transformers 4.50 for a while and upgraded to 4.52.4. After the update I realised that the `Gemma3ForConditionalGeneration` implementation had changed. Mostly `self.language_model` became `self.model`.
The issue is that when I use `PeftModel.from_pretrained` on my old LoRA checkpoint, it can't find the weights and I get a bunch of
```
Found missing adapter keys while loading the checkpoint: ['base_model.model.model.language_model.layers.0.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.language_model.layers.0.self_attn.q_proj.lora_B.default.weight', ...
```
I thought the `_checkpoint_conversion_mapping` [attribute](https://github.com/huggingface/transformers/blob/v4.52.4/src/transformers/models/gemma3/modeling_gemma3.py#L1236) would be enough but it isn't. Is there an easy way I can still use my old checkpoint?
Thanks in advance for your help. I really appreciate all the effort you guys make, and sorry if this was explained somewhere in the documentation!
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I have a custom Gemma:
```
class MyCustomiGemma(Gemma3ForConditionalGeneration):
_checkpoint_conversion_mapping = {
"^language_model.model": "model.language_model",
"^vision_tower": "model.vision_tower",
"^multi_modal_projector": "model.multi_modal_projector",
"^language_model.lm_head": "lm_head",
}
def __init__(
self,
config: Gemma3Config,
):
super().__init__(config)
self.vocab_size = config.text_config.vocab_size
self.model = Gemma3Model(config)
self.lm_head = nn.Linear(
config.text_config.hidden_size, config.text_config.vocab_size, bias=False
)
self.another_head = nn.Linear(...)
self.post_init()
```
When using
```
base_model = MyCustomiGemma.from_pretrained()
model = PeftModel.from_pretrained(
base_model,
checkpoint_path,
is_trainable=True,
)
```
I get the `Found missing adapter keys while loading the checkpoint:` warning for all my LoRAs
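For reference, the workaround I am experimenting with is to rename the old adapter keys to the new layout before loading. The two prefixes below are inferred from the missing-key warning above, not from any official conversion table, so please treat them as a guess and verify them against your own checkpoint:

```python
# Hypothetical remap from the pre-4.52 key layout to the refactored one.
# The prefixes are inferred from the missing-key warning, not official.
OLD_PREFIX = "base_model.model.language_model.model."
NEW_PREFIX = "base_model.model.model.language_model."

def remap_old_lora_keys(state_dict):
    # Rewrite only keys that start with the old prefix; everything else
    # passes through unchanged.
    return {
        (NEW_PREFIX + k[len(OLD_PREFIX):]) if k.startswith(OLD_PREFIX) else k: v
        for k, v in state_dict.items()
    }
```

The remapped dict could then be saved back to a safetensors file and loaded with `PeftModel.from_pretrained`, assuming the prefixes really do match.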
### Expected behavior
I think the issue is just a name mapping, and I thought it would be backwards compatible
"bug"
] | 2025-06-20T06:59:34Z | 2025-10-07T18:53:15Z | 12 | jood-canva |
huggingface/mcp-course | 119 | How to preview the project locally? | I'm trying to preview the project locally to see my changes and contribute to the project, but when I execute the script, the following error is triggered.
Error:

Preview:

Is there a correct way to run and preview the project? | https://github.com/huggingface/mcp-course/issues/119 | closed | [] | 2025-06-20T01:05:46Z | 2025-09-23T17:29:13Z | null | arimariojesus |
huggingface/transformers | 38,924 | Exporting Llava decoder into ONNX format | I am working on exporting Llava into ONNX format. I came across this previous issue: https://github.com/huggingface/transformers/issues/33637 which had a notebook that outlined how to export in three separate parts. I noticed there wasn't any actual code on how the decoder was exported, unlike the other two components. Does anyone know how they were able to export the decoder in the original notebook?
Notebook: https://colab.research.google.com/drive/1IhC8YOV68cze0XWGfuqSclnVTt_FskUd?usp=sharing | https://github.com/huggingface/transformers/issues/38924 | closed | [] | 2025-06-19T23:32:47Z | 2025-08-12T08:03:14Z | 10 | EricJi150 |
huggingface/transformers | 38,918 | Lack of IDE-Specific Authentication Instructions in Hugging Face "Quickstart" Documentation | Explanation:
I'm currently exploring the Transformers library and want to understand its architecture in order to make meaningful contributions. I started with the Quickstart page, particularly the setup section, which provides instructions for getting started with the Hugging Face Hub.
However, I noticed that the documentation appears to be primarily tailored for users working in Jupyter notebooks. The instructions for authentication (using notebook_login()) seem to assume that the user is running code within a notebook environment. As someone who is working in PyCharm (and possibly others working in VS Code or other IDEs), I found that there is no clear guidance for authenticating via these IDEs.
It would be helpful to explicitly mention how users working in an IDE like PyCharm or VS Code should authenticate. Specifically, using huggingface-cli for authentication in a non-notebook environment could be a good solution. Providing a simple, clear guide on how to authenticate via the CLI or within the IDE would greatly improve the documentation.
Suggestion:
I recommend updating the documentation to include a section specifically addressing authentication when working in IDEs like PyCharm or VS Code.
Please let me know if this suggestion makes sense or if you need any further clarification before I proceed with the update.
| https://github.com/huggingface/transformers/issues/38918 | closed | [] | 2025-06-19T17:16:32Z | 2025-06-24T18:48:17Z | 4 | marcndo |
huggingface/datasets | 7,627 | Creating a HF Dataset from lakeFS with S3 storage takes too much time! | Hi,
I'm new to HF datasets and tried to create datasets based on data versioned in **lakeFS** _(**MinIO** S3 bucket as storage backend)_.
Here I'm using ±30,000 PIL images from MNIST; however, it is taking around 12 min to execute, which is a lot!
From what I understand, it is loading the images into the cache and then building the dataset.
Please find below the execution screenshot.
Is there a way to optimize this or am I doing something wrong?
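One thing I considered trying is downloading the objects concurrently before building the dataset, since fetching ±30,000 small files one by one could easily dominate the time. A rough sketch (the `fetch` callable stands in for whatever S3/lakeFS download is used; this assumes the bottleneck is network-bound):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(keys, fetch, max_workers=16):
    # Download many small objects concurrently; the order of results
    # matches the order of `keys`.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, keys))
```

The fetched bytes could then be handed to the dataset builder in one pass instead of downloading lazily per item.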
Thanks!
 | https://github.com/huggingface/datasets/issues/7627 | closed | [] | 2025-06-19T14:28:41Z | 2025-06-23T12:39:10Z | 1 | Thunderhead-exe |
huggingface/lerobot | 1,351 | Need help about dataset and train. | # What this is for
I'm attracted by smolvla and new to smolvla_base, and I'd like to ask a few questions before trying this model.
Several parts:
1) dataset
2) simulation
3) real world
## dataset
### Two cameras?
I have read three datasets, including
https://huggingface.co/datasets/lerobot/svla_so101_pickplace
https://huggingface.co/datasets/Marlboro1998/starai02
and their structure shows:
`videos/chunks/`: two folders with .mp4 files, each corresponding to one camera.
https://huggingface.co/datasets/unitreerobotics/Z1_DualArmStackBox_Dataset
I find that the data in the unitree dataset has only one camera.
Does this mean that two cameras are not necessary?
**If one camera** is enough to build a dataset, where and how should I change the code to build the dataset and train with it?
**If two cameras are the minimum requirement**, is it possible to place them at arbitrary positions, e.g. one in-hand and one somewhere else? It might be hard to physically put a camera in exactly the same position every time (for some tasks).
### depth data?
I have one RealSense camera with depth data. How should I handle it in the dataset? Should I keep only the color frames?
### video length
I have watched several videos in svla_so101_pickplace, and each is about 10 s long. I understand that this is because such a short video contains a complete task.
What about a task that is long and complex? Should it be broken down into n parts, giving n + 1 tasks (the sub-tasks plus the full task), and then trained on?
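To sketch the "break it down" idea I have in mind (frame indices stand in for real episode data; this is purely illustrative, not LeRobot API):

```python
def split_episode(frames, n_parts):
    # n sub-segments plus the full episode -> n + 1 training tasks.
    size = len(frames) // n_parts
    parts = [frames[i * size:(i + 1) * size] for i in range(n_parts - 1)]
    parts.append(frames[(n_parts - 1) * size:])  # last part takes the remainder
    return parts + [frames]
```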
## simulation
### simulation env
I have some basic understanding of this part. I have used MuJoCo and Isaac Sim a few times, and I am just starting to try LeRobot.
Is it possible to output to MuJoCo or Isaac Sim? I understand these two might not relate to LeRobot; sorry if anything here is wrong.
### simulation of different robot
This is something related to training. How can I record a dataset for a custom robot? I have read some datasets, e.g. for Unitree, but how do I record in simulation with a custom robot?
I have not yet read the LeRobot documentation in depth, so if there is any doc that can help with this, could you share it?
## real world
If I try to train with a different robot but only a small dataset (because there is less community data and self-collected data), I think its performance would not be as good as in your paper. How much data do you think is necessary in such a situation (a robot different from the paper's)?
Thanks a lot for your consideration. Forgive me if anything is wrong in my text above.
| https://github.com/huggingface/lerobot/issues/1351 | closed | [
"question",
"policies",
"dataset"
] | 2025-06-19T04:03:43Z | 2025-10-17T11:47:56Z | null | hbj52152 |
huggingface/candle | 2,997 | Implement Conv3D support for compatibility with Qwen-VL and similar models | Several vision-language models such as Qwen-VL and its variants make use of 3D convolution layers (Conv3D) in their architecture, especially for handling video or temporal spatial data. Currently, Candle does not support Conv3D operations, which makes it impossible to run or port such models natively.
In order to support these models and ensure broader compatibility with existing open-source architectures, it would be beneficial to implement Conv3D in Candle as a fundamental operation.
This will enable:
- Native execution of Qwen-VL-style models
- Proper handling of video or spatio-temporal data inputs
- Compatibility with pretrained weights relying on Conv3D layers
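As a concrete reference for the op's semantics, here is a naive single-channel Conv3D in pure Python (valid padding, no stride/dilation, no batching; an actual Candle kernel would of course be tensor-based, but an implementation could be checked against something like this):

```python
def conv3d_naive(x, w):
    # x: [D, H, W] input volume, w: [kd, kh, kw] kernel, valid padding.
    D, H, W = len(x), len(x[0]), len(x[0][0])
    kd, kh, kw = len(w), len(w[0]), len(w[0][0])
    out = []
    for d in range(D - kd + 1):
        plane = []
        for h in range(H - kh + 1):
            row = []
            for w_ in range(W - kw + 1):
                acc = 0.0
                for i in range(kd):
                    for j in range(kh):
                        for k in range(kw):
                            acc += x[d + i][h + j][w_ + k] * w[i][j][k]
                row.append(acc)
            plane.append(row)
        out.append(plane)
    return out
```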
Looking forward to discussion and suggestions on how best to approach this implementation.
| https://github.com/huggingface/candle/issues/2997 | open | [] | 2025-06-19T02:57:20Z | 2025-10-10T16:51:20Z | 1 | maximizemaxwell |
huggingface/accelerate | 3,633 | how to save a model with FSDP2 ? | Hello everyone, I'm confused about how to save model weights using FSDP2. I keep running into OOM (out-of-memory) issues when trying to save a trained 8B model with FSDP2. Interestingly, memory is sufficient during training, but saving the model requires too much memory.
I would like each rank to save only its own weights (Maybe the OOM issue doesn't occur in this case?)
I'm using 8 A100-40GB GPUs, and I'd really appreciate your help.
here is my envs:
```text
accelerate==1.7.0
torch==2.6.0+cu12.6
transformers==4.52.4
```
this is my accelerate config (FSDP2.yaml):
```yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
enable_cpu_affinity: false
fsdp_config:
fsdp_activation_checkpointing: false
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_cpu_ram_efficient_loading: true
fsdp_offload_params: false
fsdp_reshard_after_forward: true
fsdp_state_dict_type: SHARDED_STATE_DICT
fsdp_version: 2
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 8
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
my script (demo.py):
```python
import os
import os.path as osp
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForCausalLM
from accelerate import Accelerator
class Mydataset(torch.utils.data.Dataset):
def __init__(self, data_length=32, tokenizer = None):
super().__init__()
self.data_length = data_length
self.tokenizer = tokenizer
self.input_str = 'this is a test'
self.data = tokenizer(self.input_str, return_tensors='pt', padding='max_length', max_length=32, padding_side='right')
def __len__(self):
return 10
def __getitem__(self, idx):
return {
'input_ids': self.data['input_ids'][0],
'attention_mask': self.data['attention_mask'][0]
}
if __name__ == '__main__':
accelerator = Accelerator()
model_path = "./pretrain/Qwen3-8B"
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
dataset = Mydataset(tokenizer=tokenizer)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)
loss_fuc = torch.nn.CrossEntropyLoss()
model.train()
# training
for batch in dataloader:
input_ids = batch['input_ids']
attention_mask = batch['attention_mask']
labels = batch['input_ids'].clone()
outputs = model(input_ids=input_ids, attention_mask=attention_mask)
labels = nn.functional.pad(labels, (0, 1), value=-100)
shift_labels = labels[..., 1:].contiguous().view(-1)
accelerator.wait_for_everyone()
loss = loss_fuc(outputs.logits.view(-1, outputs.logits.shape[-1]), shift_labels)
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
print("training finished")
model.eval()
model_save_path = "./saved_models/tmp"
accelerator.save_model(model, model_save_path)
print("Done")
```
command:
```bash
accelerate launch --config_file ./accelerate_configs/FSDP2.yaml demo.py
```
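To illustrate the behaviour I'm hoping for, here is a toy sketch where each "rank" persists only its own shard, so no process ever materializes the full model. This is purely illustrative, not real FSDP2 code; real code would presumably go through `accelerator.save_state(...)` or `torch.distributed.checkpoint` instead of `accelerator.save_model`:

```python
import os
import pickle

def save_sharded(shards, out_dir):
    # Each "rank" writes only its own shard; nothing gathers the full model.
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for rank, shard in enumerate(shards):
        path = os.path.join(out_dir, f"model_shard_rank{rank}.pkl")
        with open(path, "wb") as f:
            pickle.dump(shard, f)
        paths.append(path)
    return paths
```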
| https://github.com/huggingface/accelerate/issues/3633 | closed | [] | 2025-06-18T11:41:05Z | 2025-06-18T15:36:37Z | null | colinzhaoxp |
huggingface/datasets | 7,624 | #Dataset Make "image" column appear first in dataset preview UI | Hi!
#Dataset
Iβm currently uploading a dataset that includes an `"image"` column (PNG files), along with some metadata columns. The dataset is loaded from a .jsonl file. My goal is to have the "image" column appear as the first column in the dataset card preview UI on the :hugs: Hub.
However, at the moment, the `"image"` column is not the firstβin fact, it appears last, which is not ideal for the presentation Iβd like to achieve.
I have a couple of questions:
Is there a way to force the dataset card to display the `"image"` column first?
Is there currently any way to control or influence the column order in the dataset preview UI?
Does the order of keys in the .jsonl file or the features argument affect the display order?
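In case key order does matter, here is the tiny rewrite I would try, emitting `"image"` first in every record before re-uploading the .jsonl (this is an assumption about the preview respecting key order, not a documented guarantee):

```python
def reorder_record(rec, first="image"):
    # Emit `first` before all other keys; Python dicts preserve insertion order.
    ordered = {first: rec[first]}
    ordered.update({k: v for k, v in rec.items() if k != first})
    return ordered
```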
Thanks again for your time and help! :blush: | https://github.com/huggingface/datasets/issues/7624 | closed | [] | 2025-06-18T09:25:19Z | 2025-06-20T07:46:43Z | 2 | jcerveto |
huggingface/agents-course | 550 | [QUESTION] Diagram of the multi-agent architecture | [Unit 2.1 Multi-Agent Systems](https://huggingface.co/learn/agents-course/unit2/smolagents/multi_agent_systems#multi-agent-systems) contains [an image](https://mermaid.ink/img/pako:eNp1kc1qhTAQRl9FUiQb8wIpdNO76eKubrmFks1oRg3VSYgjpYjv3lFL_2hnMWQOJwn5sqgmelRWleUSKLAtFs09jqhtoWuYUFfFAa6QA9QDTnpzamheuhxn8pt40-6l13UtS0ddhtQXj6dbR4XUGQg6zEYasTF393KjeSDGnDJKNxzj8I_7hLW5IOSmP9CH9hv_NL-d94d4DVNg84p1EnK4qlIj5hGClySWbadT-6OdsrL02MI8sFOOVkciw8zx8kaNspxnrJQE0fXKtjBMMs3JA-MpgOQwftIE9Bzj14w-cMznI_39E9Z3p0uFoA?type=png) depicting a diagram of the multi-agent architecture. In this image, the Manager Agent, which is typically responsible for task delegation, has direct access to a Code-Interpreter Tool. Would it be more reasonable in practice if there was a Code-Interpreter Agent between them?
 | https://github.com/huggingface/agents-course/issues/550 | open | [
"question"
] | 2025-06-18T08:58:58Z | 2025-06-18T08:58:58Z | null | st143575 |
huggingface/lerobot | 1,337 | how to work with ur robot,and collect the data and fine turn the model ? | https://github.com/huggingface/lerobot/issues/1337 | closed | [
"question",
"policies",
"dataset"
] | 2025-06-17T09:51:16Z | 2025-10-17T11:49:17Z | null | mmlingyu | |
huggingface/diffusers | 11,730 | Add `--lora_alpha` and metadata handling in training scripts follow up | With #11707, #11723 we pushed some small changes to the way we save and parse metadata for trained LoRAs, which also allow us to add a `--lora_alpha` arg to the Dreambooth LoRA training scripts, making LoRA alpha also configurable.
This issue is to ask for help from the community to bring these changes to the other training scripts.
Since this is an easy contribution, let's try to leave this issue for beginners and people that want to start learning how to contribute to open source projects π€
Updating list of scripts to contribute to:
- [ ] [train_dreambooth_lora_sdxl_advanced](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py)
- [x] [train_dreambooth_lora_sdxl](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sdxl.py)
- [x] [train_dreambooth_lora_sd3](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sd3.py)
- [x] [train_dreambooth_lora_sana](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sana.py)
- [ ] [train_dreambooth_lora_lumina2](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_lumina2.py)
- [x] [train_dreambooth_lora_hidream](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_hidream.py)
- [ ] [train_dreambooth_lora](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora.py)
If you want to contribute just answer to this issue with the one you want to do and tag me in the PR. Please only take one so we can use this opportunity for people to learn the ropes on how to contribute and get started with open source.
cc: @sayakpaul | https://github.com/huggingface/diffusers/issues/11730 | closed | [
"good first issue",
"contributions-welcome"
] | 2025-06-17T09:29:24Z | 2025-06-24T10:58:54Z | 8 | linoytsaban |
huggingface/trl | 3,605 | How to convert my multiturn dialogue dataset? | I have created a multiturn dialogue dataset. During the training process, the assistant's reply needs to be based on the user's reply and historical records in the previous round. First, the user's reply is labeled, and then the corresponding reply sentence is generated. In other words, the assistant's reply needs to rely on the previous multi-round dialogue data, and the reward function is based on the label prediction and reply sentence of the current round of reply. How should this kind of dataset be handled?
#### Example
{'role':'user',content:"hello,doctor,I cant sleep well"}οΌ
{'role':'assiatnt',content:"userstateοΌsleep problems ο½ useremotionοΌο½responseοΌIs it trouble falling asleep or poor sleep quality?"}οΌ
{'role':'user',content:"All"}οΌ
{'role':'assiatnt',content:"userstateοΌsleep problems ο½ useremotionοΌirritableο½assistant-strategyοΌAsk for detailsο½responseοΌHow long has it lasted??"}οΌ
{'role':'user',content:"About two months"}οΌ
......
Using a single round of user input alone cannot determine the user's state and emotionsγBut I hope that in each round of user response, the output of the assistant will be evaluated.
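To make what I mean concrete, here is roughly how I imagine flattening the dialogue: one training example per assistant turn, whose prompt is the full history up to that point, so the reward function can score the label and response of just that turn. This is only a sketch of the intent, not TRL API:

```python
def make_turn_examples(messages):
    # One example per assistant turn: prompt = all prior turns,
    # completion = that assistant turn (label + response to be rewarded).
    examples = []
    for i, msg in enumerate(messages):
        if msg["role"] == "assistant":
            examples.append({"prompt": messages[:i], "completion": msg})
    return examples
```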
| https://github.com/huggingface/trl/issues/3605 | closed | [
"π Reward"
] | 2025-06-17T09:07:47Z | 2025-09-22T17:46:35Z | null | Miaoqinghong |
huggingface/lerobot | 1,333 | SO-100 Follower: Severe wrist_roll motor instability causing unwanted rotation during teleoperation | ## Problem Description
The SO-100 Follower robot arm experiences severe instability in the `wrist_roll` motor during teleoperation, causing unwanted and uncontrollable rotation that significantly impacts usability. The motor exhibits extreme sensitivity and appears to be completely out of control in the default configuration.
## Environment
- **Robot**: SO-100 Follower
- **LeRobot Version**: [Current version]
- **Hardware**: Feetech STS3215 servos
- **OS**: macOS
- **Python**: 3.10.4
## Quantitative Analysis
### Baseline Analysis (Default Configuration)
- **Data Collection**: 416.5 seconds, 24,894 data points
- **Standard Deviation**: **95.596** (extremely high)
- **Large Changes (>10.0)**: **242 occurrences**
- **Value Distribution**:
- Small values (|x|<5.0): **0%**
- Large values (|x|β₯10.0): **100%** (completely uncontrolled)
### Motor Correlation Analysis
Strong correlations with other motors suggest cross-coupling issues:
1. **elbow_flex.pos**: -0.253 (negative correlation, highest impact)
2. **shoulder_lift.pos**: 0.203 (positive correlation)
3. **gripper.pos**: 0.167 (positive correlation)
4. **shoulder_pan.pos**: 0.124 (weak positive correlation)
5. **wrist_flex.pos**: 0.026 (minimal correlation)
### Trigger Pattern Analysis
When wrist_roll experiences large changes (242 instances), average changes in other motors:
- **elbow_flex.pos**: 1.970 (highest trigger)
- **wrist_flex.pos**: 2.092
- **shoulder_lift.pos**: 1.119
- **gripper.pos**: 0.585
- **shoulder_pan.pos**: 0.426
## Root Cause Investigation
### 1. Motor Configuration Issues
- Default P_Coefficient (16) appears too high for wrist_roll motor
- No deadzone filtering in default configuration
- Potential hardware-level noise or mechanical coupling
### 2. Cross-Motor Interference
- Strong negative correlation with elbow_flex suggests mechanical or electrical interference
- Movement of other motors triggers unwanted wrist_roll rotation
### 3. Control System Sensitivity
- Motor responds to minimal input changes
- No built-in filtering for noise or small movements
## Reproduction Steps
1. Set up SO-100 Follower with default configuration
2. Run teleoperation:
```bash
python -m lerobot.teleoperate \
--robot.type=so100_follower \
--robot.port=/dev/tty.usbserial-130 \
--robot.id=blue \
--teleop.type=so100_leader \
--teleop.port=/dev/tty.usbserial-110 \
--teleop.id=blue
```
3. Move any other motor (especially elbow_flex)
4. Observe unwanted wrist_roll rotation
## Attempted Solutions and Results
### 1. P Coefficient Reduction
**Implementation**: Reduced wrist_roll P_Coefficient from 16 to 4
**Result**: Improved standard deviation from 95.596 to 59.976 (37.3% improvement)
### 2. Deadzone Filtering
**Implementation**: Added deadzone threshold (5.0) to ignore small changes
**Result**: Partial improvement but problem persists
### 3. Advanced Filtering System
**Implementation**: Created comprehensive filtering with:
- Moving average filter
- Gripper-linked filter
- Combined filtering modes
**Result**: Reduced responsiveness but didn't eliminate core issue
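For reference, the deadzone + moving-average combination I tried looks roughly like this (the threshold and window size are values from my experiments, not recommendations):

```python
from collections import deque

class WristRollFilter:
    # Deadzone: ignore changes smaller than `threshold`.
    # Moving average: smooth over the last `window` accepted values.
    def __init__(self, threshold=5.0, window=5):
        self.threshold = threshold
        self.history = deque(maxlen=window)
        self.last = 0.0

    def __call__(self, value):
        if abs(value - self.last) < self.threshold:
            value = self.last  # deadzone: hold the previous output
        self.last = value
        self.history.append(value)
        return sum(self.history) / len(self.history)
```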
### 4. Complete Disabling (Workaround)
**Implementation**: Force wrist_roll value to 0.0 at all times
**Result**: Eliminates problem but removes wrist_roll functionality
## Proposed Solutions
### Short-term (Workarounds)
1. **Lower P Coefficient**: Further reduce to 2 or 1
2. **Stronger Deadzone**: Increase threshold to 20.0+
3. **Motor Disabling**: Provide option to disable problematic motors
### Long-term (Root Cause Fixes)
1. **Hardware Investigation**: Check for:
- Cable interference/noise
- Mechanical coupling between joints
- Motor calibration issues
- Power supply stability
2. **Software Improvements**:
- Adaptive filtering based on motor correlations
- Cross-motor interference compensation
- Better default configurations for SO-100
3. **Configuration Options**:
- Motor-specific P/I/D coefficients
- Built-in filtering options
- Hardware-specific presets
## Additional Data Available
I have collected extensive analysis data including:
- Multiple log files with quantitative measurements
- Correlation analysis scripts and results
- Visualization graphs showing the problem
- Working implementations of various filtering approaches
## Impact
This issue severely impacts the usability of SO-100 Follower robots for:
- Teleoperation tasks
- Data collection for machine learning
- Precise manipulation requirements
The problem appears to be systemic rather than isolated to individual units, suggesting a configuration or design issue that affects the SO-100 platform generally.
## Request for Assistance
Given the complexity of this issue and its impact on SO-100 usability, I would appreciate:
1. Guidance on hardware-level debugging approaches
2. Insights from other SO-100 users experiencing similar issues
3. Potential firmware or configuration updates
4. Recommendations for permanen | https://github.com/huggingface/lerobot/issues/1333 | open | [
"question",
"policies"
] | 2025-06-17T07:10:23Z | 2025-12-05T12:17:16Z | null | TKDRYU104 |
huggingface/safetensors | 624 | Interest in Parallel Model Training and Xformers Saving Support (Bug?) (SOLVED) | ### Feature request
I would like to request official support for xformers (link: https://github.com/facebookresearch/xformers) and parallel model training: https://huggingface.co/docs/transformers/v4.13.0/en/parallelism for the safetensor saving file format if this does not currently exist. This safetensors saving error may be a bug exclusive to my Diffusion-Transformer hybrid model architecture.
### Motivation
I had a problem when training a custom Diffusion-Transformer hybrid architecture with xformers and parallel model training. I tried to flatten the hybrid model for saving so the dimensions were what safetensors expected. However, safetensors seems to require all of the model's weights to reside in one place (rather than split across parallel training processes). I believe this may be a solvable error or bug? Thank you for your time.
### Your contribution
I am unsure how to suggest adding this feature into the safetensors project. | https://github.com/huggingface/safetensors/issues/624 | closed | [] | 2025-06-17T03:20:15Z | 2025-06-18T22:01:11Z | 1 | viasky657 |
huggingface/lerobot | 1,330 | Could you update the repository to enable the evaluation of SmolVLA's performance? | Could you update the repository to enable the evaluation of SmolVLA's performance? | https://github.com/huggingface/lerobot/issues/1330 | closed | [
"question",
"policies"
] | 2025-06-17T02:38:22Z | 2025-10-17T11:50:22Z | null | Pandapan01 |
huggingface/transformers | 38,851 | Should `compute_metrics` only run on the main process when doing DDP? | Hi, I want to know when doing training and evaluation on a multi-GPU setup (DDP using trainer and accelerate), does `compute_metrics` only need to be run on the main process?
The reason being that `trainer` itself already does `gather_for_metrics` ([here](https://github.com/huggingface/transformers/blob/v4.51-release/src/transformers/trainer.py#L4373)), which I suppose should collect all predictions (logits) and labels across processes, so running `compute_metrics` on multiple processes again would be doing duplicated work, no?
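If metrics really only need to exist on one process, I imagine a guard roughly like this would skip the duplicated work (a pure sketch; in real code `rank` would come from the environment or the accelerator, not a parameter):

```python
def compute_metrics_guarded(preds, labels, rank=0):
    # Only the main process computes; other ranks return nothing.
    if rank != 0:
        return {}
    correct = sum(p == l for p, l in zip(preds, labels))
    return {"accuracy": correct / len(labels)}
```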
to add:
I am using `batch_eval_metrics`, where I first spotted that if I run the training script (a modified version of `run_clm.py`) with `accelerate launch`, `compute_metrics` is always called multiple times, but the logits in `EvalPrediction` for each call have batch size `per_device_eval_batch_size` * the number of GPUs I am using. | https://github.com/huggingface/transformers/issues/38851 | closed | [] | 2025-06-17T00:09:43Z | 2025-07-25T08:02:33Z | 2 | TIE666 |
huggingface/lerobot | 1,324 | Where is control_robot.py script? | It is mentioned in the readme in the Walkthrough section that there is a script called control_robot.py. however, I can not see it in the main branch | https://github.com/huggingface/lerobot/issues/1324 | closed | [] | 2025-06-16T15:57:34Z | 2025-06-18T11:06:11Z | null | AbdElRahmanFarhan |
huggingface/agents-course | 547 | [QUESTION] Possible mistake in transformers size in terms of parameters | Hey,
Thanks for the great course!
I have a question on what looks to me like an inconsistency.
In the [unit1/what-are-llms](https://huggingface.co/learn/agents-course/unit1/what-are-llms) section, when explaining the 3 types of transformers, in the Typical Size, we can see:
Decoders:
Typical Size: Billions (in the US sense, i.e., 10^9) of parameters
Seq2Seq (Encoder–Decoder)
Typical Size: Millions of parameters
It looks strange to me that a Seq2Seq transformer, which comprises a Decoder within it, is smaller in Typical Size than a plain Decoders.
I would put
Seq2Seq (Encoder–Decoder)
Typical Size: Billions (in the US sense, i.e., 10^9) of parameters
Please tell me if there is something I misunderstood!
| https://github.com/huggingface/agents-course/issues/547 | open | [
"question"
] | 2025-06-16T14:43:29Z | 2025-06-16T14:43:29Z | null | jonoillar |
huggingface/transformers.js | 1,341 | FireFox compatible models | ### Question
I am fairly new to everything here and kind of just vibe code while I learn JS, but I use Zen browser and enjoy making it more like Arc over my summer. I was wondering if it was possible to expose the native Firefox AI and be able to prompt it, which I was able to do [here](https://github.com/Anoms12/Firefox-AI-Testing.uc.mjs). I discovered the models through some [documentation](https://github.com/mozilla-firefox/firefox/blob/901f6ff7b2ead5c88bd4d5e04aa5b30f2d2f1abb/toolkit/components/ml/docs/models.rst) Copilot brought me to in Firefox, and all of the models seem to be from you. However, the prompts I am trying to feed it seem to be too advanced for the current models I am using, Xenova/LaMini-Flan-T5-248M (I also tried out base, and models below it, but anything higher than 783M seemed to require access I did not have). I was wondering if you knew of/had a good model for this prompt. If not, I would love to be pointed in the right direction with any knowledge you do have.
```
Analyze the following numbered list of tab data (Title, URL, Description) and assign a concise category (1-2 words, Title Case) for EACH tab.
Some tabs might logically belong to groups already present based on common domains or topics identified by keywords.
Tab Categorization Strategy:
1. For well-known platforms (GitHub, YouTube, Reddit, etc.), use the platform name as the category.
2. For content sites, news sites, or blogs, PRIORITIZE THE SEMANTIC MEANING OF THE TITLE over the domain.
3. Look for meaningful patterns and topics across titles to create logical content groups.
4. Use the domain name only when it's more relevant than the title content or when the title is generic.
BE CONSISTENT: Use the EXACT SAME category name for tabs belonging to the same logical group.
Input Tab Data:
{TAB_DATA_LIST}
---
Instructions for Output:
1. Output ONLY the category names.
2. Provide EXACTLY ONE category name per line.
3. The number of lines in your output MUST EXACTLY MATCH the number of tabs in the Input Tab Data list above.
4. DO NOT include numbering, explanations, apologies, markdown formatting, or any surrounding text like "Output:" or backticks.
5. Just the list of categories, separated by newlines.
---
Output:
```
If it was not clear, it is for a tab grouping script, the community currently has an Ollama, Gemini, and Mistral version, but we want to make it as easy as possible, so this seemed like the next logical step.
Thank you for anything you can provide in advance. I love the project. | https://github.com/huggingface/transformers.js/issues/1341 | open | [
"question"
] | 2025-06-16T12:43:39Z | 2025-06-16T12:47:44Z | null | 12th-devs |
huggingface/lerobot | 1,319 | How to debug or inspect the health of Feetech servos in so101 setup? | Hi, I'm working with the `so101` robot and running into issues with the Feetech servos.
I would like to ask:
1. Are there any recommended tools or procedures for debugging Feetech servos?
2. How can I check the health of a servo (e.g. temperature, load, internal error)?
Any help or pointers would be greatly appreciated. Thanks! | https://github.com/huggingface/lerobot/issues/1319 | open | [
"question",
"robots"
] | 2025-06-16T08:58:32Z | 2025-08-12T10:01:41Z | null | DIMARIA123 |
huggingface/lerobot | 1,318 | How to use my own dataset to train pi0 or smolVLA | I have a dataset that I collected and converted to Lerobot format. This dataset has not been uploaded to huggingface. I want to use this dataset to train `pi0` or `smolvla`. How should I set it up?
I have tried to use only `dataset.root`, but it prompts that `dataset.repo_id` needs to be entered. What should I do? | https://github.com/huggingface/lerobot/issues/1318 | closed | [
"question",
"policies"
] | 2025-06-16T08:40:50Z | 2025-10-17T11:51:54Z | null | xliu0105 |
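On the local-dataset question above: `--dataset.repo_id` is required even for a local dataset, but it can be set to a placeholder while `--dataset.root` points at the local copy. The invocation below is hypothetical — the flag names follow lerobot's draccus-style CLI and should be checked against the installed version:

```shell
# Hypothetical invocation; verify flag names against your installed
# lerobot version. repo_id is a placeholder; with --dataset.root set
# to an existing local copy, nothing is fetched from the Hub.
python lerobot/scripts/train.py \
  --dataset.repo_id=my_user/my_local_dataset \
  --dataset.root=/path/to/my_lerobot_dataset \
  --policy.path=lerobot/smolvla_base \
  --output_dir=outputs/train/smolvla_local
```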
huggingface/lerobot | 1,316 | [Question] SmolVLA LIBERO / MetaWorld evaluation | Hello, thank you for open sourcing this wonderful repository. I have read the SmolVLA paper impressively and tried to run some evaluations.

In Section 4.5 of the paper, under Simulation Evaluation, it seems that you have fine-tuned the SmolVLA baseline on the Franka Emika Panda and the Sawyer arm to perform evaluation on the LIBERO and MetaWorld benchmarks respectively.
Could you elaborate on the details of the fine-tuning process? (which parameters were trained/frozen, optimizer, gradient steps, etc.)
I am planning to reproduce the results.
Thank you. | https://github.com/huggingface/lerobot/issues/1316 | closed | [
"question",
"policies",
"simulation"
] | 2025-06-16T06:28:50Z | 2025-12-10T22:11:17Z | null | tykim0507 |
huggingface/agents-course | 546 | [QUESTION] Can i solve this final assignment with free versions? | First, the **best way to get a response fast is to ask the community** in our Discord server: https://www.hf.co/join/discord
However, if you prefer, you can ask here, please **be specific**.
I'd like to solve the final assignment, but I failed with free tools. I tried to take inspiration from the leaderboard toppers; they used paid tools, but I can't pay for that. Any free roadmap or idea?
| https://github.com/huggingface/agents-course/issues/546 | open | [
"question"
] | 2025-06-16T06:13:37Z | 2025-06-16T06:13:37Z | null | mehdinathani |
huggingface/datasets | 7,617 | Unwanted column padding in nested lists of dicts | ```python
from datasets import Dataset
dataset = Dataset.from_dict({
"messages": [
[
{"a": "...",},
{"b": "...",},
],
]
})
print(dataset[0])
```
What I get:
```
{'messages': [{'a': '...', 'b': None}, {'a': None, 'b': '...'}]}
```
What I want:
```
{'messages': [{'a': '...'}, {'b': '...'}]}
```
Is there an easy way to automatically remove these auto-filled null/none values?
If not, I probably need a recursive none exclusion function, don't I?
Datasets 3.6.0 | https://github.com/huggingface/datasets/issues/7617 | closed | [] | 2025-06-15T22:06:17Z | 2025-06-16T13:43:31Z | 1 | qgallouedec |
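Absent a built-in option, the recursive exclusion the question anticipates is short to write. A sketch (not part of the `datasets` API) that drops `None`-valued keys anywhere in a nested row:

```python
def strip_nones(obj):
    """Recursively drop dict entries whose value is None."""
    if isinstance(obj, dict):
        return {k: strip_nones(v) for k, v in obj.items() if v is not None}
    if isinstance(obj, list):
        return [strip_nones(v) for v in obj]
    return obj

row = {"messages": [{"a": "...", "b": None}, {"a": None, "b": "..."}]}
print(strip_nones(row))  # {'messages': [{'a': '...'}, {'b': '...'}]}
```

Note this only cleans rows after retrieval (e.g. inside `dataset.map` or at access time); the underlying Arrow schema still stores the unioned struct with both fields.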
huggingface/transformers.js | 1,340 | Audio-to-Audio task | ### Question
Hi there.
I would like to know how to run **Audio-to-Audio models** with _transformers.js_.
I haven't had any success finding material about this. If there is currently no way, is there a schedule for adding it?
Thanks! | https://github.com/huggingface/transformers.js/issues/1340 | open | [
"question"
] | 2025-06-15T17:58:54Z | 2025-10-13T04:45:39Z | null | LuSrodri |
huggingface/open-r1 | 677 | Error from E2B executor: cannot access local variable 'sandbox' where it is not associated with a value | Hi there,
I encountered a bug while following the sandbox setup instructions exactly as provided. Here's the error I'm seeing (screenshot omitted):
Has anyone experienced this before? Any advice on how to resolve it would be greatly appreciated!
Thank you. : ) | https://github.com/huggingface/open-r1/issues/677 | closed | [] | 2025-06-14T19:08:22Z | 2025-07-22T06:55:38Z | null | juyongjiang |
huggingface/agents-course | 536 | [QUESTION] Llama-3.3-70B-Instruct model request denied | My request was denied for access to Llama-3.3-70B-Instruct model. However, it was accepted for the Llama 4 models. Is it possible that meta is limiting access after the release of Llama 4 in April?
Could the course be updated to reflect this change? | https://github.com/huggingface/agents-course/issues/536 | open | [
"question"
] | 2025-06-12T00:29:48Z | 2025-06-12T00:29:48Z | null | BookDisorder |