repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers | 12,079 | API Suggestion: Expose Methods to Convert to Sample Prediction in Schedulers | **What API design would you like to have changed or added to the library? Why?**
My proposal is for schedulers to expose `convert_to_sample_prediction` and `convert_to_prediction_type` methods, which would do the following:
1. `convert_to_sample_prediction`: Converts from a given `prediction_type` to `sample_prediction` (e.g. $x_0$-prediction). This function would accept a `prediction_type` argument which defaults to `self.config.prediction_type`.
2. `convert_to_prediction_type`: Converts back from `sample_prediction` to the scheduler's `prediction_type`. This is intended to be the inverse function of `convert_to_sample_prediction`.
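For concreteness, the inverse relationship between the two proposed methods can be sketched on scalars for an epsilon-parameterized, sigma-scaled (Euler/EDM-style) scheduler. The function names mirror the proposal and real schedulers would of course operate on tensors and their internal schedules:

```python
def convert_to_sample_prediction(model_output, sample, sigma):
    # epsilon -> x0 under a sigma-scaled parameterization:
    # sample = x0 + sigma * eps  =>  x0 = sample - sigma * eps
    return sample - sigma * model_output


def convert_to_prediction_type(pred_original_sample, sample, sigma):
    # x0 -> epsilon, the inverse of the function above
    return (sample - pred_original_sample) / sigma


sample, eps, sigma = 1.5, 0.25, 4.0
x0 = convert_to_sample_prediction(eps, sample, sigma)
round_trip_eps = convert_to_prediction_type(x0, sample, sigma)
```

Round-tripping recovers the original prediction exactly, which is the contract the two methods would be expected to satisfy.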
The motivating use case I have in mind is to support guidance strategies such as [Adaptive Projected Guidance (APG)](https://arxiv.org/abs/2410.02416) and [Frequency-Decoupled Guidance (FDG)](https://arxiv.org/abs/2506.19713) which prefer to operate with sample / $x_0$-predictions. A code example will be given below.
The reason I think schedulers should expose these methods explicitly is that performing these operations depends on the scheduler's state and definition. For example, the prediction type conversion code in `EulerDiscreteScheduler` depends on the `self.sigmas` schedule:
https://github.com/huggingface/diffusers/blob/ba2ba9019f76fd96c532240ed07d3f98343e4041/src/diffusers/schedulers/scheduling_euler_discrete.py#L650-L663
As a possible alternative, code that uses a scheduler could instead try to infer the prediction type conversion logic from the presence of `alphas_cumprod` (for a DDPM-style conversion) or `sigmas` (for an EDM-style conversion) attributes. However, I think this is unreliable because a scheduler could use `alphas_cumprod` or `sigmas` in a non-standard way. Since schedulers essentially already implement the `convert_to_sample_prediction` logic in their `step` methods, I think it could be relatively easy to implement these methods, and calling code would not have to guess how to do the conversion.
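To make the two conventions concrete, here is the DDPM-style conversion in scalar form (a sketch with Python floats; `alpha_cumprod` stands in for an entry of `alphas_cumprod`, and the EDM-style counterpart is simply `x0 = sample - sigma * eps`):

```python
import math


def ddpm_x0_from_eps(sample, eps, alpha_cumprod):
    # DDPM-style parameterization: sample = sqrt(acp) * x0 + sqrt(1 - acp) * eps
    return (sample - math.sqrt(1.0 - alpha_cumprod) * eps) / math.sqrt(alpha_cumprod)


def ddpm_eps_from_x0(sample, x0, alpha_cumprod):
    # the inverse direction
    return (sample - math.sqrt(alpha_cumprod) * x0) / math.sqrt(1.0 - alpha_cumprod)


acp = 0.64
x0_true, eps_true = 2.0, 0.5
sample = math.sqrt(acp) * x0_true + math.sqrt(1.0 - acp) * eps_true
```

Since the two formulas differ, guessing the wrong convention from attribute names alone silently produces wrong conversions, which is why I think schedulers should own this logic.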
A potential difficulty is ensuring that these methods work well with the `step` method, for example if they are called outside of a denoising loop (so internal state like `self.step_index` may not be properly initialized) or if the conversion can be non-deterministic (for example, when `gamma > 0` in `EulerDiscreteScheduler`).
**What use case would this enable or better enable? Can you give us a code example?**
The motivating use case is to support guidance strategies which prefer to operate with $x_0$-predictions. For this use case, we want to convert the denoising model prediction to `sample_prediction`, run the guider's `__call__` logic, and then convert back to the scheduler's `prediction_type` (as schedulers currently expect `model_outputs` in that `prediction_type`).
There may be other potential use cases as well that I haven't thought of.
As a concrete example, we can imagine modifying `EulerDiscreteScheduler` as follows:
```python
class EulerDiscreteScheduler(SchedulerMixin, ConfigMixin):
    ...

    def convert_to_sample_prediction(
        self,
        model_output: torch.Tensor,
        timestep: Union[float, torch.Tensor],
        sample: torch.Tensor,
        prediction_type: Optional[str] = None,
        s_churn: float = 0.0,
        s_tmin: float = 0.0,
        s_tmax: float = float("inf"),
        s_noise: float = 1.0,
        generator: Optional[torch.Generator] = None,
    ) -> torch.Tensor:
        if prediction_type is None:
            prediction_type = self.config.prediction_type
        # NOTE: there's a potential catch here if self.step_index isn't properly initialized
        sigma = self.sigmas[self.step_index]
        gamma = min(s_churn / (len(self.sigmas) - 1), 2**0.5 - 1) if s_tmin <= sigma <= s_tmax else 0.0
        sigma_hat = sigma * (gamma + 1)
        # NOTE: another potential problem is ensuring consistent computation with `step` if the conversion
        # can be non-deterministic (as below)
        if gamma > 0:
            noise = randn_tensor(
                model_output.shape, dtype=model_output.dtype, device=model_output.device, generator=generator
            )
            eps = noise * s_noise
            sample = sample + eps * (sigma_hat**2 - sigma**2) ** 0.5
        # Compute predicted original sample (x_0) from sigma-scaled predicted noise
        # NOTE: "original_sample" should not be an expected prediction_type but is left in for
        # backwards compatibility
        if prediction_type == "original_sample" or prediction_type == "sample":
            pred_original_sample = model_output
        elif prediction_type == "epsilon":
            pred_original_sample = sample - sigma_hat * model_output
        elif prediction_type == "v_prediction":
            # denoised = model_output * c_out + input * c_skip
            pred_original_sample = model_output * (-sigma / (sigma**2 + 1) ** 0.5) + (sample / (sigma**2 + 1))
        else:
            raise ValueError(
                f"prediction_type given as {prediction_type} must be one of `epsilon`, `sample`, or `v_prediction`"
            )
        return pred_original_sample
```
| https://github.com/huggingface/diffusers/issues/12079 | open | [] | 2025-08-06T02:24:46Z | 2025-08-06T02:24:46Z | 0 | dg845 |
huggingface/candle | 3,047 | Can the safetensor files from OpenAI's new gpt-oss-20b work with any existing setup? | Is the new gpt-oss-20b a totally different architecture or can I use an existing candle setup, swap out the files and start playing around with gpt-oss-20b?
| https://github.com/huggingface/candle/issues/3047 | open | [] | 2025-08-06T01:59:59Z | 2025-08-06T02:01:52Z | 1 | zcourts |
huggingface/diffusers | 12,078 | Problem with provided example validation input in the Flux Control finetuning example | ### Describe the bug
The help page for the Flux control finetuning example, https://github.com/huggingface/diffusers/blob/main/examples/flux-control/README.md, provides a sample validation input, a pose condition image
[<img src="https://huggingface.co/api/resolve-cache/models/Adapter/t2iadapter/3c291e0547a1b17bed93428858cdc9b0265c26c7/openpose.png?%2FAdapter%2Ft2iadapter%2Fresolve%2Fmain%2Fopenpose.png=&etag=%2287cc79e12fe5a5bba31ac3098ee7837400b41ffa%22" width=256>]().
The pose-conditioned model trained by the script does not process this image properly because it is in BGR format, which is apparent when comparing it to the OpenPose spec:
[<img src="https://github.com/ArtificialShane/OpenPose/raw/master/doc/media/keypoints_pose.png" width=256>]().
It doesn't appear that the validation image's BGR channel order is handled when it is loaded in the line below:
https://github.com/huggingface/diffusers/blob/ba2ba9019f76fd96c532240ed07d3f98343e4041/examples/flux-control/train_control_lora_flux.py#L127.
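A possible fix, assuming the validation image really is BGR, is to reverse the channel order right after loading. Here is a sketch on plain nested lists (with OpenCV this would be `cv2.cvtColor(arr, cv2.COLOR_BGR2RGB)`, or `np.array(img)[..., ::-1]` with NumPy):

```python
def bgr_to_rgb(image):
    # reverse the channel order of each pixel in an H x W x 3 nested list
    return [[pixel[::-1] for pixel in row] for row in image]


bgr_image = [[(255, 0, 0), (0, 0, 255)]]  # pure blue, then pure red, in BGR order
rgb_image = bgr_to_rgb(bgr_image)
```

After the swap, the first pixel reads as blue and the second as red in RGB order, which is what the training script expects.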
In my personal experiments, the validation output does not make sense. Below is an example of what my run uploaded to wandb:
<img width="1310" height="698" alt="Image" src="https://github.com/user-attachments/assets/0edc3c88-cfa5-4fae-a6b1-295839136dba" />
### Reproduction
I ran the below in the command line:
```
accelerate launch --config_file=/mnt/localssd/huggingface/accelerate/deepspeed.yaml train_control_lora_flux.py \
--pretrained_model_name_or_path="black-forest-labs/FLUX.1-dev" \
--dataset_name="raulc0399/open_pose_controlnet" \
--output_dir="/mnt/localssd/pose-control-lora" \
--mixed_precision="bf16" \
--train_batch_size=1 \
--rank=64 \
--gradient_accumulation_steps=4 \
--gradient_checkpointing \
--use_8bit_adam \
--learning_rate=1e-4 \
--report_to="wandb" \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--max_train_steps=5000 \
--validation_image="openpose.png" \
--validation_prompt="A couple, 4k photo, highly detailed" \
--seed="0" \
--cache_dir="/mnt/localssd/huggingface"
```
### Logs
```shell
```
### System Info
```
- 🤗 Diffusers version: 0.34.0
- Platform: Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.8
- PyTorch version (GPU?): 2.7.1+cu126 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.34.3
- Transformers version: 4.54.1
- Accelerate version: 1.9.0
- PEFT version: 0.17.0
- Bitsandbytes version: 0.46.1
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
- Using GPU in script?: Yes.
- Using distributed or parallel set-up in script?: Yes.
```
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/12078 | open | [
"bug"
] | 2025-08-05T22:29:35Z | 2025-08-07T08:47:45Z | 1 | kzhang2 |
huggingface/lerobot | 1,672 | How to resume training? | My old training settings:
```
# batch_size: 64
steps: 20000
# output_dir: outputs/train
```
in outputs/train/ there are a 020000 folder and a last folder; each has pretrained_model and training_state
When I want to resume training, I read configs/train.py
so I set
```
resume: true
output_dir: outputs/train/
# or output_dir: outputs/train/checkpoints/last/pretrained_model/
# or output_dir: outputs/train/checkpoints/last/pretrained_model/train_config.json
```
All of these produced this error:
Traceback (most recent call last):
File "/miniconda3/envs/lerobot/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "miniconda3/envs/lerobot/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "//code/lerobot_diy/src/lerobot/scripts/train.py", line 394, in <module>
train()
File "/code/lerobot_diy/src/lerobot/configs/parser.py", line 225, in wrapper_inner
response = fn(cfg, *args, **kwargs)
File "/code/lerobot_diy/src/lerobot/scripts/train.py", line 215, in train
optimizer, lr_scheduler = make_optimizer_and_scheduler(cfg, policy)
File "//code/lerobot_diy/src/lerobot/optim/factory.py", line 38, in make_optimizer_and_scheduler
optimizer = cfg.optimizer.build(params)
AttributeError: 'NoneType' object has no attribute 'build'
How should I write the output dir in the resume command?
Thanks! | https://github.com/huggingface/lerobot/issues/1672 | closed | [] | 2025-08-05T14:57:32Z | 2025-08-06T03:04:28Z | null | milong26 |
huggingface/transformers | 39,921 | [Gemma3N] Not able to add new special tokens to model/tokenizer due to projection error | ### System Info
```
- transformers==4.54.1
- Platform: Linux-5.15.0-1084-aws-x86_64-with-glibc2.31
- Python version: 3.13
- TRL version: 0.19.1
- Huggingface_hub version: 0.33.4
- Safetensors version: 0.5.3
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H100 80GB HBM3
```
Hi,
The transformers model class for `gemma-3n` has the issues below (pasting the stacktrace):
```
trainer.train()
~~~~~~~~~~~~~^^
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/transformers/trainer.py", line 2237, in train
return inner_training_loop(
args=args,
...<2 lines>...
ignore_keys_for_eval=ignore_keys_for_eval,
)
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/transformers/trainer.py", line 2578, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/trl/trainer/sft_trainer.py", line 914, in training_step
return super().training_step(*args, **kwargs)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/transformers/trainer.py", line 3792, in training_step
loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/trl/trainer/sft_trainer.py", line 868, in compute_loss
(loss, outputs) = super().compute_loss(
~~~~~~~~~~~~~~~~~~~~^
model, inputs, return_outputs=True, num_items_in_batch=num_items_in_batch
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/transformers/trainer.py", line 3879, in compute_loss
outputs = model(**inputs)
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/accelerate/utils/operations.py", line 818, in forward
return model_forward(*args, **kwargs)
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/accelerate/utils/operations.py", line 806, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast
return func(*args, **kwargs)
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/peft/peft_model.py", line 1850, in forward
return self.base_model(
~~~~~~~~~~~~~~~^
input_ids=input_ids,
^^^^^^^^^^^^^^^^^^^^
...<6 lines>...
**kwargs,
^^^^^^^^^
)
^
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/peft/tuners/tuners_utils.py", line 222, in forward
return self.model.forward(*args, **kwargs)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/transformers/utils/generic.py", line 961, in wrapper
output = func(self, *args, **kwargs)
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/transformers/models/gemma3n/modeling_gemma3n.py", line 2276, in forward
outputs = self.model(
input_ids=input_ids,
...<14 lines>...
**lm_kwargs,
)
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/teamspace/studios/this_studio/.venv/lib/python3.13
```
| https://github.com/huggingface/transformers/issues/39921 | open | [
"Usage",
"Good Second Issue",
"bug"
] | 2025-08-05T14:43:37Z | 2025-08-19T19:37:39Z | 14 | debasisdwivedy |
huggingface/transformers | 39,910 | Question: Llama4 weight reshaping | Hi all
I am trying to extract the original Llama4 MoE weights, specifically:
- `experts.w1` (aka `experts.moe_w_in_eD_F`)
- `experts.w3` (aka `experts.moe_w_swiglu_eD_F`)
I need both of these in the shape `[E, D, N]`, where:
- E is the number of experts (16 for Scout)
- D is the embedding dimension (5120)
- N is the intermediate dimension (8192)
I tried just splitting `experts.gate_up_proj` in half along the last dimension to get w1 and w3, but although the dimensions match, the model is outputting nonsense, so I assume the actual order of the weights is wrong.
Could someone help me make sense of this snippet (from `convert_llama4_weights_to_hf`)?
Why is this hard-coded indexing/reshaping being done, and do you have any suggestions for how to get the original weights back?
```python
elif re.search(r"(gate|up)_proj", new_key):
path = new_key.split(".")
gate_key = re.sub(r"(gate|up)_proj", lambda m: "gate_proj", new_key)
up_key = re.sub(r"(gate|up)_proj", lambda m: "up_proj", new_key)
if gate_key == new_key:
state_dict[new_key] = torch.cat(current_parameter, dim=concat_dim)
elif new_key == up_key:
if "experts" not in new_key:
state_dict[new_key] = torch.cat(current_parameter, dim=concat_dim)
else:
# gate_proj = moe_w_in_eD_F = w1
gate_proj = state_dict.pop(gate_key)
gate_proj = [
gate_proj.reshape(num_experts, -1, 8, 1024)[:, :, k, :].reshape(num_experts, -1, 1024)
for k in range(8)
]
gate_proj = torch.cat(gate_proj, dim=-1)
# up_proj = moe_w_swiglu_eD_F = w3
up_proj = [
k.reshape(num_experts, -1, 8, 1024).reshape(num_experts, -1, 1024)
for k in current_parameter
]
up_proj = torch.cat(up_proj, dim=-1)
gate_up_proj = torch.cat((gate_proj, up_proj), dim=-1)
new_key = new_key.replace("up_proj", "gate_up_proj")
state_dict[new_key] = gate_up_proj.contiguous()
tqdm.write(f"Processing: {key.ljust(50)} ->\t {new_key}, {state_dict[new_key].shape}")
```
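One general way to undo a reshape/slice/concat shuffle like the one above, without reverse-engineering the indexing by hand, is to run the same shuffle over an index range and invert the resulting permutation (with tensors, `torch.arange` plays the role of the index list). A toy flat-list sketch of the idea, with illustrative dimensions rather than Llama4's real ones:

```python
def interleave(v, groups, block):
    # mimic a reshape(-1, groups, block) -> slice k -> concat shuffle on a flat list
    chunks = len(v) // (groups * block)
    out = []
    for k in range(groups):
        for c in range(chunks):
            start = (c * groups + k) * block
            out.extend(v[start:start + block])
    return out


n = 2 * 4 * 3  # toy sizes, not the real num_experts x D x (8 * 1024) layout
perm = interleave(list(range(n)), groups=4, block=3)
# perm[new_pos] == old_pos, so record where each old index landed to invert it
inverse = [0] * n
for new_pos, old_pos in enumerate(perm):
    inverse[old_pos] = new_pos
shuffled = interleave(list(range(100, 100 + n)), groups=4, block=3)
restored = [shuffled[inverse[i]] for i in range(n)]
```

Applying the recorded inverse permutation to `gate_up_proj`'s last dimension (after splitting off the w1/w3 halves) should recover the original ordering, assuming the conversion really is a pure permutation as the snippet suggests.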
Thank you! | https://github.com/huggingface/transformers/issues/39910 | closed | [] | 2025-08-05T10:19:25Z | 2025-08-13T09:35:52Z | 0 | gskorokhod |
huggingface/datasets | 7,724 | Cannot step into load_dataset.py? | I set a breakpoint in "load_dataset.py" and tried to debug my data-loading code, but it does not stop at any breakpoint. Can "load_dataset.py" not be stepped into?
| https://github.com/huggingface/datasets/issues/7724 | open | [] | 2025-08-05T09:28:51Z | 2025-08-05T09:28:51Z | 0 | micklexqg |
huggingface/lerobot | 1,670 | How does LeRobot address the issue of training heterogeneous datasets? | Specifically, suppose I have a dataset A and dataset B. In dataset A, both the state and action are represented as (x, y, z, gripper), where x, y, and z denote the distances moved along the x, y, and z axes, respectively, and gripper represents the on/off state of the gripper. In dataset B, both the state and action are the angles of the corresponding joints of the robotic arm. How can I use these two datasets together for training? | https://github.com/huggingface/lerobot/issues/1670 | open | [
"question",
"processor"
] | 2025-08-05T08:20:08Z | 2025-08-12T09:01:57Z | null | mahao18cm |
huggingface/lerobot | 1,667 | How many episodes to have a good result with SmolVLA | ### System Info
```Shell
Hello, I'm trying to do a simple task, a dual-hand pick of a banana into a basket, using SmolVLA. May I know how many episodes to train on for a good result?
Many thanks
Julien
```
### Reproduction
I've used 100 episodes for training; it looks like the arm cannot pick the banana accurately, and sometimes the arms just hover over the top of the banana
### Expected behavior
The left hand picks the banana and hands it to the right hand, then the right hand puts the banana into the basket. | https://github.com/huggingface/lerobot/issues/1667 | closed | [
"question",
"policies"
] | 2025-08-05T05:12:12Z | 2025-10-17T11:27:14Z | null | chejulien |
huggingface/lerobot | 1,666 | Please add multi gpu training support | MultiGPU training currently does not work with lerobot as mentioned here https://github.com/huggingface/lerobot/issues/1377
Please add this support. | https://github.com/huggingface/lerobot/issues/1666 | closed | [
"enhancement",
"question",
"policies"
] | 2025-08-04T18:06:40Z | 2025-10-17T09:53:59Z | null | nahidalam |
huggingface/lerobot | 1,663 | No way to train on subset of features | Currently, when loading a policy from a config.json, the input_features seem to be ignored and re-generated from the dataset provided. However, it may not always be desirable to train on all features, perhaps if I have multiple camera views but I only want to train on one.
I would prefer that config.json features are not overwritten, but this would be a breaking change. Do you have suggestions on how we could implement this behavior? | https://github.com/huggingface/lerobot/issues/1663 | open | [
"question",
"policies",
"processor"
] | 2025-08-04T15:19:35Z | 2025-08-12T09:03:47Z | null | atyshka |
huggingface/diffusers | 12,060 | Is there any DiT block defined in the huggingface/diffusers OR huggingface/transformers project? | **Is your feature request related to a problem? Please describe.**
I want to run some experiments with a DiT-based flow-matching model, and I need an implementation of the common DiT block, but I did not find one in either huggingface/diffusers or huggingface/transformers. Is there an implementation of it under a different file name?
**Describe the solution you'd like.**
A clear DiT implementation
**Describe alternatives you've considered.**
**Additional context.**
| https://github.com/huggingface/diffusers/issues/12060 | open | [] | 2025-08-04T09:40:43Z | 2025-08-04T10:19:00Z | 2 | JohnHerry |
huggingface/diffusers | 12,052 | Wan 2.2 with LightX2V offloading tries to multiply tensors from different devices and fails | ### Describe the bug
After @sayakpaul's great work in https://github.com/huggingface/diffusers/pull/12040, LightX2V now works. However, what doesn't work is adding both a LoRA and offloading to transformer_2. I can get away with either (i.e. offload both transformers but add a LoRA only to transformer and NOT to transformer_2, OR offload just transformer and add a LoRA to both transformer_2 and transformer).
However, offloading transformer_2 is quite important, since keeping it on the GPU doubles VRAM usage; even a Q4_K_S model with LightX2V will use >24 GB of VRAM (as opposed to <9 GB of VRAM as in ComfyUI).
### Reproduction
The script is the same as the one posted by Paul in the #12040 PR with the addition of offloading
```python
import torch
from diffusers import WanImageToVideoPipeline
from huggingface_hub import hf_hub_download
import requests
from PIL import Image
from diffusers.loaders.lora_conversion_utils import _convert_non_diffusers_wan_lora_to_diffusers
from io import BytesIO
import safetensors.torch
# Load a basic transformer model
pipe = WanImageToVideoPipeline.from_pretrained(
"Wan-AI/Wan2.2-I2V-A14B-Diffusers",
torch_dtype=torch.bfloat16
)
lora_path = hf_hub_download(
repo_id="Kijai/WanVideo_comfy",
filename="Lightx2v/lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16.safetensors"
)
# This is what is different (onload/offload devices defined here so the repro is self-contained)
onload_device = torch.device("cuda")
offload_device = torch.device("cpu")
pipe.vae.enable_group_offload(onload_device=onload_device, offload_device=offload_device, offload_type="leaf_level")
pipe.transformer.enable_group_offload(onload_device=onload_device, offload_device=offload_device, offload_type="leaf_level")
# Without this line it works but uses 2x the VRAM
pipe.transformer_2.enable_group_offload(onload_device=onload_device, offload_device=offload_device, offload_type="leaf_level")
pipe.text_encoder.enable_group_offload(onload_device=onload_device, offload_device=offload_device, offload_type="leaf_level")
pipe.to("cuda")
pipe.load_lora_weights(lora_path)
# print(pipe.transformer.__class__.__name__)
# print(pipe.transformer.peft_config)
org_state_dict = safetensors.torch.load_file(lora_path)
converted_state_dict = _convert_non_diffusers_wan_lora_to_diffusers(org_state_dict)
pipe.transformer_2.load_lora_adapter(converted_state_dict)
image_url = "https://cloud.inference.sh/u/4mg21r6ta37mpaz6ktzwtt8krr/01k1g7k73eebnrmzmc6h0bghq6.png"
response = requests.get(image_url)
input_image = Image.open(BytesIO(response.content)).convert("RGB")
frames = pipe(input_image, "animate", num_inference_steps=4, guidance_scale=1.0)
```
### Logs
```shell
[t+1m44s256ms] [ERROR] Traceback (most recent call last):
[t+1m44s256ms] File "/server/tasks.py", line 50, in run_task
[t+1m44s256ms] output = await result
[t+1m44s256ms] ^^^^^^^^^^^^
[t+1m44s256ms] File "/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/src/inference.py", line 424, in run
[t+1m44s256ms] output = self.pipe(
[t+1m44s256ms] ^^^^^^^^^^
[t+1m44s256ms] File "/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[t+1m44s256ms] return func(*args, **kwargs)
[t+1m44s256ms] ^^^^^^^^^^^^^^^^^^^^^
[t+1m44s256ms] File "/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/diffusers/pipelines/wan/pipeline_wan_i2v.py", line 754, in __call__
[t+1m44s256ms] noise_pred = current_model(
[t+1m44s256ms] ^^^^^^^^^^^^^^
[t+1m44s256ms] File "/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[t+1m44s256ms] return self._call_impl(*args, **kwargs)
[t+1m44s256ms] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[t+1m44s256ms] File "/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[t+1m44s256ms] return forward_call(*args, **kwargs)
[t+1m44s256ms] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[t+1m44s256ms] File "/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/diffusers/hooks/hooks.py", line 189, in new_forward
[t+1m44s256ms] output = function_reference.forward(*args, **kwargs)
[t+1m44s256ms] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[t+1m44s256ms] File "/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/diffusers/models/transformers/transformer_wan.py", line 639, in forward
[t+1m44s256ms] temb, timestep_proj, encoder_hidden_states, encoder_hidden_states_image = self.condition_embedder(
[t+1m44s256ms] ^^^^^^^^^^^^^^^^^^^^^^^^
[t+1m44s256ms] File "/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[t+1m44s256ms] return self._ | https://github.com/huggingface/diffusers/issues/12052 | closed | [
"bug"
] | 2025-08-03T12:43:13Z | 2025-08-11T15:53:41Z | 4 | luke14free |
huggingface/peft | 2,699 | UserWarning: Found missing adapter keys while loading the checkpoint | I have been fine-tuning different LLM models (mainly the Llama family) since last year and have used peft with a LoRA config the whole time with no issues.
Just recently I was fine-tuning Llama 70B on multiple GPUs using accelerate, then saving the adapter once training is done. (This has always been my setup since last year.)
However now I want to load the adapter into the base model as follows:
```
base_model = AutoModelForCausalLM.from_pretrained(model_id, dtype= torch.float16, device_map = 'auto', attn_implementation = 'flash_attention_2')
model = PeftModel.from_pretrained(base_model, adapter_path)
```
Now I am getting this warning:
```
UserWarning: Found missing adapter keys while loading the checkpoint:
```
Then it lists some LoRA weights. I tried changing LoraConfig parameters but the problem still persists.
Can anyone please tell me what the issue is here and how to fix it?
I am using the latest versions of peft, transformers, accelerate, and trl.
Note: I am also using the same format for model during the training and inference.
I have already looked at the issue below, which seems to be the same problem, but I load my model using AutoModelForCausalLM in both cases:
https://github.com/huggingface/peft/issues/2566
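For what it's worth, this kind of warning is often caused by a key-prefix mismatch between the saved adapter and the freshly wrapped model (for example, an extra `model.` level introduced by a wrapper). A toy set comparison (key names are illustrative, not taken from my checkpoint) shows how a single stray prefix makes every key count as missing:

```python
# keys the freshly wrapped PeftModel expects
expected_keys = {
    "base_model.model.layers.0.self_attn.q_proj.lora_A.weight",
    "base_model.model.layers.0.self_attn.q_proj.lora_B.weight",
}
# keys actually found in the checkpoint, with one extra "model." level
checkpoint_keys = {
    "base_model.model.model.layers.0.self_attn.q_proj.lora_A.weight",
    "base_model.model.model.layers.0.self_attn.q_proj.lora_B.weight",
}
missing = expected_keys - checkpoint_keys
```

Because no expected key matches, every adapter key is reported as missing even though the weights are all present under slightly different names.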
Note: This is the warning: `[base_model.model.model.layers.0.self_attn.q_proj.lora_A.default.weight, base_model.model.model.layers.0.self_attn.q_proj.lora_B.default.weight, base_model.model.model.layers.0.self_attn.k_proj.lora_A.default.weight, base_model.model.model.layers.0.self_attn.k_proj.lora_B.default.weight`, ... | https://github.com/huggingface/peft/issues/2699 | closed | [] | 2025-08-02T20:49:31Z | 2025-11-09T15:03:46Z | 41 | manitadayon |
huggingface/diffusers | 12,044 | AttributeError: 'bool' object has no attribute '__module__'. Did you mean: '__mod__'? | I am training the Flux.1-dev model and get this error. I found a suggestion to downgrade diffusers to version 0.21.0, but then it would conflict with some other libraries. Is there any solution for this?
```
Traceback (most recent call last):
File "/home/quyetnv/t2i/ai-toolkit/run.py", line 120, in <module>
main()
File "/home/quyetnv/t2i/ai-toolkit/run.py", line 108, in main
raise e
File "/home/quyetnv/t2i/ai-toolkit/run.py", line 96, in main
job.run()
File "/home/quyetnv/t2i/ai-toolkit/jobs/ExtensionJob.py", line 22, in run
process.run()
File "/home/quyetnv/t2i/ai-toolkit/jobs/process/BaseSDTrainProcess.py", line 1518, in run
self.sd.load_model()
File "/home/quyetnv/t2i/ai-toolkit/toolkit/stable_diffusion_model.py", line 788, in load_model
pipe: Pipe = Pipe(
File "/home/quyetnv/.venv/lib/python3.10/site-packages/diffusers/pipelines/flux/pipeline_flux.py", line 197, in __init__
self.register_modules(
File "/home/quyetnv/.venv/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 212, in register_modules
library, class_name = _fetch_class_library_tuple(module)
File "/home/quyetnv/.venv/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 877, in _fetch_class_library_tuple
library = not_compiled_module.__module__.split(".")[0]
AttributeError: 'bool' object has no attribute '__module__'. Did you mean: '__mod__'?
```
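The failing line can be reproduced in isolation: it assumes every registered pipeline component has a `__module__` attribute, which a plain bool (e.g. a flag accidentally passed where a component was expected) does not. A minimal sketch of the failure mode:

```python
def fetch_library(not_compiled_module):
    # mirrors the failing line in diffusers' _fetch_class_library_tuple
    return not_compiled_module.__module__.split(".")[0]


reproduced = False
try:
    fetch_library(True)  # a bool passed where a pipeline component was expected
except AttributeError:
    reproduced = True
```

This suggests the root cause is whatever ai-toolkit passes into the `Pipe(...)` constructor, one of those arguments is presumably a bool rather than a model/scheduler object.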
The diffusers version installed from ai-toolkit's requirements is 0.35.0.dev3. | https://github.com/huggingface/diffusers/issues/12044 | closed | [] | 2025-08-02T01:37:30Z | 2025-08-21T01:27:19Z | 3 | qngv |
huggingface/optimum | 2,333 | Support for exporting t5gemma-2b-2b-prefixlm-it to onnx | ### Feature request
I’ve tried to export t5gemma-2b-2b-prefixlm-it to onnx using optimum. But it outputs: ValueError: Trying to export a t5gemma model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type t5gemma to be supported natively in the ONNX export.
Task: "text2text-generation"
### Motivation
I’ve tried, but nothing works...
### Your contribution
config.json
{
"architectures": [
"T5GemmaForConditionalGeneration"
],
"classifier_dropout_rate": 0.0,
"decoder": {
"attention_bias": false,
"attention_dropout": 0.0,
"attn_logit_softcapping": 50.0,
"classifier_dropout_rate": 0.0,
"cross_attention_hidden_size": 2304,
"dropout_rate": 0.0,
"final_logit_softcapping": 30.0,
"head_dim": 256,
"hidden_activation": "gelu_pytorch_tanh",
"hidden_size": 2304,
"initializer_range": 0.02,
"intermediate_size": 9216,
"is_decoder": true,
"layer_types": [
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention"
],
"max_position_embeddings": 8192,
"model_type": "t5_gemma_module",
"num_attention_heads": 8,
"num_hidden_layers": 26,
"num_key_value_heads": 4,
"query_pre_attn_scalar": 256,
"rms_norm_eps": 1e-06,
"rope_theta": 10000.0,
"sliding_window": 4096,
"torch_dtype": "bfloat16",
"use_cache": true,
"vocab_size": 256000
},
"dropout_rate": 0.0,
"encoder": {
"attention_bias": false,
"attention_dropout": 0.0,
"attn_logit_softcapping": 50.0,
"classifier_dropout_rate": 0.0,
"dropout_rate": 0.0,
"final_logit_softcapping": 30.0,
"head_dim": 256,
"hidden_activation": "gelu_pytorch_tanh",
"hidden_size": 2304,
"initializer_range": 0.02,
"intermediate_size": 9216,
"layer_types": [
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"full_attention"
],
"max_position_embeddings": 8192,
"model_type": "t5_gemma_module",
"num_attention_heads": 8,
"num_hidden_layers": 26,
"num_key_value_heads": 4,
"query_pre_attn_scalar": 256,
"rms_norm_eps": 1e-06,
"rope_theta": 10000.0,
"sliding_window": 4096,
"torch_dtype": "bfloat16",
"use_cache": true,
"vocab_size": 256000
},
"eos_token_id": [
1,
107
],
"initializer_range": 0.02,
"is_encoder_decoder": true,
"model_type": "t5gemma",
"pad_token_id": 0,
"torch_dtype": "bfloat16",
"transformers_version": "4.53.0.dev0",
"use_cache": true
} | https://github.com/huggingface/optimum/issues/2333 | closed | ["Stale"] | 2025-08-01T16:39:52Z | 2026-01-03T02:51:13Z | 2 | botan-r
huggingface/transformers | 39,842 | Expected behavior of `compute_result` is hard to predict and inconsistent | In the Trainer there exists a parameter `compute_result` passed to `compute_metrics` when `batch_eval_metrics` is set to True.
https://github.com/huggingface/transformers/blob/1e0665a191f73f6b002209c3dfcda478baac6bac/src/transformers/trainer.py#L370-L375
I think there are several problems with `compute_result`:
1. Users can't tell (1) what happens when `batch_eval_metrics` is set, (2) what is passed as `compute_result` and when it changes between True and False, or (3) what HF's intention was in implementing `compute_metrics` with `compute_result`, since there are very few (only 3 lines of) instructions for this.
2. `compute_metrics` is sometimes called with `compute_result` and sometimes not, EVEN WHEN `batch_eval_metrics` is present. See the lines below.
https://github.com/huggingface/transformers/blob/1e0665a191f73f6b002209c3dfcda478baac6bac/src/transformers/trainer.py#L4534-L4547
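For reference, here is how I *guessed* a batch-wise `compute_metrics` is supposed to be written after reading the code: accumulate on every batch and only return the final dict when `compute_result=True`. The names and the reset-on-result behavior are my assumptions, not documented:

```python
# Sketch of a batch-wise compute_metrics for batch_eval_metrics=True.
# My assumption: the Trainer calls this once per eval batch and passes
# compute_result=True only on the last batch of an evaluation run.
class BatchAccuracy:
    def __init__(self):
        self.correct = 0
        self.total = 0

    def __call__(self, eval_pred, compute_result=False):
        preds, labels = eval_pred  # per-batch predictions and labels
        self.correct += sum(int(p == y) for p, y in zip(preds, labels))
        self.total += len(labels)
        if not compute_result:
            return {}  # intermediate batch: nothing to report yet
        result = {"accuracy": self.correct / self.total}
        self.correct = self.total = 0  # reset for the next evaluation run
        return result
```

If this is roughly the intended usage pattern, it would be great to have it spelled out in the docs.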
Creating this issue because I spent a long time figuring this out. | https://github.com/huggingface/transformers/issues/39842 | closed | [] | 2025-08-01T11:43:28Z | 2025-10-04T08:02:41Z | 3 | MilkClouds
huggingface/transformers | 39,841 | MistralCommonTokenizer does not match PreTrainedTokenizer | ### System Info
on docker
os: ubuntu 24.04
transformers: 4.55.0.dev0
mistral_common: 1.8.3
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Command to launch the container:
```bash
docker run --gpus all -p 8000:8000 --ipc=host vllm/vllm-openai:latest --model mistralai/Voxtral-Mini-3B-2507
```
### Expected behavior
The output ends with:
```bash
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/tokenizer_group.py", line 24, in __init__
vllm-1 | self.tokenizer = get_tokenizer(self.tokenizer_id, **tokenizer_config)
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/tokenizer.py", line 309, in get_tokenizer
vllm-1 | tokenizer = get_cached_tokenizer(tokenizer)
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/tokenizer.py", line 104, in get_cached_tokenizer
vllm-1 | tokenizer_all_special_tokens = tokenizer.all_special_tokens
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | AttributeError: 'MistralCommonTokenizer' object has no attribute 'all_special_tokens'. Did you mean: '_all_special_ids'?
```
vLLM docker server uses the pretrained tokenizer format:
https://github.com/vllm-project/vllm/blob/49314869887e169be080201ab8bcda14e745c080/vllm/transformers_utils/tokenizer.py#L97-L101
Which must include the `all_special_ids`, `all_special_tokens`, and `all_special_tokens_extended` default properties. However, MistralCommonTokenizer does not implement them. Is there a plan to standardize both tokenizers?
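As a stopgap on my side, I wrapped the tokenizer in a shim exposing the attributes vLLM reads. This is only a sketch: the property names come from the traceback above, and I haven't checked that the list is complete:

```python
# Hypothetical shim exposing PreTrainedTokenizer-style special-token
# properties on top of a tokenizer that only has a private ids list.
class SpecialTokensShim:
    def __init__(self, special_ids, id_to_token):
        self._all_special_ids = list(special_ids)
        self._id_to_token = dict(id_to_token)

    @property
    def all_special_ids(self):
        return list(self._all_special_ids)

    @property
    def all_special_tokens(self):
        return [self._id_to_token[i] for i in self._all_special_ids]

    @property
    def all_special_tokens_extended(self):
        # For this sketch I assume "extended" is the same list.
        return self.all_special_tokens
```

A proper fix in transformers would of course be preferable to wrapping like this.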
| https://github.com/huggingface/transformers/issues/39841 | closed | ["bug"] | 2025-08-01T09:16:24Z | 2025-11-23T08:03:33Z | 3 | Fhrozen
huggingface/transformers | 39,839 | pack_image_features RuntimeError when vision_feature_select_strategy="full" | ### System Info
transformers 4.54.0
### Who can help?
@zucchini-nlp
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers.models.llava_next import LlavaNextForConditionalGeneration, LlavaNextProcessor
from PIL import Image
import requests
import torch
model = LlavaNextForConditionalGeneration.from_pretrained(
"llava-hf/llava-v1.6-vicuna-7b-hf",
vision_feature_select_strategy="full",
torch_dtype=torch.float16,
device_map="auto",
)
processor = LlavaNextProcessor.from_pretrained("llava-hf/llava-v1.6-vicuna-7b-hf")
image = Image.open("/data/coco/train2017/000000000009.jpg")
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"
inputs = processor(images=image, text=prompt, truncation=True, return_tensors="pt", vision_feature_select_strategy = "full").to("cuda")
input_embeds = model(inputs.input_ids, pixel_values=inputs.pixel_values, image_sizes=inputs.image_sizes, vision_feature_select_strategy="full")
```
### Expected behavior
I encountered a bug when running to the line
`input_embeds = model(inputs.input_ids, pixel_values=inputs.pixel_values, image_sizes=inputs.image_sizes, vision_feature_select_strategy="full")`
I got:
```
in pack_image_features
image_feature = image_feature.view(num_patch_height, num_patch_width, height, width, -1)
RuntimeError: shape '[2, 2, 24, 24, -1]' is invalid for input of size 9453568
```
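Doing the arithmetic myself, the mismatch seems to come from the extra CLS token that the `"full"` strategy keeps, giving 577 tokens per patch instead of 576 = 24 * 24 (my own check, not from the docs):

```python
# With vision_feature_select_strategy="full" the CLS token is kept, so each
# of the 4 patches contributes 577 tokens instead of 24 * 24 = 576.
num_patches, hidden = 4, 4096
height = width = 24
total_full = num_patches * 577 * hidden
assert total_full == 9453568                          # the size in the error
assert total_full % (2 * 2 * height * width) != 0     # so .view(2, 2, 24, 24, -1) fails
total_default = num_patches * 576 * hidden
assert total_default % (2 * 2 * height * width) == 0  # "default" would reshape fine
```

So `pack_image_features` seems to assume the CLS token was already dropped.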
The shape of image_feature is [4, 577, 4096] at this point. How can I fix this? | https://github.com/huggingface/transformers/issues/39839 | closed | ["bug"] | 2025-08-01T07:55:40Z | 2025-09-08T08:02:56Z | 2 | llnnnnnn
huggingface/gsplat.js | 117 | How to generate a Mesh mesh? | I need a scene where Gaussian Splatting and Mesh are mixed, and I don't know if GSPLAT generates Mesh or not. | https://github.com/huggingface/gsplat.js/issues/117 | open | [] | 2025-08-01T03:29:22Z | 2025-08-01T03:29:22Z | null | ZXStudio |
huggingface/diffusers | 12,038 | Dataset structure for train_text_to_image_lora.py | Hello. I am trying to use the **train_text_to_image_lora.py** script following the instructions at https://github.com/huggingface/diffusers/tree/main/examples/text_to_image
I get errors about the dataset structure and don't know what the issue is on my side.
I have a folder **data** containing an **images** folder and a **csv** file.
C:/Users/XXX//data/
├── images/
│ ├── image1.jpg
│ ├── image2.jpg
│ └── ...
└── captions.csv
The **images** folder contains the images and the **csv** file contains two columns (image names and captions):
image, caption
image1.jpg, A dragon flying through fire
image2.jpg, A knight in shining armor
Please can you let me know how I should organize my dataset to be able to run the training.
| https://github.com/huggingface/diffusers/issues/12038 | open | [] | 2025-07-31T16:10:38Z | 2025-08-01T16:44:48Z | 1 | HripsimeS |
huggingface/lerobot | 1,632 | Are there plans to support distributed training? | [train.py](https://github.com/huggingface/lerobot/blob/main/src/lerobot/scripts/train.py) currently only supports single-GPU training. Is there a plan to support distributed training in the future? | https://github.com/huggingface/lerobot/issues/1632 | closed | ["question", "policies"] | 2025-07-31T03:31:46Z | 2025-10-17T12:10:40Z | null | Hukongtao
huggingface/candle | 3,039 | Request support for Qwen2.5-vl or Fast-VLM | I'm trying to call some image-to-text visual models using candle, if anyone knows how to use Qwen2.5-vl or Fast-VLM, can you share it? Appreciate | https://github.com/huggingface/candle/issues/3039 | open | [] | 2025-07-31T02:41:33Z | 2025-08-04T12:21:35Z | 1 | 826327700 |
huggingface/transformers | 39,801 | ValueError: This model does not support cache_implementation='static'. Please check the following issue: https://github.com/huggingface/transformers/issues/28981 | ### System Info
_prepare_cache_for_generation
raise ValueError(
ValueError: This model does not support cache_implementation='static'. Please check the following issue: https://github.com/huggingface/transformers/issues/28981
I got this error and I have no clue how to solve it. I tried different implementations from different people and I always get the same problem.
I used this code: https://mer.vin/2024/11/finetune-llama-3-2-vision-radiology-images/
import os
from unsloth import FastVisionModel
import torch
from datasets import load_dataset
from transformers import TextStreamer
from unsloth import is_bf16_supported
from unsloth.trainer import UnslothVisionDataCollator
from trl import SFTTrainer, SFTConfig
# 1. Load the model
model, tokenizer = FastVisionModel.from_pretrained(
"unsloth/Llama-3.2-11B-Vision-Instruct",
load_in_4bit = True,
use_gradient_checkpointing = "unsloth",
)
model = FastVisionModel.get_peft_model(
model,
finetune_vision_layers = True,
finetune_language_layers = True,
finetune_attention_modules = True,
finetune_mlp_modules = True,
r = 16,
lora_alpha = 16,
lora_dropout = 0,
bias = "none",
random_state = 3407,
use_rslora = False,
loftq_config = None,
)
# 2. Load the dataset
dataset = load_dataset("unsloth/Radiology_mini", split = "train")
instruction = "You are an expert radiographer. Describe accurately what you see in this image."
def convert_to_conversation(sample):
conversation = [
{ "role": "user",
"content" : [
{"type" : "text", "text" : instruction},
{"type" : "image", "image" : sample["image"]} ]
},
{ "role" : "assistant",
"content" : [
{"type" : "text", "text" : sample["caption"]} ]
},
]
return { "messages" : conversation }
pass
converted_dataset = [convert_to_conversation(sample) for sample in dataset]
# 3. Before training
FastVisionModel.for_inference(model)
image = dataset[0]["image"]
instruction = "You are an expert radiographer. Describe accurately what you see in this image."
messages = [
{"role": "user", "content": [
{"type": "image"},
{"type": "text", "text": instruction}
]}
]
input_text = tokenizer.apply_chat_template(messages, add_generation_prompt = True)
inputs = tokenizer(
image,
input_text,
add_special_tokens = False,
return_tensors = "pt",
).to("cuda")
print("\nBefore training:\n")
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128,
use_cache = True, temperature = 1.5, min_p = 0.1)
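One thing I considered trying (just a guess on my part, I don't know if it is the right fix) was clearing the static-cache request on the generation config before calling generate, since the error says the model doesn't support `cache_implementation='static'`:

```python
# Hypothetical workaround: drop cache_implementation="static" so generate()
# falls back to the default cache for models that don't support static cache.
def clear_static_cache(generation_config):
    if getattr(generation_config, "cache_implementation", None) == "static":
        generation_config.cache_implementation = None
    return generation_config
```

I have not verified whether this affects generation quality or speed.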
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
pip install unsloth
export HF_TOKEN=xxxxxxxxxxxxx
### Expected behavior
Start fine-tuning | https://github.com/huggingface/transformers/issues/39801 | closed | ["bug"] | 2025-07-30T20:59:45Z | 2025-09-07T08:02:42Z | 2 | jpitalopez
huggingface/lerobot | 1,631 | 🥚 Filtering Eggs on Moving Table: Dirt/Breakage Detection Feasibility | Hi 👋
Thanks a lot for your work on lerobot!
I am exploring the use of lerobot to filter eggs based on dirt or breakage while they move past the robot on a conveyor table. The goal is to detect anomalies in real time and eventually eject faulty eggs.
Some specific questions I have:
* Do you have any advice or feedback on using lerobot in this kind of setup?
* Are there known pros/cons with fast-moving objects and image-based anomaly detection?
* Would it make sense to multiply robots along the line (e.g., several cameras/models at different angles or points)?
* Is there support or a best practice for triggering actions (e.g. pneumatic ejection) once a faulty egg is detected?
I am happy to fine-tune a model or adapt an existing one if that’s viable.
Any insights would be super helpful 🙏
Thanks again! | https://github.com/huggingface/lerobot/issues/1631 | open | ["question", "policies"] | 2025-07-30T18:35:12Z | 2025-08-12T09:07:41Z | null | KannarFr
huggingface/optimum | 2,330 | Patch Release to support `transformers~=4.53` | ### System Info
```shell
optimum[onnxruntime-gpu]==1.26.1
torch==2.7.1
vllm==0.10.0
docker run --rm -it --platform linux/amd64 ghcr.io/astral-sh/uv:debian bash
```
### Who can help?
@JingyaHuang @echarlaix
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
The latest release is more than 1 month old. It supports `transformers>=4.36,<4.53.0` with the `onnxruntime-gpu` extra. This is incompatible with `vllm==0.10.0`, which requires `transformers>=4.53.2`. `vllm==0.10.0` is required for use with `torch==2.7.1`, and my system must use `torch==2.7.1` because of the medium-severity CVE in previous versions.
https://nvd.nist.gov/vuln/detail/CVE-2025-2953
In the current main branch, the requirement has been changed to `transformers>=4.36,<4.54.0`, which would mitigate the issue.
Is it possible to create a patch release based on the current main branch?
```bash
> uv pip compile <(echo "optimum[onnxruntime-gpu]>=1.23"; echo "vllm>=0.10")
x No solution found when resolving dependencies:
`-> Because only the following versions of optimum[onnxruntime-gpu] are available:
optimum[onnxruntime-gpu]<=1.23.0
optimum[onnxruntime-gpu]==1.23.1
optimum[onnxruntime-gpu]==1.23.2
optimum[onnxruntime-gpu]==1.23.3
optimum[onnxruntime-gpu]==1.24.0
optimum[onnxruntime-gpu]==1.25.0
optimum[onnxruntime-gpu]==1.25.1
optimum[onnxruntime-gpu]==1.25.2
optimum[onnxruntime-gpu]==1.25.3
optimum[onnxruntime-gpu]==1.26.0
optimum[onnxruntime-gpu]==1.26.1
and optimum[onnxruntime-gpu]>=1.23.0,<=1.23.2 depends on transformers<4.46.0, we can conclude that optimum[onnxruntime-gpu]>=1.23.0,<1.23.1
depends on transformers<4.46.0.
And because optimum[onnxruntime-gpu]>=1.23.1,<=1.23.2 depends on transformers<4.46.0 and transformers<4.46.0, we can conclude that
optimum[onnxruntime-gpu]>=1.23.0,<1.23.3 depends on transformers<4.46.0.
And because optimum[onnxruntime-gpu]==1.23.3 depends on transformers<4.47.0 and transformers>=4.36,<4.49.0, we can conclude that
optimum[onnxruntime-gpu]>=1.23.0,<1.25.0 depends on transformers<4.49.0.
And because optimum[onnxruntime-gpu]>=1.25.0,<=1.25.3 depends on transformers>=4.36,<4.52.0 and transformers>=4.36,<4.52.0, we can conclude that
optimum[onnxruntime-gpu]>=1.23.0,<1.25.2 depends on transformers<4.52.0.
And because optimum[onnxruntime-gpu]>=1.25.2,<=1.25.3 depends on transformers>=4.36,<4.52.0 and transformers>=4.36,<4.52.0, we can conclude that
optimum[onnxruntime-gpu]>=1.23.0,<1.26.0 depends on transformers<4.52.0.
And because optimum[onnxruntime-gpu]>=1.26.0 depends on transformers>=4.36,<4.53.0 and transformers>=4.36,<4.53.0, we can conclude that
optimum[onnxruntime-gpu]>=1.23.0 depends on transformers<4.53.0.
And because vllm==0.10.0 depends on transformers>=4.53.2 and only vllm<=0.10.0 is available, we can conclude that vllm>=0.10.0 and
optimum[onnxruntime-gpu]>=1.23.0 are incompatible.
And because you require optimum[onnxruntime-gpu]>=1.23 and vllm>=0.10.0, we can conclude that your requirements are unsatisfiable.
```
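In short, the released pins and vllm's requirement have an empty intersection (a quick tuple check of the versions quoted above, my own sketch):

```python
# The published optimum releases pin transformers < 4.53.0 while vllm 0.10.0
# needs >= 4.53.2; the main-branch pin < 4.54.0 would resolve this.
def ranges_overlap(upper_exclusive, lower_inclusive):
    # non-empty iff some version is >= lower_inclusive and < upper_exclusive
    return tuple(lower_inclusive) < tuple(upper_exclusive)

assert not ranges_overlap((4, 53, 0), (4, 53, 2))  # released pins: unsatisfiable
assert ranges_overlap((4, 54, 0), (4, 53, 2))      # main-branch pin: satisfiable
```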
### Expected behavior
Able to install `optimum[onnxruntime-gpu]>=1.26` and `vllm>=0.10.0`.
```bash
> uv pip compile <(echo "optimum[onnxruntime-gpu] @ git+https://github.com/huggingface/optimum@689c0b5d38aabe265ab1eb334a6ca5bc3ca3574d"; echo "vllm>=0.10")
Resolved 152 packages in 359ms
# This file was autogenerated by uv via the following command:
# uv pip compile /dev/fd/63
aiohappyeyeballs==2.6.1
# via aiohttp
aiohttp==3.12.15
# via
# fsspec
# vllm
aiosignal==1.4.0
# via aiohttp
annotated-types==0.7.0
# via pydantic
anyio==4.9.0
# via
# httpx
# openai
# starlette
# watchfiles
astor==0.8.1
# via depyf
attrs==25.3.0
# via
# aiohttp
# jsonschema
# referencing
blake3==1.0.5
# via vllm
cachetools==6.1.0
# via vllm
cbor2==5.6.5
# via vllm
certifi==2025.7.14
# via
# httpcore
# httpx
# requests
# sentry-sdk
cffi==1.17.1
# via soundfile
charset-normalizer==3.4.2
# via requests
click==8.2.1
# via
# ray
# rich-toolkit
# typer
# uvicorn
cloudpickle==3.1.1
# via vllm
coloredlogs==15.0.1
# via onnxruntime-gpu
compressed-tensors==0.10.2
# via vllm
cupy-cuda12x==13.5.1
# via ray
datasets==4.0.0
# via optimum
depyf==0.19.0
# via vllm
dill==0.3.8
# via
# datasets
# depyf
# multiprocess
diskcache==5.6.3
# via vllm
distro==1.9.0
# via openai
dnspython==2.7.0
# via email-validator
einops==0.8.1
# via vllm
email-validator==2.2.0
# via
# fastapi
# pydantic | https://github.com/huggingface/optimum/issues/2330 | closed | ["bug"] | 2025-07-30T02:40:41Z | 2025-07-31T02:54:31Z | 1 | yxtay
huggingface/lerobot | 1,622 | Why is LeRobot’s policy ignoring additional camera streams despite custom `input_features`? | I'm training a SO101 arm policy with 3 video streams (`front`, `above`, `gripper`) and a state vector. The dataset can be found at this [link](https://huggingface.co/datasets/aaron-ser/SO101-Dataset/tree/main).
I created a custom JSON config (the `train_config.json` below) that explicitly lists the three visual streams under `policy.input_features`, yet despite disabling the preset config loading with `"use_policy_training_preset": false`, the policy never takes into account any feed other than the front camera. Disabling the preset is not mandatory, however, as previous hackathons with multiple streams, such as the [following](https://huggingface.co/LeRobot-worldwide-hackathon/91-AM-PM-smolvla-pouring-liquid/blob/main/train_config.json), used the preset config.
I pass into `lerobot.scripts.train` the `train_config.json` file shared below with the `--config_path` parameter. Although the initial printout of the config is correct with all three streams, after training finishes, the saved `train_config.json` file inside `aaron-ser/SO101-Model` only contains:
**aaron-ser/SO101-Model train_config.json snippet**
```
"input_features": {
"observation.state": { ... },
"observation.images.front": { ... },
"output_features": { ... }
```
It drops the `above` and `gripper` streams, even though the HF dataset includes all three streams and I explicitly passed them in the JSON file.
What internal step or configuration is overriding my custom `input_features` and keeping only the front camera? How can I ensure LeRobot trains on all provided video streams?
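For what it's worth, this is the small check I used to confirm which streams were dropped from the saved config (the keys match the config structure below):

```python
# Helper I used to diff the streams I requested against what the saved
# train_config.json actually contains under policy.input_features.
import json

def missing_streams(config_path, expected):
    with open(config_path) as f:
        cfg = json.load(f)
    present = set(cfg["policy"]["input_features"])
    return sorted(set(expected) - present)
```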
**train_config.json**
```
{
"dataset": {
"repo_id": "aaron-ser/SO101-Dataset",
"root": null,
"episodes": null,
"image_transforms": {
"enable": false,
"max_num_transforms": 3,
"random_order": false,
"tfs": {
"brightness": {
"weight": 1.0,
"type": "ColorJitter",
"kwargs": {
"brightness": [
0.8,
1.2
]
}
},
"contrast": {
"weight": 1.0,
"type": "ColorJitter",
"kwargs": {
"contrast": [
0.8,
1.2
]
}
},
"saturation": {
"weight": 1.0,
"type": "ColorJitter",
"kwargs": {
"saturation": [
0.5,
1.5
]
}
},
"hue": {
"weight": 1.0,
"type": "ColorJitter",
"kwargs": {
"hue": [
-0.05,
0.05
]
}
},
"sharpness": {
"weight": 1.0,
"type": "SharpnessJitter",
"kwargs": {
"sharpness": [
0.5,
1.5
]
}
}
}
},
"revision": null,
"use_imagenet_stats": true,
"video_backend": "torchcodec"
},
"env": null,
"policy": {
"type": "act",
"n_obs_steps": 1,
"normalization_mapping": {
"VISUAL": "MEAN_STD",
"STATE": "MEAN_STD",
"ACTION": "MEAN_STD"
},
"input_features": {
"observation.state": {
"type": "STATE",
"shape": [
6
]
},
"observation.images.front": {
"type": "VISUAL",
"shape": [
3,
720,
1280
]
},
"observation.images.above": {
"type": "VISUAL",
"shape": [
3,
720,
1280
]
},
"observation.images.gripper": {
"type": "VISUAL",
"shape": [
3,
720,
1280
]
}
},
"output_features": {
"action": {
"type": "ACTION",
"shape": [
6
]
}
},
"device": "cuda",
"use_amp": false,
"push_to_hub": true,
"repo_id": "aaron-ser/SO101-Model",
"private": null,
"tags": null,
"license": null,
| https://github.com/huggingface/lerobot/issues/1622 | open | ["question", "policies"] | 2025-07-29T14:07:14Z | 2025-09-23T14:01:54Z | null | Aaron-Serpilin
huggingface/trl | 3,797 | How to view the training parameters after training is completed | How to view the training parameters after training is completed? I am using GRPOTrainer for training, but after training multiple times, I have forgotten the parameters I set. How can I view the saved training parameters? | https://github.com/huggingface/trl/issues/3797 | open | ["❓ question", "🏋 GRPO"] | 2025-07-29T09:42:52Z | 2025-07-29T13:07:50Z | null | Tuziking
huggingface/optimum | 2,329 | Support for exporting paligemma to onnx | ### Feature request
I’ve tried to export google/paligemma-3b-mix-224 to onnx using optimum. But it outputs: "ValueError: Trying to export a paligemma model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as custom_onnx_configs. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an
issue at https://github.com/huggingface/optimum/issues if you would like the model type paligemma to be supported natively in the ONNX export."
### Motivation
I’ve tried everything but nothing works =(
(Using custom configs, using torch.onnx.export, etc)
### Your contribution
Actually, it seems to me that I can’t help… =( | https://github.com/huggingface/optimum/issues/2329 | closed | ["Stale"] | 2025-07-29T08:58:41Z | 2025-09-06T02:04:25Z | 2 | DashaMed555
huggingface/transformers | 39,744 | _supports_static_cache disappear | ### System Info
transformers main branch
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I see that the attribute `_supports_static_cache` has disappeared from the model. I used to check `model._supports_static_cache` before setting `cache_implementation="static"`.
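What I do for now is the following guess: assume a missing attribute means the model supports static cache on recent versions (this is my own assumption, not documented):

```python
# Guessed compatibility check: fall back to "supported" when the private
# flag no longer exists. Not an official API.
def supports_static_cache(model):
    flag = getattr(model, "_supports_static_cache", None)
    return True if flag is None else bool(flag)
```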
### Expected behavior
Can I now assume that all models support static cache, since `_supports_static_cache` is deprecated? Or is there another way to check whether a model supports static cache? | https://github.com/huggingface/transformers/issues/39744 | closed | ["bug"] | 2025-07-29T02:36:04Z | 2025-07-29T08:17:00Z | 4 | jiqing-feng
huggingface/lerobot | 1,607 | how to control a so-101 with trained ACT model? | https://huggingface.co/initie/test_pick_result
This is my ACT model, trained to grab the switch on the desk.
How do I run this policy model from Anaconda?
I already tried the following example:
python -m lerobot.record --robot.type=so101_follower
--robot.port=COM3
--robot.id=ammd_follower_arm
--robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}, side: {type: opencv, index_or_path: 1, width: 640, height: 480, fps: 30} }"
--display_data=True
--dataset.repo_id="initie/eval_test_pick"
--dataset.single_task="Grab the switch"
--policy.path=initie/test_pick_result
--teleop.type=so101_leader --teleop.port=COM5
--teleop.id=ammd_leader_arm --dataset.reset_time_s=5
This is the example code from the LeRobot tutorial, but when I run it, I have to record 10 episodes again.
I just want to run a pre-trained model, not record episodes again. I'm looking for a simple script that only "runs" that model, without any recording. | https://github.com/huggingface/lerobot/issues/1607 | open | ["question", "policies"] | 2025-07-28T05:23:24Z | 2025-10-15T03:28:50Z | null | initia1013
huggingface/lerobot | 1,602 | How to perform multi-GPU training for SMoVLA? | I noticed that the paper used 4 GPUs for pretraining, but the current training code doesn’t seem to support it. Could you provide the corresponding code? | https://github.com/huggingface/lerobot/issues/1602 | closed | [] | 2025-07-27T09:46:04Z | 2025-07-28T08:40:01Z | null | QZepHyr |
huggingface/hmtl | 72 | How to create a website | https://github.com/huggingface/hmtl/issues/72 | open | [] | 2025-07-27T09:30:22Z | 2025-07-27T09:30:22Z | null | Chi23-ike |
Hello,
I'm experiencing significant issues when trying to use Text Generation Inference (TGI) with TensorRT-LLM as the backend.
**Problem 1: Version Compatibility**
I cannot use the latest version of TGI due to a known bug (see: https://github.com/huggingface/text-generation-inference/issues/3296).
I'm therefore using version: `ghcr.io/huggingface/text-generation-inference:3.3.4-trtllm`
However, this version uses TensorRT-LLM v0.17.0.post1, while the latest optimum-nvidia version ([v0.1.0b9](https://github.com/huggingface/optimum-nvidia/releases/tag/v0.1.0b9)) uses TensorRT-LLM 0.16.0.
When I try to launch TGI with my engine built using optimum-nvidia, I get the following error:
```
root@5ddf177112d7:/usr/local/tgi/bin# /usr/local/tgi/bin/text-generation-launcher --model-id "/engines/llama-3.2-3b-instruct-optimum/GPU/engines" --tokenizer-name "/models/llama-3.2-3b-instruct" --executor-worker "/usr/local/tgi/bin/executorWorker"
2025-07-27T06:16:40.717109Z INFO text_generation_backends_trtllm: backends/trtllm/src/main.rs:293: Successfully retrieved tokenizer /models/llama-3.2-3b-instruct
[2025-07-27 06:16:40.717] [info] [ffi.hpp:164] Initializing TGI - TensoRT-LLM Backend (v0.17.0.post1)
[2025-07-27 06:16:40.747] [info] [ffi.hpp:173] [FFI] Detected 1 Nvidia GPU(s)
[2025-07-27 06:16:40.758] [info] [backend.cpp:22] Detected single engine deployment, using leader mode
[TensorRT-LLM][INFO] Engine version 0.16.0 found in the config file, assuming engine(s) built by new builder API.
[TensorRT-LLM][INFO] Initializing MPI with thread mode 3
[TensorRT-LLM][INFO] Initialized MPI
[TensorRT-LLM][INFO] Refreshed the MPI local session
[TensorRT-LLM][INFO] MPI size: 1, MPI local size: 1, rank: 0
[TensorRT-LLM][INFO] Rank 0 is using GPU 0
[TensorRT-LLM][INFO] TRTGptModel maxNumSequences: 64
[TensorRT-LLM][INFO] TRTGptModel maxBatchSize: 64
[TensorRT-LLM][INFO] TRTGptModel maxBeamWidth: 1
[TensorRT-LLM][INFO] TRTGptModel maxSequenceLen: 4096
[TensorRT-LLM][INFO] TRTGptModel maxDraftLen: 0
[TensorRT-LLM][INFO] TRTGptModel mMaxAttentionWindowSize: (4096) * 28
[TensorRT-LLM][INFO] TRTGptModel enableTrtOverlap: 0
[TensorRT-LLM][INFO] TRTGptModel normalizeLogProbs: 1
[TensorRT-LLM][INFO] TRTGptModel maxNumTokens: 262144
[TensorRT-LLM][INFO] TRTGptModel maxInputLen: 4095 = maxSequenceLen - 1 since chunked context is enabled
[TensorRT-LLM][INFO] TRTGptModel If model type is encoder, maxInputLen would be reset in trtEncoderModel to maxInputLen: 4096 = maxSequenceLen.
[TensorRT-LLM][INFO] Capacity Scheduler Policy: MAX_UTILIZATION
[TensorRT-LLM][INFO] Context Chunking Scheduler Policy: None
[TensorRT-LLM][INFO] Loaded engine size: 6981 MiB
[TensorRT-LLM][ERROR] IRuntime::deserializeCudaEngine: Error Code 6: API Usage Error (The engine plan file is not compatible with this version of TensorRT, expecting library version 10.8.0.43 got
..)
Error: Runtime("[TensorRT-LLM][ERROR] Assertion failed: Failed to deserialize cuda engine. (/usr/src/text-generation-inference/target/release/build/text-generation-backends-trtllm-479f10d4b58ebb37/out/build/_deps/trtllm-src/cpp/tensorrt_llm/runtime/tllmRuntime.cpp:239)")
```
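To catch this earlier, I now read the version recorded in the engine's config before launching. The key name is my guess, based on the "Engine version 0.16.0 found in the config file" log line above:

```python
# Sketch: read the builder version an engine recorded in its config.json
# so a TensorRT-LLM mismatch can be spotted before starting TGI.
import json

def engine_builder_version(config_path):
    with open(config_path) as f:
        cfg = json.load(f)
    return cfg.get("version", "unknown")
```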
**Problem 2: Building Engine with trtllm-build**
I attempted to build my engine directly using `trtllm-build`, but when launching TGI, I encounter this error:
```
2025-07-27T06:15:55.033318Z INFO text_generation_backends_trtllm: backends/trtllm/src/main.rs:293: Successfully retrieved tokenizer /models/llama-3.2-3b-instruct
[2025-07-27 06:15:55.034] [info] [ffi.hpp:164] Initializing TGI - TensoRT-LLM Backend (v0.17.0.post1)
[2025-07-27 06:15:55.101] [info] [ffi.hpp:173] [FFI] Detected 1 Nvidia GPU(s)
terminate called after throwing an instance of 'nlohmann::json_abi_v3_11_3::detail::parse_error'
what(): [json.exception.parse_error.101] parse error at line 1, column 1: attempting to parse an empty input; check that your input string or stream contains the expected JSON
```
The error suggests it is parsing an empty JSON input, yet the `config.json` file is present in the engine directory:
```bash
root@5ddf177112d7:/usr/local/tgi/bin# ls -l /engines/llama-3.2-3b-instruct/
total 3033324
-rw-r--r-- 1 root root 7848 Jul 26 17:21 config.json
-rw-r--r-- 1 root root 3106108276 Jul 26 17:21 rank0.engine
```
**Environment:**
- Model: llama-3.2-3b-instruct
- TGI Version: 3.3.4-trtllm
- TensorRT-LLM Version: v0.17.0.post1
Could you please help resolve these compatibility issues or provide guidance on the correct workflow for using TensorRT-LLM with TGI?
### Information
- [x] Docker
- [ ] The CLI directly
### Tasks
- [x] An officially supported command
- [ ] My own modifications
### Reproduction
**1/ Build your engine :**
`docker run --rm -it --gpus=1 --shm-size=1g -v "/home/jyce/unmute.mcp/volumes/llm-tgi/engines:/engines" -v "/home/jyce/unmute.mcp/volumes/llm-tgi/models:/models" huggingface/optimum-nvidia:v0.1.0b8-py310 bash
`
```
optimum-cli export trtllm \
--tp=1 \
--pp=1 \
--max-batch-size | https://github.com/huggingface/text-generation-inference/issues/3304 | open | [] | 2025-07-27T06:24:29Z | 2025-10-06T09:56:29Z | 4 | psykokwak-com |
huggingface/transformers | 39,705 | [i18n-<bn>] Translating docs to <Bengali> | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the Bengali-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [x] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through)
- [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md).
## Tutorial section
- [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md)
- [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/autoclass_tutorial.md)
- [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md)
- [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md)
- [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md)
- [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md)
- [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md)
<!--
Keep on adding more as you go 🔥
-->
| https://github.com/huggingface/transformers/issues/39705 | open | [
"WIP"
] | 2025-07-27T06:18:20Z | 2025-07-27T11:58:32Z | 1 | ankitdutta428 |
huggingface/transformers | 39,699 | No flag to support Conditional Parameter Loading for gemma-3n-E2B models in transformer | ### System Info
Hi,
While a lot has been mentioned about conditional parameter loading and reduced memory usage for gemma-3n-E2B and gemma-3n-E4B,
There is no configuration currently visible in transformers for supporting that.
Is it possible to get the related configuration/code/documentation to make this work and obtain an actually lower-memory model?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText
GEMMA_MODEL_ID = "google/gemma-3n-E2B-it"
print("Loading processor")
processor = AutoProcessor.from_pretrained(GEMMA_MODEL_ID)
print("Loading model")
model = AutoModelForImageTextToText.from_pretrained(
GEMMA_MODEL_ID, torch_dtype="auto", device_map=None).to("cpu")
There is no flag for doing Conditional parameter Loading or PLE
### Expected behavior
Some flag using which Conditional Parameter Loading can be enabled and save on the memory | https://github.com/huggingface/transformers/issues/39699 | closed | [
"bug"
] | 2025-07-26T18:08:00Z | 2025-09-03T08:02:58Z | 2 | aakashgaur01 |
huggingface/tokenizers | 1,835 | Can you provide binary releases? | It seems that binaries are not available in recent versions.
tokenizers module is essential for the latest models, and it would be preferable if it could be easily installed.
Setting up a Rust compilation environment can be cumbersome, and it's almost impossible to do so offline.
Could we possibly distribute something in binary form via PyPI or here? | https://github.com/huggingface/tokenizers/issues/1835 | closed | [] | 2025-07-26T16:07:12Z | 2025-09-08T13:49:52Z | 4 | goldenmomonga |
huggingface/lerobot | 1,599 | Evaluation results of VLA models on MetaWorld Benchmark | Thank you for this excellent work! I noticed that the paper mentions evaluation results of VLA models on MetaWorld. However, in the original papers for Octo and π₀, results are only reported on the LIBERO benchmark, and I haven’t found their MetaWorld evaluations in other related studies. I’d like to know how Octo and π₀ were specifically evaluated on MetaWorld in this work, including implementation details (e.g., for π₀, was it full finetune or only fine-tuning the action expert?). Additionally, the MetaWorld MT50 dataset on LeRobot appears to lack data for one task—is this the real data used for fine-tuning VLAs? | https://github.com/huggingface/lerobot/issues/1599 | open | [
"enhancement",
"question",
"policies",
"simulation"
] | 2025-07-26T11:18:54Z | 2025-08-12T09:17:44Z | null | Zooy138 |
huggingface/transformers | 39,686 | CRITICAL ISSUE REPORT! GEMMA 3 1B CANNOT RUN! | How to reproduce:
Run this:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the base model in FP16
base_model = AutoModelForCausalLM.from_pretrained(
"unsloth/gemma-3-1b-pt",
low_cpu_mem_usage=True,
return_dict=True,
torch_dtype=torch.float16,
device_map="mps",
)
# Load and configure the tokenizer
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-3-1b-pt", trust_remote_code=True)
# Generate the text
prompt = "<bos>Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = base_model.generate(inputs.input_ids, max_length=50)
# Decode the generated text
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
Error:
```
(yuna) yuki@yuki AI % python gener.py
k_out_updated = k_out_shifted.index_copy(2, update_position, key_states)
Traceback (most recent call last):
File "/Users/yuki/Documents/AI/gener.py", line 19, in <module>
outputs = base_model.generate(inputs.input_ids, max_length=50)
File "/opt/anaconda3/envs/yuna/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/opt/anaconda3/envs/yuna/lib/python3.10/site-packages/transformers/generation/utils.py", line 2623, in generate
result = self._sample(
File "/opt/anaconda3/envs/yuna/lib/python3.10/site-packages/transformers/generation/utils.py", line 3649, in _sample
next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
```
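(For what it's worth, this class of `inf`/`nan` probability errors is often a float16 overflow, since fp16 tops out at 65504. I'm not certain that's the cause here, but a quick stdlib illustration of the range limit:)

```python
import struct

# float16 ("e" struct format) can hold at most 65504; larger magnitudes
# overflow, which is one common way fp16 logits turn into inf/nan downstream.
def fits_in_float16(x: float) -> bool:
    try:
        struct.pack("e", x)
        return True
    except OverflowError:
        return False

print(fits_in_float16(65504.0))   # largest finite float16 value
print(fits_in_float16(70000.0))   # overflows the float16 range
```

Loading in bfloat16 or float32 instead of float16 is the usual first thing to try.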
System: macOS Tahoe, MacBook Pro M1 with 16 GB of RAM | https://github.com/huggingface/transformers/issues/39686 | closed | [] | 2025-07-26T00:22:27Z | 2025-07-28T12:07:50Z | 5 | yukiarimo |
huggingface/lerobot | 1,592 | Time spent on imitation learning training (ACT) | I use Colab to train a policy with the ACT model.
The note said, "Training with the ACT policy for 100,000 steps typically takes about 1.5 hours on an NVIDIA A100 GPU," and I used an A100 in Colab too.
However, the estimated time shown is 13 hours, which seems much longer than the stated 1.5 hours.
Is it correct that it takes this much time in a Colab environment?
I used dataset from
https://huggingface.co/datasets/initie/test_pick
and there is no problem with the operation of the training code. | https://github.com/huggingface/lerobot/issues/1592 | closed | [
"question",
"policies"
] | 2025-07-25T06:36:35Z | 2025-10-08T08:32:32Z | null | initia1013 |
huggingface/datasets | 7,699 | Broken link in documentation for "Create a video dataset" | The link to "the [WebDataset documentation](https://webdataset.github.io/webdataset)." is broken.
https://huggingface.co/docs/datasets/main/en/video_dataset#webdataset
<img width="2048" height="264" alt="Image" src="https://github.com/user-attachments/assets/975dd10c-aad8-42fc-9fbc-de0e2747a326" /> | https://github.com/huggingface/datasets/issues/7699 | open | [] | 2025-07-24T19:46:28Z | 2025-07-25T15:27:47Z | 1 | cleong110 |
huggingface/transformers | 39,637 | [BUG] Run 111B+ Teacher distributed inference and 8B Student distributed training on multi-node H200 GPUs using the Transformers Trainer without encountering OOM errors? | Hello, first off, apologies if this information is already available elsewhere. I've searched through the documentation and existing issues but haven't found a clear answer to my question.
I have access to 2 to 4 nodes (16 to 32 GPUs in total), each equipped with 8x140GB H200 GPUs. My objective is to perform large-scale distributed inference using a massive 111B-parameter Teacher model (CohereLabs/c4ai-command-a-03-2025) and simultaneously conduct online knowledge distillation (soft-logit based) from this 111B Teacher model to a smaller 8B Student model (CohereLabs/c4ai-command-r7b-12-2024).
Is there a way to simultaneously run distributed inference for Teacher models larger than 111B and distributed training for Student models in a multi-node setup, utilizing Hugging Face Transformers' Trainer?
The Transformers version I'm using is v4.51.3. I've observed the use of model = deepspeed.tp_model_init within the def deepspeed_init function in src/transformers/integrations/deepspeed.py. I attempted to apply this code, but it resulted in a torch.distributed.DistBackendError.
I would be very grateful if someone could explain what would be most suitable for my use case. A minimal working example would be the icing on the cake. Surely, if the Open LLM Leaderboard shows that online knowledge distillation (soft-logit) is possible with large models exceeding 111B, there must be a straightforward way to achieve what I want, but I'm unsure how everyone else does it.
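To be concrete about what I mean by soft-logit distillation: the temperature-scaled KL divergence between teacher and student distributions (Hinton-style). A framework-agnostic sketch in plain Python; the real training loop of course works on torch tensors:

```python
import math

def softmax(logits, temperature=1.0):
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def soft_logit_kd_loss(teacher_logits, student_logits, temperature=1.0):
    """KL(teacher || student) over temperature-softened distributions,
    rescaled by T^2 as is conventional for soft-label distillation."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

# identical teacher and student logits give zero loss
print(soft_logit_kd_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))
```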
For reference, below is the script I'm currently working with:
`deepspeed --num_nodes 2 --num_gpus 8 \
--hostfile $HOSTFILE \
--master_addr $MASTER_ADDR \
--master_port=62535 \
train.py \
--teacher CohereLabs/c4ai-command-a-03-2025 \
--student CohereLabs/c4ai-command-r7b-12-2024 \
--epochs 1 --batch_size 1 --seq_len 4096 --temperature 1.0 --max_samples 150 --lr 1e-6 2>&1 | tee -a "./train.log" `
```python
import deepspeed
import torch.distributed as dist
import os, math, argparse, warnings, torch, random, multiprocessing as mp
from datasets import load_dataset, concatenate_datasets
from transformers import (AutoTokenizer, AutoModelForCausalLM,
PreTrainedTokenizerBase)
from torch.nn.utils.rnn import pad_sequence
import torch.nn.functional as F
from datetime import timedelta
from deepspeed.runtime.utils import see_memory_usage
os.environ["TOKENIZERS_PARALLELISM"] = "false"
os.environ.setdefault("NCCL_ASYNC_ERROR_HANDLING", "1")
warnings.filterwarnings("ignore", category=UserWarning)
mp.set_start_method("spawn", force=True)
def get_args():
p = argparse.ArgumentParser()
p.add_argument("--teacher", default="")
p.add_argument("--student", default="")
p.add_argument("--dataset", default="")
p.add_argument("--split", default="train")
p.add_argument("--epochs", type=int, default=1)
p.add_argument("--batch_size", type=int, default=1,
help="per-GPU micro-batch")
p.add_argument("--seq_len", type=int, default=4096)
p.add_argument("--temperature", type=float, default=1.0)
p.add_argument("--lr", type=float, default=1e-6)
p.add_argument("--max_samples", type=int, default=0,
help="0=1000 ")
p.add_argument("--local_rank", type=int, default=-1,
help="deepspeed/torch launcher GPU index")
p.add_argument("--cache_path", default="")
p.add_argument("--hf_token", default="")
p = deepspeed.add_config_arguments(p)
return p.parse_args()
def main():
timeout_seconds = 3600
timeout_duration = timedelta(seconds=timeout_seconds)
dist.init_process_group(
backend="nccl",
timeout=timeout_duration
)
args = get_args()
deepspeed.init_distributed()
rank, world = deepspeed.comm.get_rank(), deepspeed.comm.get_world_size()
device = torch.device("cuda", deepspeed.comm.get_local_rank())
# Tokenizer
tokenizer = AutoTokenizer.from_pretrained(args.student,
use_fast=True, trust_remote_code=True)
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
# tokenizer token_id
tokenizer.eos_token_id = tokenizer.convert_tokens_to_ids(tokenizer.eos_token)
tokenizer.pad_token_id = tokenizer.convert_tokens_to_ids(tokenizer.pad_token)
# Teacher (inference only)
teacher_model = AutoModelForCausalLM.from_pretrained(
args.teacher, torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True, device_map=None,
cache_dir=args.cache_path,token=args.hf_token)
see_memory_usage("After load model", force=True)
teacher_model.config.eos_token_id = tokenizer.eos_token_id
teacher_model.config.pad_token_id = tokenizer.pad_token_id
teacher_engine = deepspeed.init_inference(
teacher_model,
mp_size=world,
dtype=torch.bfloat16,
replace_with_kernel_inject=True,
replace_method="auto")
| https://github.com/huggingface/transformers/issues/39637 | closed | [] | 2025-07-24T15:05:38Z | 2025-09-01T08:03:18Z | 3 | seona21 |
huggingface/lerobot | 1,586 | Real-world deploy on ALOHA Robot | How could I deploy the policies on the ALOHA robot? And how could I deploy in the real world? | https://github.com/huggingface/lerobot/issues/1586 | open | [
"question",
"robots"
] | 2025-07-24T12:52:06Z | 2025-08-21T16:18:26Z | null | LogSSim |
huggingface/diffusers | 11,984 | A compatibility issue when using custom Stable Diffusion with pre-trained ControlNets | I have successfully fine-tuned a Stable Diffusion v1.5 model using the Dreambooth script, and the results are excellent. However, I've encountered a compatibility issue when using this custom model with pre-trained ControlNets. Since the Dreambooth process modifies the U-Net weights, the original ControlNet is no longer aligned with the fine-tuned model, leading to a significant degradation in control and image quality.
My goal is to find a way to make them compatible again. It's important to clarify that I am trying to avoid a full, separate fine-tuning of the ControlNet on my custom model. That process is data- and resource-intensive, which defeats the purpose of a lightweight personalization method like Dreambooth. I have tried modifying the train_dreambooth.py script to incorporate ControlNet, but results have been consistently poor.
Is there a dedicated script or a recommended workflow in diffusers to fine-tune a Stable Diffusion with ControlNet via Dreambooth? Any guidance or pointers would be greatly appreciated. Thanks a lot! | https://github.com/huggingface/diffusers/issues/11984 | closed | [] | 2025-07-24T09:16:55Z | 2025-07-24T15:15:20Z | 6 | ScienceLi1125 |
huggingface/lighteval | 868 | How to calculate perplexity from an OpenAI compatible API | Hello,
I'm new to LightEval. I want to use LightEval to evaluate an LLM model that is served via an API. The API is OpenAI compatible. It also returns logprobs for each token. Is there a built-in function to evaluate the perplexity score? I'm asking because I see that it’s not implemented.
https://github.com/huggingface/lighteval/blob/d805f9fa0a84da9ca4c0c6a638bbed149a7012a3/src/lighteval/models/litellm_model.py#L322
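For context, once the per-token logprobs are in hand, perplexity is just the exponential of the negative mean logprob. A minimal sketch; note the response-field access in the comment is an assumption, since the exact shape varies between OpenAI-compatible servers:

```python
import math

def perplexity_from_logprobs(token_logprobs):
    """Perplexity = exp(-mean(logprob)) over the scored tokens."""
    if not token_logprobs:
        raise ValueError("need at least one token logprob")
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical extraction from an OpenAI-style chat response:
# lps = [t["logprob"] for t in resp["choices"][0]["logprobs"]["content"]]
print(perplexity_from_logprobs([-0.5, -1.0, -0.25]))
```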
Any help or guidance is greatly appreciated. Thanks. | https://github.com/huggingface/lighteval/issues/868 | open | [] | 2025-07-24T07:27:05Z | 2025-07-24T07:27:05Z | null | mrtpk |
huggingface/lerobot | 1,580 | Environment_State in act and SmolVLA policy | Hi, Thanks for the awesome work!
I have noticed a variable called observation.environment_state in the ACT policy. What exactly is the environment_state feature? Thanks!
"question",
"policies"
] | 2025-07-24T03:32:31Z | 2025-10-08T13:09:33Z | null | kasiv008 |
huggingface/transformers.js | 1,379 | Why Do I Get Different Outputs in Python and JavaScript for the Same ONNX Model? | Hi,
I'm running inference on the same ONNX model (t5-small-new) using both Python and JavaScript (via ONNX Runtime). However, the outputs differ between the two environments, even though the inputs and model are the same. The Python output is correct, while the JS output is not accurate.
Python Code:
```
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import AutoTokenizer
model = ORTModelForSeq2SeqLM.from_pretrained(
"t5-small-new",
use_cache=True
)
tokenizer = AutoTokenizer.from_pretrained("t5-small-new")
inputs = tokenizer("My Input", return_tensors="pt")
outputs = model.generate(**inputs)
print("Prediction:", tokenizer.decode(outputs[0], skip_special_tokens=True))
```
JS code:
```
const inputText = "My Input";
const tokenizer = await window.AutoTokenizer.from_pretrained("t5-small-new");
const model = await window.AutoModelForSeq2SeqLM.from_pretrained("t5-small-new", {
dtype: "fp32",
device: "wasm",
});
const encoded = await tokenizer(inputText, {
return_tensors: "pt",
});
const output = await model.generate({
input_ids: encoded.input_ids,
attention_mask: encoded.attention_mask,
use_cache: true,
});
const decoded = await tokenizer.decode(output[0], {
skip_special_tokens: true,
});
console.log("JS Prediction:", decoded);
```
My model uses `decoder_model_merged.onnx`, `encoder_model.onnx`, and `decoder_model.onnx`.
Could you guide me on what is happening and why I get different results? | https://github.com/huggingface/transformers.js/issues/1379 | closed | [
"question"
] | 2025-07-23T20:13:57Z | 2025-08-29T23:43:21Z | null | mahdin75 |
huggingface/transformers | 39,618 | SageAttention for attention implementation? | ### Feature request
I've noticed it's been a while now, but transformers still only has flash attention as the fastest attention backend for calls like these:
<img width="1307" height="780" alt="Image" src="https://github.com/user-attachments/assets/3f3d62f6-a166-4ca6-97a0-49263fd93299" />
Are there any plans to add sageattention as well?
### Motivation
It's become increasingly involved to have to monkey patch sage attention support for every new model that comes out, and for older models that used older versions of transformers, I've had to do unholy things like this:
<img width="1296" height="705" alt="Image" src="https://github.com/user-attachments/assets/c5f4ff6a-094a-48f4-9339-17de1ece43d0" />
### Your contribution
I have an example of a patch I had to do so I will upload that here
[llama_nar.py.txt](https://github.com/user-attachments/files/21393926/llama_nar.py.txt) | https://github.com/huggingface/transformers/issues/39618 | open | [
"Feature request"
] | 2025-07-23T19:10:47Z | 2025-07-25T12:30:37Z | 4 | Many0therFunctions |
huggingface/diffusers | 11,977 | how to load a finetuned model especially during validation phase | <img width="1034" height="743" alt="Image" src="https://github.com/user-attachments/assets/c4e9318f-10aa-4b91-9d60-e28a3be38f8a" />
As shown above, I have finetuned the model and want to validate it, but the given demo, train_dreambooth_sd3.py, still uses
"pipeline = StableDiffusion3Pipeline.from_pretrained(
args.pretrained_model_name_or_path,
transformer=transformer,
text_encoder=text_encoder_one,
text_encoder_2=text_encoder_two,
text_encoder_3=text_encoder_three,
) " .
I wonder why it still load from args.pretrained_model_name_or_path as it has saved the finetuned model in the save_path which is "os.path.join(args.output_dir, f"checkpoint-{global_step}")".
so, how to how to load the finetuned model during validation phase?
Another confusion, what is the difference between " StableDiffusion3Pipeline.from_pretrained() " and "SD3Transformer2DModel.from_pretrained" as the following:
<img width="1034" height="743" alt="Image" src="https://github.com/user-attachments/assets/7d9e5915-8aa2-4678-b39f-6ecb4480a02b" />
| https://github.com/huggingface/diffusers/issues/11977 | open | [] | 2025-07-23T11:54:16Z | 2025-07-24T09:19:11Z | null | micklexqg |
huggingface/lerobot | 1,579 | Is there a video backend supporting nondestructive encoding? | I saved images during recording by not deleting the `images` folder. When I compare the first frame.png in the `images` folder with the first image from dataset = make_dataset(config), I find the saved png file is lossless, but the image I get from lerobot is not.
How I found this:
in `def save_episode`
```
# img_dir = self.root / "images"
# if img_dir.is_dir():
# shutil.rmtree(self.root / "images")
```
In the latest version this has been moved; now:
```
def encode_episode_videos(self, episode_index: int) -> None:
...
encode_video_frames(img_dir, video_path, self.fps, overwrite=True)
shutil.rmtree(img_dir)
```
I recorded some images with one channel filled with zeros. Reading the saved png back through cv2 confirmed the zero-filled channel.
Then I tried to check whether I could get the same image through lerobot,
so I did this in train.py:
```
raw_dataloader = torch.utils.data.DataLoader(
dataset,
num_workers=cfg.num_workers,
batch_size=cfg.batch_size,
shuffle=False,
sampler=sampler,
pin_memory=device.type == "cuda",
drop_last=False,
)
image_tensor=peek_batch["observation.images.side_depth"][0]
image_np = (image_tensor * 255).permute(1, 2, 0).cpu().numpy().astype(np.uint8)
```
Sadly, `image_np` is quite different from the real png: it no longer has a zero-filled channel, and its average value is larger.
| https://github.com/huggingface/lerobot/issues/1579 | open | [
"question",
"dataset"
] | 2025-07-23T08:38:39Z | 2025-08-12T09:22:26Z | null | milong26 |
huggingface/candle | 3,032 | `matmul` (and others) Precision issues between Candle & PyTorch | We noticed there's some precision discrepancy in matrix multiplication and the linear layer between Candle and PyTorch. This matters a lot when reproducing LLMs that originated in PyTorch in Candle. We used the `hf_hub::api::Api` to get the safetensors from the hub and tested the precision issues for each module independently. This also occurs for the `BF16` dtype on `Cuda`.
Here's a shortened list of tests (for brevity) between `candle_core::tensor::Tensor::matmul` and `torch.matmul`
```
❌ test_0: MSE=0.0000000004096404, MAE=0.00001550 (dims: 2048x256, dtype: F32, device: Cpu)
❌ test_1: MSE=0.0000000003628351, MAE=0.00001453 (dims: 2048x256, dtype: F32, device: Cpu)
...
❌ test_48: MSE=0.0000000000824194, MAE=0.00000633 (dims: 512x1024, dtype: F32, device: Cpu)
❌ test_49: MSE=0.0000000003840639, MAE=0.00001534 (dims: 2048x256, dtype: F32, device: Cpu)
```
We did notice `candle_nn::Embedding` performed at 0-tolerance (tested indirectly), which probably means the loaded weights themselves are working precisely.
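For reference, here is how we compute the metrics, plus a stdlib illustration of why fp32 results can legitimately differ at this magnitude: summing the same numbers in a different order already perturbs the last bits, which is typically all a different matmul kernel does (an aside about floating point, not a claim about candle's kernels):

```python
import random

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def mae(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Reordering a float sum changes the rounding at each step, so two
# correct implementations can disagree in the last bits.
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(10_000)]
forward = sum(xs)
backward = sum(reversed(xs))
print(abs(forward - backward))  # tiny, but usually not exactly zero
```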
Have you guys tried validating your implementation with the PyTorch at 0-tolerance (within the same CPU/GPU architecture)? Is there any proper way to mitigate this? We need it for our implementation. Thank you. | https://github.com/huggingface/candle/issues/3032 | closed | [] | 2025-07-23T04:07:08Z | 2025-09-27T21:25:51Z | 4 | andrew-shc |
huggingface/lerobot | 1,578 | Lerobot metaworld dataset only provides 49 tasks | https://huggingface.co/datasets/lerobot/metaworld_mt50
There are only 49 unique tasks, and the "Push the puck to a goal" task repeats twice. | https://github.com/huggingface/lerobot/issues/1578 | open | [
"question",
"simulation"
] | 2025-07-23T04:03:17Z | 2025-08-12T09:23:12Z | null | chenkang455 |
huggingface/lerobot | 1,577 | test failed after training SVLA | I collected 76 sets of data and used the same calibration file as during collection. However, after training for 24k steps, the model obtained was unable to complete the grasping task during inference. Can anyone help me deal with the problem?
[dataset](https://huggingface.co/datasets/Xiaoyan97/orange_block_pickplace)
| https://github.com/huggingface/lerobot/issues/1577 | open | [
"question",
"policies"
] | 2025-07-23T03:59:26Z | 2025-08-12T09:23:26Z | null | Liu-Xiaoyan97 |
huggingface/lerobot | 1,576 | Multiple Dataset training | How can I train on multiple lerobot datasets? Is there any function I can use for this? | https://github.com/huggingface/lerobot/issues/1576 | open | [
"question",
"dataset"
] | 2025-07-23T03:46:03Z | 2025-10-10T09:30:06Z | null | JustinKai0527 |
huggingface/transformers | 39,596 | Does transformers support python3.13 --disable-gil or python3.14 free threading? | Does transformers support python 3.13 built with --disable-gil, or python 3.14 free threading?
I got an error when trying to install transformers on these two python versions. | https://github.com/huggingface/transformers/issues/39596 | closed | [] | 2025-07-23T02:34:03Z | 2025-08-30T08:02:54Z | 2 | SoulH-qqq |
huggingface/transformers.js | 1,374 | nanoVLM support | ### Question
I would like to know if there is any plan to support models built with nanoVLM [https://github.com/huggingface/nanoVLM], thanks. | https://github.com/huggingface/transformers.js/issues/1374 | open | [
"question"
] | 2025-07-22T11:43:57Z | 2025-07-23T09:02:15Z | null | sbrzz |
huggingface/diffusers | 11,971 | What is the minimum memory requirement for model training? | Hello, I would like to try training an SDXL model using my own dataset. What is the minimum memory size required for the model? | https://github.com/huggingface/diffusers/issues/11971 | closed | [] | 2025-07-22T07:52:28Z | 2025-07-22T08:26:27Z | null | WWWPPPGGG |
huggingface/transformers | 39,565 | Model forward execution in full eager mode? | I know there is a flag `attn_implementation` which could trigger specialized attention kernel implementation. Besides this, does everything run in native PyTorch eager mode? Does `transformers` have any other custom op or kernel?
```python
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B", device_map="auto", torch_dtype=torch.bfloat16, attn_implementation=None)
model.forward(input_tokens)
```
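By "baseline" I mean an element-wise tolerance comparison against the transformers output; a plain-Python sketch of the check I have in mind (torch.allclose does the same for tensors):

```python
def max_abs_diff(ref, out):
    return max(abs(r - o) for r, o in zip(ref, out))

def close_enough(ref, out, atol=1e-5, rtol=1e-3):
    # mirrors the allclose convention: |ref - out| <= atol + rtol * |ref|
    return all(abs(r - o) <= atol + rtol * abs(r) for r, o in zip(ref, out))

print(close_enough([1.0000, 2.0], [1.0001, 2.0]))
```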
I'm asking this to see if `transformers` can be used as a numerical baseline to verify other inference backend | https://github.com/huggingface/transformers/issues/39565 | closed | [] | 2025-07-21T21:49:05Z | 2025-08-21T08:34:59Z | 3 | 22quinn |
huggingface/lerobot | 1,564 | How are Episode Stats used? | I'm looking to create a subset of an episode (e.g. seconds 2-4 of a 30-second episode), and wanted to know how episode_stats are used later on for training / inference?
Are they used to normalize model inputs or are they used somewhere else as well?
e.g. in modeling_act.py
```
self.normalize_inputs = Normalize(
config.input_features, config.normalization_mapping, dataset_stats)
```
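My rough understanding of what those stats feed into, as a plain-Python paraphrase (not lerobot's actual code): each feature is normalized with its dataset mean/std, or rescaled to [-1, 1] from its min/max, so stats computed on a trimmed subset would shift what the policy sees:

```python
def normalize(values, stats, mode="mean_std"):
    # stats holds per-dimension dataset statistics, as in the episode stats
    if mode == "mean_std":
        return [(v - m) / (s + 1e-8)
                for v, m, s in zip(values, stats["mean"], stats["std"])]
    if mode == "min_max":  # map each value into [-1, 1]
        return [2 * (v - lo) / (hi - lo + 1e-8) - 1
                for v, lo, hi in zip(values, stats["min"], stats["max"])]
    raise ValueError(f"unknown mode: {mode}")

stats = {"mean": [0.5], "std": [0.1], "min": [0.0], "max": [1.0]}
print(normalize([0.7], stats))
print(normalize([0.75], stats, "min_max"))
```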
| https://github.com/huggingface/lerobot/issues/1564 | closed | [
"question",
"policies",
"processor"
] | 2025-07-21T19:06:21Z | 2025-08-12T09:27:29Z | null | andlyu |
huggingface/lerobot | 1,561 | Will you release the LIBERO ft&eval settings? | Hello, your SmolVLA is wonderful work! I noticed that you finetuned it on **LIBERO** and evaluated it there, but I couldn't achieve the same or a similar success rate **(just 76%, much lower than your '96%')**.
**Have you used async inference on LIBERO?**
I think my hyperparameters must differ from yours, so could you release the scripts (finetune.py & eval.py) or just tell me your finetuning & eval settings? Here is my email: 602225349@qq.com
Thanks in advance~
"enhancement",
"question",
"policies"
] | 2025-07-21T13:57:13Z | 2025-09-23T09:25:04Z | null | JuilieZ |
huggingface/transformers | 39,554 | Why is `is_causal` not used in `flash_attention_forward`? | I want to perform bidirectional attention in the Qwen3 model to train an embedding model, so I passed `is_causal=False` in the model `forward` (I manually added `is_causal` arguments to all `forward` methods, such as `Qwen3Model` and `Qwen3Attention`, in `modeling_qwen3.py`):
```python
class Qwen3Attention(nn.Module):
"""Multi-headed attention from 'Attention Is All You Need' paper"""
...
def forward(
self,
hidden_states: torch.Tensor,
position_embeddings: tuple[torch.Tensor, torch.Tensor],
attention_mask: Optional[torch.Tensor],
past_key_value: Optional[Cache] = None,
cache_position: Optional[torch.LongTensor] = None,
is_causal: Optional[bool] = True, # I add is_causal here
**kwargs: Unpack[FlashAttentionKwargs],
) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]:
...
attn_output, attn_weights = attention_interface(
self,
query_states,
key_states,
value_states,
attention_mask,
dropout=0.0 if not self.training else self.attention_dropout,
scaling=self.scaling,
sliding_window=self.sliding_window, # diff with Llama
is_causal=is_causal, # and is_causal from the argument is passed to the attention_interface (e.g. `flash_attention_2`, `sdpa_attention_forward`)
**kwargs,
)
```
I can successfully change the causality of the attention in `sdpa_attention_forward`. However, I realized that it does not change the causality in the attention in `flash_attention_forward`. After diving into the implementation of `flash_attention_forward`, I found the reason in `flash_attention_forward` located at `transformers/integrations/flash_attention.py`:
```python
def flash_attention_forward(
module: torch.nn.Module,
query: torch.Tensor,
key: torch.Tensor,
value: torch.Tensor,
attention_mask: Optional[torch.Tensor],
dropout: float = 0.0,
scaling: Optional[float] = None,
sliding_window: Optional[int] = None,
softcap: Optional[float] = None,
**kwargs,
) -> tuple[torch.Tensor, None]:
...
# FA2 always relies on the value set in the module, so remove it if present in kwargs to avoid passing it twice
kwargs.pop("is_causal", None)
attn_output = _flash_attention_forward(
query,
key,
value,
attention_mask,
query_length=seq_len,
is_causal=module.is_causal, # here module is `Qwen3Attention`
dropout=dropout,
softmax_scale=scaling,
sliding_window=sliding_window,
softcap=softcap,
use_top_left_mask=_use_top_left_mask,
target_dtype=target_dtype,
attn_implementation=module.config._attn_implementation,
**kwargs,
)
```
As you can see, the `is_causal` argument is popped, and the `is_causal` of `Qwen3Attention` is used as the argument. Note that `Qwen3Attention.is_causal` is never changed, and its default value is `True`, so the `is_causal` argument passed into `_flash_attention_forward` will always be `True` regardless of any change.
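To spell out what the flag controls: with is_causal=True each position may only attend to itself and earlier positions, while is_causal=False allows full bidirectional attention. A tiny mask sketch in plain Python:

```python
def attention_mask(seq_len, is_causal=True):
    # mask[i][j] is True where query position i may attend to key position j
    return [[(j <= i) or not is_causal for j in range(seq_len)]
            for i in range(seq_len)]

for row in attention_mask(4, is_causal=True):
    print(row)  # lower-triangular pattern: each token sees only the past
```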
After adding a line of code to alter `Qwen3Attention.is_causal`, i.e. `self.is_causal = is_causal` before passing the arguments into `attention_interface`, I can change the causality of `flash_attention_forward`. So I would like to know whether this is a feature or a bug. Thank you!!
"Flash Attention"
] | 2025-07-21T12:08:00Z | 2025-11-11T12:32:41Z | 9 | lucaswychan |
huggingface/peft | 2,660 | Custom models LoRA | Is there any way to fine-tune models that are not in the support list or custom models?
Currently, many public models have their LLM parts from Qwen. Can LLaMA-Factory use the Qwen template and only fine-tune the LLM part? Thank you | https://github.com/huggingface/peft/issues/2660 | closed | [] | 2025-07-21T11:52:30Z | 2025-07-24T12:53:34Z | 6 | stillbetter |
huggingface/lerobot | 1,559 | Is the current model framework suitable for using automatic mixed precision? | I saw that `.to(torch.float32)` and `.to(torch.bfloat16)` were used in many places in the Pi0 model code. Then I implemented parallel training of Pi0 based on accelerate, and found that if I want to use AMP, the code reports a dtype mismatch error. I want to know whether the existing code is suitable for automatic mixed precision, and if not, how it should be modified. | https://github.com/huggingface/lerobot/issues/1559 | open | [
"question",
"policies"
] | 2025-07-21T10:45:26Z | 2025-08-12T09:27:59Z | null | xliu0105 |
huggingface/transformers | 39,549 | Is there plan to integrate ColQwen2.5 into Transformers? | ### Model description
Is ColQwen2ForRetrieval integrated into the transformers library, and are there plans to add [ColQwen2.5](https://github.com/illuin-tech/colpali/blob/main/colpali_engine/models/qwen2_5/colqwen2_5/modeling_colqwen2_5.py) in the future?
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
https://github.com/illuin-tech/colpali/blob/main/colpali_engine/models/qwen2_5/colqwen2_5/modeling_colqwen2_5.py
https://github.com/huggingface/transformers/pull/38391 | https://github.com/huggingface/transformers/issues/39549 | closed | [
"New model"
] | 2025-07-21T10:08:47Z | 2025-11-03T23:31:08Z | 0 | rebel-thkim |
huggingface/diffusers | 11,966 | How about forcing the first and last block on device when group offloading is used? | **Is your feature request related to a problem? Please describe.**
When group offloading is enabled, offload and onload cannot be overlapped (streamed) between steps, and this is a really time-consuming problem.
**Describe the solution you'd like.**
Is it possible to add an option that forces the first and last blocks to stay on device, avoiding their offload and onload?
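To make the ask concrete, a back-of-the-envelope model of the transfer savings (a pure-Python illustration of the idea, not diffusers API):

```python
def transfers_per_step(num_blocks, pinned=()):
    # every non-pinned block costs one onload plus one offload per step
    movable = [b for b in range(num_blocks) if b not in pinned]
    return 2 * len(movable)

steps, blocks = 30, 40
print("no pinning:", steps * transfers_per_step(blocks))
print("pin first+last:", steps * transfers_per_step(blocks, pinned=(0, blocks - 1)))
```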
@a-r-r-o-w Could you please give some help? Thanks so much.
| https://github.com/huggingface/diffusers/issues/11966 | open | [
"contributions-welcome",
"group-offloading"
] | 2025-07-21T08:38:30Z | 2025-12-02T15:30:23Z | 13 | seed93 |
huggingface/tokenizers | 1,829 | The parameter in initial_alphabet of the "class BpeTrainer(Trainer)" does not allow more than one character to be initialized | Hi everyone,
I am working on the Tamil and Sinhala languages, which are morphologically rich. In these languages a character is actually a combination of multiple Unicode codepoints (similar to emojis), so it would be greatly beneficial to initialize the BPE alphabet with graphemes instead of single characters. Is there any workaround I can use to initialize the BPE algorithm this way? Thanks in advance!! | https://github.com/huggingface/tokenizers/issues/1829 | open | [] | 2025-07-21T08:30:21Z | 2025-07-21T08:30:21Z | 0 | vmenan |
huggingface/lerobot | 1,554 | How to use local datasets to train and evaluate | Due to network issues, I want to use only local datasets during training and evaluation, and prevent huggingface from uploading data or retrieving datasets from the hub. Is there any good solution? | https://github.com/huggingface/lerobot/issues/1554 | closed | [
"question",
"dataset"
] | 2025-07-21T07:54:07Z | 2025-10-08T12:58:32Z | null | zym123321 |
huggingface/optimum | 2,324 | AutoConfig.from_dict Missing in transformers==4.51.3 — Incompatibility with optimum==1.26.1 | ### System Info
```shell
I am running into a critical compatibility issue between optimum and recent versions of transformers.
❗ Error Summary
When using:
transformers==4.51.3
optimum==1.26.1
onnx==1.17.0
onnxruntime==1.20.0
The following runtime error is thrown when attempting to load an ONNX model using ORTModelForTokenClassification.from_pretrained:
AttributeError: type object 'AutoConfig' has no attribute 'from_dict'
This traces back to:
config = AutoConfig.from_pretrained(...)
# ↓ internally calls:
return CONFIG_MAPPING[pattern].from_dict(config_dict, **unused_kwargs)
However, in transformers>=4.48, the method AutoConfig.from_dict appears to have been deprecated or removed. This causes optimum to break at runtime when trying to load ONNX models.
📦 Package Versions
transformers - 4.51.3
optimum - 1.26.1
onnx - 1.17.0
onnxruntime - 1.20.0
torch - 2.2.6
Due to a security advisory, we're required to upgrade to transformers>=4.48. However, even with the latest optimum==1.26.1, it appears optimum is not yet updated for compatibility with changes introduced in recent transformers versions.
ASK:
Is support for transformers>=4.48 (particularly 4.51.3) planned in an upcoming optimum release?
Could this AutoConfig.from_dict dependency be refactored or conditionally patched to restore compatibility?
Is there a compatibility roadmap available between transformers and optimum for ONNX workflows?
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
Use transformers==4.51.3 and optimum==1.26.1
Load an exported ONNX model using ORTModelForTokenClassification.from_pretrained(...)
Observe the AttributeError about AutoConfig.from_dict
### Expected behavior
When using optimum==1.26.1 with transformers>=4.48 (specifically 4.51.3), the following should work without error:
from optimum.onnxruntime import ORTModelForTokenClassification
model = ORTModelForTokenClassification.from_pretrained("path/to/onnx/model")
The model should load successfully using the ONNX Runtime backend.
Internally, AutoConfig.from_pretrained(...) should function correctly regardless of changes in the transformers API (e.g., deprecation/removal of from_dict).
ONNX workflows should remain compatible with newer transformers versions, allowing teams to benefit from critical updates and security patches without breaking ONNX integration. | https://github.com/huggingface/optimum/issues/2324 | open | [
"bug"
] | 2025-07-21T06:04:58Z | 2025-08-01T07:10:20Z | 5 | rratnakar09 |
huggingface/diffusers | 11,964 | KeyError when loading LoRA for Flux model: missing lora_unet_final_layer_adaLN_modulation_1 weights | I'm trying to run Overlay-Kontext-Dev-LoRA locally by loading the LoRA weights using the pipe.load_lora_weights() function. However, I encountered the following error during execution:
> KeyError: 'lora_unet_final_layer_adaLN_modulation_1.lora_down.weight'
```
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# Load the pipeline with a specific torch data type for GPU optimization
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    torch_dtype=torch.bfloat16
)

# Move the entire pipeline to the GPU
pipe.to("cuda")

# Load LoRA weights (this will also be on the GPU)
pipe.load_lora_weights("ilkerzgi/Overlay-Kontext-Dev-LoRA")

prompt = "Place it"
input_image = load_image("img2.png")

# The pipeline will now run on the GPU
image = pipe(image=input_image, prompt=prompt).images[0]
image.save("output_image.png")
```
Environment:
diffusers version: 0.35.0.dev0
Python: 3.10
Running locally on a ubuntu environment with RTX 4090
> Additional Note:
> The model file size is also quite large. I may need to quantize it before running it on the 4090 to avoid out-of-memory issues.
>
> Would appreciate any help or suggestions on how to resolve the loading issue. Thank you!
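One hedged stopgap while this gets fixed (an assumption, not a verified solution — dropping weights may change the LoRA's effect) is to filter out the unmappable `final_layer` keys from the state dict before passing it to `pipe.load_lora_weights`:

```python
# Hedged workaround sketch: drop the LoRA entries the diffusers converter
# cannot map (here the `final_layer` adaLN modulation keys) before loading.
# Whether skipping these weights is acceptable quality-wise is untested.
def filter_lora_state_dict(state_dict, skip_substrings=("final_layer",)):
    kept, dropped = {}, []
    for key, value in state_dict.items():
        if any(s in key for s in skip_substrings):
            dropped.append(key)
        else:
            kept[key] = value
    return kept, dropped

# Stand-in state dict illustrating the two kinds of keys:
sd = {
    "lora_unet_final_layer_adaLN_modulation_1.lora_down.weight": "w0",
    "lora_unet_double_blocks_0_img_attn_proj.lora_down.weight": "w1",
}
kept, dropped = filter_lora_state_dict(sd)
print(len(kept), dropped)
```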
| https://github.com/huggingface/diffusers/issues/11964 | open | [] | 2025-07-21T05:16:34Z | 2025-07-21T09:14:00Z | 1 | NEWbie0709 |
huggingface/transformers | 39,545 | Is the new Intel–Weizmann speculative decoding algorithm integrated into Transformers? | Hi,
I recently read about a new speculative decoding algorithm developed by Intel Labs and the Weizmann Institute, which reportedly improves inference speed by up to 2.8×, even when using draft and target models with different vocabularies or architectures.
References:
- [Intel Newsroom](https://newsroom.intel.com/artificial-intelligence/intel-weizmann-institute-speed-ai-with-speculative-decoding-advance?utm_source=chatgpt.com)
- [CTech Article](https://www.calcalistech.com/ctechnews/article/h1z7pydlex)
Several sources (including Intel press releases and third-party writeups) claim that this algorithm has already been integrated into the Hugging Face Transformers library.
However, I haven’t found any reference to this new algorithm in the official Transformers documentation.
My Questions:
1. Has this Intel–Weizmann speculative decoding algorithm actually been integrated into transformers?
2. If so, where can I find documentation or usage examples for how to enable it?
Thanks in advance for your help! This looks like a powerful advancement, and I'd love to test it. | https://github.com/huggingface/transformers/issues/39545 | closed | [] | 2025-07-21T02:47:48Z | 2025-07-22T12:15:54Z | 4 | NEWbie0709 |
huggingface/lerobot | 1,552 | Support smolvla training on Intel GPU | Current script is only supporting `cuda`, `mps` and `cpu`.
Since PyTorch 2.7 ships with Intel GPU support, Intel GPUs could be utilized in the training script once that PyTorch build is installed. | https://github.com/huggingface/lerobot/issues/1552 | open | [
"enhancement",
"question",
"policies"
] | 2025-07-21T01:47:38Z | 2025-10-09T07:40:10Z | null | xiangyang-95 |
huggingface/transformers | 39,542 | ValueError: You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time | ### System Info
- `transformers` version: 4.53.2
- Platform: **Ubuntu 22.04** Linux 5.15.0-139-generic
- **Python 3.10.18** + ipykernel 6.29.5
- Pytorch 2.7.1+cu118
### Who can help?
@ArthurZucker
@SunMarc
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I want to build a new MT model with a **bert-based encoder** and a **decoder from opus-mt-en-zh** (loaded as `MarianMTModel`), BUT when I execute `Trainer.train()`, it reports `ValueError: You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time`. This is the code for my model and trainer.
Thanks for helping!
```Python
# ManchuBERT Encoder + Opus-MT-zh Decoder
import torch
from torch import nn
from transformers.modeling_outputs import Seq2SeqLMOutput


def get_extended_attention_mask(attention_mask, input_shape, device, dtype=torch.float32):
    """
    attention_mask: [B, seq_len]
    return: [B, 1, 1, seq_len]
    """
    mask = attention_mask[:, None, None, :]  # [B, 1, 1, seq_len]
    mask = mask.to(dtype=dtype)
    mask = (1.0 - mask) * -10000.0
    return mask


class ManchuZhMT(nn.Module):
    def __init__(self, bert, marian):
        super().__init__()
        self.decoder_embeddings = marian.model.decoder.embed_tokens
        self.embeddings = bert.embeddings
        self.encoder = bert.encoder
        self.decoder = marian.model.decoder
        self.lm_head = marian.lm_head
        self.final_logits_bias = marian.final_logits_bias
        self.config = marian.config

    def forward(self,
                input_ids=None,
                attention_mask=None,
                decoder_input_ids=None,
                decoder_attention_mask=None,
                labels=None,
                **kwargs):
        hidden_states = self.embeddings(input_ids=input_ids)
        attention_mask = attention_mask.to(dtype=torch.float32)
        extended_mask = get_extended_attention_mask(attention_mask, input_ids.shape, input_ids.device)
        enc_out = self.encoder(hidden_states=hidden_states,
                               attention_mask=extended_mask,
                               return_dict=True)
        dec_out = self.decoder(
            input_ids=decoder_input_ids,
            attention_mask=decoder_attention_mask,
            encoder_hidden_states=enc_out.last_hidden_state,
            encoder_attention_mask=extended_mask,
            return_dict=True)
        logits = self.lm_head(dec_out.last_hidden_state) + self.final_logits_bias
        loss = None
        if labels is not None:
            loss_fct = nn.CrossEntropyLoss(ignore_index=-100)
            loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
        return Seq2SeqLMOutput(loss=loss, logits=logits)

    def prepare_inputs_for_generation(self, *args, **kwargs):
        return self.decoder.prepare_inputs_for_generation(*args, **kwargs)

    def _prepare_encoder_decoder_kwargs_for_generation(self, *args, **kwargs):
        return self.decoder._prepare_encoder_decoder_kwargs_for_generation(*args, **kwargs)


model = ManchuZhMT(manchu_model, chn_model)
print(model)

# freeze Decoder + LM Head
for p in model.decoder.parameters():
    p.requires_grad = False
for p in model.lm_head.parameters():
    p.requires_grad = False
```
```Python
# Add LoRA for Encoder
from peft import LoraConfig, get_peft_model, TaskType

num_layers = len(model.encoder.layer)
target_modules = []
for i in range(num_layers):
    target_modules.extend([
        f"encoder.layer.{i}.attention.self.query",
        f"encoder.layer.{i}.attention.self.key",
        f"encoder.layer.{i}.attention.self.value",
        f"encoder.layer.{i}.attention.output.dense",
        f"encoder.layer.{i}.intermediate.dense",
        f"encoder.layer.{i}.output.dense",
    ])

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    target_modules=target_modules,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```
```Python
# Start Train!
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="./lora_with_bert",
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=10,
    learning_rate=3e-4,
    fp16=True,
    save_strategy="epoch",
    predict_with_generate=True,
    logging_steps=100,
    report_to="none",
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized_ds["train"],
    eval_dataset=tokenized_ds["val"],
    tokenizer=manchu_tok,
)
trainer.train()
trainer.save_model("./lora_with_bert/final")
```
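For reference, the error message comes from a mutual-exclusion guard in the decoder; a simplified stand-in (not the actual transformers code) shows the pattern, and `predict_with_generate` is one possible place where the generation utilities end up setting both arguments on the wrapped decoder:

```python
# Simplified stand-in for the guard inside the decoder that raises this
# ValueError when both inputs are provided (not the actual transformers code).
def decoder_forward(decoder_input_ids=None, decoder_inputs_embeds=None):
    if decoder_input_ids is not None and decoder_inputs_embeds is not None:
        raise ValueError(
            "You cannot specify both decoder_input_ids and "
            "decoder_inputs_embeds at the same time"
        )
    return decoder_input_ids if decoder_input_ids is not None else decoder_inputs_embeds

try:
    decoder_forward(decoder_input_ids=[1, 2], decoder_inputs_embeds=[[0.1, 0.2]])
except ValueError as err:
    print(type(err).__name__)
```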
### Expected behavior | https://github.com/huggingface/transformers/issues/39542 | closed | [
"Usage",
"Good First Issue",
"trainer",
"bug"
] | 2025-07-21T01:06:27Z | 2025-08-22T05:53:51Z | 10 | xjackzenvey |
huggingface/transformers | 39,551 | InformerForPrediction [I would like to seek your opinions, everyone, How can I set the dynamic real features for prediction] | Here is the description cited from the docs of InformerForPrediction:
> future_time_features (torch.FloatTensor of shape (batch_size, prediction_length, num_features)) — Required time features for the prediction window, which the model internally will add to future_values. These could be things like “month of year”, “day of the month”, etc. encoded as vectors (for instance as Fourier features). These could also be so-called “age” features, which basically help the model know “at which point in life” a time-series is. Age features have small values for distant past time steps and increase monotonically the more we approach the current time step. Holiday features are also a good example of time features.
These features serve as the “positional encodings” of the inputs. So contrary to a model like BERT, where the position encodings are learned from scratch internally as parameters of the model, the Time Series Transformer requires to provide additional time features. The Time Series Transformer only learns additional embeddings for static_categorical_features.
Additional dynamic real covariates can be concatenated to this tensor, with the caveat that these features must but known at prediction time.
The `num_features` here is equal to `config.num_time_features + config.num_dynamic_real_features`.
Hi, I have a question regarding inference in time series forecasting models.
When making predictions, how can I obtain or construct the dynamic_real_features for the future steps (i.e., for the prediction_length)?
More specifically, how should I concatenate the corresponding dynamic_real_features and time_features during inference?
Is it appropriate to use all-zero placeholders for the future dynamic_real_features?
Will this affect prediction performance, considering that during training the model has access to real values for these features over the full context + prediction window?
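To make the question concrete, here is a shape-only sketch using plain lists as tensor stand-ins (names and values hypothetical): the future features are the known calendar features concatenated with zero placeholders for the unknown dynamic reals, which is exactly the choice I am unsure about:

```python
# future_time_features has shape
# (prediction_length, num_time_features + num_dynamic_real_features).
prediction_length = 4
num_dynamic_real = 1

def build_future_features(time_feats, num_dynamic_real):
    # Known time features per future step, padded with zeros for dynamic reals.
    return [list(step) + [0.0] * num_dynamic_real for step in time_feats]

# two known time features per step, e.g. scaled day-of-month / month-of-year
time_feats = [[0.1 * t, 0.5] for t in range(prediction_length)]
future = build_future_features(time_feats, num_dynamic_real)
print(len(future), len(future[0]))
```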
On a related note:
In time series forecasting, is it necessary for all timestamps in the input window to be equally spaced (e.g., every x minutes)?
Or can I use sequences with irregular time intervals, as long as the time order is preserved?
Thanks for your help!
| https://github.com/huggingface/transformers/issues/39551 | closed | [] | 2025-07-20T11:38:50Z | 2025-08-28T08:03:20Z | null | 2004learner |
huggingface/diffusers | 11,961 | New Adapter/Pipeline Request: IT-Blender for Creative Conceptual Blending | ## Model/Pipeline/Scheduler description
### Name of the model/pipeline/scheduler
"Image-and-Text Concept Blender" (IT-Blender), a diffusion adapter that blends visual concepts from a real reference image with textual concepts from a prompt in a disentangled manner. The goal is to enhance human creativity in design tasks.
### Project page & ArXiv link
Paper link: https://arxiv.org/pdf/2506.24085
The project website: https://imagineforme.github.io/
**(many interesting examples can be found on the project page.)**
</br>
<img width="2880" height="3159" alt="Image" src="https://github.com/user-attachments/assets/87607797-32a1-41a5-b5aa-69cd8406352c" />
### What is the proposed method?
IT-Blender is an adapter that works with existing models like SD and FLUX. Its core innovation is the **Blended Attention (BA)** module. This module modifies the standard self-attention layers. It uses a two-stream approach (a noisy stream for generation and a clean reference stream for the image) and introduces trainable parameters within an Image Cross-Attention (imCA) term to bridge the distributional shift between clean and noisy latents.
### Is the pipeline different from an existing pipeline?
Yes. The IT-Blender pipeline is distinct for a few reasons:
1. **Native Image Encoding**: It uses the diffusion model's own denoising network to encode the reference image by forwarding a clean version at "t=0". This avoids an external image encoder to better preserve details.
2. **Two-Stream Processing**: During training and inference, it processes a "noisy stream" for the text-guided generation and a "reference stream" for the clean visual concept image simultaneously.
3. **Blended Attention Integration**: The pipeline replaces standard self-attention modules with the new Blended Attention (BA) module, which is designed to physically separate textual and visual concept processing.
### Why is this method useful?
The method is particularly effective for creative tasks like product design, character design, and graphic design, as shown by the extensive examples in the paper and project page. We believe it would be a valuable and unique addition to the `diffusers` library.
### Open source status
- [x] The model implementation is available.
- [x] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
**Demo page**: https://huggingface.co/spaces/WonwoongCho/IT-Blender
**GitHub page for inference**: https://github.com/WonwoongCho/IT-Blender
Note that we are using our own diffusers with a little bit of changes (`requirements.txt` in the github repo);
**Changed Diffusers Pipeline for FLUX**: https://github.com/WonwoongCho/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux.py
**Changed Diffusers Pipeline for SD1.5**: https://github.com/WonwoongCho/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
| https://github.com/huggingface/diffusers/issues/11961 | open | [] | 2025-07-20T03:07:38Z | 2025-07-20T03:08:06Z | 0 | WonwoongCho |
huggingface/transformers | 39,522 | T5Gemma failing on provided example | ### System Info
- `transformers` version: 4.53.2
- Platform: Linux-6.14.0-23-generic-x86_64-with-glibc2.41
- Python version: 3.13.3
- Huggingface_hub version: 0.33.4
- Safetensors version: 0.5.3
- Accelerate version: 1.8.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: True
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- dynamo_config: {'dynamo_backend': 'INDUCTOR'}
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 5060 Ti
### Who can help?
@ArthurZucker and @itazap
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the example from the T5Gemma docs page.
```
echo -e "Question: Why is the sky blue? Answer:" | transformers run --task text2text-generation --model google/t5gemma-s-s-ul2 --device 0
```
### Expected behavior
When I run I get:
```
File ".venv/lib/python3.13/site-packages/transformers/configuration_utils.py", line 209, in __getattribute__
return super().__getattribute__(key)
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
AttributeError: 'T5GemmaConfig' object has no attribute **'vocab_size'**
```
Indeed, `vocab_size` is a sub-attribute of the encoder/decoder sub-configs, not a direct attribute of `T5GemmaConfig`.
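The mismatch is easy to mimic with plain classes; the fallback lookup below is illustrative only (not the actual transformers fix, and the vocab value is made up):

```python
class SubConfig:
    vocab_size = 256000  # illustrative value

class T5GemmaLikeConfig:
    # vocab_size lives only on the nested encoder/decoder configs
    encoder = SubConfig()
    decoder = SubConfig()

def get_vocab_size(config):
    if hasattr(config, "vocab_size"):
        return config.vocab_size
    for sub_name in ("encoder", "decoder"):
        sub = getattr(config, sub_name, None)
        if sub is not None and hasattr(sub, "vocab_size"):
            return sub.vocab_size
    raise AttributeError("vocab_size not found on config or sub-configs")

print(get_vocab_size(T5GemmaLikeConfig()))
```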
| https://github.com/huggingface/transformers/issues/39522 | closed | [
"bug"
] | 2025-07-19T11:07:26Z | 2025-08-27T07:51:08Z | 7 | jadermcs |
huggingface/lerobot | 1,540 | Controlling robot with text using SmolVLA | Is it possible to control the robot with text inputs? I thought that's what a VLA model was...
I cannot find any instructions on how to do this anywhere...
I found this https://huggingface.co/masato-ka/smolvla_block_instruction, but `control_robot` was split into multiple files recently, none of which seem to work.
| https://github.com/huggingface/lerobot/issues/1540 | open | [
"question",
"policies"
] | 2025-07-18T23:09:11Z | 2025-08-12T09:35:59Z | null | drain-pipe |
huggingface/diffusers | 11,956 | Frequency-Decoupled Guidance (FDG) for diffusion models | FDG is a new method for applying CFG in the frequency domain. It improves generation quality at low CFG scales while inherently avoiding the harmful effects of high CFG values. It could be a nice addition to the guiders part of diffusers. The implementation details for FDG are available on page 19 of the paper.
https://huggingface.co/papers/2506.19713 | https://github.com/huggingface/diffusers/issues/11956 | closed | [
"help wanted",
"Good second issue",
"contributions-welcome",
"advanced",
"consider-for-modular-diffusers"
] | 2025-07-18T19:12:50Z | 2025-08-07T05:51:03Z | 5 | Msadat97 |
huggingface/datasets | 7,689 | BadRequestError for loading dataset? | ### Describe the bug
Up until a couple days ago I was having no issues loading `Helsinki-NLP/europarl` and `Helsinki-NLP/un_pc`, but now suddenly I get the following error:
```
huggingface_hub.errors.BadRequestError: (Request ID: ...)
Bad request:
* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand
✖ Invalid input: expected array, received string
→ at paths
✖ Invalid input: expected boolean, received string
→ at expand
```
I tried with both `4.0.0` and `3.5.1` since this dataset uses `trust_remote_code`, but I get the same error with both.
What can I do to load the dataset? I checked the documentation and GitHub issues here, but couldn't find a solution.
### Steps to reproduce the bug
```python
import datasets
ds = datasets.load_dataset("Helsinki-NLP/europarl", "en-fr", streaming=True, trust_remote_code=True)["train"]
```
### Expected behavior
That the dataset loads as it did a couple days ago.
### Environment info
- `datasets` version: 3.5.1
- Platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.11
- `huggingface_hub` version: 0.30.2
- PyArrow version: 20.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1 | https://github.com/huggingface/datasets/issues/7689 | closed | [] | 2025-07-18T09:30:04Z | 2025-07-18T11:59:51Z | 17 | WPoelman |
huggingface/diffusers | 11,951 | Kontext model loading quantization problem | Hello, can Kontext currently be loaded with quantization? Because I only have a 4090 with 24 GB of video memory, the current fp16 loading method causes OOM. Like Flux, can it be loaded with torchao or GGUF, so that this model can run on a 4090? | https://github.com/huggingface/diffusers/issues/11951 | closed | [] | 2025-07-18T03:20:48Z | 2025-07-18T05:39:28Z | 2 | babyta |
huggingface/transformers | 39,484 | Transformers still tries to use apex.amp which is no longer a thing in apex. | ### System Info
```
root@12bb27e08b1b:/# pip show transformers
Name: transformers
Version: 4.52.3
```
trainer.py contains this:
```
if is_apex_available():
from apex import amp
```
Apex (built from source, as they recommend) no longer comes with amp.
How to reproduce?
1. install transformers
2. install apex
3. python `from trl import SFTTrainer`
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
How to reproduce?
1. install transformers
2. install apex
3. python `from trl import SFTTrainer`
### Expected behavior
There should not be `from apex import amp` in the code base | https://github.com/huggingface/transformers/issues/39484 | closed | [
"bug"
] | 2025-07-17T16:43:14Z | 2025-08-25T08:03:03Z | 4 | yselivonchyk |
huggingface/datasets | 7,688 | No module named "distributed" | ### Describe the bug
Hello, when I run `from datasets.distributed import split_dataset_by_node`, I always get the error "No module named 'datasets.distributed'" with different versions such as 4.0.0, 2.21.0, and so on. How can I solve this?
### Steps to reproduce the bug
1. pip install datasets
2. from datasets.distributed import split_dataset_by_node
### Expected behavior
Expecting the command `from datasets.distributed import split_dataset_by_node` to run successfully.
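One hedged self-check (my assumption about a common cause: a local file or folder named `datasets` shadowing the installed package produces exactly this kind of import error) is to print where the imported module actually lives:

```python
import importlib

def module_origin(name):
    # Returns the file a module was imported from.
    mod = importlib.import_module(name)
    return getattr(mod, "__file__", "<namespace or builtin>")

# For the real check, run module_origin("datasets") and verify the path points
# into site-packages rather than a local datasets.py in the working directory.
print(module_origin("json"))  # stdlib module used here as a safe demo
```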
### Environment info
python: 3.12 | https://github.com/huggingface/datasets/issues/7688 | open | [] | 2025-07-17T09:32:35Z | 2025-07-25T15:14:19Z | 3 | yingtongxiong |
huggingface/alignment-handbook | 220 | A little question: why num examples is much less than the total amount of my training dataset? | I am using this repo to SFT a model, and I notice that:
I printed the total size of my training dataset, which is 7473:
`Number of raw training samples: 7473`
But during training, I find the log:
[INFO|trainer.py:2314] 2025-07-17 17:03:23,908 >> ***** Running training *****
[INFO|trainer.py:2315] 2025-07-17 17:03:23,908 >> Num examples = 698
[INFO|trainer.py:2316] 2025-07-17 17:03:23,908 >> Num Epochs = 3
[INFO|trainer.py:2317] 2025-07-17 17:03:23,908 >> Instantaneous batch size per device = 2
[INFO|trainer.py:2320] 2025-07-17 17:03:23,908 >> Total train batch size (w. parallel, distributed & accumulation) = 32
[INFO|trainer.py:2321] 2025-07-17 17:03:23,908 >> Gradient Accumulation steps = 4
[INFO|trainer.py:2322] 2025-07-17 17:03:23,908 >> Total optimization steps = 66
[INFO|trainer.py:2323] 2025-07-17 17:03:23,910 >> Number of trainable parameters = 7,612,756,480
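One possible explanation, purely an assumption on my side (not confirmed from the recipe config), is sequence packing: raw samples are concatenated and re-chunked into `max_seq_length` sequences, so `Num examples` counts packed sequences rather than raw samples. A toy calculation with a hypothetical average length:

```python
def packed_example_count(sample_lengths, max_seq_length):
    # Concatenate all samples and split the token stream into fixed-length chunks.
    total_tokens = sum(sample_lengths)
    return max(1, total_tokens // max_seq_length)

# e.g. 7473 samples averaging ~190 tokens packed into 2048-token sequences:
print(packed_example_count([190] * 7473, 2048))
```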
I am using a machine with 8 A100s. Could anyone explain this? I am afraid I didn't use the whole dataset but only 698 of the 7473 samples for training... | https://github.com/huggingface/alignment-handbook/issues/220 | closed | [] | 2025-07-17T09:12:08Z | 2025-07-23T23:30:33Z | 3 | Red-Scarff |
huggingface/diffusers | 11,945 | Floating point exception with nightly PyTorch and CUDA | ### Describe the bug
When running any code snippet using diffusers, it fails with a floating point exception and doesn't print any traceback.
For example, this snippet (the Stable Diffusion 3.5 medium example) triggers the issue:
```
import torch
from diffusers import StableDiffusion3Pipeline
pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3.5-medium", torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")
image = pipe(
"A capybara holding a sign that reads Hello World",
num_inference_steps=40,
guidance_scale=4.5,
).images[0]
image.save("capybara.png")
```
The issue could be with upstream PyTorch or CUDA, but we'd need to identify what part of Diffusers is triggering it.
### Reproduction
Not too sure, as it's my first time with Diffusers, but as suggested by [John6666](https://discuss.huggingface.co/u/John6666/summary), likely any NVIDIA GeForce RTX 5000 series card... In my case it's a 16 GB 5060 Ti. Perhaps driver 575.57.08 with CUDA 12.9 and/or PyTorch 2.9.0.dev20250716+cu129?
### Logs
```shell
Let me know how I can retrieve any logs you might need.
```
### System Info
`diffusers-cli env` also causes a Floating point exception, but here you have environment information:
**OS**: Debian 12
```
nvidia-smi
Wed Jul 16 15:58:48 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 575.57.08 Driver Version: 575.57.08 CUDA Version: 12.9 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 5060 Ti On | 00000000:01:00.0 On | N/A |
| 0% 42C P5 4W / 180W | 10MiB / 16311MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
```
```
pip list
Package Version
------------------------ ------------------------
bitsandbytes 0.46.1
certifi 2025.7.14
charset-normalizer 3.4.2
diffusers 0.34.0
filelock 3.18.0
fsspec 2025.7.0
hf-xet 1.1.5
huggingface-hub 0.33.4
idna 3.10
importlib_metadata 8.7.0
Jinja2 3.1.6
MarkupSafe 3.0.2
mpmath 1.3.0
networkx 3.5
numpy 2.3.1
nvidia-cublas-cu12 12.9.1.4
nvidia-cuda-cupti-cu12 12.9.79
nvidia-cuda-nvrtc-cu12 12.9.86
nvidia-cuda-runtime-cu12 12.9.79
nvidia-cudnn-cu12 9.10.2.21
nvidia-cufft-cu12 11.4.1.4
nvidia-cufile-cu12 1.14.1.1
nvidia-curand-cu12 10.3.10.19
nvidia-cusolver-cu12 11.7.5.82
nvidia-cusparse-cu12 12.5.10.65
nvidia-cusparselt-cu12 0.7.1
nvidia-nccl-cu12 2.27.5
nvidia-nvjitlink-cu12 12.9.86
nvidia-nvshmem-cu12 3.3.9
nvidia-nvtx-cu12 12.9.79
packaging 25.0
pillow 11.2.1
pip 23.0.1
pytorch-triton 3.4.0+gitae848267
PyYAML 6.0.2
regex 2024.11.6
requests 2.32.4
safetensors 0.5.3
setuptools 66.1.1
sympy 1.14.0
torch 2.9.0.dev20250716+cu129
torchaudio 2.8.0.dev20250716+cu129
torchvision 0.24.0.dev20250716+cu129
tqdm 4.67.1
triton 3.3.1
typing_extensions 4.14.1
urllib3 2.5.0
zipp 3.23.0
```
Don't hesitate to tell me any other info you might need.
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11945 | open | [
"bug"
] | 2025-07-17T03:16:02Z | 2025-08-02T13:48:05Z | 1 | MxtAppz |
huggingface/course | 1,009 | How Transformers solve tasks - ASR section refers to task using Whisper but task actually uses Wav2Vec2 | The [Automatic speech recognition](https://huggingface.co/learn/llm-course/chapter1/5?fw=pt#automatic-speech-recognition) segment of Section 1 "Transformer Models" > "How 🤗 Transformers solve tasks" refers to
> Check out our complete [automatic speech recognition guide](https://huggingface.co/docs/transformers/tasks/asr) to learn how to finetune Whisper and use it for inference!
However the guide actually uses Wav2Vec2, not Whisper.
This is a dual request:
1. Update the segment in question to refer to Wav2Vec2
2. Update the task to use Whisper | https://github.com/huggingface/course/issues/1009 | open | [] | 2025-07-16T23:25:55Z | 2025-07-16T23:25:55Z | null | renet10 |
huggingface/diffusers | 11,930 | how to run convert_cosmos_to_diffusers.py correctly? | ### Describe the bug
Hi. I have tried to convert the cosmos-transfer1 base model to diffusers using the `convert_cosmos_to_diffusers.py` script with the options `--transformer_type Cosmos-1.0-Diffusion-7B-Video2World --vae_type CV8x8x8-1.0 --transformer_ckpt_path ../fsdp_edge_v1/iter_000016000_ema_model_only.pt --output_path ./convert_to_diffusers`,
but I got this error:
```
Traceback (most recent call last):
File "/home1/jovyan/workspace/cosmos-transfer1/diffusers/../convert_cosmos_to_diffusers.py", line 485, in <module>
transformer = convert_transformer(args.transformer_type, args.transformer_ckpt_path, weights_only)
File "/home1/jovyan/workspace/cosmos-transfer1/diffusers/../convert_cosmos_to_diffusers.py", line 358, in convert_transformer
transformer.load_state_dict(original_state_dict, strict=True, assign=True)
File "/opt/conda/envs/cosmos-transfer1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2581, in load_state_dict
raise RuntimeError(
RuntimeError: Error(s) in loading state_dict for CosmosTransformer3DModel:
Missing key(s) in state_dict: "transformer_blocks.3.norm1.linear_1.weight", "transformer_blocks.3.norm1.linear_2.weight", "transformer_blocks.3.attn1.norm_q.weight", "transformer_blocks.3.attn1.norm_k.weight", "transformer_blocks.3.attn1.to_q.weight", "transformer_blocks.3.attn1.to_k.weight", "transformer_blocks.3.attn1.to_v.weight", "transformer_blocks.3.attn1.to_out.0.weight", "transformer_blocks.3.norm2.linear_1.weight", "transformer_blocks.3.norm2.linear_2.weight", "transformer_blocks.3.attn2.norm_q.weight", "transformer_blocks.3.attn2.norm_k.weight", "transformer_blocks.3.attn2.to_q.weight", "transformer_blocks.3.attn2.to_k.weight", "transformer_blocks.3.attn2.to_v.weight", "transformer_blocks.3.attn2.to_out.0.weight", "transformer_blocks.3.norm3.linear_1.weight", "transformer_blocks.3.norm3.linear_2.weight", "transformer_blocks.3.ff.net.0.proj.weight", "transformer_blocks.3.ff.net.2.weight", "transformer_blocks.4.norm1.linear_1.weight", "transformer_blocks.4.norm1.linear_2.weight", "transformer_blocks.4.attn1.norm_q.weight", "transformer_blocks.4.attn1.norm_k.weight", "transformer_blocks.4.attn1.to_q.weight", "transformer_blocks.4.attn1.to_k.weight", "transformer_blocks.4.attn1.to_v.weight", "transformer_blocks.4.attn1.to_out.0.weight", "transformer_blocks.4.norm2.linear_1.weight", "transformer_blocks.4.norm2.linear_2.weight", "transformer_blocks.4.attn2.norm_q.weight", "transformer_blocks.4.attn2.norm_k.weight", "transformer_blocks.4.attn2.to_q.weight", "transformer_blocks.4.attn2.to_k.weight", "transformer_blocks.4.attn2.to_v.weight", "transformer_blocks.4.attn2.to_out.0.weight", "transformer_blocks.4.norm3.linear_1.weight", "transformer_blocks.4.norm3.linear_2.weight", "transformer_blocks.4.ff.net.0.proj.weight", "transformer_blocks.4.ff.net.2.weight", "transformer_blocks.5.norm1.linear_1.weight", "transformer_blocks.5.norm1.linear_2.weight", "transformer_blocks.5.attn1.norm_q.weight", "transformer_blocks.5.attn1.norm_k.weight", 
"transformer_blocks.5.attn1.to_q.weight", "transformer_blocks.5.attn1.to_k.weight", "transformer_blocks.5.attn1.to_v.weight", "transformer_blocks.5.attn1.to_out.0.weight", "transformer_blocks.5.norm2.linear_1.weight", "transformer_blocks.5.norm2.linear_2.weight", "transformer_blocks.5.attn2.norm_q.weight", "transformer_blocks.5.attn2.norm_k.weight", "transformer_blocks.5.attn2.to_q.weight", "transformer_blocks.5.attn2.to_k.weight", "transformer_blocks.5.attn2.to_v.weight", "transformer_blocks.5.attn2.to_out.0.weight", "transformer_blocks.5.norm3.linear_1.weight", "transformer_blocks.5.norm3.linear_2.weight", "transformer_blocks.5.ff.net.0.proj.weight", "transformer_blocks.5.ff.net.2.weight", "transformer_blocks.6.norm1.linear_1.weight", "transformer_blocks.6.norm1.linear_2.weight", "transformer_blocks.6.attn1.norm_q.weight", "transformer_blocks.6.attn1.norm_k.weight", "transformer_blocks.6.attn1.to_q.weight", "transformer_blocks.6.attn1.to_k.weight", "transformer_blocks.6.attn1.to_v.weight", "transformer_blocks.6.attn1.to_out.0.weight", "transformer_blocks.6.norm2.linear_1.weight", "transformer_blocks.6.norm2.linear_2.weight", "transformer_blocks.6.attn2.norm_q.weight", "transformer_blocks.6.attn2.norm_k.weight", "transformer_blocks.6.attn2.to_q.weight", "transformer_blocks.6.attn2.to_k.weight", "transformer_blocks.6.attn2.to_v.weight", "transformer_blocks.6.attn2.to_out.0.weight", "transformer_blocks.6.norm3.linear_1.weight", "transformer_blocks.6.norm3.linear_2.weight", "transformer_blocks.6.ff.net.0.proj.weight", "transformer_blocks.6.ff.net.2.weight", "transformer_blocks.7.norm1.linear_1.weight", "transformer_blocks.7.norm1.linear_2.weight", "transformer_blocks.7.attn1.norm_q.weight", "transformer_blocks.7.attn1.norm_k.weight", "transformer_blocks.7.attn1.to_q.weight", "transformer_blocks.7.attn1.to_k.weight", "transformer_blocks.7.attn1.to_v.weight", "transformer_blocks.7.attn1.to_out.0.weight", "transformer_blocks.7.norm2.linear | 
https://github.com/huggingface/diffusers/issues/11930 | open | [
"bug"
] | 2025-07-15T16:20:09Z | 2025-07-15T16:24:47Z | null | dedoogong |
huggingface/transformers | 39,426 | object detection: matching outputs.last_hidden_state with results | ### Feature request
it seems to me that this would be possible with a small modification to the function post_process_object_detection
with
```
for score, label, box, index in zip(scores, labels, boxes, indexes):
    results.append(
        {
            "scores": score[score > threshold],
            "labels": label[score > threshold],
            "boxes": box[score > threshold],
            "indexes": index[score > threshold],
        }
    )
```
and then
`outputs.last_hidden_state[0][results[0]['indexes']] `
gives me the desired vector features
Am I right, or is there a better way to obtain this matching?
Thanks for your help
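To sanity-check the idea, here is a toy numpy mock of the proposed filtering (the arrays are made up stand-ins for real detector outputs, not the actual `post_process_object_detection` code):

```python
import numpy as np

threshold = 0.5
scores = np.array([0.9, 0.2, 0.7, 0.1])            # one max class score per query
hidden = np.arange(12, dtype=float).reshape(4, 3)  # mock last_hidden_state[0]

indexes = np.arange(len(scores))   # original query indices
kept = indexes[scores > threshold] # survives the same mask applied to boxes/labels
features = hidden[kept]            # feature vectors aligned with the kept detections

print(kept.tolist())   # [0, 2]
print(features.shape)  # (2, 3)
```

Because `kept` is filtered by the same mask as the scores, labels, and boxes, indexing the hidden states with it keeps everything aligned.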
### Motivation
I would like to use outputs.last_hidden_state as features for auxiliary tasks. So I need to know the label and the bounding box associated to one given vector of outputs.last_hidden_state
### Your contribution
I am not a top coder and do not know how to submit a PR | https://github.com/huggingface/transformers/issues/39426 | open | [
"Feature request"
] | 2025-07-15T13:34:08Z | 2025-07-22T11:08:23Z | 5 | fenaux |
huggingface/peft | 2,647 | How can I merge the original model weights with LoRA weights? | I'm currently fine-tuning Qwen2.5_VL. Specifically, I used PEFT for LoRA fine-tuning on the linear layers of the LLM part. Meanwhile, I performed regular fine-tuning on other components like visual.merger and embed_tokens (with param.requires_grad set to True). The generated files are as follows:
<img width="946" height="691" alt="Image" src="https://github.com/user-attachments/assets/b863a12f-956b-4797-bbfa-769518e73c33" />
I exported pytorch_model.bin using zero_to_fp32.py. When I printed the weight keys of the pytorch_model.bin file, I noticed that the original weights and LoRA weights weren't merged. Here's an example:
```
base_model.model.model.language_model.layers.0.self_attn.q_proj.base_layer.weight: shape=(2048, 2048), dtype=torch.bfloat16
base_model.model.model.language_model.layers.0.self_attn.q_proj.base_layer.bias: shape=(2048,), dtype=torch.bfloat16
base_model.model.model.language_model.layers.0.self_attn.q_proj.lora_A.default.weight: shape=(8, 2048), dtype=torch.bfloat16
base_model.model.model.language_model.layers.0.self_attn.q_proj.lora_B.default.weight: shape=(2048, 8), dtype=torch.bfloat16
```
Could you tell me how to merge them? If I use
`model = model.merge_and_unload()`
I need the base_model. However, I no longer have the original base_model, and the original Qwen_2.5_VL model isn't suitable because apart from LoRA fine-tuning the linear layers, I also fine-tuned visual.merger and embed_tokens.
How can I solve this problem? Thank you!
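For context (and as a possible workaround), the fold that `merge_and_unload()` performs is just `W_merged = W + (lora_alpha / r) * (B @ A)` per adapted layer (assuming the default scaling; rsLoRA uses `lora_alpha / sqrt(r)` instead), so the `base_layer` and `lora_A`/`lora_B` tensors in your `pytorch_model.bin` can in principle be merged by hand from the state dict. A toy numpy sketch of that identity, with illustrative shapes and scaling (your actual `r` and `lora_alpha` come from the adapter config):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, lora_alpha = 64, 8, 16  # illustrative sizes, not read from the checkpoint

W = rng.normal(size=(d, d))   # ...q_proj.base_layer.weight
A = rng.normal(size=(r, d))   # ...q_proj.lora_A.default.weight
B = rng.normal(size=(d, r))   # ...q_proj.lora_B.default.weight

# what merge_and_unload folds into the base weight
W_merged = W + (lora_alpha / r) * (B @ A)

x = rng.normal(size=(d,))
# the merged layer reproduces base path + scaled LoRA path for any input
assert np.allclose(W_merged @ x, W @ x + (lora_alpha / r) * (B @ (A @ x)))
```

After folding every `...base_layer.weight` with its `lora_A`/`lora_B` pair (and renaming keys to drop the `base_model.model.` prefix), the regularly fine-tuned `visual.merger` and `embed_tokens` weights can stay as-is.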
| https://github.com/huggingface/peft/issues/2647 | closed | [] | 2025-07-15T11:40:33Z | 2025-08-23T15:03:44Z | 4 | guoguo1314 |
huggingface/transformers | 39,421 | Speculative Decoding (do_sample=False) gets different outputs | > @transcend-0 hey!
>
> The issue was solved in [#30068](https://github.com/huggingface/transformers/pull/30068). You can install transformers from `main` with the following line for the correct generation with assisted decoding:
>
> `!pip install --upgrade git+https://github.com/huggingface/transformers.git`

_Originally posted by @zucchini-nlp in [#30608](https://github.com/huggingface/transformers/issues/30608#issuecomment-2089846816)_
### **System Info**
Python 3.10.11
transformers 4.49.0
torch 2.6.0+cu124
### **Same Reproduction**
Target_Model = Qwen2.5-32B-Instruct
Draft_Model = Qwen2.5-7B-Instruct
`question = "Dienes are organic compounds with two adjacent double bonds in their structure, and they exhibit unique reactivity due to their conjugated pi-electron system. They play a significant role in organic chemistry and are involved in various chemical reactions and natural processes.\nAmong the given options which one is the possible reactant (A) for the given reaction also mention the correct sequence of the dienes according to their reactivity ( most reactive to least reactive) B.\nCyclohexene + A ---> 8,8-diiodobicyclo[4.2.0]octan-7-one\n(B) 1. 2,3-dimethylbuta-1,3-diene, 2. (2E,4E)-hexa-2,4-diene, 3. (2E,4E)-hexa-2,4-diene, 4. (2Z,4Z)-hexa-2,4-diene\n\n\nA. A = 2,2-diiodoethen-1-one, B = 3, 1, 2, 4\nB. A = 2,2-diiodoethen-1-one, B = 4, 2, 1, 3\nC. A = 4,4-diiodocyclobut-2-en-1-one, B = 3, 1, 2, 4\nD. A = 4,4-diiodocyclobut-2-en-1-one, B = 4, 2, 1, 3\n\n"`
`prompt = '<|im_start|>user' + question + 'Please reason step-by-step and put your choice letter without any other text with \\boxed{} in the end.'`
`['userDienes are organic compounds with two adjacent double bonds in their structure, and they exhibit unique reactivity due to their conjugated pi-electron system. They play a significant role in organic chemistry and are involved in various chemical reactions and natural processes.\nAmong the given options which one is the possible reactant (A) for the given reaction also mention the correct sequence of the dienes according to their reactivity ( most reactive to least reactive) B.\nCyclohexene + A ---> 8,8-diiodobicyclo[4.2.0]octan-7-one\n(B) 1. 2,3-dimethylbuta-1,3-diene, 2. (2E,4E)-hexa-2,4-diene, 3. (2E,4E)-hexa-2,4-diene, 4. (2Z,4Z)-hexa-2,4-diene\n\n\nA. A = 2,2-diiodoethen-1-one, B = 3, 1, 2, 4\nB. A = 2,2-diiodoethen-1-one, B = 4, 2, 1, 3\nC. A = 4,4-diiodocyclobut-2-en-1-one, B = 3, 1, 2, 4\nD. A = 4,4-diiodocyclobut-2-en-1-one, B = 4, 2, 1, 3\n\nPlease reason step-by-step and put your choice letter without any other text with \\boxed{} in the end. To solve this problem, we need to identify the reactant \\( A \\) that can react with cyclohexene to form 8,8-diiodobicyclo[4.2.0]octan-7-one. We also need to determine the correct sequence of the dienes according to their reactivity from most reactive to least reactive.\n\n### Step-by-Step Reasoning:\n\n1. **Identify the Product:**\n - The product is 8,8-diiodobicyclo[4.2.0]octan-7-one. This suggests that the reactant \\( A \\) must be a compound that can undergo a Diels-Alder reaction with cyclohexene to form the bicyclic structure and then iodination at the appropriate positions.\n\n2. **Reactant Identification:**\n - The reactant \\( A \\) should be a dienophile (a compound with a double bond that can participate in a Diels-Alder reaction). Among the given options, the possible candidates are:\n - 2,2-diiodoethen-1-one\n - 4,4-diiodocyclobut-2-en-1-one\n\n3. **Diels-Alder Reaction:**\n - Cyclohexene is a diene, and it will react with a dienophile to form a bicyclic structure. 
The dienophile should have a double bond that can react with the diene to form the desired product.\n - 2,2-diiodoethen-1-one has a double bond and iodine substituents, making it a suitable dienophile.\n - 4,4-diiodocyclobut-2-en-1-one also has a double bond but is more complex and less likely to form the desired product directly.\n\n4. **Sequence of Dienes According to Reactivity:**\n - The reactivity of dienes depends on the stability of the conjugated pi-electron system.\n - Generally, the order of reactivity from most reactive to least reactive is:\n 1. (2E,4E)-hexa-2,4-diene (most stable and reactive)\n 2. (2E,4E)-hexa-2,4-diene (same as above)\n 3. 2,3-dimethylbuta-1,3-diene (less stable due to steric hindrance)\n 4. (2Z,4Z)-hexa-2,4-diene (least stable due to cis configuration)\n\n5. **Matching Options:**\n - Option A: \\( A = 2,2 \\)-diiodoethen-1-one, B = 3, 1, 2, 4\n - Option B: \\( A = 2,2 \\)-diiodoethen-1-one, B = 4, 2, 1, 3\n - Option C: \\( A = 4,4 \\)-diiodocyclobut-2-en-1-one, B = 3, 1, 2, 4\n - Option D: \\( A = 4,4 \\)-diiodocyclobut-2-en-1-one, B = 4, 2, 1, 3\n\nGiven the correct sequence of dienes and the suitable dienophile, the correct option is:\n\n\\boxed{A}']`
- targetDecoding - Running time: 41.82 s`
`['userDienes are organic compounds with two adjacent double bonds in thei | https://github.com/huggingface/transformers/issues/39421 | closed | [] | 2025-07-15T11:36:31Z | 2025-07-19T03:11:04Z | 13 | nighty8 |
huggingface/lerobot | 1,508 | so101_dualarm_triplecam config to evaluate ACT policy? | I recently fine-tuned an ACT policy where my data was from 3 cameras (1 overhead + 2 wrist) and two so101's. Then I tried to evaluate it but noticed there is currently a config file missing to support this. Does or will this support exist soon? | https://github.com/huggingface/lerobot/issues/1508 | open | [
"question",
"robots"
] | 2025-07-15T03:44:32Z | 2025-08-12T09:30:41Z | null | sebastiandavidlee |
huggingface/transformers | 39,410 | FP8 training support for Model Parallel / Tensor Parallel (MP/TP) | ### Feature request
I receive the message "ValueError: The model you are trying to fine-tune is quantized with QuantizationMethod.FP8 but that quantization method do not support training. Please open an issue on GitHub: https://github.com/huggingface/transformers to request the support for training support for QuantizationMethod.FP8" when trying to fine-tune an FP8 model.
I have learned from the documentation that FP8 models can be trained with DDP, ZeRO, or FSDP. Is there a way to do it with MP/TP for huge FP8 models?
### Motivation
Enable finetuning huge fp8 models, like Qwen/Qwen3-235B-A22B-FP8
### Your contribution
I'm afraid it's too tough for me, but I'll do whatever I can if you need. | https://github.com/huggingface/transformers/issues/39410 | open | [
"Feature request"
] | 2025-07-15T02:13:05Z | 2025-07-15T13:30:27Z | 2 | edgeinfinity1 |
huggingface/transformers | 39,409 | TypeError: couldn't find storage object Float8_e4m3fnStorage - which version is needed for this? | I have tested many versions but can't find one that doesn't give this error:
```
!pip install bitsandbytes==0.45.0 --upgrade
!pip install insightface --upgrade
!pip install huggingface_hub==0.25.1 hf_transfer diffusers==0.31.0 transformers==4.36.0
!pip uninstall xformers triton --yes
!pip install torch==2.2.0+cu121 torchvision --index-url https://download.pytorch.org/whl/cu121
!pip install xformers==0.0.24 --index-url https://download.pytorch.org/whl/cu121
```
```
File "/kaggle/temp/InstantID/gradio_demo/web-ui-multicontrolnet.py", line 975, in generate_image
reload_pipe(model_input, model_dropdown, scheduler, adapter_strength_ratio, enable_LCM, depth_type, lora_model_dropdown, lora_scale,test_all_loras,single_lora)
File "/kaggle/temp/InstantID/gradio_demo/web-ui-multicontrolnet.py", line 654, in reload_pipe
pipe = load_model(_pretrained_model_folder, model_to_load)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/kaggle/temp/InstantID/gradio_demo/web-ui-multicontrolnet.py", line 528, in load_model
pipeline = StableDiffusionPipeline.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/diffusers/pipelines/pipeline_utils.py", line 896, in from_pretrained
loaded_sub_model = load_sub_model(
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/diffusers/pipelines/pipeline_loading_utils.py", line 704, in load_sub_model
loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/transformers/modeling_utils.py", line 4027, in from_pretrained
dtype_orig = cls._set_default_torch_dtype(torch_dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/transformers/modeling_utils.py", line 1584, in _set_default_torch_dtype
torch.set_default_dtype(dtype)
File "/usr/local/lib/python3.11/dist-packages/torch/__init__.py", line 1009, in set_default_dtype
_C._set_default_dtype(d)
TypeError: couldn't find storage object Float8_e4m3fnStorage
```
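For what it's worth, a quick check of whether the installed torch build even exposes the float8 dtype (my assumption from the traceback: the checkpoint's config sets a float8 `torch_dtype`, and the pinned transformers 4.36 then tries to make it the process-wide default dtype, which fails):

```python
import torch

# float8 dtypes (and their storage classes) only exist in newer PyTorch builds;
# if the second line prints False, torch.set_default_dtype with float8_e4m3fn
# cannot work regardless of the transformers version
print(torch.__version__)
print(hasattr(torch, "float8_e4m3fn"))
```

If the dtype is present but loading still fails, overriding `torch_dtype` in `from_pretrained` (e.g. to `torch.float16`) may be a way around the default-dtype call, though I haven't verified that against these exact pins.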
| https://github.com/huggingface/transformers/issues/39409 | closed | [
"bug"
] | 2025-07-15T01:51:08Z | 2025-08-02T12:06:59Z | 1 | FurkanGozukara |
huggingface/datasets | 7,682 | Fail to cast Audio feature for numpy arrays in datasets 4.0.0 | ### Describe the bug
Casting features that include `Audio` for numpy arrays, done here with `ds.map(gen_sine, features=features)`, fails in version 4.0.0 but not in version 3.6.0.
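The `AttributeError: 'list' object has no attribute 'T'` further down suggests the array arrives at `Audio.encode_example` as a plain Python list rather than a numpy array. A stand-alone illustration of that difference (my reading of the traceback, not datasets code):

```python
import numpy as np

sr = 16000
# a one-second 440 Hz sine wave, like the gen_sine output in the script below
arr = np.sin(2 * np.pi * 440.0 * np.linspace(0.0, 1.0, sr))

print(hasattr(arr, "T"))           # True: numpy arrays can be transposed
print(hasattr(arr.tolist(), "T"))  # False: a plain list cannot, hence the AttributeError
```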
### Steps to reproduce the bug
The following `uv script` should be able to reproduce the bug in version 4.0.0
and pass in version 3.6.0 on a macOS Sequoia 15.5
```python
# /// script
# requires-python = ">=3.13"
# dependencies = [
# "datasets[audio]==4.0.0",
# "librosa>=0.11.0",
# ]
# ///
# NAME
# create_audio_dataset.py - create an audio dataset of sine waves
#
# SYNOPSIS
# uv run create_audio_dataset.py
#
# DESCRIPTION
# Create an audio dataset using the Hugging Face [datasets] library.
# Illustrates how to create synthetic audio datasets using the [map]
# datasets function.
#
# The strategy is to first create a dataset with the input to the
# generation function, then execute the map function that generates
# the result, and finally cast the final features.
#
# BUG
# Casting features with Audio for numpy arrays -
# done here with `ds.map(gen_sine, features=features)` fails
# in version 4.0.0 but not in version 3.6.0
#
# This happens both in cases where --extra audio is provided and where is not.
# When audio is not provided i've installed the latest compatible version
# of soundfile.
#
# The error when soundfile is installed but the audio --extra is not
# indicates that the array values do not have the `.T` property,
# whilst also indicating that the value is a list instead of a numpy array.
#
# Last lines of error report when for datasets + soundfile case
# ...
#
# File "/Users/luasantilli/.cache/uv/archive-v0/tc_5IhQe7Zpw8ZXgQWpnl/lib/python3.13/site-packages/datasets/features/audio.py", line 239, in cast_storage
# storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()])
# ~~~~~~~~~~~~~~~~~~~~~~^^^
# File "/Users/luasantilli/.cache/uv/archive-v0/tc_5IhQe7Zpw8ZXgQWpnl/lib/python3.13/site-packages/datasets/features/audio.py", line 122, in encode_example
# sf.write(buffer, value["array"].T, value["sampling_rate"], format="wav")
# ^^^^^^^^^^^^^^^^
# AttributeError: 'list' object has no attribute 'T'
# ...
#
# For the case of datasets[audio] without explicit adding soundfile I get an FFmpeg
# error.
#
# Last lines of error report:
#
# ...
# RuntimeError: Could not load libtorchcodec. Likely causes:
# 1. FFmpeg is not properly installed in your environment. We support
# versions 4, 5, 6 and 7.
# 2. The PyTorch version (2.7.1) is not compatible with
# this version of TorchCodec. Refer to the version compatibility
# table:
# https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec.
# 3. Another runtime dependency; see exceptions below.
# The following exceptions were raised as we tried to load libtorchcodec:
#
# [start of libtorchcodec loading traceback]
# FFmpeg version 7: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder7.dylib, 0x0006): Library not loaded: @rpath/libavutil.59.dylib
# Referenced from: <6DB21246-F28A-31A6-910A-D8F3355D1064> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder7.dylib
# Reason: no LC_RPATH's found
# FFmpeg version 6: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder6.dylib, 0x0006): Library not loaded: @rpath/libavutil.58.dylib
# Referenced from: <BD3B44FC-E14B-3ABF-800F-BB54B6CCA3B1> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder6.dylib
# Reason: no LC_RPATH's found
# FFmpeg version 5: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder5.dylib, 0x0006): Library not loaded: @rpath/libavutil.57.dylib
# Referenced from: <F06EBF8A-238C-3A96-BFBB-B34E0BBDABF0> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder5.dylib
# Reason: no LC_RPATH's found
# FFmpeg version 4: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder4.dylib, 0x0006): Library not loaded: @rpath/libavutil.56.dylib
# Referenced from: <6E59F017-C703-3AF6-A271-6277DD5F8170> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder4.dylib
# Reason: no LC_RPATH's found
# ...
#
# This is strange because the the same error does not happen when using version | https://github.com/huggingface/datasets/issues/7682 | closed | [] | 2025-07-14T18:41:02Z | 2025-07-15T12:10:39Z | 2 | luatil-cloud |
huggingface/lerobot | 1,507 | [PI0] Evaluation result on the metaworld | Has anyone tried training pi0 on the Metaworld benchmark? My evaluation results are relatively low 30~%. | https://github.com/huggingface/lerobot/issues/1507 | closed | [
"bug",
"question",
"policies",
"simulation"
] | 2025-07-14T14:56:38Z | 2025-10-08T08:47:31Z | null | chenkang455 |
huggingface/transformers | 39,401 | Qwen3 tokenizer wrong offset_mapping | ### System Info
transformers 4.53.2, Ubuntu 22.04.4, python 3.11.13
### Who can help?
@ArthurZucker and @itazap There must be a problem with the `offset_mapping` of the Qwen3 tokenizer. The starting point in the text for each token, except the first and the last, is one position behind. I compared it with BERT's tokenizer, which produces the expected output:
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer, BertTokenizerFast

sample_text = 'A girl is styling her hair.'
bert_tokenizer = BertTokenizerFast.from_pretrained('google-bert/bert-base-cased')
bert_encoding = bert_tokenizer(
text=sample_text, add_special_tokens=False, return_offsets_mapping=True
)
print(bert_encoding['offset_mapping'])
qwen_tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Embedding-0.6B')
qwen_encoding = qwen_tokenizer(
text=sample_text, add_special_tokens=False, return_offsets_mapping=True
)
print(qwen_encoding['offset_mapping'])
```
### Expected behavior
[(0, 1), (2, 6), (7, 9), (10, 17), (18, 21), (22, 26), (26, 27)]
[(0, 1), (1, 6), (6, 9), (9, 17), (17, 21), (21, 26), (26, 27)] | https://github.com/huggingface/transformers/issues/39401 | closed | [
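A tokenizer-free way to see what each mapping points at is to slice the sample text with the reported spans. It suggests (my interpretation) that the Qwen3 spans are not shifted by accident: they include the leading space, which byte-level BPE tokenizers treat as part of the token itself.

```python
text = 'A girl is styling her hair.'
bert_offsets = [(0, 1), (2, 6), (7, 9), (10, 17), (18, 21), (22, 26), (26, 27)]
qwen_offsets = [(0, 1), (1, 6), (6, 9), (9, 17), (17, 21), (21, 26), (26, 27)]

# BERT spans exclude whitespace; byte-level BPE spans fold it into the token
print([text[s:e] for s, e in bert_offsets])  # ['A', 'girl', 'is', 'styling', 'her', 'hair', '.']
print([text[s:e] for s, e in qwen_offsets])  # ['A', ' girl', ' is', ' styling', ' her', ' hair', '.']
```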
"bug"
] | 2025-07-14T14:21:08Z | 2025-07-16T09:59:35Z | 4 | contribcode |
huggingface/lerobot | 1,506 | episode: None | When I run "python -m lerobot.scripts.train --dataset.root=./lerobot_datasets/my_robot_dataset/ --output_dir=./lerobot_datasets/outputs/ --policy.type=pi0 --dataset.repo_id=lerobot/tape --policy.push_to_hub=false", I got
```
'dataset': {'episodes': None,
            'image_transforms': {'enable': False...
}
```
Is this right? | https://github.com/huggingface/lerobot/issues/1506 | open | [
"question",
"policies"
] | 2025-07-14T13:29:07Z | 2025-08-12T09:31:16Z | null | LogSSim |
huggingface/finetrainers | 420 | How to fine-tune Wan 2.1 with Context Parallelism? | I am trying to fine-tune the Wan 2.1 model and would like to leverage the Context Parallelism (CP) feature to manage memory and scale the training. I saw in the main README that `CP support` is listed as a key feature.
I have looked through the `examples/training` directory and the documentation, but I couldn't find a specific example or launch script demonstrating how to fine-tune the Wan model with Context Parallelism enabled.
Could you please provide some guidance or a minimal example on how to properly configure a training job for **Wan 2.1 with Context Parallelism**? | https://github.com/huggingface/finetrainers/issues/420 | open | [] | 2025-07-14T06:55:39Z | 2025-07-15T05:09:45Z | null | vviper25 |
huggingface/lerobot | 1,503 | LeRobot So100 and Groot N1.5 Model Multi-Robot Deployment Feasibility Inquiry | Hello, I am conducting various tests using LeRobot's So100 (robot arm) with Groot N1.5 for training.
I have some questions to ask.
**Main Question**
Is it possible to simultaneously apply a model trained with Groot N1.5 base on one robot to multiple robots of the same model?
**Question Background (Actual Experience)**
I had a model that was trained with Groot 1.5 base using data collected from So100. However, when one robot motor failed and was replaced, I had to recalibrate the entire system.
After applying the previously used model for inference, the robot did not operate properly.
I suspect this might be due to the basic position changing during the calibration process.
**Core Question**
Following this logic, does each robot of the same model require an individual model tailored to its specific calibration?
This question also relates to whether a single unified model can be used for inference and operation when deploying 100 robot arms in a factory setting.
I would appreciate your response. | https://github.com/huggingface/lerobot/issues/1503 | open | [
"enhancement",
"question",
"policies",
"dataset"
] | 2025-07-14T05:55:44Z | 2025-08-12T09:31:35Z | null | devedgar |
huggingface/lerobot | 1,497 | ValueError: 'policy.repo_id' argument missing. Please specify it to push the model to the hub. | ### System Info
```Shell
lerobot commit version:
https://github.com/huggingface/lerobot/tree/69901b9b6a2300914ca3de0ea14b6fa6e0203bd4
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
(lerobot) robot@robot-Legion-Y9000P-IRX8:~/imitation_learning_lerobot/lerobot$ python lerobot/scripts/train.py \
> --policy.type=act \
> --dataset.repo_id=lerobot/aloha_sim_transfer_cube_human \
> --env.type=aloha \
> --env.task=AlohaTransferCube-v0 \
> --output_dir=outputs/train/act_aloha_transfer
INFO 2025-07-13 12:30:41 ils/utils.py:48 Cuda backend detected, using cuda.
WARNING 2025-07-13 12:30:41 /policies.py:77 Device 'None' is not available. Switching to 'cuda'.
Traceback (most recent call last):
File "/home/robot/imitation_learning_lerobot/lerobot/lerobot/scripts/train.py", line 291, in <module>
train()
File "/home/robot/imitation_learning_lerobot/lerobot/lerobot/configs/parser.py", line 226, in wrapper_inner
response = fn(cfg, *args, **kwargs)
File "/home/robot/imitation_learning_lerobot/lerobot/lerobot/scripts/train.py", line 110, in train
cfg.validate()
File "/home/robot/imitation_learning_lerobot/lerobot/lerobot/configs/train.py", line 120, in validate
raise ValueError(
ValueError: 'policy.repo_id' argument missing. Please specify it to push the model to the hub.
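If I read the validation error correctly, the check fires because pushing the policy to the hub is enabled by default. As a workaround (an assumption on my side, not a confirmed fix), either disable the push or supply a repo id:

```shell
# option 1: skip the hub push, so no repo_id is needed
python lerobot/scripts/train.py \
    --policy.type=act \
    --dataset.repo_id=lerobot/aloha_sim_transfer_cube_human \
    --env.type=aloha \
    --env.task=AlohaTransferCube-v0 \
    --output_dir=outputs/train/act_aloha_transfer \
    --policy.push_to_hub=false

# option 2: keep the push and name a hub repo for the trained policy
#   add --policy.repo_id=<your-hf-username>/act_aloha_transfer instead
```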
### Expected behavior
expected it can work | https://github.com/huggingface/lerobot/issues/1497 | open | [
"question",
"policies",
"configuration"
] | 2025-07-13T04:33:14Z | 2025-08-12T09:32:36Z | null | dbdxnuliba |
huggingface/trl | 3,730 | How to design stable reward functions for open-ended text generation tasks in GRPO? | I'm using GRPO for a text generation task where there's no single correct answer. I currently compute the reward using cosine similarity between the model output and a reference response. However, during training (around 400 steps), the reward values are quite unstable and fluctuate significantly.
I'm wondering:
Is cosine similarity a reasonable choice for reward in open-ended tasks?
Are there better practices to stabilize the reward or design it more effectively in such scenarios?
Should I consider switching to a learnable reward model (e.g., contrastive learning)?
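For reference, the reward I'm computing is essentially the following, here with a toy bag-of-words embedding standing in for the real sentence encoder so the snippet is self-contained (the `reference` argument is assumed to be a dataset column forwarded to the reward function):

```python
import math
from collections import Counter

def embed(text):
    # toy bag-of-words "embedding"; a real setup would use a sentence encoder
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def cosine_reward(completions, reference, **kwargs):
    # one scalar reward per completion, shaped like a GRPO reward function
    return [cosine(embed(c), embed(r)) for c, r in zip(completions, reference)]

print(cosine_reward(["the cat sat"], ["the cat sat on the mat"]))
```

Identical texts score close to 1.0 and disjoint texts score 0.0, but the score moves a lot for paraphrases, which is part of why I suspect the instability.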
Any general advice on reward design in non-deterministic generation tasks would be greatly appreciated. Thanks! | https://github.com/huggingface/trl/issues/3730 | open | [
"❓ question",
"🏋 Reward",
"🏋 GRPO"
] | 2025-07-12T18:39:37Z | 2025-07-12T18:40:05Z | null | Jax922 |
huggingface/diffusers | 11,915 | Create modular pipeline from existing pipeline | the new concept of modular pipelines added via #9672 is a very flexible way of creating custom pipelines
and one of the best early use-cases is the new concept of modular guiders added via #11311
however, this would require a complete rewrite of existing user apps/codebases to use the new concepts
and would likely slow down adoption significantly (if not block it outright for a long time)
ask here is to provide a way to use an existing pipeline to instantiate a modular pipeline,
very similar to how different standard diffuser pipelines can be instantiated
from a single pipeline class using `from_pipe` method
example of desired workflow:
```py
import torch
import diffusers
# load pipeline using any normal method
# such as DiffusionPipeline, AutoPipelineForText2Image, StableDiffusionPipeline, etc.
pipe = diffusers.DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
torch_dtype=torch.bfloat16,
)
# create modular pipeline from loaded pipeline
modular = diffusers.ModularPipeline.from_pipe(pipe)
# create guider and activate it
cfg = diffusers.ClassifierFreeGuidance(guidance_scale=5.0, guidance_rescale=0.0, start=0.0, stop=1.0)
modular.update_states(guider=cfg)
output = modular(
prompt='astronaut in a diner',
height=1024, width=1024)
```
cc: @yiyixuxu @a-r-r-o-w @sayakpaul | https://github.com/huggingface/diffusers/issues/11915 | closed | [] | 2025-07-12T16:08:30Z | 2025-08-28T08:18:08Z | 6 | vladmandic |