| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers | 12,079 | API Suggestion: Expose Methods to Convert to Sample Prediction in Schedulers | **What API design would you like to have changed or added to the library? Why?**
My proposal is for schedulers to expose `convert_to_sample_prediction` and `convert_to_prediction_type` methods, which would do the following:
1. `convert_to_sample_prediction`: Converts from a given `prediction_type` to `sample_predicti... | https://github.com/huggingface/diffusers/issues/12079 | open | [] | 2025-08-06T02:24:46Z | 2025-08-06T02:24:46Z | 0 | dg845 |
huggingface/candle | 3,047 | Can the safetensor files from OpenAI's new gpt-oss-20b work with any existing setup? | Is the new gpt-oss-20b a totally different architecture or can I use an existing candle setup, swap out the files and start playing around with gpt-oss-20b?
| https://github.com/huggingface/candle/issues/3047 | open | [] | 2025-08-06T01:59:59Z | 2025-08-06T02:01:52Z | 1 | zcourts |
huggingface/diffusers | 12,078 | Problem with provided example validation input in the Flux Control finetuning example | ### Describe the bug
The help page for the Flux control finetuning example, https://github.com/huggingface/diffusers/blob/main/examples/flux-control/README.md, provides a sample validation input, a pose condition image
[<img src="https://huggingface.co/api/resolve-cache/models/Adapter/t2iadapter/3c291e0547a1b17bed9342... | https://github.com/huggingface/diffusers/issues/12078 | open | [
"bug"
] | 2025-08-05T22:29:35Z | 2025-08-07T08:47:45Z | 1 | kzhang2 |
huggingface/lerobot | 1,672 | How to resume training? | My old training settings:
```
# batch_size: 64
steps: 20000
# output_dir: outputs/train
```
In outputs/train/ there are an 020000 folder and a last folder; each has pretrained_model and training_state.
When I want to resume training, I read configs/train.py
so I set
```
resume: true
output_dir: outputs/train/
# or output... | https://github.com/huggingface/lerobot/issues/1672 | closed | [] | 2025-08-05T14:57:32Z | 2025-08-06T03:04:28Z | null | milong26 |
huggingface/transformers | 39,921 | [Gemma3N] Not able to add new special tokens to model/tokenizer due to projection error | ### System Info
```
- transformers==4.54.1
- Platform: Linux-5.15.0-1084-aws-x86_64-with-glibc2.31
- Python version: 3.13
- TRL version: 0.19.1
- Huggingface_hub version: 0.33.4
- Safetensors version: 0.5.3
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch versio... | https://github.com/huggingface/transformers/issues/39921 | open | [
"Usage",
"Good Second Issue",
"bug"
] | 2025-08-05T14:43:37Z | 2025-08-19T19:37:39Z | 14 | debasisdwivedy |
huggingface/transformers | 39,910 | Question: Llama4 weight reshaping | Hi all
I am trying to extract the original Llama4 MoE weights, specifically:
- `experts.w1` (aka `experts.moe_w_in_eD_F`)
- `experts.w3` (aka `experts.moe_w_swiglu_eD_F`)
I need both of these in the shape `[E, D, N]`, where:
- E is the number of experts (16 for Scout)
- D is the embedding dimension (5120)
- N is th... | https://github.com/huggingface/transformers/issues/39910 | closed | [] | 2025-08-05T10:19:25Z | 2025-08-13T09:35:52Z | 0 | gskorokhod |
huggingface/datasets | 7,724 | Can not stepinto load_dataset.py? | I set a breakpoint in "load_dataset.py" and tried to debug my data-loading code, but it does not stop at any breakpoints; so "load_dataset.py" cannot be stepped into?
<!-- Failed to upload "截图 2025-08-05 17-25-18.png" --> | https://github.com/huggingface/datasets/issues/7724 | open | [] | 2025-08-05T09:28:51Z | 2025-08-05T09:28:51Z | 0 | micklexqg |
huggingface/lerobot | 1,670 | How does leroBot address the issue of training heterogeneous datasets? | Specifically, suppose I have a dataset A and dataset B. In dataset A, both the state and action are represented as (x, y, z, gripper), where x, y, and z denote the distances moved along the x, y, and z axes, respectively, and gripper represents the on/off state of the gripper. In dataset B, both the state and action ar... | https://github.com/huggingface/lerobot/issues/1670 | open | [
"question",
"processor"
] | 2025-08-05T08:20:08Z | 2025-08-12T09:01:57Z | null | mahao18cm |
huggingface/lerobot | 1,667 | How many episode to have a good result of SmolVLA | ### System Info
```Shell
Hello, I'm trying to do a simple task like dual-hand picking a banana into a basket using SmolVLA. May I know how many episodes to train for a good result?
Many thanks
Julien
```
### Reproduction
I've used 100 episode for training, looks like the arm can not pick the banana accurately, sometim... | https://github.com/huggingface/lerobot/issues/1667 | closed | [
"question",
"policies"
] | 2025-08-05T05:12:12Z | 2025-10-17T11:27:14Z | null | chejulien |
huggingface/lerobot | 1,666 | Please add multi gpu training support | MultiGPU training currently does not work with lerobot as mentioned here https://github.com/huggingface/lerobot/issues/1377
Please add this support. | https://github.com/huggingface/lerobot/issues/1666 | closed | [
"enhancement",
"question",
"policies"
] | 2025-08-04T18:06:40Z | 2025-10-17T09:53:59Z | null | nahidalam |
huggingface/lerobot | 1,663 | No way to train on subset of features | Currently, when loading a policy from a config.json, the input_features seem to be ignored and re-generated from the dataset provided. However, it may not always be desirable to train on all features, perhaps if I have multiple camera views but I only want to train on one.
I would prefer that config.json features are ... | https://github.com/huggingface/lerobot/issues/1663 | open | [
"question",
"policies",
"processor"
] | 2025-08-04T15:19:35Z | 2025-08-12T09:03:47Z | null | atyshka |
huggingface/diffusers | 12,060 | Is there any DiT block defined in the huggingface/diffusers OR huggingface/transformers project? | **Is your feature request related to a problem? Please describe.**
I want to run some experiments on a DiT-based flow-matching model and need an implementation of the common DiT block, but did not find it in either huggingface/diffusers or huggingface/transformers. Is there any implementation about it with just some ... | https://github.com/huggingface/diffusers/issues/12060 | open | [] | 2025-08-04T09:40:43Z | 2025-08-04T10:19:00Z | 2 | JohnHerry |
huggingface/diffusers | 12,052 | Wan 2.2 with LightX2V offloading tries to multiply tensors from different devices and fails | ### Describe the bug
After @sayakpaul great work in https://github.com/huggingface/diffusers/pull/12040 LightX2V now works. However what doesn't work is adding both a lora and offloading to the transformer_2. I can get away with either (i.e. offload both transformers but add a lora only to transformer and NOT to trans... | https://github.com/huggingface/diffusers/issues/12052 | closed | [
"bug"
] | 2025-08-03T12:43:13Z | 2025-08-11T15:53:41Z | 4 | luke14free |
huggingface/peft | 2,699 | UserWarning: Found missing adapter keys while loading the checkpoint | I have been fine-tuning different LLM models (mainly Llama family) since last year and use peft with lora config all the time with no issues.
Just recently I was fine-tuning the llama 70B on multiple GPU using accelerate then saving the adapter once training is done. (This was always my setup since last year)
Howeve... | https://github.com/huggingface/peft/issues/2699 | closed | [] | 2025-08-02T20:49:31Z | 2025-11-09T15:03:46Z | 41 | manitadayon |
huggingface/diffusers | 12,044 | AttributeError: 'bool' object has no attribute '__module__'. Did you mean: '__mod__'? | I am training the Flux.1-dev model and get this error. I found a suggested fix of downgrading diffusers to version 0.21.0, but then it would conflict with some other libraries. Is there any solution for this?
```
Traceback (most recent call last):
File "/home/quyetnv/t2i/ai-toolkit/run.py", line 120, in <module>
main()
Fi... | https://github.com/huggingface/diffusers/issues/12044 | closed | [] | 2025-08-02T01:37:30Z | 2025-08-21T01:27:19Z | 3 | qngv |
huggingface/optimum | 2,333 | Support for exporting t5gemma-2b-2b-prefixlm-it to onnx | ### Feature request
I’ve tried to export t5gemma-2b-2b-prefixlm-it to onnx using optimum. But it outputs: ValueError: Trying to export a t5gemma model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum... | https://github.com/huggingface/optimum/issues/2333 | closed | [
"Stale"
] | 2025-08-01T16:39:52Z | 2026-01-03T02:51:13Z | 2 | botan-r |
huggingface/transformers | 39,842 | Expected behavior of `compute_result` is hard to expect and inconsistent | In Trainer there exists a parameter `compute_result` passed to `compute_metrics` when `batch_eval_metrics` is set to True.
https://github.com/huggingface/transformers/blob/1e0665a191f73f6b002209c3dfcda478baac6bac/src/transformers/trainer.py#L370-L375
I think there are several problems for `compute_result`,
1. User c... | https://github.com/huggingface/transformers/issues/39842 | closed | [] | 2025-08-01T11:43:28Z | 2025-10-04T08:02:41Z | 3 | MilkClouds |
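The accumulate-then-finalize contract discussed in the issue above can be shown without importing transformers. This is a hypothetical stand-in (the class `BatchedAccuracy` is not a Trainer API): the callable updates running counts while `compute_result=False`, then returns the aggregated metric and resets when `compute_result=True` arrives on the last batch.

```python
class BatchedAccuracy:
    """Hypothetical per-batch metric following the compute_result contract."""

    def __init__(self):
        self.correct = 0
        self.total = 0

    def __call__(self, preds, labels, compute_result: bool):
        # Called once per eval batch; only the final call aggregates.
        self.correct += sum(p == l for p, l in zip(preds, labels))
        self.total += len(labels)
        if compute_result:
            result = {"accuracy": self.correct / self.total}
            self.correct = self.total = 0  # reset for the next evaluation run
            return result
        return None

metric = BatchedAccuracy()
metric([1, 0], [1, 1], compute_result=False)        # batch 1: accumulate only
print(metric([1, 1], [1, 1], compute_result=True))  # {'accuracy': 0.75}
```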
huggingface/transformers | 39,841 | MistralCommonTokenizer does not match PreTrainedTokenizer | ### System Info
on docker
os: ubuntu 24.04
transformers: 4.55.0.dev0
mistral_common: 1.8.3
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own ... | https://github.com/huggingface/transformers/issues/39841 | closed | [
"bug"
] | 2025-08-01T09:16:24Z | 2025-11-23T08:03:33Z | 3 | Fhrozen |
huggingface/transformers | 39,839 | pack_image_features RuntimeError when vision_feature_select_strategy="full" | ### System Info
transformers 4.54.0
### Who can help?
@zucchini-nlp
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproductio... | https://github.com/huggingface/transformers/issues/39839 | closed | [
"bug"
] | 2025-08-01T07:55:40Z | 2025-09-08T08:02:56Z | 2 | llnnnnnn |
huggingface/gsplat.js | 117 | How to generate a Mesh mesh? | I need a scene where Gaussian Splatting and Mesh are mixed, and I don't know if GSPLAT generates Mesh or not. | https://github.com/huggingface/gsplat.js/issues/117 | open | [] | 2025-08-01T03:29:22Z | 2025-08-01T03:29:22Z | null | ZXStudio |
huggingface/diffusers | 12,038 | Dataset structure for train_text_to_image_lora.py | Hello. I am trying to use **train_text_to_image_lora.py** script following the instructions https://github.com/huggingface/diffusers/tree/main/examples/text_to_image
I get errors about the data structure and don't know what the issue is on my side.
I have a folder **data** where I have folder **image** and **csv** file.
C:/... | https://github.com/huggingface/diffusers/issues/12038 | open | [] | 2025-07-31T16:10:38Z | 2025-08-01T16:44:48Z | 1 | HripsimeS |
huggingface/lerobot | 1,632 | Are there plans to support distributed training? | [train.py](https://github.com/huggingface/lerobot/blob/main/src/lerobot/scripts/train.py) currently only supports single-GPU training. Is there a plan to support distributed training in the future? | https://github.com/huggingface/lerobot/issues/1632 | closed | [
"question",
"policies"
] | 2025-07-31T03:31:46Z | 2025-10-17T12:10:40Z | null | Hukongtao |
huggingface/candle | 3,039 | Request support for Qwen2.5-vl or Fast-VLM | I'm trying to call some image-to-text visual models using candle, if anyone knows how to use Qwen2.5-vl or Fast-VLM, can you share it? Appreciate | https://github.com/huggingface/candle/issues/3039 | open | [] | 2025-07-31T02:41:33Z | 2025-08-04T12:21:35Z | 1 | 826327700 |
huggingface/transformers | 39,801 | ValueError: This model does not support cache_implementation='static'. Please check the following issue: https://github.com/huggingface/transformers/issues/28981 | ### System Info
_prepare_cache_for_generation
raise ValueError(
ValueError: This model does not support cache_implementation='static'. Please check the following issue: https://github.com/huggingface/transformers/issues/28981
I got this error and i have no clue of how to solve it. I tried different implementation... | https://github.com/huggingface/transformers/issues/39801 | closed | [
"bug"
] | 2025-07-30T20:59:45Z | 2025-09-07T08:02:42Z | 2 | jpitalopez |
huggingface/lerobot | 1,631 | 🥚 Filtering Eggs on Moving Table: Dirt/Breakage Detection Feasibility | Hi 👋
Thanks a lot for your work on lerobot!
I am exploring the use of lerobot to filter eggs based on dirt or breakage while they move past the robot on a conveyor table. The goal is to detect anomalies in real time and eventually eject faulty eggs.
Some specific questions I have:
* Do you have any advice or feedb... | https://github.com/huggingface/lerobot/issues/1631 | open | [
"question",
"policies"
] | 2025-07-30T18:35:12Z | 2025-08-12T09:07:41Z | null | KannarFr |
huggingface/optimum | 2,330 | Patch Release to support `transformers~=4.53` | ### System Info
```shell
optimum[onnxruntime-gpu]==1.26.1
torch==2.7.1
vllm==0.10.0
docker run --rm -it --platform linux/amd64 ghcr.io/astral-sh/uv:debian bash
```
### Who can help?
@JingyaHuang @echarlaix
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An offici... | https://github.com/huggingface/optimum/issues/2330 | closed | [
"bug"
] | 2025-07-30T02:40:41Z | 2025-07-31T02:54:31Z | 1 | yxtay |
huggingface/lerobot | 1,622 | Why is LeRobot’s policy ignoring additional camera streams despite custom `input_features`? | I'm training a SO101 arm policy with 3 video streams (`front`, `above`, `gripper`) and a state vector. The dataset can be found at this [link](https://huggingface.co/datasets/aaron-ser/SO101-Dataset/tree/main).
I created a custom JSON config (the `train_config.json` below) that explicitly lists the three visual strea... | https://github.com/huggingface/lerobot/issues/1622 | open | [
"question",
"policies"
] | 2025-07-29T14:07:14Z | 2025-09-23T14:01:54Z | null | Aaron-Serpilin |
huggingface/trl | 3,797 | How to view the training parameters after training is completed | How to view the training parameters after training is completed? I am using GRPOTrainer for training, but after training multiple times, I have forgotten the parameters I set. How can I view the saved training parameters? | https://github.com/huggingface/trl/issues/3797 | open | [
"❓ question",
"🏋 GRPO"
] | 2025-07-29T09:42:52Z | 2025-07-29T13:07:50Z | null | Tuziking |
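One answer to the question above: Trainer-based classes save their arguments as `training_args.bin` in the output directory (loadable with `torch.load`), and checkpoints include a `trainer_state.json`. A framework-free sketch of the underlying idea, persisting a run's hyperparameters to JSON so they can be inspected later (the file name `run_args.json` and the parameter values are invented for illustration):

```python
import json
import pathlib
import tempfile

# Hypothetical GRPO-style hyperparameters (values invented for illustration).
args = {"learning_rate": 1e-6, "num_generations": 8, "beta": 0.04}

# Persist them alongside the run outputs so they survive across experiments.
run_dir = pathlib.Path(tempfile.mkdtemp())
(run_dir / "run_args.json").write_text(json.dumps(args, indent=2))

# Later: recover exactly what was set for this run.
restored = json.loads((run_dir / "run_args.json").read_text())
print(restored["num_generations"])  # 8
```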
huggingface/optimum | 2,329 | Support for exporting paligemma to onnx | ### Feature request
I’ve tried to export google/paligemma-3b-mix-224 to onnx using optimum. But it outputs: "ValueError: Trying to export a paligemma model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as custom_onnx_configs. Please refer to https://huggingface.co/docs/opti... | https://github.com/huggingface/optimum/issues/2329 | closed | [
"Stale"
] | 2025-07-29T08:58:41Z | 2025-09-06T02:04:25Z | 2 | DashaMed555 |
huggingface/transformers | 39,744 | _supports_static_cache disappear | ### System Info
transformers main branch
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reprodu... | https://github.com/huggingface/transformers/issues/39744 | closed | [
"bug"
] | 2025-07-29T02:36:04Z | 2025-07-29T08:17:00Z | 4 | jiqing-feng |
huggingface/lerobot | 1,607 | how to control a so-101 with trained ACT model? | https://huggingface.co/initie/test_pick_result
This is my model, trained with ACT, for grabbing the switch on the desk.
How do I run this policy model from Anaconda?
Already by way of example,
python -m lerobot.record --robot.type=so101_follower
--robot.port=COM3
--robot.id=ammd_follower_arm
--robot.camer... | https://github.com/huggingface/lerobot/issues/1607 | open | [
"question",
"policies"
] | 2025-07-28T05:23:24Z | 2025-10-15T03:28:50Z | null | initia1013 |
huggingface/lerobot | 1,602 | How to perform multi-GPU training for SMoVLA? | I noticed that the paper used 4 GPUs for pretraining, but the current training code doesn’t seem to support it. Could you provide the corresponding code? | https://github.com/huggingface/lerobot/issues/1602 | closed | [] | 2025-07-27T09:46:04Z | 2025-07-28T08:40:01Z | null | QZepHyr |
huggingface/hmtl | 72 | How to create a website | | https://github.com/huggingface/hmtl/issues/72 | open | [] | 2025-07-27T09:30:22Z | 2025-07-27T09:30:22Z | null | Chi23-ike |
huggingface/text-generation-inference | 3,304 | using trtllm-build instead of optimum-nvidia for engine building or optimum-nvidia wrong version ? |
Hello,
I'm experiencing significant issues when trying to use Text Generation Inference (TGI) with TensorRT-LLM as the backend.
**Problem 1: Version Compatibility**
I cannot use the latest version of TGI due to a known bug (see: https://github.com/huggingface/text-generation-inference/issues/3296).
I'm therefore us... | https://github.com/huggingface/text-generation-inference/issues/3304 | open | [] | 2025-07-27T06:24:29Z | 2025-10-06T09:56:29Z | 4 | psykokwak-com |
huggingface/transformers | 39,705 | [i18n-<bn>] Translating docs to <Bengali> | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the Bengali-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingfac... | https://github.com/huggingface/transformers/issues/39705 | open | [
"WIP"
] | 2025-07-27T06:18:20Z | 2025-07-27T11:58:32Z | 1 | ankitdutta428 |
huggingface/transformers | 39,699 | No flag to support Conditional Parameter Loading for gemma-3n-E2B models in transformer | ### System Info
Hi,
While a lot has been said about conditional parameter loading and reduced memory use for gemma-3n-E2B and gemma-3n-E4B, there is no configuration currently visible in transformers to support that.
Is it possible to get the related configuration/code/documentation to make it work t... | https://github.com/huggingface/transformers/issues/39699 | closed | [
"bug"
] | 2025-07-26T18:08:00Z | 2025-09-03T08:02:58Z | 2 | aakashgaur01 |
huggingface/tokenizers | 1,835 | Can you provide binary releases? | It seems that binaries are not available in recent versions.
tokenizers module is essential for the latest models, and it would be preferable if it could be easily installed.
Setting up a Rust compilation environment can be cumbersome, and it's almost impossible to do so offline.
Could we possibly distribute somethi... | https://github.com/huggingface/tokenizers/issues/1835 | closed | [] | 2025-07-26T16:07:12Z | 2025-09-08T13:49:52Z | 4 | goldenmomonga |
huggingface/lerobot | 1,599 | Evaluation results of VLA models on MetaWorld Benchmark | Thank you for this excellent work! I noticed that the paper mentions evaluation results of VLA models on MetaWorld. However, in the original papers for Octo and π₀, results are only reported on the LIBERO benchmark, and I haven’t found their MetaWorld evaluations in other related studies. I’d like to know how Octo and ... | https://github.com/huggingface/lerobot/issues/1599 | open | [
"enhancement",
"question",
"policies",
"simulation"
] | 2025-07-26T11:18:54Z | 2025-08-12T09:17:44Z | null | Zooy138 |
huggingface/transformers | 39,686 | CRITICAL ISSUE REPORT! GEMMA 3 1B CANNOT RUN! | How to reproduce:
Run this:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the base model in FP16
base_model = AutoModelForCausalLM.from_pretrained(
"unsloth/gemma-3-1b-pt",
low_cpu_mem_usage=True,
return_dict=True,
torch_dtype=torch.float16,
device_map="mps"... | https://github.com/huggingface/transformers/issues/39686 | closed | [] | 2025-07-26T00:22:27Z | 2025-07-28T12:07:50Z | 5 | yukiarimo |
huggingface/lerobot | 1,592 | Time spent on imitation learning training (ACT) | I use Colab to train a policy with the ACT model.
The note said, "Training with the ACT policy for 100,000 steps typically takes about 1.5 hours on an NVIDIA A100 GPU", and I used an A100 in Colab too.
However, the estimated time is 13 hours, which seems much longer than the stated value of 1.5 hours.
Is it corre... | https://github.com/huggingface/lerobot/issues/1592 | closed | [
"question",
"policies"
] | 2025-07-25T06:36:35Z | 2025-10-08T08:32:32Z | null | initia1013 |
huggingface/datasets | 7,699 | Broken link in documentation for "Create a video dataset" | The link to "the [WebDataset documentation](https://webdataset.github.io/webdataset)." is broken.
https://huggingface.co/docs/datasets/main/en/video_dataset#webdataset
<img width="2048" height="264" alt="Image" src="https://github.com/user-attachments/assets/975dd10c-aad8-42fc-9fbc-de0e2747a326" /> | https://github.com/huggingface/datasets/issues/7699 | open | [] | 2025-07-24T19:46:28Z | 2025-07-25T15:27:47Z | 1 | cleong110 |
huggingface/transformers | 39,637 | [BUG] Run 111B+ Teacher distributed inference and 8B Student distributed training on multi-node H200 GPUs using the Transformers Trainer without encountering OOM errors? | Hello, first off, apologies if this information is already available elsewhere. I've searched through the documentation and existing issues but haven't found a clear answer to my question.
I have access to 2 to 4 nodes (16 to 32 GPUs in total), each equipped with 8x140GB H200 GPUs. My objective is to perform large-sca... | https://github.com/huggingface/transformers/issues/39637 | closed | [] | 2025-07-24T15:05:38Z | 2025-09-01T08:03:18Z | 3 | seona21 |
huggingface/lerobot | 1,586 | Real-world deploy on ALOHA Robot | How could I deploy the policies on the ALOHA robot? And how could I deploy in the real world? | https://github.com/huggingface/lerobot/issues/1586 | open | [
"question",
"robots"
] | 2025-07-24T12:52:06Z | 2025-08-21T16:18:26Z | null | LogSSim |
huggingface/diffusers | 11,984 | A compatibility issue when using custom Stable Diffusion with pre-trained ControlNets | I have successfully fine-tuned a Stable Diffusion v1.5 model using the Dreambooth script, and the results are excellent. However, I've encountered a compatibility issue when using this custom model with pre-trained ControlNets. Since the Dreambooth process modifies the U-Net weights, the original ControlNet is no longe... | https://github.com/huggingface/diffusers/issues/11984 | closed | [] | 2025-07-24T09:16:55Z | 2025-07-24T15:15:20Z | 6 | ScienceLi1125 |
huggingface/lighteval | 868 | How to calculate perplexity from an OpenAI compatible API | Hello,
I'm new to LightEval. I want to use LightEval to evaluate an LLM model that is served via an API. The API is OpenAI compatible. It also returns logprobs for each token. Is there a built-in function to evaluate the perplexity score? I'm asking because I see that it’s not implemented.
https://github.com/huggingf... | https://github.com/huggingface/lighteval/issues/868 | open | [] | 2025-07-24T07:27:05Z | 2025-07-24T07:27:05Z | null | mrtpk |
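Even without built-in support, perplexity follows directly from the per-token logprobs such an API returns: it is the exponential of the negative mean log-probability. A minimal sketch, assuming natural-log probabilities:

```python
import math

def perplexity(token_logprobs):
    """Perplexity from per-token natural-log probabilities."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Eight tokens, each assigned probability 0.25 by the model:
lp = [math.log(0.25)] * 8
print(perplexity(lp))  # ≈ 4.0
```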
huggingface/lerobot | 1,580 | Environment_State in act and SmolVLA policy | Hi, Thanks for the awesome work!
I have been noticing a variable called observation.environment_state in the act policy. What is exactly the feature environment_state. Thanks! | https://github.com/huggingface/lerobot/issues/1580 | closed | [
"question",
"policies"
] | 2025-07-24T03:32:31Z | 2025-10-08T13:09:33Z | null | kasiv008 |
huggingface/transformers.js | 1,379 | Why Do I Get Different Outputs in Python and JavaScript for the Same ONNX Model? | Hi ,
I'm running inference on the same ONNX model (t5-small-new) using both Python and JavaScript (via ONNX Runtime). However, I'm noticing that the outputs are different between the two environments, even though the inputs and model are the same. The output of the Python code is correct while JS is not accurate.
Pyt... | https://github.com/huggingface/transformers.js/issues/1379 | closed | [
"question"
] | 2025-07-23T20:13:57Z | 2025-08-29T23:43:21Z | null | mahdin75 |
huggingface/transformers | 39,618 | SageAttention for attention implementation? | ### Feature request
I've noticed it's been a while now, but transformers still only has flash attention as the fastest attention backend for calls like these:
<img width="1307" height="780" alt="Image" src="https://github.com/user-attachments/assets/3f3d62f6-a166-4ca6-97a0-49263fd93299" />
Are there any plans to ad... | https://github.com/huggingface/transformers/issues/39618 | open | [
"Feature request"
] | 2025-07-23T19:10:47Z | 2025-07-25T12:30:37Z | 4 | Many0therFunctions |
huggingface/diffusers | 11,977 | how to load a finetuned model especially during validation phase | <img width="1034" height="743" alt="Image" src="https://github.com/user-attachments/assets/c4e9318f-10aa-4b91-9d60-e28a3be38f8a" />
As the above, I have finetuned the model and want to validate it, but the given demo which is train_dreambooth_sd3.py still uses
"pipeline = StableDiffusion3Pipeline.from_pretrained(
... | https://github.com/huggingface/diffusers/issues/11977 | open | [] | 2025-07-23T11:54:16Z | 2025-07-24T09:19:11Z | null | micklexqg |
huggingface/lerobot | 1,579 | Is there a video backend supporting nondestructive encoding? | I saved images during recording through not deletng folder `images`. When I try to compare the first frame.png in `images` folder and dataset=make_dataset(config)'s first image, I found the saved png file is nondestructive. But the image I got by lerobot is not.
How I find:
in `def save_episode`
```
# img_dir... | https://github.com/huggingface/lerobot/issues/1579 | open | [
"question",
"dataset"
] | 2025-07-23T08:38:39Z | 2025-08-12T09:22:26Z | null | milong26 |
huggingface/candle | 3,032 | `matmul` (and others) Precision issues between Candle & PyTorch | We noticed there's some precision discrepancy in matrix multiplication and the linear layer between between Candle and PyTorch. This matters a lot when reproducing LLMs originated from PyTorch into Candle. We used the `hf_hub::api::Api` to get the safetensors from the hub and for testing the precision issues for each m... | https://github.com/huggingface/candle/issues/3032 | closed | [] | 2025-07-23T04:07:08Z | 2025-09-27T21:25:51Z | 4 | andrew-shc |
huggingface/lerobot | 1,578 | Lerobot metaworld dataset only provides 49 tasks | https://huggingface.co/datasets/lerobot/metaworld_mt50
There are only 49 tasks, and the "Push the puck to a goal" task repeats twice. | https://github.com/huggingface/lerobot/issues/1578 | open | [
"question",
"simulation"
] | 2025-07-23T04:03:17Z | 2025-08-12T09:23:12Z | null | chenkang455 |
huggingface/lerobot | 1,577 | test failed after training SVLA | I collected 76 sets of data and used the same calibration file as during collection. However, after training for 24k steps, the model obtained was unable to complete the grasping task during inference. Can anyone help me deal with the problem?
[dataset](https://huggingface.co/datasets/Xiaoyan97/orange_block_pickplace)
| https://github.com/huggingface/lerobot/issues/1577 | open | [
"question",
"policies"
] | 2025-07-23T03:59:26Z | 2025-08-12T09:23:26Z | null | Liu-Xiaoyan97 |
huggingface/lerobot | 1,576 | Multiple Dataset training | How do I train on multiple lerobot datasets? Is there a function I can use? | https://github.com/huggingface/lerobot/issues/1576 | open | [
"question",
"dataset"
] | 2025-07-23T03:46:03Z | 2025-10-10T09:30:06Z | null | JustinKai0527 |
huggingface/transformers | 39,596 | Does transformers support python3.13 -- disable-gil or python3.14 free threading? | Does transformers support Python 3.13 --disable-gil or Python 3.14 free threading?
I got an error when trying to install transformers on these two python versions. | https://github.com/huggingface/transformers/issues/39596 | closed | [] | 2025-07-23T02:34:03Z | 2025-08-30T08:02:54Z | 2 | SoulH-qqq |
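Whatever transformers itself supports, a free-threaded (PEP 703) interpreter can be detected at runtime; the build-time flag `Py_GIL_DISABLED` is set for `--disable-gil` builds:

```python
import sysconfig

# Py_GIL_DISABLED is 1 on free-threaded (--disable-gil) builds,
# and 0 or None on standard builds.
free_threaded = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
print(free_threaded)
```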
huggingface/transformers.js | 1,374 | nanoVLM support | ### Question
I would like to know if there is any plan to support models built with nanoVLM [https://github.com/huggingface/nanoVLM], thanks. | https://github.com/huggingface/transformers.js/issues/1374 | open | [
"question"
] | 2025-07-22T11:43:57Z | 2025-07-23T09:02:15Z | null | sbrzz |
huggingface/diffusers | 11,971 | What is the minimum memory requirement for model training? | Hello, I would like to try training an SDXL model using my own dataset. What is the minimum memory size required for the model? | https://github.com/huggingface/diffusers/issues/11971 | closed | [] | 2025-07-22T07:52:28Z | 2025-07-22T08:26:27Z | null | WWWPPPGGG |
huggingface/transformers | 39,565 | Model forward execution in full eager mode? | I know there is a flag `attn_implementation` which could trigger specialized attention kernel implementation. Besides this, does everything run in native PyTorch eager mode? Does `transformers` have any other custom op or kernel?
```python
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B", device_m... | https://github.com/huggingface/transformers/issues/39565 | closed | [] | 2025-07-21T21:49:05Z | 2025-08-21T08:34:59Z | 3 | 22quinn |
huggingface/lerobot | 1,564 | How are Episode Stats used? | I'm looking to create a subset of an episode (ie sec 2-4) in a 30 second episode, and wanted to know how episode_stats are used later on for training / inference?
Are they used to normalize model inputs or are they used somewhere else as well?
ie. in modeling_act.py
```
self.normalize_inputs = Normalize(
... | https://github.com/huggingface/lerobot/issues/1564 | closed | [
"question",
"policies",
"processor"
] | 2025-07-21T19:06:21Z | 2025-08-12T09:27:29Z | null | andlyu |
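Dataset stats are indeed used to normalize model inputs, as the quoted `Normalize(...)` construction suggests. A framework-free sketch of mean/std normalization from stored stats (the stats values here are invented for illustration):

```python
def normalize(values, mean, std, eps=1e-8):
    # Standard-score normalization using dataset statistics; eps guards
    # against zero variance in a feature.
    return [(v - m) / (s + eps) for v, m, s in zip(values, mean, std)]

# Hypothetical per-dimension stats for a 3-dof state vector:
stats_mean = [0.0, 10.0, -5.0]
stats_std = [1.0, 2.0, 0.5]
state = [1.0, 12.0, -5.0]
print(normalize(state, stats_mean, stats_std))
```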
huggingface/lerobot | 1,561 | will you release the libero ft&eval setting? | Hello, your SmolVLA is wonderful work. I notice that you fine-tuned it on **LIBERO** and evaluated it at the same time, but I couldn't achieve the same or a similar success rate **(just 76%, much lower than your 96%)**.
**Have you used async inference in LIBERO?**
I think it must be the different hyperparameters w... | https://github.com/huggingface/lerobot/issues/1561 | closed | [
"enhancement",
"question",
"policies"
] | 2025-07-21T13:57:13Z | 2025-09-23T09:25:04Z | null | JuilieZ |
huggingface/transformers | 39,554 | Why `is_causal` is not used in `flash_attention_forward` ? | I want to perform bidirectional attention in the Qwen3 model to train an embedding model, so I passed `is_causal=False` in the model `forward` (I manually added `is_causal` arguments in all `forward` method such as `Qwen3Model` and `Qwen3Attention` in`modeling_qwen3.py`):
```python
class Qwen3Attention(nn.Module):
... | https://github.com/huggingface/transformers/issues/39554 | closed | [
"Flash Attention"
] | 2025-07-21T12:08:00Z | 2025-11-11T12:32:41Z | 9 | lucaswychan |
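What `is_causal` toggles can be pictured without any framework: causal attention hides positions j > i from query i, while bidirectional attention leaves every pair visible. A toy mask construction (fused kernels such as flash attention take an equivalent boolean flag instead of materializing this matrix):

```python
def attention_mask(seq_len, is_causal):
    # 1 = position j is visible to query i, 0 = masked out.
    return [
        [1 if (not is_causal or j <= i) else 0 for j in range(seq_len)]
        for i in range(seq_len)
    ]

print(attention_mask(3, is_causal=True))   # lower-triangular
print(attention_mask(3, is_causal=False))  # all ones
```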
huggingface/peft | 2,660 | Custom models LoRA | Is there any way to fine-tune models that are not in the support list or custom models?
Currently, many public models have their LLM parts from Qwen. Can LLaMA-Factory use the Qwen template and only fine-tune the LLM part? Thank you | https://github.com/huggingface/peft/issues/2660 | closed | [] | 2025-07-21T11:52:30Z | 2025-07-24T12:53:34Z | 6 | stillbetter |
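Whichever wrapper library is used, the LoRA update itself is plain low-rank algebra, which is why it can in principle attach to any linear layer: the effective weight is `W + (alpha / r) * B @ A`. A dependency-free numeric sketch (shapes and values are illustrative):

```python
def matmul(X, Y):
    # Naive matrix multiply for small illustrative matrices.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

# Hypothetical frozen weight W (d_out x d_in) and LoRA factors A (r x d_in),
# B (d_out x r); all values invented for illustration.
d_out, d_in, r, alpha = 2, 3, 1, 2
W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
A = [[0.5, 0.5, 0.5]]
B = [[1.0], [2.0]]

scale = alpha / r
delta = matmul(B, A)  # rank-r update, d_out x d_in
W_eff = [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]
print(W_eff)  # [[2.0, 1.0, 1.0], [2.0, 3.0, 2.0]]
```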
huggingface/lerobot | 1,559 | Is the current model framework suitable for using automatic mixed precision? | I saw that `.to(torch.float32)` and `.to(torch.bfloat16)` were used in many places in the Pi0 model code. Then I implemented parallel training of Pi0 based on accelerate, and found that if I want to use AMP, the code will report an error of dtype mismatch. I want to know whether the existing code is suitable for automa... | https://github.com/huggingface/lerobot/issues/1559 | open | [
"question",
"policies"
] | 2025-07-21T10:45:26Z | 2025-08-12T09:27:59Z | null | xliu0105 |
huggingface/transformers | 39,549 | Is there plan to integrate ColQwen2.5 into Transformers? | ### Model description
Is ColQwen2ForRetrieval integrated into the transformers library, and are there plans to add [ColQwen2.5](https://github.com/illuin-tech/colpali/blob/main/colpali_engine/models/qwen2_5/colqwen2_5/modeling_colqwen2_5.py) in the future?
### Open source status
- [x] The model implementation is ava... | https://github.com/huggingface/transformers/issues/39549 | closed | [
"New model"
] | 2025-07-21T10:08:47Z | 2025-11-03T23:31:08Z | 0 | rebel-thkim |
huggingface/diffusers | 11,966 | How about forcing the first and last block on device when groupoffloading is used? | **Is your feature request related to a problem? Please describe.**
When group offloading is enabled, offloading and onloading cannot be streamed between steps, and this is a really time-consuming problem.
**Describe the solution you'd like.**
Is it possible to add an option that could make the first and last block fo... | https://github.com/huggingface/diffusers/issues/11966 | open | [
"contributions-welcome",
"group-offloading"
] | 2025-07-21T08:38:30Z | 2025-12-02T15:30:23Z | 13 | seed93 |
huggingface/tokenizers | 1,829 | The parameter in initial_alphabet of the "class BpeTrainer(Trainer)" does not allow more than one character to be initialized | Hi everyone,
I am working on Tamil and Sinhala, which are morphologically rich languages. In these languages a character is actually a combination of multiple Unicode codepoints (similar to emojis), so it would be greatly beneficial to initialize the BPE alphabet with graphemes instead of characters. Is the...
huggingface/lerobot | 1,554 | How to use local datasets to train and evaluate | Due to network issues, I want to use only local datasets during training and evaluation and prevent Hugging Face from uploading data or retrieving datasets from the Hub. Is there any good solution? | https://github.com/huggingface/lerobot/issues/1554 | closed | [
"question",
"dataset"
] | 2025-07-21T07:54:07Z | 2025-10-08T12:58:32Z | null | zym123321 |
huggingface/optimum | 2,324 | AutoConfig.from_dict Missing in transformers==4.51.3 — Incompatibility with optimum==1.26.1 | ### System Info
```shell
I am running into a critical compatibility issue between optimum and recent versions of transformers.
❗ Error Summary
When using:
transformers==4.51.3
optimum==1.26.1
onnx==1.17.0
onnxruntime==1.20.0
The following runtime error is thrown when attempting to load an ONNX model using ORTModelFo... | https://github.com/huggingface/optimum/issues/2324 | open | [
"bug"
] | 2025-07-21T06:04:58Z | 2025-08-01T07:10:20Z | 5 | rratnakar09 |
huggingface/diffusers | 11,964 | KeyError when loading LoRA for Flux model: missing lora_unet_final_layer_adaLN_modulation_1 weights | I'm trying to run Overlay-Kontext-Dev-LoRA locally by loading the LoRA weights using the pipe.load_lora_weights() function. However, I encountered the following error during execution:
> KeyError: 'lora_unet_final_layer_adaLN_modulation_1.lora_down.weight'
```
import torch
from diffusers import DiffusionPipeline
fro... | https://github.com/huggingface/diffusers/issues/11964 | open | [] | 2025-07-21T05:16:34Z | 2025-07-21T09:14:00Z | 1 | NEWbie0709 |
huggingface/transformers | 39,545 | Is the new Intel–Weizmann speculative decoding algorithm integrated into Transformers? | Hi,
I recently read about a new speculative decoding algorithm developed by Intel Labs and the Weizmann Institute, which reportedly improves inference speed by up to 2.8×, even when using draft and target models with different vocabularies or architectures.
References:
- [Intel Newsroom](https://newsroom.intel.com/a... | https://github.com/huggingface/transformers/issues/39545 | closed | [] | 2025-07-21T02:47:48Z | 2025-07-22T12:15:54Z | 4 | NEWbie0709 |
huggingface/lerobot | 1,552 | Support smolvla training on Intel GPU | The current script only supports `cuda`, `mps` and `cpu`.
With PyTorch 2.7 adding Intel GPU support, once PyTorch is installed, an Intel GPU can be utilized in the training script. | https://github.com/huggingface/lerobot/issues/1552 | open | [
"enhancement",
"question",
"policies"
] | 2025-07-21T01:47:38Z | 2025-10-09T07:40:10Z | null | xiangyang-95 |
huggingface/transformers | 39,542 | ValueError: You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time | ### System Info
- `transformers` version: 4.53.2
- Platform: **Ubuntu 22.04** Linux 5.15.0-139-generic
- **Python 3.10.18** + ipykernel 6.29.5
- Pytorch 2.7.1+cu118
### Who can help?
@ArthurZucker
@SunMarc
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An offi... | https://github.com/huggingface/transformers/issues/39542 | closed | [
"Usage",
"Good First Issue",
"trainer",
"bug"
] | 2025-07-21T01:06:27Z | 2025-08-22T05:53:51Z | 10 | xjackzenvey |
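The error in that report comes from a simple mutual-exclusion guard: encoder-decoder models accept `decoder_input_ids` or `decoder_inputs_embeds`, never both. A hedged stand-alone sketch of the logic (the function name here is hypothetical; the argument names are taken from the traceback):

```python
# Illustrative guard: only one of the two decoder inputs may be given.
def check_decoder_inputs(decoder_input_ids=None, decoder_inputs_embeds=None):
    if decoder_input_ids is not None and decoder_inputs_embeds is not None:
        raise ValueError(
            "You cannot specify both decoder_input_ids and "
            "decoder_inputs_embeds at the same time"
        )
    # return whichever input was actually provided
    return decoder_input_ids if decoder_input_ids is not None else decoder_inputs_embeds

check_decoder_inputs(decoder_input_ids=[1, 2, 3])  # fine: only one input
try:
    check_decoder_inputs(decoder_input_ids=[1], decoder_inputs_embeds=[[0.1]])
except ValueError as e:
    print(e)  # prints the ValueError message
```

In practice the fix is to drop one of the two arguments from the model call (or from whatever collator/trainer code is injecting the other).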
huggingface/transformers | 39,551 | InformerForPrediction [I would like to seek your opinions, everyone, How can I set the dynamic real features for prediction] | Here is the description cited from the docs of InformerForPrediction:
> future_time_features (torch.FloatTensor of shape (batch_size, prediction_length, num_features)) — Required time features for the prediction window, which the model internally will add to future_values. These could be things like “month of year”, “... | https://github.com/huggingface/transformers/issues/39551 | closed | [] | 2025-07-20T11:38:50Z | 2025-08-28T08:03:20Z | null | 2004learner |
huggingface/diffusers | 11,961 | New Adapter/Pipeline Request: IT-Blender for Creative Conceptual Blending | ## Model/Pipeline/Scheduler description
### Name of the model/pipeline/scheduler
"Image-and-Text Concept Blender" (IT-Blender), a diffusion adapter that blends visual concepts from a real reference image with textual concepts from a prompt in a disentangled manner. The goal is to enhance human creativity in design tas... | https://github.com/huggingface/diffusers/issues/11961 | open | [] | 2025-07-20T03:07:38Z | 2025-07-20T03:08:06Z | 0 | WonwoongCho |
huggingface/transformers | 39,522 | T5Gemma failing on provided example | ### System Info
- `transformers` version: 4.53.2
- Platform: Linux-6.14.0-23-generic-x86_64-with-glibc2.41
- Python version: 3.13.3
- Huggingface_hub version: 0.33.4
- Safetensors version: 0.5.3
- Accelerate version: 1.8.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_prec... | https://github.com/huggingface/transformers/issues/39522 | closed | [
"bug"
] | 2025-07-19T11:07:26Z | 2025-08-27T07:51:08Z | 7 | jadermcs |
huggingface/lerobot | 1,540 | Controlling robot with text using SmolVLA | Is it possible to control the robot with text inputs? I thought that's what a VLA model was...
I cannot find any instructions on how to do this anywhere...
I found this https://huggingface.co/masato-ka/smolvla_block_instruction , but control_robot was split into multiple files recently - none of which seem to work.... | https://github.com/huggingface/lerobot/issues/1540 | open | [
"question",
"policies"
] | 2025-07-18T23:09:11Z | 2025-08-12T09:35:59Z | null | drain-pipe |
huggingface/diffusers | 11,956 | Frequency-Decoupled Guidance (FDG) for diffusion models | FDG is a new method for applying CFG in the frequency domain. It improves generation quality at low CFG scales while inherently avoiding the harmful effects of high CFG values. It could be a nice addition to the guiders part of diffusers. The implementation details for FDG are available on page 19 of the paper.
https:... | https://github.com/huggingface/diffusers/issues/11956 | closed | [
"help wanted",
"Good second issue",
"contributions-welcome",
"advanced",
"consider-for-modular-diffusers"
] | 2025-07-18T19:12:50Z | 2025-08-07T05:51:03Z | 5 | Msadat97 |
huggingface/datasets | 7,689 | BadRequestError for loading dataset? | ### Describe the bug
Up until a couple days ago I was having no issues loading `Helsinki-NLP/europarl` and `Helsinki-NLP/un_pc`, but now suddenly I get the following error:
```
huggingface_hub.errors.BadRequestError: (Request ID: ...)
Bad request:
* Invalid input: expected array, received string * at paths * Invalid... | https://github.com/huggingface/datasets/issues/7689 | closed | [] | 2025-07-18T09:30:04Z | 2025-07-18T11:59:51Z | 17 | WPoelman |
huggingface/diffusers | 11,951 | Kontext model loading quantization problem | Hello, can kontext be loaded quantitatively at present? Because I only have a 4090 with 24g video memory, the current fp16 loading method will cause OOM. Like flux, can it be loaded with torchao or gguf, so that this model can run on 4090? | https://github.com/huggingface/diffusers/issues/11951 | closed | [] | 2025-07-18T03:20:48Z | 2025-07-18T05:39:28Z | 2 | babyta |
huggingface/transformers | 39,484 | Transformers still tries to use apex.amp which is no longer a thing in apex. | ### System Info
```
root@12bb27e08b1b:/# pip show transformers
Name: transformers
Version: 4.52.3
```
trainer.py contains this:
```
if is_apex_available():
from apex import amp
```
Apex (built from source, as they recommend) no longer comes with amp.
How to reproduce?
1. install transformers
2. install ap... | https://github.com/huggingface/transformers/issues/39484 | closed | [
"bug"
] | 2025-07-17T16:43:14Z | 2025-08-25T08:03:03Z | 4 | yselivonchyk |
huggingface/datasets | 7,688 | No module named "distributed" | ### Describe the bug
Hello, when I run the command "from datasets.distributed import split_dataset_by_node", I always get the error "No module named 'datasets.distributed'" in different versions such as 4.0.0, 2.21.0 and so on. How can I solve this?
### Steps to reproduce the bug
1. pip install datasets
2. from datasets.di... | https://github.com/huggingface/datasets/issues/7688 | open | [] | 2025-07-17T09:32:35Z | 2025-07-25T15:14:19Z | 3 | yingtongxiong |
huggingface/alignment-handbook | 220 | A little question: why num examples is much less than the total amount of my training dataset? | I am using this repo to SFT a model, and I notice that:
I print the total amount of my training dataset, which is 7473
`Number of raw training samples: 7473`
But during training, I find the log:
[INFO|trainer.py:2314] 2025-07-17 17:03:23,908 >> ***** Running training *****
[INFO|trainer.py:2315] 2025-07-17 17:03:23... | https://github.com/huggingface/alignment-handbook/issues/220 | closed | [] | 2025-07-17T09:12:08Z | 2025-07-23T23:30:33Z | 3 | Red-Scarff |
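A common cause of this gap is sequence packing: many short samples are concatenated into fixed-length blocks, so the trainer's "Num examples" counts packed blocks rather than raw samples. Illustrative arithmetic with an assumed average sample length and block size:

```python
# Hypothetical numbers except for the raw sample count from the issue.
raw_samples = 7473
avg_tokens_per_sample = 600   # assumed average tokenized length
max_seq_len = 2048            # assumed packing block size

total_tokens = raw_samples * avg_tokens_per_sample
packed_examples = total_tokens // max_seq_len
print(packed_examples)  # 2189 packed blocks from 7473 raw samples
```

If the logged example count roughly matches this kind of ratio, packing (or sample filtering by max length) is the likely explanation rather than data loss.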
huggingface/diffusers | 11,945 | Floating point exception with nightly PyTorch and CUDA | ### Describe the bug
When running any code snippet using diffusers it fails with floating point exception, and doesn't print any traceback.
For example this one would cause the issue (the example of Stable Diffusion 3.5 medium):
```
import torch
from diffusers import StableDiffusion3Pipeline
pipe = StableDiffusion3... | https://github.com/huggingface/diffusers/issues/11945 | open | [
"bug"
] | 2025-07-17T03:16:02Z | 2025-08-02T13:48:05Z | 1 | MxtAppz |
huggingface/course | 1,009 | How Transformers solve tasks - ASR section refers to task using Whisper but task actually uses Wav2Vec2 | The [Automatic speech recognition](https://huggingface.co/learn/llm-course/chapter1/5?fw=pt#automatic-speech-recognition) segment of Section 1 "Transformer Models" > "How 🤗 Transformers solve tasks" refers to
> Check out our complete [automatic speech recognition guide](https://huggingface.co/docs/transformers/tasks... | https://github.com/huggingface/course/issues/1009 | open | [] | 2025-07-16T23:25:55Z | 2025-07-16T23:25:55Z | null | renet10 |
huggingface/diffusers | 11,930 | how to run convert_cosmos_to_diffusers.py correctly? | ### Describe the bug
Hi. I have tried to convert the Cosmos-Transfer1 base model to diffusers using the "convert_cosmos_to_diffusers.py" script with the options --transformer_type Cosmos-1.0-Diffusion-7B-Video2World --vae_type CV8x8x8-1.0 --transformer_ckpt_path ../fsdp_edge_v1/iter_000016000_ema_model_only.pt --output_path .... | https://github.com/huggingface/diffusers/issues/11930 | open | [
"bug"
] | 2025-07-15T16:20:09Z | 2025-07-15T16:24:47Z | null | dedoogong |
huggingface/transformers | 39,426 | object detection : matchin outputs.last_hidden_state with results | ### Feature request
It seems to me that this would be possible with a small modification to the function post_process_object_detection:
```
for score, label, box, index in zip(scores, labels, boxes, indexes):
results.append(
{
"scores": score[score > threshold],
... | https://github.com/huggingface/transformers/issues/39426 | open | [
"Feature request"
] | 2025-07-15T13:34:08Z | 2025-07-22T11:08:23Z | 5 | fenaux |
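The proposal above can be sketched in plain Python (lists standing in for tensors; this is an illustration of the idea, not the actual `post_process_object_detection` code): keep each detection's original query index alongside the filtered scores so rows of `outputs.last_hidden_state` can be matched back to detections.

```python
# Filter detections by score threshold while remembering the original
# query index of each kept detection (row into last_hidden_state).
def filter_detections(scores, labels, boxes, threshold):
    result = {"scores": [], "labels": [], "boxes": [], "indexes": []}
    for index, (score, label, box) in enumerate(zip(scores, labels, boxes)):
        if score > threshold:
            result["scores"].append(score)
            result["labels"].append(label)
            result["boxes"].append(box)
            result["indexes"].append(index)
    return result

out = filter_detections([0.9, 0.2, 0.8], [1, 2, 3],
                        [[0, 0, 1, 1], [0, 0, 2, 2], [1, 1, 2, 2]], 0.5)
print(out["indexes"])  # [0, 2]
```

With the indexes available, `last_hidden_state[indexes]` would recover the per-detection hidden states that the thresholding otherwise discards.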
huggingface/peft | 2,647 | How can I merge the original model weights with LoRA weights? | I'm currently fine-tuning Qwen2.5_VL. Specifically, I used PEFT for LoRA fine-tuning on the linear layers of the LLM part. Meanwhile, I performed regular fine-tuning on other components like visual.merger and embed_tokens (with param.requires_grad set to True). After generating the files, as follows:
<img width="946" h... | https://github.com/huggingface/peft/issues/2647 | closed | [] | 2025-07-15T11:40:33Z | 2025-08-23T15:03:44Z | 4 | guoguo1314 |
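For context, PEFT's `merge_and_unload()` folds a LoRA adapter into the base weights; the underlying arithmetic is `W_merged = W + (alpha / r) * B @ A`. A pure-Python toy of that merge (all values illustrative):

```python
# Minimal matrix multiply for the 2x2 toy below.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight
A = [[1.0, 2.0]]               # LoRA down-projection (rank r = 1)
B = [[0.5], [0.0]]             # LoRA up-projection
alpha, r = 2.0, 1

delta = matmul(B, A)           # B @ A has the base weight's shape
scale = alpha / r
W_merged = [[w + scale * d for w, d in zip(wr, dr)]
            for wr, dr in zip(W, delta)]
print(W_merged)  # [[2.0, 2.0], [0.0, 1.0]]
```

Components trained with plain `requires_grad=True` (like `visual.merger` and `embed_tokens` here) have no low-rank delta; their saved weights simply replace the originals when the state dict is loaded.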
huggingface/transformers | 39,421 | Speculative Decoding(do_sample=False) get different outputs | > @transcend-0 hey!
>
>
>
> The issue was solved in [#30068](https://github.com/huggingface/transformers/pull/30068). You can install transformers from `main` with the following line for the correct generation with assisted decoding:
>
>
>
> `!pip install --upgrade git+https://github.com/huggingface/transformers.... | https://github.com/huggingface/transformers/issues/39421 | closed | [] | 2025-07-15T11:36:31Z | 2025-07-19T03:11:04Z | 13 | nighty8 |
huggingface/lerobot | 1,508 | so101_dualarm_triplecam config to evaluate ACT policy? | I recently fine-tuned an ACT policy where my data was from 3 cameras (1 overhead + 2 wrist) and two so101's. Then I tried to evaluate it but noticed there is currently a config file missing to support this. Does or will this support exist soon? | https://github.com/huggingface/lerobot/issues/1508 | open | [
"question",
"robots"
] | 2025-07-15T03:44:32Z | 2025-08-12T09:30:41Z | null | sebastiandavidlee |
huggingface/transformers | 39,410 | FP8 training support for Model Parallel / Tensor Parallel (MP/TP) | ### Feature request
I receive the message "ValueError: The model you are trying to fine-tune is quantized with QuantizationMethod.FP8 but that quantization method do not support training. Please open an issue on GitHub: https://github.com/huggingface/transformers to request the support for training support for Quantizatio...
"Feature request"
] | 2025-07-15T02:13:05Z | 2025-07-15T13:30:27Z | 2 | edgeinfinity1 |
huggingface/transformers | 39,409 | TypeError: couldn't find storage object Float8_e4m3fnStorage - which version is needed for this? | Tested so many versions but can't find a version that won't give this error
```
!pip install bitsandbytes==0.45.0 --upgrade
!pip install insightface --upgrade
!pip install huggingface_hub==0.25.1 hf_transfer diffusers==0.31.0 transformers==4.36.0
!pip uninstall xformers triton --yes
!pip install torch==2.2.0+cu121 to... | https://github.com/huggingface/transformers/issues/39409 | closed | [
"bug"
] | 2025-07-15T01:51:08Z | 2025-08-02T12:06:59Z | 1 | FurkanGozukara |
huggingface/datasets | 7,682 | Fail to cast Audio feature for numpy arrays in datasets 4.0.0 | ### Describe the bug
Casting features with Audio for numpy arrays - done here with `ds.map(gen_sine, features=features)` fails
in version 4.0.0 but not in version 3.6.0
### Steps to reproduce the bug
The following `uv script` should be able to reproduce the bug in version 4.0.0
and pass in version 3.6.0 on a macOS ... | https://github.com/huggingface/datasets/issues/7682 | closed | [] | 2025-07-14T18:41:02Z | 2025-07-15T12:10:39Z | 2 | luatil-cloud |
huggingface/lerobot | 1,507 | [PI0] Evaluation result on the metaworld | Has anyone tried training pi0 on the Metaworld benchmark? My evaluation results are relatively low, around 30%. | https://github.com/huggingface/lerobot/issues/1507 | closed | [
"bug",
"question",
"policies",
"simulation"
] | 2025-07-14T14:56:38Z | 2025-10-08T08:47:31Z | null | chenkang455 |
huggingface/transformers | 39,401 | Qwen3 tokenizer wrong offset_mapping | ### System Info
transformers 4.53.2, Ubuntu 22.04.4, python 3.11.13
### Who can help?
@ArthurZucker and @itazap There must be a problem with the `offset_mapping` of Qwen3 `tokenizer`. The starting point in the text for each token, except the first and the last, is one position behind. I compared it with the BERT's `... | https://github.com/huggingface/transformers/issues/39401 | closed | [
"bug"
] | 2025-07-14T14:21:08Z | 2025-07-16T09:59:35Z | 4 | contribcode |
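The invariant this report describes can be stated concretely: slicing the text with each `(start, end)` offset pair must recover the corresponding token. A toy whitespace tokenizer that satisfies the invariant (illustrative only — not the Qwen3 or BERT tokenizer):

```python
# Toy tokenizer returning (start, end) character offsets per token.
def tokenize_with_offsets(text):
    tokens, offsets, start = [], [], 0
    for token in text.split(" "):
        end = start + len(token)
        tokens.append(token)
        offsets.append((start, end))
        start = end + 1  # skip the separating space
    return tokens, offsets

text = "hello offset mapping"
tokens, offsets = tokenize_with_offsets(text)
# The invariant a correct offset_mapping must satisfy:
assert all(text[s:e] == tok for tok, (s, e) in zip(tokens, offsets))
print(offsets)  # [(0, 5), (6, 12), (13, 20)]
```

The same assertion, run over a real tokenizer's `return_offsets_mapping=True` output (decoding each token id and comparing against the slice), is a quick way to demonstrate the off-by-one the issue reports.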
huggingface/lerobot | 1,506 | episode: None | When I run "python -m lerobot.scripts.train --dataset.root=./lerobot_datasets/my_robot_dataset/ --output_dir=./lerobot_datasets/outputs/ --policy.type=pi0 --dataset.repo_id=lerobot/tape --policy.push_to_hub=false", I got
‘’
'dataset': {'episodes': None,
'image_transforms': {'enable': False...
}
‘’.
Is thi... | https://github.com/huggingface/lerobot/issues/1506 | open | [
"question",
"policies"
] | 2025-07-14T13:29:07Z | 2025-08-12T09:31:16Z | null | LogSSim |
huggingface/finetrainers | 420 | How to fine-tune Wan 2.1 with Context Parallelism? | I am trying to fine-tune the Wan 2.1 model and would like to leverage the Context Parallelism (CP) feature to manage memory and scale the training. I saw in the main README that `CP support` is listed as a key feature.
I have looked through the `examples/training` directory and the documentation, but I couldn't find a... | https://github.com/huggingface/finetrainers/issues/420 | open | [] | 2025-07-14T06:55:39Z | 2025-07-15T05:09:45Z | null | vviper25 |
huggingface/lerobot | 1,503 | LeRobot So100 and Groot N1.5 Model Multi-Robot Deployment Feasibility Inquiry | Hello, I am conducting various tests using LeRobot's So100 (robot arm) with Groot N1.5 for training.
I have some questions to ask.
**Main Question**
Is it possible to simultaneously apply a model trained with Groot N1.5 base on one robot to multiple robots of the same model?
**Question Background (Actual Experience)*... | https://github.com/huggingface/lerobot/issues/1503 | open | [
"enhancement",
"question",
"policies",
"dataset"
] | 2025-07-14T05:55:44Z | 2025-08-12T09:31:35Z | null | devedgar |
huggingface/lerobot | 1,497 | ValueError: 'policy.repo_id' argument missing. Please specify it to push the model to the hub. | ### System Info
```Shell
lerobot commit version:
https://github.com/huggingface/lerobot/tree/69901b9b6a2300914ca3de0ea14b6fa6e0203bd4
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
(lerobot) robot@robot-Legion-Y9000... | https://github.com/huggingface/lerobot/issues/1497 | open | [
"question",
"policies",
"configuration"
] | 2025-07-13T04:33:14Z | 2025-08-12T09:32:36Z | null | dbdxnuliba |
huggingface/trl | 3,730 | How to design stable reward functions for open-ended text generation tasks in GRPO? | I'm using GRPO for a text generation task where there's no single correct answer. I currently compute the reward using cosine similarity between the model output and a reference response. However, during training (around 400 steps), the reward values are quite unstable and fluctuate significantly.
I'm wondering:
Is c... | https://github.com/huggingface/trl/issues/3730 | open | [
"❓ question",
"🏋 Reward",
"🏋 GRPO"
] | 2025-07-12T18:39:37Z | 2025-07-12T18:40:05Z | null | Jax922 |
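One common stabilization for a similarity-based reward is to clip or floor the raw score before handing it to the optimizer. A self-contained sketch with bag-of-words vectors standing in for embeddings (an assumed simplification — a real setup would use sentence embeddings in place of the `Counter` step):

```python
import math
from collections import Counter

# Cosine-similarity reward over bag-of-words vectors, floored at 0
# so noisy negative/low similarities don't destabilize the policy update.
def cosine_reward(output, reference, floor=0.0):
    a, b = Counter(output.split()), Counter(reference.split())
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values()) *
                     sum(v * v for v in b.values()))
    sim = dot / norm if norm else 0.0
    return max(floor, sim)

print(cosine_reward("the cat sat", "the cat sat"))  # 1.0
print(cosine_reward("the cat sat", "a dog ran"))    # 0.0
```

Other options in the same spirit: standardize rewards within each GRPO group (which the group-relative baseline already partially does), or blend the similarity with a cheap rule-based term (length, format) so a single noisy embedding score cannot dominate.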
huggingface/diffusers | 11,915 | Create modular pipeline from existing pipeline | new concept of modular pipelines added via #9672 is very flexible way of creating custom pipelines
and one of the best early use-cases is new concept of modular guiders added via #11311
however, this would require a complete rewrite of the existing user apps/codebases to use new concepts
and would likely signifi... | https://github.com/huggingface/diffusers/issues/11915 | closed | [] | 2025-07-12T16:08:30Z | 2025-08-28T08:18:08Z | 6 | vladmandic |