| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers | 11,561 | FluxFillPipeline Support load IP Adapter. | ### Model/Pipeline/Scheduler description
'FluxFillPipeline' object has no attribute 'load_ip_adapter'
I really need this, thanks!
### Open source status
- [ ] The model implementation is available.
- [ ] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the ... | https://github.com/huggingface/diffusers/issues/11561 | closed | [
"help wanted",
"Good second issue"
] | 2025-05-15T08:58:42Z | 2025-06-17T08:48:28Z | 6 | PineREN |
huggingface/lerobot | 1,111 | Unrecognized argument policy.path. How to load a pretrained model? | When I run this command:
```
python lerobot/scripts/control_robot.py --robot.type so100 --control.type record --control.fps 30 --control.single_task "Grasp a yellow tape and put it to yellow square." --control.repo_id a_cam_1/result --control.tags '["tutorial"]' --control.warmup_time_s 5 --control.episode_time_s 30 --c... | https://github.com/huggingface/lerobot/issues/1111 | closed | [
"bug"
] | 2025-05-15T03:13:27Z | 2025-06-24T06:20:08Z | null | milong26 |
huggingface/diffusers | 11,555 | `device_map="auto"` supported for diffusers pipelines? | ### Describe the bug
Hey dear diffusers team,
for `DiffusionPipline`, as I understand (hopefully correctly) from [this part of the documentation](https://huggingface.co/docs/diffusers/v0.33.1/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained.device_map), it should be possible to specify `device_ma... | https://github.com/huggingface/diffusers/issues/11555 | open | [
"bug"
] | 2025-05-14T16:49:32Z | 2025-05-19T09:44:29Z | 4 | johannaSommer |
huggingface/lerobot | 1,107 | Does Pi0 use PaliGemma VLM pretrained model weights? | I attempted to finetune the Pi0 model, but noticed that it does not download the pretrained weights of Paligemma from Hugging Face. Specifically, I found that Pi0 initializes the VLM with:
```python
self.paligemma = PaliGemmaForConditionalGeneration(config=config.paligemma_config)
```
instead of using:
```python
Aut... | https://github.com/huggingface/lerobot/issues/1107 | closed | [
"bug",
"question",
"policies"
] | 2025-05-14T06:47:15Z | 2025-10-08T08:44:03Z | null | lxysl |
huggingface/lerobot | 1,106 | How to convert image mode to video mode lerobot dataset? | https://github.com/huggingface/lerobot/issues/1106 | open | [
"question",
"dataset"
] | 2025-05-14T03:54:42Z | 2025-08-08T16:42:33Z | null | hairuoliu1 | |
huggingface/transformers.js | 1,316 | May I ask how to set the HF_TOKEN on the browser side? | ### Question
May I ask how to set the HF_TOKEN on the browser side?

The following is my code:
```
const model = await AutoModel.from_pretrained("briaai/RMBG-2.0", {
config: {
model_type: "custom",
},
headers: {
'... | https://github.com/huggingface/transformers.js/issues/1316 | open | [
"question"
] | 2025-05-14T01:43:02Z | 2025-05-27T21:53:45Z | null | dengbupapapa |
huggingface/xet-core | 321 | How to resume DL of partial existing file using xet + huggingface-cli download if not previously downloaded using HF tools / cache? | How to resume DL of partial existing file using xet + huggingface-cli download if not previously downloaded using HF tools / cache?
I guess there may be a way in the scenario I had but by my mistake apparently I chose some incorrect usage and caused the deletion of the 95% complete partial local file instead of resumi... | https://github.com/huggingface/xet-core/issues/321 | closed | [] | 2025-05-13T22:16:02Z | 2025-05-16T17:48:45Z | null | ghchris2021 |
huggingface/chat-ui | 1,819 | Correct syntax of .env: what are those backticks for multiline strings? | I have read the suggestion of checking the discussions, but I was unable to find an answer, so it seems something very basic is missing here.
In the documentation there are many examples suggesting of putting long values in env var surrounded by backticks.
However when I do this I get errors like:
JSON5: invalid char... | https://github.com/huggingface/chat-ui/issues/1819 | open | [
"support"
] | 2025-05-13T12:21:43Z | 2025-05-23T09:37:09Z | 1 | sciabarracom |
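For reference, chat-ui parses its `.env` values as JSON5, and the backticks mark a multiline value; a minimal sketch of what a backticked entry looks like (the model entry itself is a placeholder, not a tested configuration):

```
MODELS=`[
  {
    "name": "example-model",
    "parameters": { "temperature": 0.7 }
  }
]`
```

Errors like the `JSON5: invalid char` one above usually mean the text between the backticks is not valid JSON5, e.g. a stray character or an unescaped backtick inside the block.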
huggingface/optimum | 2,262 | New Release to Support `transformers>=4.51.0`? | ### Feature request
The latest release (`1.24.0`) is 4 months old. There has been around 38 commits since the last release. Will there be a new release soon?
### Motivation
There is a medium CVE related to `transformers==4.48.1` that is the latest compatible version.
GHSA-fpwr-67px-3qhx
I am also blocked from upgra... | https://github.com/huggingface/optimum/issues/2262 | closed | [] | 2025-05-13T07:46:15Z | 2025-05-13T22:27:08Z | 2 | yxtay |
huggingface/lerobot | 1,101 | ValueError: No integer found between bounds [low_factor=np.float32(-0.001953125), upp_factor=np.float32(-0.001953125)] | ### System Info
```Shell
2025, Ubuntu, Python 3.10; occurs when doing teleoperation
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [x] My own task or dataset (give details below)
### Reproduction
python lerobot/scripts/control_robot.py --robot.type=so100 --robot.cameras='{}' --contro... | https://github.com/huggingface/lerobot/issues/1101 | closed | [
"question"
] | 2025-05-13T05:06:35Z | 2025-06-19T14:25:08Z | null | qingx-cyber |
huggingface/diffusers | 11,542 | What's the difference between 'example/train_text_to_image_lora.py' and 'example/research_projects/lora/train_text_to_image_lora.py' ? | I want to use the "--train_text_encoder" argument, but it only exists in the latter script. | https://github.com/huggingface/diffusers/issues/11542 | closed | [] | 2025-05-13T01:41:19Z | 2025-06-10T20:35:10Z | 2 | night-train-zhx |
huggingface/lerobot | 1,097 | UnboundLocalError: local variable 'action' referenced before assignment | May I ask where the problem lies? It occurred during policy evaluation, and I have been searching for a long time without finding a solution
(lerobot) wzx@wzx:~/lerobot$ python lerobot/scripts/control_robot.py \
> --robot.type=so101 \
> --control.type=record \
> --control.fps=30 \
> --control.singl... | https://github.com/huggingface/lerobot/issues/1097 | closed | [
"bug",
"question"
] | 2025-05-12T16:06:27Z | 2025-06-19T14:08:57Z | null | incomple42 |
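The `UnboundLocalError` in the row above is Python's generic symptom of a name that is only assigned inside a branch or loop that never ran; a minimal reproduction of the pattern, unrelated to LeRobot's actual control code:

```python
def run_episode(observations):
    # 'action' is only bound inside the loop body; if 'observations'
    # is empty, the loop never runs and returning 'action' raises
    # UnboundLocalError.
    for obs in observations:
        action = obs * 2  # stand-in for policy inference
    return action

result = run_episode([1, 2, 3])  # 'action' bound on every iteration

try:
    run_episode([])  # empty input: 'action' is never bound
    failed = False
except UnboundLocalError:
    failed = True
```

The usual fixes are initializing `action = None` before the loop or raising a clear error when the input is empty.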
huggingface/lerobot | 1,093 | List of available task | Thank you for your effort. Can you provide a list of available tasks (not just environments) for better understanding and usage? | https://github.com/huggingface/lerobot/issues/1093 | closed | [
"question"
] | 2025-05-10T06:18:21Z | 2025-10-17T12:03:32Z | null | return-sleep |
huggingface/transformers | 38,052 | `.to` on a `PreTrainedModel` throws a Pyright type check error. What is the correct way to put a model to the device that does not throw type check errors? | ### System Info
(venv) nicholas@B367309:tmp(master)$ transformers-cli env
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.51.1
- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version:... | https://github.com/huggingface/transformers/issues/38052 | closed | [
"bug"
] | 2025-05-09T19:01:15Z | 2025-06-29T08:03:07Z | null | nickeisenberg |
huggingface/finetrainers | 401 | how to train wan using multi-node | ### Feature request / 功能建议
Hi! I am still wondering about multi-node training of Wan2.1 14B. Do you support FSDP across nodes?
### Motivation
Currently the memory constraint is very harsh for long-video LoRA fine-tuning
### Your contribution
N/A | https://github.com/huggingface/finetrainers/issues/401 | open | [] | 2025-05-09T18:11:07Z | 2025-05-09T18:11:07Z | null | Radioheading |
huggingface/lerobot | 1,091 | Diffusion policy for different tasks instead of PushT | Thank you all for the great job. I want to know if I can train the diffusion policy for different tasks besides the PushT task. How to achieve that? If the task is a new custom task with custom dataset, is there any feasible solution to solve that?
Thank you for your help! | https://github.com/huggingface/lerobot/issues/1091 | closed | [
"question",
"policies",
"stale"
] | 2025-05-09T15:44:20Z | 2025-12-31T02:35:27Z | null | siqisiqisiqisiqi |
huggingface/lerobot | 1,086 | push_to_the_hub error | ### System Info
```Shell
- `lerobot` version: 0.1.0
- Platform: macOS-14.6.1-arm64-arm-64bit
- Python version: 3.10.13
- Huggingface_hub version: 0.30.2
- Dataset version: 3.5.0
- Numpy version: 2.2.5
- PyTorch version (GPU?): 2.7.0 (False)
- Cuda version: N/A
- Using GPU in script?: <fill in>
```
### Information
- ... | https://github.com/huggingface/lerobot/issues/1086 | closed | [
"question"
] | 2025-05-09T03:48:09Z | 2025-10-17T11:55:25Z | null | jungwonshin |
huggingface/trl | 3,424 | [GRPO] How to train model using vLLM and model parallelism on one node? | I tried to start GRPO trainer with vLLM and model parallelism on a single node with 8 GPUs (8 x A100 80G).
My plan was to use one GPU as the vLLM server and other 7 GPUs to load model with model parallelism (e.g., `device_map="auto"`)
```
CUDA_VISIBLE_DEVICES=0 trl vllm-serve --model <model_path> &
CUDA_VISIBLE_DEVIC... | https://github.com/huggingface/trl/issues/3424 | open | [] | 2025-05-08T17:22:19Z | 2025-12-02T22:48:13Z | null | zhiqihuang |
huggingface/lerobot | 1,082 | When will the OpenVLA-OFT policy be added? | https://github.com/huggingface/lerobot/issues/1082 | closed | [
"question",
"policies",
"stale"
] | 2025-05-08T09:16:16Z | 2025-12-31T02:35:30Z | null | zmf2022 | |
huggingface/text-generation-inference | 3,213 | Does it support the Huawei Atlas 300 graphics card? | ### System Info
Does the TGI inference framework support Huawei Atlas 300I graphics cards? Could you help come up with a compatible solution?
### Information
- [x] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
.
### Expected behavior
C... | https://github.com/huggingface/text-generation-inference/issues/3213 | open | [] | 2025-05-08T03:18:30Z | 2025-05-08T03:18:38Z | 0 | fxb392 |
huggingface/trl | 3,419 | [GRPO] How to do gradient accumulation over sampled outputs? | Greetings,
I am wondering if we have this feature to do gradient accumulation over sampled outputs. For example, if I have `num_generations = 4`, so we have a single query `q1`, we have`completions = [o1, o2, o3, o4]`. I want to set that `per_device_train_batch_size=2, gradient_accumulation_steps=2`. So that the GPU o... | https://github.com/huggingface/trl/issues/3419 | closed | [] | 2025-05-07T17:49:36Z | 2025-05-09T06:26:29Z | null | SpaceHunterInf |
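The accumulation arithmetic described in the row above can be sketched independently of TRL: with `num_generations = 4`, `per_device_train_batch_size = 2`, and `gradient_accumulation_steps = 2`, the four sampled completions split into two micro-batches whose gradients are summed before a single optimizer step. A toy sketch of just the batching bookkeeping (not TRL's implementation):

```python
def micro_batches(completions, per_device_batch_size):
    """Split one query's sampled completions into micro-batches whose
    gradients would be accumulated before the optimizer step."""
    return [
        completions[i : i + per_device_batch_size]
        for i in range(0, len(completions), per_device_batch_size)
    ]

completions = ["o1", "o2", "o3", "o4"]  # num_generations = 4
batches = micro_batches(completions, per_device_batch_size=2)
steps_needed = len(batches)  # == gradient_accumulation_steps for one query
```

Each micro-batch holds only two completions in GPU memory at a time, which is the memory saving the question is after.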
huggingface/lerobot | 1,080 | Update `control_sim_robot.py` to use the new configs | Adding this issue to track one of the TODO's of this MR #550
As of now, [this script](https://github.com/huggingface/lerobot/blob/8cfab3882480bdde38e42d93a9752de5ed42cae2/lerobot/scripts/control_sim_robot.py) is outdated; It does not use the new configuration classes. | https://github.com/huggingface/lerobot/issues/1080 | closed | [
"question"
] | 2025-05-07T11:37:47Z | 2025-06-19T14:04:11Z | null | jccalvojackson |
huggingface/Math-Verify | 53 | How to turn off error print? | When using multiprocessing, there is a lot of error message printed. | https://github.com/huggingface/Math-Verify/issues/53 | closed | [] | 2025-05-07T08:19:36Z | 2025-07-02T16:07:02Z | null | wenxueru |
huggingface/peft | 2,533 | Integrate TLoRA (Tri-Matrix LoRA) | ### Feature request
We would like to propose integrating a novel parameter-efficient fine-tuning method called **TLoRA (Tri-Matrix LoRA)** into the `peft` library. We believe TLoRA offers significant advantages in terms of parameter efficiency, making it a valuable addition to the PEFT ecosystem.
Our method is detail... | https://github.com/huggingface/peft/issues/2533 | closed | [] | 2025-05-06T21:22:50Z | 2025-06-15T15:03:57Z | 2 | itanvir |
huggingface/candle | 2,945 | Operating steps from scratch for beginners? | from
a
To
Z | https://github.com/huggingface/candle/issues/2945 | open | [] | 2025-05-06T15:34:02Z | 2025-05-06T15:34:02Z | 0 | Qarqor5555555 |
huggingface/lerobot | 1,072 | How to merge collected data into one? | For stability I collect data 10 episodes at a time, which forms this:
repo_id/first,repo_id_second...
I want to merge them together into repo_id/one_task for training, but it's hard to fix the meta files.
I'm not sure if this approach helps with training, or if I should determine the number of episodes needed for training in a... | https://github.com/huggingface/lerobot/issues/1072 | closed | [
"question",
"dataset"
] | 2025-05-06T02:27:24Z | 2025-05-07T02:29:27Z | null | milong26 |
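The fiddly part of such a merge is renumbering episode indices so the merged ids stay contiguous; a hypothetical sketch of that bookkeeping (plain dicts standing in for the meta records, not the real LeRobot meta-file format):

```python
def merge_episode_meta(*datasets):
    """Concatenate per-dataset episode records, shifting each
    'episode_index' by the number of episodes already merged."""
    merged, offset = [], 0
    for episodes in datasets:
        for ep in episodes:
            merged.append({**ep, "episode_index": ep["episode_index"] + offset})
        offset += len(episodes)
    return merged

first = [{"episode_index": 0}, {"episode_index": 1}]
second = [{"episode_index": 0}]
merged = merge_episode_meta(first, second)
```

The same offset logic would have to be applied consistently to frame indices and video file names for the merged dataset to stay self-consistent.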
huggingface/diffusers | 11,499 | [Performance] Issue on *SanaLinearAttnProcessor2_0 family. 1.06X speedup can be reached with a simple change. | ### Sys env:
OS Ubuntu 22.04
PyTorch 2.4.0+cu121
sana == 0.0.1
Diffusers == 0.34.0.dev0
### Reproduce:
Try the demo test code:
```
import torch
from diffusers import SanaPAGPipeline
pipe = SanaPAGPipeline.from_pretrained(
# "Efficient-Large-Model/Sana_1600M_512px_diffusers",
"Efficient-Large-Model/SANA1.5_1.6... | https://github.com/huggingface/diffusers/issues/11499 | closed | [] | 2025-05-05T21:26:51Z | 2025-08-08T23:44:59Z | 11 | David-Dingle |
huggingface/candle | 2,944 | finetuning a YOLOv8 candle model | What is the correct way to finetune a YOLOv8 model to be used here? Finetuning a model with candle is not straightforward.
candle\candle-examples\examples\yolo-v8\main.rs
// model model architecture points at ultralytics : https://github.com/ultralytics/ultralytics/issues/189
But my model trained using ultralytics and co... | https://github.com/huggingface/candle/issues/2944 | open | [] | 2025-05-05T15:21:48Z | 2025-05-05T18:46:52Z | 0 | flutter-painter |
huggingface/diffusers | 11,489 | Error when I'm trying to train a Flux lora with train_dreambooth_lora_flux_advanced | ### Describe the bug
Hi! I'm trying to train my lora model with [train_dreambooth_lora_flux_advanced](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py) script.
When I try to train my model with the prior preservation flag, I get an error.
... | https://github.com/huggingface/diffusers/issues/11489 | open | [
"bug",
"training"
] | 2025-05-04T21:19:23Z | 2025-07-06T19:38:40Z | 4 | Mnwa |
huggingface/diffusers | 11,488 | Sincerely Request The Support for Flux PAG Pipeline | When the pag pipeline of flux can be supported? | https://github.com/huggingface/diffusers/issues/11488 | open | [
"help wanted",
"Good second issue"
] | 2025-05-04T11:12:05Z | 2025-05-16T04:53:52Z | 2 | PlutoQyl |
huggingface/text-generation-inference | 3,208 | Can I use TGI in a Supercomputer? | I want to generate somewhere around 1 trillion tokens and I was thinking of using TGI on a European Supercomputer. is there a way to achieve this without relying on docker and downloading the model natively and then load it on the compute node and serve it? @Wauplin | https://github.com/huggingface/text-generation-inference/issues/3208 | open | [] | 2025-05-03T15:13:24Z | 2025-05-15T08:55:08Z | 4 | sleepingcat4 |
huggingface/transformers.js | 1,305 | Trying to convert dinov2 model | ### Question
I tried to convert [this model](https://huggingface.co/nguyenkhoa/dinov2_Liveness_detection_v2.2.3) using the following command:
`python -m scripts.convert --model_id nguyenkhoa/dinov2_Liveness_detection_v2.2.3 --quantize --task image-classification`
but got the following error:
``ValueError: Trying t... | https://github.com/huggingface/transformers.js/issues/1305 | closed | [
"question"
] | 2025-05-01T19:56:28Z | 2025-05-05T22:18:48Z | null | jdp8 |
huggingface/datasets | 7,545 | Networked Pull Through Cache | ### Feature request
Introduce a HF_DATASET_CACHE_NETWORK_LOCATION configuration (e.g. an environment variable) together with a companion network cache service.
Enable a three-tier cache lookup for datasets:
1. Local on-disk cache
2. Configurable network cache proxy
3. Official Hugging Face Hub
### Motivation
- Dis... | https://github.com/huggingface/datasets/issues/7545 | open | [
"enhancement"
] | 2025-04-30T15:16:33Z | 2025-04-30T15:16:33Z | 0 | wrmedford |
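The three-tier lookup proposed in the row above can be sketched as a chain of stores where a hit in a slower tier back-fills the faster tiers (pull-through behaviour); a toy illustration with plain dicts standing in for local disk, network cache, and Hub:

```python
def resolve(key, tiers):
    """Try each cache tier in order; on a hit in a slower tier,
    back-fill every faster tier that missed (pull-through)."""
    for depth, tier in enumerate(tiers):
        if key in tier:
            value = tier[key]
            for faster in tiers[:depth]:  # populate the tiers we missed
                faster[key] = value
            return value
    raise KeyError(key)

local, network, hub = {}, {}, {"squad": "dataset-bytes"}
value = resolve("squad", [local, network, hub])
# Subsequent lookups now hit 'local' without touching the Hub.
```

The tier order is exactly the 1-2-3 lookup in the feature request: local disk first, then the configurable network proxy, then the official Hub.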
huggingface/transformers | 37,895 | How to backpropagate the gradients of the embeddings output by the image processor to the input image tensor? | ### Feature request
I'm using the processor of Qwen2.5-VL, and the image processor within it should be Qwen2ImageProcessor. The input image I provide is a PyTorch tensor with gradients, and the processor outputs the feature embeddings of the image. How can I ensure that the gradient flow is not interrupted during this... | https://github.com/huggingface/transformers/issues/37895 | open | [
"Feature request"
] | 2025-04-30T15:06:40Z | 2025-05-01T13:36:24Z | null | weiminbai |
huggingface/diffusers | 11,466 | Finetuning of flux or scratch training | I am new to this field and wanted to know if Is there any code available for training the flux from scratch or even finetuning the existing model. All I see is the dreambooth or Lora finetuning. | https://github.com/huggingface/diffusers/issues/11466 | open | [] | 2025-04-30T07:45:49Z | 2025-05-30T16:32:33Z | 2 | preethamp0197 |
huggingface/hf-hub | 104 | What is this software licensed under? | Would this also be Apache 2 like in https://github.com/huggingface/huggingface_hub/?
Thanks! | https://github.com/huggingface/hf-hub/issues/104 | closed | [] | 2025-04-29T16:27:10Z | 2025-06-16T09:09:43Z | null | nathankw |
huggingface/optimum | 2,248 | Export cli export RT-Detr | ```python
Traceback (most recent call last):
File "/usr/local/bin/optimum-cli", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.11/dist-packages/optimum/commands/optimum_cli.py", line 208, in main
service.run()
File "/usr/local/lib/python3.11/dist-packages/optimum/com... | https://github.com/huggingface/optimum/issues/2248 | closed | [] | 2025-04-29T08:23:17Z | 2025-05-05T08:03:21Z | 1 | TheMattBin |
huggingface/open-muse | 144 | how to set the minimum learning rate for cosine lr_scheduler? | @dataclass
class TrainingArguments(transformers.TrainingArguments):
gradient_checkpointing_kwargs={'use_reentrant':False}
lr_scheduler_kwargs={
"eta_min":1e-6,
"num_cycles":1,
}
It did not work. How do I set the minimum learning rate in transformers 4.51.3? | https://github.com/huggingface/open-muse/issues/144 | closed | [] | 2025-04-29T02:18:59Z | 2025-04-29T02:20:42Z | null | xubuvd |
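For reference, cosine annealing with a floor follows `lr(t) = eta_min + (lr_max - eta_min) * (1 + cos(pi * t / T)) / 2`, so the schedule starts at `lr_max` and decays to `eta_min`; a standalone sketch of the formula (not the `transformers` scheduler itself, whose cosine variant may not expose `eta_min`):

```python
import math

def cosine_lr(step, total_steps, lr_max, eta_min=0.0):
    """Cosine-annealed learning rate with a minimum floor."""
    progress = step / total_steps
    return eta_min + (lr_max - eta_min) * 0.5 * (1 + math.cos(math.pi * progress))

start = cosine_lr(0, 1000, lr_max=1e-4, eta_min=1e-6)   # equals lr_max
end = cosine_lr(1000, 1000, lr_max=1e-4, eta_min=1e-6)  # equals eta_min
```

If the installed scheduler ignores `eta_min`, a custom `LambdaLR` built from this formula is a common workaround.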
huggingface/lerobot | 1,045 | Inefficient Config Structure without Hydra | Hi, I notice that the repo used Hydra before, which can modify some config param or create new config yaml files. However, this was deprecated. I wonder how to efficiently modify a new config file for policy without writing these params in the command line each time? | https://github.com/huggingface/lerobot/issues/1045 | closed | [
"question",
"configuration",
"stale"
] | 2025-04-28T11:48:08Z | 2025-11-18T02:30:46Z | null | jiangranlv |
huggingface/diffusers | 11,432 | `.from_pretrained` `torch_dtype="auto"` argument not working a expected | ### Describe the bug
Hey dear diffusers team,
thanks a lot for all your hard work!
I would like to make use of the `torch_dtype="auto"` keyword argument when loading a model/pipeline as specified [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained.t... | https://github.com/huggingface/diffusers/issues/11432 | closed | [
"bug"
] | 2025-04-28T04:31:26Z | 2025-05-13T01:42:37Z | 3 | johannaSommer |
huggingface/lerobot | 1,041 | image transform of pi0 is inconsistent with openpi | Thank you for the pi0 work in lerobot. However, I found that the image transform is quite different from openpi's.
image transform of lerobot pi0:
image transform of openpi:
# Remove All Token Embeddings
pipeline.unload_textual_inversion()
# Remove Just One Token
pipeline.unload_textual_inversion("<Moe-Bius>")
But how do you know which are c... | https://github.com/huggingface/diffusers/issues/11419 | closed | [
"stale"
] | 2025-04-25T17:18:07Z | 2025-05-27T18:09:45Z | null | Eduardishion |
huggingface/diffusers | 11,418 | How to add flux1-fill-dev-fp8.safetensors | ### Describe the bug
Hi!
How to use flux1-fill-dev-fp8.safetensors in diffusers?
Now I have code:
```
def init_pipeline(device: str):
logger.info(f"Loading FLUX Inpaint Pipeline (Fill‑dev) on {device}")
pipe = FluxFillPipeline.from_pretrained(
"black-forest-labs/FLUX.1-Fill-dev",
torch_dtype=t... | https://github.com/huggingface/diffusers/issues/11418 | closed | [
"bug"
] | 2025-04-25T14:58:08Z | 2025-04-28T19:06:17Z | null | SlimRG |
huggingface/optimum | 2,242 | [onnx] What are the functions of the generated files by optimum-cli? | ### System Info
```shell
I tried to use **optimum-cli** to export an ONNX file for llama, but instead of a single ONNX file as expected, I get a lot of files and I don't know what they are used for.
(MindSpore) [ma-user llama149]$ls onnx_model/
config.json generation_config.json model.onnx model.onnx_data special_token... | https://github.com/huggingface/optimum/issues/2242 | closed | [] | 2025-04-25T13:12:35Z | 2025-04-28T09:18:06Z | 1 | vfdff |
huggingface/diffusers | 11,417 | attributeerror: 'distributeddataparallel' object has no attribute 'dtype'. did you mean: 'type'? | ### Describe the bug
attributeerror: 'distributeddataparallel' object has no attribute 'dtype'. did you mean: 'type'?
### Reproduction
export MODEL_NAME="black-forest-labs/FLUX.1-dev"
export OUTPUT_DIR="trained-flux-dev-dreambooth-lora"
accelerate launch train_dreambooth_lora_flux.py \
--pretrained_model_name_or_... | https://github.com/huggingface/diffusers/issues/11417 | open | [
"bug",
"stale"
] | 2025-04-25T03:30:52Z | 2025-05-25T15:02:30Z | 1 | asjqmasjqm |
huggingface/datasets | 7,536 | [Errno 13] Permission denied: on `.incomplete` file | ### Describe the bug
When downloading a dataset, we frequently hit the below Permission Denied error. This looks to happen (at least) across datasets in HF, S3, and GCS.
It looks like the `temp_file` being passed [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L412) can somet... | https://github.com/huggingface/datasets/issues/7536 | closed | [] | 2025-04-24T20:52:45Z | 2025-05-06T13:05:01Z | 4 | ryan-clancy |
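A common pattern behind `.incomplete` files is downloading into a sibling temp file and atomically renaming it into place once complete; a sketch of that pattern using only the standard library (not `datasets`' actual implementation):

```python
import os
import tempfile

def atomic_write(path, data: bytes):
    """Write to a sibling '.incomplete' temp file, then atomically
    replace the target so readers never observe a partial file."""
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".incomplete")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp, path)  # atomic on POSIX; same filesystem required
    except BaseException:
        os.unlink(tmp)  # clean up the partial file on failure
        raise

target = os.path.join(tempfile.gettempdir(), "demo.bin")
atomic_write(target, b"hello")
```

Permission-denied errors in this scheme typically mean the process lacks write access to the directory holding the `.incomplete` file, not to the final target itself.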
huggingface/diffusers | 11,396 | How to convert the hidream lora trained by diffusers to a format that comfyui can load? | ### Describe the bug
The hidream lora trained by diffusers can't be loaded in ComfyUI; how can I convert it?
### Reproduction
No
### Logs
```shell
```
### System Info
No
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11396 | closed | [
"bug",
"stale"
] | 2025-04-23T13:13:34Z | 2025-06-23T09:49:19Z | null | yinguoweiOvO |
huggingface/candle | 2,916 | how to save and load the model | I use varmap.save to save the varmap, but when I use varmap.load I get back an empty varmap. Is there any way to save the trained model? | https://github.com/huggingface/candle/issues/2916 | closed | [] | 2025-04-23T11:10:04Z | 2025-04-24T02:25:37Z | null | liguheng |
huggingface/tokenizers | 1,768 | How to debug tokenizers with python? | Hi, I have a technical question. After installing transformers via pip, I successfully installed tokenizers==0.21.1 and transformers==4.49.0. When running the code:
`tokenizer = AutoTokenizer.from_pretrained("../Qwen2") # (tokenizer configs in this folder)`
`tokenizer.encode(data)`
I want to trace the program flow to ... | https://github.com/huggingface/tokenizers/issues/1768 | open | [] | 2025-04-23T09:37:20Z | 2025-04-30T14:11:11Z | null | JinJieGan |
huggingface/diffusers | 11,390 | Better image interpolation in training scripts follow up | With https://github.com/huggingface/diffusers/pull/11206 we did a small quality improvement for the SDXL Dreambooth LoRA script by making `LANCZOS` the default interpolation mode for the image resizing.
This issue is to ask for help from the community to bring this change to the other training scripts, specially for t... | https://github.com/huggingface/diffusers/issues/11390 | closed | [
"good first issue",
"contributions-welcome"
] | 2025-04-23T00:04:10Z | 2025-05-05T16:35:18Z | 20 | asomoza |
huggingface/lerobot | 1,019 | How to resume dataset creation after interruption instead of starting from scratch? | Recently our dataset creation + upload got interrupted due to an error not related to LeRobot. However, I have not been able to launch the dataset creation using the information already processed. My cache folder shows the data, meta, and videos folders, and I was able to determine using the episodes.jsonl file in meta... | https://github.com/huggingface/lerobot/issues/1019 | closed | [] | 2025-04-22T21:30:12Z | 2025-04-22T21:45:00Z | null | Anas-7 |
huggingface/peft | 2,508 | How to save the custom module into adapter_model.safetensors when integrating a new peft method | I just don't know where to save and load the module, or how to mark which modules need to be saved.
For example, we want an MoE of LoRA, where multiple LoRAs and a router will be the trainable parts and need to be saved.
huggingface/lerobot | 1,015 | How to efficiently collect and standardize datasets from multiple Gymnasium environments? | Hello, I am studying how to collect datasets from various Gymnasium environments for reinforcement learning and imitation learning experiments. Currently, I can collect some data from real environments, but how do I collect data from Gymnasium?
"question",
"dataset",
"good first issue"
] | 2025-04-22T08:50:34Z | 2025-10-17T11:16:09Z | null | ybu-lxd |
huggingface/lerobot | 1,013 | When creating dataset, how to save_episode with existing video? | For video with compatible frames, height and width that is recorded/rendered elsewhere, how can I add it to an episode directly without redundant decode-encode round-trip? | https://github.com/huggingface/lerobot/issues/1013 | closed | [
"enhancement",
"dataset",
"stale"
] | 2025-04-22T04:05:10Z | 2025-12-25T02:35:25Z | null | jjyyxx |
huggingface/lerobot | 1,012 | why chunk_size not used in PI0? | https://github.com/huggingface/lerobot/blob/b43ece89340e7d250574ae7f5aaed5e8389114bd/lerobot/common/policies/pi0/modeling_pi0.py#L658
Is it more meaningful and reasonable here to change `n_action_steps` to `chunk_size`, since `chunk_size` means prediction action horizon and `n_action_steps` means action steps actually... | https://github.com/huggingface/lerobot/issues/1012 | closed | [
"question",
"policies",
"stale"
] | 2025-04-22T03:43:38Z | 2025-11-04T02:30:18Z | null | feixyz10 |
huggingface/huggingface_hub | 3,020 | How to run apps in local mode? local_files_only is failing | The app is running perfectly fine when internet available
All models downloaded into
`os.environ['HF_HOME'] = os.path.abspath(os.path.realpath(os.path.join(os.path.dirname(__file__), './hf_download')))`
When i set like below
```
# Set local_files_only based on offline mode
local_files_only = args.offline
if local_... | https://github.com/huggingface/huggingface_hub/issues/3020 | closed | [
"bug"
] | 2025-04-21T23:46:06Z | 2025-04-22T09:24:57Z | null | FurkanGozukara |
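A sketch of the usual offline setup: `HF_HUB_OFFLINE=1` is huggingface_hub's documented switch for disabling network access, and both variables must be set before `huggingface_hub` (or anything importing it) is loaded; the cache path below is a placeholder:

```python
import os

# Point the cache at a local folder and force offline resolution.
# Both variables must be set before huggingface_hub is imported,
# because the library reads them at import time.
os.environ["HF_HOME"] = os.path.abspath("./hf_download")
os.environ["HF_HUB_OFFLINE"] = "1"

offline = os.environ["HF_HUB_OFFLINE"] == "1"
```

With this in place, `local_files_only=True` failures usually mean the file was never fully downloaded into the cache pointed to by `HF_HOME`.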
huggingface/finetrainers | 378 | How to finetune CogVideoX1.5-5B T2V LoRA? | Hello. I am still unfamiliar with the finetuning process. I want to finetune CogVideoX1.5-5B T2V with LoRA. I have a single RTX 4090. I tried to re-run the bash script "finetrainers\examples\training\sft\cogvideox\crush_smol_lora\train.sh" with my own dataset and ended up with the error message
`train.sh: line 130: accelerate: comm... | https://github.com/huggingface/finetrainers/issues/378 | open | [] | 2025-04-21T17:17:08Z | 2025-04-24T06:24:06Z | null | MaulanaYusufIkhsanRobbani |
huggingface/trl | 3,333 | How can I set the dataset to not shuffle? It seems there is no such option. | I'm using GRPOTrainer for training, and based on the logs I've printed, it seems that the dataset is being shuffled. However, the order of samples in the dataset is very important to me, and I don't want it to be shuffled. What should I do? I've checked the documentation but couldn't find any parameter to control this. | https://github.com/huggingface/trl/issues/3333 | closed | [
"❓ question",
"🏋 GRPO"
] | 2025-04-21T11:11:53Z | 2025-04-21T21:34:33Z | null | Tuziking |
huggingface/trl | 3,331 | how to run multi-adapter PPO training in TRL==0.16.1 ? | In `TRL==0.11.0`, we can use multi-adapter to train PPO model like:
- $\pi_\text{sft}$ sft model as base model
- $\pi_\text{sft} + \text{LoRA}_\text{rm}$ as reward model
- $\pi_\text{sft} + \text{LoRA}_\text{policy}$ as policy model
- $\pi_\text{sft} + \text{LoRA}_\text{critic}$ as value model
in v0.16.0 how to run... | https://github.com/huggingface/trl/issues/3331 | closed | [
"❓ question",
"🏋 PPO",
"🏋 SFT"
] | 2025-04-21T06:26:32Z | 2025-06-17T08:59:11Z | null | dhcode-cpp |
huggingface/huggingface_hub | 3,019 | How to solve "Spaces stuck in Building" problems | ### Describe the bug
Public spaces may get stuck in Building after restarting; the error log is as follows:
build error
Unexpected job error
ERROR: failed to push spaces-registry.huggingface.tech/spaces/:cpu--: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-: 401 Una... | https://github.com/huggingface/huggingface_hub/issues/3019 | closed | [
"bug"
] | 2025-04-21T03:11:11Z | 2025-04-22T07:50:01Z | null | ghost |
huggingface/datasets | 7,530 | How to solve "Spaces stuck in Building" problems | ### Describe the bug
Public spaces may get stuck in Building after restarting; the error log is as follows:
build error
Unexpected job error
ERROR: failed to push spaces-registry.huggingface.tech/spaces/*:cpu-*-*: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-*: 401... | https://github.com/huggingface/datasets/issues/7530 | closed | [] | 2025-04-21T03:08:38Z | 2025-11-11T00:57:14Z | null | ghost |
huggingface/lerobot | 1,005 | [pi0] n_action_step vs chunk_size | In modeling_pi0.py, the config variable `chunk_size` is never used. Instead, the action queue is set to be the size of `n_action_step`, and the training loss is also calculated on the actions of size `n_action_step`.
But I thought what should happen is that the model would predict actions of length `chunk size` (and ... | https://github.com/huggingface/lerobot/issues/1005 | closed | [
"question",
"policies",
"stale"
] | 2025-04-20T04:00:23Z | 2025-11-07T02:30:27Z | null | IrvingF7 |
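The distinction debated in the row above can be sketched as: the policy predicts `chunk_size` actions per inference call, while the runtime executes only `n_action_steps` of them before re-planning. A toy sketch of that control loop (not LeRobot's code; the numbers are illustrative):

```python
from collections import deque

CHUNK_SIZE = 50      # actions predicted per model call
N_ACTION_STEPS = 10  # actions actually executed before re-planning

def predict_chunk(t):
    # Stand-in for the policy: one "action" per future timestep.
    return [t + i for i in range(CHUNK_SIZE)]

queue = deque()
executed = []
for t in range(0, 20, N_ACTION_STEPS):  # two control cycles
    chunk = predict_chunk(t)
    queue.extend(chunk[:N_ACTION_STEPS])  # keep only the steps we will run
    while queue:
        executed.append(queue.popleft())  # execute, then re-plan
```

If the queue were sized by `chunk_size` instead, the policy would replan far less often, which is exactly the trade-off the issue asks about.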
huggingface/lerobot | 1,000 | How to implement a new policy? | How can I integrate a new policy (e.g., OpenVLA) into LeRobot, and specifically, which files do I need to modify? | https://github.com/huggingface/lerobot/issues/1000 | closed | [
"enhancement",
"policies"
] | 2025-04-19T08:53:48Z | 2025-07-29T14:30:18Z | null | Elycyx |
huggingface/prettier-plugin-vertical-align | 2 | how to use | https://github.com/huggingface/prettier-plugin-vertical-align#installation
Add plugins: ["@huggingface/prettier-plugin-vertical-align"] to your .prettierrc file.
Are you sure it should go in the .prettierrc file? | https://github.com/huggingface/prettier-plugin-vertical-align/issues/2 | closed | [] | 2025-04-19T04:15:29Z | 2025-04-24T02:53:42Z | null | twotwoba |
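Per the README quoted above, a minimal `.prettierrc` enabling the plugin would look like this (assuming the JSON flavor of `.prettierrc`; Prettier also accepts `.prettierrc.json` and YAML variants):

```json
{
  "plugins": ["@huggingface/prettier-plugin-vertical-align"]
}
```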
huggingface/lerobot | 997 | How to convert pi0-FAST | I've just worked on pi0 conversion; how do I convert pi0-FAST?

| https://github.com/huggingface/lerobot/issues/997 | closed | [
"question"
] | 2025-04-18T14:27:29Z | 2025-10-14T14:06:30Z | null | ximiluuuu |
huggingface/diffusers | 11,359 | [Feature request] LTX-Video v0.9.6 15x faster inference than non-distilled model. | **Is your feature request related to a problem? Please describe.**
No problem. This request is Low priority. As and when time allows.
**Describe the solution you'd like.**
Please support the new release of LTX-Video 0.9.6
**Describe alternatives you've considered.**
The original repo has support, but it is easier to use ... | https://github.com/huggingface/diffusers/issues/11359 | closed | [] | 2025-04-18T08:05:27Z | 2025-05-09T16:03:34Z | 6 | nitinmukesh |
huggingface/transformers.js | 1,291 | @xenova/transformers vs. @huggingface/transformers npm package | ### Question
It's pretty confusing to have both of these on npm. Which are we supposed to use?
Can you please deprecate the one that we aren't supposed to use? (`npm deprecate`) | https://github.com/huggingface/transformers.js/issues/1291 | open | [
"question"
] | 2025-04-17T16:10:36Z | 2025-10-24T10:19:03Z | null | nzakas |
huggingface/accelerate | 3,510 | Accelerate Config Error - How to debug this? | ### System Info
```Shell
pip list
absl-py 2.2.2
accelerate 1.6.0
annotated-types 0.7.0
bitsandbytes 0.45.5
diffusers 0.33.0.dev0 /data/roy/diffusers
ftfy 6.3.1
huggingface-hub 0.30.2
numpy 2.2.4
nvidia-c... | https://github.com/huggingface/accelerate/issues/3510 | closed | [] | 2025-04-17T11:12:50Z | 2025-05-19T08:46:12Z | null | KihongK |
huggingface/diffusers | 11,351 | Why does the Wan i2v video processor always use the float32 datatype? | ### Describe the bug
I found
image = self.video_processor.preprocess(image, height=height, width=width).to(device, dtype=torch.float32)
https://github.com/huggingface/diffusers/blob/29d2afbfe2e09a4ee7cc51455e51ce8b8c0e252d/src/diffusers/pipelines/wan/pipeline_wan_i2v.py#L633
in pipeline_wan_i2v.py
Why is the datatype ... | https://github.com/huggingface/diffusers/issues/11351 | closed | [
"bug"
] | 2025-04-17T07:00:42Z | 2025-05-07T03:48:24Z | 2 | DamonsJ |
huggingface/transformers | 37,570 | How to stream audio output from Qwen2.5-Omni-7B | None of the Qwen2.5-Omni-7B examples show how to stream audio output. By passing a streamer I am able to get streaming text, but how can I get streaming audio output? | https://github.com/huggingface/transformers/issues/37570 | closed | [] | 2025-04-17T04:16:35Z | 2025-07-30T08:03:44Z | null | qinxuye |
huggingface/diffusers | 11,339 | How to run multi-GPU Wan inference | Hi, I didn't find a multi-GPU inference example in the documentation. Can you give me an example, such as Wan2.1-I2V-14B-720P-Diffusers?
I would appreciate some help with this; thank you in advance. | https://github.com/huggingface/diffusers/issues/11339 | closed | [
"stale"
] | 2025-04-16T10:22:41Z | 2025-07-05T21:18:01Z | null | HeathHose |
huggingface/trl | 3,295 | I have 2 GPUs, but GPU 0 is the default; how do I specify GPU 1 for training? | ### Reproduction
```python
from trl import ...
```
outputs:
```
Traceback (most recent call last):
File "example.py", line 42, in <module>
...
```
### System Info
I have 2 GPUs, but GPU 0 is the default. How do I specify GPU 1 for training?
### Checklist
- [x] I have checked that my issue isn't already filed (see ... | https://github.com/huggingface/trl/issues/3295 | closed | [
"❓ question",
"📱 cli"
] | 2025-04-15T08:29:26Z | 2025-04-24T19:46:37Z | null | Aristomd |
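One common way to pin training to the second GPU (this is a general CUDA convention rather than a TRL-specific flag; `accelerate launch --gpu_ids 1` is another option) is to hide the other devices via `CUDA_VISIBLE_DEVICES` before any CUDA library is imported. A minimal sketch:

```python
import os

# Must run before torch / CUDA is first imported: only GPU 1 will be
# visible to this process, and frameworks will address it as device 0.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

def visible_gpus() -> list[str]:
    """Return the GPU indices this process is allowed to see."""
    value = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    return [v for v in value.split(",") if v]

print(visible_gpus())  # ['1']
```

The same effect from the shell: `CUDA_VISIBLE_DEVICES=1 python train.py`, where `train.py` stands in for your actual training entry point.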
huggingface/lerobot | 981 | How can I simulate robots without physical robots? How should I learn robot simulation? Do you have any good recommendations? | How can I simulate robots without physical robots? How should I learn robot simulation? Do you have any good recommendations? I am a beginner. | https://github.com/huggingface/lerobot/issues/981 | closed | [
"question",
"simulation"
] | 2025-04-15T04:04:33Z | 2025-10-17T11:19:34Z | null | harryhu0301 |
huggingface/diffusers | 11,321 | Flux ControlNet training README has a bug | ### Describe the bug

What are the ControlNet config parameters? The text says num_single_layers = 10, but the code sets num_single_layers=0?
### Reproduction
Check the README file.
### Logs
```shell
```
### System Info
diffusers ==0.... | https://github.com/huggingface/diffusers/issues/11321 | closed | [
"bug",
"stale"
] | 2025-04-15T01:30:58Z | 2025-10-11T09:58:52Z | 14 | Johnson-yue |
huggingface/agents-course | 428 | [QUESTION] Current schedule is non-sensical | First, the **best way to get a response fast is to ask the community** in our Discord server: https://www.hf.co/join/discord
However, if you prefer you can ask here, please **be specific**.
The course page states:
> There’s a deadline for the certification process: all the assignments must be finished before May 1st... | https://github.com/huggingface/agents-course/issues/428 | closed | [
"question"
] | 2025-04-14T18:13:31Z | 2025-04-28T06:51:58Z | null | mindcrime |
huggingface/lerobot | 975 | [Question] How to modify model & dataset to accept two input images in observation.image? | Hi, thank you for the great repo!
I’ve been going through the first three examples, and now I’d like to explore training a diffusion policy with some customized input. Specifically:
My goal:
I want each observation.image to contain two images as input (they have the same shape as the original single image).
I want t... | https://github.com/huggingface/lerobot/issues/975 | closed | [
"dataset",
"stale"
] | 2025-04-14T08:35:47Z | 2025-11-04T02:30:23Z | null | Keith-Luo |
huggingface/candle | 2,893 | How to build a multi-node inference/training in candle? | Hi team,
I'd like an example of multi-node inference/training with candle; where can I find it?
Thanks :)
-- Klaus | https://github.com/huggingface/candle/issues/2893 | open | [] | 2025-04-14T08:03:20Z | 2025-04-14T08:03:20Z | null | k82cn |
huggingface/chat-ui | 1,795 | Offline Custom Tools | Would it be possible to define/use tools that the LLMs can use in an offline state?
"Tools must use Hugging Face Gradio Spaces as we detect the input and output types automatically from the [Gradio API](https://www.gradio.app/guides/sharing-your-app#api-page)."
Is there any reason that the tools can't be hosted loca... | https://github.com/huggingface/chat-ui/issues/1795 | open | [
"enhancement"
] | 2025-04-14T02:41:19Z | 2025-04-14T02:41:19Z | 0 | cr-intezra |
huggingface/chat-ui | 1,794 | Docker Image and Local Install missing file/image/etc upload | I've used the chat-ui-db:latest image as well as cloning the repo, setting up mongo, and running npm install / npm run dev, and the UI I get does not have the icons or the ability to upload an image or file. It only has the web search button.
This would be for release 0.9.4.
Is there something in .env.local that I am missing to enable t... | https://github.com/huggingface/chat-ui/issues/1794 | open | [] | 2025-04-13T19:30:29Z | 2025-04-13T19:30:29Z | 0 | cr-intezra |
huggingface/optimum | 2,228 | Unable to convert an audio-to-audio model. | ### Feature request
``` bash
optimum-cli export onnx --model microsoft/speecht5_vc speecht5_vc_onnx/
```
Output:
``` log
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling... | https://github.com/huggingface/optimum/issues/2228 | closed | [
"Stale"
] | 2025-04-13T00:50:26Z | 2025-05-18T02:17:06Z | 1 | divinerapier |
huggingface/lerobot | 971 | Can different robotic arms share the same dataset and model? | English:
I currently have datasets and models for the Koch, SO100, and ALOHA robotic arms. Is it possible for these three arms to share the same dataset and model? If so, how should this be implemented? If not—given the significant hardware differences—what is the practical value of data sharing in this context?
@Caden... | https://github.com/huggingface/lerobot/issues/971 | closed | [
"question",
"dataset",
"stale"
] | 2025-04-12T05:03:27Z | 2025-10-17T12:06:45Z | null | ZhangWuWei |
huggingface/autotrain-advanced | 881 | Accelerators: Error fetching data. how to troubleshoot |
Getting this error message when trying to train my model using Autotrain
Accelerators: Error fetching data
Error fetching training status
My data file is a csv & correctly formatted.
What are possible ways to troubleshoot this problem?
I'm new to fine-tuning so would love any assistance | https://github.com/huggingface/autotrain-advanced/issues/881 | closed | [
"stale"
] | 2025-04-11T16:04:12Z | 2025-06-02T15:02:09Z | null | innerspacestudio |
huggingface/alignment-handbook | 215 | Use alignment-handbook on Apple Silicon | Hi, is it possible to install and use this tool on Apple Silicon? I am aware that certain dependencies, such as Flash Attention, do not work on Apple Silicon. Has anyone tried and successfully installed this tool without those dependencies? | https://github.com/huggingface/alignment-handbook/issues/215 | closed | [] | 2025-04-11T01:28:02Z | 2025-04-27T01:09:55Z | 0 | minhquoc0712 |
huggingface/lerobot | 968 | How can I work with simulated robots without a physical robot, and how should I learn? | Without a physical robot, how can I work with simulated robots? How should I learn robot simulation? Are there any good recommendations? | https://github.com/huggingface/lerobot/issues/968 | closed | [
"question",
"simulation"
] | 2025-04-10T18:10:47Z | 2025-10-08T12:54:19Z | null | harryhu0301 |
huggingface/diffusers | 11,285 | Value errors when converting to/from diffusers from original Stable Diffusion | ### Describe the bug
There's a hardcoded value of 77 tokens somewhere, where it should use the dimensions of what is actually in the model.
I have a diffusers-layout SD1.5 model, with LongCLIP.
https://huggingface.co/opendiffusionai/xllsd-alpha0
I can pull it locally, then convert to single file format, with
python ... | https://github.com/huggingface/diffusers/issues/11285 | open | [
"bug"
] | 2025-04-10T17:16:42Z | 2025-05-12T15:03:03Z | 2 | ppbrown |
huggingface/diffusers | 11,272 | What is the difference between from diffusion import *** and from diffusers import ***? | I have installed diffusers and it runs OK; however, the code fails with "No module named diffusion"
when it reaches from diffusion import ***.
What is the difference between from diffusion import *** and from diffusers import ***?
Do I need to install both, and what is the difference between diffusion and diffusers? | https://github.com/huggingface/diffusers/issues/11272 | closed | [] | 2025-04-10T05:11:56Z | 2025-04-30T02:11:51Z | null | micklexqg |
huggingface/inference-benchmarker | 11 | How to set the OPENAI_API_KEY? | There is no api_key parameter for inference-benchmarker. How do I set the OPENAI_API_KEY?
Thanks~
code there:
https://github.com/huggingface/inference-benchmarker/blob/d91a0162bdfe318fe95b9a9bbb53b1bdc39194a9/src/requests.rs#L145C1-L153C36
```bash
root@P8757303A244:/opt/inference-benchmarker# inference-benchmarker -h
Usage... | https://github.com/huggingface/inference-benchmarker/issues/11 | closed | [] | 2025-04-10T04:36:11Z | 2025-04-25T13:13:18Z | null | handsome-chips |
huggingface/transformers | 37,408 | How to solve the error of converting Qwen onnx_model to tensorRT_model? | ### **1. The transformers' Qwen ONNX model has been exported successfully.**
### **2. Convert ONNX_model to tensorRT_model failed by trtexec.**
**error info**
```
[04/10/2025-11:04:52] [E] Error[3]: IExecutionContext::setInputShape: Error Code 3: API Usage Error (Parameter check failed, condition: engineDims.d[i] ==... | https://github.com/huggingface/transformers/issues/37408 | closed | [] | 2025-04-10T04:08:47Z | 2025-06-28T08:03:06Z | null | dearwind153 |
huggingface/lerobot | 964 | RuntimeError: Could not load libtorchcodec during lerobot/scripts/train.py script | ### System Info
```Shell
- `lerobot` version: 0.1.0
- Platform: Linux-6.8.0-57-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.29.3
- Dataset version: 3.4.1
- Numpy version: 1.26.4
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Cuda version: 12040
Additionally:
ffmpeg version :... | https://github.com/huggingface/lerobot/issues/964 | closed | [
"question"
] | 2025-04-09T14:25:38Z | 2025-04-15T13:32:24Z | null | shrutichakraborty |
huggingface/transformers | 37,390 | How to reduce the original model's tokenizer vocabulary | ### Feature request
I am working on model distillation. I am currently using the nllb-distilled-600M model, but the parameters of this model are still too large, and the vocabulary supports more than 100 languages. My use case is single language translation, such as English to Hebrew. Therefore, I need to reduce the... | https://github.com/huggingface/transformers/issues/37390 | open | [
"Feature request"
] | 2025-04-09T10:45:56Z | 2025-04-09T10:53:07Z | null | masterwang22327 |
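The usual recipe for shrinking a multilingual vocabulary is: tokenize a corpus for the language pair you keep, collect the token ids that actually occur, rebuild the tokenizer around them, and slice the corresponding rows out of the embedding (and output projection) matrices. Below is a toy, framework-free sketch of just the id-remapping step; the real NLLB/SentencePiece plumbing is more involved, and all names here are illustrative:

```python
# Toy illustration of vocabulary pruning: keep only the tokens that
# actually occur in a target-language corpus, then remap the survivors
# to a dense id range. A real pruning pass must also reorder the
# embedding matrix rows to match the new ids.
def prune_vocab(vocab: dict[str, int], corpus_tokens: set[str]) -> dict[str, int]:
    kept = [tok for tok in sorted(vocab, key=vocab.get) if tok in corpus_tokens]
    return {tok: new_id for new_id, tok in enumerate(kept)}

full_vocab = {"<s>": 0, "hello": 1, "bonjour": 2, "shalom": 3, "world": 4}
used = {"<s>", "hello", "world", "shalom"}  # tokens seen in an EN->HE corpus
small = prune_vocab(full_vocab, used)
print(small)  # {'<s>': 0, 'hello': 1, 'shalom': 2, 'world': 3}
```

After remapping, the new embedding matrix is simply the old one indexed by the kept ids, in the same order.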
huggingface/datasets | 7,506 | HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access Fineweb-10BT on 4A100 GPUs using SLURM | ### Describe the bug
I am trying to run some finetunings on 4 A100 GPUs using SLURM using axolotl training framework which in turn uses Huggingface's Trainer and Accelerate on [Fineweb-10BT](https://huggingface.co/datasets/HuggingFaceFW/fineweb), but I end up running into 429 Client Error: Too Many Requests for URL er... | https://github.com/huggingface/datasets/issues/7506 | open | [] | 2025-04-09T06:32:04Z | 2025-06-29T06:04:59Z | 2 | calvintanama |
huggingface/lerobot | 960 | pi0-finetune-performance | I have been fine-tuning the provided pi0-base model on my dataset using LeRobot. After training for 100,000 steps, I found that the model performs well on tasks that appeared in my dataset, but its performance on unseen tasks is very poor. It seems to lack the generalization ability of a VLA model. Is this phenomenon n... | https://github.com/huggingface/lerobot/issues/960 | closed | [
"question",
"policies"
] | 2025-04-09T01:21:12Z | 2025-10-08T08:43:22Z | null | yanghb1 |
huggingface/lerobot | 956 | pi0 multi-GPU training | If I have multiple 4090s, how do I modify the setup to train pi0?
With only one 4090 it just errors out.
 | https://github.com/huggingface/lerobot/issues/956 | closed | [
"question"
] | 2025-04-08T13:06:27Z | 2025-11-20T03:07:56Z | null | ximiluuuu |
huggingface/transformers | 37,364 | How to find a specific function's doc in the transformers docs? | ### Feature request
Better UX for doc
### Motivation
The search and UI layout make it hard to find a specific function's doc, especially when there are so many function docs on one web page and you just cannot find what you want with in-page search.
### Your contribution
no, right now | https://github.com/huggingface/transformers/issues/37364 | open | [
"Feature request"
] | 2025-04-08T10:48:04Z | 2025-09-15T19:16:35Z | null | habaohaba |
huggingface/open-r1 | 586 | what is next for this project? | https://github.com/huggingface/open-r1/issues/586 | open | [] | 2025-04-07T21:29:54Z | 2025-04-07T21:29:54Z | null | Mnaik2 | |
huggingface/lerobot | 949 | Optional deps in using LeRobot as am optional package | Hi, we are working on enabling LeRobot dataset generation in [IsaacLab](https://github.com/isaac-sim/IsaacLab), such that developers could create data with IsaacLab data generation workflow and use it in their robot learning models.
The asks are,
1. Is there any scheduled release, such that downstream devs could ha... | https://github.com/huggingface/lerobot/issues/949 | closed | [
"question",
"dataset",
"simulation",
"stale"
] | 2025-04-07T16:55:48Z | 2025-10-21T02:29:27Z | null | xyao-nv |
huggingface/datasets | 7,502 | `load_dataset` of size 40GB creates a cache of >720GB | Hi there,
I am trying to load a dataset from the Hugging Face Hub and split it into train and validation splits. Somehow, when I try to do it with `load_dataset`, it exhausts my disk quota. So, I tried manually downloading the parquet files from the hub and loading them as follows:
```python
ds = DatasetDict(
... | https://github.com/huggingface/datasets/issues/7502 | closed | [] | 2025-04-07T16:52:34Z | 2025-04-15T15:22:12Z | 2 | pietrolesci |
huggingface/trl | 3,254 | How to get completion_length? | I noticed that during GRPO training, `completion_length` is recorded. However, I found that it’s not simply obtained by `len(completion)`. How is this calculated—by tokens? Is it possible for me to access the `completion_length` for each sample?
| https://github.com/huggingface/trl/issues/3254 | open | [
"❓ question",
"🏋 GRPO"
] | 2025-04-07T15:02:04Z | 2025-04-11T03:10:20Z | null | Tuziking |
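Trainers like this typically measure completion length in tokens rather than characters: the sum of the completion mask over the generated part, so padding is excluded. A minimal illustration with made-up ids (this is not TRL's actual code, just the counting idea):

```python
def completion_length(completion_ids: list[int], completion_mask: list[int]) -> int:
    """Token count of the generated completion, excluding padding."""
    assert len(completion_ids) == len(completion_mask)
    return sum(completion_mask)

ids  = [17, 42, 3, 0, 0]   # generated token ids, right-padded with 0
mask = [1, 1, 1, 0, 0]     # 1 = real token, 0 = padding
print(completion_length(ids, mask))  # 3
```

Since the count is computed per sample before any averaging, logging the individual lengths alongside the mean is straightforward.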
huggingface/diffusers | 11,220 | Unconditional image generation documentation page not working as expected | ### Describe the bug
When consulting the documentation for [unconditional image generation](https://huggingface.co/docs/diffusers/using-diffusers/unconditional_image_generation), the last embedded page seems to contain an error that blocks it from being shown (see image below). This is @stevhliu's model stored in [thi... | https://github.com/huggingface/diffusers/issues/11220 | closed | [
"bug"
] | 2025-04-07T10:32:45Z | 2025-04-08T08:47:18Z | 2 | alvaro-mazcu |