Columns: repo (string, 147 classes); number (int64, 1 to 172k); title (string, 2 to 476 chars); body (string, 0 to 5k chars); url (string, 39 to 70 chars); state (string, 2 classes); labels (list, 0 to 9 items); created_at (timestamp[ns, tz=UTC], 2017-01-18 18:50:08 to 2026-01-06 07:33:18); updated_at (timestamp[ns, tz=UTC], 2017-01-18 19:20:07 to 2026-01-06 08:03:39); comments (int64, 0 to 58); user (string, 2 to 28 chars)
huggingface/diffusers
11,561
FluxFillPipeline: support loading IP Adapter.
### Model/Pipeline/Scheduler description `FluxFillPipeline` object has no attribute `load_ip_adapter`. I really need this, thanks! ### Open source status - [ ] The model implementation is available. - [ ] The model weights are available (Only relevant if addition is not a scheduler). ### Provide useful links for the implementation _No response_
https://github.com/huggingface/diffusers/issues/11561
closed
[ "help wanted", "Good second issue" ]
2025-05-15T08:58:42Z
2025-06-17T08:48:28Z
6
PineREN
huggingface/lerobot
1,111
Unrecognized argument policy.path. How to load a pretrained model?
When I run this command: ``` python lerobot/scripts/control_robot.py --robot.type so100 --control.type record --control.fps 30 --control.single_task "Grasp a yellow tape and put it to yellow square." --control.repo_id a_cam_1/result --control.tags '["tutorial"]' --control.warmup_time_s 5 --control.episode_time_s 30 --control.reset_time_s 10 --control.m_episodes 1 --control.push_to_hub false --control.policy,path output/checkpoints/last/pretrained_model ``` I got: ``` usage: control_robot.py [-h] [--config_path str] [--robot str] [--robot.type {aloha,koch,koch_bimanual,moss,so101,so100,stretch,lekiwi}] [--robot.gripper_open_degree str] [--robot.max_relative_target str] [--robot.ip str] [--robot.port str] [--robot.video_port str] [--robot.cameras str] [--robot.calibration_dir str] [--robot.leader_arms str] [--robot.follower_arms str] [--robot.teleop_keys str] [--robot.mock str] [--control str] [--control.type {calibrate,teleoperate,record,replay,remote_robot}] [--control.arms str] [--control.teleop_time_s str] [--control.single_task str] [--policy str] [--control.policy.type {act,diffusion,pi0,tdmpc,vqbet,pi0fast}] [--control.policy.replace_final_stride_with_dilation str] [--control.policy.pre_norm str] [--control.policy.dim_model str] [--control.policy.n_heads str] [--control.policy.dim_feedforward str] [--control.policy.feedforward_activation str] [--control.policy.n_encoder_layers str] [--control.policy.n_decoder_layers str] [--control.policy.use_vae str] [--control.policy.n_vae_encoder_layers str] [--control.policy.temporal_ensemble_coeff str] [--control.policy.kl_weight str] [--control.policy.optimizer_lr_backbone str] [--control.policy.drop_n_last_frames str] [--control.policy.use_separate_rgb_encoder_per_camera str] [--control.policy.down_dims str] [--control.policy.kernel_size str] [--control.policy.n_groups str] [--control.policy.diffusion_step_embed_dim str] [--control.policy.use_film_scale_modulation str] [--control.policy.noise_scheduler_type str] 
[--control.policy.num_train_timesteps str] [--control.policy.beta_schedule str] [--control.policy.beta_start str] [--control.policy.beta_end str] [--control.policy.prediction_type str] [--control.policy.clip_sample str] [--control.policy.clip_sample_range str] [--control.policy.num_inference_steps str] [--control.policy.do_mask_loss_for_padding str] [--control.policy.scheduler_name str] [--control.policy.num_steps str] [--control.policy.attention_implementation str] [--control.policy.train_expert_only str] [--control.policy.train_state_proj str] [--control.policy.n_action_repeats str] [--control.policy.horizon str] [--control.policy.image_encoder_hidden_dim str] [--control.policy.state_encoder_hidden_dim str] [--control.policy.latent_dim str] [--control.policy.q_ensemble_size str] [--control.policy.mlp_dim str] [--control.policy.discount str] [--control.policy.use_mpc str] [--control.policy.cem_iterations str] [--control.policy.max_std str] [--control.policy.min_std str] [--control.policy.n_gaussian_samples str] [--control.policy.n_pi_samples str] [--control.policy.uncertainty_regularizer_coeff str] [--control.policy.n_elites str] [--control.policy.elite_weighting_temperature str] [--control.policy.gaussian_mean_momentum str] [--control.policy.max_random_shift_ratio str] [--control.policy.reward_coeff str] [--control.policy.expectile_weight str] [--control.policy.value_coeff str] [--control.policy.consistency_coeff str] [--control.policy.advantage_scaling str] [--control.policy.pi_coeff str] [--control.policy.temporal_decay_coeff str] [--control.policy.target_model_momentum str] [--control.policy.n_action_pred_token str] [--control.policy.action_chunk_size str] [--control.policy.vision_backbone str] [--control.policy.crop_shape str] [--control.policy.crop_is_random str] [--control.policy.pretrained_backbone_weights str] [--control.policy.use_group_norm str] [--control.policy.spatial_softmax_num_keypoints str] [--control.policy.n_vqvae_training_steps str] 
[--control.policy.vqvae_n_embed str] [--control.policy.vqvae_embedding_dim str] [--control.policy.vqvae_enc_hidden_dim str] [--control.policy.gpt_block_size str] [--control.policy.gpt_input_dim str] [--control.policy.gpt_output_dim str] [--control.policy.gpt_n_layer str] [--control.policy.gpt_n_head str] [--control.policy.gpt_hidden_dim str]
https://github.com/huggingface/lerobot/issues/1111
closed
[ "bug" ]
2025-05-15T03:13:27Z
2025-06-24T06:20:08Z
null
milong26
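Note that the command as pasted contains `--control.policy,path` (a comma where the parser expects a dot before `path`), which alone would make the argument unrecognized; the correct form, per other reports in this collection, is `--control.policy.path=...`. A minimal argparse analogue of the dot-vs-comma failure (a stand-in, not lerobot's draccus-based parser):

```python
import argparse

# Stand-in parser: lerobot actually uses draccus, but the dotted-flag idea
# is the same. `--control.policy.path` is the known flag here.
parser = argparse.ArgumentParser()
parser.add_argument("--control.policy.path", dest="policy_path")

# The dotted flag parses fine:
ok, _ = parser.parse_known_args(["--control.policy.path", "ckpt/last"])

# The comma typo makes the flag unrecognized (it lands in the leftovers):
_, leftover = parser.parse_known_args(["--control.policy,path", "ckpt/last"])
```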
huggingface/diffusers
11,555
`device_map="auto"` supported for diffusers pipelines?
### Describe the bug Hey dear diffusers team, for `DiffusionPipeline`, as I understand (hopefully correctly) from [this part of the documentation](https://huggingface.co/docs/diffusers/v0.33.1/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained.device_map), it should be possible to specify `device_map="auto"` when loading a pipeline with `from_pretrained`, but this results in a `NotImplementedError` saying that this is not supported. However, the documentation on [device placement](https://huggingface.co/docs/diffusers/en/tutorials/inference_with_big_models#device-placement) currently states that only the "balanced" strategy is supported. Is this possibly similar to #11432, and should it be removed from the docstrings / documentation? Happy to help on this with a PR if it turns out to be a mistake in the documentation. Thanks a lot for your hard work! ### Reproduction ```python from diffusers import DiffusionPipeline pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", device_map="auto") ``` or ```python from diffusers import StableDiffusionPipeline pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", device_map="auto") ``` ### Logs ```shell --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) Cell In[12], line 3 1 from diffusers import StableDiffusionPipeline ----> 3 pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", device_map="auto") File ~/miniconda3/envs/pruna/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:114, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs) 111 if check_use_auth_token: 112 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs) --> 114 return fn(*args, **kwargs) File ~/miniconda3/envs/pruna/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py:745, in
DiffusionPipeline.from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 742 raise ValueError("`device_map` must be a string.") 744 if device_map is not None and device_map not in SUPPORTED_DEVICE_MAP: --> 745 raise NotImplementedError( 746 f"{device_map} not supported. Supported strategies are: {', '.join(SUPPORTED_DEVICE_MAP)}" 747 ) 749 if device_map is not None and device_map in SUPPORTED_DEVICE_MAP: 750 if is_accelerate_version("<", "0.28.0"): NotImplementedError: auto not supported. Supported strategies are: balanced ``` ### System Info - 🤗 Diffusers version: 0.33.1 - Platform: Linux-5.15.0-139-generic-x86_64-with-glibc2.35 - Running on Google Colab?: No - Python version: 3.10.16 - PyTorch version (GPU?): 2.7.0+cu126 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Huggingface_hub version: 0.30.2 - Transformers version: 4.51.3 - Accelerate version: 1.6.0 - PEFT version: 0.15.2 - Bitsandbytes version: 0.45.5 - Safetensors version: 0.5.3 - xFormers version: not installed - Accelerator: NVIDIA H100 PCIe, 81559 MiB NVIDIA H100 PCIe, 81559 MiB - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? _No response_
https://github.com/huggingface/diffusers/issues/11555
open
[ "bug" ]
2025-05-14T16:49:32Z
2025-05-19T09:44:29Z
4
johannaSommer
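The traceback above quotes the guard in `pipeline_utils.py`; its logic reduces to a membership check against a single supported strategy, so `device_map="auto"` cannot pass in this version regardless of what the docstring says. A standalone reduction of that guard (not the actual diffusers module):

```python
# Sketch of the guard shown in the traceback, reduced from the
# diffusers pipeline_utils.py code quoted in the logs above.
SUPPORTED_DEVICE_MAP = ["balanced"]  # per the error message in the logs

def check_device_map(device_map):
    if device_map is not None and not isinstance(device_map, str):
        raise ValueError("`device_map` must be a string.")
    if device_map is not None and device_map not in SUPPORTED_DEVICE_MAP:
        raise NotImplementedError(
            f"{device_map} not supported. Supported strategies are: "
            + ", ".join(SUPPORTED_DEVICE_MAP)
        )

check_device_map("balanced")  # the only accepted strategy
check_device_map(None)        # omitting device_map is also fine
```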
huggingface/lerobot
1,107
Does Pi0 use PaliGemma VLM pretrained model weights?
I attempted to finetune the Pi0 model, but noticed that it does not download the pretrained weights of Paligemma from Hugging Face. Specifically, I found that Pi0 initializes the VLM with: ```python self.paligemma = PaliGemmaForConditionalGeneration(config=config.paligemma_config) ``` instead of using: ```python AutoModel.from_pretrained("google/paligemma-3b-pt-224") ``` This seems to result in the model not loading the pretrained weights. Could you please confirm whether this is the intended behavior? Should Pi0 load Paligemma’s pretrained weights from Hugging Face, or is there a reason it initializes the model from scratch? Thank you!
https://github.com/huggingface/lerobot/issues/1107
closed
[ "bug", "question", "policies" ]
2025-05-14T06:47:15Z
2025-10-08T08:44:03Z
null
lxysl
huggingface/lerobot
1,106
How to convert an image-mode LeRobot dataset to video mode?
https://github.com/huggingface/lerobot/issues/1106
open
[ "question", "dataset" ]
2025-05-14T03:54:42Z
2025-08-08T16:42:33Z
null
hairuoliu1
huggingface/transformers.js
1,316
May I ask how to set the HF_TOKEN on the browser side?
### Question May I ask how to set the HF_TOKEN on the browser side? ![Image](https://github.com/user-attachments/assets/944af6e1-a3b7-429b-81a6-6d205925915e) The following is my code: ``` const model = await AutoModel.from_pretrained("briaai/RMBG-2.0", { config: { model_type: "custom", }, headers: { 'Authorization': `Bearer hf_xxxxxxxxxxxxxxx` } }); ```
https://github.com/huggingface/transformers.js/issues/1316
open
[ "question" ]
2025-05-14T01:43:02Z
2025-05-27T21:53:45Z
null
dengbupapapa
huggingface/xet-core
321
How to resume the download of a partial existing file using xet + huggingface-cli download, if it was not previously downloaded using HF tools / cache?
How to resume DL of partial existing file using xet + huggingface-cli download if not previously downloaded using HF tools / cache? I guess there may be a way in the scenario I had but by my mistake apparently I chose some incorrect usage and caused the deletion of the 95% complete partial local file instead of resuming / recovering its download via XET. e.g. I tried with a fresh tool install and a process something like: % pip install -U "huggingface_hub[hf_xet]" % pwd /whatever/some_tmpdir % ls -lh somefile 35G somefile // Partial file exists and is 95% complete but short / truncated by failed copy previously. % huggingface-cli download --local-dir . some_repo_id some_dir/somefile The end result was apparently the deletion of the pre-existing 95% complete 'somefile' from the current directory and the initiation of new download using xet protocol from the xet enabled some_repo_id. Based on huggingface-cli download --help and the articles about xet I had expected it to realize the pre-existing current directory's "somefile" with an identical name/target directory as the file being requested for download was a partial relevant file and it should start to recover / complete the download by missing chunk completion. That despite the fact that there was no cache directory or git LFS structure around the current working directory, it just contained the isolated partial file only. huggingface-cli download --help usage: huggingface-cli <command> [<args>] download [-h] [--repo-type {model,dataset,space}] [--revision REVISION] [--include [INCLUDE ...]] [--exclude [EXCLUDE ...]] [--cache-dir CACHE_DIR] [--local-dir LOCAL_DIR] [--local-dir-use-symlinks {auto,True,False}] [--force-download] [--resume-download] [--token TOKEN] [--quiet] [--max-workers MAX_WORKERS] repo_id [filenames ...] positional arguments: repo_id ID of the repo to download from (e.g. `username/repo-name`). filenames Files to download (e.g. `config.json`, `data/metadata.jsonl`). options: ... 
--local-dir LOCAL_DIR If set, the downloaded file will be placed under this directory. Check out https://huggingface.co/docs/huggingface_hub/guides/download#download-files-to-local-folder for more details. ... --resume-download Deprecated and ignored. Downloading a file to local dir always attempts to resume previously interrupted downloads (unless hf-transfer is enabled). ... huggingface-cli download --local-dir . some_repo_id some_dir/somefile Downloading 'somefile' to '.cache/huggingface/download/whatever.incomplete' Xet Storage is enabled for this repo. Downloading file from Xet Storage.. ... If there's a different way to accomplish this partial file recovery result (or even if there's a corrupted / patched / whatever file per. xet's chunk filling capabilities) then perhaps clarifying / expanding the usage documentation to cover this kind of common scenario use case could help? The desired result would be something like rsync --verbose --archive server:/some_repo_id/somedir/somefile somefile which would use rolling hash chunk based rsync algorithm / protocol downloading to complete the retrieval of the somefile in the current directory regardless of other context. Also I wonder if it'd be interesting to have a rsync to xet 'bridge' so anyone could use a normal rsync client but pull xet files from HF repos if HF doesn't want to support rsync itself in whole but has the conceptually aligned XET back end that could be "mapped" to rsync chunk based protocol (I suppose) by a thin protocol adapter? Lots of e.g. linux distribution mirror sites support rsync as an HTTP/HTTPS alternative so it presumably has some significant market-share for people doing IT / devops / mlops / whatever use case downloads.
https://github.com/huggingface/xet-core/issues/321
closed
[]
2025-05-13T22:16:02Z
2025-05-16T17:48:45Z
null
ghchris2021
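The rsync-style recovery requested above rests on content-defined chunking: chunk boundaries are derived from the bytes themselves, so a truncated local file shares almost all of its chunk hashes with the full remote file and a resume would only need to fetch the missing tail. A toy sketch of the idea (plain Python; not the actual xet or rsync algorithm):

```python
import hashlib

def chunk_boundaries(data: bytes, window: int = 16, mask: int = 0xFF) -> list:
    """Toy content-defined chunking: cut wherever a cheap running hash of the
    bytes so far hits a target pattern. NOT the real xet/rsync scheme, but the
    boundaries depend only on the content, which is the property that matters."""
    chunks, start, acc = [], 0, 0
    for i, b in enumerate(data):
        acc = (acc * 31 + b) & 0xFFFFFFFF
        if i - start >= window and (acc & mask) == mask:
            chunks.append(data[start : i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def digests(chunks):
    return {hashlib.sha256(c).hexdigest() for c in chunks}

remote = bytes(range(256)) * 64             # stand-in for the complete remote file
local = remote[: len(remote) * 95 // 100]   # local copy truncated at 95%

# Every chunk of the truncated file except (at most) its final partial chunk
# also exists in the full file, so only the missing tail would be downloaded.
missing = digests(chunk_boundaries(local)) - digests(chunk_boundaries(remote))
```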
huggingface/chat-ui
1,819
Correct syntax of .env: what are those backticks for multiline strings?
I have read the suggestion of checking the discussions but I was unable to find an answer, so something very basic looks like it is missing here. In the documentation there are many examples suggesting putting long values in env vars surrounded by backticks. However, when I do this I get errors like: JSON5: invalid character '`' at 1:1 I have checked around and I have been unable to find any reference to .env using backticks for multiline strings, and the parser refuses them. This is happening with a git clone of main but also with tagged versions. So how do you possibly use this apparently non-standard syntax, and how is it possible no one else but me is having this issue?
https://github.com/huggingface/chat-ui/issues/1819
open
[ "support" ]
2025-05-13T12:21:43Z
2025-05-23T09:37:09Z
1
sciabarracom
huggingface/optimum
2,262
New Release to Support `transformers>=4.51.0`?
### Feature request The latest release (`1.24.0`) is 4 months old. There have been around 38 commits since the last release. Will there be a new release soon? ### Motivation There is a medium CVE related to `transformers==4.48.1`, which is the latest compatible version. GHSA-fpwr-67px-3qhx I am also blocked from upgrading `vllm==0.8.5` within my system as it requires `transformers>=4.51.0`. `transformers==4.48.1` is compatible with up to `vllm==0.8.2` only, where there are critical and high CVEs. GHSA-hj4w-hm2g-p6w5 GHSA-9f8f-2vmf-885j It looks like the current dependencies in the `main` branch will mitigate these issues completely. Is there any blocker to creating a new release from the current state? ### Your contribution Don't think I will be granted permissions to create releases in this project.
https://github.com/huggingface/optimum/issues/2262
closed
[]
2025-05-13T07:46:15Z
2025-05-13T22:27:08Z
2
yxtay
huggingface/lerobot
1,101
ValueError: No integer found between bounds [low_factor=np.float32(-0.001953125), upp_factor=np.float32(-0.001953125)]
### System Info ```Shell 2025, Ubuntu, Python 3.10. When doing teleoperation ``` ### Information - [ ] One of the scripts in the examples/ folder of LeRobot - [x] My own task or dataset (give details below) ### Reproduction python lerobot/scripts/control_robot.py --robot.type=so100 --robot.cameras='{}' --control.type=teleoperate ### Expected behavior How to deal with it.
https://github.com/huggingface/lerobot/issues/1101
closed
[ "question" ]
2025-05-13T05:06:35Z
2025-06-19T14:25:08Z
null
qingx-cyber
huggingface/diffusers
11,542
What's the difference between 'example/train_text_to_image_lora.py' and 'example/research_projects/lora/train_text_to_image_lora.py' ?
I want to use the "--train_text_encoder" argument, but it only exists in the latter script.
https://github.com/huggingface/diffusers/issues/11542
closed
[]
2025-05-13T01:41:19Z
2025-06-10T20:35:10Z
2
night-train-zhx
huggingface/lerobot
1,097
UnboundLocalError: local variable 'action' referenced before assignment
May I ask where the problem lies? It occurred during the evaluation of the strategy and I have been searching for a long time without finding a solution (lerobot) wzx@wzx:~/lerobot$ python lerobot/scripts/control_robot.py \ > --robot.type=so101 \ > --control.type=record \ > --control.fps=30 \ > --control.single_task="Grasp a lego block and put it in the bin." \ > --control.repo_id=${HF_USER}/eval_act_so101_test \ > --control.tags='["tutorial"]' \ > --control.warmup_time_s=5 \ > --control.episode_time_s=30 \ > --control.reset_time_s=30 \ > --control.num_episodes=10 \ > --control.display_data=true \ > --control.push_to_hub=true \ > --control.policy.path=outputs/train/act_so101_test/checkpoints/last/pretrained_model INFO 2025-05-12 22:54:05 ol_robot.py:408 {'control': {'display_data': True, 'episode_time_s': 30, 'fps': 30, 'num_episodes': 10, 'num_image_writer_processes': 0, 'num_image_writer_threads_per_camera': 4, 'play_sounds': True, 'policy': {'beta_end': 0.02, 'beta_schedule': 'squaredcos_cap_v2', 'beta_start': 0.0001, 'clip_sample': True, 'clip_sample_range': 1.0, 'crop_is_random': True, 'crop_shape': (84, 84), 'device': 'cuda', 'diffusion_step_embed_dim': 128, 'do_mask_loss_for_padding': False, 'down_dims': (512, 1024, 2048), 'drop_n_last_frames': 7, 'horizon': 16, 'input_features': {'observation.images.laptop': {'shape': (3, 480, 640), 'type': <FeatureType.VISUAL: 'VISUAL'>}, 'observation.images.phone': {'shape': (3, 480, 640), 'type': <FeatureType.VISUAL: 'VISUAL'>}, 'observation.state': {'shape': (6,), 'type': <FeatureType.STATE: 'STATE'>}}, 'kernel_size': 5, 'n_action_steps': 8, 'n_groups': 8, 'n_obs_steps': 2, 'noise_scheduler_type': 'DDPM', 'normalization_mapping': {'ACTION': <NormalizationMode.MIN_MAX: 'MIN_MAX'>, 'STATE': <NormalizationMode.MIN_MAX: 'MIN_MAX'>, 'VISUAL': <NormalizationMode.MEAN_STD: 'MEAN_STD'>}, 'num_inference_steps': None, 'num_train_timesteps': 100, 'optimizer_betas': (0.95, 0.999), 'optimizer_eps': 1e-08, 'optimizer_lr': 0.0001, 
'optimizer_weight_decay': 1e-06, 'output_features': {'action': {'shape': (6,), 'type': <FeatureType.ACTION: 'ACTION'>}}, 'prediction_type': 'epsilon', 'pretrained_backbone_weights': None, 'scheduler_name': 'cosine', 'scheduler_warmup_steps': 500, 'spatial_softmax_num_keypoints': 32, 'use_amp': False, 'use_film_scale_modulation': True, 'use_group_norm': True, 'use_separate_rgb_encoder_per_camera': False, 'vision_backbone': 'resnet18'}, 'private': False, 'push_to_hub': True, 'repo_id': 'bursomi/eval_act_so101_test', 'reset_time_s': 30, 'resume': False, 'root': None, 'single_task': 'Grasp a lego block and put it in the bin.', 'tags': ['tutorial'], 'video': True, 'warmup_time_s': 5}, 'robot': {'calibration_dir': '.cache/calibration/so101', 'cameras': {'laptop': {'camera_index': 2, 'channels': 3, 'color_mode': 'rgb', 'fps': 30, 'height': 480, 'mock': False, 'rotation': None,
https://github.com/huggingface/lerobot/issues/1097
closed
[ "bug", "question" ]
2025-05-12T16:06:27Z
2025-06-19T14:08:57Z
null
incomple42
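Independent of where it occurs in `control_robot.py`, the exception class in the title means a local variable was read on a code path where no branch assigned it; in a record/eval loop that usually means no `action` was produced (for example, the policy failed to load or inference was skipped) before it was used. A minimal reproduction of the error type (not the lerobot code):

```python
def step(policy_loaded: bool):
    if policy_loaded:
        action = [0.0] * 6  # only this branch binds `action`
    return action           # UnboundLocalError when the branch was skipped

try:
    step(False)
    msg = None
except UnboundLocalError as exc:
    msg = str(exc)  # mentions the unbound variable name, here "action"
```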
huggingface/lerobot
1,093
List of available task
Thank you for your effort. Can you provide a list of available tasks (not just environments) for better understanding and usage?
https://github.com/huggingface/lerobot/issues/1093
closed
[ "question" ]
2025-05-10T06:18:21Z
2025-10-17T12:03:32Z
null
return-sleep
huggingface/transformers
38,052
`.to` on a `PreTrainedModel` throws a Pyright type check error. What is the correct way to put a model to the device that does not throw type check errors?
### System Info (venv) nicholas@B367309:tmp(master)$ transformers-cli env Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.51.1 - Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.39 - Python version: 3.12.3 - Huggingface_hub version: 0.30.2 - Safetensors version: 0.5.3 - Accelerate version: 1.6.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (GPU?): 2.6.0+cu126 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> - Using GPU in script?: <fill in> - GPU type: NVIDIA RTX 2000 Ada Generation Laptop GPU ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Here is a small snippet ```python from transformers.models.auto.modeling_auto import AutoModelForCausalLM from transformers.models.llama.modeling_llama import LlamaForCausalLM model = AutoModelForCausalLM.from_pretrained( "deepseek-ai/deepseek-coder-1.3b-instruct", torch_dtype=torch.float16 ) assert isinstance(model, LlamaForCausalLM) model.to("cuda:0") ``` This code runs fine and correctly puts the model to the device, however, `Pyright` throws a pre-runtime type check error on the `model.to("cuda:0") call. This is the error, ```plaintext Pyright: Argument of type "Literal['cuda:0']" cannot be assigned to parameter "self" of type "LlamaForCausalLM" in function "__call__". "Literal['cuda:0']" is not assignable to "LlamaForCausalLM" [reportArgumentType] ``` What is the correct way to put a model to the device that will satisfy the type checker? 
### Expected behavior There should be no static type check error when doing `model.to(<device>)`.
https://github.com/huggingface/transformers/issues/38052
closed
[ "bug" ]
2025-05-09T19:01:15Z
2025-06-29T08:03:07Z
null
nickeisenberg
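One common way to satisfy Pyright without changing runtime behavior is `typing.cast` on the result of `.to(...)`; with the real model it would read `model = cast(LlamaForCausalLM, model.to("cuda:0"))`. The sketch below uses a stand-in class so it runs without transformers; this is a workaround suggestion, not an officially endorsed fix:

```python
from typing import cast

class FakeModel:
    """Stand-in for a PreTrainedModel whose .to() returns self."""
    def __init__(self) -> None:
        self.device = "cpu"

    def to(self, device: str) -> "FakeModel":
        self.device = device
        return self

model = FakeModel()
# cast() is a no-op at runtime; it only tells the static checker which type
# the expression has, silencing the reportArgumentType complaint.
model = cast(FakeModel, model.to("cuda:0"))
```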
huggingface/finetrainers
401
How to train Wan using multiple nodes?
### Feature request Hi! I am still wondering about multi-node training of Wan2.1 14B. Do you support FSDP across nodes? ### Motivation Currently the memory constraints are very harsh for long-video LoRA fine-tuning. ### Your contribution N/A
https://github.com/huggingface/finetrainers/issues/401
open
[]
2025-05-09T18:11:07Z
2025-05-09T18:11:07Z
null
Radioheading
huggingface/lerobot
1,091
Diffusion policy for different tasks instead of PushT
Thank you all for the great job. I want to know if I can train the diffusion policy for different tasks besides the PushT task. How to achieve that? If the task is a new custom task with custom dataset, is there any feasible solution to solve that? Thank you for your help!
https://github.com/huggingface/lerobot/issues/1091
closed
[ "question", "policies", "stale" ]
2025-05-09T15:44:20Z
2025-12-31T02:35:27Z
null
siqisiqisiqisiqi
huggingface/lerobot
1,086
`push_to_hub` error
### System Info ```Shell - `lerobot` version: 0.1.0 - Platform: macOS-14.6.1-arm64-arm-64bit - Python version: 3.10.13 - Huggingface_hub version: 0.30.2 - Dataset version: 3.5.0 - Numpy version: 2.2.5 - PyTorch version (GPU?): 2.7.0 (False) - Cuda version: N/A - Using GPU in script?: <fill in> ``` ### Information - [ ] One of the scripts in the examples/ folder of LeRobot - [ ] My own task or dataset (give details below) ### Reproduction import argparse from lerobot.common.datasets.lerobot_dataset import LeRobotDataset def parse_args(): parser = argparse.ArgumentParser(description="Push a local HuggingFace dataset to the Hub") parser.add_argument( "--path", type=str, required=True, help="Local directory containing the dataset" ) parser.add_argument( "--repo_id", type=str, required=True, help="Repository ID on HuggingFace Hub (format: username/dataset_name)" ) parser.add_argument( "--private", action="store_true", help="Whether to make the dataset private" ) # Removed unused arguments return parser.parse_args() def main(): args = parse_args() print(f"Loading dataset from {args.path}...") dataset = LeRobotDataset( repo_id=args.repo_id, root=args.path ) print(f"Pushing dataset to {args.repo_id}...") dataset.push_to_hub( args.repo_id, private=args.private ) print("Dataset successfully pushed to Hub!") return 0 if __name__ == "__main__": main() <img width="1502" alt="Image" src="https://github.com/user-attachments/assets/36c563a6-ed2e-4deb-b54e-ce5c9889c50b" /> ### Expected behavior upload it to the huggingface
https://github.com/huggingface/lerobot/issues/1086
closed
[ "question" ]
2025-05-09T03:48:09Z
2025-10-17T11:55:25Z
null
jungwonshin
huggingface/trl
3,424
[GRPO] How to train model using vLLM and model parallelism on one node?
I tried to start GRPO trainer with vLLM and model parallelism on a single node with 8 GPUs (8 x A100 80G). My plan was to use one GPU as the vLLM server and other 7 GPUs to load model with model parallelism (e.g., `device_map="auto"`) ``` CUDA_VISIBLE_DEVICES=0 trl vllm-serve --model <model_path> & CUDA_VISIBLE_DEVICES=1,2,3,4,5,6,7 accelerate launch --num_machines 1 --num_processes 1 train.py ``` But the training ran into the following error `AssertionError: this nccl communicator is created to work on cuda:0, but the input tensor is on cuda:1` I think it happened when copying the weights to vLLM server. ``` torch==2.6.0+cu124 transformers==4.51.3 trl==0.17.0 accelerate==1.4.0 ```
https://github.com/huggingface/trl/issues/3424
open
[]
2025-05-08T17:22:19Z
2025-12-02T22:48:13Z
null
zhiqihuang
huggingface/lerobot
1,082
When will the OpenVLA-OFT policy be added?
https://github.com/huggingface/lerobot/issues/1082
closed
[ "question", "policies", "stale" ]
2025-05-08T09:16:16Z
2025-12-31T02:35:30Z
null
zmf2022
huggingface/text-generation-inference
3,213
Does TGI support the Huawei Atlas300 graphics card?
### System Info Does the TGI inference framework support Huawei Atlas300I graphics cards? Could you help come up with a compatible solution? ### Information - [x] Docker - [ ] The CLI directly ### Tasks - [ ] An officially supported command - [ ] My own modifications ### Reproduction . ### Expected behavior Compatible with Huawei graphics cards. I want to use TGI on the Huawei Atlas300I graphics card.
https://github.com/huggingface/text-generation-inference/issues/3213
open
[]
2025-05-08T03:18:30Z
2025-05-08T03:18:38Z
0
fxb392
huggingface/trl
3,419
[GRPO] How to do gradient accumulation over sampled outputs?
Greetings, I am wondering if we have a feature to do gradient accumulation over sampled outputs. For example, if I have `num_generations = 4`, then for a single query `q1` we have `completions = [o1, o2, o3, o4]`. I want to set `per_device_train_batch_size=2, gradient_accumulation_steps=2`, so that the GPU or cluster samples `[o1, o2]` first and calculates the gradient, then does `[o3, o4]`, and accumulates the gradients over these two mini-samples for the datapoint `q1`. I assume this would be equivalent to having `num_generations=4, per_device_train_batch_size=4, gradient_accumulation_steps=1`. But we cannot do this now. Could someone tell me how to properly do that? Do we support such a feature now? I hope I made myself clear. Thank you very much!
https://github.com/huggingface/trl/issues/3419
closed
[]
2025-05-07T17:49:36Z
2025-05-09T06:26:29Z
null
SpaceHunterInf
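The equivalence the author assumes (two accumulated micro-batches of 2 completions matching one batch of 4) holds whenever the loss is a mean over the batch and the micro-batch gradients are averaged. A plain-Python check on a toy squared-error objective:

```python
# Toy model: scalar weight w with loss = mean over the batch of (w*x - y)^2,
# so d(loss)/dw = mean of 2*x*(w*x - y).

def grad(w, batch):
    """Gradient of the mean squared error over `batch` with respect to w."""
    return sum(2 * x * (w * x - y) for x, y in batch) / len(batch)

w = 0.5
completions = [(1.0, 2.0), (2.0, 1.0), (3.0, -1.0), (0.5, 0.0)]  # (x, y) pairs

full_grad = grad(w, completions)  # one batch of 4, no accumulation

# Accumulation: two micro-batches of 2, with the gradients averaged afterwards.
accum_grad = (grad(w, completions[:2]) + grad(w, completions[2:])) / 2
```

If micro-batches contribute unequal numbers of samples or tokens, a mean-based loss makes accumulation only approximately equivalent; the exact equality here relies on the equal-sized micro-batches.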
huggingface/lerobot
1,080
Update `control_sim_robot.py` to use the new configs
Adding this issue to track one of the TODO's of this MR #550 As of now, [this script](https://github.com/huggingface/lerobot/blob/8cfab3882480bdde38e42d93a9752de5ed42cae2/lerobot/scripts/control_sim_robot.py) is outdated; It does not use the new configuration classes.
https://github.com/huggingface/lerobot/issues/1080
closed
[ "question" ]
2025-05-07T11:37:47Z
2025-06-19T14:04:11Z
null
jccalvojackson
huggingface/Math-Verify
53
How to turn off error printing?
When using multiprocessing, a lot of error messages are printed.
https://github.com/huggingface/Math-Verify/issues/53
closed
[]
2025-05-07T08:19:36Z
2025-07-02T16:07:02Z
null
wenxueru
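Assuming Math-Verify emits these messages through Python's standard `logging` module (the logger name below is a guess; inspect `logging.root.manager.loggerDict` after importing the library to confirm), raising the logger level silences them. Under multiprocessing each worker configures logging on its own, so the same call has to run inside the workers, e.g. via a `Pool` initializer:

```python
import logging
from multiprocessing import Pool  # used in the commented sketch below

LOGGER_NAME = "math_verify"  # assumption: confirm the real logger name via
                             # logging.root.manager.loggerDict after import

def quiet_math_verify() -> None:
    """Raise the library's logger above ERROR so its error prints vanish."""
    logging.getLogger(LOGGER_NAME).setLevel(logging.CRITICAL)

quiet_math_verify()  # silences the main process

# Worker processes do not inherit this automatically on all start methods,
# so pass the same function as the Pool initializer:
# with Pool(initializer=quiet_math_verify) as pool:
#     results = pool.map(check_answer, problems)  # check_answer: your function
```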
huggingface/peft
2,533
Integrate TLoRA (Tri-Matrix LoRA)
### Feature request We would like to propose integrating a novel parameter-efficient fine-tuning method called **TLoRA (Tri-Matrix LoRA)** into the `peft` library. We believe TLoRA offers significant advantages in terms of parameter efficiency, making it a valuable addition to the PEFT ecosystem. Our method is detailed in the paper: **https://arxiv.org/abs/2504.18735** **What is TLoRA?** TLoRA is a variation of LoRA that introduces a tri-matrix decomposition for the weight update matrix $\Delta W$. Instead of the standard $W + A B$, TLoRA uses $W + \alpha A B C $, where: * $W$ is the original pre-trained weight matrix. * $A$ is a fixed, non-trainable matrix (e.g., initialized randomly or using Kaiming/Xavier). * $B$ is the _only_ trainable matrix. * $C$ is another fixed, non-trainable matrix (similar initialization as A). * $\alpha$ is a trainable scaling parameter. The $\Delta W$ update is computed as the product of three matrices: a fixed input projection matrix $A$, a small trainable bottleneck matrix $B$, and a fixed output projection matrix $C$. Only matrix $B$ is updated during fine-tuning. 
**TLoRA Implementation:** The core idea can be represented in a layer similar to this (based on our implementation): ```python class TLoRALayer(nn.Module): def __init__(self, weight, bias, rank=32): super(TLoRALayer, self).__init__() row, column = weight.shape # Restore Linear layer if bias is None: self.linear = nn.Linear(column, row, bias=False) self.linear.load_state_dict({"weight": weight}) else: self.linear = nn.Linear(column, row) self.linear.load_state_dict({"weight": weight, "bias": bias}) # Create TLoRA weights with initialization self.random_A = nn.Parameter( torch.zeros(column, rank), requires_grad=False ) # First matrix, non-trainable nn.init.kaiming_normal_(self.random_A, a=math.sqrt(5)) self.lora_B = nn.Parameter(torch.zeros(rank, rank)) # Second matrix (trainable) self.random_C = nn.Parameter( torch.zeros(rank, row), requires_grad=False ) # Third matrix nn.init.kaiming_normal_(self.random_C, a=math.sqrt(5)) self.lora_scaling = nn.Parameter(torch.ones(1)) self.dropout = nn.Dropout(0.5) def forward(self, input): # Standard linear transformation x = self.linear(input) # Low-rank adaptation with tri-matrix TLoRA # Using the scaling to control the LoRA output y = self.lora_scaling * (input @ self.random_A @ self.lora_B @ self.random_C) y = self.dropout(y) return x + y ``` Full Repo: https://github.com/itanvir/tlora ### Motivation 1. **Extreme Parameter Efficiency:** The core trainable component in TLoRA is the matrix $B$ with dimensions `rank x rank`. Compared to standard LoRA's trainable matrices $A$ (`input_dim x rank`) and $B$ (`rank x output_dim`), TLoRA's trainable parameters are significantly fewer. This makes TLoRA potentially one of the most parameter-efficient methods in PEFT for a given rank. 2. **Competitive Performance:** The fixed matrices $A$ and $C$ can be seen as defining fixed subspaces. 
By training only the matrix $B$ connecting these subspaces, TLoRA might capture more focused and effective updates compared to training the full $A$ and $B$ matrices in standard LoRA. Our paper provides empirical evidence supporting its effectiveness. ### Your contribution Can give inputs on the design. It should be straightforward.
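To make the parameter-efficiency claim concrete, here is a small dependency-free sketch comparing trainable-parameter counts for standard LoRA versus TLoRA at the same rank; the dimensions are illustrative, not taken from the paper:

```python
def lora_trainable_params(d_in, d_out, rank):
    # Standard LoRA trains A (d_in x rank) and B (rank x d_out).
    return d_in * rank + rank * d_out

def tlora_trainable_params(rank):
    # TLoRA trains only B (rank x rank) plus the scalar alpha;
    # A and C stay fixed at initialization.
    return rank * rank + 1

d_in = d_out = 4096  # e.g. a typical transformer hidden size
rank = 32
print(lora_trainable_params(d_in, d_out, rank))  # 262144
print(tlora_trainable_params(rank))              # 1025
```

At rank 32 on a 4096-wide layer, TLoRA's trainable footprint is roughly 250x smaller per adapted matrix, which is the core motivation above.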
https://github.com/huggingface/peft/issues/2533
closed
[]
2025-05-06T21:22:50Z
2025-06-15T15:03:57Z
2
itanvir
huggingface/candle
2,945
Operating steps from scratch for beginners?
from a To Z
https://github.com/huggingface/candle/issues/2945
open
[]
2025-05-06T15:34:02Z
2025-05-06T15:34:02Z
0
Qarqor5555555
huggingface/lerobot
1,072
How to merge collected data into one?
For stability I collect data 10 episodes at a time. Then I end up with repo_id/first, repo_id_second, ... I want to merge them together into repo_id/one_task for training, but it's hard to fix the meta files. I'm not sure if this approach helps with training, or if I should determine the number of episodes needed for training in advance when collecting data.
https://github.com/huggingface/lerobot/issues/1072
closed
[ "question", "dataset" ]
2025-05-06T02:27:24Z
2025-05-07T02:29:27Z
null
milong26
huggingface/diffusers
11,499
[Performance] Issue on *SanaLinearAttnProcessor2_0 family. 1.06X speedup can be reached with a simple change.
### Sys env: OS Ubuntu 22.04 PyTorch 2.4.0+cu121 sana == 0.0.1 Diffusers == 0.34.0.dev0 ### Reproduce: Try the demo test code: ``` import torch from diffusers import SanaPAGPipeline pipe = SanaPAGPipeline.from_pretrained( # "Efficient-Large-Model/Sana_1600M_512px_diffusers", "Efficient-Large-Model/SANA1.5_1.6B_1024px_diffusers", torch_dtype=torch.bfloat16, pag_applied_layers="transformer_blocks.8", ) pipe.to("cuda") pipe.text_encoder.to(torch.bfloat16) pipe.vae.to(torch.bfloat16) prompt = 'a cyberpunk cat with a neon sign that says "Sana"' image = pipe( prompt=prompt, guidance_scale=5.0, pag_scale=2.0, num_inference_steps=20, generator=torch.Generator(device="cuda").manual_seed(42), )[0] image[0].save('sana.png') ``` Inference data will go through [SanaLinearAttnProcessor2_0](https://github.com/huggingface/diffusers/blob/58431f102cf39c3c8a569f32d71b2ea8caa461e1/src/diffusers/models/attention_processor.py#L6007) ### Issue Description: Lines 6042 and 6043 first transpose a contiguous tensor and then type-cast it. Type casting invokes a data copy from the old-dtype tensor to a new one. But if you check the new tensor's contiguity, you will see: ``` hidden_states = hidden_states.flatten(1, 2).transpose(1, 2) hidden_states = hidden_states.to(original_dtype) print("Contiguity after type casting: ", hidden_states.is_contiguous()) # False hidden_states = attn.to_out[0](hidden_states) hidden_states = attn.to_out[1](hidden_states) ``` The problem is that the typecasting copy only converts the dtype while preserving the input tensor's strides, and the badly-strided tensor is immediately used by the two following functions, so the inefficiency propagates. ### How to Fix: let `hidden_states.to(original_dtype)` make the tensor contiguous and typecast it simultaneously.
One possible approach: ``` @torch.compile def transpose_cast_kernel(input_tensor: torch.Tensor) -> torch.Tensor: """ torch-compiled kernel that casts a batched 3D tensor to bfloat16 and transposes its last two dims into a contiguous result """ converted = input_tensor.to(torch.bfloat16) transposed = torch.transpose(converted, 1, 2).contiguous() return transposed ``` Use this fused operation to create the new tensor (note: the explicit `.transpose(1, 2)` is dropped from the caller, since the kernel now performs it): ``` hidden_states = hidden_states.flatten(1, 2) hidden_states = transpose_cast_kernel(hidden_states) # hidden_states.is_contiguous() is now True hidden_states = attn.to_out[0](hidden_states) hidden_states = attn.to_out[1](hidden_states) ``` Or, your expert team could do even better. ### Measurement: By adopting the previous change, `SanaLinearAttnProcessor2_0.__call__` enjoys a **1.06X speedup** on an RTX 3090. PAGCFGSanaLinearAttnProcessor2_0 and PAGIdentitySanaLinearAttnProcessor2_0 have similar logic and lose performance as well.
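A minimal, self-contained reproduction of the stride behavior described above, runnable without the Sana pipeline (shapes are illustrative):

```python
import torch

x = torch.randn(2, 64, 128, dtype=torch.float32)  # contiguous input
y = x.transpose(1, 2).to(torch.bfloat16)          # the cast copies data but preserves the transposed strides
print(y.is_contiguous())                           # False: downstream ops pay for the bad layout

z = x.transpose(1, 2).contiguous().to(torch.bfloat16)  # make it contiguous before downstream use
print(z.is_contiguous())                                # True
```

This is the whole issue in two lines: `.to(dtype)` defaults to preserving the input's memory format, so casting after a transpose bakes the bad strides into the copy.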
https://github.com/huggingface/diffusers/issues/11499
closed
[]
2025-05-05T21:26:51Z
2025-08-08T23:44:59Z
11
David-Dingle
huggingface/candle
2,944
finetuning yolo 8 candle model
What is the correct way to finetune a yolo 8 model to be used here? Finetuning a model using candle is not straightforward. candle\candle-examples\examples\yolo-v8\main.rs // model The model architecture points at ultralytics: https://github.com/ultralytics/ultralytics/issues/189 But my model trained using ultralytics and converted to safetensors yields tensor errors when used in the candle yolo 8 example. Renaming the tensors to match the candle yolo model did not work. I see a DarkNet struct in model.rs, so I wonder if one should rather use [Darknet](https://github.com/hank-ai/darknet) instead (@LaurentMazare)?
https://github.com/huggingface/candle/issues/2944
open
[]
2025-05-05T15:21:48Z
2025-05-05T18:46:52Z
0
flutter-painter
huggingface/diffusers
11,489
Error when I'm trying to train a Flux lora with train_dreambooth_lora_flux_advanced
### Describe the bug Hi! I'm trying to train my lora model with [train_dreambooth_lora_flux_advanced](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py) script. When I'm trying to train my model with prior preservation tag I give an error. How can I fix it? ### Reproduction ```bash accelerate launch train_dreambooth_lora_flux_advanced.py \ --pretrained_model_name_or_path="black-forest-labs/FLUX.1-dev" \ --dataset_name="./ds5" \ --instance_prompt="1boy, 1girl" \ --validation_prompt="1boy, 1girl" \ --class_prompt="1boy, 1girl" \ --num_class_images=200 \ --with_prior_preservation \ --class_data_dir="./cdi" \ --output_dir="crtr-SDXL-LoRA" \ --caption_column="text" \ --mixed_precision="bf16" \ --prior_generation_precision="bf16" \ --resolution=1024 \ --train_batch_size=8 \ --repeats=1 \ --gradient_accumulation_steps=8 \ --gradient_checkpointing \ --learning_rate=1.0 \ --optimizer="prodigy"\ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ --rank=64 \ --num_train_epochs=200 \ --validation_epochs=100 \ --center_crop \ --adam_beta2=0.99 \ --adam_weight_decay=0.01 \ --allow_tf32 ``` ### Logs ```shell Traceback (most recent call last): File "/workspace/train_dreambooth_lora_flux_advanced.py", line 2423, in <module> main(args) File "/workspace/train_dreambooth_lora_flux_advanced.py", line 2213, in main (weighting.float() * (model_pred_prior.float() - target_prior.float()) ** 2).reshape( ~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ RuntimeError: The size of tensor a (16) must match the size of tensor b (8) at non-singleton dimension 0 ``` ### System Info Diffusers 0.33 CUDA 12.9 Torch 2.7 Docker image nvcr.io/nvidia/pytorch:25.04-py3 ### Who can help? @sayakpaul
https://github.com/huggingface/diffusers/issues/11489
open
[ "bug", "training" ]
2025-05-04T21:19:23Z
2025-07-06T19:38:40Z
4
Mnwa
huggingface/diffusers
11,488
Sincerely Request The Support for Flux PAG Pipeline
When can the PAG pipeline for Flux be supported?
https://github.com/huggingface/diffusers/issues/11488
open
[ "help wanted", "Good second issue" ]
2025-05-04T11:12:05Z
2025-05-16T04:53:52Z
2
PlutoQyl
huggingface/text-generation-inference
3,208
Can I use TGI in a Supercomputer?
I want to generate somewhere around 1 trillion tokens and I was thinking of using TGI on a European supercomputer. Is there a way to achieve this without relying on Docker, by downloading the model natively and then loading it on the compute node and serving it? @Wauplin
https://github.com/huggingface/text-generation-inference/issues/3208
open
[]
2025-05-03T15:13:24Z
2025-05-15T08:55:08Z
4
sleepingcat4
huggingface/transformers.js
1,305
Trying to convert dinov2 model
### Question I tried to convert [this model](https://huggingface.co/nguyenkhoa/dinov2_Liveness_detection_v2.2.3) using the following command: `python -m scripts.convert --model_id nguyenkhoa/dinov2_Liveness_detection_v2.2.3 --quantize --task image-classification` but got the following error: ``ValueError: Trying to export a dinov2 model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type dinov2 to be supported natively in the ONNX export.`` I looked a bit into the `custom_onnx_configs` flag and found [this conversion example](https://github.com/huggingface/transformers.js/issues/906#issuecomment-2315290257). My question is regarding what should I pass to `custom_onnx_configs` for the conversion to work? I could pass `gpt2` as used in the example but I'm wondering what is the correct `custom_onnx_configs` input for dinov2 models. Thank you!
https://github.com/huggingface/transformers.js/issues/1305
closed
[ "question" ]
2025-05-01T19:56:28Z
2025-05-05T22:18:48Z
null
jdp8
huggingface/datasets
7,545
Networked Pull Through Cache
### Feature request Introduce a HF_DATASET_CACHE_NETWORK_LOCATION configuration (e.g. an environment variable) together with a companion network cache service. Enable a three-tier cache lookup for datasets: 1. Local on-disk cache 2. Configurable network cache proxy 3. Official Hugging Face Hub ### Motivation - Distributed training & ephemeral jobs: In high-performance or containerized clusters, relying solely on a local disk cache either becomes a streaming bottleneck or incurs a heavy cold-start penalty as each job must re-download datasets. - Traffic & cost reduction: A pull-through network cache lets multiple consumers share a common cache layer, reducing duplicate downloads from the Hub and lowering egress costs. - Better streaming adoption: By offloading repeat dataset pulls to a locally managed cache proxy, streaming workloads can achieve higher throughput and more predictable latency. - Proven pattern: Similar proxy-cache solutions (e.g. Harbor’s Proxy Cache for Docker images) have demonstrated reliability and performance at scale: https://goharbor.io/docs/2.1.0/administration/configure-proxy-cache/ ### Your contribution I’m happy to draft the initial PR for adding HF_DATASET_CACHE_NETWORK_LOCATION support in datasets and sketch out a minimal cache-service prototype. I have limited bandwidth so I would be looking for collaborators if anyone else is interested.
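To illustrate the proposed flow, here is a toy sketch of the three-tier lookup in plain Python; `HF_DATASET_CACHE_NETWORK_LOCATION` and the tier ordering are the proposal's assumptions, not an existing `datasets` API, and dicts stand in for the three storage layers:

```python
import os

def resolve_dataset(name, local_cache, network_cache, hub):
    """Try each cache tier in order; on a miss, fall through to the
    next tier and populate the faster tiers on the way back."""
    if name in local_cache:
        return local_cache[name]
    network_url = os.environ.get("HF_DATASET_CACHE_NETWORK_LOCATION")
    if network_url and name in network_cache:
        local_cache[name] = network_cache[name]  # warm the local tier
        return local_cache[name]
    data = hub[name]  # last resort: the official Hub
    if network_url:
        network_cache[name] = data  # pull-through: populate the shared tier
    local_cache[name] = data
    return data

# Toy demonstration.
os.environ["HF_DATASET_CACHE_NETWORK_LOCATION"] = "http://cache.cluster.local"
local, network, hub = {}, {}, {"squad": "squad-bytes"}
print(resolve_dataset("squad", local, network, hub))  # "squad-bytes", fetched from the Hub
print("squad" in network)                              # True: cached for the next consumer
```

The second consumer in the cluster would then hit the network tier instead of the Hub, which is exactly the egress-reduction argument above.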
https://github.com/huggingface/datasets/issues/7545
open
[ "enhancement" ]
2025-04-30T15:16:33Z
2025-04-30T15:16:33Z
0
wrmedford
huggingface/transformers
37,895
How to backpropagate the gradients of the embeddings output by the image processor to the input image tensor?
### Feature request I'm using the processor of Qwen2.5-VL, and the image processor within it should be Qwen2ImageProcessor. The input image I provide is a PyTorch tensor with gradients, and the processor outputs the feature embeddings of the image. How can I ensure that the gradient flow is not interrupted during this process? ### Motivation I want to backpropagate the gradients of the embeddings output by the Qwen2 image processor to the input image tensor. ### Your contribution I can cooperate to fix this issue
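For context: stock image processors operate on PIL/NumPy arrays, which breaks the autograd graph; a common workaround is to reimplement the resize + normalize steps with differentiable torch ops and feed the result to the model directly. A minimal sketch of the idea (the target size and mean/std values are illustrative placeholders, not Qwen2's actual config — check the processor's config for the real ones):

```python
import torch
import torch.nn.functional as F

def differentiable_preprocess(images, size=(336, 336),
                              mean=(0.48145466, 0.4578275, 0.40821073),
                              std=(0.26862954, 0.26130258, 0.27577711)):
    """images: float tensor of shape (B, 3, H, W) in [0, 1], may carry gradients."""
    x = F.interpolate(images, size=size, mode="bicubic", align_corners=False)
    mean_t = torch.tensor(mean, device=x.device).view(1, 3, 1, 1)
    std_t = torch.tensor(std, device=x.device).view(1, 3, 1, 1)
    return (x - mean_t) / std_t  # every op here is differentiable

img = torch.rand(1, 3, 224, 224, requires_grad=True)
out = differentiable_preprocess(img)
out.sum().backward()
print(img.grad is not None)  # True: gradients flow back to the input image
```

The same pattern extends to cropping and patching as long as each step stays in torch ops.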
https://github.com/huggingface/transformers/issues/37895
open
[ "Feature request" ]
2025-04-30T15:06:40Z
2025-05-01T13:36:24Z
null
weiminbai
huggingface/diffusers
11,466
Finetuning of flux or scratch training
I am new to this field and wanted to know if there is any code available for training Flux from scratch, or even finetuning the existing model. All I see is DreamBooth or LoRA finetuning.
https://github.com/huggingface/diffusers/issues/11466
open
[]
2025-04-30T07:45:49Z
2025-05-30T16:32:33Z
2
preethamp0197
huggingface/hf-hub
104
What is this software licensed under?
Would this also be Apache 2 like in https://github.com/huggingface/huggingface_hub/? Thanks!
https://github.com/huggingface/hf-hub/issues/104
closed
[]
2025-04-29T16:27:10Z
2025-06-16T09:09:43Z
null
nathankw
huggingface/optimum
2,248
Export cli export RT-Detr
```python Traceback (most recent call last): File "/usr/local/bin/optimum-cli", line 8, in <module> sys.exit(main()) ^^^^^^ File "/usr/local/lib/python3.11/dist-packages/optimum/commands/optimum_cli.py", line 208, in main service.run() File "/usr/local/lib/python3.11/dist-packages/optimum/commands/export/onnx.py", line 265, in run main_export( File "/usr/local/lib/python3.11/dist-packages/optimum/exporters/onnx/__main__.py", line 375, in main_export onnx_export_from_model( File "/usr/local/lib/python3.11/dist-packages/optimum/exporters/onnx/convert.py", line 1033, in onnx_export_from_model raise ValueError( ValueError: Trying to export a rt-detr model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type rt-detr to be supported natively in the ONNX export. ``` When I try to export my fine-tuned model with RT-DETR, it always pops up with the above error. Even with the cmd line `optimum-cli export onnx -m PekingU/rtdetr_r18vd --task object-detection test_onnx` still shows the same error. So, it should not be an issue related to finetuned model. I would like to know how to export a finetuned model. It would be helpful if anyone can give me some hint. Thanks!
https://github.com/huggingface/optimum/issues/2248
closed
[]
2025-04-29T08:23:17Z
2025-05-05T08:03:21Z
1
TheMattBin
huggingface/open-muse
144
how to set the minimum learning rate for cosine lr_scheduler?
```python @dataclass class TrainingArguments(transformers.TrainingArguments): gradient_checkpointing_kwargs={'use_reentrant':False} lr_scheduler_kwargs={ "eta_min":1e-6, "num_cycles":1, } ``` It did not work. How to set the minimum learning rate in transformers-4.51.3?
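A note that may help: recent transformers versions ship a `cosine_with_min_lr` scheduler type whose `lr_scheduler_kwargs` take `min_lr` (or `min_lr_rate`) rather than `eta_min` — please verify against the 4.51.3 docs. The target curve itself is simple to state; a dependency-free sketch:

```python
import math

def cosine_with_min_lr(step, total_steps, base_lr=1e-4, eta_min=1e-6):
    """Cosine decay from base_lr down to a floor of eta_min (single cycle)."""
    progress = min(step / max(total_steps, 1), 1.0)
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * progress))

print(cosine_with_min_lr(0, 1000))     # base_lr at the start
print(cosine_with_min_lr(1000, 1000))  # eta_min at the end
```

If the built-in scheduler fits, the equivalent would be roughly `lr_scheduler_type="cosine_with_min_lr"` with `lr_scheduler_kwargs={"min_lr": 1e-6}` in TrainingArguments.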
https://github.com/huggingface/open-muse/issues/144
closed
[]
2025-04-29T02:18:59Z
2025-04-29T02:20:42Z
null
xubuvd
huggingface/lerobot
1,045
Inefficient Config Structure without Hydra
Hi, I notice that the repo used Hydra before, which made it possible to modify config params or create new config yaml files. However, this was deprecated. I wonder how to efficiently create a new config file for a policy without writing these params on the command line each time?
https://github.com/huggingface/lerobot/issues/1045
closed
[ "question", "configuration", "stale" ]
2025-04-28T11:48:08Z
2025-11-18T02:30:46Z
null
jiangranlv
huggingface/diffusers
11,432
`.from_pretrained` `torch_dtype="auto"` argument not working a expected
### Describe the bug Hey dear diffusers team, thanks a lot for all your hard work! I would like to make use of the `torch_dtype="auto"` keyword argument when loading a model/pipeline as specified [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained.torch_dtype), but the usage does not work as expected (see example below). Can you help me out with some guidance on how to use it correctly or let me know whether there is something wrong with the handling of this argument? Thank you! ### Reproduction ```python from diffusers import StableDiffusionPipeline model = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype="auto") ``` ### Logs ```shell Passed `torch_dtype` torch.float32 is not a `torch.dtype`. Defaulting to `torch.float32`. ``` ### System Info - 🤗 Diffusers version: 0.33.1 - Platform: Linux-5.15.0-136-generic-x86_64-with-glibc2.35 - Running on Google Colab?: No - Python version: 3.10.17 - PyTorch version (GPU?): 2.7.0+cu126 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Huggingface_hub version: 0.30.2 - Transformers version: 4.51.3 - Accelerate version: 1.6.0 - PEFT version: 0.15.2 - Bitsandbytes version: 0.45.5 - Safetensors version: 0.5.3 - xFormers version: not installed - Accelerator: NVIDIA H100 PCIe, 81559 MiB - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? _No response_
https://github.com/huggingface/diffusers/issues/11432
closed
[ "bug" ]
2025-04-28T04:31:26Z
2025-05-13T01:42:37Z
3
johannaSommer
huggingface/lerobot
1,041
image transform of pi0 is inconsistent with openpi
Thank you for the pi0 work in lerobot. However, I found that the image transform is quite different from openpi's. image transform of lerobot pi0: ![Image](https://github.com/user-attachments/assets/6ff30d08-bc84-4005-8cb9-adc917f9817e) image transform of openpi: ![Image](https://github.com/user-attachments/assets/75845f92-d54e-43ea-be08-81504b6df2ff) Are there some special considerations? By the way, resize_with_pad is also different.
https://github.com/huggingface/lerobot/issues/1041
closed
[ "question", "policies", "stale" ]
2025-04-28T03:08:10Z
2025-11-20T02:30:12Z
null
wushandinghua
huggingface/diffusers
11,423
Lora Hotswap no clear documentation
Hello everyone. Here is the scenario I have. I have say 10 LoRAs that I would like to load and use depending on the request. Option one: using `load_lora_weights` - reads from the disk and moves to device: expensive operation. Option two: load all LoRAs and set the weights of unused LoRAs to 0.0 with the `set_adapters` method. Not practical, since all LoRAs are still loaded and the forward pass becomes expensive. Option three: find an elegant way of loading LoRAs to CPU and then moving them to GPU as needed. While I was trying to do that, I saw the new hotswapping parameter in the `load_lora_weights` method. And this is what is described in the documentation: hotswap — (bool, optional) Defaults to False. Whether to substitute an existing (LoRA) adapter with the newly loaded adapter in-place. This means that, instead of loading an additional adapter, this will take the existing adapter weights and replace them with the weights of the new adapter. This can be faster and more memory efficient. However, the main advantage of hotswapping is that when the model is compiled with torch.compile, loading the new adapter does not require recompilation of the model. When using hotswapping, the passed adapter_name should be the name of an already loaded adapter. **If the new adapter and the old adapter have different ranks and/or LoRA alphas (i.e. scaling), you need to call an additional method before loading the adapter** Could someone help me out here and name the mysterious function to be called? And optionally it would be great if someone could help me with my scenario.
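For the differing-rank caveat, recent diffusers releases document an `enable_lora_hotswap` method on the pipeline that must be called before the first LoRA is loaded — please verify the name and signature against your installed version's hotswapping guide. A sketch of the flow (`pipe`, `max_rank`, the file names, and the `"default_0"` adapter name are assumptions, not verified values):

```
# Sketch only: assumes an already-created pipeline and LoRA files on disk.
# max_rank must cover the largest rank among all LoRAs you plan to swap in.
pipe.enable_lora_hotswap(target_rank=max_rank)
pipe.load_lora_weights("lora_one.safetensors")  # first adapter
# later, replace its weights in place (no recompilation under torch.compile):
pipe.load_lora_weights("lora_two.safetensors", adapter_name="default_0", hotswap=True)
```

If this works as documented, it also addresses the scenario above: keep one adapter slot and hotswap the needed LoRA in, instead of keeping all ten loaded.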
https://github.com/huggingface/diffusers/issues/11423
open
[ "stale" ]
2025-04-26T13:44:08Z
2025-05-26T15:03:03Z
2
vahe-toffee
huggingface/diffusers
11,419
How to know that "Textual inversion" file I have loaded and not turn it on?
Reviewing the documentation, I understand loading TI with: # Add embeddings pipeline.load_textual_inversion("sd-concepts-library/cat-toy") # Remove all token embeddings pipeline.unload_textual_inversion() # Remove just one token pipeline.unload_textual_inversion("<moe-bius>") But how do you know which ones are loaded into the pipeline?
https://github.com/huggingface/diffusers/issues/11419
closed
[ "stale" ]
2025-04-25T17:18:07Z
2025-05-27T18:09:45Z
null
Eduardishion
huggingface/diffusers
11,418
How to add flux1-fill-dev-fp8.safetensors
### Describe the bug Hi! How to use flux1-fill-dev-fp8.safetensors in diffusers? Now I have code: ``` def init_pipeline(device: str): logger.info(f"Loading FLUX Inpaint Pipeline (Fill‑dev) on {device}") pipe = FluxFillPipeline.from_pretrained( "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16, trust_remote_code=True ).to(device) logger.info("Pipeline loaded successfully") return pipe ``` Another try: ``` transformer = FluxTransformer2DModel.from_single_file( "https://huggingface.co/YarvixPA/FLUX.1-Fill-dev-gguf/blob/main/flux1-fill-dev-Q4_0.gguf", quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16), torch_dtype=torch.bfloat16 ) pipe = FluxFillPipeline.from_pretrained( "black-forest-labs/FLUX.1-Fill-dev", transformer=transformer, torch_dtype=torch.bfloat16, trust_remote_code=True ).to(device) pipe.enable_model_cpu_offload() ``` ### Reproduction https://huggingface.co/boricuapab/flux1-fill-dev-fp8/blob/main/README.md https://huggingface.co/pengxian/diffusion_models/blob/main/flux1-fill-dev_fp8.safetensors ### Logs ```shell ``` ### System Info Windows 11 Python 11 ### Who can help? _No response_
https://github.com/huggingface/diffusers/issues/11418
closed
[ "bug" ]
2025-04-25T14:58:08Z
2025-04-28T19:06:17Z
null
SlimRG
huggingface/optimum
2,242
[onnx] What are the functions of the generated files by optimum-cli?
### System Info ```shell I tried to use **optimum-cli** to export the ONNX file for llama, but I don't get a single ONNX file as expected; instead I get a lot of files, and I don't know what they are used for: (MindSpore) [ma-user llama149]$ls onnx_model/ config.json generation_config.json model.onnx model.onnx_data special_tokens_map.json tokenizer_config.json tokenizer.json > refer to https://zhuanlan.zhihu.com/p/663971402 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction (minimal, reproducible, runnable) > (py39) [ma-user llama149]$optimum-cli export onnx --model models--daryl149--llama-2-7b-hf onnx_model --task text-generation ### Expected behavior get only a single ONNX file, similar to **torch.onnx.export**
https://github.com/huggingface/optimum/issues/2242
closed
[]
2025-04-25T13:12:35Z
2025-04-28T09:18:06Z
1
vfdff
huggingface/diffusers
11,417
attributeerror: 'distributeddataparallel' object has no attribute 'dtype'. did you mean: 'type'?
### Describe the bug attributeerror: 'distributeddataparallel' object has no attribute 'dtype'. did you mean: 'type'? ### Reproduction export MODEL_NAME="black-forest-labs/FLUX.1-dev" export OUTPUT_DIR="trained-flux-dev-dreambooth-lora" accelerate launch train_dreambooth_lora_flux.py \ --pretrained_model_name_or_path=$MODEL_NAME \ --instance_data_dir=$INSTANCE_DIR \ --output_dir=$OUTPUT_DIR \ --mixed_precision="bf16" \ --train_text_encoder\ --instance_prompt="a photo of sks dog" \ --resolution=512 \ --train_batch_size=1 \ --guidance_scale=1 \ --gradient_accumulation_steps=4 \ --optimizer="prodigy" \ --learning_rate=1. \ --report_to="wandb" \ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ --max_train_steps=500 \ --validation_prompt="A photo of sks dog in a bucket" \ --seed="0" \ --push_to_hub ### Logs ```shell ``` ### System Info - 🤗 Diffusers version: 0.33.0 - Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35 - Running on Google Colab?: No - Python version: 3.10.12 - PyTorch version (GPU?): 2.4.0+cu121 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Huggingface_hub version: 0.30.2 - Transformers version: 4.44.1 - Accelerate version: 0.32.1 - PEFT version: 0.15.2 - Bitsandbytes version: not installed - Safetensors version: 0.4.2 - xFormers version: 0.0.27.post2 ### Who can help? _No response_
https://github.com/huggingface/diffusers/issues/11417
open
[ "bug", "stale" ]
2025-04-25T03:30:52Z
2025-05-25T15:02:30Z
1
asjqmasjqm
huggingface/datasets
7,536
[Errno 13] Permission denied: on `.incomplete` file
### Describe the bug When downloading a dataset, we frequently hit the below Permission Denied error. This looks to happen (at least) across datasets in HF, S3, and GCS. It looks like the `temp_file` being passed [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L412) can sometimes be created with `000` permissions leading to the permission denied error (the user running the code is still the owner of the file). Deleting that particular file and re-running the code with 0 changes will usually succeed. Is there some race condition happening with the [umask](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L416), which is process global, and the [file creation](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L404)? ``` _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .venv/lib/python3.12/site-packages/datasets/load.py:2084: in load_dataset builder_instance.download_and_prepare( .venv/lib/python3.12/site-packages/datasets/builder.py:925: in download_and_prepare self._download_and_prepare( .venv/lib/python3.12/site-packages/datasets/builder.py:1649: in _download_and_prepare super()._download_and_prepare( .venv/lib/python3.12/site-packages/datasets/builder.py:979: in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) .venv/lib/python3.12/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py:120: in _split_generators downloaded_files = dl_manager.download(files) .venv/lib/python3.12/site-packages/datasets/download/download_manager.py:159: in download downloaded_path_or_paths = map_nested( .venv/lib/python3.12/site-packages/datasets/utils/py_utils.py:514: in map_nested _single_map_nested((function, obj, batched, batch_size, types, None, True, None)) .venv/lib/python3.12/site-packages/datasets/utils/py_utils.py:382: in _single_map_nested return 
[mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)] .venv/lib/python3.12/site-packages/datasets/download/download_manager.py:206: in _download_batched return thread_map( .venv/lib/python3.12/site-packages/tqdm/contrib/concurrent.py:69: in thread_map return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs) .venv/lib/python3.12/site-packages/tqdm/contrib/concurrent.py:51: in _executor_map return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs)) .venv/lib/python3.12/site-packages/tqdm/std.py:1181: in __iter__ for obj in iterable: ../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:619: in result_iterator yield _result_or_cancel(fs.pop()) ../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:317: in _result_or_cancel return fut.result(timeout) ../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:449: in result return self.__get_result() ../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:401: in __get_result raise self._exception ../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/thread.py:59: in run result = self.fn(*self.args, **self.kwargs) .venv/lib/python3.12/site-packages/datasets/download/download_manager.py:229: in _download_single out = cached_path(url_or_filename, download_config=download_config) .venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:206: in cached_path output_path = get_from_cache( .venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:412: in get_from_cache fsspec_get(url, temp_file, storage_options=storage_options, desc=download_desc, disable_tqdm=disable_tqdm) .venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:331: in fsspec_get fs.get_file(path, temp_file.name, callback=callback) .venv/lib/python3.12/site-packages/fsspec/asyn.py:118: in wrapper return sync(self.loop, func, *args, **kwargs) 
.venv/lib/python3.12/site-packages/fsspec/asyn.py:103: in sync raise return_result .venv/lib/python3.12/site-packages/fsspec/asyn.py:56: in _runner result[0] = await coro _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <s3fs.core.S3FileSystem object at 0x7f27c18b2e70> rpath = '<my-bucket>/<my-prefix>/img_1.jpg' lpath = '/home/runner/_work/_temp/hf_cache/downloads/6c97983efa4e24e534557724655df8247a0bd04326cdfc4a95b638c11e78222d.incomplete' callback = <datasets.utils.file_utils.TqdmCallback object at 0x7f27c00cdbe0> version_id = None, kwargs = {} _open_file = <function S3FileSystem._get_file.<locals>._open_file at 0x7f27628d1120> body = <StreamingBody at 0x7f276344fa80 for ClientResponse at 0x7f27c015fce0> content_length = 521923, failed_reads = 0, bytes_read = 0 async def _get_file( self, rpath, lpath, callback=_DEFAULT_CALLBACK, version_id=None, **kwargs ):
https://github.com/huggingface/datasets/issues/7536
closed
[]
2025-04-24T20:52:45Z
2025-05-06T13:05:01Z
4
ryan-clancy
huggingface/diffusers
11,396
How to convert the hidream lora trained by diffusers to a format that comfyui can load?
### Describe the bug The hidream lora trained by diffusers can't be loaded in ComfyUI; how can I convert it? ### Reproduction No ### Logs ```shell ``` ### System Info No ### Who can help? _No response_
https://github.com/huggingface/diffusers/issues/11396
closed
[ "bug", "stale" ]
2025-04-23T13:13:34Z
2025-06-23T09:49:19Z
null
yinguoweiOvO
huggingface/candle
2,916
how to save and load the model
I just used varmap.save to save the varmap, but when I used varmap.load I got an empty varmap. Is there any way to save the trained model?
https://github.com/huggingface/candle/issues/2916
closed
[]
2025-04-23T11:10:04Z
2025-04-24T02:25:37Z
null
liguheng
huggingface/tokenizers
1,768
How to debug tokenizers with python?
Hi, I have a technical question. After installing transformers via pip, I successfully installed tokenizers==0.21.1 and transformers==4.49.0. When running the code: `tokenizer = AutoTokenizer.from_pretrained("../Qwen2") # (tokenizer configs in this folder)` `tokenizer.encode(data)` I want to trace the program flow to understand: - How tokenizers.encode_batch works internally - The implementation details of BPE (Byte Pair Encoding) However, I'm currently stuck because the code appears to be compiled into tokenizers.abi3.so, making the source code inaccessible. How can I debug or inspect these components?
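While the Rust internals are compiled into tokenizers.abi3.so and cannot be stepped through from Python, the BPE algorithm itself is easy to study from a pure-Python re-implementation. This toy sketch mirrors the greedy merge loop (the merge table and input are made up for illustration, not taken from Qwen2's vocabulary):

```python
def bpe_encode(word, merges):
    """Greedy BPE: repeatedly merge the highest-priority adjacent pair.
    `merges` maps a symbol pair to its priority (lower = learned earlier = preferred)."""
    symbols = list(word)
    while len(symbols) > 1:
        # Rank every adjacent pair; unknown pairs get infinite rank.
        pairs = [(merges.get((a, b), float("inf")), i)
                 for i, (a, b) in enumerate(zip(symbols, symbols[1:]))]
        best_rank, best_i = min(pairs)
        if best_rank == float("inf"):
            break  # no learnable merge applies any more
        symbols[best_i:best_i + 2] = [symbols[best_i] + symbols[best_i + 1]]
    return symbols

# Toy merge table: "l"+"o" was learned first, then "lo"+"w".
merges = {("l", "o"): 0, ("lo", "w"): 1}
print(bpe_encode("lower", merges))  # ['low', 'e', 'r']
```

Setting breakpoints in a loop like this is a practical substitute for tracing the compiled `encode_batch` path.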
https://github.com/huggingface/tokenizers/issues/1768
open
[]
2025-04-23T09:37:20Z
2025-04-30T14:11:11Z
null
JinJieGan
huggingface/diffusers
11,390
Better image interpolation in training scripts follow up
With https://github.com/huggingface/diffusers/pull/11206 we made a small quality improvement to the SDXL Dreambooth LoRA script by making `LANCZOS` the default interpolation mode for image resizing. This issue asks for help from the community to bring this change to the other training scripts, especially the popular ones. Since this is a really easy contribution to make, I'll ask that we leave this issue for beginners and people who want to start learning how to contribute to open source projects. What I think are the most important ones:

- [x] [train_dreambooth_flux](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_flux.py)
- [x] [train_dreambooth_lora](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora.py)
- [x] [train_dreambooth_lora_lumina2](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_lumina2.py)
- [x] [train_dreambooth_lora_sdxl](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sdxl.py)
- [x] [train_controlnet_flux](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/train_controlnet_flux.py)
- [x] [train_controlnet_sdxl](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/train_controlnet_sdxl.py)
- [x] [train_text_to_image](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py)
- [x] [train_text_to_image_lora](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py)
- [x] [train_text_to_image_sdxl](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py)
- [x] [train_text_to_image_lora_sdxl](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora_sdxl.py)
- [x] [train_dreambooth_lora_flux_advanced](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py)
- [x] [train_dreambooth_lora_sd15_advanced](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sd15_advanced.py)
- [x] [train_dreambooth_lora_sdxl_advanced](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py)

If you have another preference, please feel free to ask me to add it. If you want to contribute, just reply to this issue with the one you want to do and tag me in the PR. Please only take one, since I want to use this issue to help people learn the ropes of contributing and get started with open source.
https://github.com/huggingface/diffusers/issues/11390
closed
[ "good first issue", "contributions-welcome" ]
2025-04-23T00:04:10Z
2025-05-05T16:35:18Z
20
asomoza
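For anyone picking one of these scripts up, the change itself is essentially a one-liner. A minimal sketch of the idea using PIL (the training scripts themselves use the torchvision equivalent, roughly `transforms.Resize(size, interpolation=InterpolationMode.LANCZOS)`; treat the exact call site as an assumption and check PR #11206 for the reference diff):

```python
from PIL import Image

def resize_lanczos(img: Image.Image, size: int) -> Image.Image:
    # LANCZOS resampling tends to preserve fine detail better than the
    # bilinear/bicubic defaults, especially when downscaling.
    return img.resize((size, size), Image.LANCZOS)
```
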
huggingface/lerobot
1,019
How to resume dataset creation after interruption instead of starting from scratch?
Recently our dataset creation + upload got interrupted due to an error not related to LeRobot. However, I have not been able to launch the dataset creation using the information already processed. My cache folder shows the data, meta, and videos folders, and I was able to determine using the episodes.jsonl file in meta folder that there were 579 episodes processed. When I try to resume from 580th episode, the `LeRobotDataset.create()` command gives the error that `FileExistsError: [Errno 17] File exists:` because the cache has it. How to resume it instead of having to start from scratch again?
https://github.com/huggingface/lerobot/issues/1019
closed
[]
2025-04-22T21:30:12Z
2025-04-22T21:45:00Z
null
Anas-7
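There is no official resume API shown here, but as a workaround one can read `meta/episodes.jsonl` (which the asker already inspected) to find where recording stopped before re-creating the dataset object. A sketch, assuming each line is a JSON record with an `episode_index` field:

```python
import json
from pathlib import Path

def last_episode_index(meta_dir: str) -> int:
    """Return the highest episode_index recorded in meta/episodes.jsonl,
    or -1 if the file is missing or empty."""
    path = Path(meta_dir) / "episodes.jsonl"
    if not path.exists():
        return -1
    indices = [
        json.loads(line)["episode_index"]
        for line in path.read_text().splitlines()
        if line.strip()
    ]
    return max(indices, default=-1)
```

Recording could then restart at `last_episode_index(...) + 1` instead of episode 0; whether `LeRobotDataset` accepts appending into an existing cache directory is version-dependent.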
huggingface/peft
2,508
How to save a custom module into adapter_model.safetensors when integrating a new PEFT method
I just don't know where to save and load the module, or whether something can mark which modules need to be saved. For example, we want a MoE of LoRA, where multiple LoRAs and a router will be the trainable parts and need to be saved.
https://github.com/huggingface/peft/issues/2508
closed
[]
2025-04-22T15:46:39Z
2025-04-30T11:01:58Z
null
AaronZLT
huggingface/lerobot
1,015
How to efficiently collect and standardize datasets from multiple Gymnasium environments?
Hello, I am studying how to collect datasets from various Gymnasium environments for reinforcement learning and imitation learning experiments. Currently, I can collect some data from real environments, but how do I collect data from Gymnasium?
https://github.com/huggingface/lerobot/issues/1015
closed
[ "question", "dataset", "good first issue" ]
2025-04-22T08:50:34Z
2025-10-17T11:16:09Z
null
ybu-lxd
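A generic rollout collector matching the Gymnasium API (`reset()` returns `(obs, info)`; `step()` returns the 5-tuple `(obs, reward, terminated, truncated, info)`) can gather raw transitions; mapping the resulting frame dicts onto a LeRobot dataset schema is left as an assumption:

```python
def collect_episode(env, policy, max_steps=1000):
    """Roll out one episode and return a list of frame dicts."""
    frames = []
    obs, info = env.reset()
    for _ in range(max_steps):
        action = policy(obs)
        next_obs, reward, terminated, truncated, info = env.step(action)
        frames.append({"observation": obs, "action": action, "reward": reward})
        obs = next_obs
        if terminated or truncated:
            break
    return frames
```
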
huggingface/lerobot
1,013
When creating dataset, how to save_episode with existing video?
For video with compatible frames, height and width that is recorded/rendered elsewhere, how can I add it to an episode directly without redundant decode-encode round-trip?
https://github.com/huggingface/lerobot/issues/1013
closed
[ "enhancement", "dataset", "stale" ]
2025-04-22T04:05:10Z
2025-12-25T02:35:25Z
null
jjyyxx
huggingface/lerobot
1,012
why chunk_size not used in PI0?
https://github.com/huggingface/lerobot/blob/b43ece89340e7d250574ae7f5aaed5e8389114bd/lerobot/common/policies/pi0/modeling_pi0.py#L658 Is it more meaningful and reasonable here to change `n_action_steps` to `chunk_size`, since `chunk_size` means prediction action horizon and `n_action_steps` means action steps actually applied to control the robot?
https://github.com/huggingface/lerobot/issues/1012
closed
[ "question", "policies", "stale" ]
2025-04-22T03:43:38Z
2025-11-04T02:30:18Z
null
feixyz10
huggingface/huggingface_hub
3,020
How to run apps in local mode? local_files_only is failing
The app is running perfectly fine when internet available All models downloaded into `os.environ['HF_HOME'] = os.path.abspath(os.path.realpath(os.path.join(os.path.dirname(__file__), './hf_download')))` When i set like below ``` # Set local_files_only based on offline mode local_files_only = args.offline if local_files_only: print("Running in OFFLINE mode - using local models only") # Disable any online connections for HuggingFace when in offline mode os.environ['HF_HUB_OFFLINE'] = '1' os.environ['TRANSFORMERS_OFFLINE'] = '1' os.environ['DIFFUSERS_OFFLINE'] = '1' # Load models with local_files_only parameter when in offline mode text_encoder = LlamaModel.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder='text_encoder', torch_dtype=torch.float16, local_files_only=local_files_only).cpu() text_encoder_2 = CLIPTextModel.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder='text_encoder_2', torch_dtype=torch.float16, local_files_only=local_files_only).cpu() tokenizer = LlamaTokenizerFast.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder='tokenizer', local_files_only=local_files_only) tokenizer_2 = CLIPTokenizer.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder='tokenizer_2', local_files_only=local_files_only) vae = AutoencoderKLHunyuanVideo.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder='vae', torch_dtype=torch.float16, local_files_only=local_files_only).cpu() feature_extractor = SiglipImageProcessor.from_pretrained("lllyasviel/flux_redux_bfl", subfolder='feature_extractor', local_files_only=local_files_only) image_encoder = SiglipVisionModel.from_pretrained("lllyasviel/flux_redux_bfl", subfolder='image_encoder', torch_dtype=torch.float16, local_files_only=local_files_only).cpu() transformer = HunyuanVideoTransformer3DModelPacked.from_pretrained('lllyasviel/FramePackI2V_HY', torch_dtype=torch.bfloat16, local_files_only=local_files_only).cpu() ``` and run with turning off internet i get below 
error `local_files_only = set as True` ``` Running in OFFLINE mode - using local models only Loading checkpoint shards: 100%|████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 262.52it/s] Traceback (most recent call last): File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\urllib3\connection.py", line 198, in _new_conn sock = connection.create_connection( File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\urllib3\util\connection.py", line 60, in create_connection for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): File "C:\Python310\lib\socket.py", line 955, in getaddrinfo for res in _socket.getaddrinfo(host, port, family, type, proto, flags): socket.gaierror: [Errno 11001] getaddrinfo failed The above exception was the direct cause of the following exception: Traceback (most recent call last): File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\urllib3\connectionpool.py", line 787, in urlopen response = self._make_request( File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\urllib3\connectionpool.py", line 488, in _make_request raise new_e File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\urllib3\connectionpool.py", line 464, in _make_request self._validate_conn(conn) File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\urllib3\connectionpool.py", line 1093, in _validate_conn conn.connect() File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\urllib3\connection.py", line 704, in connect self.sock = sock = self._new_conn() File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\urllib3\connection.py", line 205, in _new_conn raise NameResolutionError(self.host, self, e) from e urllib3.exceptions.NameResolutionError: <urllib3.connection.HTTPSConnection object at 0x000001A126F7ED70>: Failed to resolve 'huggingface.co' ([Errno 11001] getaddrinfo failed) The above exception was the direct cause of the following exception: Traceback (most recent call last): File 
"Q:\FramePack_v1\FramePack\venv\lib\site-packages\requests\adapters.py", line 486, in send resp = conn.urlopen( File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\urllib3\connectionpool.py", line 841, in urlopen retries = retries.increment( File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\urllib3\util\retry.py", line 519, in increment raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/lllyasviel/FramePackI2V_HY (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x000001A126F7ED70>: Failed to resolve 'huggingface.co' ([Errno 11001] getaddrinfo failed)")) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "Q:\FramePack_v1\FramePack\app.py", line 72, in <module> transformer
https://github.com/huggingface/huggingface_hub/issues/3020
closed
[ "bug" ]
2025-04-21T23:46:06Z
2025-04-22T09:24:57Z
null
FurkanGozukara
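One likely culprit in the snippet above is ordering: the offline flags are set after `transformers`/`diffusers` have already been imported, so parts of the hub client may be configured before the flags exist. A minimal sketch of setting them first (these environment variable names are the documented Hugging Face ones):

```python
import os

def enable_hf_offline() -> dict:
    """Set HF offline flags; call this BEFORE importing transformers,
    diffusers, or huggingface_hub so they see the flags at import time."""
    flags = {
        "HF_HUB_OFFLINE": "1",
        "TRANSFORMERS_OFFLINE": "1",
        "DIFFUSERS_OFFLINE": "1",
    }
    os.environ.update(flags)
    return flags
```

With the flags in place, `from_pretrained(..., local_files_only=True)` should resolve everything from the local cache; the traceback shows the failing call was still trying to reach `huggingface.co`.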
huggingface/finetrainers
378
How to finetune CogVideoX1.5-5B T2V LoRA?
Hello. I am still unfamiliar with the finetuning process. I want to finetune CogVideoX1.5-5B T2V with LoRA. I have a single RTX 4090. I tried to re-run the bash script "finetrainers\examples\training\sft\cogvideox\crush_smol_lora\train.sh" with my own dataset and ended up with the error message `train.sh: line 130: accelerate: command not found train.sh: line 131: $'(\r --parallel_backend accelerate\r --pp_degree 1 --dp_degree 1 --dp_shards 1 --cp_degree 1 --tp_degree 1\r\r)\r': command not found : No such file or directory_path THUDM/CogVideoX1.5-5B --dataset_config D:/TA_ucup/finetrainers/examples/training/sft/cogvideox/crush_smol_: No such file or directoryize 10 train.sh: line 134: $'(\r --dataloader_num_workers 0\r)\r': command not found train.sh: line 135: $'(\r --flow_weighting_scheme logit_normal\r)\r': command not found train.sh: line 136: $'(\r --training_type lora\r --seed 42\r --batch_size 1\r --train_steps 3000\r --rank 32\r --lora_alpha 32\r --target_modules (transformer_blocks|single_transformer_blocks).*(to_q|to_k|to_v|to_out.0)\r --gradient_accumulation_steps 1\r --gradient_checkpointing\r --checkpointing_steps 1000\r --checkpointing_limit 2\r --enable_slicing\r --enable_tiling\r)\r': command not found train.sh: line 137: $'(\r --optimizer adamw\r --lr 5e-5\r --lr_scheduler constant_with_warmup\r --lr_warmup_steps 1000\r --lr_num_cycles 1\r --beta1 0.9\r --beta2 0.99\r --weight_decay 1e-4\r --epsilon 1e-8\r --max_grad_norm 1.0\r)\r': command not found --validation_dataset_file D:/TA_ucup/finetrainers/examples/training/sft/cogvideox/cr: No such file or directoryon : No such file or directoryogvideoxeox` I already installed the library requirements and diffusers. Is there anything I am missing?
https://github.com/huggingface/finetrainers/issues/378
open
[]
2025-04-21T17:17:08Z
2025-04-24T06:24:06Z
null
MaulanaYusufIkhsanRobbani
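The `$'\r'` fragments and `: command not found` lines in the error output indicate `train.sh` was saved with Windows (CRLF) line endings, which bash on Linux does not strip. A sketch of the fix in Python, equivalent to running `dos2unix train.sh` (or `sed -i 's/\r$//' train.sh`):

```python
from pathlib import Path

def strip_crlf(path: str) -> None:
    """Rewrite a text file with Unix (LF) line endings in place."""
    p = Path(path)
    p.write_bytes(p.read_bytes().replace(b"\r\n", b"\n"))
```
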
huggingface/trl
3,333
How can I set the dataset to not shuffle? It seems there is no such option.
I'm using GRPOTrainer for training, and based on the logs I've printed, it seems that the dataset is being shuffled. However, the order of samples in the dataset is very important to me, and I don't want it to be shuffled. What should I do? I've checked the documentation but couldn't find any parameter to control this.
https://github.com/huggingface/trl/issues/3333
closed
[ "❓ question", "🏋 GRPO" ]
2025-04-21T11:11:53Z
2025-04-21T21:34:33Z
null
Tuziking
huggingface/trl
3,331
how to run multi-adapter PPO training in TRL==0.16.1 ?
In `TRL==0.11.0`, we could use the multi-adapter setup to train a PPO model like:
- $\pi_\text{sft}$ sft model as base model
- $\pi_\text{sft} + \text{LoRA}_\text{rm}$ as reward model
- $\pi_\text{sft} + \text{LoRA}_\text{policy}$ as policy model
- $\pi_\text{sft} + \text{LoRA}_\text{critic}$ as value model

In v0.16.0, how do I run multi-adapter PPO training?
https://github.com/huggingface/trl/issues/3331
closed
[ "❓ question", "🏋 PPO", "🏋 SFT" ]
2025-04-21T06:26:32Z
2025-06-17T08:59:11Z
null
dhcode-cpp
huggingface/huggingface_hub
3,019
How to solve "Spaces stuck in Building" problems
### Describe the bug Public spaces may get stuck in Building after restarting; error log as follows: build error Unexpected job error ERROR: failed to push spaces-registry.huggingface.tech/spaces/:cpu--: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-: 401 Unauthorized ### Reproduction _No response_ ### Logs ```shell ``` ### System info ```shell This problem can still happen in Python Gradio Spaces without a requirements.txt ```
https://github.com/huggingface/huggingface_hub/issues/3019
closed
[ "bug" ]
2025-04-21T03:11:11Z
2025-04-22T07:50:01Z
null
ghost
huggingface/datasets
7,530
How to solve "Spaces stuck in Building" problems
### Describe the bug Public spaces may get stuck in Building after restarting; error log as follows: build error Unexpected job error ERROR: failed to push spaces-registry.huggingface.tech/spaces/*:cpu-*-*: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-*: 401 Unauthorized ### Steps to reproduce the bug Restarting the space / factory rebuild cannot avoid it ### Expected behavior Fix this problem ### Environment info This can still happen in Python Gradio Spaces without a requirements.txt
https://github.com/huggingface/datasets/issues/7530
closed
[]
2025-04-21T03:08:38Z
2025-11-11T00:57:14Z
null
ghost
huggingface/lerobot
1,005
[pi0] n_action_step vs chunk_size
In modeling_pi0.py, the config variable `chunk_size` is never used. Instead, the action queue is set to be the size of `n_action_step`, and the training loss is also calculated on the actions of size `n_action_step`. But I thought what should happen is that the model would predict actions of length `chunk size` (and the loss is calculated on this action length as well), and the actual execution only takes `n_action_step`. At the very least, the variable that defines the size of `action_queue` should not be the same as the variable that defines the size of the predicted action vector. They may take the same value, but should be different variables, so the user can use the config to adjust how often they want to do inference This is also what happens in pi0fast's implementation, if I am not mistaken Am I missing something here? Thanks in advance
https://github.com/huggingface/lerobot/issues/1005
closed
[ "question", "policies", "stale" ]
2025-04-20T04:00:23Z
2025-11-07T02:30:27Z
null
IrvingF7
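The separation the asker points at (see also issue 1012 above) can be sketched as a toy control loop: the policy predicts `chunk_size` actions per inference call, but only `n_action_steps` of them are queued for execution before re-planning. This illustrates the proposed behaviour, not LeRobot's current implementation:

```python
from collections import deque

def rollout_step(predict, queue: deque, chunk_size: int, n_action_steps: int):
    """Pop one action for execution, re-running inference only when the
    execution queue is empty."""
    if not queue:
        actions = predict(chunk_size)           # policy outputs chunk_size actions
        queue.extend(actions[:n_action_steps])  # but only a prefix is executed
    return queue.popleft()
```
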
huggingface/lerobot
1,000
How to implement a new policy?
How can I integrate a new policy (e.g., OpenVLA) into LeRobot, and specifically, which files do I need to modify?
https://github.com/huggingface/lerobot/issues/1000
closed
[ "enhancement", "policies" ]
2025-04-19T08:53:48Z
2025-07-29T14:30:18Z
null
Elycyx
huggingface/prettier-plugin-vertical-align
2
how to use
https://github.com/huggingface/prettier-plugin-vertical-align#installation Add plugins: ["@huggingface/prettier-plugin-vertical-align"] to your .prettierrc file. Are you sure it should go in the .prettierrc file?
https://github.com/huggingface/prettier-plugin-vertical-align/issues/2
closed
[]
2025-04-19T04:15:29Z
2025-04-24T02:53:42Z
null
twotwoba
huggingface/lerobot
997
How to convert pi0-FAST
I have just found the pi0 conversion script; how do I convert pi0-FAST? ![Image](https://github.com/user-attachments/assets/ca6b8c52-4000-478e-88a0-501f0ce3c205)
https://github.com/huggingface/lerobot/issues/997
closed
[ "question" ]
2025-04-18T14:27:29Z
2025-10-14T14:06:30Z
null
ximiluuuu
huggingface/diffusers
11,359
[Feature request] LTX-Video v0.9.6 15x faster inference than non-distilled model.
**Is your feature request related to a problem? Please describe.** No problem. This request is low priority; as and when time allows. **Describe the solution you'd like.** Please support the new release of LTX-Video 0.9.6. **Describe alternatives you've considered.** The original repo has support, but it is easier to use with diffusers. **Additional context.** April 15th, 2025: New checkpoints v0.9.6:
- Release a new checkpoint [ltxv-2b-0.9.6-dev-04-25](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-2b-0.9.6-dev-04-25.safetensors) with improved quality
- Release a new distilled model [ltxv-2b-0.9.6-distilled-04-25](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-2b-0.9.6-distilled-04-25.safetensors):
  - 15x faster inference than the non-distilled model
  - Does not require classifier-free guidance or spatio-temporal guidance
  - Supports sampling with 8 (recommended), 4, 2, or 1 diffusion steps
  - Improved prompt adherence, motion quality, and fine details
- New default resolution and FPS: 1216 × 704 pixels at 30 FPS; still real time on H100 with the distilled model. Other resolutions and FPS are still supported.
- Support stochastic inference (can improve visual quality when using the distilled model)

https://github.com/Lightricks/LTX-Video Feedback on distilled model: https://www.reddit.com/r/StableDiffusion/comments/1k1xk1m/6_seconds_video_in_60_seconds_in_this_quality_is/ https://www.reddit.com/r/StableDiffusion/comments/1k1o4x8/the_new_ltxvideo_096_distilled_model_is_actually/
https://github.com/huggingface/diffusers/issues/11359
closed
[]
2025-04-18T08:05:27Z
2025-05-09T16:03:34Z
6
nitinmukesh
huggingface/transformers.js
1,291
@xenova/transformers vs. @huggingface/transformers npm package
### Question It's pretty confusing to have both of these on npm. Which are we supposed to use? Can you please deprecate the one that we aren't supposed to use? (`npm deprecate`)
https://github.com/huggingface/transformers.js/issues/1291
open
[ "question" ]
2025-04-17T16:10:36Z
2025-10-24T10:19:03Z
null
nzakas
huggingface/accelerate
3,510
Accelerate Config Error - How to debug this?
### System Info ```Shell pip list absl-py 2.2.2 accelerate 1.6.0 annotated-types 0.7.0 bitsandbytes 0.45.5 diffusers 0.33.0.dev0 /data/roy/diffusers ftfy 6.3.1 huggingface-hub 0.30.2 numpy 2.2.4 nvidia-cublas-cu12 12.4.5.8 nvidia-cuda-cupti-cu12 12.4.127 nvidia-cuda-nvrtc-cu12 12.4.127 nvidia-cuda-runtime-cu12 12.4.127 nvidia-cudnn-cu12 9.1.0.70 nvidia-cufft-cu12 11.2.1.3 nvidia-curand-cu12 10.3.5.147 nvidia-cusolver-cu12 11.6.1.9 nvidia-cusparse-cu12 12.3.1.170 nvidia-cusparselt-cu12 0.6.2 nvidia-nccl-cu12 2.21.5 nvidia-nvjitlink-cu12 12.4.127 nvidia-nvtx-cu12 12.4.127 packaging 24.2 peft 0.15.2 pip 22.0.2 protobuf 5.29.4 safetensors 0.5.3 setuptools 59.6.0 tokenizers 0.21.1 torch 2.6.0 torchvision 0.21.0 transformers 4.51.3 triton 3.2.0 wandb 0.19.9 ... etc nvidia-smi +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 570.124.06 Driver Version: 570.124.06 CUDA Version: 12.8 | |-----------------------------------------+------------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. 
| |=========================================+========================+======================| | 0 NVIDIA H100 PCIe Off | 00000000:2E:00.0 Off | 0 | | N/A 43C P0 84W / 350W | 16460MiB / 81559MiB | 100% Default | | | | Disabled | +-----------------------------------------+------------------------+----------------------+ | 1 NVIDIA H100 PCIe Off | 00000000:30:00.0 Off | 0 | | N/A 45C P0 89W / 350W | 11456MiB / 81559MiB | 100% Default | | | | Disabled | +-----------------------------------------+------------------------+----------------------+ | 2 NVIDIA H100 PCIe Off | 00000000:3F:00.0 Off | 0 | | N/A 40C P0 86W / 350W | 11384MiB / 81559MiB | 100% Default | | | | Disabled | +-----------------------------------------+------------------------+----------------------+ | 3 NVIDIA H100 PCIe Off | 00000000:41:00.0 Off | 0 | | N/A 36C P0 47W / 350W | 1MiB / 81559MiB | 0% Default | | | | Disabled | +-----------------------------------------+------------------------+----------------------+ | 4 NVIDIA H100 PCIe Off | 00000000:B0:00.0 Off | 0 | | N/A 46C P0 87W / 350W | 11384MiB / 81559MiB | 100% Default | | | | Disabled | +-----------------------------------------+------------------------+----------------------+ | 5 NVIDIA H100 PCIe Off | 00000000:B1:00.0 Off | 0 | | N/A 39C P0 48W / 350W | 1MiB / 81559MiB | 0% Default | | | | Disabled | +-----------------------------------------+------------------------+----------------------+ | 6 NVIDIA H100 PCIe Off | 00000000:C1:00.0 Off | 0 | | N/A 35C P0 52W / 350W | 1MiB / 81559MiB | 0% Default | | | | Disabled | +-----------------------------------------+------------------------+----------------------+ | 7 NVIDIA H100 PCIe Off | 00000000:C2:00.0 Off | 0 | | N/A 35C P0 51W / 350W | 1MiB / 81559MiB | 0% Default | | | | Disabled | +-----------------------------------------+------------------------+----------------------+ +-----------------------------------------------------------------------------------------+ | Processes:
https://github.com/huggingface/accelerate/issues/3510
closed
[]
2025-04-17T11:12:50Z
2025-05-19T08:46:12Z
null
KihongK
huggingface/diffusers
11,351
Why Wan i2v video processor always float32 datatype?
### Describe the bug I found image = self.video_processor.preprocess(image, height=height, width=width).to(device, dtype=torch.float32) https://github.com/huggingface/diffusers/blob/29d2afbfe2e09a4ee7cc51455e51ce8b8c0e252d/src/diffusers/pipelines/wan/pipeline_wan_i2v.py#L633 in pipeline_wan_i2v.py. Why is the datatype always float32? Maybe it's a bug. ### Reproduction just run ### Logs ```shell ``` ### System Info any platform ### Who can help? _No response_
https://github.com/huggingface/diffusers/issues/11351
closed
[ "bug" ]
2025-04-17T07:00:42Z
2025-05-07T03:48:24Z
2
DamonsJ
huggingface/transformers
37,570
How to streaming output audio of Qwen2.5-omni-7b
None of the Qwen2.5-Omni-7B examples show how to stream the output audio. By passing a streamer I am able to get streaming text, but how can I get streaming audio output?
https://github.com/huggingface/transformers/issues/37570
closed
[]
2025-04-17T04:16:35Z
2025-07-30T08:03:44Z
null
qinxuye
huggingface/diffusers
11,339
How to multi-GPU WAN inference
Hi, I didn't find a multi-GPU inference example in the documentation. Can you give me an example, e.g. for Wan2.1-I2V-14B-720P-Diffusers? I would appreciate some help on that; thank you in advance.
https://github.com/huggingface/diffusers/issues/11339
closed
[ "stale" ]
2025-04-16T10:22:41Z
2025-07-05T21:18:01Z
null
HeathHose
huggingface/trl
3,295
I have 2 GPUs, but training defaults to gpu:0. How do I specify gpu:1 for training?
### Reproduction ```python from trl import ... ``` outputs: ``` Traceback (most recent call last): File "example.py", line 42, in <module> ... ``` ### System Info I have 2 GPUs, but training defaults to gpu:0. How do I specify gpu:1 for training? ### Checklist - [x] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue)) - [x] I have included my system information - [x] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks)) - [x] Any code provided is properly formatted in code blocks (no screenshots, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks)) - [x] Any traceback provided is complete
https://github.com/huggingface/trl/issues/3295
closed
[ "❓ question", "📱 cli" ]
2025-04-15T08:29:26Z
2025-04-24T19:46:37Z
null
Aristomd
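The usual answer is to restrict GPU visibility before any CUDA initialization, either with the `CUDA_VISIBLE_DEVICES` environment variable or, depending on your launcher, something like `accelerate launch --gpu_ids 1 ...`. A minimal sketch:

```python
import os

# Must run before `import torch` (or any CUDA-touching import): the
# process will then see physical gpu:1 as cuda:0.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
```
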
huggingface/lerobot
981
How can I simulate robots without a physical robot? How should I learn robot simulation? Do you have any good recommendations?
How can I simulate robots without a physical robot? How should I learn robot simulation? Do you have any good recommendations? I am a beginner.
https://github.com/huggingface/lerobot/issues/981
closed
[ "question", "simulation" ]
2025-04-15T04:04:33Z
2025-10-17T11:19:34Z
null
harryhu0301
huggingface/diffusers
11,321
Flux ControlNet training README has a bug
### Describe the bug ![Image](https://github.com/user-attachments/assets/bc20df10-80b0-46fa-b013-799a3b1865b4) What are the ControlNet config parameters? The text says num_single_layers = 10, but the code sets num_single_layers=0? ### Reproduction check the README file ### Logs ```shell ``` ### System Info diffusers ==0.33.0 ### Who can help? _No response_
https://github.com/huggingface/diffusers/issues/11321
closed
[ "bug", "stale" ]
2025-04-15T01:30:58Z
2025-10-11T09:58:52Z
14
Johnson-yue
huggingface/agents-course
428
[QUESTION] Current schedule is non-sensical
First, the **best way to get a response fast is to ask the community** in our Discord server: https://www.hf.co/join/discord However, if you prefer you can ask here, please **be specific**. The course page states: > There’s a deadline for the certification process: all the assignments must be finished before May 1st 2025. But the "when will the next units be published" graph doesn't have Unit 4 even being released until "The end of April". And as of today (April 14, 2025) we still have no idea what any of the "use case assignments" are. As it stands, it appears to be impossible to actually complete this course. And no one from Hugging Face seems to be answering, or even acknowledging, any questions on this topic. It would be nice to get some clarity / updates.
https://github.com/huggingface/agents-course/issues/428
closed
[ "question" ]
2025-04-14T18:13:31Z
2025-04-28T06:51:58Z
null
mindcrime
huggingface/lerobot
975
[Question] How to modify model & dataset to accept two input images in observation.image?
Hi, thank you for the great repo! I’ve been going through the first three examples, and now I’d like to explore training a diffusion policy with some customized input. Specifically: My goal: I want each observation.image to contain two images as input (they have the same shape as the original single image). I want the output of the model to remain the same as in the original diffusion policy. My question: Since I’m new to this repo, I’d like to ask for guidance on what needs to be modified to support this: Model architecture: which parts of the model code should I look at or modify to handle a double-image input? Dataset / Data loading: where should I modify the dataset to provide observation.image with two images instead of one? Are there any other components I should be aware of (e.g., pre-processing, normalization, config changes, etc.)? Any advice or pointers to relevant parts of the code would be greatly appreciated! Thanks in advance 🙏
https://github.com/huggingface/lerobot/issues/975
closed
[ "dataset", "stale" ]
2025-04-14T08:35:47Z
2025-11-04T02:30:23Z
null
Keith-Luo
huggingface/candle
2,893
How to build a multi-node inference/training in candle?
Hi team, I'd like to have an example of multi-node inference/training with candle. Where can I find one? Thanks :) -- Klaus
https://github.com/huggingface/candle/issues/2893
open
[]
2025-04-14T08:03:20Z
2025-04-14T08:03:20Z
null
k82cn
huggingface/chat-ui
1,795
Offline Custom Tools
Would it be possible to define/use tools that the LLMs can use in an offline state? "Tools must use Hugging Face Gradio Spaces as we detect the input and output types automatically from the [Gradio API](https://www.gradio.app/guides/sharing-your-app#api-page)." Is there any reason that the tools can't be hosted locally with the same ability for the LLM to use?
https://github.com/huggingface/chat-ui/issues/1795
open
[ "enhancement" ]
2025-04-14T02:41:19Z
2025-04-14T02:41:19Z
0
cr-intezra
huggingface/chat-ui
1,794
Docker Image and Local Install missing file/image/etc upload
I've used the chat-ui-db:latest image as well as cloning the repo, setting up mongo and npm install/run dev, and the UI I get does not have the icons or the ability to upload an image or file. It only has the web search button. This would be for release 0.9.4. Is there something in .env.local that I am missing to enable this feature? Otherwise the chat-ui works as intended; I am able to use different models but wanted to test the ability to use a vision model. ![Image](https://github.com/user-attachments/assets/92c3117b-0f8e-467f-91e7-7ca4f7b95539)
https://github.com/huggingface/chat-ui/issues/1794
open
[]
2025-04-13T19:30:29Z
2025-04-13T19:30:29Z
0
cr-intezra
huggingface/optimum
2,228
Unable to convert an audio-to-audio model.
### Feature request ``` bash optimum-cli export onnx --model microsoft/speecht5_vc speecht5_vc_onnx/ ``` Output: ``` log The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`. 0it [00:00, ?it/s] Traceback (most recent call last): File "/usr/local/bin/optimum-cli", line 8, in <module> sys.exit(main()) ^^^^^^ File "/usr/local/lib/python3.12/dist-packages/optimum/commands/optimum_cli.py", line 208, in main service.run() File "/usr/local/lib/python3.12/dist-packages/optimum/commands/export/onnx.py", line 265, in run main_export( File "/usr/local/lib/python3.12/dist-packages/optimum/exporters/onnx/__main__.py", line 296, in main_export raise ValueError( ValueError: Asked to export a speecht5 model for the task audio-to-audio (auto-detected), but the Optimum ONNX exporter only supports the tasks text-to-audio for speecht5. Please use a supported task. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the task audio-to-audio to be supported in the ONNX export for speecht5. ``` ### Motivation My primary objective is to convert Hugging Face models to TensorRT, but according to the documentation I've reviewed, ONNX must be used as an intermediate step ### Your contribution I don't believe I have the technical capability to implement this feature.
https://github.com/huggingface/optimum/issues/2228
closed
[ "Stale" ]
2025-04-13T00:50:26Z
2025-05-18T02:17:06Z
1
divinerapier
huggingface/lerobot
971
Can different robotic arms share the same dataset and model?
I currently have datasets and models for the Koch, SO100, and ALOHA robotic arms. Is it possible for these three arms to share the same dataset and model? If so, how should this be implemented? If not—given the significant hardware differences—what is the practical value of data sharing in this context? @Cadene
https://github.com/huggingface/lerobot/issues/971
closed
[ "question", "dataset", "stale" ]
2025-04-12T05:03:27Z
2025-10-17T12:06:45Z
null
ZhangWuWei
huggingface/autotrain-advanced
881
Accelerators: Error fetching data. how to troubleshoot
Getting these error messages when trying to train my model using AutoTrain: "Accelerators: Error fetching data" and "Error fetching training status". My data file is a CSV and correctly formatted. What are possible ways to troubleshoot this problem? I'm new to fine-tuning, so I would love any assistance.
https://github.com/huggingface/autotrain-advanced/issues/881
closed
[ "stale" ]
2025-04-11T16:04:12Z
2025-06-02T15:02:09Z
null
innerspacestudio
huggingface/alignment-handbook
215
Use alignment-handbook on Apple Silicon
Hi, is it possible to install and use this tool on Apple Silicon? I am aware that certain dependencies, such as Flash Attention, do not work on Apple Silicon. Has anyone tried and successfully installed this tool without those dependencies?
https://github.com/huggingface/alignment-handbook/issues/215
closed
[]
2025-04-11T01:28:02Z
2025-04-27T01:09:55Z
0
minhquoc0712
huggingface/lerobot
968
How can I simulate robots without a physical robot, and how should I learn?
How can I simulate robots without a physical robot? How should I learn robot simulation? Are there any good recommendations?
https://github.com/huggingface/lerobot/issues/968
closed
[ "question", "simulation" ]
2025-04-10T18:10:47Z
2025-10-08T12:54:19Z
null
harryhu0301
huggingface/diffusers
11,285
value errors in convert to/from diffusers from original stable diffusion
### Describe the bug There's a hard-coded 77-token length somewhere, when it should be using the dimensions of what is actually in the model. I have a diffusers-layout SD1.5 model, with LongCLIP. https://huggingface.co/opendiffusionai/xllsd-alpha0 I can pull it locally, then convert to single-file format, with python convert_diffusers_to_original_stable_diffusion.py \ --use_safetensors \ --model_path $SRCM \ --checkpoint_path $DESTM But then if I try to convert it back, I get size errors for the text encoder not being size 77. I should point out that the model WORKS PROPERLY for diffusion, when loaded in diffusers format, so I don't have some funky broken model here. ### Reproduction from transformers import CLIPTextModel, CLIPTokenizer from diffusers import StableDiffusionPipeline, AutoencoderKL import torch pipe = StableDiffusionPipeline.from_single_file( "XLLsd-phase0.safetensors", torch_dtype=torch.float32, use_safetensors=True) outname = "XLLsd_recreate" pipe.save_pretrained(outname, safe_serialization=False) ### Logs ```shell venv/lib/python3.12/site-packages/diffusers/models/model_loading_utils.py", line 230, in load_model_dict_into_meta raise ValueError( ValueError: Cannot load because text_model.embeddings.position_embedding.weight expected shape torch.Size([77, 768]), but got torch.Size([248, 768]). If you want to instead overwrite randomly initialized weights, please make sure to pass both `low_cpu_mem_usage=False` and `ignore_mismatched_sizes=True`. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example.
``` ### System Info - 🤗 Diffusers version: 0.32.2 - Platform: Linux-6.8.0-55-generic-x86_64-with-glibc2.39 - Running on Google Colab?: No - Python version: 3.12.3 - PyTorch version (GPU?): 2.6.0+cu124 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Huggingface_hub version: 0.29.3 - Transformers version: 4.50.0 - Accelerate version: 1.5.2 - PEFT version: not installed - Bitsandbytes version: 0.45.2 - Safetensors version: 0.5.3 - xFormers version: not installed - Accelerator: NVIDIA GeForce RTX 4090, 24564 MiB ### Who can help? _No response_
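One possible direction for a workaround (an untested sketch, not the diffusers API itself): read the position-embedding shape from the checkpoint and derive `max_position_embeddings` from it instead of relying on the default 77, then feed that into the text-encoder config before loading weights. In real code the value would go into a `transformers.CLIPTextConfig`; here the state dict is mocked as plain shapes so the logic can be shown standalone.

```python
# Sketch (assumption): infer the real sequence length from the checkpoint's
# position-embedding shape, then build a config override from it. The
# "state dict" below is a stand-in mapping parameter names to shapes.
def infer_max_position_embeddings(state_dict, key="text_model.embeddings.position_embedding.weight"):
    """Return the first dimension of the position-embedding tensor."""
    shape = state_dict[key]  # stand-in: shapes stored directly, not tensors
    return shape[0]

# Hypothetical LongCLIP checkpoint: 248 positions instead of CLIP's usual 77.
fake_state_dict = {"text_model.embeddings.position_embedding.weight": (248, 768)}
n_pos = infer_max_position_embeddings(fake_state_dict)
text_config = {"max_position_embeddings": n_pos, "hidden_size": 768}
```

With a real checkpoint the same shape lookup would be done on the loaded safetensors weights, and `text_config` would be passed to the text encoder before `from_single_file` tries to match shapes.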
https://github.com/huggingface/diffusers/issues/11285
open
[ "bug" ]
2025-04-10T17:16:42Z
2025-05-12T15:03:03Z
2
ppbrown
huggingface/diffusers
11,272
What is the difference between `from diffusion import ***` and `from diffusers import ***`?
I have installed diffusers and it runs fine; however, the code fails with "No module named 'diffusion'" when it reaches `from diffusion import ***`. What is the difference between `from diffusion import ***` and `from diffusers import ***`? Do I need to install both, and what is the difference between diffusion and diffusers?
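There is no Hugging Face package named `diffusion`: the library is `diffusers`, so `from diffusion import ...` will fail unless some unrelated third-party package called `diffusion` happens to be installed. A quick stdlib check (sketch) to see which names are importable in the current environment:

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if a top-level module `name` is importable here."""
    return importlib.util.find_spec(name) is not None

# Typically False for "diffusion"; True for "diffusers" once it is installed.
print(has_module("diffusion"), has_module("diffusers"))
```

Running this in the failing environment should make clear that only `diffusers` is present and the `diffusion` import in the code needs to be renamed.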
https://github.com/huggingface/diffusers/issues/11272
closed
[]
2025-04-10T05:11:56Z
2025-04-30T02:11:51Z
null
micklexqg
huggingface/inference-benchmarker
11
How to set the OPENAI_API_KEY?
There is no api_key param for inference-benchmarker. How to set the OPENAI_API_KEY? Thanks~ code there: https://github.com/huggingface/inference-benchmarker/blob/d91a0162bdfe318fe95b9a9bbb53b1bdc39194a9/src/requests.rs#L145C1-L153C36 ```bash root@P8757303A244:/opt/inference-benchmarker# inference-benchmarker -h Usage: inference-benchmarker [OPTIONS] --tokenizer-name <TOKENIZER_NAME> Options: -t, --tokenizer-name <TOKENIZER_NAME> The name of the tokenizer to use [env: TOKENIZER_NAME=] --model-name <MODEL_NAME> The name of the model to use. If not provided, the same name as the tokenizer will be used [env: MODEL_NAME=] -m, --max-vus <MAX_VUS> The maximum number of virtual users to use [env: MAX_VUS=] [default: 128] -d, --duration <DURATION> The duration of each benchmark step [env: DURATION=] [default: 120s] -r, --rates <RATES> A list of rates of requests to send per second (only valid for the ConstantArrivalRate benchmark) [env: RATES=] --num-rates <NUM_RATES> The number of rates to sweep through (only valid for the "sweep" benchmark) The rates will be linearly spaced up to the detected maximum rate [env: NUM_RATES=] [default: 10] --profile <PROFILE> A benchmark profile to use [env: PROFILE=] -b, --benchmark-kind <BENCHMARK_KIND> The kind of benchmark to run (throughput, sweep, optimum) [env: BENCHMARK_KIND=] [default: sweep] -w, --warmup <WARMUP> The duration of the prewarm step ran before the benchmark to warm up the backend (JIT, caches, etc.) [env: WARMUP=] [default: 30s] -u, --url <URL> The URL of the backend to benchmark. Must be compatible with OpenAI Message API [env: URL=] [default: http://localhost:8000] -n, --no-console Disable console UI [env: NO_CONSOLE=] --prompt-options <PROMPT_OPTIONS> Constraints for prompt length. No value means use the input prompt as defined in input dataset. We sample the number of tokens to generate from a normal distribution. Specified as a comma-separated list of key=value pairs. 
* num_tokens: target number of prompt tokens * min_tokens: minimum number of prompt tokens * max_tokens: maximum number of prompt tokens * variance: variance in the number of prompt tokens [env: PROMPT_OPTIONS=] --decode-options <DECODE_OPTIONS> Constraints for the generated text. We sample the number of tokens to generate from a normal distribution. Specified as a comma-separated list of key=value pairs. * num_tokens: target number of generated tokens * min_tokens: minimum number of generated tokens * max_tokens: maximum number of generated tokens * variance: variance in the number of generated tokens [env: DECODE_OPTIONS=] --dataset <DATASET> Hugging Face dataset to use for prompt generation [env: DATASET=] [default: hlarcher/inference-benchmarker] --dataset-file <DATASET_FILE> File to use in the Dataset [env: DATASET_FILE=] [default: share_gpt_filtered_small.json] --extra-meta <EXTRA_META> Extra metadata to include in the benchmark results file, comma-separated key-value pairs. It can be, for example, used to include information about the configuration of the benched server. Example: --extra-meta "key1=value1,key2=value2" [env: EXTRA_META=] --run-id <RUN_ID> [env: RUN_ID=] -h, --help Print help (see more with '--help') -V, --version Print version ```
https://github.com/huggingface/inference-benchmarker/issues/11
closed
[]
2025-04-10T04:36:11Z
2025-04-25T13:13:18Z
null
handsome-chips
huggingface/transformers
37,408
How to fix the error when converting a Qwen ONNX model to a TensorRT model?
### **1. The transformers' Qwen ONNX model has been exported successfully.** ### **2. Converting the ONNX model to a TensorRT model with trtexec failed.** **error info** ``` [04/10/2025-11:04:52] [E] Error[3]: IExecutionContext::setInputShape: Error Code 3: API Usage Error (Parameter check failed, condition: engineDims.d[i] == dims.d[i]. Static dimension mismatch while setting input shape for key_cache.1. Set dimensions are [7,8,32,128]. Expected dimensions are [7,8,1,128].) [04/10/2025-11:04:52] [E] The engine was built with static shapes for input tensor key_cache.1 but the provided shapes do not match the static shapes! [04/10/2025-11:04:52] [E] Inference set up failed ``` ### **Because the Transformers Qwen implementation uses the DynamicCache class to handle the KV cache, the error should be attributed to DynamicCache.** ### **ONNX model check OK** ``` The model is well-formed and valid! =======================Model1 inputs: x_s [1, 'seq_len', 1024] attn_mask [1, 'seq_len', 'seq_len'] key_cache.1 [7, 8, 'seq_len', 128] value_cache.1 [7, 8, 'seq_len', 128] =======================Model1 outputs: y_pred [1, 'seq_len', 1024] key_cache [7, 8, 'seq_len', 128] value_cache [7, 8, 'seq_len', 128] ``` **export forward** ``` def injected_forward( self, xs: torch.Tensor, att_mask: torch.Tensor = torch.ones((0, 0, 0), dtype=torch.bool), key_cache: torch.Tensor = torch.zeros((0, 0, 0, 0), dtype=torch.float32), value_cache: torch.Tensor = torch.zeros((0, 0, 0, 0), dtype=torch.float32) ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: att_mask = ~att_mask.unsqueeze(1) * torch.finfo(xs.dtype).min past_key_values = DynamicCache(self.config.num_hidden_layers) for i in torch.arange(self.config.num_hidden_layers): past_key_values.key_cache[i] = key_cache[i].unsqueeze(0) past_key_values.value_cache[i] = value_cache[i].unsqueeze(0) past_seen_tokens = past_key_values.get_seq_length() cache_position = torch.arange(past_seen_tokens, past_seen_tokens + xs.shape[1], device=xs.device) position_ids = 
cache_position.unsqueeze(0) hidden_states = xs for decoder_layer in self.layers[: self.config.num_hidden_layers]: layer_outputs = decoder_layer( hidden_states, attention_mask=att_mask, position_ids=position_ids, past_key_value=past_key_values, output_attentions=False, use_cache=True, cache_position=cache_position, ) hidden_states = layer_outputs[0] xs = self.norm(hidden_states) new_key_cache = torch.cat(past_key_values.key_cache, dim=0) new_value_cache = torch.cat(past_key_values.value_cache, dim=0) return xs, new_key_cache, new_value_cache ```
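The error message suggests the engine was built with static shapes (the `seq_len` axis frozen at 1) while inference feeds `seq_len=32`. One possible fix, sketched below, is to build the engine with a dynamic-shape optimization profile via trtexec's `--minShapes`/`--optShapes`/`--maxShapes` flags. The tensor names and dimensions mirror the ONNX inputs listed above, but the file names and the chosen 1..1024 sequence range are assumptions:

```shell
# Build with an optimization profile so seq_len can vary at runtime (sketch).
# Input names match the ONNX model check above; the 1..1024 range is assumed.
trtexec --onnx=qwen.onnx \
  --minShapes=x_s:1x1x1024,attn_mask:1x1x1,key_cache.1:7x8x1x128,value_cache.1:7x8x1x128 \
  --optShapes=x_s:1x64x1024,attn_mask:1x64x64,key_cache.1:7x8x64x128,value_cache.1:7x8x64x128 \
  --maxShapes=x_s:1x1024x1024,attn_mask:1x1024x1024,key_cache.1:7x8x1024x128,value_cache.1:7x8x1024x128 \
  --saveEngine=qwen.engine
```

With a profile like this, `setInputShape` should accept any `seq_len` inside the declared min/max range instead of only the single static value baked into the engine.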
https://github.com/huggingface/transformers/issues/37408
closed
[]
2025-04-10T04:08:47Z
2025-06-28T08:03:06Z
null
dearwind153
huggingface/lerobot
964
RuntimeError: Could not load libtorchcodec during lerobot/scripts/train.py script
### System Info ```Shell - `lerobot` version: 0.1.0 - Platform: Linux-6.8.0-57-generic-x86_64-with-glibc2.35 - Python version: 3.10.13 - Huggingface_hub version: 0.29.3 - Dataset version: 3.4.1 - Numpy version: 1.26.4 - PyTorch version (GPU?): 2.5.1+cu124 (True) - Cuda version: 12040 Additionally: ffmpeg version : 7.1.1 TorchCodec version : 0.2.1 ``` ### Information - [ ] One of the scripts in the examples/ folder of LeRobot - [ ] My own task or dataset (give details below) ### Reproduction Install LeRobot from the main documentation as follows: conda create -n lerobot python=3.10 -y conda activate lerobot git clone https://github.com/huggingface/lerobot.git ~/lerobot pip install --no-binary=av -e . pip install torchvision==0.20.1 conda install -c conda-forge 'ffmpeg>=7.0' -y After collecting a dataset, run the `lerobot/scripts/train.py` script ### Expected behavior Hello all! I am getting started with the lerobot so100 arm and have had a few issues. The first was the same as issue #883 when running the `control_robot.py` script, which I solved (or bypassed) by following [remi cadene's response](https://github.com/huggingface/lerobot/issues/679#issuecomment-2737292192 ) to do `pip install torchvision==0.20.1` and also `conda install -c conda-forge 'ffmpeg>=7.0' -y` after doing `pip install --no-binary=av -e .`. This allowed me to run the `control_robot.py` script successfully. 
However, then I tried to collect a dataset and run a training with the `lerobot/scripts/train.py` script and I encountered the following issue : ``` from torchcodec.decoders._core.video_decoder_ops import ( File "/home/moonshot/miniconda3/envs/lerobot/lib/python3.10/site-packages/torchcodec/decoders/_core/video_decoder_ops.py", line 59, in <module> load_torchcodec_extension() File "/home/moonshot/miniconda3/envs/lerobot/lib/python3.10/site-packages/torchcodec/decoders/_core/video_decoder_ops.py", line 44, in load_torchcodec_extension raise RuntimeError( RuntimeError: Could not load libtorchcodec. Likely causes: 1. FFmpeg is not properly installed in your environment. We support versions 4, 5, 6 and 7. 2. The PyTorch version (2.5.1+cu124) is not compatible with this version of TorchCodec. Refer to the version compatibility table: https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec. 3. Another runtime dependency; see exceptions below. The following exceptions were raised as we tried to load libtorchcodec: [start of libtorchcodec loading traceback] /home/moonshot/miniconda3/envs/lerobot/lib/python3.10/site-packages/torchcodec/libtorchcodec7.so: undefined symbol: _ZNK3c1011StorageImpl27throw_data_ptr_access_errorEv libavutil.so.58: cannot open shared object file: No such file or directory libavutil.so.57: cannot open shared object file: No such file or directory /home/moonshot/miniconda3/envs/lerobot/lib/python3.10/site-packages/torchcodec/libtorchcodec4.so: undefined symbol: _ZNK3c1011StorageImpl27throw_data_ptr_access_errorEv [end of libtorchcodec loading traceback]. ``` It seems that I have some issues with the `torchcodec`and `ffmpeg` versions not being compatible. 
Checking their versions gives me: ``` ffmpeg version 7.1.1 Copyright (c) 2000-2025 the FFmpeg developers built with gcc 13.3.0 (conda-forge gcc 13.3.0-2) configuration: --prefix=/home/moonshot/miniconda3/envs/lerobot --cc=/home/conda/feedstock_root/build_artifacts/ffmpeg_1741820412024/_build_env/bin/x86_64-conda-linux-gnu-cc --cxx=/home/conda/feedstock_root/build_artifacts/ffmpeg_1741820412024/_build_env/bin/x86_64-conda-linux-gnu-c++ --nm=/home/conda/feedstock_root/build_artifacts/ffmpeg_1741820412024/_build_env/bin/x86_64-conda-linux-gnu-nm --ar=/home/conda/feedstock_root/build_artifacts/ffmpeg_1741820412024/_build_env/bin/x86_64-conda-linux-gnu-ar --disable-doc --enable-openssl --enable-demuxer=dash --enable-hardcoded-tables --enable-libfreetype --enable-libharfbuzz --enable-libfontconfig --enable-libopenh264 --enable-libdav1d --disable-gnutls --enable-libmp3lame --enable-libvpx --enable-libass --enable-pthreads --enable-alsa --enable-libpulse --enable-vaapi --enable-libopenvino --enable-gpl --enable-libx264 --enable-libx265 --enable-libaom --enable-libsvtav1 --enable-libxml2 --enable-pic --enable-shared --disable-static --enable-version3 --enable-zlib --enable-libvorbis --enable-libopus --enable-librsvg --enable-ffplay --pkg-config=/home/conda/feedstock_root/build_artifacts/ffmpeg_1741820412024/_build_env/bin/pkg-config libavutil 59. 39.100 / 59. 39.100 libavcodec 61. 19.101 / 61. 19.101 libavformat 61. 7.100 / 61. 7.100 libavdevice 61. 3.100 / 61. 3.100 libavfilter 10. 4.100 / 10. 4.100 libswscale 8. 3.100 / 8. 3.100 libswresample 5. 3.100 / 5. 3.100 libpostproc 58. 3.100 / 58. 3.100 ``` And `TorchCodec` version 0.2.1. Could anyone suggest the right v
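The traceback points at cause 2 in the error message: a torch/torchcodec pairing mismatch. As far as I can tell from the compatibility table the error links to, torchcodec 0.1.x targets torch 2.5.x and torchcodec 0.2.x targets torch 2.6.x, which would make the reported combination (torch 2.5.1 + torchcodec 0.2.1) incompatible; treat those exact pairings as assumptions and verify against the linked table. A small pre-flight check could look like this:

```python
# Sketch: minimal pre-flight check of the torch <-> torchcodec pairing.
# The table below is an assumption transcribed from the torchcodec README's
# compatibility table -- verify against the linked table before relying on it.
COMPAT = {
    "0.1": "2.5",  # torchcodec 0.1.x expects torch 2.5.x (assumed)
    "0.2": "2.6",  # torchcodec 0.2.x expects torch 2.6.x (assumed)
}

def minor(version: str) -> str:
    """Reduce a version string to major.minor, e.g. '2.5.1+cu124' -> '2.5'."""
    return ".".join(version.split("+")[0].split(".")[:2])

def compatible(torchcodec_version: str, torch_version: str) -> bool:
    expected = COMPAT.get(minor(torchcodec_version))
    return expected == minor(torch_version)

# The reporter's combination: torchcodec 0.2.1 with torch 2.5.1+cu124.
print(compatible("0.2.1", "2.5.1+cu124"))  # mismatch -> False
```

If the check comes out False, either pinning `torchcodec==0.1.*` alongside torch 2.5.1 or upgrading torch to a 2.6.x build should align the pair.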
https://github.com/huggingface/lerobot/issues/964
closed
[ "question" ]
2025-04-09T14:25:38Z
2025-04-15T13:32:24Z
null
shrutichakraborty
huggingface/transformers
37,390
How to reduce the original model's tokenizer vocabulary
### Feature request I am working on model distillation. I am currently using the nllb-distilled-600M model, but the parameters of this model are still too large, and the vocabulary supports more than 100 languages. My use case is single-language translation, such as English to Hebrew. Therefore, I need to reduce the redundant vocabulary of the original model and only keep the English and Hebrew vocabulary. I noticed that transformers do not use the sentencepiece.bpe.model file, and I don't want to retrain a tokenizer, because the retrained tokenizer would be inconsistent with the original tokenizer's output, which would prevent the subsequent model weight migration and model distillation from being carried out. Therefore, my idea is to quickly replace the tokenizer.json and tokenizer_config.json files in the original model, and then migrate the model weights at the model level to get a pruned model. What I am doing now is to load the original model tokenizer, tokenize the corpus I prepared, count the tokens that occur, obtain a reduced vocabulary, and change the corresponding json file. Is there any better strategy to quickly replace the tokenizer vocabulary? 
![Image](https://github.com/user-attachments/assets/0433f4df-766d-4804-a752-e02a104d3cfa) ### Motivation Quickly modify the model vocabulary for a better application. ### Your contribution

```python
def modify_tokenizer():
    selected_ids = set()  # added: the original snippet relied on an outer variable
    for sentences in tqdm.tqdm(range(100, len(en_corpus), 100)):
        enc = teacher_tokenizer(en_corpus[sentences - 100:sentences], add_special_tokens=False,
                                return_attention_mask=False, return_token_type_ids=False)
        for ids in enc['input_ids']:
            selected_ids.update(ids)
    print('all english tokens num is', len(selected_ids))
    for sentences in tqdm.tqdm(range(100, len(he_corpus), 100)):
        enc = teacher_tokenizer(he_corpus[sentences - 100:sentences], add_special_tokens=False,
                                return_attention_mask=False, return_token_type_ids=False)
        for ids in enc['input_ids']:
            selected_ids.update(ids)
    print('all english+Hebrew tokens num is', len(selected_ids))
    for tok in teacher_tokenizer.all_special_tokens:
        selected_ids.add(teacher_tokenizer.convert_tokens_to_ids(tok))
    print('all english+Hebrew+special tokens num is', len(selected_ids))
    # look up the corresponding tokens from the original vocab
    orig_vocab = teacher_tokenizer.get_vocab()
    new_tokens = []
    for tok, idx in sorted(orig_vocab.items(), key=lambda kv: kv[1]):
        if idx in selected_ids:
            new_tokens.append(tok)
    # write out the new vocab.json (Hugging Face format)
    new_vocab = {tok: i for i, tok in enumerate(new_tokens)}
    return new_vocab  # added: the original left new_vocab in local scope

# modify the original tokenizer and tokenizer_config
teacher_tokenizer_path = '/workspace/nllb-200-distilled-600M/tokenizer.json'
teacher_tokenizer_config_path = '/workspace/nllb-200-distilled-600M/tokenizer_config.json'
student_tokenizer_path = '/workspace/distilled_model_test/tokenizer.json'
student_tokenizer_config_path = '/workspace/distilled_model_test/tokenizer_config.json'

def _read_json(path):
    with open(path, "r", encoding="utf-8") as f:
        data = json.load(f)
    return data

def _write_json(path, data):
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False, indent=2)

# change tokenizer
new_vocab = modify_tokenizer()
student_tokenizer_data = _read_json(teacher_tokenizer_path)
student_tokenizer_data['model']['vocab'] = new_vocab
for single_added_token in student_tokenizer_data['added_tokens']:
    single_added_token['id'] = new_vocab[single_added_token['content']]
# change merges
new_merges = []
for merge_pair in student_tokenizer_data['model']['merges']:
    _temp_merge = merge_pair[0] + merge_pair[1]
    if _temp_merge in new_vocab.keys():
        new_merges.append(merge_pair)
student_tokenizer_data['model']['merges'] = new_merges
_write_json(student_tokenizer_path, student_tokenizer_data)
# change tokenizer_config
```
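The contribution snippet above stops at rewriting the json files; the complementary step it leads into is remapping the model weights, where the renumbered vocabulary determines which embedding rows to gather. A minimal pure-Python sketch of that id remapping (token names and sizes are made up for illustration):

```python
# Sketch: renumber a pruned vocabulary and build the old-id -> new-id map
# needed afterwards to gather the matching embedding rows.
orig_vocab = {"<s>": 0, "</s>": 2, "hello": 15, "שלום": 880, "world": 42}
selected_ids = {0, 2, 15, 880}  # ids observed in the EN/HE corpus (hypothetical)

# Keep original id order so merges and special tokens stay stable.
new_tokens = [tok for tok, idx in sorted(orig_vocab.items(), key=lambda kv: kv[1])
              if idx in selected_ids]
new_vocab = {tok: i for i, tok in enumerate(new_tokens)}
old_to_new = {orig_vocab[tok]: i for i, tok in enumerate(new_tokens)}

# old_to_new says which teacher embedding row each student row copies:
# student_emb[new] = teacher_emb[old] for old, new in old_to_new.items()
```

Keeping the remap as an explicit dict makes the weight migration a single gather over the teacher's embedding (and output projection) matrices, which is why preserving the original id order matters.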
https://github.com/huggingface/transformers/issues/37390
open
[ "Feature request" ]
2025-04-09T10:45:56Z
2025-04-09T10:53:07Z
null
masterwang22327
huggingface/datasets
7,506
HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access Fineweb-10BT on 4 A100 GPUs using SLURM
### Describe the bug I am trying to run some finetunings on 4 A100 GPUs with SLURM, using the axolotl training framework (which in turn uses Hugging Face's Trainer and Accelerate) on [Fineweb-10BT](https://huggingface.co/datasets/HuggingFaceFW/fineweb), but I end up running into a 429 Client Error: Too Many Requests for URL error when I call next(dataloader_iter). Funnily enough, I can run a test fine-tuning (for just 200 training steps) on 1 A100 GPU using SLURM. Is there any rate limiter set for querying the dataset? I could run the fine-tuning with the same settings (4 A100 GPUs in SLURM) last month. ### Steps to reproduce the bug You would need a server with SLURM installed 1. Create a conda environment 1.1 conda create -n example_env -c conda-forge gxx=11 python=3.10 1.2 conda activate example_env 1.3 pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124 1.4 conda install nvidia/label/cuda-12.4.0::cuda-toolkit 1.5 Download flash_attn-2.7.4.post1+cu12torch2.5cxx11abiFALSE-cp310-cp310-linux_x86_64.whl 1.6 pip3 install packaging 1.7 pip3 install ninja 1.8 pip3 install mlflow 1.9 Clone https://github.com/calvintanama/axolotl.git 1.10 `cd` to `axolotl` 1.11 pip3 install -e '.[deepspeed]' 2. Run the training 2.1. Create a folder called `config_run` in the axolotl directory 2.2. Copy `config/phi3_pruned_extra_pretrain_22_29_bottleneck_residual_8_a100_4.yaml` to `config_run` 2.3. Change the yaml file in `config_run` accordingly 2.4. Change the directory and conda environment name in `jobs/train_phi3_22_29_bottleneck_residual_8_a100_4_temp.sh` 2.5. 
`jobs/train_phi3_22_29_bottleneck_residual_8_a100_4_temp.sh` ### Expected behavior This should not cause any error, but gotten ``` File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/accelerate/data_loader.py", line 552, in __iter__ [rank3]: current_batch = next(dataloader_iter) [rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 701, in __next__ [rank3]: data = self._next_data() [rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 757, in _next_data [rank3]: data = self._dataset_fetcher.fetch(index) # may raise StopIteration [rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 33, in fetch [rank3]: data.append(next(self.dataset_iter)) [rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/accelerate/data_loader.py", line 338, in __iter__ [rank3]: for element in self.dataset: [rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 2266, in __iter__ [rank3]: for key, example in ex_iterable: [rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1866, in __iter__ [rank3]: for key, example in self.ex_iterable: [rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1084, in __iter__ [rank3]: yield from self._iter() [rank3]: File 
"/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1263, in _iter [rank3]: for key, transformed_example in outputs: [rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1258, in <genexpr> [rank3]: outputs = ( [rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1244, in iter_outputs [rank3]: for i, key_example in inputs_iterator: [rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1106, in iter_batched_inputs [rank3]: for key, example in iterator: [rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1866, in __iter__ [rank3]: for key, example in self.ex_iterable: [rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1535, in __iter__ [rank3]: for x in self.ex_iterable: [rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datase
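A 429 usually means the four ranks (plus any dataloader workers) are hitting the Hub concurrently and tripping its rate limit. Besides pre-downloading the dataset to shared storage, Hub calls can be wrapped in jittered exponential backoff; the sketch below is generic stdlib code (not the `datasets` API), with a stand-in exception type instead of the real HfHubHTTPError:

```python
import random
import time

def with_backoff(fn, retries=5, base=1.0, sleep=time.sleep):
    """Call fn(), retrying with jittered exponential backoff on failure."""
    for attempt in range(retries):
        try:
            return fn()
        except RuntimeError:  # stand-in for HfHubHTTPError with status 429
            if attempt == retries - 1:
                raise
            sleep(base * 2 ** attempt + random.random())

# Demo: fail twice, then succeed (sleep patched out so the demo is instant).
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

result = with_backoff(flaky, sleep=lambda s: None)
```

The jitter term matters here: without it, all four ranks would retry at the same instants and keep colliding with the rate limiter.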
https://github.com/huggingface/datasets/issues/7506
open
[]
2025-04-09T06:32:04Z
2025-06-29T06:04:59Z
2
calvintanama
huggingface/lerobot
960
pi0-finetune-performance
I have been fine-tuning the provided pi0-base model on my dataset using LeRobot. After training for 100,000 steps, I found that the model performs well on tasks that appeared in my dataset, but its performance on unseen tasks is very poor. It seems to lack the generalization ability of a VLA model. Is this phenomenon normal? Are there any strategies to improve this situation?
https://github.com/huggingface/lerobot/issues/960
closed
[ "question", "policies" ]
2025-04-09T01:21:12Z
2025-10-08T08:43:22Z
null
yanghb1
huggingface/lerobot
956
pi0 multi-GPU training
If I have multiple 4090s, how should I modify things to train pi0? With only one 4090 it just errors out. ![Image](https://github.com/user-attachments/assets/5f1900f2-6d0a-4e05-be99-81587f0bb22d)
https://github.com/huggingface/lerobot/issues/956
closed
[ "question" ]
2025-04-08T13:06:27Z
2025-11-20T03:07:56Z
null
ximiluuuu
huggingface/transformers
37,364
How to find a specific function's doc in the transformers documentation?
### Feature request Better UX for the docs ### Motivation The search and UI layout make it very hard to find a function's doc, especially when there are so many function docs on one web page and you just cannot find what you want with in-page search. ### Your contribution no, right now
https://github.com/huggingface/transformers/issues/37364
open
[ "Feature request" ]
2025-04-08T10:48:04Z
2025-09-15T19:16:35Z
null
habaohaba
huggingface/open-r1
586
What is next for this project?
https://github.com/huggingface/open-r1/issues/586
open
[]
2025-04-07T21:29:54Z
2025-04-07T21:29:54Z
null
Mnaik2
huggingface/lerobot
949
Optional deps when using LeRobot as an optional package
Hi, we are working on enabling LeRobot dataset generation in [IsaacLab](https://github.com/isaac-sim/IsaacLab), such that developers could create data with the IsaacLab data generation workflow and use it in their robot learning models. The asks are, 1. Is there any scheduled release, such that downstream devs could have a stable codebase to integrate LeRobot into their applications? 2. Can we move some deps to optional wrt the core code, if training/eval is not expected? For example, we only need LeRobot dataset-related functions, so the Gymnasium dependency is not needed. You only need the Gymnasium dependency if you want to use the environment in eval mode during training or deployment. I hope this could expand the user base further for LeRobot dataset generation and for training/eval with broader model families.
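On ask 2, the usual packaging answer is optional extras: keep the dataset code's dependencies in the core install and move simulation/training deps behind an extra installed with `pip install "lerobot[sim]"`. A hypothetical pyproject fragment (the dependency names, pins, and extra name are illustrative, not LeRobot's actual layout):

```toml
[project]
name = "lerobot"
dependencies = [
    # core: enough for LeRobotDataset read/write
    "datasets",
    "torch",
]

[project.optional-dependencies]
# only needed for training/eval with simulated environments
sim = ["gymnasium>=0.29"]
```

With a split like this, downstream consumers such as IsaacLab could depend on the core package for dataset generation without pulling in Gymnasium at all.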
https://github.com/huggingface/lerobot/issues/949
closed
[ "question", "dataset", "simulation", "stale" ]
2025-04-07T16:55:48Z
2025-10-21T02:29:27Z
null
xyao-nv
huggingface/datasets
7,502
`load_dataset` of size 40GB creates a cache of >720GB
Hi there, I am trying to load a dataset from the Hugging Face Hub and split it into train and validation splits. Somehow, when I try to do it with `load_dataset`, it exhausts my disk quota. So, I tried manually downloading the parquet files from the hub and loading them as follows: ```python ds = DatasetDict( { "train": load_dataset( "parquet", data_dir=f"{local_dir}/{tok}", cache_dir=cache_dir, num_proc=min(12, os.cpu_count()), # type: ignore split=ReadInstruction("train", from_=0, to=NUM_TRAIN, unit="abs"), # type: ignore ), "validation": load_dataset( "parquet", data_dir=f"{local_dir}/{tok}", cache_dir=cache_dir, num_proc=min(12, os.cpu_count()), # type: ignore split=ReadInstruction("train", from_=NUM_TRAIN, unit="abs"), # type: ignore ) } ) ``` which still strangely creates 720GB of cache. In addition, if I remove the raw parquet file folder (`f"{local_dir}/{tok}"` in this example), I am not able to load anything. So, I am left wondering what this cache is doing. Am I missing something? Is there a solution to this problem? Thanks a lot in advance for your help! A related issue: https://github.com/huggingface/transformers/issues/10204#issue-809007443. --- Python: 3.11.11 datasets: 3.5.0
https://github.com/huggingface/datasets/issues/7502
closed
[]
2025-04-07T16:52:34Z
2025-04-15T15:22:12Z
2
pietrolesci
huggingface/trl
3,254
How to get completion_length?
I noticed that during GRPO training, `completion_length` is recorded. However, I found that it’s not simply obtained by `len(completion)`. How is this calculated—by tokens? Is it possible for me to access the `completion_length` for each sample?
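As far as I can tell it is measured in tokens, not characters: the logged value is the mean number of generated tokens per completion in the batch (if I read GRPOTrainer correctly, it comes from summing the completion attention mask before averaging, so the per-sample counts do exist internally before the mean is taken). A stand-in sketch, with whitespace splitting in place of the real tokenizer:

```python
# Sketch: completion_length as the mean token count per completion.
# str.split stands in for the model's actual tokenizer.
def token_lengths(completions, tokenize=str.split):
    """Per-sample token counts -- what gets averaged into completion_length."""
    return [len(tokenize(c)) for c in completions]

completions = ["the answer is 4", "I think the answer is 4 because 2+2=4"]
per_sample = token_lengths(completions)
completion_length = sum(per_sample) / len(per_sample)
```

To recover per-sample values in a real run, applying the same idea with the trainer's tokenizer to the generated completions should reproduce what feeds the logged mean.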
https://github.com/huggingface/trl/issues/3254
open
[ "❓ question", "🏋 GRPO" ]
2025-04-07T15:02:04Z
2025-04-11T03:10:20Z
null
Tuziking
huggingface/diffusers
11,220
Unconditional image generation documentation page not working as expected
### Describe the bug When consulting the documentation for [unconditional image generation](https://huggingface.co/docs/diffusers/using-diffusers/unconditional_image_generation), the last embedded page seems to contain an error that blocks it from being shown (see image below). This is @stevhliu's model stored in [this](https://huggingface.co/spaces/stevhliu/unconditional-image-generation) huggingface space. This space is also down in HuggingFace. <img width="1511" alt="Image" src="https://github.com/user-attachments/assets/4b33be09-97b1-4f76-bd23-27c905616ee8" /> ### Reproduction - Go to https://huggingface.co/docs/diffusers/using-diffusers/unconditional_image_generation or https://huggingface.co/spaces/stevhliu/unconditional-image-generation, you will see that the unconditional image generation part is not loading ### Logs ```shell ``` ### System Info Not relevant as it is documentation, not system related ### Who can help? @stevhliu
https://github.com/huggingface/diffusers/issues/11220
closed
[ "bug" ]
2025-04-07T10:32:45Z
2025-04-08T08:47:18Z
2
alvaro-mazcu