Dataset schema: repo (string, 147 classes) · number (int64, 1 to 172k) · title (string, 2 to 476 chars) · body (string, 0 to 5k chars) · url (string, 39 to 70 chars) · state (string, 2 classes) · labels (list, 0 to 9 items) · created_at (timestamp[ns, tz=UTC], 2017-01-18 18:50:08 to 2026-01-06 07:33:18) · updated_at (timestamp[ns, tz=UTC], 2017-01-18 19:20:07 to 2026-01-06 08:03:39) · comments (int64, 0 to 58, nullable) · user (string, 2 to 28 chars)
huggingface/transformers.js
1,130
Tips on Converting Newer Models
### Question 🎉🎉Happy New Year to the incredible Transformers.js team!🎉🎉 I am working on converting new (text-generation) models for use with Transformers.js. Here's what I've tried since last week: * the Python converter script * optimum-cli ONNX export * the onnx-community/convert-to-onnx Space The problem I encounter as I move to newer models is that the converter looks for specific files. Models laid out like the first screenshot below are easy to convert, both locally and online: ![image](https://github.com/user-attachments/assets/bdb031eb-c87a-4e1a-b895-7608b94699d0) while some newer models ship files like the second screenshot below, which I couldn't convert: ![image](https://github.com/user-attachments/assets/01058d6a-f232-4ed5-87fe-2df82b346025) My PC specs are not the problem at all; I may be missing some steps, rules, or understanding of model conversion. I'd greatly appreciate any tips, best practices, or resources you could share to streamline the process and ensure compatibility. Much appreciated!
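For reference, a hedged sketch of the programmatic equivalent of `optimum-cli export onnx`; the model id here is a placeholder, and genuinely new architectures still need an exporter config in optimum before any of the three tools above can convert them:

```python
# Minimal sketch, assuming the target architecture already has an ONNX
# exporter config in optimum; the model id is a placeholder.
from optimum.exporters.onnx import main_export

main_export(
    model_name_or_path="your-org/your-text-generation-model",  # hypothetical id
    output="onnx_output",
    task="text-generation-with-past",  # export with past key values for decoding
)
```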
https://github.com/huggingface/transformers.js/issues/1130
open
[ "question" ]
2025-01-01T05:32:09Z
2025-01-01T05:32:09Z
null
josephencila
huggingface/lerobot
606
Dataset does not support length of feature shape > 1
Hi, Thank you for this excellent project! I am trying to create a custom dataset with additional sensory information (such as tactile data), which is an Array3D tensor, but found that when I use the approach mentioned in #547, there is no support for adding custom tensor-like observations to the episode buffer. Specifically, there are assertions that require the feature shape to be at most a 1D array [here](https://github.com/huggingface/lerobot/blob/59e275743499c5811a9f651a8947e8f881c4058c/lerobot/common/datasets/utils.py#L274)
https://github.com/huggingface/lerobot/issues/606
closed
[ "question", "dataset", "stale" ]
2024-12-31T21:08:26Z
2025-10-19T02:32:29Z
null
akashsharma02
huggingface/finetrainers
169
How to build a dataset for finetuning CogVideoX I2V 1.5
Hi, I want to finetune the CogVideoX I2V 1.5 (5B) model. I have a set of videos that I want to use, but first I need to preprocess them so they meet the requirements of the model. Do I have to make sure that my fine-tuning dataset meets the generation properties of the model? That is, in the case of CogVideoX 1.5, the videos should be: - Min(W, H) = 768 - 768 ≤ Max(W, H) ≤ 1360 - Max(W, H) % 16 = 0 - Video Length: 5 seconds or 10 seconds - Frame Rate: 16 frames / second Do I need to make sure that all my fine-tuning videos follow those guidelines? A quick sanity check is sketched below.
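A minimal check of the constraints listed above, assuming plain integer dimensions; the function name is illustrative only:

```python
# Sketch: verify one clip against the CogVideoX 1.5 constraints quoted above.
def meets_cogvideox15_constraints(width: int, height: int, seconds: float, fps: int) -> bool:
    lo, hi = min(width, height), max(width, height)
    return (
        lo == 768                 # Min(W, H) = 768
        and 768 <= hi <= 1360     # 768 <= Max(W, H) <= 1360
        and hi % 16 == 0          # Max(W, H) % 16 = 0
        and seconds in (5, 10)    # 5 s or 10 s
        and fps == 16             # 16 frames / second
    )

print(meets_cogvideox15_constraints(1360, 768, 5, 16))  # True
```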
https://github.com/huggingface/finetrainers/issues/169
closed
[]
2024-12-31T19:55:00Z
2025-03-08T23:43:31Z
null
royvelich
huggingface/diffusers
10,416
Euler flow matching scheduler is missing documentation for parameters
![image](https://github.com/user-attachments/assets/ecd16c04-8f31-42fc-9f30-e660cf4f5853) I think there are some undocumented parameters here.
https://github.com/huggingface/diffusers/issues/10416
closed
[]
2024-12-31T13:15:35Z
2025-01-09T18:54:41Z
4
bghira
huggingface/chat-ui
1,636
Any way to pass the authorization header from OAuth2 down to a custom endpoint?
## Describe your feature request It would be nice to be able to pass the authorization header from OAuth2 to a custom endpoint. I have an endpoint that mimics TGI, and I would like to authenticate every request in order to protect the API. ## Implementation idea Just pass an authorization header from the frontend to the BFF and pass it further to the endpoint. It could be a custom header if that would conflict with the current authorization configuration for endpoints. The current configuration allows passing a static auth header, but I want to be able to pass the JWT of the authenticated user.
https://github.com/huggingface/chat-ui/issues/1636
open
[ "enhancement" ]
2024-12-31T13:00:22Z
2024-12-31T13:00:22Z
0
corte
huggingface/diffusers
10,415
[Pipelines] Add AttentiveEraser
### Model/Pipeline/Scheduler description I've worked on a project called AttentiveEraser, which is a tuning-free method for object removal in images using diffusion models. The code for this project is built upon modifications to existing Diffusers pipelines, so it should be relatively straightforward to integrate it into the library. ## About AttentiveEraser AttentiveEraser enhances object removal capabilities by using self-attention redirection guidance. It supports different levels of mask precision (semantic segmentation, bounding boxes, and hand-drawn masks) and effectively fills in removed regions by leveraging the generative power of diffusion models. ## Help Needed As someone new to this process, I'm unsure how to properly package this into a Diffusers pipeline. Is anyone interested in collaborating on this integration or able to provide guidance on the steps I should take next? I'd love to contribute this feature to the community, and the relevant code is already available! Code: <https://github.com/Anonym0u3/AttentiveEraser> Looking forward to any suggestions or assistance! ![fenmian](https://github.com/user-attachments/assets/6c21a68a-be14-437c-89db-a2059557b7a9) ### Open source status - [X] The model implementation is available. - [X] The model weights are available (Only relevant if addition is not a scheduler). ### Provide useful links for the implementation _No response_
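For anyone in a similar position, the usual route is a community pipeline: subclass `DiffusionPipeline`, register the components, and implement `__call__`. A minimal skeleton follows; the class and method bodies are illustrative placeholders, not the actual AttentiveEraser code:

```python
import torch
from diffusers import DiffusionPipeline

class AttentiveEraserPipeline(DiffusionPipeline):  # illustrative skeleton only
    def __init__(self, vae, unet, scheduler):
        super().__init__()
        # register_modules makes save_pretrained / from_pretrained work
        self.register_modules(vae=vae, unet=unet, scheduler=scheduler)

    @torch.no_grad()
    def __call__(self, image, mask, num_inference_steps: int = 50):
        # 1. encode the image to latents, 2. run the denoising loop with
        #    self-attention redirection guidance, 3. decode and return
        raise NotImplementedError("fill in the AttentiveEraser logic here")
```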
https://github.com/huggingface/diffusers/issues/10415
closed
[ "stale" ]
2024-12-31T07:44:48Z
2025-02-05T15:54:43Z
7
Anonym0u3
huggingface/diffusers
10,414
[zh] Translating docs to Chinese
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the Chinese-speaking community 🌐. Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/diffusers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about Diffusers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `zh` inside the [source folder](https://github.com/huggingface/diffusers/tree/main/docs/source). * Register your translation in `zh/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/diffusers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63). Thank you so much for your help! 🤗
https://github.com/huggingface/diffusers/issues/10414
closed
[]
2024-12-31T06:45:21Z
2024-12-31T06:49:52Z
0
S20180576
huggingface/peft
2,301
How to pass in an attention_mask that has one more dimension than input_ids
### System Info Hello, how can I pass in an `attention_mask` that has one more dimension than `input_ids`? For example: `output = peft_model.generate(input_ids, attention_mask=attention_mask, max_new_tokens=100)` The `input_ids` dimension is [batch_size, N], and the `attention_mask` dimension is [batch_size, N, N]. Under this condition, when the above line of code is run, the following error is reported: File "/root/anaconda3/lib/python3.10/site-packages/transformers/modeling_attn_mask_utils.py", line 179, in _expand_mask bsz, src_len = mask.size() ValueError: too many values to unpack (expected 2) ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder - [X] My own task or dataset (give details below) ### Reproduction
```python
input_ids = torch.cat([
    (torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|mmu|>']).to(device),
    (torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|soi|>']).to(device),
    image_tokens,
    (torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|eoi|>']).to(device),
    (torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|sot|>']).to(device),
    input_ids
], dim=1).long()
attention_mask = create_attention_mask_for_mmu(input_ids.to(device), eoi_id=int(uni_prompting.sptids_dict['<|eoi|>']))
cont_toks_list = peft_model.generate(input_ids, attention_mask=attention_mask, max_new_tokens=100)
```
### Expected behavior Load the model for fine-tuning and inference.
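For reference, the traceback is exactly a rank mismatch: `_expand_mask` unpacks two dimensions, so a [batch_size, N, N] mask cannot pass through unchanged. A tiny self-contained sketch of the mismatch:

```python
import torch

batch_size, n = 2, 5
mask_2d = torch.ones(batch_size, n)     # the shape transformers' _expand_mask expects
mask_3d = torch.ones(batch_size, n, n)  # the shape the custom code produces

bsz, src_len = mask_2d.size()  # fine: exactly two dimensions
try:
    bsz, src_len = mask_3d.size()
except ValueError as e:
    print(e)  # too many values to unpack (expected 2)
```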
https://github.com/huggingface/peft/issues/2301
closed
[]
2024-12-31T02:26:14Z
2025-02-07T15:03:57Z
null
Chinesehou97
huggingface/diffusers
10,411
How to use the LoRA weights trained with examples/consistency_distillation/train_lcm_distill_lora_sd_wds.py
I followed https://github.com/huggingface/diffusers/tree/main/examples/consistency_distillation and trained the final LoRA weights with the provided tutorial, but did not find a way to load them. Could you provide a demo of running inference with these weights? Thank you very much! The training script:
```
#!/bin/bash
# Define the variables
PRETRAINED_TEACHER_MODEL="/ai/yzy/latent-consistency-model-main/stable-diffusion-v1-5"
OUTPUT_DIR="/ai/yzy/latent-consistency-model-main/output_sd001"
RESOLUTION=512
LORA_RANK=64
LEARNING_RATE=1e-6
LOSS_TYPE='huber'
ADAM_WEIGHT_DECAY=0.0
MAX_TRAIN_STEPS=1000
MAX_TRAIN_SAMPLES=4000000
DATALOADER_NUM_WORKERS=4
TRAIN_SHARDS_PATH_OR_URL='/ai/yzy/latent-consistency-model-main/00000.tar'
VALIDATION_STEPS=200
CHECKPOINTING_STEPS=200
CHECKPOINTS_TOTAL_LIMIT=10
TRAIN_BATCH_SIZE=8
GRADIENT_ACCUMULATION_STEPS=1
SEED=453645634

# Run the training script
python ./LCM_Training_Script/consistency_distillation/train_lcm_distill_lora_sd_wds.py \
  --pretrained_teacher_model=$PRETRAINED_TEACHER_MODEL \
  --output_dir=$OUTPUT_DIR \
  --mixed_precision=fp16 \
  --resolution=$RESOLUTION \
  --lora_rank=$LORA_RANK \
  --learning_rate=$LEARNING_RATE \
  --loss_type=$LOSS_TYPE \
  --adam_weight_decay=$ADAM_WEIGHT_DECAY \
  --max_train_steps=$MAX_TRAIN_STEPS \
  --max_train_samples=$MAX_TRAIN_SAMPLES \
  --dataloader_num_workers=$DATALOADER_NUM_WORKERS \
  --train_shards_path_or_url=$TRAIN_SHARDS_PATH_OR_URL \
  --validation_steps=$VALIDATION_STEPS \
  --checkpointing_steps=$CHECKPOINTING_STEPS \
  --checkpoints_total_limit=$CHECKPOINTS_TOTAL_LIMIT \
  --train_batch_size=$TRAIN_BATCH_SIZE \
  --gradient_checkpointing \
  --enable_xformers_memory_efficient_attention \
  --gradient_accumulation_steps=$GRADIENT_ACCUMULATION_STEPS \
  --use_8bit_adam \
  --resume_from_checkpoint=latest \
  --seed=$SEED
```
the output: ![image](https://github.com/user-attachments/assets/5fb9a474-52d9-4d2f-85e4-dd5c3e0902db)
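For reference, a hedged sketch of how such LCM LoRA weights are typically loaded for inference with diffusers: swap in `LCMScheduler`, point `load_lora_weights` at the `OUTPUT_DIR` from the script above, and sample with few steps and low guidance. The prompt is a placeholder:

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

# The paths below are the teacher model and OUTPUT_DIR from the training script.
pipe = DiffusionPipeline.from_pretrained(
    "/ai/yzy/latent-consistency-model-main/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("/ai/yzy/latent-consistency-model-main/output_sd001")

# LCM samples well with very few steps and low guidance.
image = pipe("a photo of a dog", num_inference_steps=4, guidance_scale=1.0).images[0]
image.save("lcm_lora_test.png")
```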
https://github.com/huggingface/diffusers/issues/10411
closed
[]
2024-12-30T12:06:07Z
2024-12-31T07:21:40Z
null
yangzhenyu6
huggingface/text-embeddings-inference
461
How to Set the Threshold for gte-multilingual-reranker
I want to use the gte-multilingual-reranker-base model to re-rank the retrieved documents and discard some of them based on a threshold. I have seen examples on Hugging Face where the logits are used as the output scores, but how can I determine the appropriate threshold?
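One common approach, sketched here with the usual model-card pattern: squash the logits with a sigmoid and tune a cutoff on a small labeled validation set; the 0.5 starting point and the example texts are assumptions, not a recommendation from the model authors:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "Alibaba-NLP/gte-multilingual-reranker-base"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, trust_remote_code=True).eval()

query = "what is panda?"
docs = ["The giant panda is a bear native to China.", "Paris is the capital of France."]
pairs = [[query, d] for d in docs]

with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors="pt", max_length=512)
    scores = torch.sigmoid(model(**inputs).logits.view(-1))  # map logits to (0, 1)

threshold = 0.5  # assumption: calibrate on your own labeled data
kept = [d for d, s in zip(docs, scores) if s >= threshold]
```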
https://github.com/huggingface/text-embeddings-inference/issues/461
open
[]
2024-12-30T11:39:48Z
2025-02-09T06:29:02Z
null
ketusrai
huggingface/optimum
2,140
KeyError: 'swinv2' model type is not supported yet in NormalizedConfig
### System Info ```shell Google Colab T4 GPU transformers Version: 4.47.1 optimum Version: 1.24.0.dev0 ``` ### Who can help? @michaelbenayoun, @JingyaHuang, @echarlaix ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction (minimal, reproducible, runnable)
```python
from optimum.onnxruntime import ORTModelForVision2Seq

model = ORTModelForVision2Seq.from_pretrained("/content/swin-xlm-image-recognition", export=True, use_cache=False)
model.save_pretrained("swin-xlm-image-recognition-onnx")
```
### Expected behavior How can I solve this issue? I am trying to convert my VisionEncoderDecoderModel to ONNX using optimum, but I am getting this error: `KeyError: 'swinv2 model type is not supported yet in NormalizedConfig. Only albert, bart, bert, blenderbot, blenderbot-small, bloom, falcon, camembert, codegen, cvt, deberta, deberta-v2, deit, distilbert, donut-swin, electra, encoder-decoder, gemma, gpt2, gpt-bigcode, gpt-neo, gpt-neox, gptj, imagegpt, llama, longt5, marian, markuplm, mbart, mistral, mixtral, mpnet, mpt, mt5, m2m-100, nystromformer, opt, pegasus, pix2struct, phi, phi3, phi3small, poolformer, regnet, resnet, roberta, segformer, speech-to-text, splinter, t5, trocr, vision-encoder-decoder, vit, whisper, xlm-roberta, yolos, qwen2, granite are supported. If you want to support swinv2 please propose a PR or open up an issue.'` The encoder is "swinv2" and the decoder is "xlm-roberta".
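Until swinv2 is supported upstream, one hedged workaround is to register it yourself before exporting. This touches optimum's internal `_conf` registry, which is not a stable public API, so treat it as a sketch that may break across versions:

```python
from optimum.utils.normalized_config import NormalizedConfigManager, NormalizedVisionConfig

# Unofficial workaround: map swinv2 to the generic vision config so the
# exporter can resolve hidden sizes; unsupported, may break across versions.
NormalizedConfigManager._conf["swinv2"] = NormalizedVisionConfig
```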
https://github.com/huggingface/optimum/issues/2140
open
[ "bug" ]
2024-12-30T10:29:14Z
2024-12-30T10:29:14Z
0
Billybeast2003
huggingface/optimum-intel
1,096
How to use trainer.train() with OVModelForCausalLM() model
I am currently converting a local LLM to OpenVINO. I would like to fine-tune my model with the Trainer function, but I get an error stating: AttributeError: 'OVModelForCausalLM' object has no attribute 'named_children' Please let me know if there is a way to fine-tune OpenVINO models that are loaded with OVModelForCausalLM(). Attached is my script [Fine_Tuning_mistral_7b_v3 (2).zip](https://github.com/user-attachments/files/18271287/Fine_Tuning_mistral_7b_v3.2.zip)
https://github.com/huggingface/optimum-intel/issues/1096
closed
[]
2024-12-29T23:54:26Z
2025-02-27T14:54:20Z
null
CJames1261
huggingface/trl
2,523
How to handle the situation where the reward model's tokenizer is inconsistent with the actor model's tokenizer?
https://github.com/huggingface/trl/issues/2523
open
[ "โ“ question" ]
2024-12-27T09:43:06Z
2024-12-28T06:26:16Z
null
stephen-nju
huggingface/peft
2,298
QDoRA support
### Feature request Is it possible to use QDoRA with PEFT? ### Motivation QDoRA is better than QLoRA and performs like full fine-tuning. ### Your contribution ``` peft_config = LoraConfig( r=8, lora_alpha=32, lora_dropout=0.1, qdora=True # adding qdora ) ```
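For reference, PEFT does not expose a `qdora` flag; a commonly used approximation is DoRA (`use_dora=True`) on a 4-bit quantized base model. A hedged sketch, with the model id as a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantized base model (the "Q" part); model id is an example.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", quantization_config=bnb)

# DoRA adapters on top (the "DoRA" part); together this approximates QDoRA.
config = LoraConfig(r=8, lora_alpha=32, lora_dropout=0.1, use_dora=True)
model = get_peft_model(base, config)
model.print_trainable_parameters()
```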
https://github.com/huggingface/peft/issues/2298
closed
[]
2024-12-27T04:47:54Z
2025-01-03T12:26:58Z
2
imrankh46
huggingface/smolagents
2
How to call OpenAI-like models through an API?
How to call OpenAI-like models through an API?
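For reference, recent smolagents versions route OpenAI-compatible endpoints through LiteLLM; a hedged sketch with the endpoint and key as placeholders:

```python
from smolagents import CodeAgent, LiteLLMModel

model = LiteLLMModel(
    model_id="openai/gpt-4o",             # any OpenAI-compatible model name
    api_base="https://your-endpoint/v1",  # placeholder endpoint
    api_key="sk-...",                     # placeholder key
)
agent = CodeAgent(tools=[], model=model)
agent.run("What is 2 + 2?")
```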
https://github.com/huggingface/smolagents/issues/2
closed
[]
2024-12-27T04:34:35Z
2024-12-29T21:58:10Z
null
win4r
huggingface/datasets
7,347
Converting Arrow to WebDataset TAR Format for Offline Use
### Feature request Hi, I've downloaded an Arrow-formatted dataset offline using Hugging Face's datasets library:
```
import json
from datasets import load_dataset
dataset = load_dataset("pixparse/cc3m-wds")
dataset.save_to_disk("./cc3m_1")
```
Now I need to convert it to WebDataset's TAR format for offline data ingestion. Is there a straightforward method to achieve this conversion without an internet connection? Can I simply convert it by running ``` tar -cvf ``` By the way, when I tried:
```
import webdataset as wds
from huggingface_hub import get_token
from torch.utils.data import DataLoader

hf_token = get_token()
url = "https://huggingface.co/datasets/timm/imagenet-12k-wds/resolve/main/imagenet12k-train-{{0000..1023}}.tar"
url = f"pipe:curl -s -L {url} -H 'Authorization:Bearer {hf_token}'"
dataset = wds.WebDataset(url).decode()
dataset.save_to_disk("./cc3m_webdataset")
```
this error occurred:
```
AttributeError: 'WebDataset' object has no attribute 'save_to_disk'
```
Thanks a lot! ### Motivation Converting Arrow to WebDataset TAR format ### Your contribution No clue yet
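On the first question: a plain `tar -cvf` over the saved Arrow files will not yield a usable WebDataset, because WebDataset tars hold one file per sample field, keyed as `<key>.<extension>`. A hedged conversion sketch, assuming `image` (PIL) and `caption` columns, which may not match this dataset's actual schema:

```python
import io
import json
import webdataset as wds
from datasets import load_from_disk

ds = load_from_disk("./cc3m_1")["train"]
with wds.ShardWriter("cc3m-%06d.tar", maxcount=10_000) as sink:  # one shard per 10k samples
    for i, example in enumerate(ds):
        buf = io.BytesIO()
        example["image"].save(buf, format="JPEG")  # assumes a PIL "image" column
        sink.write({
            "__key__": f"{i:08d}",
            "jpg": buf.getvalue(),
            "json": json.dumps({"caption": example.get("caption", "")}).encode("utf-8"),
        })
```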
https://github.com/huggingface/datasets/issues/7347
closed
[ "enhancement" ]
2024-12-27T01:40:44Z
2024-12-31T17:38:00Z
4
katie312
huggingface/transformers.js
1,118
Trying to use a custom finetuned Whisper model with whisper-web
### Question @xenova I am trying to use our own fine-tuned Whisper model https://huggingface.co/medxcribe/whisper-base.en with https://huggingface.co/spaces/Xenova/whisper-web. I have uploaded it into a separate repo for reference: https://huggingface.co/medxcribe/whisper-base-onnx.en. We converted the fine-tuned medxcribe/whisper-base.en using these commands: `pip install onnx==1.17.0 pip install onnxruntime==1.20.1 pip install transformers==4.35.2 optimum-cli export onnx --model medxcribe/whisper-base.en whisper_onnx --task automatic-speech-recognition-with-past --opset 14` But unfortunately, while loading whisper-web, we are stuck with the error below: "Can't create a session" at t.createSessionFinalize (http://localhost:4173/assets/worker-1c2c88a7.js:1789:105945) at t.createSession (http://localhost:4173/assets/worker-1c2c88a7.js:1789:106543) at t.createSession (http://localhost:4173/assets/worker-1c2c88a7.js:1789:98867) at t.OnnxruntimeWebAssemblySessionHandler.loadModel (http://localhost:4173/assets/worker-1c2c88a7.js:1789:101717) at Object.createSessionHandler (http://localhost:4173/assets/worker-1c2c88a7.js:9:115048) at dn.create (http://localhost:4173/assets/worker-1c2c88a7.js:1:14653) at async constructSession (http://localhost:4173/assets/worker-1c2c88a7.js:1810:22248) at async Promise.all (index 2) at async WhisperForConditionalGeneration.from_pretrained (http://localhost:4173/assets/worker-1c2c88a7.js:1810:29662) at async AutoModelForSpeechSeq2Seq.from_pretrained (http://localhost:4173/assets/worker-1c2c88a7.js:1810:77285) Any suggestions? At a high level, there seems to be a problem with the generated ONNX files.
https://github.com/huggingface/transformers.js/issues/1118
open
[ "question" ]
2024-12-26T20:18:36Z
2024-12-26T20:18:36Z
null
vijaim
huggingface/finetrainers
153
How are validation results generated, and which resolutions are supported?
Hi author: I am using your HunyuanVideo fine-tuning bash script to finetune a LoRA on my own dataset, which has an original resolution of 1080p. But I find that your model can only run on videos where both height and width are divisible by 32. Can the model also be trained on 360p or 720p videos, and why?
https://github.com/huggingface/finetrainers/issues/153
closed
[]
2024-12-26T15:21:22Z
2025-01-10T23:38:39Z
null
Aristo23333
huggingface/lerobot
597
Inquiry About Support for RDT-1B Model
Hi, I would like to extend my heartfelt thanks for maintaining such an outstanding codebase. Your dedication and hard work have significantly contributed to advancements in the robotics field, and I truly appreciate the resources and support your community provides. I am reaching out to inquire whether there are any plans to support the RDT-1B model from the [RoboticsDiffusionTransformer](https://github.com/thu-ml/RoboticsDiffusionTransformer) repository within the LeRobot framework. The RDT-1B model appears to offer promising capabilities for robotics applications, and integrating it could potentially enhance the functionalities and performance of projects built on LeRobot. Could you please let me know if there are any intentions to incorporate this model in the future, or if there are any existing efforts towards this integration? Additionally, if there are ways the community can assist or contribute to this effort, I would be eager to participate. Thank you once again for all your contributions and support. I look forward to your response.
https://github.com/huggingface/lerobot/issues/597
closed
[ "question", "policies", "stale" ]
2024-12-26T11:12:58Z
2025-10-08T20:52:51Z
null
Robert-hua
huggingface/diffusers
10,383
[Request] Optimize HunyuanVideo Inference Speed with ParaAttention
Hi guys, First and foremost, I would like to commend you for the incredible work on the `diffusers` library. It has been an invaluable resource for my projects. I am writing to suggest an enhancement to the inference speed of the `HunyuanVideo` model. We have found that using [ParaAttention](https://github.com/chengzeyi/ParaAttention) can significantly speed up the inference of HunyuanVideo. ParaAttention provides context parallel attention that works with `torch.compile`, supporting Ulysses Style and Ring Style parallelism. I hope we could add a doc or introduction of how to make `HunyuanVideo` of `diffusers` run faster with `ParaAttention`. Besides `HunyuanVideo`, `FLUX`, `Mochi` and `CogVideoX` are also supported. Steps to Optimize HunyuanVideo Inference with `ParaAttention`: # Install ParaAttention: ```bash pip3 install para-attn # Or visit https://github.com/chengzeyi/ParaAttention.git to see detailed instructions ``` # Example Script: Here is an example script to run HunyuanVideo with ParaAttention: ```python import torch import torch.distributed as dist from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel from diffusers.utils import export_to_video dist.init_process_group() # [rank1]: RuntimeError: Expected mha_graph->execute(handle, variant_pack, workspace_ptr.get()).is_good() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.) torch.backends.cuda.enable_cudnn_sdp(False) model_id = "tencent/HunyuanVideo" transformer = HunyuanVideoTransformer3DModel.from_pretrained( model_id, subfolder="transformer", torch_dtype=torch.bfloat16, revision="refs/pr/18", ) pipe = HunyuanVideoPipeline.from_pretrained( model_id, transformer=transformer, torch_dtype=torch.float16, revision="refs/pr/18", ).to(f"cuda:{dist.get_rank()}") pipe.vae.enable_tiling( # Make it runnable on GPUs with 48GB memory # tile_sample_min_height=128, # tile_sample_stride_height=96, # tile_sample_min_width=128, # tile_sample_stride_width=96, # tile_sample_min_num_frames=32, # tile_sample_stride_num_frames=24, ) from para_attn.context_parallel import init_context_parallel_mesh from para_attn.context_parallel.diffusers_adapters import parallelize_pipe from para_attn.parallel_vae.diffusers_adapters import parallelize_vae mesh = init_context_parallel_mesh( pipe.device.type, ) parallelize_pipe( pipe, mesh=mesh, ) parallelize_vae(pipe.vae, mesh=mesh._flatten()) # pipe.enable_model_cpu_offload(gpu_id=dist.get_rank()) # torch._inductor.config.reorder_for_compute_comm_overlap = True # pipe.transformer = torch.compile(pipe.transformer, mode="max-autotune-no-cudagraphs") output = pipe( prompt="A cat walks on the grass, realistic", height=720, width=1280, num_frames=129, num_inference_steps=30, output_type="pil" if dist.get_rank() == 0 else "pt", ).frames[0] if dist.get_rank() == 0: print("Saving video to hunyuan_video.mp4") export_to_video(output, "hunyuan_video.mp4", fps=15) dist.destroy_process_group() ``` Save the above code to `run_hunyuan_video.py` and run it with torchrun: ```bash torchrun --nproc_per_node=2 run_hunyuan_video.py ``` The generated video on 2xH100: https://github.com/user-attachments/assets/e67838a7-5261-452e-9bf0-9f186611c3b7 By following these steps, users can leverage `ParaAttention` to achieve faster inference times with `HunyuanVideo` on multiple GPUs. Thank you for considering this suggestion. I believe it could greatly benefit the community and enhance the performance of `HunyuanVideo`. 
Please let me know if there are any questions or further clarifications needed.
https://github.com/huggingface/diffusers/issues/10383
closed
[ "roadmap" ]
2024-12-25T15:07:53Z
2025-01-16T18:05:15Z
10
chengzeyi
huggingface/lerobot
596
How to achieve multiple tasks on the basis of LeRobot?
LeRobot can achieve single tasks (such as inserting, transferring blocks, etc.). How can multiple tasks be achieved on the basis of LeRobot (such as first recognizing and classifying objects, and then putting objects in order into boxes, etc.)? Please give me some ideas.
https://github.com/huggingface/lerobot/issues/596
closed
[ "question", "stale" ]
2024-12-25T12:20:37Z
2025-10-17T11:38:20Z
null
wangwisdom
huggingface/diffusers
10,375
[low priority] Please fix links in documentation
https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuan_video Both links in the following passage are broken: "Make sure to check out the Schedulers [guide](https://huggingface.co/docs/diffusers/main/en/using-diffusers/schedulers.md) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading.md#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines."
https://github.com/huggingface/diffusers/issues/10375
closed
[]
2024-12-25T09:04:33Z
2024-12-28T20:01:27Z
0
nitinmukesh
huggingface/diffusers
10,374
Is there any plan to support TeaCache for training-free acceleration?
TeaCache is a training-free inference acceleration method for visual generation. TeaCache currently supports HunyuanVideo, CogVideoX, Open-Sora, Open-Sora-Plan and Latte. TeaCache can speed up HunyuanVideo 2x without much visual quality degradation. For example, inference for a 720p, 129-frame video takes around 50 minutes on a single A800 GPU, while TeaCache can speed this up to 23 minutes. Thanks for your efforts! https://github.com/LiewFeng/TeaCache.
https://github.com/huggingface/diffusers/issues/10374
open
[ "wip" ]
2024-12-25T05:00:23Z
2025-01-27T01:28:53Z
4
LiewFeng
huggingface/chat-ui
1,633
docker run is not working
I'm running the following: ``` docker run -p 3000:3000 --env-file env.local huggingface/chat-ui ``` The env file has the following set: `HF_TOKEN`, `MONGODB_URL` and `MODELS`. The container prints the following: ``` Listening on 0.0.0.0:3000 ``` However, on hitting `localhost:3000`, I get a blank page with `Not found`. I can repro this consistently. Can anyone who has gotten chat-ui working under Docker share how?
https://github.com/huggingface/chat-ui/issues/1633
open
[ "support" ]
2024-12-23T08:36:09Z
2025-01-06T07:30:46Z
1
sebastiangonsal
huggingface/peft
2,293
Is it possible to add LoRA on specific head?
### Feature request Could I add LoRA only to some selected heads of the model? I read some documentation [here](https://huggingface.co/docs/peft/developer_guides/custom_models), but am still not sure how to implement my goal. ### Motivation The current LoraConfig lets users decide which matrices to add LoRA to; more fine-grained control over which heads to add LoRA to would be beneficial for developers. ### Your contribution I would appreciate some tips on how to implement this.
https://github.com/huggingface/peft/issues/2293
closed
[]
2024-12-22T19:57:54Z
2025-12-14T10:07:49Z
12
SpeeeedLee
huggingface/datasets
7,344
HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access SlimPajama-627B or c4 on TPUs
### Describe the bug I am trying to run some trainings on Google's TPUs using Huggingface's DataLoader on [SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B) and [c4](https://huggingface.co/datasets/allenai/c4), but I end up running into `429 Client Error: Too Many Requests for URL` error when I call `load_dataset`. The even odder part is that I am able to sucessfully run trainings with the [wikitext dataset](https://huggingface.co/datasets/Salesforce/wikitext). Is there something I need to setup to specifically train with SlimPajama or C4 with TPUs because I am not clear why I am getting these errors. ### Steps to reproduce the bug These are the commands you could run to produce the error below but you will require a ClearML account (you can create one [here](https://app.clear.ml/login?redirect=%2Fdashboard)) with a queue setup to run on Google TPUs ```bash git clone https://github.com/clankur/muGPT.git cd muGPT python -m train --config-name=slim_v4-32_84m.yaml +training.queue={NAME_OF_CLEARML_QUEUE} ``` The error I see: ``` Traceback (most recent call last): File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/clearml/binding/hydra_bind.py", line 230, in _patched_task_function return task_function(a_config, *a_args, **a_kwargs) File "/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/train.py", line 1037, in main main_contained(config, logger) File "/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/train.py", line 840, in main_contained loader = get_loader("train", config.training_data, config.training.tokens) File "/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/input_loader.py", line 549, in get_loader return HuggingFaceDataLoader(split, config, token_batch_params) File "/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/input_loader.py", line 395, in __init__ self.dataset = load_dataset( File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 2112, in load_dataset builder_instance = load_dataset_builder( File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 1798, in load_dataset_builder dataset_module = dataset_module_factory( File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 1495, in dataset_module_factory raise e1 from None File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 1479, in dataset_module_factory ).get_module() File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 1034, in get_module else get_data_patterns(base_path, download_config=self.download_config) File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/data_files.py", line 457, in get_data_patterns return _get_data_files_patterns(resolver) File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/data_files.py", line 248, in _get_data_files_patterns data_files = pattern_resolver(pattern) File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/data_files.py", line 340, in resolve_pattern for filepath, info in fs.glob(pattern, detail=True).items() File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 409, in glob return super().glob(path, **kwargs) File "/home/clankur/.clearml/venvs-builds/3.10/lib/python3.10/site-packages/fsspec/spec.py", line 602, in glob allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs) File 
"/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 429, in find out = self._ls_tree(path, recursive=True, refresh=refresh, revision=resolved_path.revision, **kwargs) File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 358, in _ls_tree self._ls_tree( File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 375, in _ls_tree for path_info in tree: File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 3080, in list_repo_tree for path_info in paginate(path=tree_url, headers=headers, params={"recursive": recursive, "expand": expand}): File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/utils/_pagination.py", line 46, in paginate hf_raise_for_status(r) File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 477, in hf_raise_for_status raise _format(HfHubHTTPError, str(e), response) from e huggingface_hub.errors.HfHubHTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/api/datasets/cerebras/SlimPajama-627B/tree/2d0accdd58c5d5511943ca1f5ff0e3eb5e293543?recursive=True&
https://github.com/huggingface/datasets/issues/7344
closed
[]
2024-12-22T16:30:07Z
2025-01-15T05:32:00Z
2
clankur
huggingface/diffusers
10,345
Safetensors streaming in from_single_file()
Can we add support for streaming safetensors while loading with `from_single_file`? Source: https://github.com/run-ai/runai-model-streamer Example: ```python from runai_model_streamer import SafetensorsStreamer file_path = "/path/to/file.safetensors" with SafetensorsStreamer() as streamer: streamer.stream_file(file_path) for name, tensor in streamer.get_tensors(): tensor.to('cuda:0') ```
https://github.com/huggingface/diffusers/issues/10345
closed
[ "stale" ]
2024-12-22T13:27:46Z
2025-01-21T15:07:58Z
2
AbhinavJangra29
huggingface/accelerate
3,309
DeepSpeed ZeRO-3: how to save a custom model?
```
DeepSpeedEngine(
  (module): LLMDecoder(
    (model): Qwen2ForSequenceClassification(
      (model): Qwen2Model(
        (embed_tokens): Embedding(151936, 1536)
        (layers): ModuleList(
          (0-27): 28 x Qwen2DecoderLayer(
            (self_attn): Qwen2SdpaAttention(
              (q_proj): Linear(in_features=1536, out_features=1536, bias=True)
              (k_proj): Linear(in_features=1536, out_features=256, bias=True)
              (v_proj): Linear(in_features=1536, out_features=256, bias=True)
              (o_proj): Linear(in_features=1536, out_features=1536, bias=False)
              (rotary_emb): Qwen2RotaryEmbedding()
            )
            (mlp): Qwen2MLP(
              (gate_proj): Linear(in_features=1536, out_features=8960, bias=False)
              (up_proj): Linear(in_features=1536, out_features=8960, bias=False)
              (down_proj): Linear(in_features=8960, out_features=1536, bias=False)
              (act_fn): SiLU()
            )
            (input_layernorm): Qwen2RMSNorm((0,), eps=1e-06)
            (post_attention_layernorm): Qwen2RMSNorm((0,), eps=1e-06)
          )
        )
        (norm): Qwen2RMSNorm((0,), eps=1e-06)
        (rotary_emb): Qwen2RotaryEmbedding()
      )
      (score): Linear(in_features=1536, out_features=1, bias=False)
    )
  )
)
```
Hello, the above is my model structure. In short, I use a custom LLMDecoder, which has an attribute named model that is a Qwen2ForSequenceClassification object. In this case, how should I save the model under DeepSpeed ZeRO-3? The following code does not fit my model structure; how should I modify it?
```python
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(
    args.output_dir,
    is_main_process=accelerator.is_main_process,
    save_function=accelerator.save,
    state_dict=accelerator.get_state_dict(model),
)
```
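One possible adaptation, offered as a hedged sketch rather than a verified recipe: gather the full ZeRO-3 state dict for the whole wrapped model first, then save only the inner `Qwen2ForSequenceClassification` (assumed to live at `LLMDecoder.model`) with the matching key prefix stripped. `accelerator`, `model`, and `args` are the same objects as in the snippet above:

```python
# Hedged sketch: gather full weights under ZeRO-3, then save only the
# wrapped Qwen2ForSequenceClassification submodule.
state_dict = accelerator.get_state_dict(model)   # gathers sharded parameters
unwrapped = accelerator.unwrap_model(model)      # -> LLMDecoder
inner = unwrapped.model                          # -> Qwen2ForSequenceClassification (assumed layout)
inner.save_pretrained(
    args.output_dir,
    is_main_process=accelerator.is_main_process,
    save_function=accelerator.save,
    state_dict={k[len("model."):]: v for k, v in state_dict.items() if k.startswith("model.")},
)
```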
https://github.com/huggingface/accelerate/issues/3309
closed
[]
2024-12-21T17:01:17Z
2025-01-30T15:06:45Z
null
NLPJCL
huggingface/diffusers
10,334
Sana broken on macOS: grey images on MPS, NaNs on CPU
### Describe the bug Just started to play with Sana, was excited when I saw it was coming to Diffusers as the NVIDIA supplied code was full of CUDA only stuff. Ran the example code, changing cuda to mps and got a grey image. ![output](https://github.com/user-attachments/assets/f8f230d2-c025-437a-adf4-9bbb76767a65) Removed the move to MPS to run it on the CPU and the script failed with ``` image_processor.py:147: RuntimeWarning: invalid value encountered in cast ``` that suggests the latents had NaN's on the CPU. ### Reproduction ```py import torch from diffusers import SanaPipeline pipe = SanaPipeline.from_pretrained( "Efficient-Large-Model/Sana_1600M_1024px_diffusers", torch_dtype=torch.float32 ) pipe.to("mps") pipe.text_encoder.to(torch.bfloat16) pipe.transformer = pipe.transformer.to(torch.float16) image = pipe(prompt='a cyberpunk cat with a neon sign that says "Sana"')[0] image[0].save("output.png") ``` removed `pipe.to("mps")` to run on the CPU. ### Logs ```shell *** MPS run *** (Diffusers) $ python sana_test.py Loading checkpoint shards: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2/2 [00:10<00:00, 5.03s/it] Loading pipeline components...: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 5/5 [00:10<00:00, 2.18s/it] Setting `clean_caption=True` requires the Beautiful Soup library but it was not found in your environment. You can install it with pip: `pip install beautifulsoup4`. Please note that you may need to restart your runtime after installation. Setting `clean_caption` to False... The 'batch_size' argument of HybridCache is deprecated and will be removed in v4.49. Use the more precisely named 'max_batch_size' argument instead. The 'batch_size' attribute of HybridCache is deprecated and will be removed in v4.49. Use the more precisely named 'self.max_batch_size' attribute instead. Setting `clean_caption=True` requires the Beautiful Soup library but it was not found in your environment. You can install it with pip: `pip install beautifulsoup4`. Please note that you may need to restart your runtime after installation. Setting `clean_caption` to False... 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 20/20 [00:49<00:00, 2.48s/it] (Diffusers) $ ***CPU run*** (Diffusers) $ python sana_test.py Loading checkpoint shards: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2/2 [00:06<00:00, 3.13s/it] Loading pipeline components...: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 5/5 [00:07<00:00, 1.41s/it] Setting `clean_caption=True` requires the Beautiful Soup library but it was not found in your environment. You can install it with pip: `pip install beautifulsoup4`. Please note that you may need to restart your runtime after installation. Setting `clean_caption` to False... The 'batch_size' argument of HybridCache is deprecated and will be removed in v4.49. 
Use the more precisely named 'max_batch_size' argument instead. The 'batch_size' attribute of HybridCache is deprecated and will be removed in v4.49. Use the more precisely named 'self.max_batch_size' attribute instead. Setting `clean_caption=True` requires the Beautiful Soup library but it was not found in your environment. You can install it with pip: `pip install beautifulsoup4`. Please note that you may need to restart your runtime after installation. Setting `clean_caption` to False... 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 20/20 [20:14<00:00, 60.74s/it] /Volumes/SSD2TB/AI/Diffusers/lib/python3.11/site-packages/diffusers/image_processor.py:147: RuntimeWarning: invalid value encountered in cast images = (images * 255).round().astype("uint8") (Diffusers) $ ``` ### System Info - ๐Ÿค— Diffusers version: 0.32.0.dev0 - Platform: macOS-15.2-arm64-arm-64bit - Running on Google Colab?: No - Python version: 3.11.10 - PyTorch version (GPU?): 2.6.0.dev20241219 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Huggingface_hub version: 0.25.0 - Transformers version: 4.47.1 - Accelerate version: 0.34.2 - PEFT version: not installed - Bitsandbytes version: not installed - Safetensors version: 0.4.5 - xFormers version: not installed - Accelerator: Apple M3 - Using GPU in script?: both - Using distributed or parallel set-up in script?: no ### Who can help? @pcuenca
https://github.com/huggingface/diffusers/issues/10334
closed
[ "bug", "stale" ]
2024-12-21T11:26:40Z
2025-01-27T01:26:43Z
8
Vargol
huggingface/peft
2,292
Cannot import name 'EncoderDecoderCache' from 'transformers'
### System Info transformers==4.39.3; peft==0.14.0 Maybe this is caused by a transformers update, so which version can I use? ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder - [ ] My own task or dataset (give details below) ### Reproduction
```python
from src import models
from src.utils import IImage, resize
import numpy as np
from src.methods import rasg, sd, sr
from PIL import Image
from peft import get_peft_model, LoraConfig, TaskType

inp_model = models.load_inpainting_model('ds8_inp', device='cpu', cache=True)
lora_config = LoraConfig(
    task_type=TaskType.IMAGE_GENERATION,
    inference_mode=True,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
)
new_model = get_peft_model(inp_model.unet, lora_config)
print(new_model.state_dict().keys())
```
### Expected behavior /root/miniconda3/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning) Traceback (most recent call last): File "/root/autodl-tmp/workspace/HD-Painter/paratest.py", line 6, in <module> from peft import get_peft_model, LoraConfig, TaskType File "/root/miniconda3/lib/python3.10/site-packages/peft/__init__.py", line 22, in <module> from .auto import ( File "/root/miniconda3/lib/python3.10/site-packages/peft/auto.py", line 32, in <module> from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING File "/root/miniconda3/lib/python3.10/site-packages/peft/mapping.py", line 25, in <module> from .mixed_model import PeftMixedModel File "/root/miniconda3/lib/python3.10/site-packages/peft/mixed_model.py", line 29, in <module> from .peft_model import PeftModel File "/root/miniconda3/lib/python3.10/site-packages/peft/peft_model.py", line 37, in <module> from transformers import Cache, DynamicCache, EncoderDecoderCache, PreTrainedModel ImportError: cannot import name 'Cache' from 'transformers' (/root/miniconda3/lib/python3.10/site-packages/transformers/__init__.py)
https://github.com/huggingface/peft/issues/2292
closed
[]
2024-12-21T09:00:04Z
2025-03-31T06:50:20Z
4
Huang-jia-xuan
huggingface/sentence-transformers
3,141
How to load ModernBERT model correctly?
Hi team, I want to ask how to properly load [ModernBERT](https://huggingface.co/blog/modernbert) using SentenceTransformer. The main difficulty I met is the weight loading of the prediction head as defined [here](https://github.com/huggingface/transformers/blob/f42084e6411c39b74309af4a7d6ed640c01a4c9e/src/transformers/models/modernbert/modeling_modernbert.py#L1121-L1123), where `ModernBertPredictionHead` is not included in the `AutoModelClass`. I tried to use the following code:
```python
import torch
from sentence_transformers import SentenceTransformer, models

model_name_or_path = "answerdotai/ModernBERT-base"
modules = []
modules.append(models.Transformer(model_name_or_path))
## head
modules.append(models.Dense(768, 768, activation_function=torch.nn.GELU()))
modules.append(models.Dense(768, 768, activation_function=torch.nn.Identity()))
## pooling
modules.append(models.Pooling(768, pooling_mode="mean"))
## classifier
modules.append(models.Dense(768, 1))
model = SentenceTransformer(modules=modules, device="cpu")
```
However, it seems that `Dense` before `Pooling` is not supported and throws an error: ``` KeyError: 'sentence_embedding' ```
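One hedged observation: SentenceTransformer's `Dense` modules consume the pooled `sentence_embedding`, so they normally must come after `Pooling`. Reordering the modules avoids the `KeyError`; it does not, by itself, load the `ModernBertPredictionHead` weights, and the classifier head is left out for brevity:

```python
import torch
from sentence_transformers import SentenceTransformer, models

model_name_or_path = "answerdotai/ModernBERT-base"
modules = [
    models.Transformer(model_name_or_path),
    models.Pooling(768, pooling_mode="mean"),  # pool token embeddings first
    models.Dense(768, 768, activation_function=torch.nn.GELU()),      # head
    models.Dense(768, 768, activation_function=torch.nn.Identity()),  # head
]
model = SentenceTransformer(modules=modules, device="cpu")
```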
https://github.com/huggingface/sentence-transformers/issues/3141
closed
[]
2024-12-20T06:52:44Z
2024-12-24T03:08:47Z
null
Hannibal046
huggingface/picotron
15
Difference between picotron and nanotron
What is the difference between picotron and [nanotron](https://github.com/huggingface/nanotron)? Why did the Hugging Face team roll out two hybrid-parallelism frameworks?
https://github.com/huggingface/picotron/issues/15
closed
[ "question" ]
2024-12-19T12:48:57Z
2024-12-20T10:17:25Z
null
cailun01
huggingface/diffusers
10,302
Using FP8 for inference without CPU offloading can introduce noise.
### Describe the bug If I use ```pipe.enable_model_cpu_offload(device=device)```, the model can perform inference correctly after warming up. However, if I comment out this line, the inference results are noisy. ### Reproduction ```python from diffusers import ( FluxPipeline, FluxTransformer2DModel ) from transformers import T5EncoderModel, CLIPTextModel,CLIPTokenizer,T5TokenizerFast from optimum.quanto import freeze, qfloat8, quantize import torch from diffusers import FlowMatchEulerDiscreteScheduler, AutoencoderKL dtype = torch.bfloat16 bfl_repo = f"black-forest-labs/FLUX.1-dev" device = "cuda" scheduler = FlowMatchEulerDiscreteScheduler.from_pretrained(bfl_repo, subfolder="scheduler", torch_dtype=dtype) text_encoder = CLIPTextModel.from_pretrained(bfl_repo, subfolder="text_encoder", torch_dtype=dtype) tokenizer = CLIPTokenizer.from_pretrained(bfl_repo, subfolder="tokenizer", torch_dtype=dtype, clean_up_tokenization_spaces=True) text_encoder_2 = T5EncoderModel.from_pretrained(bfl_repo, subfolder="text_encoder_2", torch_dtype=dtype) tokenizer_2 = T5TokenizerFast.from_pretrained(bfl_repo, subfolder="tokenizer_2", torch_dtype=dtype, clean_up_tokenization_spaces=True) vae = AutoencoderKL.from_pretrained(bfl_repo, subfolder="vae", torch_dtype=dtype) transformer = FluxTransformer2DModel.from_single_file("https://huggingface.co/Kijai/flux-fp8/blob/main/flux1-dev-fp8.safetensors", torch_dtype=dtype) quantize(transformer, weights=qfloat8) freeze(transformer) quantize(text_encoder_2, weights=qfloat8) freeze(text_encoder_2) pipe = FluxPipeline( scheduler=scheduler, text_encoder=text_encoder, tokenizer=tokenizer, text_encoder_2=text_encoder_2, tokenizer_2=tokenizer_2, vae=vae, transformer=transformer ).to(device, dtype=dtype) # edit # pipe.enable_model_cpu_offload(device=device) params = { "prompt": "a cat", "num_images_per_prompt": 1, "num_inference_steps":1, "width": 64, "height": 64, "guidance_scale": 7, } image = pipe(**params).images[0] # wamup params = { "prompt": "a cat", "num_images_per_prompt": 1, "num_inference_steps":25, "width": 512, "height": 512, "guidance_scale": 7, } image = pipe(**params).images[0] image.save("1.jpg") ``` ### Logs _No response_ ### System Info WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for: PyTorch 2.5.1+cu121 with CUDA 1201 (you have 2.4.1+cu121) Python 3.10.15 (you have 3.10.13) Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers) Memory-efficient attention, SwiGLU, sparse and more won't be available. Set XFORMERS_MORE_DETAILS=1 for more details Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - ๐Ÿค— Diffusers version: 0.32.0.dev0 - Platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35 - Running on Google Colab?: No - Python version: 3.10.13 - PyTorch version (GPU?): 2.4.1+cu121 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Huggingface_hub version: 0.26.2 - Transformers version: 4.46.2 - Accelerate version: 0.31.0 - PEFT version: 0.14.0 - Bitsandbytes version: not installed - Safetensors version: 0.4.3 - xFormers version: 0.0.28.post3 - Accelerator: NVIDIA GeForce RTX 3090, 24576 MiB NVIDIA GeForce RTX 3090, 24576 MiB - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @yiyixuxu @DN6
https://github.com/huggingface/diffusers/issues/10302
open
[ "bug" ]
2024-12-19T12:39:06Z
2025-03-10T14:18:58Z
6
todochenxi
huggingface/candle
2,674
[Question] How to create an autograd function like in PyTorch? How to customize the forward and backward passes?
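For context, the PyTorch pattern the question refers to is a custom `torch.autograd.Function` with explicit forward and backward; the open question is what candle's equivalent is:

```python
import torch

class ClampGrad(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0.0)  # custom forward

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x > 0)  # custom backward: gradient only where x > 0

y = ClampGrad.apply(torch.randn(3, requires_grad=True))
y.sum().backward()
```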
https://github.com/huggingface/candle/issues/2674
open
[]
2024-12-19T07:02:04Z
2024-12-19T07:02:15Z
null
VanderBieu
huggingface/blog
2,551
How to process and visualize the segment output tokens?
How to process the segment tokens and generate segmentation masks? What does the output mean? ![微信图片_20241219110946](https://github.com/user-attachments/assets/089e5d16-f133-449a-a0ee-0f7c07e335dc)
https://github.com/huggingface/blog/issues/2551
open
[]
2024-12-19T03:11:15Z
2024-12-19T03:11:15Z
null
00mmw
huggingface/transformers
35,316
How to use a custom Image Processor?
I want to use the processor in the form of `auto_map` but when using `AutoProcessor.from_pretrained`, I am unable to load the custom `ImageProcessor`. The root cause lies in the use of the `transformers_module` to initialize the class in `ProcessorMixin`. https://github.com/huggingface/transformers/blob/c7e48053aab09ad11efa2ad12513e9ab56f29563/src/transformers/processing_utils.py#L1018 Even though I have overridden the _get_arguments_from_pretrained method, this issue still exists in the `__init__`. https://github.com/huggingface/transformers/blob/c7e48053aab09ad11efa2ad12513e9ab56f29563/src/transformers/processing_utils.py#L383 Perhaps I could avoid inheriting from ProcessorMixin, but I would like to know if there is a more elegant way to achieve this functionality?
https://github.com/huggingface/transformers/issues/35316
closed
[]
2024-12-18T12:04:33Z
2024-12-19T02:53:43Z
null
glamourzc
huggingface/diffusers
10,281
Request to implement FreeScale, a new diffusion scheduler
### Model/Pipeline/Scheduler description FreeScale is a tuning-free method for higher-resolution visual generation, unlocking 8K image generation for pre-trained SDXL! Compared to direct inference with SDXL, FreeScale brings negligible additional memory and time costs. ![fig_teaser](https://github.com/user-attachments/assets/3eef38cc-3642-42a7-b5e7-8b32c32ecc77) ![fig_diff8k](https://github.com/user-attachments/assets/8cec7c55-011e-4434-81e3-1e80dd5dd003) ### Open source status - [X] The model implementation is available. - [X] The model weights are available (Only relevant if addition is not a scheduler). ### Provide useful links for the implementation - Project: http://haonanqiu.com/projects/FreeScale.html - Paper: https://arxiv.org/abs/2412.09626 - Code: https://github.com/ali-vilab/FreeScale - Hugging Face Demo: https://huggingface.co/spaces/MoonQiu/FreeScale The code changes of FreeScale are not complicated, but I do not know how to integrate them into diffusers smoothly. If you have questions about FreeScale, please ask me (@arthur-qiu).
https://github.com/huggingface/diffusers/issues/10281
open
[ "stale", "consider-for-modular-diffusers" ]
2024-12-18T06:32:34Z
2025-01-17T15:02:49Z
1
arthur-qiu
huggingface/diffusers
10,280
Safetensors loading uses mmap with multiple processes sharing the same fd, causing slow gcsfuse performance
### Describe the bug When I use `StableDiffusionPipeline.from_single_file` to load a safetensors model, I noticed that the loading speed is extremely slow when the file is loaded from GCSFuse (https://cloud.google.com/storage/docs/cloud-storage-fuse/overview). The reason is that the loader creates multiple processes but they all share the same fd and its file handle. As each process reads a different offset of the file, GCSFuse performs really badly because those reads appear to be random reads jumping between offsets. For example:
```
connection.go:420] <- ReadFile (inode 2, PID 77, handle 1, offset 529453056, 262144 bytes)
connection.go:420] <- ReadFile (inode 2, PID 78, handle 1, offset 531812352, 262144 bytes)
connection.go:420] <- ReadFile (inode 2, PID 79, handle 1, offset 534171648, 262144 bytes)
connection.go:420] <- ReadFile (inode 2, PID 50, handle 1, offset 527351808, 4096 bytes)
```
The question I have is: why do the loader's multiple processes share the same fd in the first place? As `mmap` is already used, even if the processes don't share the same fd, the kernel will still map each process's virtual memory back to the same page cache naturally, so there is no need to share the fd across processes. If they don't share the fd, GCSFuse will perform much better. Therefore, can we disable the fd sharing? ### Reproduction Simply using GCSFuse to serve a file to `StableDiffusionPipeline.from_single_file` ### Logs _No response_ ### System Info N/A ### Who can help? @yiyixuxu @asomoza
https://github.com/huggingface/diffusers/issues/10280
closed
[ "bug" ]
2024-12-18T06:02:41Z
2025-01-10T10:11:05Z
4
wlhee
huggingface/optimum-neuron
750
Document how to use Qwen 2.5
### Feature request Qwen 2.5 7B Instruct on EC2 with HF DL AMI Qwen 2.5 7B Instruct on Sagemaker with HF DLC Neuronx TGI Maybe something for the code version too? Dependency of adding the model to the cache ### Motivation increase AMI and DLC usage ### Your contribution doc
https://github.com/huggingface/optimum-neuron/issues/750
closed
[ "Stale" ]
2024-12-17T16:03:25Z
2025-01-22T08:04:54Z
null
pagezyhf
huggingface/accelerate
3,294
How to run accelerate with PYTORCH_ENABLE_MPS_FALLBACK
### System Info
```Shell
MacOS
transformers>=4.35.1
datasets[audio]>=2.14.7
accelerate>=0.24.1
matplotlib
wandb
tensorboard
Cython

- `Accelerate` version: 1.2.1
- Platform: macOS-14.7.1-arm64-arm-64bit
- `accelerate` bash location: .venv/bin/accelerate
- Python version: 3.12.3
- Numpy version: 2.0.2
- PyTorch version (GPU?): 2.5.1 (False)
- PyTorch XPU available: False
- PyTorch NPU available: False
- PyTorch MLU available: False
- PyTorch MUSA available: False
- System RAM: 64.00 GB
- `Accelerate` default config: Not found
```
### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`) - [ ] My own task or dataset (give details below) ### Reproduction How do I set the `PYTORCH_ENABLE_MPS_FALLBACK` environment variable when running a script with accelerate? Accelerate is not picking up the `PYTORCH_ENABLE_MPS_FALLBACK` environment variable when running a script, no matter where the variable is set. I tried to set it in the script, on the command line, and in `./zshenv`, and PyTorch still complains that it does not see the variable. ### Expected behavior Expected the `PYTORCH_ENABLE_MPS_FALLBACK` variable to be visible in the sub-process/thread.
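A common hedged workaround is to set the variable inside the training script itself, before the first `import torch`, so it is visible in whatever process accelerate spawns:

```python
import os

# Must run before torch is imported anywhere, or the fallback is ignored.
os.environ.setdefault("PYTORCH_ENABLE_MPS_FALLBACK", "1")

import torch  # noqa: E402  (deliberately imported after the env var is set)
```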
https://github.com/huggingface/accelerate/issues/3294
closed
[]
2024-12-15T07:03:41Z
2025-01-23T15:06:57Z
null
mirodil-ml
huggingface/diffusers
10,223
Where should I obtain the lora-sdxl-dreambooth-id for inference?
### Describe the bug I tried to upload the download link from the README file generated during training, but an error indicated it was incorrect. Where should I obtain the lora-id for Inference? ### Reproduction README.md: --- base_model: /data/ziqiang/czc/diffusers/examples/dreambooth/model library_name: diffusers license: openrail++ instance_prompt: a photo of sks dog widget: [] tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - daniu111/output <Gallery /> ## Model description These are daniu111/output LoRA adaption weights for /data/ziqiang/czc/diffusers/examples/dreambooth/model. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: /data/ziqiang/czc/diffusers/examples/dreambooth/model/vae. ## Trigger words You should use a photo of sks dog to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](daniu111/output/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model] Inference: from huggingface_hub.repocard import RepoCard from diffusers import DiffusionPipeline import torch lora_model_id = <"lora-sdxl-dreambooth-id"> card = RepoCard.load(lora_model_id) base_model_id = card.data.to_dict()["base_model"] pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16) pipe = pipe.to("cuda") pipe.load_lora_weights(lora_model_id) image = pipe("A picture of a sks dog in a bucket", num_inference_steps=25).images[0] image.save("sks_dog.png") "The lora-dreambooth-sdxl-id seems to need to be uploaded, but I don't know where to obtain this ID." ### Logs _No response_ ### System Info - ๐Ÿค— Diffusers version: 0.32.0.dev0 - Platform: Linux-5.4.0-198-generic-x86_64-with-glibc2.31 - Running on Google Colab?: No - Python version: 3.12.4 - PyTorch version (GPU?): 2.4.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Huggingface_hub version: 0.26.2 - Transformers version: 4.46.3 - Accelerate version: 1.1.1 - PEFT version: 0.7.0 - Bitsandbytes version: not installed - Safetensors version: 0.4.5 - xFormers version: 0.0.27.post2 - Accelerator: NVIDIA RTX A6000, 49140 MiB NVIDIA RTX A6000, 49140 MiB NVIDIA RTX A6000, 49140 MiB - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @hlky
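Going by the README above, one hedged reading is that `lora-sdxl-dreambooth-id` is simply the Hub repo the training script pushed to (here `daniu111/output`), or a local path to the training output directory containing `pytorch_lora_weights.safetensors`:

```python
# Hedged reading of the README above: the id is the pushed Hub repo
# (or the local output_dir) holding the trained LoRA weights.
lora_model_id = "daniu111/output"
```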
https://github.com/huggingface/diffusers/issues/10223
open
[ "bug", "stale" ]
2024-12-14T06:34:56Z
2025-02-07T15:03:24Z
5
Zarato2122
huggingface/lerobot
575
Gello dataset converter
I made a converter for the [Gello](https://wuphilipp.github.io/gello_site/) dataset format (pickles containing dicts with all the observations). If this is of interest, I am willing to contribute it back here. The current code can be found [here](https://github.com/tlpss/lerobot/blob/tlpss-dev/lerobot/common/datasets/push_dataset_to_hub/gello_pkl_format.py). It needs some cleanup and maybe a convenient way to specify the mapping of dict keys in case you have a different number of cameras or other sensors. Wanted to see if there is any interest in this, before I make the effort to clean it up.
https://github.com/huggingface/lerobot/issues/575
closed
[ "enhancement", "question", "dataset", "stale" ]
2024-12-13T15:47:58Z
2025-10-08T08:50:40Z
null
tlpss
huggingface/diffusers
10,207
KolorsPipeline does not support from_single_file
`from diffusers import KolorsPipeline; KolorsPipeline.from_single_file("models/kolrs-8steps.safetensors")` How does `KolorsPipeline` load a single-file model?
https://github.com/huggingface/diffusers/issues/10207
open
[ "stale", "single_file" ]
2024-12-13T09:44:46Z
2025-01-12T15:02:46Z
3
Thekey756
huggingface/sentence-transformers
3,134
How to set a proper batchsize when using CachedMultipleNegativesRankingLoss?
When using the [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss), I tried different batch size (`per_device_train_batch_size`) settings and found that 512 was the maximum; with a batch size greater than 512, a GPU memory OOM happened. As stated in the documentation of [CachedMultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss): > GradCache is a smart way to solve this problem. It achieves the goal by dividing the computation into two stages of embedding and loss calculation, which both can be scaled by mini-batches. As a result, memory of constant size (e.g. that works with batch size = 32) can now process much larger batches (e.g. 65536). So I tried CachedMultipleNegativesRankingLoss, and its `mini_batch_size` can go as high as 2048; a `mini_batch_size` greater than 2048 causes a GPU memory OOM. Nevertheless, with `mini_batch_size` set to 2048, I can still increase the global batch size (`per_device_train_batch_size`). Generally speaking, a larger batch size achieves better performance in contrastive learning settings. So I tried different batch sizes (`per_device_train_batch_size`) and found it can be as large as 1048576 without causing a GPU memory OOM (and the GPU utilization is 100%). So I am wondering how to set a proper batch size (`per_device_train_batch_size`): can it be infinitely big?
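A sketch of how the two knobs are meant to interact: `mini_batch_size` only bounds peak GPU memory, while `per_device_train_batch_size` sets the effective in-batch-negatives pool. It cannot be infinitely big in practice: every step still has to embed the whole batch (step time grows linearly with it), and the benefit of extra negatives typically saturates, so values in the low tens of thousands are usually where returns diminish. The model name and sizes below are placeholders.

```python
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.training_args import SentenceTransformerTrainingArguments

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# mini_batch_size caps GPU memory; keep it at whatever fits on your card
loss = losses.CachedMultipleNegativesRankingLoss(model, mini_batch_size=1024)

args = SentenceTransformerTrainingArguments(
    output_dir="out",
    # the contrastive batch; limited by step time and host RAM, not GPU memory
    per_device_train_batch_size=32768,
)
```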
https://github.com/huggingface/sentence-transformers/issues/3134
open
[]
2024-12-13T09:25:34Z
2024-12-27T13:46:17Z
null
awmoe
huggingface/sentence-transformers
3,133
How to avoid the long wait before training starts?
Dear developer, Thanks for the great sentence-transformers library! I am finetuning [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on my own data, following the tutorial from: https://sbert.net/docs/sentence_transformer/training_overview.html I first finetuned it with a toy dataset containing only hundreds of triplet sentence samples; everything was fine and the finetuning was very fast. After that, I finetuned it with the real, big dataset containing 100 million triplet sentence samples. I found that it had to wait a long time (about 60 minutes) before training started, and the bigger the data, the longer the wait. Specifically: 1. It first spent 5 minutes on `Generating train split`. 2. It then spent 30 minutes on dataset mapping. 3. After that, it printed `Detected kernel version 4.18.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.`. 4. It then waited about 60 minutes before the real training started. During those 60 minutes, the GPU was working but the GPU utilization rate was relatively low (30%) and the GPU memory was not used. What's more, during those 60 minutes no log information was printed. Was it doing something like data preparation or tokenization? Could you tell me what it was doing, and how to avoid this long waiting time? After the 60-minute wait, the real training started: the GPU utilization rate was as high as 80%, and around 70GB of GPU memory was used on an H100. The training progress bar printed something like `x/y [69:08:34<130:13:54, 1.09it/s]`, so I knew it was training. I also have another dataset which is 10 times larger than the 100 million triplet sentence samples; I worry that I will have to wait days for training to start if I use that huge dataset. Could you tell me what it was doing during the 60-minute wait, and how to avoid this long waiting time? Thank you very much, and I look forward to your reply.
https://github.com/huggingface/sentence-transformers/issues/3133
open
[]
2024-12-13T09:10:32Z
2024-12-25T03:46:50Z
null
awmoe
huggingface/lighteval
447
[BUG] How to eval a large-scale model with 1 DP + 8 PP?
## Describe the bug I tried to eval a large-scale model using 1 DP + 8 PP with accelerate. I used a command like the following: ``` accelerate launch --multi_gpu --num_processes=1 run_evals_accelerate.py \ --model_args="pretrained=<path to model on the hub>" \ --model_parallel \ --tasks <task parameters> \ --output_dir output_dir ``` The error is ```ValueError: You need to use at least 2 processes to use --multi_gpu``` How can I solve this problem? ## Version info lighteval-0.3.0
https://github.com/huggingface/lighteval/issues/447
closed
[ "bug" ]
2024-12-13T03:56:36Z
2025-01-02T11:20:20Z
null
mxjmtxrm
huggingface/diffusers
10,196
How to finetune Flux-dev full params, 80G OOM ...
I am using the [train_dreambooth_flux](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_flux.py) script to fine-tune the `flux-dev` model with full parameters using DeepSpeed Stage 2. However, I am still encountering out-of-memory issues on an 80GB GPU. Are there any solutions available to address this problem? Thanks!
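A sketch of the usual next step when stage 2 is not enough: ZeRO stage 3 with CPU offload, which shards the parameters themselves instead of keeping a full copy per GPU, expressed here through accelerate's `DeepSpeedPlugin` (the same options can also be set via `accelerate config`). Combining this with the script's gradient-checkpointing option, if you are not using it already, should also help; whether the whole thing fits then depends mostly on CPU RAM.

```python
from accelerate import Accelerator
from accelerate.utils import DeepSpeedPlugin

ds_plugin = DeepSpeedPlugin(
    zero_stage=3,                    # shard params + grads + optimizer states
    offload_optimizer_device="cpu",  # push optimizer states to CPU RAM
    offload_param_device="cpu",      # and optionally the parameters too
    gradient_accumulation_steps=4,
)
accelerator = Accelerator(mixed_precision="bf16", deepspeed_plugin=ds_plugin)
```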
https://github.com/huggingface/diffusers/issues/10196
open
[ "training" ]
2024-12-12T09:24:18Z
2025-08-20T13:19:20Z
null
huangjun12
huggingface/chat-ui
1,627
Cookie โ€œhf-chatโ€ has been rejected because there is an existing โ€œsecureโ€ cookie.
## Bug description I use `ghcr.io/huggingface/chat-ui-db:latest` to host `ChatUI` in docker. If `PUBLIC_ORIGIN="http://localhost"` in `.env.local` and visit `ChatUI` through `http://localhost:3000`, it works well. Then I try to replace `localhost` by my domain name `qiangwulab.sjtu.edu.cn`. For the sake of testing, I modify `/etc/hosts` so that `qiangwulab.sjtu.edu.cn` is resolved to `127.0.0.1`. I visit `ChatUI` through `http://qiangwulab.sjtu.edu.cn:3000`. It does not work with a similar page as in https://github.com/huggingface/chat-ui/issues/1057. The firefox console shows ``` Cookie โ€œhf-chatโ€ has been rejected because a non-HTTPS cookie canโ€™t be set as โ€œsecureโ€. ``` https://github.com/huggingface/chat-ui/issues/1057 says that I should use `ALLOW_INSECURE_COOKIES=true`. It still does not work, and the firefox console shows ``` Cookie โ€œhf-chatโ€ has been rejected because there is an existing โ€œsecureโ€ cookie. ``` `ALLOW_INSECURE_COOKIES=true` seems to be Legacy. Thus, I also tried `COOKIE_SAMESITE="lax"` and `COOKIE_SECURE=false`. The effect is the same. The firefox console shows ``` Cookie โ€œhf-chatโ€ has been rejected because there is an existing โ€œsecureโ€ cookie. ``` Is it possible to use `http` for domain name other than `localhost`? ## Steps to reproduce <!-- Steps to reproduce the issue --> ## Screenshots <!-- If applicable, add screenshots to help explain your problem. --> ## Context ### Logs <!-- Add any logs that are relevant to your issue. Could be browser or server logs. Wrap in code blocks. --> ``` // logs here if relevant ``` ### Specs - **OS**: ubuntu 24.04 - **Browser**: firefox - **chat-ui commit**: ghcr.io/huggingface/chat-ui-db:latest ### Config <!-- Add the environment variables you've used to setup chat-ui, making sure to redact any secrets. --> ## Notes <!-- Anything else relevant to help the issue get solved -->
https://github.com/huggingface/chat-ui/issues/1627
open
[ "bug" ]
2024-12-12T07:04:26Z
2024-12-12T07:04:26Z
0
ljw20180420
huggingface/diffusers
10,190
How to use fluxfill to replace the background?
I want to use fluxfill to change the background, but I find that the prompt is almost ignored and the output image looks much like the original image. I have tested multiple guidance_scale values, but the resulting image is still strongly tied to the original image and only weakly related to the prompt.
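One common pitfall worth ruling out: diffusers inpainting masks are white where the image should be regenerated, so for background replacement the mask must cover the background, not the subject; a subject mask needs inverting. A minimal sketch, with placeholder file paths:

```python
import torch
from PIL import ImageOps
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("input.png")                             # placeholder
subject_mask = load_image("subject_mask.png").convert("L")  # white on the subject
background_mask = ImageOps.invert(subject_mask)             # white = area to repaint

result = pipe(
    prompt="a sunny beach with palm trees",
    image=image,
    mask_image=background_mask,
    guidance_scale=30.0,   # Fill-dev expects much higher guidance than regular Flux
    num_inference_steps=50,
).images[0]
```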
https://github.com/huggingface/diffusers/issues/10190
closed
[]
2024-12-11T10:48:27Z
2025-05-23T12:12:28Z
null
babyta
huggingface/sentence-transformers
3,132
How to train a model with DDP for TSDAE
Hello, I want to train a model using the TSDAE method. Is there any way to train with DDP (multi-GPU)? I have already read your sample code, but I'm not sure how to use DenoisingAutoEncoderDataset with SentenceTransformerTrainer. ([[v3] Training refactor - MultiGPU, loss logging, bf16, etc](https://github.com/UKPLab/sentence-transformers/pull/2449))
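A sketch of one way to bridge the two, assuming the v3-style trainer: pre-generate (damaged, original) pairs with the dataset class's static `delete` noise function and put them into a regular `datasets.Dataset`, which the DDP-capable trainer can shard across GPUs. Note this fixes the noise once, instead of re-sampling it every epoch like `DenoisingAutoEncoderDataset` does; the corpus and model name are placeholders.

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.datasets import DenoisingAutoEncoderDataset
from sentence_transformers.losses import DenoisingAutoEncoderLoss

sentences = ["first raw sentence", "second raw sentence"]  # your corpus

# column order matters: the loss expects (damaged, original)
pairs = [{"damaged": DenoisingAutoEncoderDataset.delete(s), "original": s} for s in sentences]
train_dataset = Dataset.from_list(pairs)

model = SentenceTransformer("bert-base-uncased")
loss = DenoisingAutoEncoderLoss(model, tie_encoder_decoder=True)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()  # run under `torchrun --nproc_per_node=N script.py` for DDP
```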
https://github.com/huggingface/sentence-transformers/issues/3132
closed
[]
2024-12-11T10:39:30Z
2024-12-11T14:04:32Z
null
OnAnd0n
huggingface/diffusers
10,180
Can't load multiple loras when using Flux Control LoRA
### Describe the bug I was trying out the FluxControlPipeline with the Control LoRA introduced in #9999 , but had issues loading in multiple loras. For example, if I load the depth lora first and then the 8-step lora, it errors on the 8-step lora, and if I load the 8-step lora first and then the depth lora, it errors when loading the depth lora. ### Reproduction ``` from diffusers import FluxControlPipeline from huggingface_hub import hf_hub_download import torch control_pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda") control_pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora") control_pipe.load_lora_weights(hf_hub_download("ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors")) ``` ### Logs ```shell AttributeError Traceback (most recent call last) Cell In[6], line 8 5 control_pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda") 7 control_pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora") ----> 8 control_pipe.load_lora_weights( 9 hf_hub_download( 10 "ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors" 11 ), 12 adapter_name="HyperFlux", 13 ) File ~/.venv/lib/python3.10/site-packages/diffusers/loaders/lora_pipeline.py:1856, in FluxLoraLoaderMixin.load_lora_weights(self, pretrained_model_name_or_path_or_dict, adapter_name, **kwargs) 1849 transformer_norm_state_dict = { 1850 k: state_dict.pop(k) 1851 for k in list(state_dict.keys()) 1852 if "transformer." in k and any(norm_key in k for norm_key in self._control_lora_supported_norm_keys) 1853 } 1855 transformer = getattr(self, self.transformer_name) if not hasattr(self, "transformer") else self.transformer -> 1856 has_param_with_expanded_shape = self._maybe_expand_transformer_param_shape_or_error_( 1857 transformer, transformer_lora_state_dict, transformer_norm_state_dict 1858 ) 1860 if has_param_with_expanded_shape: 1861 logger.info( 1862 "The LoRA weights contain parameters that have different shapes that expected by the transformer. " 1863 "As a result, the state_dict of the transformer has been expanded to match the LoRA parameter shapes. " 1864 "To get a comprehensive list of parameter names that were modified, enable debug logging." 1865 ) File ~/.venv/lib/python3.10/site-packages/diffusers/loaders/lora_pipeline.py:2316, in FluxLoraLoaderMixin._maybe_expand_transformer_param_shape_or_error_(cls, transformer, lora_state_dict, norm_state_dict, prefix) 2314 if isinstance(module, torch.nn.Linear): 2315 module_weight = module.weight.data -> 2316 module_bias = module.bias.data if hasattr(module, "bias") else None 2317 bias = module_bias is not None 2319 lora_A_weight_name = f"{name}.lora_A.weight" AttributeError: 'NoneType' object has no attribute 'data' ``` ### System Info - ๐Ÿค— Diffusers version: 0.32.0.dev0 - Platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35 - Running on Google Colab?: No - Python version: 3.10.12 - PyTorch version (GPU?): 2.5.1+cu124 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Huggingface_hub version: 0.26.5 - Transformers version: 4.47.0 - Accelerate version: 1.2.0 - PEFT version: 0.14.0 - Bitsandbytes version: not installed - Safetensors version: 0.4.5 - xFormers version: not installed - Accelerator: NVIDIA H100 80GB HBM3, 81559 MiB - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @a-r-r-o-w @sayakpaul
https://github.com/huggingface/diffusers/issues/10180
closed
[ "bug", "help wanted", "lora" ]
2024-12-10T21:40:24Z
2024-12-20T09:00:33Z
11
jonathanyin12
huggingface/transformers
35,186
How to convert my Mask2Former model (ResNet-50 backbone) to Hugging Face transformer
### System Info ```shell - `transformers` version: 4.34.0 - Platform: Linux-6.8.0-31-generic-x86_64-with-glibc2.17 - Python version: 3.8.20 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.5 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ``` ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I found the following script, but it only supports conversion for Mask2Former model (swin backbone) https://github.com/huggingface/transformers/blob/main/src/transformers/models/mask2former/convert_mask2former_original_pytorch_checkpoint_to_pytorch.py May I ask for some guidance on how to adjust the script so that it can support ResNet-50 architecture? ### Expected behavior ```shell Convert my Mask2Former model (ResNet-50 backbone) to Hugging Face transformer ``` ### Checklist - [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [X] I checked if a related official extension example runs on my machine.
https://github.com/huggingface/transformers/issues/35186
closed
[]
2024-12-10T19:17:22Z
2025-01-18T08:03:21Z
null
yujunwei04
huggingface/datasets
7,318
Introduce support for PDFs
### Feature request The idea (discussed in the Discord server with @lhoestq ) is to have a Pdf type like Image/Audio/Video. For example [Video](https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py) was recently added and contains how to decode a video file encoded in a dictionary like {"path": ..., "bytes": ...} as a VideoReader using decord. We want to do the same with pdf and get a [pypdfium2.PdfDocument](https://pypdfium2.readthedocs.io/en/stable/_modules/pypdfium2/_helpers/document.html#PdfDocument). ### Motivation In many cases PDFs contain very valuable information beyond text (e.g. images, figures). Support for PDFs would help create datasets where all the information is preserved. ### Your contribution I can start the implementation of the Pdf type :)
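A sketch of what the decode step could look like for a `{"path": ..., "bytes": ...}` cell, mirroring the Video feature's decord wrapper; `pypdfium2.PdfDocument` already accepts a path, raw bytes, or a file-like object. The file name is a placeholder.

```python
import io
import pypdfium2 as pdfium

def decode_pdf(value: dict) -> pdfium.PdfDocument:
    """Decode a {"path": ..., "bytes": ...} dict into a PdfDocument."""
    if value.get("bytes") is not None:
        return pdfium.PdfDocument(io.BytesIO(value["bytes"]))
    return pdfium.PdfDocument(value["path"])

doc = decode_pdf({"path": "paper.pdf", "bytes": None})  # placeholder file
page = doc[0]                    # pages are indexable
bitmap = page.render(scale=2.0)  # rasterize a page, e.g. for a vision model
pil_image = bitmap.to_pil()      # figures/images are preserved, not just text
```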
https://github.com/huggingface/datasets/issues/7318
open
[ "enhancement" ]
2024-12-10T16:59:48Z
2024-12-12T18:38:13Z
6
yabramuvdi
huggingface/diffusers
10,172
Raise an error when `len(gligen_images)` is not equal to `len(gligen_phrases)` in `StableDiffusionGLIGENTextImagePipeline`
To whom it may concern, I found that when using `StableDiffusionGLIGENTextImagePipeline`, no error is raised when `len(gligen_images)` is not equal to `len(gligen_phrases)`. When I dug into the source code, it seems these two inputs are zipped together in a for loop during preprocessing. I guess this causes the longer one to be clipped unintentionally. (If my understanding is wrong, feel free to correct me.) Is there any possibility of raising an error, or at least a warning? Thanks in advance. Source Code: https://github.com/huggingface/diffusers/blob/v0.31.0/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L689
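For reference, a sketch of the guard this would amount to (and that callers can apply on their side in the meantime); the names mirror the pipeline's arguments:

```python
def check_gligen_inputs(gligen_phrases, gligen_images):
    """zip() silently truncates to the shorter list, so fail loudly instead."""
    if gligen_phrases is not None and gligen_images is not None:
        if len(gligen_phrases) != len(gligen_images):
            raise ValueError(
                f"`gligen_phrases` ({len(gligen_phrases)}) and `gligen_images` "
                f"({len(gligen_images)}) must have the same length."
            )
```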
https://github.com/huggingface/diffusers/issues/10172
closed
[]
2024-12-10T14:25:48Z
2024-12-11T08:59:44Z
1
abcdefg133hi
huggingface/lerobot
568
Do I need two SO 100 arms to get started?
I have printed and assembled one arm, the follower version. Do I need two arms to record datasets and do testing?
https://github.com/huggingface/lerobot/issues/568
closed
[ "question", "robots" ]
2024-12-10T13:31:50Z
2025-10-08T08:45:58Z
null
rabhishek100
huggingface/transformers
35,152
How to load decoder.embed_tokens.weight separately from the shared weight?
### System Info - `transformers` version: 4.46.3 - Platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.17 - Python version: 3.8.20 - Huggingface_hub version: 0.26.2 - Safetensors version: 0.4.5 - Accelerate version: 1.0.1 - Accelerate config: not found - PyTorch version (GPU?): 2.4.1+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> - Using GPU in script?: <fill in> - GPU type: NVIDIA RTX A4000 ### Who can help? @ArthurZucker @muellerzr @SunMarc ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I use T5 1.1 on a seq2seq task with a source vocab size of 59744 and a target vocab size of only 32. To correctly use softmax to calculate each token's probability and score over the 32 candidates, I set `model.lm_head` to `torch.nn.Linear(config.d_model, target_vocab_size=32, bias=False)`, and everything looks good while the model is training. But after training, I load the safetensors as below: ```python checkpoint_path = "./resultstest/checkpoint-100" config = T5Config.from_pretrained("./onlychangelmhead/checkpoint-100/config.json") model = T5ForConditionalGeneration(config) model.lm_head = torch.nn.Linear(config.d_model, target_vocab_size, bias=False) state_dict = load_file(f"{checkpoint_path}/model.safetensors") model.load_state_dict(state_dict, strict=True) ``` And the issue comes up as: ``` Traceback (most recent call last): File "bs_based_on_massdic_failed.py", line 110, in <module> model.load_state_dict(state_dict, strict=True) File "/home/zhi/anaconda3/envs/peptide_completion/lib/python3.8/site-packages/torch/nn/modules/module.py", line 2215, in load_state_dict raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( RuntimeError: Error(s) in loading state_dict for T5ForConditionalGeneration: Missing key(s) in state_dict: "encoder.embed_tokens.weight". size mismatch for decoder.embed_tokens.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([59744, 768]). ``` When I print the safetensors' shapes, `lm_head.weight` looks fine with size `[32, 768]`, but there is no `decoder.embed_tokens` entry, or the way I load the safetensors cannot load the embed_tokens weight from the shared weight properly (I guess). So how can I fix this problem and correctly fit the model to my target vocab size of 32, which is not the same as the source vocab size? It would be much appreciated if you could reply. Best. ### Expected behavior Use T5 1.1 to fit a task with a target vocab size of 32, and load the safetensors properly.
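A sketch of a possible fix, under two assumptions suggested by the shapes in the error: the training run also replaced the decoder embedding with a 32-token table, and `encoder.embed_tokens.weight` is absent from the file only because safetensors drops weights that are tied to `shared`. Rebuild the same modules before loading, load with `strict=False`, then restore the encoder's tie:

```python
import torch
from safetensors.torch import load_file
from transformers import T5Config, T5ForConditionalGeneration

target_vocab_size = 32
checkpoint_path = "./resultstest/checkpoint-100"

config = T5Config.from_pretrained("./onlychangelmhead/checkpoint-100/config.json")
model = T5ForConditionalGeneration(config)

# reproduce the architecture changes made before training (assumption: the
# decoder embedding was shrunk to 32 tokens too, as the checkpoint shapes imply)
model.lm_head = torch.nn.Linear(config.d_model, target_vocab_size, bias=False)
model.decoder.set_input_embeddings(torch.nn.Embedding(target_vocab_size, config.d_model))

state_dict = load_file(f"{checkpoint_path}/model.safetensors")
missing, unexpected = model.load_state_dict(state_dict, strict=False)

# encoder.embed_tokens shares storage with `shared`, so re-tie it explicitly
model.encoder.set_input_embeddings(model.shared)
print("missing:", missing, "unexpected:", unexpected)  # should now be benign
```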
https://github.com/huggingface/transformers/issues/35152
closed
[ "bug" ]
2024-12-08T15:46:55Z
2025-01-22T08:03:52Z
null
SoSongzhi
huggingface/datasets
7,311
How to get the original dataset name with username?
### Feature request The issue is related to Ray Data (https://github.com/ray-project/ray/issues/49008), which requires checking whether a dataset is the original one right after `load_dataset`, with parquet files already available on the HF Hub. The solution used now is to get the dataset name, config and split, then `load_dataset` again and check the fingerprint. But it's unable to get the correct dataset name if it contains a username. So how can I get the dataset name with the username prefix, or is there another way to query whether a dataset is the original one with parquet available? @lhoestq ### Motivation https://github.com/ray-project/ray/issues/49008 ### Your contribution Would like to fix that.
https://github.com/huggingface/datasets/issues/7311
open
[ "enhancement" ]
2024-12-08T07:18:14Z
2025-01-09T10:48:02Z
null
npuichigo
huggingface/lerobot
555
Building my own policy fails with TypeError: '>' not supported between instances of 'int' and 'dict'
I improved the ACT policy in the lerobot framework and created a new policy named myact. I mainly did the following: 1. Created a my_act folder under the lerobot/common/policies/ path. 2. Created 'configuration_my_act.py' and 'modeling_my_act.py' in the my_act folder. 3. Created lerobot/configs/policy/myact.yaml, modified to ` name: myact `. But when I'm done and run the following command, I get an error: xvfb-run python lerobot/scripts/train.py \ hydra.run.dir=mypolicy/train/AlohaInsertion-v0\ policy=myact \ dataset_repo_id=lerobot/aloha_sim_insertion_human \ env=aloha \ env.task=AlohaInsertion-v0 INFO 2024-12-07 17:01:50 n/logger.py:106 Logs will be saved locally. INFO 2024-12-07 17:01:50 ts/train.py:337 make_dataset Fetching 56 files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 56/56 [00:00<00:00, 9842.48it/s] INFO 2024-12-07 17:01:56 ts/train.py:350 make_env INFO 2024-12-07 17:01:56 /__init__.py:88 MUJOCO_GL is not set, so an OpenGL backend will be chosen automatically. INFO 2024-12-07 17:01:57 /__init__.py:96 Successfully imported OpenGL backend: %s INFO 2024-12-07 17:01:57 /__init__.py:31 MuJoCo library version is: %s INFO 2024-12-07 17:02:03 ts/train.py:353 make_policy Error executing job with overrides: ['policy=act', 'dataset_repo_id=lerobot/aloha_sim_insertion_human', 'env=aloha', 'env.task=AlohaInsertion-v0'] Traceback (most recent call last): File "/root/autodl-tmp/lerobot/lerobot/scripts/train.py", line 677, in train_cli train( File "/root/autodl-tmp/lerobot/lerobot/scripts/train.py", line 354, in train policy = make_policy( File "/root/autodl-tmp/lerobot/lerobot/common/policies/factory.py", line 105, in make_policy policy = policy_cls(policy_cfg, dataset_stats) File "<string>", line 26, in __init__ File "/root/autodl-tmp/lerobot/lerobot/common/policies/act/configuration_act.py", line 158, in __post_init__ if self.n_action_steps > self.chunk_size: TypeError: '>' not supported between instances of 'int' and 'dict' Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace. The same error also appears when I run lerobot's built-in act policy. Do you know how to solve this? Thank you!
https://github.com/huggingface/lerobot/issues/555
closed
[ "enhancement", "question" ]
2024-12-07T09:10:35Z
2025-04-07T16:08:38Z
null
zhouzhq2021
huggingface/diffusers
10,144
Why is the mochi diffusers video output worse than the official mochi code?
### Describe the bug The quality of the generated video is worse than with the official code. ### Reproduction Run the code with the official prompt. ### Logs _No response_ ### System Info diffusers@main ### Who can help? @a-r-r-o-w @yiyixuxu
https://github.com/huggingface/diffusers/issues/10144
closed
[ "bug", "stale" ]
2024-12-07T05:53:57Z
2025-01-07T15:38:38Z
10
foreverpiano
huggingface/peft
2,264
Guidance Needed on Two-Stage Fine-Tuning with LoRA(SFT and DPO) for Model Adaptation
# I am planning to perform a two-stage fine-tuning process and need some guidance on how to proceed. ## First Stage 1. Load Base Model: I start by loading the base model, qwen1.5 32B. 2. Apply LoRA Fine-Tuning: I then apply LoRA fine-tuning to this base model and obtain a new model state. 3. Save Adapter Model: This fine-tuned model state is saved as adapter_model.safetensors, named qwen1.5_lora_sft. ## Second Stage 1. Load the Model from the First Stage: I load both qwen1.5 32B and qwen1.5_lora_sft. It's crucial that qwen1.5_lora_sft integrates correctly with the base model qwen1.5 32B. 2. Continue Fine-Tuning: On this model, which already includes the LoRA adapter, I continue to apply LoRA and DPO for further fine-tuning. 3. Save the New Adapter Model: After fine-tuning, I need to save the new adapter state, which includes adjustments from both the original LoRA and the new DPO. ## My questions are: 1. How do I load the model from the base model (qwen1.5 32B) together with the LoRA module qwen1.5_lora_sft? 2. How do I continue fine-tuning from the first-stage model, and save the LoRA model after DPO training so that I end up with the base model (qwen1.5 32B) plus only one qwen1.5_lora_sft_dpo module (adapter_model_sft_dpo.safetensors)? ## What I have now 1. The base model, qwen1.5 32B (model path). 2. The qwen1.5_lora_sft module path: adapter_model.safetensors. ## What I need 1. The qwen1.5_lora_sft_dpo module: adapter_model_sft_dpo.safetensors. ## In short Train base_model to get LoRA_weights_1; base_model_1 = merge(base_model, LoRA_weights_1); train base_model_1 to get LoRA_weights_2; base_model_2 = merge(base_model_1, LoRA_weights_2). How do I split base_model_2 into base_model and a single LoRA_weights_1_2? Thanks!
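A sketch of one common pattern for questions 1 and 2, using PEFT's `PeftModel.from_pretrained`, `merge_and_unload`, and `get_peft_model`; paths are placeholders:

```python
import torch
from peft import LoraConfig, PeftModel, get_peft_model
from transformers import AutoModelForCausalLM

# 1) base + SFT adapter
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-32B", torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, "path/to/qwen1.5_lora_sft")  # dir with adapter_model.safetensors

# fold the SFT deltas into the base weights, producing base_model_1
model = model.merge_and_unload()

# 2) attach a fresh adapter on top for DPO training
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM"))
# ... run DPO training here ...
model.save_pretrained("qwen1.5_lora_sft_dpo")  # a single adapter directory
```

One caveat on the last question: the saved adapter applies on top of the merged (base + SFT) weights, and in general base_model_2 cannot be split exactly into the raw base plus one LoRA, since the sum of two rank-r updates can have rank up to 2r. If both adapters were trained against the same base, `add_weighted_adapter` can combine them approximately.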
https://github.com/huggingface/peft/issues/2264
closed
[]
2024-12-06T13:35:20Z
2025-01-06T10:50:09Z
5
none0663
huggingface/transformers
35,118
How to load local transformers?
transformers==4.47.0.dev0 I want to use my local transformers checkout. I tried `sys.path.insert(0, 'xxx/transformers/src')` and `PYTHONPATH=xxx/transformers/src`, but they don't work. Please tell me why.
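For what it's worth, a minimal sketch of the usual requirements: the path must point at `src`, must be inserted before the first `import transformers` anywhere in the process, and is easiest to verify via `__file__` (the path below is a placeholder):

```python
import sys

sys.path.insert(0, "/abs/path/to/transformers/src")  # must run before the import below

import transformers  # noqa: E402

# should print a path inside your checkout, not site-packages;
# if it prints site-packages, something imported transformers earlier
print(transformers.__file__)
```

Alternatively, an editable install (`pip install -e` on the checkout) makes the local copy the installed package and avoids path juggling entirely.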
https://github.com/huggingface/transformers/issues/35118
closed
[]
2024-12-06T10:07:57Z
2024-12-12T04:05:08Z
null
yiyexy
huggingface/lerobot
552
Rounding to int32 makes robot less precise. Do we have a solid reason for doing this?
### System Info ```Shell Latest LeRobot. MacOS ``` ### Information - [X] One of the scripts in the examples/ folder of LeRobot - [ ] My own task or dataset (give details below) ### Reproduction 1) Run teleoperation 2) Measure precision with and without rounding, at lerobot/common/robot_devices/robots/manipulator.py ![image](https://github.com/user-attachments/assets/c7706edd-9284-4736-9600-6c4202af11d2) ### Expected behavior Smooth movement
https://github.com/huggingface/lerobot/issues/552
closed
[ "bug", "question", "stale" ]
2024-12-05T16:31:49Z
2025-10-08T13:08:50Z
null
1g0rrr
huggingface/tokenizers
1,696
How to determine the splicing logic in post_processor based on the sentence to be tokenized?
For example, ```python def post_processor(self, token_ids_0, token_ids_1=None): if "cls" in token_ids_0: return processors.TemplateProcessing( single=f"{cls} $A {sep}", pair=f"{cls} $A {sep} $B {cls}", special_tokens=[ (cls, cls_token_id), (sep, sep_token_id), ], ) else: return processors.TemplateProcessing( single=f"{sep} $A {cls}", pair=f"{sep} $A {cls} $B {sep}", special_tokens=[ (cls, cls_token_id), (sep, sep_token_id), ], ) ``` Thx~
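As far as I know `TemplateProcessing` itself is static, but `tokenizer.post_processor` is an ordinary writable attribute, so one option is to pick and assign a template per call based on the input. A sketch follows; the routing condition and model name are placeholders, and note this approach is neither thread-safe nor batch-friendly, since the processor is shared mutable state.

```python
from tokenizers import Tokenizer, processors

tokenizer = Tokenizer.from_pretrained("bert-base-uncased")

cls, sep = "[CLS]", "[SEP]"
cls_id, sep_id = tokenizer.token_to_id(cls), tokenizer.token_to_id(sep)

def encode_with_dynamic_template(text: str):
    if "cls" in text:  # placeholder routing condition
        single, pair = f"{cls} $A {sep}", f"{cls} $A {sep} $B:1 {sep}:1"
    else:
        single, pair = f"{sep} $A {cls}", f"{sep} $A {cls} $B:1 {cls}:1"
    # swap the post-processor before encoding this particular input
    tokenizer.post_processor = processors.TemplateProcessing(
        single=single,
        pair=pair,
        special_tokens=[(cls, cls_id), (sep, sep_id)],
    )
    return tokenizer.encode(text)

print(encode_with_dynamic_template("a cls-style sentence").tokens)
```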
https://github.com/huggingface/tokenizers/issues/1696
open
[]
2024-12-05T14:05:13Z
2024-12-05T14:05:13Z
null
gongel
huggingface/peft
2,262
Could you provide example code for AdaLoRA finetuning decoder-only model?
### Feature request The current [example of AdaLoRA](https://github.com/huggingface/peft/blob/b2922565c4c4445706a87cf7b988c828b451fe61/examples/conditional_generation/peft_adalora_seq2seq.py) uses **facebook/bart-base**. Since AdaLoRA requires hand-crafted calculations on the loss, would it be possible to give me some hints on how this can be done for a decoder-only LM (e.g., Llama-Instruct)? Specifically, I would like to mask out the loss calculation on the instruction part or system prompt, focusing only on the assistant response. ### Motivation AdaLoRA requires hand-crafted calculations on the loss, which becomes complex when you want to mask out some system/instruction tokens. ### Your contribution N.A.
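In the meantime, a rough sketch of what I believe the decoder-only analogue looks like: the label masking (prompt positions set to -100) handles focusing the loss on the assistant response, the orthogonal-regularization term is, as far as I can tell, added inside AdaLoraModel's forward when `labels` are supplied, and the main AdaLoRA-specific step is calling `update_and_allocate` after each optimizer step. The model id and hyperparameters are placeholders.

```python
import torch
from peft import AdaLoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-0.5B-Instruct"  # placeholder decoder-only LM
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = get_peft_model(
    AutoModelForCausalLM.from_pretrained(model_id),
    AdaLoraConfig(task_type="CAUSAL_LM", total_step=1000),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

prompt, answer = "User: say hi\nAssistant:", " hi"
# note: token boundaries from separate tokenizer calls are approximate
prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
input_ids = tokenizer(prompt + answer, return_tensors="pt").input_ids
labels = input_ids.clone()
labels[:, :prompt_len] = -100  # mask the instruction/system part

for step in range(3):
    loss = model(input_ids=input_ids, labels=labels).loss
    loss.backward()
    optimizer.step()
    model.base_model.update_and_allocate(step)  # AdaLoRA's rank re-allocation
    optimizer.zero_grad()
```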
https://github.com/huggingface/peft/issues/2262
closed
[]
2024-12-05T12:03:31Z
2025-01-18T15:03:29Z
4
SpeeeedLee
huggingface/diffusers
10,129
Does StableDiffusion3 have an image2image pipeline with ControlNet?
I want to use `ControlNet` with `StableDiffusion3`, providing a prompt, an original image, and a control image as inputs. However, I found that the `StableDiffusion3ControlNetPipeline` only supports prompts and control images as inputs. The `StableDiffusionControlNetImg2ImgPipeline` allows for providing a prompt, an original image, and a control image simultaneously, but it is not compatible with the `StableDiffusion3` model. Is there a `StableDiffusion3ControlNetImg2ImgPipeline` available?
https://github.com/huggingface/diffusers/issues/10129
closed
[ "New pipeline/model", "contributions-welcome" ]
2024-12-05T09:40:03Z
2025-01-02T20:02:33Z
1
ZHJ19970917
huggingface/diffusers
10,128
Is there any plan to support fastercache?
It would be great to support FasterCache: https://github.com/Vchitect/FasterCache
https://github.com/huggingface/diffusers/issues/10128
closed
[ "wip", "performance" ]
2024-12-05T09:11:19Z
2025-03-21T04:05:06Z
4
songh11
huggingface/datasets
7,306
Creating new dataset from list loses information. (Audio Information Lost - either Datatype or Values).
### Describe the bug When creating a dataset from a list of datapoints (taken from another dataset), information about the individual items is lost: either the datatype is lost or the values are lost. See the examples below. -> What is the best way to create a dataset from a list of datapoints? --- e.g.: **When running this code:** ```python from datasets import load_dataset, Dataset commonvoice_data = load_dataset("mozilla-foundation/common_voice_17_0", "it", split="test", streaming=True) datapoint = next(iter(commonvoice_data)) out = [datapoint] new_data = Dataset.from_list(out)  # this loses datatype information new_data2 = Dataset.from_list(out, features=commonvoice_data.features)  # this loses value information ``` **We get the following**: --- 1. `datapoint`: (the original datapoint) ``` 'audio': {'path': 'it_test_0/common_voice_it_23606167.mp3', 'array': array([0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ..., 2.21619011e-05, 2.72628222e-05, 0.00000000e+00]), 'sampling_rate': 48000} ``` Original Dataset Features: ``` >>> commonvoice_data.features 'audio': Audio(sampling_rate=48000, mono=True, decode=True, id=None) ``` - Here we see that the column "audio" has the proper values (both `path` and `array`) and the correct datatype (Audio). ---- 2. new_data[0]: ``` # Cannot be printed (as it prints the entire array). ``` New Dataset 1 Features: ``` >>> new_data.features 'audio': {'array': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None), 'path': Value(dtype='string', id=None), 'sampling_rate': Value(dtype='int64', id=None)} ``` - Here we see that the column "audio" has the correct values, but is no longer the Audio datatype. --- 3. new_data2[0]: ``` 'audio': {'path': None, 'array': array([0., 0., 0., ..., 0., 0., 0.]), 'sampling_rate': 48000}, ``` New Dataset 2 Features: ``` >>> new_data2.features 'audio': Audio(sampling_rate=48000, mono=True, decode=True, id=None), ``` - Here we see that the column "audio" has the correct datatype, but all the array & path values were lost! ### Steps to reproduce the bug ## Run: ```python from datasets import load_dataset, Dataset commonvoice_data = load_dataset("mozilla-foundation/common_voice_17_0", "it", split="test", streaming=True) datapoint = next(iter(commonvoice_data)) out = [datapoint] new_data = Dataset.from_list(out)  # this loses datatype information new_data2 = Dataset.from_list(out, features=commonvoice_data.features)  # this loses value information ``` ### Expected behavior ## Expected: ```datapoint == new_data[0]``` AND ```datapoint == new_data2[0]``` ### Environment info - `datasets` version: 3.1.0 - Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.26.2 - PyArrow version: 15.0.2 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
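One workaround that may be worth trying (not verified against this exact version, so treat it as a sketch): build the dataset without `features` so the decoded values survive, then cast the column back to `Audio`, which should re-encode the `{"array", "sampling_rate"}` structs. If the cast refuses, re-encoding via `.map` would be the fallback.

```python
from datasets import Audio, Dataset

# continuing from the snippet above: `out` is the list with one decoded datapoint
new_data3 = Dataset.from_list(out)  # keeps values but loses the Audio type
new_data3 = new_data3.cast_column("audio", Audio(sampling_rate=48000))

print(new_data3.features["audio"])               # Audio(...) again
print(new_data3[0]["audio"]["sampling_rate"])    # values should be preserved
```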
https://github.com/huggingface/datasets/issues/7306
open
[]
2024-12-05T09:07:53Z
2024-12-05T09:09:38Z
0
ai-nikolai
huggingface/lerobot
549
Low accuracy for act policy on pushT env
The highest success rate I get is 44%, with n_decoder_layers=7. Are there any other tricks for this?
https://github.com/huggingface/lerobot/issues/549
closed
[ "question", "policies", "stale" ]
2024-12-05T06:18:06Z
2025-10-19T02:32:37Z
null
KongCDY
huggingface/Google-Cloud-Containers
128
Can we use Multi-LoRA on CPU?
Hi, I'm currently following this doc: https://huggingface.co/docs/google-cloud/en/examples/gke-tgi-multi-lora-deployment After hitting the bug "Can't scale up due to exceeded quota" and doing some research, I suspect that my free trial ($300) account is not able to increase the GPU quota (even though I activated my account so it is no longer a trial, I would have to contact sales). Is there any way I can run this with a CPU instead? Thank you
https://github.com/huggingface/Google-Cloud-Containers/issues/128
open
[ "question" ]
2024-12-05T05:42:51Z
2024-12-12T10:06:43Z
null
AndrewNgo-ini
huggingface/peft
2,260
Is it possible to support the transformer engine when using Lora in Megatron?
### Feature request I am currently using the Megatron framework and want to use LoRA for training. I saw that the Megatron format is supported at https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/tp_layer.py, where RowParallelLinear and ColumnParallelLinear are adapted. But if I use the Transformer Engine, the corresponding TELayerNormColumnParallelLinear and TERowParallelLinear are not adapted. ### Motivation This would better support the Megatron framework using LoRA. ### Your contribution I don't have a PR.
https://github.com/huggingface/peft/issues/2260
closed
[]
2024-12-05T03:24:15Z
2025-01-12T15:03:29Z
3
liulong11
huggingface/diffusers
10,120
memory consumption of dreambooth+SD3
Hi, I am running DreamBooth SD3 with a single A100 GPU. I reduced the resolution to 256, but it still needs more memory than a single A100 has. Is this huge memory consumption normal? ``` !accelerate launch train_dreambooth_sd3.py \ --pretrained_model_name_or_path="stabilityai/stable-diffusion-3-medium-diffusers" \ --instance_data_dir="erhu" \ --output_dir="trained-sd3" \ --mixed_precision="fp16" \ --instance_prompt="a photo of erhu" \ --resolution=256 \ --train_batch_size=1 \ --gradient_accumulation_steps=4 \ --learning_rate=1e-4 \ --report_to="wandb" \ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ --max_train_steps=300 \ --validation_prompt="A photo of erhu on the grass" \ --validation_epochs=25 \ --use_8bit_adam \ --seed="0" \ --push_to_hub ``` `torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 36.00 MiB. GPU 0 has a total capacity of 39.56 GiB of which 2.81 MiB is free. Process 16368 has 39.55 GiB memory in use. Of the allocated memory 38.05 GiB is allocated by PyTorch, and 1021.72 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management ` Thanks
https://github.com/huggingface/diffusers/issues/10120
closed
[ "bug", "stale", "training" ]
2024-12-04T19:39:04Z
2025-01-27T01:30:18Z
5
KolvacS-W
huggingface/diffusers
10,112
Detail-Daemon diffusers
**Describe the solution you'd like.** Detail-Daemon: https://github.com/Jonseed/ComfyUI-Detail-Daemon How to implement Detail-Daemon in diffusers, as seen in https://github.com/Jonseed/ComfyUI-Detail-Daemon. Will there be a better official component in the future?
https://github.com/huggingface/diffusers/issues/10112
open
[ "wip", "consider-for-modular-diffusers" ]
2024-12-04T09:14:39Z
2025-01-03T18:01:24Z
10
NicholasCao
huggingface/lerobot
547
How to make a custom LeRobotDataset with v2?
Hi folks, thanks for the amazing open source work! I am trying to make a custom dataset to use with the LeRobotDataset format. The readme says to copy the example scripts here which I've done, and I have a working format script of my own. https://github.com/huggingface/lerobot/blob/8e7d6970eaf5a64b8af6ec45586d201b8ca9ef16/README.md?plain=1#L323 but when it comes time to create the dataset, the `push_dataset_to_hub.py` uses `LeRobotDataset.from_preloaded` which is no longer supported in [dataset V2](https://github.com/huggingface/lerobot/pull/461) https://github.com/huggingface/lerobot/blob/8e7d6970eaf5a64b8af6ec45586d201b8ca9ef16/lerobot/scripts/push_dataset_to_hub.py#L216 So I'm just wondering what the proper way of loading your own custom local dataset is? Thank you in advance for your help!
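For reference, here is the rough shape of the v2 creation flow as I understand it (the exact signatures have been shifting between releases, so treat every name below as approximate and check it against your checkout): `LeRobotDataset.create` replaces `from_preloaded`, and episodes are built frame by frame.

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

features = {  # placeholder feature spec matching your data
    "observation.state": {"dtype": "float32", "shape": (6,), "names": None},
    "action": {"dtype": "float32", "shape": (6,), "names": None},
}
dataset = LeRobotDataset.create(repo_id="user/my_dataset", fps=30, features=features)

for episode in my_episodes:      # hypothetical iterable produced by your format script
    for frame in episode:        # frame: dict whose keys match `features`
        dataset.add_frame(frame)
    dataset.save_episode(task="pick the cube")

dataset.consolidate()            # finalize metadata / encode videos (version-dependent)
```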
https://github.com/huggingface/lerobot/issues/547
closed
[ "question", "dataset", "stale" ]
2024-12-04T08:00:19Z
2025-10-08T08:28:34Z
null
alik-git
huggingface/lerobot
545
Poor success rate in complex scenarios
Hi, I used a Moss robot to train and play with the ACT policy. With a single lego piece, it can finish the grabbing task at a high success rate after I recorded 50+ episodes with different pose & location variants, but generalization to multiple pieces at random locations is not promising. When I started to add complexity (for example, 6 pieces with different colors, like the picture below) and placed the lego pieces somewhat randomly, I recorded one continuous episode until all the pieces were grabbed (rather than 1 piece per episode); furthermore, episodes were recorded in a fixed order. ![IMG_4681 HEIC](https://github.com/user-attachments/assets/dbe58ebc-0690-4563-ab1d-cf0660305611) Here is what I found: 1. The trained policy does not work if the gripping sequence is randomized; in other words, it has to keep a fixed spatial order, e.g. from upper left to lower right. 2. The trained policy does not work if the [location, color, pose] combination was not seen in the training dataset, especially location combos. 3. At first I suspected that the fixed iPhone and Mac cameras could not give enough depth perception, so I bought a wide-angle USB camera and mounted it on the gripper; as a result, the success rate didn't get higher. ![20241204141608](https://github.com/user-attachments/assets/346a6c22-7516-4854-ac1f-5d7029af5336) 4. Enlarging the dataset to 120+ episodes didn't bring an obvious change. I was wondering how to improve this task: is the method I used to record data wrong, or is the generalization of ACT limited? Looking forward to hearing answers or experience.
https://github.com/huggingface/lerobot/issues/545
closed
[ "question", "policies", "stale" ]
2024-12-04T06:20:31Z
2025-10-08T08:28:45Z
null
mydhui
huggingface/frp
14
where is the code of frpc-gradio-0.3
https://github.com/huggingface/frp/issues/14
closed
[]
2024-12-04T05:37:34Z
2025-03-11T00:55:39Z
null
BoyuanJiang
huggingface/peft
2,255
Is this the right way to check whether a model has been trained as expected?
I'd like to check whether my PEFT model has been trained as intended, i.e. whether the PEFT weights have changed, but not the base weights. The following code works, but I'm sure a PEFT specialist will suggest a better way. ```python import tempfile import torch from datasets import load_dataset from peft import LoraConfig, get_peft_model from transformers import AutoModelForCausalLM from trl import SFTConfig, SFTTrainer # Get the base model model_id = "trl-internal-testing/tiny-Qwen2ForCausalLM-2.5" model = AutoModelForCausalLM.from_pretrained(model_id) # Get the base model parameter names base_param_names = [f"base_model.model.{n}" for n, _ in model.named_parameters()] # Turn the model into a peft model lora_config = LoraConfig() model = get_peft_model(model, lora_config) # Get the dataset dataset = load_dataset("trl-internal-testing/zen", "standard_language_modeling", split="train") with tempfile.TemporaryDirectory() as tmp_dir: # Initialize the trainer training_args = SFTConfig(output_dir=tmp_dir, report_to="none") trainer = SFTTrainer(args=training_args, model=model, train_dataset=dataset) # Save the initial parameters to compare them later previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() # Check the peft params have changed and the base model params have not changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) if n in base_param_names: # We expect the base model parameters to be the same if not torch.allclose(param, new_param): print(f"Parameter {n} has changed, but it should not have changed") elif "base_layer" not in n: # We expect the peft parameters to be different (except for the base layer) if torch.allclose(param, new_param): print(f"Parameter {n} has not changed, but it should have changed") ```
https://github.com/huggingface/peft/issues/2255
closed
[]
2024-12-03T17:36:00Z
2024-12-04T12:01:37Z
5
qgallouedec
huggingface/peft
2,251
a guide to add a new fine-tuning method in the doc
### Feature request Hello, I am a researcher in the fine-tuning area. Could you publish a guide in the docs on how to add a new fine-tuning method? I think researchers like me would be glad to experiment with their methods based on this repo. ### Motivation Researchers like me would be glad to experiment with their methods based on this repo, but don't know how to add them. ### Your contribution Yes, but after verifying the feasibility of my method.
https://github.com/huggingface/peft/issues/2251
closed
[]
2024-12-03T13:46:02Z
2024-12-04T02:12:35Z
2
YF-T
huggingface/diffusers
10,076
Do we have any script to convert from HF format to the original format?
**Is your feature request related to a problem? Please describe.** With scripts/convert_cogvideox_to_diffusers.py we can convert CogVideoX -> Diffusers. Do we have the opposite script? cc @yiyixuxu
https://github.com/huggingface/diffusers/issues/10076
open
[ "good first issue", "contributions-welcome", "conversion script" ]
2024-12-02T07:49:34Z
2024-12-02T18:22:50Z
1
foreverpiano
huggingface/trl
2,424
How to calculate the loss of multi-turn dialogue training data?
In a single data entry containing multiple turns of dialogue, abbreviated as Q1 + A1 + Q2 + A2, does this project calculate the loss only for the last answer of the multi-turn dialogue, or for each answer?
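By default the SFT loss is computed over every token, questions and answers alike. To restrict it to the answers, and to all of them rather than only the last one, TRL's completion-only collator masks everything outside the assistant turns with -100. A sketch follows; the template strings must match your chat format, and the model name is a placeholder.

```python
from transformers import AutoTokenizer
from trl import DataCollatorForCompletionOnlyLM

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")  # placeholder

collator = DataCollatorForCompletionOnlyLM(
    response_template="<|im_start|>assistant",   # marks where each answer (A1, A2, ...) begins
    instruction_template="<|im_start|>user",     # marks where each question begins
    tokenizer=tokenizer,
)
# pass `data_collator=collator` to SFTTrainer: with both templates set, every
# assistant span in Q1+A1+Q2+A2 contributes to the loss, and the Q spans do not
```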
https://github.com/huggingface/trl/issues/2424
closed
[ "โ“ question", "๐Ÿ‹ SFT" ]
2024-12-02T07:47:17Z
2025-01-20T02:47:34Z
null
NUMB1234
huggingface/diffusers
10,074
how to install diffusers 0.32.0
FluxFillPipeline needs diffusers>=0.32.0, but I don't know how to install it. Can anyone help me? Thanks in advance.
https://github.com/huggingface/diffusers/issues/10074
closed
[]
2024-12-02T07:05:24Z
2024-12-02T19:11:34Z
null
babyta
huggingface/diffusers
10,070
xFormers info: memory_efficient_attention unavailable
### Describe the bug I just started learning Stable Diffusion on Win11. After I installed xformers, I found that several memory_efficient_attention backends are unavailable. Is it possible to make them available? Thanks for any help. ### Reproduction xFormers 0.0.28.post3 memory_efficient_attention.ckF: unavailable memory_efficient_attention.ckB: unavailable memory_efficient_attention.ck_decoderF: unavailable memory_efficient_attention.ck_splitKF: unavailable memory_efficient_attention.cutlassF-pt: available memory_efficient_attention.cutlassB-pt: available memory_efficient_attention.fa2F@v2.6.3-24-gbdf733b: available memory_efficient_attention.fa2B@v2.6.3-24-gbdf733b: available memory_efficient_attention.fa3F@0.0.0: unavailable memory_efficient_attention.fa3B@0.0.0: unavailable memory_efficient_attention.triton_splitKF: available indexing.scaled_index_addF: available indexing.scaled_index_addB: available indexing.index_select: available sequence_parallel_fused.write_values: available sequence_parallel_fused.wait_values: available sequence_parallel_fused.cuda_memset_32b_async: available sp24.sparse24_sparsify_both_ways: available sp24.sparse24_apply: available sp24.sparse24_apply_dense_output: available sp24._sparse24_gemm: available sp24._cslt_sparse_mm_search@0.0.0: available sp24._cslt_sparse_mm@0.0.0: available swiglu.dual_gemm_silu: available swiglu.gemm_fused_operand_sum: available swiglu.fused.p.cpp: available is_triton_available: True pytorch.version: 2.5.1+cu124 pytorch.cuda: available gpu.compute_capability: 8.9 gpu.name: NVIDIA GeForce RTX 4070 dcgm_profiler: unavailable build.info: available build.cuda_version: 1204 build.hip_version: None build.python_version: 3.10.11 build.torch_version: 2.5.1+cu124 build.env.TORCH_CUDA_ARCH_LIST: 6.0+PTX 7.0 7.5 8.0+PTX 9.0a build.env.PYTORCH_ROCM_ARCH: None build.env.XFORMERS_BUILD_TYPE: Release build.env.XFORMERS_ENABLE_DEBUG_ASSERTIONS: None build.env.NVCC_FLAGS: -allow-unsupported-compiler build.env.XFORMERS_PACKAGE_FROM: wheel-v0.0.28.post3 build.nvcc_version: 12.4.131 source.privacy: open source ### Logs _No response_ ### System Info Win11, Python 3.10.6, pytorch 2.5.1+cu124, xFormers 0.0.28.post3, triton==3.0.0 ### Who can help? _No response_
https://github.com/huggingface/diffusers/issues/10070
open
[ "bug", "stale" ]
2024-12-01T16:14:21Z
2025-01-01T15:03:09Z
1
Stareshine
huggingface/Google-Cloud-Containers
126
Deployment error on GKE
Hello! I deployed Gemma 2 2b it on GKE in autopilot mode following these instructions: https://cloud.google.com/kubernetes-engine/docs/tutorials/serve-gemma-gpu-tgi#autopilot. I get this error: "Node scale up in zones us-central1-c associated with this pod failed: GCE quota exceeded. Pod is at risk of not being scheduled." I checked the quota and there is enough GPU, yet the pod stays in the pending state.
https://github.com/huggingface/Google-Cloud-Containers/issues/126
closed
[ "question" ]
2024-12-01T14:09:29Z
2025-01-07T08:39:07Z
null
piksida
huggingface/lerobot
538
Questions about loading a local dataset, making my own policy, and using headless eval mode
Hello, I'm trying to download a dataset from Hugging Face to local storage and then load it from the local copy. For example, 'aloha_sim_insertion_scripted_image' consists of many 'episode_000000.parquet' files. How can I load this format with the LeRobotDataset() class, or in some other way? Second, I want to create my own policy. After reading through the code, I think I need to create my policy files by mimicking the following: + lerobot/common/policies/act/configuration_act.py + lerobot/common/policies/act/modeling_act.py However, I am having some difficulties making my own policy. I want to create a new policy that implements my idea of introducing contrastive learning: a policy that lets the agent learn from correct samples and stay away from wrong samples. I would like to ask what should be modified to implement this idea. I really need examples of this, and it would be very helpful if you could give me detailed advice! Finally, my server is headless, which means that when evaluating a policy there is no way to open MuJoCo to view the evaluation. Can the code framework support headless mode and save the evaluation video? As a new researcher in this field, it would be great to discuss the above issues with you further. Thank you very much! Best wishes : )
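On the first and third questions, a sketch of what I'd try (the local path is a placeholder, and the `root` behavior can differ between lerobot versions): `LeRobotDataset` takes a `root` argument for local copies, and MuJoCo can render headlessly if `MUJOCO_GL` is set before anything imports it, after which the eval script can write rollout videos to its output directory.

```python
import os

os.environ["MUJOCO_GL"] = "egl"  # or "osmesa"; must be set before mujoco is imported

from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# the episode_XXXXXX.parquet files are the dataset's data files; load them
# through the dataset class instead of reading the parquet directly
ds = LeRobotDataset(
    "lerobot/aloha_sim_insertion_scripted_image",
    root="/path/to/local/download",  # placeholder local directory
)
print(ds.num_episodes, ds[0].keys())
```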
https://github.com/huggingface/lerobot/issues/538
closed
[ "question", "stale" ]
2024-12-01T03:32:06Z
2025-10-19T02:32:41Z
null
zhouzhq2021
huggingface/lerobot
536
How auto calibration works
Are there any details about run_arm_auto_calibration_moss and run_arm_auto_calibration_so100 that we can refer to? I read the code but couldn't fully understand it. When should we use auto calibration instead of the manual calibration that calculates the homing_offset of the rotated (90-degree) pose? I'd also like to check whether my understanding is correct: for manual calibration, the homing offset includes 2 terms, 1) the true offset caused by motor installation, and 2) human bias from manually rotating the motor. If that's correct, is there a way to also remove the second term? When using multiple robots for data collection, I guess removing term (2) is required.
https://github.com/huggingface/lerobot/issues/536
closed
[ "question", "robots", "stale" ]
2024-11-30T18:04:23Z
2025-10-08T08:37:24Z
null
wzds2015
huggingface/accelerate
3,269
🤨 Question: What if the model has float16 dtype and `mixed_precision` is set to fp16 as well?
As the title says: **🤨 Question: what if the model has float16 dtype and `mixed_precision` is set to fp16 as well?** - Will it compute in plain float16, as if automatic mixed precision didn't exist? - Or will some modules that overflow easily (e.g. BatchNorm, LayerNorm) be upcast to float32, as AMP's fp32->fp16 handling does? Could someone please help me with this question? ❤
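Since autocast decides per operator rather than per model, one way to settle this is an empirical probe: run an fp16 module under autocast and inspect the output dtypes. Ops on autocast's fp32 list get their inputs upcast even when the parameters are fp16, matmuls stay fp16, and the stored parameters themselves are never changed. A small sketch:

```python
import torch

lin = torch.nn.Linear(8, 8).cuda().half()             # fp16 parameters
x = torch.randn(4, 8, device="cuda", dtype=torch.float16)

with torch.autocast(device_type="cuda", dtype=torch.float16):
    y = lin(x)                                        # matmul: autocast-eligible, fp16
    p = torch.nn.functional.softmax(y, dim=-1)        # check whether this op was upcast

print(y.dtype, p.dtype)  # reveals which ops ran in fp32 under autocast
```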
https://github.com/huggingface/accelerate/issues/3269
closed
[]
2024-11-29T17:55:58Z
2025-01-07T15:33:26Z
null
townwish4git
huggingface/chat-macOS
36
Document how to download and install a local model
First, thanks very much for this work! I'm a bit of a newbie here. The 'Get' button takes you to the web page for the example; however, chat-macOS instructions are not part of the options. Also, where do you place the downloaded model for the "add +" option, and where do the models go? Is there a way to configure where models are stored? Thanks!
https://github.com/huggingface/chat-macOS/issues/36
open
[]
2024-11-29T17:18:43Z
2024-11-29T17:18:43Z
null
deepcoder
huggingface/diffusers
10,055
Training script for a Controlnet based on SD3 does not work
### Describe the bug Hi @sayakpaul and all others :) The training script for a Control-net based on Stable Diffusion 3 seems to not work. **RuntimeError: Given groups=1, weight of size [1536, 17, 2, 2], expected input[4, 16, 64, 64] to have 17 channels, but got 16 channels instead** I tried to follow the documentation on how to train a control net based on SD3. I used a custom dataset that I also used to train a control net based on SD1.5. Once i run the script. I receive a tensors channel do not match error. ### Reproduction !accelerate launch train_controlnet_sd3.py \ --pretrained_model_name_or_path="stabilityai/stable-diffusion-3-medium-diffusers" \ --output_dir="/home/xxx/models/v1/cn-stablediff-v3_out" \ --dataset_name="StudentYannik/v1-prepared-cn" \ --resolution=512 \ --learning_rate=1e-5 \ --max_train_steps=10000 \ --train_batch_size=4 \ --num_train_epochs=10 \ --gradient_accumulation_steps=4 ### Logs ```shell 11/29/2024 14:35:32 - INFO - __main__ - Distributed environment: NO Num processes: 1 Process index: 0 Local process index: 0 Device: cuda Mixed precision type: no You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors. You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors. You are using a model of type t5 to instantiate a model of type . This is not supported for all configurations of models and can yield errors. {'base_image_seq_len', 'base_shift', 'max_image_seq_len', 'use_beta_sigmas', 'invert_sigmas', 'use_karras_sigmas', 'use_dynamic_shifting', 'max_shift', 'use_exponential_sigmas'} was not found in config. Values will be initialized to default values. Downloading shards: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2/2 [00:00<00:00, 12539.03it/s] Loading checkpoint shards: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2/2 [00:09<00:00, 4.92s/it] {'mid_block_add_attention'} was not found in config. Values will be initialized to default values. {'dual_attention_layers', 'qk_norm'} was not found in config. Values will be initialized to default values. 11/29/2024 14:35:54 - INFO - __main__ - Initializing controlnet weights from transformer {'dual_attention_layers', 'pos_embed_type', 'qk_norm', 'use_pos_embed', 'force_zeros_for_pooled_projection'} was not found in config. Values will be initialized to default values. 11/29/2024 14:36:14 - INFO - __main__ - ***** Running training ***** 11/29/2024 14:36:14 - INFO - __main__ - Num examples = 150 11/29/2024 14:36:14 - INFO - __main__ - Num batches each epoch = 38 11/29/2024 14:36:14 - INFO - __main__ - Num Epochs = 1000 11/29/2024 14:36:14 - INFO - __main__ - Instantaneous batch size per device = 4 11/29/2024 14:36:14 - INFO - __main__ - Total train batch size (w. 
parallel, distributed & accumulation) = 16 11/29/2024 14:36:14 - INFO - __main__ - Gradient Accumulation steps = 4 11/29/2024 14:36:14 - INFO - __main__ - Total optimization steps = 10000 Steps: 0%| | 0/10000 [00:00<?, ?it/s]Traceback (most recent call last): File "/home/xxxx/repos/control-net/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1412, in <module> main(args) File "/home/xxxx/repos/control-net/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1278, in main control_block_res_samples = controlnet( File "/home/xxxx/repos/control-net/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/xxxx/repos/control-net/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) File "/home/xxxx/repos/control-net/diffusers/src/diffusers/models/controlnets/controlnet_sd3.py", line 365, in forward hidden_states = hidden_states + self.pos_embed_input(controlnet_cond) File "/home/xxxx/repos/control-net/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/xxxx/repos/control-net/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) File "/home/xxxx/repos/control-net/diffusers/src/diffusers/models/embeddings.py", line 266, in forward latent = self.proj(latent) File "/home/xxxx/repos/control-net/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/xxxx/repos/control-net/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) File "
https://github.com/huggingface/diffusers/issues/10055
open
[ "bug", "stale" ]
2024-11-29T13:46:29Z
2025-02-03T15:03:46Z
17
Putzzmunta
huggingface/diffusers
10,050
Is there any img2img KDiffusion equivalent of StableDiffusionKDiffusionPipeline?
### Model/Pipeline/Scheduler description I'm working on aligning results between diffusers and the A1111 webui. For txt2img I can achieve this via `StableDiffusionKDiffusionPipeline` (see https://github.com/huggingface/diffusers/issues/3253). But for img2img, is there an equivalent KDiffusion pipeline? I'm also trying to implement this by merging `StableDiffusionKDiffusionPipeline` and `StableDiffusionImg2ImgPipeline` together. Any clarification and help is appreciated. ### Open source status - [ ] The model implementation is available. - [ ] The model weights are available (Only relevant if addition is not a scheduler). ### Provide useful links for the implementation _No response_
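In case it helps the merge, the img2img half is conceptually small. This is a sketch of the recipe A1111 follows on top of k-diffusion (the latents are assumed to be VAE-encoded and scaled by the VAE scaling factor already): truncate the sigma schedule by `strength`, noise the init latents up to the first remaining sigma, and run the sampler over the shortened schedule.

```python
import torch

def img2img_start(sigmas: torch.Tensor, init_latents: torch.Tensor, strength: float):
    """Truncate a (descending) k-diffusion sigma schedule by `strength` and noise
    the encoded init image up to the first remaining sigma. strength=1.0 keeps
    the full schedule (pure txt2img); smaller values stay closer to the input."""
    t_start = int(len(sigmas) * (1.0 - strength))
    sigmas = sigmas[t_start:]
    noisy_latents = init_latents + torch.randn_like(init_latents) * sigmas[0]
    return noisy_latents, sigmas
```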
https://github.com/huggingface/diffusers/issues/10050
open
[ "stale" ]
2024-11-29T07:47:11Z
2024-12-29T15:03:05Z
2
juju812
huggingface/diffusers
10,043
F5-TTS Integration
### Model/Pipeline/Scheduler description F5-TTS is a fully non-autoregressive text-to-speech system based on flow matching with a Diffusion Transformer (DiT). It has excellent voice-cloning capabilities, and its generated audio is of high quality. ### Open source status - [X] The model implementation is available. - [X] The model weights are available (Only relevant if addition is not a scheduler). ### Provide useful links for the implementation Paper - https://arxiv.org/abs/2410.06885 Code - https://github.com/SWivid/F5-TTS?tab=readme-ov-file Weights - https://huggingface.co/SWivid/F5-TTS Author - @SWivid
https://github.com/huggingface/diffusers/issues/10043
open
[ "help wanted", "contributions-welcome" ]
2024-11-28T11:14:18Z
2025-11-02T18:46:02Z
11
nityanandmathur
huggingface/lerobot
533
How to merge multiple recorded datasets?
Hi, Thank you so much for the automatic resume during data recording; unstable cameras or other situations (e.g. not having enough time to finish recording) can sometimes stop the process. I was wondering: is there any way to merge multiple recorded datasets? For instance, I have two datasets, 'cube grabbing' and 'cylinder grabbing', each recorded with 50 episodes in the same environment. Do you have a tutorial on how to merge them into a single 100-episode dataset? By the way, another reason for merging datasets is that storage usage is extremely high before video encoding, so recording a large dataset in one go can be limited by storage, whereas merging several already-encoded datasets would mitigate this problem. Thanks
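In case it helps while waiting for an official tool, here is a very rough sketch of the core idea on the underlying Hugging Face frame tables: shift the second dataset's `episode_index` (and any global `index` column) so the numbering continues after the first dataset, then concatenate. The repo ids are hypothetical, and a real LeRobot merge would also need to move/rename the encoded video files and rebuild the metadata and stats, which this sketch does not cover:

```python
from datasets import load_dataset, concatenate_datasets

# Hypothetical repo ids standing in for the two recorded datasets.
ds_a = load_dataset("user/cube-grabbing", split="train")
ds_b = load_dataset("user/cylinder-grabbing", split="train")

ep_offset = max(ds_a["episode_index"]) + 1
idx_offset = len(ds_a)

ds_b = ds_b.map(lambda ex: {
    "episode_index": ex["episode_index"] + ep_offset,
    "index": ex["index"] + idx_offset,
})

merged = concatenate_datasets([ds_a, ds_b])  # 100 episodes' worth of frames
```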
https://github.com/huggingface/lerobot/issues/533
closed
[ "question", "dataset" ]
2024-11-28T01:53:28Z
2025-10-08T08:33:31Z
null
mydhui
huggingface/transformers
34,981
How to Log Training Loss at Step Zero in Hugging Face Trainer or SFT Trainer?
### Feature request log train loss on start ---- Iโ€™m using the Hugging Face `Trainer` (or `SFTTrainer`) for fine-tuning, and I want to log the training loss at step 0 (before any training steps are executed). I know thereโ€™s an `eval_on_start` option for evaluation, but I couldn't find a direct equivalent for training loss logging at the beginning of training. Is there a way to log the initial training loss at step zero (before any updates) using `Trainer` or `SFTTrainer`? Ideally, I'd like something similar to `eval_on_start`. Hereโ€™s what Iโ€™ve tried so far: #### Solution 1: Custom Callback I implemented a custom callback to log the training loss at the start of training: ```python import wandb from transformers import TrainerCallback class TrainOnStartCallback(TrainerCallback): def on_train_begin(self, args, state, control, logs=None, **kwargs): # Log training loss at step 0 logs = logs or {} logs["train/loss"] = None # Replace None with an initial value if available logs["train/global_step"] = 0 self.log(logs) def log(self, logs): print(f"Logging at start: {logs}") wandb.log(logs) # Adding the callback to the Trainer trainer = SFTTrainer( model=model, tokenizer=tokenizer, train_dataset=train_dataset, eval_dataset=eval_dataset, args=training_args, optimizers=(optimizer, scheduler), callbacks=[TrainOnStartCallback()], ) ``` This works but feels a bit overkill. It logs metrics at the start of training before any steps. #### Solution 2: Manual Logging Alternatively, I manually log the training loss before starting training: ```python wandb.log({"train/loss": None, "train/global_step": 0}) trainer.train() ``` ### Question: Are there any built-in features in `Trainer` or `SFTTrainer` to log training loss at step zero? Or is a custom callback or manual logging the best solution here? If so, are there better ways to implement this functionality, i.e. something similar to `eval_on_start` but as a `train_on_start`? cross: https://discuss.huggingface.co/t/how-to-log-training-loss-at-step-zero-in-hugging-face-trainer-or-sft-trainer/128188 ### Motivation Crucial sanity check ### Your contribution Yes, happy to implement this.
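One possible middle ground between the two solutions, sketched below: compute a real loss on a single batch inside `on_train_begin` instead of logging `None`. This assumes the callback handler passes `model` and `train_dataloader` through `**kwargs` (current `transformers` versions do) and that the data collator supplies `labels` so the forward pass returns a loss:

```python
import torch
from transformers import TrainerCallback

class InitialLossCallback(TrainerCallback):
    def on_train_begin(self, args, state, control, model=None,
                       train_dataloader=None, **kwargs):
        if model is None or train_dataloader is None:
            return
        batch = next(iter(train_dataloader))
        batch = {k: v.to(model.device) for k, v in batch.items()
                 if isinstance(v, torch.Tensor)}
        model.eval()
        with torch.no_grad():
            loss = model(**batch).loss  # requires `labels` in the batch
        model.train()
        print({"train/loss": loss.item(), "train/global_step": 0})
```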
https://github.com/huggingface/transformers/issues/34981
open
[ "Feature request" ]
2024-11-28T00:24:43Z
2024-11-29T07:35:28Z
null
brando90
huggingface/transformers.js
1,055
Support for Typescript docs
### Question I have been trying to implement server-side sentiment analysis using this [tutorial](https://huggingface.co/docs/transformers.js/main/en/tutorials/next#prerequisites), but it's in JavaScript. I looked through the docs but there seems to be no information on implementing it using TypeScript. So far I have integrated TypeScript, but there is one error that is difficult to fix. This is what I have implemented so far: pipeline.ts ```ts import { pipeline, PipelineType } from "@huggingface/transformers"; // Use the Singleton pattern to enable lazy construction of the pipeline. // NOTE: We wrap the class in a function to prevent code duplication (see below). const P = () => class PipelineSingleton { static task: PipelineType = 'text-classification'; static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english'; static instance: PipelineSingleton | null = null; // eslint-disable-next-line @typescript-eslint/no-unsafe-function-type static async getInstance(progress_callback: Function | undefined = undefined) { if (!this.instance) { this.instance = pipeline(this.task, this.model, { progress_callback }); } return this.instance; } } let PipelineSingleton: ReturnType<typeof P>; if (process.env.NODE_ENV !== 'production') { // When running in development mode, attach the pipeline to the // global object so that it's preserved between hot reloads. // For more information, see https://vercel.com/guides/nextjs-prisma-postgres const globalWithPipeline = global as typeof global & { PipelineSingleton: ReturnType<typeof P> }; if (!globalWithPipeline.PipelineSingleton) { globalWithPipeline.PipelineSingleton = P(); } PipelineSingleton = globalWithPipeline.PipelineSingleton; } else { PipelineSingleton = P(); } export default PipelineSingleton; ``` request.ts ```ts import { NextResponse } from 'next/server' import PipelineSingleton from './pipeline'; export async function GET(request: Request) { // Extract the text parameter from the query string const url = new URL(request.url); const text = url.searchParams.get('text'); if (!text) { return NextResponse.json({ error: 'Missing text parameter', }, { status: 400 }); } // Get the classification pipeline. When called for the first time, // this will load the pipeline and cache it for future use. const classifier = await PipelineSingleton.getInstance(); // SHOWS THE ERROR - Type 'PipelineSingleton' has no call signatures.ts(2349) // Actually perform the classification const result = await classifier(text); return NextResponse.json(result); } ``` The problem is in request.ts when calling the classifier method. TypeScript shows the error: > This expression is not callable. > Type 'PipelineSingleton' has no call signatures.ts(2349) So this probably means that my TypeScript implementation of the pipeline is incorrect. Would appreciate any help on this. TIA.
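One way the typing is often fixed (a sketch, not official guidance): the error comes from `static instance: PipelineSingleton | null`, which types the cached value as the *class* rather than the pipeline function, so the result of `getInstance()` is not callable. Typing the cache as a promise of the concrete pipeline class resolves it; if `TextClassificationPipeline` is not exported in your version, `Awaited<ReturnType<typeof pipeline>>` can stand in:

```ts
import {
  pipeline,
  PipelineType,
  TextClassificationPipeline,
} from "@huggingface/transformers";

const P = () =>
  class PipelineSingleton {
    static task: PipelineType = "text-classification";
    static model = "Xenova/distilbert-base-uncased-finetuned-sst-2-english";
    // Cache the *pipeline*, not the singleton class itself.
    static instance: Promise<TextClassificationPipeline> | null = null;

    static async getInstance(progress_callback?: (progress: unknown) => void) {
      if (!this.instance) {
        this.instance = pipeline(this.task, this.model, {
          progress_callback,
        }) as Promise<TextClassificationPipeline>;
      }
      return this.instance;
    }
  };
```

With that change, `await PipelineSingleton.getInstance()` resolves to a `TextClassificationPipeline`, which is callable, and the TS2349 error goes away.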
https://github.com/huggingface/transformers.js/issues/1055
open
[ "question" ]
2024-11-26T21:38:54Z
2024-11-27T02:20:59Z
null
SadmanYasar
huggingface/datasets
7,299
Efficient Image Augmentation in Hugging Face Datasets
### Describe the bug I'm using the Hugging Face datasets library to load images in batches and would like to apply a torchvision transform, both to fix the inconsistent image sizes in the dataset and to do some on-the-fly image augmentation. The only approach I can think of is the collate_fn, but that seems quite inefficient. I'm new to the Hugging Face datasets library and didn't find anything in the documentation or in the issues here on GitHub. Is there an existing way to add image transformations directly to the dataset loading pipeline? ### Steps to reproduce the bug ```python from datasets import load_dataset from torch.utils.data import DataLoader def collate_fn(batch): images = [item['image'] for item in batch] texts = [item['text'] for item in batch] return { 'images': images, 'texts': texts } dataset = load_dataset("Yuki20/pokemon_caption", split="train") dataloader = DataLoader(dataset, batch_size=4, collate_fn=collate_fn) # Output shows varying image sizes: # [(1280, 1280), (431, 431), (789, 789), (769, 769)] ``` ### Expected behavior I'm looking for a way to resize images on the fly when loading the dataset, similar to PyTorch's Dataset.__getitem__ functionality. This would be more efficient than handling resizing in the collate_fn. ### Environment info - `datasets` version: 3.1.0 - Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35 - Python version: 3.11.10 - `huggingface_hub` version: 0.26.2 - PyArrow version: 18.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.9.0
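For what it's worth, `datasets` already has this hook: `with_transform` (or `set_transform`) applies a function lazily each time examples are accessed, which is effectively the `__getitem__`-style behavior being asked for. A small sketch with torchvision; the 224x224 target size is arbitrary:

```python
from datasets import load_dataset
from torchvision import transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def apply_tfm(batch):
    # Runs lazily on each accessed batch, not eagerly over the whole dataset.
    return {
        "pixel_values": [tfm(img.convert("RGB")) for img in batch["image"]],
        "text": batch["text"],
    }

dataset = load_dataset("Yuki20/pokemon_caption", split="train")
dataset = dataset.with_transform(apply_tfm)
print(dataset[0]["pixel_values"].shape)  # torch.Size([3, 224, 224])
```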
https://github.com/huggingface/datasets/issues/7299
open
[]
2024-11-26T16:50:32Z
2024-11-26T16:53:53Z
0
fabiozappo
huggingface/lerobot
527
Is there a `select_actions` abstraction?
This line references a `select_actions` function which doesn't seem to exist. This functionality (abstract away access to the future action queue, instead of just returning the first action) would be useful - did it use to / will it exist? https://github.com/huggingface/lerobot/blob/96c7052777aca85d4e55dfba8f81586103ba8f61/lerobot/common/policies/act/modeling_act.py#L102
https://github.com/huggingface/lerobot/issues/527
closed
[ "question", "policies", "stale" ]
2024-11-26T14:22:31Z
2025-10-08T08:33:51Z
null
genemerewether
huggingface/diffusers
10,025
attention mask for transformer Flux
### Describe the bug Is it possible to get back the `attention_mask` argument in the Flux attention processor ``` hidden_states = F.scaled_dot_product_attention(query, key, value, dropout_p=0.0, is_causal=False, attn_mask=attention_mask) ``` https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L1910 in order to tweak things a bit? Otherwise the `attention_mask` argument is unused. Thanks a lot! ### Reproduction pip install diffusers ### Logs _No response_ ### System Info Ubuntu ### Who can help? @yiyixuxu @sayakpaul @DN6 @asomoza
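For anyone wondering what wiring the argument through would enable, here is a tiny self-contained illustration of `attn_mask` in SDPA (shapes are arbitrary; `True` means "may attend"):

```python
import torch
import torch.nn.functional as F

q = torch.randn(1, 8, 16, 64)  # (batch, heads, tokens, head_dim)
k = torch.randn(1, 8, 16, 64)
v = torch.randn(1, 8, 16, 64)

# Hide the last four key tokens from every query.
mask = torch.ones(1, 1, 16, 16, dtype=torch.bool)
mask[..., 12:] = False

out = F.scaled_dot_product_attention(q, k, v, dropout_p=0.0,
                                     is_causal=False, attn_mask=mask)
print(out.shape)  # torch.Size([1, 8, 16, 64])
```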
https://github.com/huggingface/diffusers/issues/10025
closed
[ "bug" ]
2024-11-26T08:51:20Z
2024-12-05T00:22:37Z
19
christopher5106
huggingface/accelerate
3,263
How to load checkpoint shards one by one to avoid OOM error?
### System Info ```Shell - `Accelerate` version: 1.1.0 - Platform: Linux-5.10.112-005.ali5000.al8.x86_64-x86_64-with-glibc2.17 - `accelerate` bash location: /home/admin/anaconda3/envs/llama_factory/bin/accelerate - Python version: 3.10.14 - Numpy version: 1.26.4 - PyTorch version (GPU?): 2.4.1+cu121 (True) - PyTorch XPU available: False - PyTorch NPU available: False - PyTorch MLU available: False - PyTorch MUSA available: False - System RAM: 128.00 GB - GPU type: NVIDIA H20 - `Accelerate` default config: - compute_environment: LOCAL_MACHINE - distributed_type: MULTI_GPU - mixed_precision: no - use_cpu: False - debug: False - num_processes: 8 - machine_rank: 0 - num_machines: 1 - gpu_ids: all - rdzv_backend: static - same_network: True - main_training_function: main - enable_cpu_affinity: False - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - tpu_env: [] ``` ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`) - [X] My own task or dataset (give details below) ### Reproduction My code can run on 1/2/3/4 GPU(s), but errors occur when I try to use more GPUs. The command I use : `accelerate launch --multi_gpu --gpu_ids 0,1,2,3,4,5,6,7,8 --num_processes 8 --main_process_port 2525 ./train_args_multi.py --batch_size 4 --save_name tmp_model_multi` The code where errors occur: ``` accelerator = Accelerator() device = accelerator.device print('Device: ', device) model = MyModel(path=path, device=device).to(device) random.seed(seed) torch.manual_seed(seed) np.random.seed(seed) train_data, train_loader = data_provider(train_data_path, batch_size, num_workers=num_workers, flag='train') test_data, test_loader = data_provider(test_data_path, batch_size, num_workers=num_workers, flag='test') model_optim = optim.Adam(trained_parameters, lr=learning_rate) print('Preparing for accelerator...') model, model_optim, train_loader, test_loader = accelerator.prepare(model, model_optim, train_loader, test_loader) ``` ### Expected behavior Errors occur when loading checkpoint shards (as the bar shows below): ``` $accelerate launch --multi_gpu --num_processes 8 --gpu_ids 0,1,2,3,4,5,6,7 --main_process_port 25252 ./train_args_multi.py --batch_size 4 --save_name tmp_model_multi Device: cuda:0 Device: cuda:6 Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s$ Device: cuda:5 Device: cuda:3 Device: cuda:4 Device: cuda:7 Device: cuda:1 Device: cuda:2 Loading checkpoint shards: 50%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ | 2/4 [00:11<00:12, 6....r(args) File "/home/admin/anaconda3/envs/llama_factory/lib/python3.10/site-packages/accelerate/commands/launch.py", line 793, in multi_gpu_launcher distrib_run.run(args) File "/home/admin/anaconda3/envs/llama_factory/lib/python3.10/site-packages/torch/distributed/run.py", line 892, in run elastic_launch( File "/home/admin/anaconda3/envs/llama_factory/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 133, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File 
"/home/admin/anaconda3/envs/llama_factory/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ====================================================== ./train_args_multi.py FAILED ------------------------------------------------------ Failures: <NO_OTHER_FAILURES> ------------------------------------------------------ Root Cause (first observed failure): [0]: time : 2024-11-26_16:17:47 host : pe-resource-pool033093226243.center rank : 5 (local_rank: 5) exitcode : -9 (pid: 84403) error_file: <N/A> traceback : Signal 9 (SIGKILL) received by PID 84403 ====================================================== (llama_factory) ``` I found that the memory ran out (not CUDA memory) when loading the models b
https://github.com/huggingface/accelerate/issues/3263
closed
[]
2024-11-26T08:25:37Z
2025-01-06T15:06:50Z
null
amoyplane
huggingface/lerobot
525
Train a RL agent (without initial dataset)
Hi, I'm currently working on integrating the following environment into the repo: https://github.com/perezjln/gym-lowcostrobot I would like to use it to train an RL agent in simulation and then try it out on the real robot. However, the current training script requires a local or online pre-recorded dataset. Is there a way to avoid this and pass an option not to load a dataset? Thank you in advance
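For context, collecting experience online needs no pre-recorded dataset at all; a plain Gymnasium rollout loop is enough to drive an RL algorithm. A sketch (the environment id and import path are placeholders, check gym-lowcostrobot's README for the actual registered names):

```python
import gymnasium as gym
# import gym_lowcostrobot  # registers the envs (hypothetical import path)

env = gym.make("LowCostRobot-Reach-v0")  # placeholder id

obs, info = env.reset(seed=0)
for _ in range(1000):
    action = env.action_space.sample()  # stand-in for the learned policy
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```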
https://github.com/huggingface/lerobot/issues/525
closed
[ "enhancement", "question", "simulation" ]
2024-11-25T20:02:38Z
2025-04-07T16:19:01Z
null
alexcbb
huggingface/chat-ui
1,592
Add Markdown support for user messages
## Describe your feature request In pr #1562 , a WYSIWYG editor has been added to the text input area; however, when a message is sent, it is displayed in unrendered markdown. The idea is to use `marked` to conditionally render certain elements in the user's sent message into markdown, and leave others untouched. The WYSIWYG editor currently converts the following into markdown: - bold - italic - code blocks - code spans The sent user messages should display those specific elements converted into markdown, and leave the rest untouched and unconverted, such as headings. ## Screenshots An example of how a user message is currently displayed: ![image](https://github.com/user-attachments/assets/71ab2877-28c8-4676-a06a-ac403e101fac) ## Implementation idea The idea is to create a custom `renderer` which might be done using `marked` to be used when the message sender is the `user`. The renderer allows certain modifications, such as explicitly specifying what it should and should not convert, something like: ```typescript const renderer = new marked.Renderer(); renderer.list = (body, _ordered) => { return body; }; renderer.heading = (text: string, _level: number) => { return text; }; // continue to disable unwanted features // enable what we need renderer.code = (code: string) => `<pre><code>${code}</code></pre>`; renderer.codespan = (text: string) => `<code>${text}</code>`; renderer.strong = (text: string) => `<strong>${text}</strong>`; renderer.em = (text: string) => `<em>${text}</em>`; ``` However, any other implementation ideas are welcome!
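To complete the sketch, the restricted renderer would then be selected based on the message author, reusing the `renderer` defined above (the function below is illustrative; the actual call sites in chat-ui may differ):

```ts
import { marked } from "marked";

// Restricted markdown for user messages, full markdown for the assistant.
function renderMessage(content: string, from: "user" | "assistant"): string {
  const options = from === "user" ? { renderer } : {};
  return marked.parse(content, options) as string;
}
```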
https://github.com/huggingface/chat-ui/issues/1592
open
[ "enhancement" ]
2024-11-25T17:26:10Z
2024-11-27T20:42:19Z
2
Mounayer
huggingface/accelerate
3,260
How to Properly Resume Multi-GPU Training with accelerate launch Without OOM or Loss Issues?
I encountered an issue while running multi-GPU training using `accelerate launch`. I am using 4 GPUs for training, and during the process, I save my model state using: ```python accelerator.save_state(state_path) ``` Later, I attempt to resume training by loading the model parameters with: ```python accelerator.load_state(state_path) ``` However, when I start training again, I observe multiple strange processes on the first GPU, which causes an OOM (out of memory) error, as shown in the attached figure. To address this, I tried guarding the call so that only the main process loads the state. The updated code looks like this: ```python if self.accelerator.is_main_process: self.accelerator.load_state(state_path) ``` I then called: ```python accelerator.wait_for_everyone() ``` afterward to synchronize the model state across all four GPUs. While this resolved the issue of multiple processes on the first GPU, the model's loss increases significantly. It seems that the trained weights are not being properly synchronized across all GPUs. Could anyone please suggest how to correctly resume training in a multi-GPU setup with `accelerate launch`, ensuring the model weights are properly loaded and synchronized across all devices? Thank you! ![ๅพฎไฟกๅ›พ็‰‡_20241124170918](https://github.com/user-attachments/assets/b83375b8-6da2-4b70-b7ed-2c6b6c110825) ![ๅพฎไฟกๅ›พ็‰‡_20241124170833](https://github.com/user-attachments/assets/b0aad650-083e-418d-bdd7-60f8e485d7bd)
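For reference, `accelerator.load_state` is meant to be called on every rank: each process restores its own copy of the model and optimizer state. Guarding it with `is_main_process` leaves the other ranks on their freshly initialized weights, which matches the loss blow-up observed (`wait_for_everyone` only synchronizes execution; it does not broadcast weights). The stray GPU-0 processes are typically checkpoint tensors being mapped to `cuda:0` during deserialization on every rank. A sketch of the usual resume pattern (variable names are placeholders):

```python
from accelerate import Accelerator

accelerator = Accelerator()
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

# Called on every rank, after prepare(), so each process restores
# its own device-local copy of the weights and optimizer state.
accelerator.load_state(state_path)
```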
https://github.com/huggingface/accelerate/issues/3260
closed
[]
2024-11-25T17:19:06Z
2025-05-29T10:26:13Z
null
tqxg2018
huggingface/chat-ui
1,589
Models using OpenAI endpoint have caching enabled
When using models that are currently using the OpenAI endpoint type on HuggingChat (Nemotron, llama 3.2, qwen coder) they seem to have caching enabled. This means retrying will just reload the previous response extremely quickly. This is not the intended behaviour and does not match what is happening when using the TGI endpoint.
https://github.com/huggingface/chat-ui/issues/1589
closed
[ "huggingchat" ]
2024-11-25T12:47:01Z
2025-03-12T12:56:00Z
1
nsarrazin