Dataset schema:
- repo: string (147 classes)
- number: int64 (1 – 172k)
- title: string (2 – 476 chars)
- body: string (0 – 5k chars)
- url: string (39 – 70 chars)
- state: string (2 classes)
- labels: list (0 – 9 items)
- created_at: timestamp[ns, tz=UTC] (2017-01-18 18:50:08 – 2026-01-06 07:33:18)
- updated_at: timestamp[ns, tz=UTC] (2017-01-18 19:20:07 – 2026-01-06 08:03:39)
- comments: int64 (0 – 58)
- user: string (2 – 28 chars)
huggingface/agents-course
295
[QUESTION] Ambiguity about what chat templates are
Issue: Where ➡ https://huggingface.co/learn/agents-course/unit1/messages-and-special-tokens

> This is where chat templates come in. They act as the bridge between conversational messages (user and assistant turns) and the specific formatting requirements of your chosen LLM. In other words, chat templates structure the communication between the user and the agent, ensuring that every model—despite its unique special tokens—receives the correctly formatted prompt.

In my opinion, the first sentence about chat templates is correct, but the second part seems wrong. It says `...chat templates structure the communication between the user and the agent...`. Corrected sentence: `...chat templates structure the communication between the agent and the language model (LLM)...`.

Reason: chat templates are implemented inside the agent, whose `chat.completion`-style method sends the user's request on to the LLM. The user just types into the chat box the same way we type any message. In its simplest form, the text flow is: user's message >> chat template wraps the message per the LLM's spec >> sent to the LLM through the agent.

So the `the user and the agent` part doesn't seem right to me. This is the best alternative I could think of; I'm okay with anything else you come up with.
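The distinction the issue draws can be made concrete with a toy template. This is a hedged sketch: the `<|im_start|>`/`<|im_end|>` tokens are ChatML-style examples, not what every model uses, and `apply_chat_template` here is a hand-rolled stand-in for the real tokenizer method:

```python
# Minimal sketch of what a chat template does: it sits between the agent and
# the LLM, turning structured messages into the token layout one particular
# model expects. Special tokens below are illustrative, not universal.
def apply_chat_template(messages):
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    parts.append("<|im_start|>assistant")  # cue the model to reply
    return "\n".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful agent."},
    {"role": "user", "content": "What is the weather?"},
]
prompt = apply_chat_template(messages)
print(prompt.splitlines()[0])  # <|im_start|>system
```

The user never sees this string; it is built inside the agent just before the request goes to the model, which is why "between the agent and the LLM" reads as the accurate phrasing.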
https://github.com/huggingface/agents-course/issues/295
open
[ "question" ]
2025-03-06T17:12:41Z
2025-03-06T17:12:41Z
null
MekongDelta-mind
huggingface/open-r1
483
How to calculate total optimization steps
I ran it on 8 GPUs and set `num_generations` to 8 and `num_processes=7`. Why is Total optimization steps = 196? Isn't it Num examples / Total train batch size? It seems that multiplying by `num_generations` yields 196. Why do we need to multiply by `num_generations`?

[INFO|trainer.py:2405] 2025-03-06 12:04:09,913 >> ***** Running training *****
[INFO|trainer.py:2406] 2025-03-06 12:04:09,913 >> Num examples = 5,498
[INFO|trainer.py:2407] 2025-03-06 12:04:09,914 >> Num Epochs = 1
[INFO|trainer.py:2408] 2025-03-06 12:04:09,914 >> Instantaneous batch size per device = 8
[INFO|trainer.py:2411] 2025-03-06 12:04:09,914 >> Total train batch size (w. parallel, distributed & accumulation) = 224
[INFO|trainer.py:2412] 2025-03-06 12:04:09,914 >> Gradient Accumulation steps = 4
[INFO|trainer.py:2413] 2025-03-06 12:04:09,914 >> Total optimization steps = 196
[INFO|trainer.py:2414] 2025-03-06 12:04:09,915 >> Number of trainable parameters = 7,615,616,512
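The logged numbers are consistent with GRPO counting each prompt `num_generations` times, since every prompt is expanded into that many sampled completions. A sketch of the arithmetic under that assumption (not a trace of the actual trainer code):

```python
num_examples = 5498
num_generations = 8
total_train_batch_size = 224  # 7 processes * 8 per device * 4 accumulation

# GRPO expands every prompt into num_generations completions, so the
# effective number of training samples is num_examples * num_generations.
effective_samples = num_examples * num_generations
steps = effective_samples // total_train_batch_size
print(steps)  # 196
```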
https://github.com/huggingface/open-r1/issues/483
open
[]
2025-03-06T09:47:19Z
2025-03-13T08:45:23Z
null
HelloWorld506
huggingface/transformers.js
1,221
How to use Xenova/deplot using the transformers.js library.
### Question

Currently I'm doing:

```
this.pipeline = await pipeline("image-text-to-text", "Xenova/deplot", {
  progress_callback: (progress) => {
    this.updateProgress({
      status: `Loading model: ${progress.status}`,
      progress: 0.1 + (progress.progress * 0.9)
    });
  },
  device: "cpu",
  dtype: dtype,
});
```

I get the following error:

```
Error: Unsupported pipeline: image-text-to-text. Must be one of [text-classification,token-classification,question-answering,fill-mask,summarization,translation,text2text-generation,text-generation,zero-shot-classification,audio-classification,zero-shot-audio-classification,automatic-speech-recognition,text-to-audio,image-to-text,image-classification,image-segmentation,zero-shot-image-classification,object-detection,zero-shot-object-detection,document-question-answering,image-to-image,depth-estimation,feature-extraction,image-feature-extraction]
```
https://github.com/huggingface/transformers.js/issues/1221
open
[ "question" ]
2025-03-06T07:56:07Z
2025-03-06T11:36:19Z
null
aadya940
huggingface/peft
2,410
running forward loop using get_peft_model disables requires_grad on output
Hi, I would like to report an issue I have been facing recently, but I am not sure if it is a bug or if I am doing something wrong. The steps to reproduce are easy. The issue happens when I try to convert the **Qwen2-VL-2B-Instruct** model into a PEFT model using the `get_peft_model` method. Simply load the model using the sample code at https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct and try to convert it to a PEFT model using a typical **8bit** LoraConfig with just sample `target_modules=["q_proj", "v_proj"]`. Then simply run a forward call on the model using a dummy input, such as `input_ids = torch.zeros((4, 1247)).to(device)`. When I inspect `requires_grad` on the `logits` attribute of the output, it is False, meaning that I cannot run backward based on that output. This issue has been puzzling me for a while. I would appreciate it if you could help me with a solution or advise how to address it properly.
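For intuition, a minimal torch-only sketch (not the PEFT code path) of why logits can lose `requires_grad`: if every parameter the output depends on is frozen and the input carries no grad, autograd has nothing to track. The commonly cited fixes for this symptom are calling `model.enable_input_require_grads()` or preparing the quantized model with `peft.prepare_model_for_kbit_training(...)` before `get_peft_model`; treat those as the usual suggestions rather than a verified solution for this exact model.

```python
import torch
import torch.nn as nn

base = nn.Linear(4, 2)
for p in base.parameters():
    p.requires_grad = False          # simulate a fully frozen base model

x = torch.randn(1, 4)                # e.g. embeddings of dummy input_ids
out = base(x)
assert out.requires_grad is False    # nothing to backprop into

adapter = nn.Linear(4, 2)            # a trainable "LoRA-like" branch
out2 = base(x) + adapter(x)
assert out2.requires_grad is True    # grad flows through the adapter
```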
https://github.com/huggingface/peft/issues/2410
closed
[]
2025-03-06T05:12:42Z
2025-04-13T15:03:40Z
4
Hamidreza3252
huggingface/lerobot
826
Should the pi0 pytorch model on Huggingface load model.safetensors or the other three safetensors?
https://huggingface.co/lerobot/pi0/tree/main What is the difference between `model.safetensors` and the other three safetensors files (`model-00001-of-0000*.safetensors`)? The pi0 model's `from_pretrained()` method will load `model.safetensors` by default instead of `model-00001-of-0000*.safetensors`.
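Both layouts normally hold the same weights: `model.safetensors` is a single consolidated file, while `model-00001-of-0000N.safetensors` are shards tied together by a `model.safetensors.index.json` map (sharding exists so no single file gets huge). A sketch, under that assumption, of the loader's usual file choice:

```python
def pick_weight_files(repo_files, weight_map=None):
    """Sketch of the usual preference: a consolidated model.safetensors wins,
    otherwise the shards listed in model.safetensors.index.json are loaded.
    weight_map maps tensor names to the shard file that stores them."""
    if "model.safetensors" in repo_files:
        return ["model.safetensors"]
    if "model.safetensors.index.json" in repo_files and weight_map:
        return sorted(set(weight_map.values()))
    raise FileNotFoundError("no safetensors weights found")

# consolidated file present -> shards are ignored
print(pick_weight_files(["model.safetensors", "model-00001-of-00003.safetensors"]))
# ['model.safetensors']
```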
https://github.com/huggingface/lerobot/issues/826
closed
[ "question", "stale" ]
2025-03-06T03:12:05Z
2025-10-08T08:42:49Z
null
chopinxxxx
huggingface/agents-course
290
[QUESTION] First Agent code does not produce any output
I cloned and tried running the first agent's app.py. I wanted to try the image generation tool. The application built and ran, but when I typed something in the chat such as "generate an image of a cat", there was no response from the bot. It stays blank.
https://github.com/huggingface/agents-course/issues/290
open
[ "question" ]
2025-03-05T23:49:06Z
2025-03-18T14:45:44Z
null
Sabk0926
huggingface/accelerate
3,421
How to sync distributed model parameters when training in a continual-learning fashion?
When performing distributed continual learning tasks, it is common to expand model parameters as tasks increase. For example, I have defined an `expand_classifier()` method with random initialization to increase the parameters of the classifier. How can I ensure that the newly added parameters are initialized the same on each GPU's copy of the model? If I do

```
if self.accelerator.is_main_process:
    self.model.module.prompt.expand_classifier()
```

how can I sync the classifier across all distributed models?
https://github.com/huggingface/accelerate/issues/3421
closed
[]
2025-03-05T13:44:15Z
2025-04-13T15:06:22Z
null
Iranb
huggingface/lerobot
817
SO 100 Arm assembly instruction inconsistency
Step 22 of the assembly guide shows a picture of the wrist that is flipped compared to the drawing and the front-page photo. Are both right? If not, which one is correct? [Latest instruction](https://github.com/huggingface/lerobot/blob/main/examples/10_use_so100.md#wrist-assembly): <img width="723" alt="Image" src="https://github.com/user-attachments/assets/490e23aa-1085-4c89-9148-49304ac85ed5" /> [Assembly video](https://github.com/huggingface/lerobot/blob/main/examples/10_use_so100.md#additional-guidance): <img width="812" alt="Image" src="https://github.com/user-attachments/assets/b12cc0a7-bff9-4b2a-b2a2-30333a205506" /> [Project home page](https://github.com/huggingface/lerobot/tree/main?tab=readme-ov-file#------------build-your-own-so-100-robot): ![Image](https://github.com/user-attachments/assets/f23cf441-93f9-4bd5-aeba-45d2d81aa80d)
https://github.com/huggingface/lerobot/issues/817
closed
[ "question", "robots", "stale" ]
2025-03-05T05:23:57Z
2025-11-30T02:37:07Z
null
liuhuanjim013
huggingface/open-r1
472
How to set max_model_length, max_new_tokens and generation_size when evaluating?
Suppose the max_position_embedding of my model is 4096. How should I set max_model_length, max_new_tokens and generation_size to get the correct evaluation result? For example, max_model_length=4096, max_new_tokens=1000, generation_size=1000?
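One hedged way to reason about these settings (a sketch of the token budget, not the evaluation harness's actual accounting): prompt tokens plus generated tokens must fit inside `max_position_embeddings`, so the generation budget is whatever the longest prompt leaves over:

```python
max_position_embeddings = 4096

def max_generation_size(prompt_tokens, context=max_position_embeddings):
    # whatever the prompt doesn't use is available for new tokens
    return context - prompt_tokens

# a 1000-token generation budget only fits if prompts stay under 3096 tokens
assert max_generation_size(3000) == 1096
assert max_generation_size(3200) < 1000  # generation would be truncated
```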
https://github.com/huggingface/open-r1/issues/472
open
[]
2025-03-05T04:01:48Z
2025-03-12T03:41:42Z
null
ItGirls
huggingface/transformers
36,546
How to use transformers MusicGen with float16
```
import transformers, torch, builtins, numpy

processor = transformers.AutoProcessor.from_pretrained('facebook/musicgen-stereo-melody-large', torch_dtype=torch.float16)
model = transformers.MusicgenMelodyForConditionalGeneration.from_pretrained('facebook/musicgen-stereo-melody-large', torch_dtype=torch.float16).to('cuda')
result = []
for _ in builtins.range(2):
    inputs = processor(audio=result[-1] if result else None, sampling_rate=model.config.audio_encoder.sampling_rate, text='A grand and majestic symphony with soaring strings, powerful brass, and dynamic orchestration. Inspired by Beethoven and Tchaikovsky, featuring dramatic crescendos, delicate woodwind passages, and a triumphant finale. The mood is epic, emotional, and timeless', padding=True, return_tensors='pt').to('cuda')
    audio_values = model.generate(**inputs, max_new_tokens=1000)
    result += audio_values[0, 0].cpu().numpy(),

from IPython.display import Audio
Audio(numpy.concatenate(result), rate=model.config.audio_encoder.sampling_rate)
```

I always get:

```
<ipython-input-12-348220656bb8> in <cell line: 0>()
      7 for _ in builtins.range(2):
      8     inputs = processor(audio=torch.from_numpy(result[-1]).to(dtype=torch.float32) if result else None, sampling_rate=model.config.audio_encoder.sampling_rate, text='A grand and majestic symphony with soaring strings, powerful brass, and dynamic orchestration. Inspired by Beethoven and Tchaikovsky, featuring dramatic crescendos, delicate woodwind passages, and a triumphant finale. The mood is epic, emotional, and timeless', padding=True, return_tensors='pt').to('cuda')
----> 9     audio_values = model.generate(**inputs, max_new_tokens=1000)
     10     result += audio_values[0, 0].cpu().numpy(),
     11

5 frames
/usr/local/lib/python3.11/dist-packages/torch/nn/modules/linear.py in forward(self, input)
    123
    124     def forward(self, input: Tensor) -> Tensor:
--> 125         return F.linear(input, self.weight, self.bias)
    126
    127     def extra_repr(self) -> str:

RuntimeError: mat1 and mat2 must have the same dtype, but got Float and Half
```
https://github.com/huggingface/transformers/issues/36546
closed
[]
2025-03-05T00:40:24Z
2025-03-06T09:49:18Z
null
ghost
huggingface/lerobot
813
State Collection Timing Issue in Manipulator Teleoperation: Post-action vs Pre-action States
**Description:** I've noticed in lerobot/lerobot/common/robot_devices/robots/manipulator.py that during teleoperation, the state being collected is the state after action execution. Is this intended behavior? In my understanding, model inference should use the state before action execution, not after. This could potentially impact learning and inference accuracy, as the model would be using post-action states to predict actions rather than pre-action states. ![Image](https://github.com/user-attachments/assets/89a88379-9369-4eda-8885-8a250ca950dc) ![Image](https://github.com/user-attachments/assets/1ad0705a-e225-4858-94d0-1b774bb4a974)
https://github.com/huggingface/lerobot/issues/813
closed
[ "question", "policies", "stale" ]
2025-03-04T14:19:52Z
2025-10-07T02:26:55Z
null
www-Ye
huggingface/agents-course
284
[QUESTION] Clarify Payment Required for completing Unit 2 notebooks
For the notebook [components.ipynb]() I ran the `IngestionPipeline` as follows:

```py
from llama_index.embeddings.huggingface_api import HuggingFaceInferenceAPIEmbedding
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.ingestion import IngestionPipeline

# create the pipeline with transformations
pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(),
        HuggingFaceInferenceAPIEmbedding(model_name="BAAI/bge-small-en-v1.5"),
    ]
)

# run the pipeline sync or async
nodes = await pipeline.arun(documents=documents[:10])
nodes
```

I got the following outcome, and it looks like this .ipynb can't be executed without a payment route:

```python
---------------------------------------------------------------------------
ClientResponseError                       Traceback (most recent call last)
<ipython-input-15-067f632f4f21> in <cell line: 1>()
     12
     13 # run the pipeline sync or async
---> 14 nodes = await pipeline.arun(documents=documents[:10])
     15 nodes

12 frames
/usr/local/lib/python3.11/dist-packages/aiohttp/client_reqrep.py in raise_for_status(self)
   1159             self.release()
   1160
-> 1161             raise ClientResponseError(
   1162                 self.request_info,
   1163                 self.history,

ClientResponseError: 402, message='Payment Required', url='https://api-inference.huggingface.co/pipeline/feature-extraction/BAAI/bge-small-en-v1.5'
```

Are there any free and open alternatives?
https://github.com/huggingface/agents-course/issues/284
open
[ "question" ]
2025-03-04T14:16:01Z
2025-03-06T16:08:39Z
null
carlosug
huggingface/agents-course
281
[any free and unpaid alternative for Inference Providers?]
While executing the [notebook](https://colab.research.google.com/github/huggingface/agents-course/blob/main/notebooks/unit2/smolagents/multiagent_notebook.ipynb) on **unit2. multi-agent systems**, I got the following client error for [Inference Providers](https://huggingface.co/blog/inference-providers):

```python
> result = agent.run(task)

HTTPError: 402 Client Error: Payment Required for url: https://huggingface.co/api/inference-proxy/together/v1/chat/completions

The above exception was the direct cause of the following exception:

HfHubHTTPError                            Traceback (most recent call last)
/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_http.py in hf_raise_for_status(response, endpoint_name)
    475     # Convert `HTTPError` into a `HfHubHTTPError` to display request information
    476     # as well (request id and/or server error message)
--> 477     raise _format(HfHubHTTPError, str(e), response) from e
    478
    479

HfHubHTTPError: 402 Client Error: Payment Required for url: https://huggingface.co/api/inference-proxy/together/v1/chat/completions (Request ID: Root=1-67c6f46c-005ae18a6bffc88c0d7a6668;04e6891c-45f6-4358-81fc-b5b794f25ddd)

You have exceeded your monthly included credits for Inference Providers. Subscribe to PRO to get 20x more monthly allowance.
```

Is there any free and unpaid alternative to Inference Providers?
https://github.com/huggingface/agents-course/issues/281
open
[ "question" ]
2025-03-04T12:51:26Z
2025-03-31T07:23:49Z
null
carlosug
huggingface/lerobot
808
How to acquire the End-Effector(eef) pose?
Hi, thanks for your great work! How can we acquire the EEF pose and control the EEF pose instead of only the joint states? Thanks for your attention; hoping for your kind response!
https://github.com/huggingface/lerobot/issues/808
closed
[ "question", "policies", "robots", "stale" ]
2025-03-04T09:30:35Z
2025-10-16T02:28:50Z
null
oym1994
huggingface/lerobot
806
How to control local robot with remote model?
I have got the inference process working on my local computer. I want to know how to put the model on a remote server and control a robot locally. My robot: Koch1.1
https://github.com/huggingface/lerobot/issues/806
closed
[ "question", "stale" ]
2025-03-04T09:09:12Z
2025-10-16T02:28:51Z
null
neverspillover
huggingface/optimum-intel
1,186
How to initialize development env for this repo?
Hi! I would like to develop in this repo, but I met some issues during env initialization. I ran `pip install -e .` to install the current repo into my local Python env. However, an error came out when running `pytest tests/`:

```
ImportError while importing test module '/home/shji/codes/optimum-intel/tests/ipex/test_modeling.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
../../miniforge3/envs/optimum-intel/lib/python3.11/importlib/__init__.py:126: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
tests/ipex/test_modeling.py:42: in <module>
    from optimum.intel import (
E   ImportError: cannot import name 'IPEXModelForSeq2SeqLM' from 'optimum.intel' (/home/shji/codes/optimum-intel/optimum/intel/__init__.py
```

It seems the installation is wrong or something has been missed, since the local module cannot be found. Could you give me some suggestions? Any documentation for setting up a dev env would be even better. Thank you!
https://github.com/huggingface/optimum-intel/issues/1186
closed
[]
2025-03-04T06:10:15Z
2025-03-10T06:01:21Z
null
shjiyang-intel
huggingface/open-r1
457
How to run rejection sampling
I ran generate_reaoning and got the CoT data. How do I run rejection sampling after that?
https://github.com/huggingface/open-r1/issues/457
open
[]
2025-03-03T03:56:32Z
2025-03-03T03:56:32Z
null
JavaZeroo
huggingface/lerobot
797
use_delta_joint_actions_aloha
if self.use_delta_joint_actions_aloha: raise NotImplementedError( "`use_delta_joint_actions_aloha` is used by pi0 for aloha real models. It is not ported yet in LeRobot." ) when will you put implementation for it because it is very important
https://github.com/huggingface/lerobot/issues/797
closed
[ "question", "policies" ]
2025-03-02T18:14:13Z
2025-04-03T16:39:39Z
null
AbdElrahmanMostafaRifaat1432
huggingface/open-r1
453
How to log intermediate output results?
How can I log the intermediate output results to track the 'aha moment'? Can I set this in the config, or do I need to modify the code?
https://github.com/huggingface/open-r1/issues/453
closed
[]
2025-03-01T17:08:48Z
2025-03-09T13:53:59Z
null
0205090923
huggingface/Math-Verify
32
How to adjust the priority of '\\ln' and '*' when parsing latex?
When I try to parse the string "$$ \\dfrac{\\cos x}{2\\lnx * x^{\\ln x - 1}} $$", the result is "cos(x)/((2*log(x*x**(log(x, E) - 1), E)))", rather than "cos(x)/((2*x**(log(x, E) - 1)*log(x, E)))". It seems that something goes wrong when handling the priority of '\\ln' and '*', so I wonder how to adjust the priority to fix this error. Thank you! Error case: ![Image](https://github.com/user-attachments/assets/e6255a11-6365-4f9c-af2a-a2cd49092ea1) Expected (which changes the order of '\\ln'): ![Image](https://github.com/user-attachments/assets/13459cf1-e245-420b-bff6-25d027bbce2f)
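As a possible workaround while the operator priority is sorted out, a pre-processing step (hypothetical, not a Math-Verify API) can restore the missing space so `\lnx` is read as `\ln x` instead of the parser absorbing the following factor into the log argument:

```python
import re

def fix_ln_spacing(latex: str) -> str:
    # insert a space after \ln when it is glued to a letter, e.g. \lnx -> \ln x
    return re.sub(r"\\ln(?=[a-zA-Z])", r"\\ln ", latex)

s = r"\dfrac{\cos x}{2\lnx * x^{\ln x - 1}}"
print(fix_ln_spacing(s))  # \dfrac{\cos x}{2\ln x * x^{\ln x - 1}}
```

Note the lookahead only fires when a letter follows `\ln`, so already-spaced `\ln x` is left untouched; a production version would need care around commands like `\lnot`.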
https://github.com/huggingface/Math-Verify/issues/32
closed
[]
2025-03-01T09:22:31Z
2025-07-01T20:17:49Z
null
yhhu99
huggingface/smolagents
842
How to pass custom type variables to tools
I’m working on a Telegram bot and using the `smolagents` library to create agents that handle reminders. The issue I’m facing is related to passing the `context` object (which is specific to each message received by the bot) to a tool function (`add_reminder`). The `context` object is required to access the `job_queue` for scheduling reminders.

### Problem:

Even though I’m passing the `context` variable through the `additional_args` argument in `agent.run`, the agent doesn’t seem to pass this variable directly to the code interpreter. Instead, it redefines the variable as `None`, which causes the rest of the code to fail. Here’s the relevant part of the code:

```python
@tool
def add_reminder(title: str, date_time: datetime.datetime, chat_id: str, context: Any, location: str = None, details: str = None) -> dict:
    '''
    Add a reminder to the job queue.

    Args:
        title: The title of the reminder (str)
        date_time: The time for the reminder
        location: The location of the reminder if it is specified. If not then None (str)
        details: The details of the reminder if it is specified. If not then None (str)
        chat_id: pass the chat_id given to you
        context: pass the context given to you
    '''
    # try:
    reminder = {}
    reminder['Title'] = title
    reminder['Time'] = date_time
    reminder['Location'] = location
    reminder['Details'] = details

    # Convert the reminder time string to a localized datetime object
    timer_date = date_time.replace(tzinfo=None)
    timer_date = tz.localize(timer_date)
    timer_date_string = timer_date.strftime("%H:%M %d/%m/%Y")
    timer_name = f"{title} ({timer_date_string})"
    reminder['run'] = 'once'
    reminder['text'] = reminder_to_text(reminder)

    # Calculate the time remaining in seconds
    now = datetime.datetime.now(tz)
    seconds_until_due = (timer_date - now).total_seconds()

    # Check if the time is in the past
    if seconds_until_due <= 0:
        return {'success': False, 'message': TXT_NOT_ABLE_TO_SCHEDULE_PAST}

    reminder['type'] = 'parent'
    context.job_queue.run_once(
        alarm,
        when=timer_date,
        chat_id=chat_id,
        name=timer_name,
        data=reminder,
    )
    reminder['type'] = '-30'
    context.job_queue.run_once(
        alarm_minus_30,
        when=timer_date - datetime.timedelta(minutes=30),
        chat_id=chat_id,
        name=timer_name,
        data=reminder,
    )
    return {'success': True, 'message': TXT_REMINDER_SCHEDULED, 'response_for_user': reminder['text']}


async def add_reminder_from_input(update, context):
    # Add the reminder
    input = update.message.text
    chat_id = update.effective_chat.id
    now = datetime.datetime.now(tz).strftime("%d/%m/%Y %H:%M")
    logger.info(f'chat_id: {chat_id}, input: {input}')
    agent = CodeAgent(tools=[add_reminder], additional_authorized_imports=['datetime'],
                      model=OpenAIServerModel(model_id='gpt-4o-mini', api_key=OPENAI_TOKEN),
                      verbosity_level=3, max_steps=2)
    answer = agent.run(TXT_MENU_AGENT_SYSTEM_PROMPT.format(input=input, now=now),
                       additional_args={"context": context, "chat_id": chat_id})
    await send_message(update, context, text=answer)
```

When the agent runs, it generates code like this:

─ Executing parsed code: ────────────────────────────────────────────────────────────────────────────────────────────────

```python
from datetime import datetime, timedelta

# Set the reminder details
title = "Meeting with John"
date_time = datetime(2025, 3, 1, 9, 0)  # March 1, 2025, at 09:00
chat_id = 6129357493
context = None  # This would typically be the provided context object

# Add the reminder
reminder_response = add_reminder(tit
```

(The generated snippet is truncated in the original log.)
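One pattern that can side-step this class of problem (a sketch, assuming the tool can be constructed per-request) is to close over `context` when building the tool, so the LLM-generated code never needs to receive or re-define it. `DummyContext`/`DummyJobQueue` below are hypothetical stand-ins for the real Telegram objects:

```python
class DummyJobQueue:
    """Stand-in for telegram's job queue: just records scheduled jobs."""
    def __init__(self):
        self.jobs = []
    def run_once(self, callback, when, **kwargs):
        self.jobs.append((callback, when, kwargs))

class DummyContext:
    def __init__(self):
        self.job_queue = DummyJobQueue()

def make_add_reminder(context, chat_id):
    # The returned function is what gets registered as the agent tool; the
    # agent only supplies title/when, while context stays captured here.
    def add_reminder(title, when):
        context.job_queue.run_once(lambda: None, when, chat_id=chat_id, name=title)
        return {"success": True, "scheduled": title}
    return add_reminder

ctx = DummyContext()
tool_fn = make_add_reminder(ctx, chat_id=6129357493)
print(tool_fn("Meeting with John", "2025-03-01T09:00"))
print(len(ctx.job_queue.jobs))  # 1
```

With this design the generated code calls `add_reminder("Meeting with John", ...)` with no `context` argument at all, so there is nothing for the model to set to `None`.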
https://github.com/huggingface/smolagents/issues/842
closed
[]
2025-02-28T23:04:49Z
2025-03-01T23:45:40Z
null
ebravofm
huggingface/sentence-transformers
3,254
How to train sentencetransformer with multiple negative?
I have a dataset like `{'anchor': str, 'positive': str, 'negative': list[str]}`. It seems to be invalid with the example code:

```python
model = SentenceTransformer(model_path)
extend_position_embeddings(model._first_module().auto_model, max_length)
loss = CachedMultipleNegativesRankingLoss(model, mini_batch_size=16)
training_args = SentenceTransformerTrainingArguments(
    output_dir=f"./model_dir/{args.save_name}-{args.data_mode}",
    overwrite_output_dir=True,
    logging_dir="./logs",
    logging_steps=1,
    save_strategy='epoch',
    save_total_limit=2,
    # max_steps=900,
    num_train_epochs=3,
    warmup_ratio=0.05,
    learning_rate=3e-5,
    weight_decay=0.01,
    gradient_accumulation_steps=16,
    per_device_train_batch_size=4,
    dataloader_num_workers=1,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
    fp16=True,
    lr_scheduler_type="cosine",
    remove_unused_columns=False,
    # deepspeed='/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/ruanjunhao/chatrag-bench/train/ds3.json',
    # gradient_checkpointing=True,
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    loss=loss,
)
dataloader = trainer.get_train_dataloader()
for d in dataloader:
    import pdb; pdb.set_trace()
trainer.train()
```

```bash
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 1191, in __init__
    self._reset(loader, first_iter=True)
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 1228, in _reset
    self._try_put_index()
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 1471, in _try_put_index
    index = self._next_index()
            ^^^^^^^^^^^^^^^^^^
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 691, in _next_index
    return next(self._sampler_iter)  # may raise StopIteration
           ^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/sentence_transformers/sampler.py", line 193, in __iter__
    value
TypeError: unhashable type: 'list'
```
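The sampler is hashing column values, and a `list`-valued column is unhashable. A workaround often used with MultipleNegativesRankingLoss-style losses, which generally accept `(anchor, positive, negative_1, ..., negative_n)` columns, is to explode the list into fixed string columns (a sketch; verify the column-naming convention against the sentence-transformers docs):

```python
def explode_negatives(example, n_negatives):
    # {'anchor': .., 'positive': .., 'negative': [..]} ->
    # {'anchor': .., 'positive': .., 'negative_1': .., ..., 'negative_n': ..}
    row = {"anchor": example["anchor"], "positive": example["positive"]}
    for i, neg in enumerate(example["negative"][:n_negatives], start=1):
        row[f"negative_{i}"] = neg
    return row

ex = {"anchor": "q", "positive": "good", "negative": ["bad1", "bad2"]}
print(explode_negatives(ex, 2))
# {'anchor': 'q', 'positive': 'good', 'negative_1': 'bad1', 'negative_2': 'bad2'}
```

Applied via `dataset.map(...)` (dropping the original `negative` column), every column is then a plain string and the batch sampler can hash it.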
https://github.com/huggingface/sentence-transformers/issues/3254
closed
[]
2025-02-28T15:01:19Z
2025-06-13T05:04:35Z
null
rangehow
huggingface/lerobot
789
how to run eval with mujoco sim?
Right now, running eval.py only produces output on the command line. How can I run evaluation with the MuJoCo sim?
https://github.com/huggingface/lerobot/issues/789
closed
[ "simulation", "stale" ]
2025-02-28T10:42:46Z
2025-10-08T11:57:42Z
null
mmlingyu
huggingface/lerobot
788
offline run convert_dataset_v1_to_v2.py
I need help! For example, when I run convert_dataset_v1_to_v2.py, it prompts the following: ![Image](https://github.com/user-attachments/assets/a4a87562-f0bd-444f-9e32-11cae281ae6f) And what is train.parquet? ![Image](https://github.com/user-attachments/assets/8e24bb90-ef6c-4e55-9b1e-17acd7050312) How can I solve this?
https://github.com/huggingface/lerobot/issues/788
closed
[ "bug", "question", "dataset", "stale" ]
2025-02-28T06:41:43Z
2025-10-09T21:54:09Z
null
ximiluuuu
huggingface/sentence-transformers
3,252
How to train sentence transformers with multiple machines?
The [docs](https://sbert.net/docs/sentence_transformer/training/distributed.html) describe how to train sentence transformers with multiple GPUs. But both my model and my data are huge, and training with 8 GPUs in one single machine is still very slow. Do sentence transformers support training using multiple machines, each with 8 GPUs? Are there any examples? Thank you very much.
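Since the trainer builds on torch distributed via accelerate/transformers, multi-node runs are typically launched by running the same training command on every machine with a per-machine rank. A hedged sketch of an accelerate config for 2 machines x 8 GPUs (field names follow accelerate's config files; the IP address and port are placeholders for your cluster):

```yaml
# accelerate config for machine 0 of 2 (set machine_rank: 1 on the second box)
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
num_machines: 2
num_processes: 16           # total GPUs across both machines
machine_rank: 0
main_process_ip: 10.0.0.1   # hypothetical address of machine 0
main_process_port: 29500
mixed_precision: fp16
```

Each machine then runs `accelerate launch --config_file <this file> train.py ...`, differing only in `machine_rank`.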
https://github.com/huggingface/sentence-transformers/issues/3252
open
[]
2025-02-27T13:37:02Z
2025-02-27T13:37:02Z
null
awmoe
huggingface/diffusers
10,917
Is lumina-2.0 script correct?
I wrote a script based on the one provided [here](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_lumina2.py). It gets stuck with a loss around 0.5, and I think that is a lot, isn't it?
https://github.com/huggingface/diffusers/issues/10917
open
[]
2025-02-27T11:17:00Z
2025-02-28T15:46:43Z
3
Riko0
huggingface/open-r1
444
How to increase the context window from 4k to 32k on Qwen models?
Hello, I'm trying to distill a subset of the [OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/openr1-220k-math) dataset into my Qwen/Qwen2.5-Math-7B-Instruct. I want to do this via a custom SFT pipeline in order to see if I can match the results obtained in the evaluations. However, I'm struggling to increase the context window of the Qwen math model from 4k to 32k tokens. This is what I tried in the config.json of the model:

```
{
  "_name_or_path": "Qwen/Qwen2.5-Math-7B-Instruct",
  "architectures": ["Qwen2ForCausalLM"],
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "hidden_act": "silu",
  "hidden_size": 3584,
  "initializer_range": 0.02,
  "intermediate_size": 18944,
  "max_position_embeddings": 32768,
  "max_window_layers": 28,
  "model_type": "qwen2",
  "num_attention_heads": 28,
  "num_hidden_layers": 28,
  "num_key_value_heads": 4,
  "rms_norm_eps": 1e-06,
  "rope_scaling": {
    "type": "linear",
    "factor": 8.0
  },
  "rope_theta": 10000.0,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.48.1",
  "use_cache": true,
  "use_sliding_window": false,
  "vocab_size": 152064
}
```

But the generations obtained with this base model are garbage. Do you have any advice on which parameters are best and how to train the model on bigger context windows than initially released? Thanks!
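For a sanity check on the numbers: linear RoPE scaling divides position ids by `factor`, so factor 8.0 on a 4k-context model targets 32768 positions. A config-only change, however, stretches positions the model never saw in training, which is consistent with degraded generations unless the model is fine-tuned at the longer length. A sketch of the arithmetic:

```python
original_context = 4096      # the 4k context the model was released with
factor = 8.0                 # rope_scaling {"type": "linear", "factor": 8.0}

# linear scaling divides position ids by `factor`, stretching the usable range
scaled_context = int(original_context * factor)
print(scaled_context)  # 32768
```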
https://github.com/huggingface/open-r1/issues/444
closed
[]
2025-02-27T10:27:43Z
2025-07-24T23:56:12Z
null
Jeremmmyyyyy
huggingface/trl
2,972
How many H20 (96GB) GPUs are needed to train Qwen7B with the GRPO algorithm?
I want to use the GRPO algorithm to train Qwen7B, but I failed using 4 H20 (96GB) GPUs with the trl library. I would like to know how many H20 GPUs are needed.
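A rough back-of-envelope (assumptions: ~7.6B parameters as is typical for the Qwen 7B class, bf16 weights and gradients, fp32 Adam moments plus an fp32 master copy, and ignoring activations and vLLM rollout memory, which add more) suggests why 4 x 96 GB is tight without sharding:

```python
params_b = 7.6          # ~billions of parameters (Qwen 7B class)
bytes_weights = 2       # bf16
bytes_grads = 2         # bf16
bytes_adam = 8          # two fp32 moments
bytes_master = 4        # fp32 master copy kept by mixed-precision optimizers

gb = params_b * (bytes_weights + bytes_grads + bytes_adam + bytes_master)
print(round(gb, 1), "GB just for model/optimizer state")  # 121.6 GB ...
```

Under these assumptions the model/optimizer state alone exceeds one H20, so DeepSpeed ZeRO-2/3 or FSDP sharding across several GPUs is generally needed; the exact GPU count also depends on sequence length and `num_generations`.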
https://github.com/huggingface/trl/issues/2972
open
[ "❓ question", "🏋 GRPO" ]
2025-02-27T04:12:16Z
2025-03-14T02:22:36Z
null
Tuziking
huggingface/lerobot
779
Is there a way for a robot arm with kinesthetic teaching function to collect data using lerobot?
Hello, I have a robot arm with a kinesthetic teaching function. I guess I can teach my robot the first time and then collect data from the second time onward using lerobot? Is this easy to achieve by modifying the control_robot.py file? Thanks
https://github.com/huggingface/lerobot/issues/779
closed
[ "question", "stale" ]
2025-02-26T17:50:51Z
2025-10-16T02:28:54Z
null
yzzueong
huggingface/diffusers
10,910
ValueError: Attempting to unscale FP16 gradients.
### Describe the bug

I encountered the following error when running train_text_to_image_lora.py: ValueError: Attempting to unscale FP16 gradients. The script I am running is as follows:

export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export DATASET_NAME="lambdalabs/naruto-blip-captions"

accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$DATASET_NAME --caption_column="text" \
  --resolution=512 --random_flip \
  --train_batch_size=1 \
  --num_train_epochs=100 --checkpointing_steps=5000 \
  --learning_rate=1e-04 --lr_scheduler="constant" --lr_warmup_steps=0 \
  --seed=42 \
  --output_dir="sd-naruto-model-lora-clean" \
  --validation_prompt="cute dragon creature" --report_to="wandb"

How can I resolve this error?

### Reproduction

Same command as above.

### Logs

```shell
```

### System Info

Traceback (most recent call last):
  File "train_text_to_image_lora.py", line 975, in <module>
    main()
  File "train_text_to_image_lora.py", line 856, in main
    accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
  File "/root/miniconda3/lib/python3.8/site-packages/accelerate/accelerator.py", line 2396, in clip_grad_norm_
    self.unscale_gradients()
  File "/root/miniconda3/lib/python3.8/site-packages/accelerate/accelerator.py", line 2340, in unscale_gradients
    self.scaler.unscale_(opt)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/amp/grad_scaler.py", line 338, in unscale_
    optimizer_state["found_inf_per_device"] = self._unscale_grads_(
  File "/root/miniconda3/lib/python3.8/site-packages/torch/amp/grad_scaler.py", line 260, in _unscale_grads_
    raise ValueError("Attempting to unscale FP16 gradients.")
ValueError: Attempting to unscale FP16 gradients.

### Who can help?

_No response_
https://github.com/huggingface/diffusers/issues/10910
closed
[ "bug" ]
2025-02-26T14:43:57Z
2025-03-18T17:43:08Z
4
Messimanda
huggingface/transformers.js
1,209
Is NFD type normalizer supported?
### Question

Hi, I was trying the following code in the browser, which uses [dewdev/language_detection](https://huggingface.co/dewdev/language_detection):

```ts
import { pipeline, Pipeline } from '@huggingface/transformers';

export class DetectLanguage {
  private modelid: string | null = null;
  private detectPipeline: Pipeline | null = null;
  private initialized: boolean = false;

  constructor(modelid: string = 'dewdev/language_detection') {
    this.modelid = modelid;
  }

  async initialize() {
    try {
      this.detectPipeline = await pipeline('text-classification', this.modelid, {
        dtype: 'fp32',
        device: navigator.gpu ? 'webgpu' : 'wasm'
      });
      this.initialized = true;
      console.log("Model initialization successful.");
    } catch (error) {
      console.error('Error initializing language detection model with fallback:', error);
      this.initialized = false;
      throw error;
    }
  }

  async detect(text: string) {
    if (!this.initialized || !this.detectPipeline) {
      console.error("Model not initialized.");
      return '';
    }
    try {
      const language = await this.detectPipeline(text, { top: 1 });
      return language;
    } catch (error) {
      console.error('Error during language detection:', error);
      return '';
    }
  }
}

async function main() {
  const detectLanguage = new DetectLanguage();
  await detectLanguage.initialize();
  const text = "This is a test sentence.";
  const language = await detectLanguage.detect(text);
  console.log(`Detected language: ${language}`);
}

// Call the main function
main();
```

The above code brings up the following error:

```
Error initializing language detection model with fallback: Error: Unknown Normalizer type: NFD
    at Normalizer.fromConfig (tokenizers.js:1011:1)
    at tokenizers.js:1187:1
    at Array.map (<anonymous>)
    at new NormalizerSequence (tokenizers.js:1187:1)
    at Normalizer.fromConfig (tokenizers.js:993:1)
    at new PreTrainedTokenizer (tokenizers.js:2545:1)
    at new BertTokenizer (tokenizers.js:3277:8)
    at AutoTokenizer.from_pretrained (tokenizers.js:4373:1)
    at async Promise.all (:5173/index 0)
    at async loadItems (pipelines.js:3413:1)
```

Here is the normalizer section from the tokenizer:

```
"normalizer": {
  "type": "Sequence",
  "normalizers": [
    { "type": "NFD" },
    {
      "type": "BertNormalizer",
      "clean_text": true,
      "handle_chinese_chars": true,
      "strip_accents": true,
      "lowercase": true
    }
  ]
},
```

Maybe the NFD normalizer is missing. Is there any way to bypass this error? Can you please let me know? Thanks
https://github.com/huggingface/transformers.js/issues/1209
closed
[ "question" ]
2025-02-26T08:48:08Z
2025-02-26T14:41:38Z
null
adewdev
huggingface/open-r1
436
Why is the reward low and not increasing in GRPO training? How can I solve this?
My config:

```yaml
# Model arguments
model_name_or_path: ../experiment/models/Qwen2.5-1.5B-Instruct
#model_revision: main
torch_dtype: bfloat16
attn_implementation: flash_attention_2

# Data training arguments
dataset_name: ../experiment/datasets/NuminaMath-TIR/data
dataset_configs:
- default
system_prompt: "You are a helpful AI Assistant that provides well-reasoned and detailed responses. You first think about the reasoning process as an internal monologue and then provide the user with the answer. Respond in the following format: <think>\n...\n</think>\n<answer>\n...\n</answer>"

# Num processes is less by 1 as vLLM is using 1 GPU
num_processes: 3

# GRPO trainer config
bf16: true
use_vllm: true
vllm_device: auto
vllm_gpu_memory_utilization: 0.7
do_eval: false
gradient_accumulation_steps: 16
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
#hub_model_id: Qwen2.5-1.5B-Open-R1-GRPO
#hub_strategy: every_save
learning_rate: 2.0e-05
log_completions: true
log_level: info
logging_first_step: true
logging_steps: 5
logging_strategy: steps
lr_scheduler_type: cosine
max_prompt_length: 512
max_completion_length: 1024
max_steps: -1
num_generations: 6
num_train_epochs: 1
output_dir: outputs/Qwen2.5-1.5B-Open-R1-GRPO-no-difficulty
overwrite_output_dir: true
per_device_eval_batch_size: 16
per_device_train_batch_size: 8
push_to_hub: false
report_to:
- none
reward_funcs:
- accuracy
- format
#- tag_count
reward_weights:
- 1.0
- 1.0
#- 1.0
save_strategy: "steps"
save_steps: 100
#save_total_limit: 1
seed: 42
warmup_ratio: 0.1
```
https://github.com/huggingface/open-r1/issues/436
open
[]
2025-02-26T05:12:18Z
2025-02-27T01:06:53Z
null
AXy1527
huggingface/lerobot
773
How to override the code to collect action data from another robot?
Hey, I have run into a problem while trying to override the LeRobot code to collect action data from my own robot. Here are the details. My robot is a single six-joint robot arm, so I made a new RobotConfig that only contains the camera info. I then overrode the function 'teleop_step' in the file manipulator.py, and set a default value for the robot position to test with at first. When I start recording, the observation and action data are fine, but when it comes to calling the function 'save_episode', the error shown below comes up. I really want to know what else I am supposed to do to make this work, thanks. ![Image](https://github.com/user-attachments/assets/62fdd3a3-1efc-4801-8965-faf72c0005fe) ![Image](https://github.com/user-attachments/assets/e3780dee-0dbc-4b5d-9353-c4945579f576) ![Image](https://github.com/user-attachments/assets/d5b3afc1-4a33-41e9-8ab7-9abee076d6e4)
https://github.com/huggingface/lerobot/issues/773
closed
[ "question", "stale" ]
2025-02-26T03:33:09Z
2025-10-16T02:28:56Z
null
tjh-flash
huggingface/lerobot
771
Example of training a policy with PI0?
Is there an example config file for training a policy with the PI0 policy?
https://github.com/huggingface/lerobot/issues/771
closed
[ "question", "policies" ]
2025-02-25T19:39:51Z
2025-04-03T16:44:44Z
null
pqrsqwewrty
huggingface/diffusers
10,904
CLIP Score Evaluation without Pre-processing.
I am referring to [Evaluating Diffusion Models](https://huggingface.co/docs/diffusers/main/en/conceptual/evaluation), specifically the quantitative evaluation using CLIP score example. We have images of shape (6, 512, 512, 3). CLIP score is calculated using `"openai/clip-vit-base-patch16"`. However, as far as I can tell, the images are not pre-processed to match the format that `"openai/clip-vit-base-patch16"` was trained on (e.g., images of size 224x224 pixels). Should the images have been processed before or can we still reliably use the CLIP score with the images in their original format? Please let me know if I have overlooked or am misunderstanding something. Thanks!
https://github.com/huggingface/diffusers/issues/10904
open
[ "stale" ]
2025-02-25T16:51:44Z
2025-03-28T15:03:20Z
1
e-delaney
huggingface/lerobot
769
How to convert my ALOHA HDF5 data to your dataset format?
https://github.com/huggingface/lerobot/issues/769
closed
[ "question", "dataset", "stale" ]
2025-02-25T14:07:13Z
2025-10-16T02:28:58Z
null
return-sleep
huggingface/diffusers
10,901
HunyuanVideo in diffusers uses negative_prompt but generates a wrong video
### Describe the bug Diffusers recently added negative_prompt support for HunyuanVideo, but when I use negative_prompt and set **guidance_scale** and **true_cfg_scale**, I get a video that is entirely black. Maybe I set the wrong parameters or the video is saved incorrectly. How can I fix this? Thanks

### Reproduction

```python
import torch
import time
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel, AutoencoderKLHunyuanVideo
from diffusers.utils import export_to_video, load_image, load_video

NEGATIVE_PROMPT = "Aerial view, aerial view, overexposed, low quality, deformation, a poor composition, bad hands, bad teeth, bad eyes, bad limbs, distortion"

model_path = "/realpath/hunyuanvideo-community-HunyuanVideo"
pipe = HunyuanVideoPipeline.from_pretrained(model_path, torch_dtype=torch.float16)
pipe.vae.enable_tiling()
pipe.to("cuda")

output = pipe(
    prompt="The video shows a man and a woman standing in the snow, wearing winter clothing and holding cups of coffee. ",
    negative_prompt=NEGATIVE_PROMPT,
    height=480,
    width=720,
    num_frames=129,
    num_inference_steps=10,
    true_cfg_scale=6.0,
    guidance_scale=1.0,
).frames[0]
export_to_video(output, "diffusers_480p_output.mp4", fps=24)
```

### Logs

```shell
```

### System Info H20, resolution = 480 * 720, steps = 10

### Who can help? _No response_
https://github.com/huggingface/diffusers/issues/10901
open
[ "bug", "stale" ]
2025-02-25T11:08:43Z
2025-07-15T07:19:15Z
2
philipwan
huggingface/optimum
2,200
Bug exporting Whisper?
### System Info Hi! I'm exporting some fine-tuned Whisper models, small and base, fine-tuned in English or Spanish. In some cases I've noticed that the tokenizer.json is 2.423KB and in other cases 3.839KB, even when the tokenizer.json is exported for the same language. I have some models in English where the tokenizer is 2.423KB and others where it is 3.839KB, and the same for the Spanish ones. When the tokenizer is 2.423KB I get problems generating the output, as it reaches the max_length of the model, but when the tokenizer file is 3.839KB, the output is as it should be. The tokenizer from the original models is 2.423KB and works well, but after fine-tuning the size changes. I don't know if this is expected behavior.

### Who can help? @michaelbenayoun @JingyaHuang @echarlaix

### Information
- [ ] The official example scripts
- [x] My own modified scripts

### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)

### Reproduction (minimal, reproducible, runnable) I have used the following URL to train my models: https://huggingface.co/blog/fine-tune-whisper The datasets I have used in Spanish are:

```py
voxpopuli_spanish = load_dataset(
    "facebook/voxpopuli", "es", split="train", streaming=True, trust_remote_code=True
)  # I take 133 random instances

common_voice_spanish = load_dataset(
    "mozilla-foundation/common_voice_17_0",
    "es",
    split="train",
    streaming=True,
    trust_remote_code=True,
)  # I take 66 random instances

librispeech_spanish = load_dataset(
    "facebook/multilingual_librispeech", "spanish", split="train", streaming=True
)  # I take 66 random instances
```

I have used the same datasets for English: in the case of common_voice and voxpopuli, I just change "es" for "en". For the librispeech:

```py
librispeech_asr = load_dataset(
    "openslr/librispeech_asr", split="train.other.500", streaming=True, trust_remote_code=True
)
```

I use other private datasets that I can't share right now, but they are around 200 instances. For exporting the model I use the following line:

```
optimum-cli export onnx --model whisper-small-es-trained whisper-small-es-onnx --task automatic-speech-recognition --opset 18
```

I have tested using multiple opsets, but I get the same output.

### Expected behavior I don't know if this behavior is correct, or whether the exported tokenizer.json should always be the same.
https://github.com/huggingface/optimum/issues/2200
open
[ "bug" ]
2025-02-25T09:45:02Z
2025-03-05T20:58:30Z
1
AlArgente
huggingface/diffusers
10,899
Is LoHaConfig supported in the convert_state_dict_to_diffusers method?
In the train_text_to_image_lora.py file,

```python
unet_lora_config = LoraConfig(
    r=cfg.rank,
    lora_alpha=cfg.rank,
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)
```

is modified to

```python
unet_lora_config = LoHaConfig(
    r=cfg.rank,
    alpha=cfg.rank,
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)
```

With this change, an error occurs in the line

```python
unet_lora_state_dict = convert_state_dict_to_diffusers(
    get_peft_model_state_dict(unwrapped_unet)
)
```

Please tell me how to modify it.
https://github.com/huggingface/diffusers/issues/10899
open
[ "stale" ]
2025-02-25T08:39:08Z
2025-03-27T15:03:17Z
2
llm8047
huggingface/sentence-transformers
3,246
How to save the merged model trained with peft?
I am working on fine-tuning a 7B model and, due to its size, we trained it with LoRA by following the guidance (https://sbert.net/examples/training/peft/README.html)

```python
peft_config = LoraConfig(
    task_type=TaskType.FEATURE_EXTRACTION,
    inference_mode=False,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
)
model.add_adapter(peft_config)
```

Training works great and we are looking for some guidance on how to merge the LoRA layer with the base model and save it. What we have tried: 1. `model.save_pretrained("")` => only saves the LoRA layer 2. Using the `peft` library: this doesn't seem to work correctly, as the inference result is the same as the base model.

```
model.save_pretrained(tmp_path)
base_model = SentenceTransformer(model_name_or_path=model_path)
adapter_model = PeftModel.from_pretrained(base_model, adapter_tmp_path)
merged_model = adapter_model.merge_and_unload()
merged_model.config = transformers.AutoConfig.from_pretrained(model_path)
merged_model.save_pretrained(path)
```

We are reaching out for insights about how to merge the PEFT-trained sentence-transformer model with the base model. Thanks!
https://github.com/huggingface/sentence-transformers/issues/3246
closed
[]
2025-02-25T00:56:20Z
2025-12-05T12:33:48Z
null
chz816
huggingface/datasets
7,420
better correspondence between cached and saved datasets created using from_generator
### Feature request At the moment `.from_generator` can only create a dataset that lives in the cache. The cached dataset cannot be loaded with `load_from_disk` because the cache folder is missing `state.json`. So the only way to convert this cached dataset to a regular one is to use `save_to_disk`, which needs to create a copy of the cached dataset. For large datasets this can end up wasting a lot of space. In my case the saving operation failed, so I am stuck with a large cached dataset and no clear way to convert it to a `Dataset` that I can use. The requested feature is to provide a way to load a cached dataset using `.load_from_disk`. Alternatively, `.from_generator` could create the dataset at a specified location so that it can be loaded from there with `.load_from_disk`. ### Motivation I have the following workflow which has exposed some awkwardness in the Datasets saving/caching. 1. I created a cached dataset using `.from_generator`, which was cached in a folder. This dataset is rather large (~600GB) with many shards. 2. I tried to save this dataset using `.save_to_disk` to another location so that I can use it later as a `Dataset`. This essentially creates another copy (for a total of 1.2TB!) of what is already in the cache... In my case the saving operation keeps dying for some reason and I am stuck with a cached dataset and no copy. 3. Now I am trying to "save" the existing cached dataset, but it is not clear how to access the cached files after `.from_generator` has finished, e.g. from a different process. I should not even be looking at the cache, but I really do not want to waste another 2hr to generate the set so that it fails again (I already did this a couple of times). - I tried `.load_from_disk` but it does not work with cached files and complains that this is not a `Dataset` (!). - I looked at `.from_file` which takes one file, but the cached file has many (shards), so I am not sure how to make this work. - I tried `.load_dataset` but this seems to either try to "download" a copy (of a file which is already in the local file system!) which I will then need to save, or I need to use `streaming=False` to create an `IterableDataset` which I then need to convert (using the cache) to a `Dataset` so that I can save it. With both options I will end up with 3 copies of the same dataset for a total of ~2TB! I am hoping there is another way to do this... Maybe I am missing something here: I looked at the docs and forums but no luck. I have a bunch of arrow files cached by `Dataset.from_generator` and no clean way to make them into a `Dataset` that I can use. This all could be so much easier if `load_from_disk` could recognize the cached files and produce a `Dataset`: after the cache is created I would not have to "save" it again and I could just load it when I need it. At the moment `load_from_disk` needs `state.json`, which is lacking in the cache folder. So perhaps `.from_generator` could be made to "finalize" (e.g. create `state.json`) the dataset once it is done so that it can be loaded easily. Or provide `.from_generator` with a `save_to_dir` parameter in addition to `cache_dir` which can be used for the whole process, including creating the `state.json` at the end. As a proof of concept I just created `state.json` by hand and `load_from_disk` worked using the cache! So it seems to be the missing piece here. ### Your contribution Time permitting, I can look into `.from_generator` to see if adding `state.json` is feasible.
https://github.com/huggingface/datasets/issues/7420
open
[ "enhancement" ]
2025-02-24T22:14:37Z
2026-01-05T15:16:35Z
3
vttrifonov
huggingface/open-r1
413
How many resources are required to train deepseek r1 671b using grpo?
.
https://github.com/huggingface/open-r1/issues/413
open
[]
2025-02-24T11:55:12Z
2025-02-24T11:55:12Z
null
LiuShixing
huggingface/safetensors
577
Can I load safetensors without lazy loading?
### System Info I see safe_open and deserialize; it seems that both are lazy loading. So if I want to load safetensors without lazy loading, how can I do that? Thanks

### Information
- [ ] The official example scripts
- [ ] My own modified scripts

### Reproduction I use sglang, and in sglang's model_loader/weight_utils.py it loads safetensors like this:

```python
if not is_all_weights_sharded:
    with safe_open(st_file, framework="pt") as f:
        for name in f.keys():  # noqa: SIM118
            param = f.get_tensor(name)
            yield name, param
else:
    result = load_file(st_file, device="cpu")
    for name, param in result.items():
        yield name, param
```

I found it loads safetensors too slowly (about 20min+), whether is_all_weights_sharded is True or not. If I prefetch the safetensors before load_model (like `cat * > /dev/null`), it only costs 5min. I tried to use a ThreadPoolExecutor to parallelize this code; although get_tensor becomes quick, loading the weights still costs 20min+, so I suspect the lazy loading. Thanks

### Expected behavior Loading without lazy loading
https://github.com/huggingface/safetensors/issues/577
open
[]
2025-02-24T07:55:33Z
2025-03-13T16:51:49Z
1
voidxb
huggingface/trl
2,941
How to dynamically adjust params during GRPO training?
How can I dynamically adjust parameters during training? For example, I want to use a smaller num_generations (8) at the beginning of GRPO training, then enlarge it to 32 and also use a higher temperature from the 50th step.
https://github.com/huggingface/trl/issues/2941
open
[ "❓ question", "🏋 GRPO" ]
2025-02-24T02:08:52Z
2025-02-24T07:49:10Z
null
Tomsawyerhu
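Setting TRL internals aside, the step-based schedule the question describes can be sketched as a plain function. The switch step (50) and generation counts (8 → 32) come from the question; the concrete temperature values and the function name are my own illustrative assumptions, and wiring this into `GRPOTrainer` (e.g. via a callback) is not shown here and would depend on trainer internals:

```python
def generation_params(step, switch_step=50):
    """Return sampling parameters for a given training step.

    Before `switch_step`, use a small and cheap configuration; from
    `switch_step` onward, use more generations and a higher temperature.
    The temperature values are placeholders, not recommendations.
    """
    if step < switch_step:
        return {"num_generations": 8, "temperature": 0.7}
    return {"num_generations": 32, "temperature": 1.0}

# The switch happens exactly at step 50.
print(generation_params(49))  # small config
print(generation_params(50))  # larger config
```

A trainer hook would call this once per step and push the returned values into the sampling configuration, assuming the trainer exposes those fields as mutable.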
huggingface/open-r1
406
How many GPU hours does it take to train a simple model?
I wonder how many GPU hours it takes to use this repo to train a simple model, like DeepSeek-R1-Distill-Qwen-1.5B or DeepSeek-R1-Distill-Qwen-7B, on 8 H100s?
https://github.com/huggingface/open-r1/issues/406
closed
[]
2025-02-24T00:27:52Z
2025-02-24T06:31:31Z
null
Red-Scarff
huggingface/safetensors
576
How to access the header with Python
Is there a way to access the header in Python to know the offsets of each tensor data?
https://github.com/huggingface/safetensors/issues/576
closed
[]
2025-02-23T17:42:46Z
2025-03-13T16:58:36Z
null
justinchuby
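For reference, a .safetensors file begins with an unsigned 64-bit little-endian integer giving the byte length of a JSON header; the header maps each tensor name to its dtype, shape, and `data_offsets` (byte offsets relative to the first byte after the header). A stdlib-only reader can therefore be sketched as follows (the function name is my own; the demo file is synthetic):

```python
import json
import os
import struct
import tempfile

def read_safetensors_header(path):
    """Parse the JSON header of a .safetensors file.

    The file starts with an unsigned 64-bit little-endian integer N,
    followed by N bytes of JSON mapping each tensor name to its
    "dtype", "shape" and "data_offsets".
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

# Demonstrate on a tiny hand-built file holding one fp32 tensor "w".
header = {"w": {"dtype": "F32", "shape": [1], "data_offsets": [0, 4]}}
payload = json.dumps(header).encode("utf-8")
path = os.path.join(tempfile.mkdtemp(), "demo.safetensors")
with open(path, "wb") as f:
    f.write(struct.pack("<Q", len(payload)) + payload + b"\x00" * 4)

print(read_safetensors_header(path)["w"]["data_offsets"])  # [0, 4]
```

The offsets in `data_offsets` index into the data section, so the absolute file position of a tensor is `8 + header_len + offset`.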
huggingface/diffusers
10,878
How to expand peft.LoraConfig
If I expand peft.LoraConfig, how should I modify it to accommodate more LoRA variants?
https://github.com/huggingface/diffusers/issues/10878
open
[ "stale" ]
2025-02-23T14:01:11Z
2025-03-25T15:03:28Z
null
llm8047
huggingface/diffusers
10,874
Does it support adding the LoHa method?
Does it support adding the LoHa method? Where should I modify it?
https://github.com/huggingface/diffusers/issues/10874
open
[ "stale" ]
2025-02-23T12:06:14Z
2025-03-25T15:03:41Z
3
llm8047
huggingface/diffusers
10,872
[Feature request] Please add from_single_file support in SanaTransformer2DModel to support first Sana Apache licensed model
**Is your feature request related to a problem? Please describe.** We all know the Sana model is very good, but unfortunately its license is restrictive. Recently a fine-tuned Sana model was released under the Apache license. Unfortunately SanaTransformer2DModel does not support from_single_file, so it cannot be used.

**Describe the solution you'd like.**

```python
import torch
from diffusers import SanaPipeline
from diffusers import SanaTransformer2DModel

model_path = "Efficient-Large-Model/Sana_1600M_1024px_MultiLing"
dtype = torch.float16

transformer = SanaTransformer2DModel.from_single_file(
    "Swarmeta-AI/Twig-v0-alpha/Twig-v0-alpha-1.6B-2048x-fp16.pth",
    torch_dtype=dtype,
)
pipe = SanaPipeline.from_pretrained(
    pretrained_model_name_or_path=model_path,
    transformer=transformer,
    torch_dtype=dtype,
    use_safetensors=True,
)
pipe.to("cuda")
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

inference_params = {
    "prompt": "rose flower",
    "negative_prompt": "",
    "height": 1024,
    "width": 1024,
    "guidance_scale": 4.0,
    "num_inference_steps": 20,
}
image = pipe(**inference_params).images[0]
image.save("sana.png")
```

```
(venv) C:\aiOWN\diffuser_webui>python sana_apache.py
Traceback (most recent call last):
  File "C:\aiOWN\diffuser_webui\sana_apache.py", line 6, in <module>
    transformer = SanaTransformer2DModel.from_single_file (
AttributeError: type object 'SanaTransformer2DModel' has no attribute 'from_single_file'
```

**Describe alternatives you've considered.** No alternatives available as far as I know

**Additional context.** N.A.
https://github.com/huggingface/diffusers/issues/10872
closed
[ "help wanted", "Good second issue", "contributions-welcome", "roadmap" ]
2025-02-23T11:36:21Z
2025-03-10T03:08:32Z
5
nitinmukesh
huggingface/lerobot
761
How to convert from custom dataset format to LeRobotDataset format?
I'm trying to train a LeRobot model on some custom data I've recorded on a custom robot, but first, I need to convert that custom data into the correct format for LeRobotDataset. I'm guessing that an example of how to do this is in the `pusht_zarr.py` file. Questions: 1) Is the example in `pusht_zarr.py` the proper way to do this dataset format conversion 2) I only care about predicting future actions, so I don't need a `reward` or `success` field for each frame. Can I omit these fields or should I put a dummy value for them? e.g. in these lines of code below in `pusht_zarr.py`, can I omit the `next.reward` and `next.success` fields or must I put some dummy values for them? (and if so, what are the recommended dummy values?) ``` frame = { "action": torch.from_numpy(action[i]), # Shift reward and success by +1 until the last item of the episode "next.reward": reward[i + (frame_idx < num_frames - 1)], "next.success": success[i + (frame_idx < num_frames - 1)], } ```
https://github.com/huggingface/lerobot/issues/761
closed
[]
2025-02-22T02:35:36Z
2025-02-25T19:39:08Z
null
pqrsqwewrty
huggingface/trl
2,922
How to support multi-device VLLM inference in the GRPO Trainer
https://github.com/huggingface/trl/blob/e5ae703d352b29537159180087ef8bd4b41bf625/trl/trainer/grpo_trainer.py#L439-L461 In the current GRPO implementation, VLLM can only run on a single GPU, which becomes a performance bottleneck. For example, in an 8-GPU setup, the remaining 7 GPUs have to wait for 1 GPU to complete inference, and it also can't accommodate larger models. How can we enable VLLM to run on multiple GPUs? The only concern is that we need to figure out a way to update the parameters across multiple GPUs each time the model is reloaded: https://github.com/huggingface/trl/blob/e5ae703d352b29537159180087ef8bd4b41bf625/trl/trainer/grpo_trainer.py#L624-L653
https://github.com/huggingface/trl/issues/2922
open
[ "✨ enhancement", "🏋 GRPO" ]
2025-02-21T09:24:51Z
2025-03-14T02:45:21Z
null
0x404
huggingface/safetensors
575
How to change the model weights in safetensors?
### Feature request For example, I want to change a weight with shape [K,K,C] into [K,K,C/2]; how can I achieve this?

### Motivation N/A

### Your contribution N/A
https://github.com/huggingface/safetensors/issues/575
open
[]
2025-02-21T03:36:27Z
2025-03-13T16:59:32Z
null
JulioZhao97
huggingface/transformers.js
1,201
Unable to convert Janus models to ONNX
### Question I see that @xenova has successfully export Janus-1.3B and Janus-Pro-1B to ONNX, presumably using some version of scripts/convert.py. We are interested in exporting Janus-Pro-7B to ONNX as well, but have not been able to do so using this script (nor any other path). Attempting to convert either of the previous two models encounters the same errors, so hopefully whatever steps were taken to convert those will also enable the 7B version. The initial error was: ``` ValueError: The checkpoint you are trying to load has model type `multi_modality` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date. ``` This was fixed by installing https://github.com/deepseek-ai/Janus and adding `from janus.models import MultiModalityCausalLM` to convert.py. The error that I'm now stuck at is: ``` KeyError: "Unknown task: any-to-any. Possible values are: `audio-classification` for AutoModelForAudioClassification, `audio-frame-classification` for AutoModelForAudioFrameClassification, `audio-xvector` for AutoModelForAudioXVector, `automatic-speech-recognition` for ('AutoModelForSpeechSeq2Seq', 'AutoModelForCTC'), `depth-estimation` for AutoModelForDepthEstimation, `feature-extraction` for AutoModel, `fill-mask` for AutoModelForMaskedLM, `image-classification` for AutoModelForImageClassification, `image-segmentation` for ('AutoModelForImageSegmentation', 'AutoModelForSemanticSegmentation'), `image-to-image` for AutoModelForImageToImage, `image-to-text` for AutoModelForVision2Seq, `mask-generation` for AutoModel, `masked-im` for AutoModelForMaskedImageModeling, `multiple-choice` for AutoModelForMultipleChoice, `object-detection` for AutoModelForObjectDetection, `question-answering` for AutoModelForQuestionAnswering, `semantic-segmentation` for AutoModelForSemanticSegmentation, `text-to-audio` for ('AutoModelForTextToSpectrogram', 'AutoModelForTextToWaveform'), 
`text-generation` for AutoModelForCausalLM, `text2text-generation` for AutoModelForSeq2SeqLM, `text-classification` for AutoModelForSequenceClassification, `token-classification` for AutoModelForTokenClassification, `zero-shot-image-classification` for AutoModelForZeroShotImageClassification, `zero-shot-object-detection` for AutoModelForZeroShotObjectDetection" ``` I can't find anything about optimum supporting this task, so it is unclear to me how @xenova was able to get around this. Any insight or assistance would be greatly appreciated.
https://github.com/huggingface/transformers.js/issues/1201
open
[ "question" ]
2025-02-20T17:55:00Z
2025-08-19T12:55:58Z
null
turneram
huggingface/datasets
7,415
Shard Dataset at specific indices
I have a dataset of sequences, where each example in the sequence is a separate row in the dataset (similar to LeRobotDataset). When running `Dataset.save_to_disk` how can I provide indices where it's possible to shard the dataset such that no episode spans more than 1 shard. Consequently, when I run `Dataset.load_from_disk`, how can I load just a subset of the shards to save memory and time on different ranks? I guess an alternative to this would be, given a loaded `Dataset`, how can I run `Dataset.shard` such that sharding doesn't split any episode across shards?
https://github.com/huggingface/datasets/issues/7415
open
[]
2025-02-20T10:43:10Z
2025-02-24T11:06:45Z
3
nikonikolov
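The `datasets` API aside, the core computation the question needs — cut points that never split an episode — is a simple greedy pass over per-episode lengths. The function name and the greedy policy below are my own sketch; `Dataset.select` on the resulting row ranges would then yield episode-aligned shards, assuming the rows are stored in episode order:

```python
def episode_shard_boundaries(episode_lengths, target_rows_per_shard):
    """Greedily accumulate whole episodes into shards.

    Returns the row indices at which each shard ends, so that no
    episode ever straddles a shard boundary. Shards may exceed the
    target size by at most one episode.
    """
    boundaries = []
    rows_in_shard = 0
    row = 0
    for length in episode_lengths:
        row += length
        rows_in_shard += length
        if rows_in_shard >= target_rows_per_shard:
            boundaries.append(row)
            rows_in_shard = 0
    if rows_in_shard:  # close the final, possibly smaller, shard
        boundaries.append(row)
    return boundaries

print(episode_shard_boundaries([10, 10, 10, 10], 15))  # [20, 40]
print(episode_shard_boundaries([5, 5, 7], 10))         # [10, 17]
```

On load, each rank could then select only the half-open row ranges between consecutive boundaries, giving the subset-loading behavior the question asks about without ever splitting an episode.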
huggingface/trl
2,913
How to specify the GPU used by vLLM
https://github.com/huggingface/trl/blob/a92e00e810762548787fadd5c4a5e6fc13a4928a/trl/trainer/grpo_trainer.py#L392 I have an 8-GPU server, of which only the last two GPUs are available, and I set CUDA_VISIBLE_DEVICES=6,7; the value of torch.cuda.device_count() is 2. I want to load vLLM onto GPU 6, and I set vllm_device=cuda:6, but this line of code keeps giving a ValueError. What should I do?
https://github.com/huggingface/trl/issues/2913
closed
[ "❓ question" ]
2025-02-20T10:32:30Z
2025-02-21T03:14:13Z
null
xiaolizh1
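For what it's worth, the renumbering that typically trips this up can be shown without a GPU: once `CUDA_VISIBLE_DEVICES=6,7` is set, CUDA renumbers the visible devices from zero, so physical GPU 6 becomes `cuda:0` and GPU 7 becomes `cuda:1` inside the process. If that is the culprit here, `vllm_device` would need a logical index rather than the physical one — worth verifying against the TRL version in use. A stdlib sketch of the mapping:

```python
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "6,7"

# CUDA renumbers the visible devices starting from zero, so the logical
# name of each physical GPU is determined by its position in the list.
visible = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
logical = {int(phys): f"cuda:{i}" for i, phys in enumerate(visible)}

print(logical[6])  # cuda:0
print(logical[7])  # cuda:1
```

So with only GPUs 6 and 7 visible, asking for `cuda:6` refers to a device index that does not exist in the process, which is consistent with the ValueError.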
huggingface/open-r1
381
How to set sampling parameters when doing evaluation
As you said, you use greedy decoding to reproduce DeepSeek's evaluation results, but I get a different score, so something may not be aligned. I want to know how to set the sampling parameters, and how to see them, when I use 'evaluate.py' to do evaluation.
https://github.com/huggingface/open-r1/issues/381
open
[]
2025-02-20T08:41:26Z
2025-02-24T06:57:59Z
null
ItGirls
huggingface/open-r1
380
How to set the CUDA device for your data generation pipeline
Hi author, thanks for your work. When I use your pipeline to generate a dataset (deepseek-ai/DeepSeek-R1-Distill-Qwen-7B), I find I cannot set the device with os.environ. ![Image](https://github.com/user-attachments/assets/ff7bc85f-63a0-4618-80f0-f0516081e7ec) It is actually always on cuda:0; how can I set it correctly? Thank you!
https://github.com/huggingface/open-r1/issues/380
open
[]
2025-02-20T07:06:44Z
2025-02-20T07:06:44Z
null
Aristo23333
huggingface/transformers
36,293
Bug in v4.49 where the attention mask is ignored during generation (t5-small)
### System Info Hi all! First, thank you very much for your hard work and for making these features available. I'm seeing a bug after updating to v4.49 where the output changes even though the attention mask should be masking padded values. Below is a script to reproduce the error. It will tokenize two prompts, and then call `.generate` on the shorter prompt while trying different slices of the padded `input_ids` and padded `attention_mask`. At some point, the generated response will change for v4.49 but not v4.48. Environment information:

```
- `transformers` version: 4.49.0
- Platform: macOS-15.3-arm64-arm-64bit
- Python version: 3.10.13
- Huggingface_hub version: 0.29.0
- Safetensors version: 0.5.2
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
```

Output of `uv pip compile requirements.in`:

```
transformers==4.48.0  # change this to 4.49.0 to reproduce the error
asttokens==3.0.0
certifi==2025.1.31
charset-normalizer==3.4.1
decorator==5.1.1
exceptiongroup==1.2.2
executing==2.2.0
filelock==3.17.0
fsspec==2025.2.0
huggingface-hub==0.29.0
idna==3.10
ipython==8.32.0
jedi==0.19.2
jinja2==3.1.5
markupsafe==3.0.2
matplotlib-inline==0.1.7
mpmath==1.3.0
networkx==3.4.2
numpy==2.2.3
packaging==24.2
parso==0.8.4
pexpect==4.9.0
prompt-toolkit==3.0.50
ptyprocess==0.7.0
pure-eval==0.2.3
pygments==2.19.1
pyyaml==6.0.2
regex==2024.11.6
requests==2.32.3
safetensors==0.5.2
sentencepiece==0.2.0
stack-data==0.6.3
sympy==1.13.1
tokenizers==0.21.0
torch==2.6.0
tqdm==4.67.1
traitlets==5.14.3
typing-extensions==4.12.2
urllib3==2.3.0
wcwidth==0.2.13
```

### Who can help? @ArthurZucker

### Information
- [x] The official example scripts
- [x] My own modified scripts

### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)

### Reproduction

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
tokenizer = AutoTokenizer.from_pretrained("t5-small")

cfg = GenerationConfig(
    max_new_tokens=512,
    do_sample=False,
    use_cache=True,  # same behavior with use_cache=False
)

shortprompt = ("summarize: Transformers v4.49 appears to have a bug where .generate stops respecting "
               "the attention_mask after some number of tokens.")
longprompt = ("summarize: I enjoy walking with my cute dog, especially in the early mornings "
              "when the air is crisp and the streets are quiet. Watching my dog happily trot along, "
              "always brings a smile to my face.")

# ---
print("# Single prompt ---")
inputs = tokenizer([shortprompt], return_tensors="pt", padding=True)
outputs = model.generate(**inputs, generation_config=cfg)
expected = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(f"short prompt: '{expected}'")
print()

# ---
print("# Double prompt ---")
inputs = tokenizer([shortprompt, longprompt], return_tensors="pt", padding=True)
outputs = model.generate(**inputs, generation_config=cfg)
text = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(f"short prompt: '{text[0]}'")
print(f"long prompt: '{text[1]}'")
print()

# ---
print("# Single shortprompt with mask ---")

def run_sliced_input(slice_, show_text=False):
    shortprompt_tokens = inputs.input_ids[0:1, slice_]
    shortprompt_mask = inputs.attention_mask[0:1, slice_]
    outputs = model.generate(inputs=shortprompt_tokens, attention_mask=shortprompt_mask, generation_config=cfg)
    text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
    if show_text:
        print(f"'{text}'")
    return text != expected

# run a bisect search to find the first slice that fails
import bisect

start = inputs.attention_mask[0].sum().item()
full_range = inputs.attention_mask.size(1)
ends = range(start, full_range)
print(f"searching in range {start} to {full_range}")
first_failure = start + bisect.bisect_left(
    [slice(None, end) for end in ends], True, key=run_sliced_input
)
if first_failure == full_range:
    print("No failure found in the full range!")
else:
    print(f"First failing slice: {first_failure}")
    print(f"Output with slice at {first_failure-1}: ", end="")
    run_sliced_input(slice(None, first_failure-1), show_text=True)
    print(f"Output with slice at {first_failure}: ", end="")
    run_sliced_input(slice(None, first_failure), show_text=True)
```

### Expected behavior version 4.48

```
# Single prompt ---
short prompt: 'v4.49 appears to have a bug where.generate stops respecting the attention_mask after some tokens.'

# Double prompt ---
short prompt: 'v4.49 appears to have a bug w
https://github.com/huggingface/transformers/issues/36293
closed
[ "bug" ]
2025-02-20T02:16:23Z
2025-02-20T16:28:11Z
null
bdhammel
huggingface/optimum-nvidia
176
How to run whisper after #133
I see that previously, whisper could be run as follows: [https://github.com/huggingface/optimum-nvidia/blob/whisper-inference/examples/automatic-speech-recognition/whisper.py](https://github.com/huggingface/optimum-nvidia/blob/whisper-inference/examples/automatic-speech-recognition/whisper.py) But after #133 the code has been significantly refactored. Is there any documentation that shows how to properly run whisper with a tensorRT backend? ```python from optimum.nvidia.pipelines import pipeline asr = pipeline("automatic-speech-recognition", model="openai/whisper-base", device=device) > NotImplementedError: Model type whisper is not currently supported ``` ```python from optimum.nvidia.models.whisper import WhisperForConditionalGeneration model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base", torch_dtype=torch_dtype) > AttributeError: type object 'WhisperForConditionalGeneration' has no attribute 'from_pretrained' ```
https://github.com/huggingface/optimum-nvidia/issues/176
open
[]
2025-02-19T17:45:01Z
2025-02-19T17:45:01Z
null
huggingfacename
huggingface/peft
2,388
ValueError: Target module Qwen2_5_VisionTransformerPretrainedModel is not supported.
## Context I'm finetuning the Qwen2.5-Vl model with swift for data extraction using LoRA. I'm not sure what is the correct way to save and upload the adapter and be able to recharge it correctly. In short, I followed these steps ```python # load model model, processor = get_model_tokenizer( 'Qwen/Qwen2.5-VL-3B-Instruct', torch_dtype=torch.bfloat16, use_hf=True, attn_impl="flash_attn", ) # get lora ... model_arch = get_model_arch(model.model_meta.model_arch) lora_config = LoraConfig( task_type='CAUSAL_LM', r=4, lora_alpha=8, lora_dropout=0.05, use_rslora=True, target_modules=get_multimodal_target_regex( model_arch, freeze_llm=False, freeze_vit=False, freeze_aligner=True ), ) model = Swift.prepare_model(model, lora_config) # train config e run ... trainer = Seq2SeqTrainer( model=model, args=training_args, data_collator=template.data_collator, train_dataset=train_dataset, eval_dataset=val_dataset, template=template, callbacks= [ EarlyStoppingCallback( early_stopping_patience=6, early_stopping_threshold=0.001 ) ] ) stats = trainer.train() # push adapter model.push_to_hub(f"tech4humans/{model_name}", private=True) ``` debugging the peft model was loaded with the class `PeftModelForCausalLM`. ## Problem Then after I tried to recharge the adapter and I get an error with peft ```python from transformers import Qwen2_5_VLForConditionalGeneration model = Qwen2_5_VLForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct", device_map="auto") model.load_adapter("tech4humans/Qwen2.5-VL-3B-Instruct-r4-tuned") ``` ```python /usr/local/lib/python3.10/dist-packages/peft/tuners/lora/model.py in _create_new_module(lora_config, adapter_name, target, **kwargs) 345 if new_module is None: 346 # no module could be matched --> 347 raise ValueError( 348 f"Target module {target} is not supported. Currently, only the following modules are supported: " 349 "`torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv1d`, `torch.nn.Conv2d`, `torch.nn.Conv3d`, ". 
ValueError: Target module Qwen2_5_VisionTransformerPretrainedModel( (patch_embed): Qwen2_5_VisionPatchEmbed( (proj): Conv3d(3, 1280, kernel_size=(2, 14, 14), stride=(2, 14, 14), bias=False) ) (rotary_pos_emb): Qwen2_5_VisionRotaryEmbedding() (blocks): ModuleList( (0-31): 32 x Qwen2_5_VLVisionBlock( (norm1): Qwen2RMSNorm((1280,), eps=1e-06) (norm2): Qwen2RMSNorm((1280,), eps=1e-06) (attn): Qwen2_5_VLVisionSdpaAttention( (qkv): Linear(in_features=1280, out_features=3840, bias=True) (proj): Linear(in_features=1280, out_features=1280, bias=True) ) (mlp): Qwen2_5_VLMLP( (gate_proj): Linear(in_features=1280, out_features=3420, bias=True) (up_proj): Linear(in_features=1280, out_features=3420, bias=True) (down_proj): Linear(in_features=3420, out_features=1280, bias=True) (act_fn): SiLU() ) ) ) (merger): Qwen2_5_VLPatchMerger( (ln_q): Qwen2RMSNorm((1280,), eps=1e-06) (mlp): Sequential( (0): Linear(in_features=5120, out_features=5120, bias=True) (1): GELU(approximate='none') (2): Linear(in_features=5120, out_features=2048, bias=True) ) ) ) is not supported. Currently, only the following modules are supported: `torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv1d`, `torch.nn.Conv2d`, `torch.nn.Conv3d`, `transformers.pytorch_utils.Conv1D`, `torch.nn.MultiheadAttention.`. ``` ## Sytem info ``` transformers 4.50.0.dev0 peft 0.14.1.dev0 ms-swift 3.2.0.dev0 Python 3.10.12 CUDA Version: 12.6 ``` Am I missing something or doing something wrong? Any pointers would be appreciated. Thanks!
https://github.com/huggingface/peft/issues/2388
closed
[]
2025-02-19T15:09:17Z
2025-04-09T16:23:53Z
8
samuellimabraz
huggingface/trl
2,905
How to use GRPOTrainer to train an LLM for code generation? What is the format of the dataset?
https://github.com/huggingface/trl/issues/2905
open
[]
2025-02-19T12:38:13Z
2025-02-19T12:38:13Z
null
xiangxinhello
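The thread above has no answer; as a hedged sketch of the usual GRPO setup for code generation: the training dataset carries a "prompt" column, and rewards come from functions that score each completion, here by executing it against unit tests. The reward signature follows TRL's documented pattern but may differ between versions, and `solve` plus the test cases are made up for illustration:

```python
# Hedged sketch of a code-generation reward for GRPO-style training.
# TRL's GRPOTrainer expects a dataset with a "prompt" column and reward
# functions taking `completions` and returning a list of floats; verify
# the exact signature against your TRL version. `solve` is a made-up
# function name the prompt asks the model to define.

def make_code_reward(test_cases):
    """Build a reward function that executes each completion against
    simple (inputs, expected) test cases and scores the pass rate."""
    def reward(completions, **kwargs):
        scores = []
        for code in completions:
            passed = 0
            for inputs, expected in test_cases:
                env = {}
                try:
                    exec(code, env)  # define the candidate function
                    if env["solve"](*inputs) == expected:
                        passed += 1
                except Exception:
                    pass  # crashing or invalid code scores zero for this case
            scores.append(passed / len(test_cases))
        return scores
    return reward

# A dataset row then just carries the prompt text:
dataset_row = {"prompt": "Write a Python function solve(a, b) that returns a + b."}

reward_fn = make_code_reward([((1, 2), 3), ((0, 0), 0)])
print(reward_fn(["def solve(a, b):\n    return a + b"]))  # -> [1.0]
```

Note this assumes the "standard" (plain-string) completion format; conversational datasets wrap completions in message dicts and would need unwrapping first.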
huggingface/open-r1
370
how to train grpo on 2 nodes(16gpus)
How to train GRPO on 2 nodes (16 GPUs)? Many thanks for providing a successful example.
https://github.com/huggingface/open-r1/issues/370
closed
[]
2025-02-19T09:15:14Z
2025-03-26T11:36:03Z
null
glennccc
huggingface/finetrainers
267
How to save the best performing checkpoint during LoRA fine-tuning on Hunyuan Video?
In the HunyuanVideo training scripts, we can save checkpoints every 500 steps by passing `--checkpointing_steps 500`. The final model is saved through the following code: ```python if accelerator.is_main_process: transformer = unwrap_model(accelerator, self.transformer) if self.args.training_type == "lora": transformer_lora_layers = get_peft_model_state_dict(transformer) self.model_config["pipeline_cls"].save_lora_weights( save_directory=self.args.output_dir, transformer_lora_layers=transformer_lora_layers, ) else: transformer.save_pretrained(os.path.join(self.args.output_dir, "transformer")) ``` (Reference: https://github.com/a-r-r-o-w/finetrainers/blob/4bb10c62324aef4fbac85bb381acb9f6f39a5076/finetrainers/trainer.py#L837C1-L848C95) My question is: How can I ensure that I save the best performing model during LoRA fine-tuning? The final saved model might not be the best, as the loss could fluctuate during training. The same applies to intermediate checkpoints. Is there a recommended approach for tracking and saving the best-performing model?
https://github.com/huggingface/finetrainers/issues/267
open
[]
2025-02-19T07:49:11Z
2025-02-21T01:39:30Z
null
dingangui
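One generic answer to the question above is to track the best validation loss alongside the periodic checkpoints and only persist the weights when it improves; a minimal, framework-agnostic sketch (the class name is made up, this is not part of finetrainers):

```python
class BestCheckpointTracker:
    """Track the best (lowest) validation loss seen so far and report
    when a new best checkpoint should be written, e.g. by calling
    save_lora_weights() from the training loop when update() is True."""

    def __init__(self):
        self.best_loss = float("inf")
        self.best_step = None

    def update(self, step, val_loss):
        if val_loss < self.best_loss:
            self.best_loss = val_loss
            self.best_step = step
            return True  # caller should save a "best" checkpoint now
        return False

tracker = BestCheckpointTracker()
history = [(500, 0.42), (1000, 0.35), (1500, 0.39), (2000, 0.31)]
saves = [step for step, loss in history if tracker.update(step, loss)]
print(saves, tracker.best_loss)  # -> [500, 1000, 2000] 0.31
```

In a real run, "best" should be judged on a held-out validation loss (or a sample-quality metric for video), not the noisy training loss.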
huggingface/lerobot
748
[pi0] confusion about the state embedding dimension in `embed_suffix`
### System Info ```Shell - `lerobot` version: 0.1.0 - Platform: Linux-5.14.0-284.86.1.el9_2.x86_64-x86_64-with-glibc2.35 - Python version: 3.11.11 - Huggingface_hub version: 0.28.1 - Dataset version: 3.2.0 - Numpy version: 1.26.4 - PyTorch version (GPU?): 2.6.0+cu124 (True) - Cuda version: 12040 - Using GPU in script?: Yes ``` ### Information - [x] One of the scripts in the examples/ folder of LeRobot - [ ] My own task or dataset (give details below) ### Reproduction In the model definition of `modeling_pi0.py`,[ line 567](https://github.com/huggingface/lerobot/blob/fe483b1d0d4ad8506f61924d905943eaa6d3ece0/lerobot/common/policies/pi0/modeling_pi0.py#L567), we see that ``` # Embed state state_emb = self.state_proj(state) state_emb = state_emb.to(dtype=torch.bfloat16) embs.append(state_emb[:, None, :]) bsize = state_emb.shape[0] dtype = state_emb.dtype device = state_emb.device ``` We see that the state embedding dimension is bumped up at the 1st dimension. The problem is, models like pi0 usually use datasets that have `n_obs_steps.`, which is the default of LeRobot's own datasets as well. For example, if I use the `pusht` dataset as specified in this LeRobot example [script](https://github.com/huggingface/lerobot/blob/main/examples/3_train_policy.py), we see that the dimension of the dataset looks something like this ``` image shape torch.Size([64, 2, 3, 96, 96]) state shape torch.Size([64, 2, 2]) action shape torch.Size([64, 16, 2]) ``` The first 2 in the dimensions of image and state come from the fact that the dataset gives you two frames of the past in one batch. The 16 in action comes from the fact that diffusion policy has an action horizon of 16 frames in the future. 
Now, if we train on a dataset like this or any similar dataset, it would have a dimension mismatch in `embed_suffix`, because it would bump the state_embedding and give you something like ``` RuntimeError: Tensors must have same number of dimensions: got 4 and 3 ``` For pi0 it's more or less okay, because the default n_obs_steps is usually 1, so you can squeeze out the 1st dimension of state, but this current way doesn't seem very extensible in the future, and it is also not consistent with LeRobot's usual dataset format. ### Expected behavior I would like to hear some reasoning behind a design choice like this, so I can know if I am misunderstanding something. Thank you very much in advance!
https://github.com/huggingface/lerobot/issues/748
closed
[ "question", "policies", "stale" ]
2025-02-19T03:33:01Z
2025-10-20T02:31:45Z
null
IrvingF7
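The reporter's workaround above (squeezing the n_obs_steps dimension when it equals 1) can be sketched as pure shape logic. This is illustrative only, not LeRobot code, and the function name is made up:

```python
def normalize_state_shape(shape, n_obs_steps=1):
    """Illustrative shape logic only: collapse a (batch, n_obs_steps, dim)
    state shape into (batch, dim) when a single observation step is used,
    which is the squeeze that lets pi0's embed_suffix line up. With
    n_obs_steps > 1 the policy would need to handle history explicitly."""
    if len(shape) == 3 and shape[1] == n_obs_steps == 1:
        return (shape[0], shape[2])
    return shape

print(normalize_state_shape((64, 1, 2)))                 # -> (64, 2)
print(normalize_state_shape((64, 2, 2), n_obs_steps=2))  # unchanged
```

The same decision applies to image tensors, which also gain an n_obs_steps axis in LeRobot's default batch format.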
huggingface/transformers.js
1,198
whisper: how to get streaming word level timestamps? (automatic-speech-recognition)
### Question ## Goal - streaming - word level timestamps ## Issue `on_chunk_start` / `on_chunk_end` are not called when using `return_timestamps: "word"`. These callbacks only provide timestamps with `return_timestamps: true` I also tried to decode tokens, as I’ve seen it in the demo, but that uses callbacks that no longer exist (e.g. `chunk_callback(chunk)` and `callback_function(item)`) ## Setup ```ts const transcriber = await pipeline( "automatic-speech-recognition", "Xenova/whisper-tiny", { device: "webgpu", } ); ``` ```ts token_callback_function: (tokens) => { const { feature_extractor } = transcriber.processor; const { config: modelConfig } = transcriber.model; const time_precision = feature_extractor.config.chunk_length / modelConfig.max_source_positions; if (tokens) { const data = transcriber.tokenizer._decode_asr( [{ tokens, finalised: false }], { time_precision, return_timestamps: true, force_full_sequences: false, } ); console.log("data", data); } }; ``` Decoding works, but timestamps are null. <img width="370" alt="Image" src="https://github.com/user-attachments/assets/38779a91-7a2a-43c3-be29-cd785e294378" />
https://github.com/huggingface/transformers.js/issues/1198
open
[ "question" ]
2025-02-18T15:29:42Z
2025-02-20T04:45:48Z
null
getflourish
huggingface/diffusers
10,817
auto_pipeline missing SD3 contol nets
### Describe the bug Hey, auto_pipeline seems to be missing the controlnet variants for SD3 venv\Lib\site-packages\diffusers\pipelines\auto_pipeline.py ### Reproduction Load an SD3 model checkpoint with a controlnet using any of the auto pipes; you will just get the non-controlnet variants, as they are not set in the configuration. ### Logs ```shell ``` ### System Info - 🤗 Diffusers version: 0.32.2 - Platform: Windows-10-10.0.19045-SP0 - Running on Google Colab?: No - Python version: 3.12.7 - PyTorch version (GPU?): 2.5.1+cu124 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Huggingface_hub version: 0.27.1 - Transformers version: 4.48.0 - Accelerate version: 1.2.1 - PEFT version: not installed - Bitsandbytes version: 0.45.2 - Safetensors version: 0.5.2 - xFormers version: not installed - Accelerator: NVIDIA GeForce RTX 3080 Ti, 12288 MiB - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_
https://github.com/huggingface/diffusers/issues/10817
closed
[ "bug", "help wanted", "contributions-welcome" ]
2025-02-18T12:54:40Z
2025-02-24T16:21:03Z
3
JoeGaffney
huggingface/lerobot
746
How should I run the model on my own datasets in different envs which is not clearly mentioned in the README?
I want to run the diffusion model on my own real-world arm datasets, which differ from the example env and input format in observation and action dims. I've seen some yaml files storing these parameters in earlier versions of the repo, but I can't find them in the newest version. So should I write these params myself in some yaml- or json-like files, or are there new ways to solve this? This is my first issue on github, so the format may be informal, but I'm really eager for the answers. Thank you for your answers!!!
https://github.com/huggingface/lerobot/issues/746
closed
[ "question", "policies", "dataset", "stale" ]
2025-02-18T12:33:07Z
2025-10-19T02:32:17Z
null
shi-akihi
huggingface/lerobot
741
Inquiry on Implementing NoMaD Model (Transformers and Diffusion Policy)
I am planning to implement the NoMaD model, which combines Transformers and Diffusion Policy, within the LeRobot project. Before proceeding, I wanted to check if anyone else is currently working on or has already started implementing this model. For reference, here are the relevant resources: Website: https://general-navigation-models.github.io/nomad/ Paper: https://arxiv.org/pdf/2310.07896 Please let me know if there is ongoing work related to this model or if anyone is interested in collaborating.
https://github.com/huggingface/lerobot/issues/741
closed
[ "question", "stale" ]
2025-02-17T19:57:23Z
2025-10-08T20:56:42Z
null
vaishanth-rmrj
huggingface/lerobot
738
convert simulation data of insertion from v1 to v2
I cannot convert using the file (datasets/v2/convert_dataset_v1_to_v2.py), which requires a robotconfig that I don't have. I just want to convert your data on lerobot/act_aloha_sim_transfer_cube_human.
https://github.com/huggingface/lerobot/issues/738
closed
[ "question", "dataset", "stale" ]
2025-02-17T11:00:38Z
2025-10-08T08:59:52Z
null
AbdElrahmanMostafaRifaat1432
huggingface/open-r1
340
About the data using in sft, how to set SFTConfig.dataset_text_field?
How to use HuggingFaceH4/Bespoke-Stratos-17k in SFT? I find there are two fields in the data, "system" and "conversations". So, when I download this data to finetune an LLM such as Qwen2.5-1.5B-Instruct, how should I organize it? In trl, SFTConfig has a parameter named dataset_text_field whose default value is "text", which does not exist in this data (Bespoke-Stratos-17k).
https://github.com/huggingface/open-r1/issues/340
open
[]
2025-02-17T07:06:14Z
2025-02-20T08:59:49Z
null
ItGirls
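A common way to satisfy the question above is to map each example's "system" and "conversations" fields into a single string and hand that to the trainer (e.g. via a formatting function) instead of relying on dataset_text_field="text". This is a hedged sketch: the field names come from the question, and the `<|...|>` role markers are illustrative rather than any model's real chat template (in practice you would use the tokenizer's apply_chat_template):

```python
def format_example(example):
    """Render one Bespoke-Stratos-style example ("system" plus a list of
    {"from": ..., "value": ...} turns) into a single training string.
    Field names follow the question above; verify them against the
    actual dataset schema before training."""
    parts = [f"<|system|>\n{example['system']}"]
    for turn in example["conversations"]:
        # some dumps use "human"/"gpt" instead of "user"/"assistant"
        role = "user" if turn["from"] in ("user", "human") else "assistant"
        parts.append(f"<|{role}|>\n{turn['value']}")
    return "\n".join(parts)

example = {
    "system": "You are a helpful assistant.",
    "conversations": [
        {"from": "user", "value": "What is 2+2?"},
        {"from": "assistant", "value": "4"},
    ],
}
print(format_example(example))
```

You could apply this with `dataset.map` to produce a "text" column, or pass a function like this directly as SFTTrainer's formatting function, depending on your TRL version.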
huggingface/finetrainers
264
How to set --precompute_conditions for CogvideoI2V training?
I can't find this feature in Image2Video training. Does it exist?
https://github.com/huggingface/finetrainers/issues/264
open
[]
2025-02-17T06:00:50Z
2025-03-05T03:49:05Z
null
BlackTea-c
huggingface/diffusers
10,805
Is there an inpainting dataset and parameters example provided for XL training?
**What API design would you like to have changed or added to the library? Why?** **What use case would this enable or better enable? Can you give us a code example?** Hi @patil-suraj, thanks for the convenient script! Is there any code example and dataset example to run the script: https://github.com/huggingface/diffusers/blob/inpainting-script/examples/inpainting/train_inpainting_sdxl.py ?
https://github.com/huggingface/diffusers/issues/10805
closed
[]
2025-02-17T01:56:14Z
2025-02-17T02:03:09Z
2
fire2323
huggingface/gsplat.js
109
Info request: How to update individual points in splat?
I would like to update the position of individual points dynamically in order to create animations and effects. What would be the optimal way to do it?
https://github.com/huggingface/gsplat.js/issues/109
open
[]
2025-02-16T18:11:14Z
2025-02-16T18:43:23Z
null
sjovanovic
huggingface/diffusers
10,803
SANARubber a flexible version of SANA with i2i and multidiffusion/regional diffusion
### Model/Pipeline/Scheduler description I made a pipeline that is as reliable as the basic SANA pipeline but more flexible by making it run an array of functions which runs everything the og pipeline does. this can make easy combinations if necessary. here's the link, enjoy https://github.com/alexblattner/SANARubber example of multidiffusion in sana: ['bright moon','red','blue','green','black'] (first prompt is applied in the background ["0:0-512:512","512:0-1024:512","512:1024-1024:1024","0:512-512:1024"] those are the areas of the rest of the prompts [.7,.7,.7,.7] those are the strengths of the areas applied with their prompts ![Image](https://github.com/user-attachments/assets/98e207f5-a229-4a91-9349-6824095bc50c) again with i2i at stength .5 and the same settings as before (mild changes only): ![Image](https://github.com/user-attachments/assets/65329495-ea25-42e4-b8f7-d4fbc4be8a19) ENJOY! ### Open source status - [x] The model implementation is available. - [ ] The model weights are available (Only relevant if addition is not a scheduler). ### Provide useful links for the implementation _No response_
https://github.com/huggingface/diffusers/issues/10803
open
[ "stale" ]
2025-02-16T15:08:11Z
2025-03-19T15:03:31Z
1
alexblattner
huggingface/candle
2,774
Dumb Question: How to do forward hooks?
For example, I want to extract activations of intermediate layers. How do I register forward hooks similar to PyTorch, or is there a comparable paradigm in candle for this?
https://github.com/huggingface/candle/issues/2774
open
[]
2025-02-16T12:41:26Z
2025-02-16T12:41:26Z
null
pzdkn
huggingface/diffusers
10,799
Effective region mask for controlnet
Hi, I just want to ask: is there any way to use controlnet with a mask like [this](https://github.com/Mikubill/sd-webui-controlnet/discussions/2831)? As you know, ComfyUI and webui support an effective region (a mask for the controlnet's effect), but I can't find how to do this with diffusers.
https://github.com/huggingface/diffusers/issues/10799
closed
[ "stale" ]
2025-02-15T17:42:20Z
2025-04-03T04:01:37Z
8
Suprhimp
huggingface/swift-coreml-diffusers
102
Question: how to use this in my own Swift project for inference?
How would I run diffusers on device on all apple devices in my swift Xcode project?
https://github.com/huggingface/swift-coreml-diffusers/issues/102
open
[]
2025-02-15T15:56:36Z
2025-02-15T15:56:36Z
null
SpyC0der77
huggingface/transformers.js
1,194
How do I know which ONNX transformation models are available? (Errors when loading models with CDN)
### Question I am using a CDN to load the models, as shown in the code below. I filtered the models in HuggingFace the way you recommend (text-generation, transformers.js) and put the id of the model I looked up. As I understand it, to change the model, I only need to change the model id. However, I get an error for each of the below models. `Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'model')` - **HuggingFaceTB/SmolLM2-135M-Instruct** - **Xenova/codegen-350M-mono** ... `Uncaught (in promise) Error: Can't create a session. ERROR_CODE: 1, ERROR_MESSAGE: Deserialize tensor model.layers.4.mlp.gate_proj.MatMul.weight_Q4 failed.Failed to load external data file ""model_q4f16.onnx_data"", error: Module.MountedFiles is not available.` - **onnx-community/Phi-3.5-mini-instruct-onnx-web** ... I'm ultimately saying that I don't know what model will be available. Additionally, I was wondering if there is a way to know 'in advance' which 'dtype' and 'device' can be supported. ``` import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.3.3'; generator = await pipeline('text-generation', 'onnx-community/DeepSeek-R1-Distill-Qwen-1.5B-ONNX', { dtype: "auto", device: "auto", }); ```
https://github.com/huggingface/transformers.js/issues/1194
open
[ "question" ]
2025-02-15T10:31:32Z
2025-02-16T14:02:08Z
null
mz-imhj
huggingface/open-r1
333
how to use tensorboard instead of wandb?
https://github.com/huggingface/open-r1/issues/333
closed
[]
2025-02-15T08:00:06Z
2025-02-15T08:02:35Z
null
ngrxmu
huggingface/diffusers
10,796
Docs for HunyuanVideo LoRA?
### Describe the bug As it seems like LoRA loading on HunyuanVideo has been implemented, I wonder where I can find the docs on this? Are they missing? ### Reproduction Search for HunyuanVideo and LoRA ### Logs ```shell ``` ### System Info As it is the online docs... ### Who can help? @stevhliu @sayakpaul
https://github.com/huggingface/diffusers/issues/10796
closed
[ "bug", "stale" ]
2025-02-15T04:31:34Z
2025-06-10T20:52:28Z
9
tin2tin
huggingface/open-r1
328
How to set generation sampling parameters?
Need to use deepseek reference settings of temperature=0.6, top_p=0.95. Greedy sampling does poorly on AIME: ## r1-1.5B - AIME24: 23.33% Tried to refer to lighteval docs and ran into issues using model config: ``` model: # Model specific parameters base_params: model_args: "pretrained=Qwen/Qwen2.5-7B-Instruct,dtype=bfloat16,max_model_length=768,gpu_memory_utilisation=0.7" # Model args that you would pass in the command line generation: # Generation specific parameters temperature: 1.0 stop_tokens: null truncate_prompt: false ``` run with: ``` TASK=aime24 lighteval vllm \ "config.yaml" \ "custom|$TASK|0|0" \ --custom-tasks tasks.py \ --use-chat-template \ --output-dir ./results/ ``` hitting: ``` TypeError: expected str, bytes or os.PathLike object, not dict ``` [ref](https://github.com/huggingface/lighteval/issues/563)
https://github.com/huggingface/open-r1/issues/328
open
[]
2025-02-14T21:42:28Z
2025-02-20T03:28:53Z
null
rawsh
huggingface/trl
2,864
How to train GRPO on 2 GPUs, one for training, one for vllm
### Reproduction When I use `Qwen2.5-3B-instruct` to train GRPO, the device for vllm always appear OOM when loading weights. II used two GPUs with 32GB of memory, one device for training, another for vllm. I dont know why a 3B model using so much memory on `device 1` ![Image](https://github.com/user-attachments/assets/79dfd03c-d123-496d-9fcc-07afc3027dff) arguments settings: ```yaml per_device_train_batch_size: 8 gradient_accumulation_steps: 8 num_generations: 8 use_vllm: true vllm_gpu_memory_utilization: 0.8 use_peft: true lora_r: 64 lora_alpha: 64 load_in_4bit: true use_bnb_nested_quant: true attn_implementation: flash_attention_2 bf16: true ... ``` Start command: ```shell export CUDA_VISIBLE_DEVICES=0,1 accelerate launch --num_processes 1 train_Datawhale-R1.py --config Datawhale-R1.yaml ``` ### System Info - Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35 - Python version: 3.10.8 - PyTorch version: 2.5.1 - CUDA device(s): NVIDIA vGPU-32GB, NVIDIA vGPU-32GB - Transformers version: 4.48.3 - Accelerate version: 1.3.0 - Accelerate config: not found - Datasets version: 3.1.0 - HF Hub version: 0.27.0 - TRL version: 0.16.0.dev0+ffcb9f4 - bitsandbytes version: 0.45.2 - DeepSpeed version: 0.16.3 - Diffusers version: 0.32.2 - Liger-Kernel version: not installed - LLM-Blender version: not installed - OpenAI version: 1.59.7 - PEFT version: 0.14.0 ### Checklist - [x] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue)) - [x] I have included my system information - [x] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks)) - [x] Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks)) - 
[x] Any traceback provided is complete
https://github.com/huggingface/trl/issues/2864
open
[ "⚡ PEFT", "⏳ needs more info", "⚡accelerate", "🏋 GRPO" ]
2025-02-14T15:00:58Z
2025-03-12T12:00:10Z
null
AIR-hl
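One likely explanation for the screenshot above: vLLM preallocates vllm_gpu_memory_utilization of the card's VRAM up front, covering both weights and the KV cache, so high usage on the vLLM device is expected even for a 3B model. A back-of-the-envelope sketch (the numbers come from the report; the function is illustrative, not a vLLM API):

```python
def vllm_memory_estimate_gb(n_params_b, bytes_per_param, total_vram_gb, gpu_mem_util):
    """Rough split of vLLM's preallocated memory: everything inside
    gpu_memory_utilization * VRAM that is not weights is reserved for
    the KV cache. Illustrative arithmetic only."""
    weights = n_params_b * bytes_per_param       # GB, since params are in billions
    reserved = total_vram_gb * gpu_mem_util      # what vLLM grabs up front
    return weights, reserved - weights

weights, kv_cache = vllm_memory_estimate_gb(
    n_params_b=3, bytes_per_param=2,             # 3B params in bf16
    total_vram_gb=32, gpu_mem_util=0.8,          # from the reported config
)
print(weights, kv_cache)  # roughly 6 GB of weights, ~19.6 GB held for KV cache
```

So the ~25 GB shown on the vLLM device is by design; lowering vllm_gpu_memory_utilization shrinks the reservation at the cost of KV-cache capacity.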
huggingface/peft
2,377
Contributing new model merging method to PEFT
### Feature request Hi all, I noticed that several model merging methods, such as TIES and DARE, have been implemented in this library, as mentioned [here](https://github.com/huggingface/peft/blob/main/docs/source/developer_guides/model_merging.md). I was wondering if there is a way for me to contribute a recently accepted model merging method to this repo. I would really appreciate any guidance or suggestions on how to proceed. Thanks in advance! ### Motivation Enhance the diversity of model merging supported in this library. ### Your contribution I can submit a PR.
https://github.com/huggingface/peft/issues/2377
closed
[]
2025-02-14T12:17:46Z
2025-03-24T15:04:11Z
2
SpeeeedLee
huggingface/optimum
2,189
PEFT to ONNX conversion
### System Info ```shell Hello! I have a fine-tuned LLM model from Hugging Face saved in PEFT format, and it’s about 2.1 GB. When we convert it to ONNX, its size nearly doubles to about 4.1 GB. What causes this significant increase in model size after converting from PEFT to ONNX? Is there any bug under this conversion? ( Here is the code do this conversion. Need to mention: loading it in any commented formats will kill the accuracy). Thanks model = ORTModelForCausalLM.from_pretrained( peft_path, provider='OpenVINOExecutionProvider', provider_options={'device_type': 'GPU_FP16'}, # use_cache=False, #use_io_binding=False export=True, #load_in_4bit=True, #load_in_8bit=True #torch_dtype=torch.bfloat16, #device_map=device, #from_transformers=True ) tokenizer = AutoTokenizer.from_pretrained(peft_path) model.save_pretrained(onnex_path) tokenizer.save_pretrained(onnex_path) ``` ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction (minimal, reproducible, runnable) model = ORTModelForCausalLM.from_pretrained( peft_path, provider='OpenVINOExecutionProvider', provider_options={'device_type': 'GPU_FP16'}, # use_cache=False, #use_io_binding=False export=True, #load_in_4bit=True, #load_in_8bit=True #torch_dtype=torch.bfloat16, #device_map=device, #from_transformers=True ) tokenizer = AutoTokenizer.from_pretrained(peft_path) model.save_pretrained(onnex_path) tokenizer.save_pretrained(onnex_path) ### Expected behavior I need to have the OONX model with at least the same size while not loosing accuracy performance.
https://github.com/huggingface/optimum/issues/2189
open
[ "bug" ]
2025-02-13T18:21:05Z
2025-03-10T13:58:28Z
2
morteza89
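A plausible cause of the size doubling described above is a precision change on export: a checkpoint stored in fp16 (2 bytes per parameter) that gets re-serialized at fp32 (4 bytes per parameter) exactly doubles on disk. Quick arithmetic, with a hypothetical parameter count chosen to land near a ~2.1 GB fp16 file; verify the actual tensor dtypes inside the exported ONNX before concluding:

```python
def checkpoint_size_gb(n_params, bytes_per_param):
    """Approximate on-disk size of a dense checkpoint in GiB."""
    return n_params * bytes_per_param / 1024**3

n_params = 1.1e9  # hypothetical count, roughly the scale a 2.1 GB fp16 file implies
fp16_gb = checkpoint_size_gb(n_params, 2)
fp32_gb = checkpoint_size_gb(n_params, 4)
print(round(fp16_gb, 1), round(fp32_gb, 1))  # roughly 2.0 and 4.1, a 2x ratio
```

If that is the cause, exporting with fp16 weights (or quantizing after export) should restore the original footprint without the accuracy loss seen from 4/8-bit loading.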
huggingface/agents-course
113
Show how to use Inference Providers for inference
Can be helpful for students to explore different models easily.
https://github.com/huggingface/agents-course/issues/113
open
[]
2025-02-13T07:46:01Z
2025-02-13T08:04:58Z
null
pcuenca
huggingface/lerobot
718
Hand-Eye Calibration for LeRobot
Hello, I am starting a project where I plan to use LeRobot for pick-and-place tasks utilizing classical robotics and vision techniques. I am wondering if anyone has experience with performing hand-eye calibration for this robot. My major concern is that the high-mounted camera is usually parallel to the arm, which may make it difficult for the camera to see the Aruco marker. Does anyone have any suggestions or insights on how to approach this? Thank you!
https://github.com/huggingface/lerobot/issues/718
closed
[ "question", "stale" ]
2025-02-12T05:44:09Z
2025-12-21T02:59:43Z
null
Akumar201
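For reference on the question above: classical hand-eye calibration is usually posed as the AX = XB problem over pairs of robot and camera motions (a sketch; exact conventions for composing the motions vary between references):

```latex
% A_i: relative gripper motion between poses i and i+1 (from forward kinematics)
% B_i: relative motion of the calibration target as seen by the camera
% X:   the fixed, unknown camera-to-robot transform being solved for
A_i X = X B_i \quad \text{for all motion pairs } i
```

OpenCV ships solvers for this family (e.g. `cv2.calibrateHandEye`); the practical constraint raised above, keeping the ArUco marker visible from a camera mounted parallel to the arm, is independent of the solver and is typically handled by tilting the camera or the marker board.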
huggingface/optimum-neuron
782
Docs on how to compile a pre-trained transformer
Hello, I am experimenting with Transformers and trying to run them on AWS Inferentia. I checked the official [docs](https://huggingface.co/docs/optimum-neuron/index) but I could not find a clear answer to my current problem. I currently have a customized model based on the [ALBERT transformer](https://huggingface.co/docs/transformers/en/model_doc/albert) that I fine-tuned and for which I exported the weights. ```python from transformers import AlbertConfig, AlbertModel import torch config_dict= { "vocab_size": 178, "hidden_size": 768, "num_attention_heads": 12, "intermediate_size": 2048, "max_position_embeddings": 512, "num_hidden_layers": 12, "dropout": 0.1, } albert_config = AlbertConfig(**config_dict) model = AlbertModel(albert_config) weights = torch.load("path/to/weights.pt") model.load_state_dict(weights) ``` My question is, how do I go from the model above to compiling it for AWS Inferentia using the `optimum-neuron` python library programmatically? I could not find documented examples or snippets for this use-case.
https://github.com/huggingface/optimum-neuron/issues/782
closed
[ "Stale" ]
2025-02-11T23:36:13Z
2025-03-20T08:05:40Z
null
efemaer
huggingface/diffusers
10,772
Sana Controlnet Support
**Is your feature request related to a problem? Please describe.** The first controlnet for Sana has appeared, so the feature is to add the sana controlnet to the diffusers pipeline https://github.com/NVlabs/Sana/blob/main/asset/docs/sana_controlnet.md **Describe the solution you'd like.** Be able to use the sana controlnet **Describe alternatives you've considered.** Using the sana repo
https://github.com/huggingface/diffusers/issues/10772
closed
[ "help wanted", "Good second issue", "contributions-welcome", "roadmap" ]
2025-02-11T22:39:10Z
2025-04-13T13:49:40Z
5
jloveric
huggingface/smolagents
610
Is this normal? I'm getting this a lot
Hey, is this normal? ![Image](https://github.com/user-attachments/assets/8da7d739-10c4-4bd3-bc1d-78db00c707bd) Also, I get `out: None`. Is this OK as well?
https://github.com/huggingface/smolagents/issues/610
closed
[ "question" ]
2025-02-11T22:05:27Z
2025-03-19T07:12:32Z
null
Mhdaw
huggingface/agents-course
77
[QUESTION] Why am I able to select multiple options in Quick Quiz?
In quick quizzes, since only a single answer is correct, shouldn't it only be possible to choose a single option, instead of being able to select all at once to see the correct answer?
https://github.com/huggingface/agents-course/issues/77
closed
[ "question" ]
2025-02-11T17:35:31Z
2025-02-13T07:20:59Z
null
Devrajsinh-Gohil
huggingface/agents-course
66
[QUESTION] About the **Thought: Internal Reasoning and the Re-Act Approach** section of UNIT 1
I am a bit confused about the ReAct prompting example at the end of the **Thought: Internal Reasoning and the Re-Act Approach** section in Unit 1. The figure label describes it as an example of **ReAct**, but the image itself mentions "Zero-shot CoT." Could you please take a look at this section and clarify? I would really appreciate your help!
https://github.com/huggingface/agents-course/issues/66
closed
[ "question" ]
2025-02-11T03:54:26Z
2025-02-13T07:30:13Z
null
saidul-islam98
huggingface/datasets
7,390
Re-add py.typed
### Feature request The motivation for removing py.typed no longer seems to apply. Would a solution like [this one](https://github.com/huggingface/huggingface_hub/pull/2752) work here? ### Motivation MyPy support is broken. As more type checkers come out, such as RedKnot, these may also be broken. It would be good to be PEP 561 compliant as long as it's not too onerous. ### Your contribution I can re-add py.typed, but I don't know how to make sure all of the `__all__` files are provided (although you may not need to with modern PyRight).
https://github.com/huggingface/datasets/issues/7390
open
[ "enhancement" ]
2025-02-10T22:12:52Z
2025-08-10T00:51:17Z
1
NeilGirdhar
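For reference, re-adding the marker is mostly packaging: PEP 561 wants an empty py.typed file inside the package and its inclusion in built distributions. A sketch for a setuptools layout (paths assumed, not taken from this repo):

```toml
# pyproject.toml fragment -- ship the PEP 561 marker with the package
[tool.setuptools.package-data]
datasets = ["py.typed"]
```

The marker file itself is just an empty `py.typed` next to the package's `__init__.py`; completeness of `__all__` exports is a separate, optional strictness concern for type checkers.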
huggingface/lerobot
707
Is there an option to run on parallel GPUs?
I have 2 RTX 4090 GPUs and I wonder if there is an option to run in parallel while finetuning the model. I found this parameter here ![Image](https://github.com/user-attachments/assets/d88768fe-0c93-40cd-9301-30bfd60315a9) but I don't actually understand what you mean by mp, so if there is an option for parallel GPU training, please tell us about it.
https://github.com/huggingface/lerobot/issues/707
closed
[ "question" ]
2025-02-10T09:34:13Z
2025-05-14T20:51:43Z
null
AbdElrahmanMostafaRifaat1432
huggingface/lerobot
706
adapt_to_pi_aloha parameter
I am finetuning pi0 on a static aloha dataset and I found the following parameter: `adapt_to_pi_aloha: false` in `/lerobot/common/policies/pi0/configuration_pi0.py`. When I set it to true, the first loss increased from 0.17 to 4.7. Should I set it to true or not, given that I want the predicted actions to be in the aloha space?
https://github.com/huggingface/lerobot/issues/706
open
[ "question", "configuration" ]
2025-02-10T09:24:45Z
2025-07-24T08:15:35Z
null
AbdElrahmanMostafaRifaat1432
huggingface/chat-ui
1,708
Generation failed occur
When I ask the model a question, I get a generation error: ![Image](https://github.com/user-attachments/assets/9cccfa87-09d6-48fb-b693-67b6ecffabd4) The base model is Llama 3 1B. Below is my `.env.local` configuration: ![Image](https://github.com/user-attachments/assets/5cd50727-be1f-4081-ac80-e24fdb3e20dd)
https://github.com/huggingface/chat-ui/issues/1708
open
[ "support" ]
2025-02-10T08:12:56Z
2025-02-12T07:48:47Z
5
mondayjowa
huggingface/open-r1
260
How to use tensor_parallel_size for vllm in GRPO?
GRPO uses vLLM to load the reference model for data sampling. The limitation is that tensor parallelism is not supported. What if the reference model is larger than one GPU can hold, for example 72B on 40 GB H800s? Is there any setting where we can set the `tensor_parallel_size` for the vLLM params?

```python
if self.accelerator.is_main_process:
    vllm_device = self.args.vllm_device
    if vllm_device == "auto":
        vllm_device = f"cuda:{self.accelerator.num_processes}"  # take the next GPU idx
    # Check that the requested device is available
    if vllm_device.split(":")[0] == "cuda" and int(vllm_device.split(":")[1]) >= torch.cuda.device_count():
        raise ValueError(
            f"The requested device for vllm ({vllm_device}) is not available. You are likely using vLLM "
            "without restricting the number of GPUs for training. Set the `--num_processes` argument to a "
            "value lower than the number of GPUs available on your machine—typically, reducing it by one "
            f"is sufficient. In your case: `--num_processes {torch.cuda.device_count() - 1}`."
        )
    # Check that the requested device is not also used for training
    if vllm_device in {f"cuda:{idx}" for idx in range(self.accelerator.num_processes)}:
        warnings.warn(
            f"The requested device {vllm_device} is also used for training. This may lead to unexpected "
            "behavior. It is recommended to use a dedicated device for vLLM."
        )
    # vLLM is not compatible with accelerate. So we need to patch it to make sure we can (1) place the vLLM
    # model on the desired device (world_size_patch) and (2) avoid a test that is not designed for our
    # setting (profiling_patch).
    world_size_patch = patch("torch.distributed.get_world_size", return_value=1)
    profiling_patch = patch(
        "vllm.worker.worker.Worker._assert_memory_footprint_increased_during_profiling", return_value=None
    )
    with world_size_patch, profiling_patch:
        self.llm = LLM(
            model=model.name_or_path,
            device=vllm_device,
            gpu_memory_utilization=self.args.vllm_gpu_memory_utilization,
            dtype=self.args.vllm_dtype,
            # Automatic Prefix Caching caches the KV cache of existing queries, so that a new query can
            # directly reuse the KV cache if it shares the same prefix with one of the existing queries.
            # This is particularly useful here because we generate completions from the same prompts.
            enable_prefix_caching=True,
            max_model_len=self.args.vllm_max_model_len,
        )
    self.sampling_params = SamplingParams(
        temperature=args.temperature,
        max_tokens=self.max_completion_length,
    )
```
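As a back-of-the-envelope check of why one 40 GB card cannot hold the reference model (a rough sketch assuming bf16 weights only; vLLM also reserves memory for the KV cache, so the real requirement is higher):

```python
def gpus_needed(num_params, bytes_per_param=2, gpu_gib=40, utilization=0.9):
    # bf16 weights only; KV cache and activations are ignored.
    weights_gib = num_params * bytes_per_param / 1024**3
    usable_gib = gpu_gib * utilization  # mirrors vllm_gpu_memory_utilization
    return int(-(-weights_gib // usable_gib))  # ceiling division

print(gpus_needed(72e9))  # a 72B reference model needs several 40 GB H800s
```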
https://github.com/huggingface/open-r1/issues/260
open
[]
2025-02-10T07:17:07Z
2025-02-20T12:21:15Z
null
bannima
huggingface/trl
2,814
How to use tensor_parallel_size for vllm reference in GRPO?
GRPO uses vLLM to load the reference model for data sampling. The limitation is that tensor parallelism is not supported. What if the reference model is larger than one GPU can hold, for example 72B on 40 GB H800s? Is there any setting where we can set the `tensor_parallel_size` for the vLLM params?

```python
if self.accelerator.is_main_process:
    vllm_device = self.args.vllm_device
    if vllm_device == "auto":
        vllm_device = f"cuda:{self.accelerator.num_processes}"  # take the next GPU idx
    # Check that the requested device is available
    if vllm_device.split(":")[0] == "cuda" and int(vllm_device.split(":")[1]) >= torch.cuda.device_count():
        raise ValueError(
            f"The requested device for vllm ({vllm_device}) is not available. You are likely using vLLM "
            "without restricting the number of GPUs for training. Set the `--num_processes` argument to a "
            "value lower than the number of GPUs available on your machine—typically, reducing it by one "
            f"is sufficient. In your case: `--num_processes {torch.cuda.device_count() - 1}`."
        )
    # Check that the requested device is not also used for training
    if vllm_device in {f"cuda:{idx}" for idx in range(self.accelerator.num_processes)}:
        warnings.warn(
            f"The requested device {vllm_device} is also used for training. This may lead to unexpected "
            "behavior. It is recommended to use a dedicated device for vLLM."
        )
    # vLLM is not compatible with accelerate. So we need to patch it to make sure we can (1) place the vLLM
    # model on the desired device (world_size_patch) and (2) avoid a test that is not designed for our
    # setting (profiling_patch).
    world_size_patch = patch("torch.distributed.get_world_size", return_value=1)
    profiling_patch = patch(
        "vllm.worker.worker.Worker._assert_memory_footprint_increased_during_profiling", return_value=None
    )
    with world_size_patch, profiling_patch:
        self.llm = LLM(
            model=model.name_or_path,
            device=vllm_device,
            gpu_memory_utilization=self.args.vllm_gpu_memory_utilization,
            dtype=self.args.vllm_dtype,
            # Automatic Prefix Caching caches the KV cache of existing queries, so that a new query can
            # directly reuse the KV cache if it shares the same prefix with one of the existing queries.
            # This is particularly useful here because we generate completions from the same prompts.
            enable_prefix_caching=True,
            max_model_len=self.args.vllm_max_model_len,
        )
    self.sampling_params = SamplingParams(
        temperature=args.temperature,
        max_tokens=self.max_completion_length,
    )
```
https://github.com/huggingface/trl/issues/2814
open
[ "⚡accelerate", "🏋 GRPO" ]
2025-02-10T07:09:47Z
2025-03-04T11:40:13Z
null
bannima
huggingface/diffusers
10,755
Difference in Output When Using PIL.Image vs numpy.array for Image and Mask Input.
Hi. I get different results when providing the image and mask as input using `PIL.Image` versus `numpy.array`. Why does this happen? Is there an issue with my normalization method?

| pillow | array |
|---|---|
| ![Image](https://github.com/user-attachments/assets/8e8a3af8-00cd-4675-93ce-b1c05eec4eb5) | ![Image](https://github.com/user-attachments/assets/25253b2a-9758-4a0f-8925-42e7a1558e50) |

#### pillow code

```python
image = Image.open(image_path).convert("RGB")
mask = Image.open(mask_path).convert("L")

output_image = pipeline(
    image=image,
    mask_image=mask,
    generator=torch.Generator(device=self.device).manual_seed(0),
).images[0]
```

#### array code

```python
image = Image.open(image_path).convert("RGB")
mask = Image.open(mask_path).convert("L")

image_array = np.array(image) / 255.0
mask_array = np.array(mask) / 255.0

output_image = pipeline(
    image=image_array,
    mask_image=mask_array,
    generator=torch.Generator(device=self.device).manual_seed(0),
).images[0]
```
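One way to rule out the `/255.0` step itself is to check the round trip in isolation (a minimal sketch using numpy only; whether the pipeline re-scales `[0, 1]` arrays internally is a separate question):

```python
import numpy as np

# Simulate an 8-bit grayscale mask and the /255.0 normalization used above.
mask_uint8 = np.array([[0, 64, 128, 255]], dtype=np.uint8)
mask_float = mask_uint8 / 255.0

# The float array stays within [0, 1] and round-trips back to the exact
# original values, so the division itself loses no information.
assert mask_float.min() >= 0.0 and mask_float.max() <= 1.0
restored = (mask_float * 255.0).round().astype(np.uint8)
assert np.array_equal(restored, mask_uint8)
```

If this holds on your machine, the difference more likely comes from how the pipeline's image processor interprets arrays versus PIL inputs, not from the normalization.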
https://github.com/huggingface/diffusers/issues/10755
open
[ "stale" ]
2025-02-10T05:24:27Z
2025-03-12T15:03:12Z
2
purple-k
huggingface/datasets
7,387
Dynamic adjusting dataloader sampling weight
Hi, Thanks for your wonderful work! I'm wondering whether there is a way to dynamically adjust the sampling weight of each example in the dataset during training? Looking forward to your reply, thanks again.
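In case it helps while waiting for an answer, the idea can be sketched with the standard library (a hypothetical `sample_epoch` helper, not a `datasets` API):

```python
import random

examples = ["easy_1", "easy_2", "hard_1"]
weights = [1.0, 1.0, 1.0]  # e.g. updated from per-example loss between epochs

def sample_epoch(k, seed=None):
    # Draw k indices with replacement according to the current weights.
    rng = random.Random(seed)
    return rng.choices(range(len(examples)), weights=weights, k=k)

weights[2] = 10.0  # upweight the hard example before the next epoch
indices = sample_epoch(8, seed=0)
```

The same pattern maps onto `torch.utils.data.WeightedRandomSampler` if the weights are rebuilt and a fresh `DataLoader` is created at each epoch boundary.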
https://github.com/huggingface/datasets/issues/7387
open
[]
2025-02-10T03:18:47Z
2025-03-07T14:06:54Z
3
whc688
huggingface/trl
2,813
What is the minimum GPU requirement in gigabytes for TRL intensive training?
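There is no single official number, but a common rule of thumb can be sketched (an assumption on my part, not a documented TRL requirement: full fine-tuning with Adam in mixed precision needs roughly 16 bytes per parameter for weights, gradients, and optimizer states, before activations):

```python
def training_gib(num_params, bytes_per_param=16):
    # 2 (bf16 weights) + 2 (grads) + 12 (fp32 master weights + Adam moments)
    return num_params * bytes_per_param / 1024**3

for size in (1e9, 7e9):
    print(f"{size / 1e9:.0f}B params -> ~{training_gib(size):.0f} GiB")
```

LoRA/QLoRA or gradient checkpointing bring the requirement down by a large factor, which is why the answer depends heavily on the training recipe.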
https://github.com/huggingface/trl/issues/2813
open
[]
2025-02-10T02:52:07Z
2025-02-11T08:41:56Z
null
lonngxiang