| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers.js | 1,275 | How to use @xenova/transformers in a musl-based environment? | ### Question
Hi,
I encountered the following error when using @xenova/transformers:
```bash
Error: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /app/node_modules/onnxruntime-node/bin/napi-v3/linux/x64//libonnxruntime.so.1.14.0)
```
After investigating the issue, I found tha... | https://github.com/huggingface/transformers.js/issues/1275 | closed | [
"question"
] | 2025-04-07T06:34:51Z | 2025-10-07T21:23:36Z | null | ezcolin2 |
huggingface/open-r1 | 583 | num_iterations in GRPOConfig does NOT DO what it is supposed to DO | Hi @qgallouedec and @lewtun
Thanks again for the amazing work ! I got the chance to try the v0.16.0 trl release in open-r1.
I was excited about num_iterations which was supposed to make the training 6 times faster. Simply one needs something like:
`training_args = GRPOConfig(..., num_iterations=4)`
But I did not... | https://github.com/huggingface/open-r1/issues/583 | closed | [] | 2025-04-06T15:57:43Z | 2025-04-12T06:00:21Z | null | ahatamiz |
huggingface/agents-course | 412 | [QUESTION] - Dummy Agent Library | _---
Do you see the issue?
The answer was hallucinated by the model. We need to stop to actually execute the function! Let’s now stop on “Observation” so that we don’t hallucinate the actual function response.
---_
Can someone explain how the system is hallucinating in this example. I am kind of stuck on this. | https://github.com/huggingface/agents-course/issues/412 | open | [
"question"
] | 2025-04-06T09:44:14Z | 2025-04-06T09:44:14Z | null | NewTonDBA |
huggingface/lerobot | 940 | Possible mismatch in observations.state metadata in Libero datasets on Hugging Face | Hello,
I believe there might be a mistake in the Libero datasets hosted on huggingface/datasets.
Specifically, the issue is with the `observations.state` column. According to `meta/info.json`, the structure is described as:
```
"observation.state": {
"dtype": "float32",
"shape": [
8
],
"names"... | https://github.com/huggingface/lerobot/issues/940 | closed | [
"question",
"dataset",
"stale"
] | 2025-04-06T04:18:55Z | 2025-10-19T02:32:09Z | null | ozgraslan |
huggingface/diffusers | 11,208 | MultiControlNetModel is not supported for SD3ControlNetInpaintingPipeline | ### Describe the bug
When using `StableDiffusion3ControlNetInpaintingPipeline` with `SD3MultiControlNetModel`, I receive an error:
`NotImplementedError: MultiControlNetModel is not supported for SD3ControlNetInpaintingPipeline.`
### Reproduction
Example reproduction code:
```python
import os
import torch
from dif... | https://github.com/huggingface/diffusers/issues/11208 | open | [
"bug",
"help wanted",
"Good Example PR",
"contributions-welcome"
] | 2025-04-04T12:39:10Z | 2025-05-11T15:03:00Z | 5 | DanilaAniva |
huggingface/sentence-transformers | 3,308 | How to load locally saved transformer models into sentence transformer? | I’ve made some modifications to the NVEMBEDV2 model architecture and saved the updated version locally using `model.save_pretrained()`. However, when I try to wrap the saved model in a SentenceTransformer, I encounter a `KeyError: 'NVEmbedConfig'`.
I checked the documentation, and while loading pretrained models seems... | https://github.com/huggingface/sentence-transformers/issues/3308 | open | [] | 2025-04-03T15:11:20Z | 2025-04-08T15:48:26Z | null | samehkhattab |
huggingface/datasets | 7,497 | How to convert videos to images? | ### Feature request
Does someone know how to return the images from videos?
### Motivation
I am trying to use openpi (https://github.com/Physical-Intelligence/openpi) to finetune my Lerobot dataset (V2.0 and V2.1). I find that although the dataset is v2.0, they are different. It seems like Lerobot V2.0 has two versi... | https://github.com/huggingface/datasets/issues/7497 | open | [
"enhancement"
] | 2025-04-03T07:08:39Z | 2025-04-15T12:35:15Z | null | Loki-Lu |
huggingface/blog | 2,781 | How to submit revised version of Arxiv paper (v2) to Daily Papers | I would like to submit a revised version (v2) of our arXiv paper to Daily Papers, but the original submission (v1) was uploaded too long ago, so it's not eligible through the regular submission form.
However, this v2 version was recently accepted to CVPR 2025, and it is a completely different paper compared to v1, bot... | https://github.com/huggingface/blog/issues/2781 | closed | [] | 2025-04-02T09:20:30Z | 2025-11-03T15:22:36Z | null | eveningglow |
huggingface/lerobot | 927 | How to train a model for VLN? | ### System Info
```Shell
To control four-legged dogs.
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
rt
### Expected behavior
tret | https://github.com/huggingface/lerobot/issues/927 | closed | [
"question"
] | 2025-04-01T13:26:20Z | 2025-04-01T15:50:04Z | null | lucasjinreal |
huggingface/agents-course | 391 | [QUESTION] UNIT-3 not yet published ? | <img width="1440" alt="Image" src="https://github.com/user-attachments/assets/aa8ed881-f998-4c63-805f-8af936d630c5" /> | https://github.com/huggingface/agents-course/issues/391 | closed | [
"question"
] | 2025-04-01T11:24:07Z | 2025-04-30T04:50:26Z | null | ynareshkalyan21 |
huggingface/hub-docs | 1,664 | Page: "how to be registered as a provider"? | https://github.com/huggingface/hub-docs/issues/1664 | closed | [] | 2025-04-01T10:55:01Z | 2025-04-03T13:03:26Z | null | hanouticelina | |
huggingface/lerobot | 926 | [Question] Deploy leRobot for a delta kinematic | Bonjour everyone,
I'm currently working on the development of an **open source delta robot** via ROS.
I'm wondering if any of you have a clue to help me integrate leRobot ACT algorithm to the custom kinematic of my delta.
ATM the inverse kinematics is managed by a Marlin CNC firmware (on an Arduino Mega), so we communi... | https://github.com/huggingface/lerobot/issues/926 | closed | [
"question"
] | 2025-04-01T09:46:29Z | 2025-04-28T10:57:31Z | null | man0n0n0 |
huggingface/optimum | 2,220 | optimum-cli diffusion policy model issue | ### System Info
```shell
Hi,
Trying to export a diffusion policy model to onnx format. From the error message and printed list of model types, it looks like “diffusion” model cannot be exported to onnx.
Is there a way to get around this?
optimum-cli export onnx --model lerobot/diffusion_pusht --task reinforcement-lea... | https://github.com/huggingface/optimum/issues/2220 | closed | [
"bug"
] | 2025-04-01T04:59:53Z | 2025-06-11T13:57:20Z | 1 | kraza8 |
huggingface/lerobot | 923 | Cannot install Lerobot | I am getting an error when the installation is building the av wheel. It is not passing this part of the installation | https://github.com/huggingface/lerobot/issues/923 | closed | [
"documentation",
"question",
"dependencies"
] | 2025-03-31T18:26:16Z | 2025-07-03T01:32:17Z | null | Prasit7 |
huggingface/open-r1 | 564 | How to evaluate pass@16 for aime 2024 benchmark? | https://github.com/huggingface/open-r1/issues/564 | open | [] | 2025-03-31T09:27:02Z | 2025-03-31T09:27:02Z | null | Cppowboy | |
huggingface/diffusers | 11,176 | How to use attention_mask and encoder_attention_mask or apply prompts to specific areas in the image? | Hi, I'm aware of the attention_mask and encoder_attention_mask that exist in the forward function of the UNet2DConditionModel yet there are no examples on how to use this
I would appreciate some help on that, thank you in advance
@patrickvonplaten @Birch-san | https://github.com/huggingface/diffusers/issues/11176 | open | [
"stale"
] | 2025-03-30T16:56:40Z | 2025-04-30T15:03:34Z | null | alexblattner |
huggingface/lerobot | 920 | [Question] How to convert dataset locally | I've noticed that `convert_dataset_v20_to_v21.py` converts LeRobot datasets from v20 to v21 that have already been pushed to the hub. But is there a script to do this with a local dataset? | https://github.com/huggingface/lerobot/issues/920 | closed | [
"question",
"dataset",
"stale"
] | 2025-03-30T13:32:50Z | 2025-10-13T02:30:26Z | null | Frozenkiddo |
huggingface/lerobot | 919 | [Question] Why does "action" exist? | I am a beginner and I am very confused about it. What I can understand is that during my entire operation, I sampled at fixed time intervals. It's like a signal being collected by a letter. I only have to observe and what does action mean? Many data sets in the project have data with the column title `action`. Moreover... | https://github.com/huggingface/lerobot/issues/919 | closed | [
"question"
] | 2025-03-30T10:45:57Z | 2025-03-31T07:50:19Z | null | ipc-robot |
huggingface/trl | 3,179 | How to resume from the last checkpoint? | I want to continue training from the last checkpoint. How should I do it? I set resume_from_checkpoint=True in the GRPOConfig, but based on the output, it seems to start training from the first step. Do I also need to change the model to the checkpoint path? | https://github.com/huggingface/trl/issues/3179 | closed | [
"❓ question",
"🏋 GRPO"
] | 2025-03-30T02:30:47Z | 2025-03-30T04:35:58Z | null | Tuziking |
huggingface/diffusers | 11,168 | Sage Attention for diffuser library | **Is your feature request related to a problem? No
**Describe the solution you'd like.**
A clear and concise description of what you want to happen.
Incorporate a way to add sage attention to the diffusers library: Flux pipeline, Wan pipeline, etc.
**Describe alternatives you've considered.**
None
**Additional conte... | https://github.com/huggingface/diffusers/issues/11168 | open | [
"wip"
] | 2025-03-28T20:39:30Z | 2025-06-23T05:59:27Z | 12 | ukaprch |
huggingface/agents-course | 381 | [QUESTION]LLM or Agent? | In the tutorial, a lot of the contents mislead to a wrong conectp with LLM and Agents.
```
The Stop and Parse Approach
One key method for implementing actions is the stop and parse approach. This method ensures that the agent’s output is structured and predictable:
Generation in a Structured Format:
The agent outputs... | https://github.com/huggingface/agents-course/issues/381 | closed | [
"question"
] | 2025-03-28T15:36:45Z | 2025-04-30T04:50:54Z | null | joshhu |
huggingface/lerobot | 912 | [Question] When will MultiLeRobotDataset be available? | Hello, the MultiLeRobotDataset is very useful for training on large amounts of data; without it, training complex tasks would be difficult. However, I noticed that after the Simplify configs(#550) commit on January 31st, MultiLeRobotDataset has been marked as unavailable(raise NotImplementedError("The MultiLeRobotDat... | https://github.com/huggingface/lerobot/issues/912 | closed | [
"question",
"dataset",
"stale"
] | 2025-03-28T09:16:06Z | 2025-10-22T02:30:53Z | null | Vacuame |
huggingface/agents-course | 380 | [QUESTION] Question on using HuggingFace space | First, the **best way to get a response fast is to ask the community** in our Discord server: https://www.hf.co/join/discord
However, if you prefer you can ask here, please **be specific**.
I am on AI Agents course now.
I have trouble in using HuggingFace space.
I studied this course at company so I have to open a fi... | https://github.com/huggingface/agents-course/issues/380 | closed | [
"question"
] | 2025-03-28T08:28:23Z | 2025-04-30T04:47:14Z | null | kjh0303 |
huggingface/Math-Verify | 47 | Question: How to configure `verify` for strict multi-part answer checking? | Hi Math-Verify Team,
I'm currently using `math-verify` for evaluating LLM outputs, specifically for questions that might require multiple answers (e.g., "Find all X...").
I've observed that the `verify` function in `grader.py`, which seems to use logic similar to `any(product(gold, target))`, can return `True` even i... | https://github.com/huggingface/Math-Verify/issues/47 | closed | [] | 2025-03-27T16:54:52Z | 2025-07-01T19:31:51Z | null | TweedBeetle |
huggingface/transformers.js | 1,259 | 3.2.4 has wrong env check in transformers.web.js | ### Question
## Background
I have developed a chrome extension which follows the [example](https://github.com/huggingface/transformers.js/tree/main/examples/extension). The example used the package @xenova/transformers.
## Motivation
It seems that multithreading works now. [Issue](https://github.com/huggin... | https://github.com/huggingface/transformers.js/issues/1259 | closed | [
"question"
] | 2025-03-27T07:35:23Z | 2025-07-02T04:45:26Z | null | sanixa |
huggingface/datasets | 7,480 | HF_DATASETS_CACHE ignored? | ### Describe the bug
I'm struggling to get things to respect HF_DATASETS_CACHE.
Rationale: I'm on a system that uses NFS for homedir, so downloading to NFS is expensive, slow, and wastes valuable quota compared to local disk. Instead, it seems to rely mostly on HF_HUB_CACHE.
Current version: 3.2.1dev. In the process... | https://github.com/huggingface/datasets/issues/7480 | open | [] | 2025-03-26T17:19:34Z | 2025-10-23T15:59:18Z | 8 | stephenroller |
huggingface/transformers.js | 1,258 | Tokenizer encode and decode get different token ids and text, missing word_ids | ### Question
```js
import { AutoTokenizer } from '@huggingface/transformers';
const tokenizer = await AutoTokenizer.from_pretrained('deepseek-ai/DeepSeek-R1')
console.log(tokenizer.encode(" e.g., ♩"))
console.log(tokenizer.decode([105]))
console.log(tokenizer.encode("♩"))
```
```
[ 312, 3588, 1042, 30717, 105 ]
�
[... | https://github.com/huggingface/transformers.js/issues/1258 | closed | [
"question"
] | 2025-03-26T10:44:12Z | 2025-03-31T20:18:45Z | null | liho00 |
huggingface/lerobot | 905 | Supporting selection of obs and action keys in dataset | Hi all, thanks a lot for the framework.
Currently, it seems the LeRobotDataset format requires users to have a fixed state/environment state/images or actions defined in their dataset. However, this means that for multiple similar applications, the user has to record different datasets with different state or action d... | https://github.com/huggingface/lerobot/issues/905 | closed | [
"question",
"dataset",
"stale"
] | 2025-03-26T08:12:10Z | 2025-10-10T02:27:27Z | null | Mayankm96 |
huggingface/chat-ui | 1,772 | USE_LOCAL_WEBSEARCH No results found for this search query | ## Bug description
With `USE_LOCAL_WEBSEARCH=true`, Web Search always reports _No results found for this search query_.
## Steps to reproduce
- enable search
- enter and submit question
## Screenshots
<img width="488" alt="Image" src="https://github.com/user-attachments/assets/b948b629-ff67-4edb-9f7c-25ca9d3d1325"... | https://github.com/huggingface/chat-ui/issues/1772 | open | [
"bug",
"help wanted",
"websearch"
] | 2025-03-25T21:28:11Z | 2025-10-22T21:13:54Z | 6 | brechtm |
huggingface/chat-ui | 1,771 | Client disconnects before response is received | ## Bug description
If an answer takes several minutes to complete, the chat-ui client simply disconnects. This disconnection happens at 1 minute, but I'm unsure.
## Steps to reproduce
Ask your LLM a riddle but change it a little, so it becomes confused and wonders for a while.
A man and a goat are on one side of a ... | https://github.com/huggingface/chat-ui/issues/1771 | open | [
"bug"
] | 2025-03-25T19:14:54Z | 2025-06-14T13:46:28Z | 3 | drewwells |
huggingface/datasets | 7,477 | What is the canonical way to compress a Dataset? | Given that Arrow is the preferred backend for a Dataset, what is a user supposed to do if they want concurrent reads, concurrent writes AND on-disk compression for a larger dataset?
Parquet would be the obvious answer except that there is no native support for writing sharded, parquet datasets concurrently [[1](https:... | https://github.com/huggingface/datasets/issues/7477 | open | [] | 2025-03-25T16:47:51Z | 2025-04-03T09:13:11Z | null | eric-czech |
huggingface/lerobot | 901 | Any tutorial on how to run experiments in the SimXArm environment? | https://github.com/huggingface/lerobot/issues/901 | closed | [] | 2025-03-25T13:29:59Z | 2025-03-25T16:42:11Z | null | chenkang455 | |
huggingface/chat-ui | 1,765 | `truncate` parameter ignored for OpenAI chat_completions endpoint | ## Bug description
The `truncate` parameter in the ChatUI configuration is not being applied when using the OpenAI chat_completions endpoint.
## Root Cause
The issue arises because the chat_completions endpoint does not utilize the buildPrompt function where the `truncate` parameter is handled. The logic for truncat... | https://github.com/huggingface/chat-ui/issues/1765 | open | [
"bug"
] | 2025-03-25T10:13:40Z | 2025-03-25T10:20:33Z | 0 | calycekr |
huggingface/finetrainers | 350 | how to train wan using 8 GPUs | I notice that there are only 4-GPU scripts; even though I modified the script for 8-GPU training, I get some errors. | https://github.com/huggingface/finetrainers/issues/350 | open | [] | 2025-03-25T05:02:18Z | 2025-05-06T14:54:50Z | null | tanshuai0219 |
huggingface/diffusers | 11,147 | [LTX0.9.5] make LTX0.9.5 works with text-to-video | see more context here https://github.com/huggingface/diffusers/issues/11143#issuecomment-2747390564 | https://github.com/huggingface/diffusers/issues/11147 | closed | [
"help wanted"
] | 2025-03-24T09:56:47Z | 2025-04-04T14:43:16Z | 9 | yiyixuxu |
huggingface/search-and-learn | 47 | How to run this project on CPU? | Hello, I'm trying to run the code for this project on CPU.
The graphics card I have now is a 4060 Ti, but even with the lightest options (minimum batch size, the 1.5B model, etc.), I couldn't run the project due to memory capacity issues.
So I want to move this project to CPU and see the results, even if it takes some time.
H... | https://github.com/huggingface/search-and-learn/issues/47 | open | [] | 2025-03-24T01:13:44Z | 2025-03-24T01:13:44Z | null | pss0204 |
huggingface/datasets | 7,473 | Webdataset data format problem | ### Describe the bug
Please see https://huggingface.co/datasets/ejschwartz/idioms/discussions/1
Error code: FileFormatMismatchBetweenSplitsError
All three splits, train, test, and validation, use webdataset. But only the train split has more than one file. How can I force the other two splits to also be interpreted ... | https://github.com/huggingface/datasets/issues/7473 | closed | [] | 2025-03-21T17:23:52Z | 2025-03-21T19:19:58Z | 1 | edmcman |
huggingface/datasets | 7,470 | Is it possible to shard a single-sharded IterableDataset? | I thought https://github.com/huggingface/datasets/pull/7252 might be applicable but looking at it maybe not.
Say we have a process, e.g., a database query, that can return data in a slightly different order each time. So, the initial query needs to be run by a single thread (not to mention running multiple times incurs mo... | https://github.com/huggingface/datasets/issues/7470 | closed | [] | 2025-03-21T04:33:37Z | 2025-11-22T07:55:43Z | 6 | jonathanasdf |
huggingface/lerobot | 884 | [Question] Support of PointCloud | Hi,
I'm currently developing a plugin for lerobot and would like to know if there are any plans to support PointCloud data.
Additionally, I'd like to ask if there is a recommended storage format for handling PointCloud data within the project.
Looking forward to your response.
Thanks | https://github.com/huggingface/lerobot/issues/884 | closed | [
"question",
"dataset",
"stale"
] | 2025-03-21T04:29:15Z | 2025-10-07T02:26:39Z | null | yilin404 |
huggingface/inference-benchmarker | 4 | Can i use local model's tokenizer and local dataset? | Hello, may I specify the paths of the locally downloaded model and dataset through the ./inference-benchmarker command, instead of accessing Hugging Face via the network? | https://github.com/huggingface/inference-benchmarker/issues/4 | open | [
"question"
] | 2025-03-21T01:55:03Z | 2025-03-27T18:44:04Z | null | handsome-chips |
huggingface/video-dataset-scripts | 20 | How to convert a parquet file to the Training Dataset Format for finetrainers | How can a parquet file be converted to the Training Dataset Format for finetrainers? | https://github.com/huggingface/video-dataset-scripts/issues/20 | closed | [] | 2025-03-20T16:22:39Z | 2025-04-10T17:46:06Z | null | kanghua309 |
huggingface/trl | 3,114 | What is the reason for using only one GPU when integrating with vllm? | At [this line](https://github.com/huggingface/trl/blob/main/trl/trainer/grpo_trainer.py#L507) of the code, when using vllm, a single GPU device is specified. However, it is quite common to use a single vllm instance with multiple GPUs.
1. What is the reason that the code is designed to only select a single ... | https://github.com/huggingface/trl/issues/3114 | closed | [
"❓ question",
"🏋 GRPO"
] | 2025-03-19T16:20:03Z | 2025-04-05T17:01:33Z | null | spencergotowork |
huggingface/smollm | 67 | How to fine tune smolvlm on OCR | Is there any guide to fine-tune smolvlm on OCR like in https://huggingface.co/ds4sd/SmolDocling-256M-preview | https://github.com/huggingface/smollm/issues/67 | open | [
"Image"
] | 2025-03-19T14:17:33Z | 2025-07-29T13:09:05Z | null | abdelkareemkobo |
huggingface/peft | 2,436 | Fine-tuning with Multiple LoRAs | Thanks for your valuable work!
I would like to know if it's possible to jointly train two LoRAs while only loading one base model. The overall output depends on the respective outputs of LoRA1 and LoRA2. For example, logits1 is obtained from the base model with LoRA1, and logits2 is obtained from the base model with L... | https://github.com/huggingface/peft/issues/2436 | closed | [] | 2025-03-19T13:49:28Z | 2025-07-19T05:45:12Z | 7 | xymou |
huggingface/setfit | 590 | How do I disable requests to huggingface.co:443 after training? | I'm currently evaluating setfit in a proof of concept situation. Unfortunately, I'm working behind a company firewall, where I do not have access to the world wide web, only to company-internal URLs.
That's a bit annoying in terms of downloading models, but I can work around that. More importantly, it seems there are ... | https://github.com/huggingface/setfit/issues/590 | open | [] | 2025-03-19T08:42:12Z | 2025-03-19T18:44:12Z | null | AdrianSchneble |
huggingface/diffusers | 11,114 | channel inconsistency in cogvideo Lora training example | ### Describe the bug
while using the training script in (https://github.com/huggingface/diffusers/blob/main/examples/cogvideo/train_cogvideox_image_to_video_lora.py)
I made a dataset as described in the readme and ran training,
but a bug occurred in the forward pass. It is because the model in-channel is 16 but m... | https://github.com/huggingface/diffusers/issues/11114 | open | [
"bug",
"stale"
] | 2025-03-19T07:55:00Z | 2025-04-18T15:02:52Z | 2 | MrTom34 |
huggingface/trl | 3,109 | where is file https://github.com/huggingface/trl/blob/main/trl/scripts/sft.py | ### Reproduction
```python
from trl import ...
```
outputs:
```
Traceback (most recent call last):
File "example.py", line 42, in <module>
...
```
### System Info
https://github.com/huggingface/trl/blob/main/trl/scripts/sft.py
### Checklist
- [x] I have checked that my issue isn't already filed (see [ope... | https://github.com/huggingface/trl/issues/3109 | closed | [
"🐛 bug",
"🏋 SFT"
] | 2025-03-19T02:20:26Z | 2025-03-19T02:22:23Z | null | zh794390558 |
huggingface/transformers.js | 1,245 | QuestionAnsweringOutput does not return start/end index | ### Question
Question/Answering pipeline does not seem to return start/end index.
console output example
`{ answer: 'anywhere', score: 0.8719829671013909 }`
source code in pipeline.js
```
class QuestionAnsweringPipeline ...
// TODO add start and end?
// NOTE: HF returns character index
toRetu... | https://github.com/huggingface/transformers.js/issues/1245 | open | [
"question"
] | 2025-03-18T21:20:25Z | 2025-03-18T21:20:25Z | null | sleep9 |
huggingface/transformers.js | 1,243 | Transformer.js compatibility with Angular17 | ### Question
I want to add transformers.js to an Angular 17 project. I am getting several errors; can someone guide me on how to add transformers.js to an Angular project? | https://github.com/huggingface/transformers.js/issues/1243 | open | [
"question"
] | 2025-03-18T16:15:30Z | 2025-03-24T21:27:11Z | null | AnuragPant01 |
huggingface/diffusers | 11,108 | Is there a way to generate a single image using multiple GPUs? | This is related to #2977 and #3392, but I would like to know how to generate a single image using multiple GPUs. If such a method does not exist, I would also like to know if Accelerate's [Memory-efficient pipeline parallelism](https://huggingface.co/docs/accelerate/usage_guides/distributed_inference#memory-efficient-p... | https://github.com/huggingface/diffusers/issues/11108 | closed | [
"stale"
] | 2025-03-18T13:43:05Z | 2025-05-02T21:00:31Z | 12 | suzukimain |
huggingface/lerobot | 876 | Multiple GPU Training Support | Hi, lerobot team!
Thanks for the great work and organized content.
Are there plans to support PyTorch's Distributed Data Parallel (DDP) training in this framework? | https://github.com/huggingface/lerobot/issues/876 | closed | [
"enhancement",
"question",
"stale"
] | 2025-03-18T12:44:43Z | 2025-10-07T02:26:45Z | null | kingchou007 |
huggingface/open-r1 | 521 | How to use my own dataset in sft? | Could you please give an instruction/demo on how to use my own dataset (any column name) to apply sft? | https://github.com/huggingface/open-r1/issues/521 | open | [] | 2025-03-18T11:38:19Z | 2025-03-18T14:21:36Z | null | dongdongzhaoUP |
huggingface/diffusers | 11,103 | Which repo should I use for LTX-Video 0.9.5 diffusers | I see the changes are merged
Checked repo and it is empty
https://huggingface.co/Lightricks/LTX-Video-0.9.5/tree/main
Noticed in test pipeline it is
repo = "YiYiXu/ltx-95"
So can I safely assume that the above can be used?
@yiyixuxu | https://github.com/huggingface/diffusers/issues/11103 | closed | [] | 2025-03-18T10:50:41Z | 2025-03-18T11:00:34Z | 2 | nitinmukesh |
huggingface/trl | 3,103 | How are Lora parameters used in VLLM generation? (_move_model_to_vllm in GRPO trainer) | From the following code does not see the process of moving lora training parameters to VLLM? How guarantee that generated with the latest parameters? Can someone help explain.
<img width="1123" alt="Image" src="https://github.com/user-attachments/assets/62cacf0a-0197-4210-b326-c4e24b9b6701" />
And I printed the vllm l... | https://github.com/huggingface/trl/issues/3103 | closed | [
"❓ question",
"⚡ PEFT"
] | 2025-03-18T09:24:48Z | 2025-03-24T18:32:19Z | null | cuiyuhao1996 |
huggingface/datasets | 7,457 | Document the HF_DATASETS_CACHE env variable | ### Feature request
Hello,
I have a use case where my team is sharing models and datasets in a shared directory to avoid duplication.
I noticed that the [cache documentation for datasets](https://huggingface.co/docs/datasets/main/en/cache) only mention the `HF_HOME` environment variable but never the `HF_DATASETS_CACHE`... | https://github.com/huggingface/datasets/issues/7457 | closed | [
"enhancement"
] | 2025-03-17T12:24:50Z | 2025-05-06T15:54:39Z | 4 | LSerranoPEReN |
huggingface/transformers | 36,762 | When what needs to be loaded is in the cache directory, there is no need to make a request to the remote | ### Feature request
When what needs to be loaded is in the cache directory, there is no need to make a request to the remote.
### Motivation
I noticed that when `AutoTokenizer` loads a file using `from_pretrained`, it first tries to load it from a cached directory when `pretrained_model_name_or_path` is a model_id... | https://github.com/huggingface/transformers/issues/36762 | closed | [
"Feature request"
] | 2025-03-17T11:20:24Z | 2025-03-19T15:49:04Z | null | JinFish |
huggingface/diffusers | 11,086 | RuntimeError after using apply_group_offloading on diffusers: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same | Can anyone help me?
I used WanX's diffusers and used apply_group_offloading according to url: https://huggingface.co/docs/diffusers/main/en/optimization/memory.
The code is as follows:
```
image_encoder = CLIPVisionModel.from_pretrained(local_model_path, subfolder="image_encoder", torch_dtype=torch.float32)
vae = Auto... | https://github.com/huggingface/diffusers/issues/11086 | open | [
"stale"
] | 2025-03-17T11:03:48Z | 2025-04-16T15:03:36Z | 5 | tiga-dudu |
huggingface/trl | 3,093 | How to use a custom function as the reward model for PPO training | The new version of TRL's PPOtrainer requires Module as the reward model, but I need a custom function calculation to calculate the reward. I tried to lower the TRL version to 0.11.4, but the old version does not seem to support the peft model. I get the following error:
ValueError: model must be a PreTrainedModelWrappe... | https://github.com/huggingface/trl/issues/3093 | open | [
"❓ question",
"🏋 PPO",
"⚡ PEFT"
] | 2025-03-16T09:02:25Z | 2025-03-20T10:33:02Z | null | JWQZ |
huggingface/ai-deadlines | 19 | How to know the rankings of a conference? | @NielsRogge, may I know where we can get the conference rankings? | https://github.com/huggingface/ai-deadlines/issues/19 | closed | [] | 2025-03-15T18:32:34Z | 2025-03-15T21:45:02Z | null | julurisaichandu |
huggingface/diffusers | 11,063 | prepare_attention_mask - incorrect padding? | ### Describe the bug
I'm experimenting with attention masking in Stable Diffusion (so that padding tokens aren't considered for cross attention), and I found that UNet2DConditionModel doesn't work when given an `attention_mask`.
https://github.com/huggingface/diffusers/blob/8ead643bb786fe6bc80c9a4bd1730372d410a9df/sr... | https://github.com/huggingface/diffusers/issues/11063 | open | [
"bug",
"stale"
] | 2025-03-14T19:01:01Z | 2025-04-14T15:03:14Z | 2 | cheald |
huggingface/transformers.js | 1,237 | Using pipeline API in Mobile Devices | ### Question
How can I get the pipeline running on mobile devices?
Like here:
pipeline('background-removal', 'briaai/RMBG-1.4', { device: "webgpu" })
Or does it depend on the model available?
I can't find documentation about the pipeline API options, like 'device' and other params... | https://github.com/huggingface/transformers.js/issues/1237 | open | [
"question"
] | 2025-03-14T17:55:27Z | 2025-05-11T19:58:39Z | null | LuSrodri |
huggingface/autotrain-advanced | 869 | How to fine-tune a custom model for Ollama? | Probably a stupid question, but I'm trying to upload a .csv dataset and fine-tune an 8B model in Autotrain. But when I add the model name taken from Ollama (e.g. deepseek-r1:8b or DeepSeek-R1-Distill-Llama-8B-NexaQuant) and try to train, I get an error.
validated_self = self.__pydantic_validator__.validate_python(d... | https://github.com/huggingface/autotrain-advanced/issues/869 | closed | [
"stale"
] | 2025-03-14T14:46:23Z | 2025-05-03T15:01:33Z | null | nigelp |
huggingface/diffusers | 11,060 | `prepare_image` in Kandinsky pipelines doesn't support `torch.Tensor` | Hi, I want to report a bug in Kandinsky pipelines.
https://github.com/huggingface/diffusers/blob/2f0f281b0d808c05bc7a974e68d298a006dd120a/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_img2img.py#L413-L420
According to the above contents, elements in `image` can be either `PIL.Image.Image` or `torch.Tensor`.
h... | https://github.com/huggingface/diffusers/issues/11060 | closed | [
"good first issue",
"help wanted"
] | 2025-03-14T10:34:30Z | 2025-04-21T18:41:10Z | 1 | dk-hong |
huggingface/Math-Verify | 39 | How to choose ExprExtractionConfig() and LatexExtractionConfig() | Hi. Thanks for your awesome tool.
I want to ask how I should set the configuration when the answer may be either LaTeX or Expr. I found that for the case below (without $$ $$), the output is false when the expected result is true.
```python
from math_verify import parse, verify
gold = parse("\\frac{\sqrt... | https://github.com/huggingface/Math-Verify/issues/39 | closed | [] | 2025-03-13T23:36:27Z | 2025-04-28T20:42:03Z | null | Zhuofeng-Li |
huggingface/diffusers | 11,055 | Training on unconditional image generation creates colorized images | ### Describe the bug
Hi, I'm trying to follow the tutorial from unconditional image generation on my own dataset, and I'm getting weirdly colored images. I originally thought it was due to RGB/BGR channel order, but I've switched it around and got the same result. Do you have any suggestions of how to fix it?
### Re... | https://github.com/huggingface/diffusers/issues/11055 | open | [
"bug",
"stale"
] | 2025-03-13T20:47:22Z | 2025-04-13T15:02:53Z | 1 | esizikova-fda |
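Since the issue above suspects RGB/BGR channel order, here is the usual fix in miniature: reversing the channel axis of each pixel. Shown on plain nested lists; with NumPy arrays the equivalent is `img[..., ::-1]`.

```python
# Swap RGB <-> BGR by reversing each pixel's channel list.
def swap_channels(image):
    return [[list(reversed(pixel)) for pixel in row] for row in image]

img = [[[255, 0, 0], [0, 255, 0]]]   # one row: red, green (RGB)
print(swap_channels(img))            # [[[0, 0, 255], [0, 255, 0]]]
```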
huggingface/lerobot | 860 | Modify camera async_read/read API to return a dictionary instead of tuple for better compatability? | Currently the intel real sense camera api supports returning either a single rgb image or a rgb image and depth image as a 2-uple
https://github.com/huggingface/lerobot/blob/3c0a209f9fac4d2a57617e686a7f2a2309144ba2/lerobot/common/robot_devices/cameras/intelrealsense.py#L440-L443
However this is not super compatible t... | https://github.com/huggingface/lerobot/issues/860 | closed | [
"enhancement",
"question"
] | 2025-03-13T18:44:20Z | 2025-05-26T09:28:48Z | null | StoneT2000 |
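The API change proposed above, in miniature: returning a dict keyed by modality instead of a positional tuple, so callers don't depend on order or arity. Names are illustrative, not the lerobot API; strings stand in for image arrays.

```python
# Return a dict of modalities rather than a 1-tuple / 2-tuple.
def read_frames(use_depth: bool = False) -> dict:
    out = {"rgb": "rgb-frame"}       # placeholder for the actual image array
    if use_depth:
        out["depth"] = "depth-frame"
    return out

print(sorted(read_frames(use_depth=True)))  # ['depth', 'rgb']
```

Callers then access `frames["rgb"]` regardless of which optional modalities are enabled.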
huggingface/transformers.js | 1,230 | Using background-removal pipeline produces images with 50% opacity | ### Question
I have an issue using the background-removal pipeline. Some models return exactly the same image, but at 50% opacity (RGBA: [X, Y, Z, 127]). Other models return an error like this: Uncaught Error: Unsupported model type: null transformers:1:670067.
How can I proceed? | https://github.com/huggingface/transformers.js/issues/1230 | closed | [
"question"
] | 2025-03-13T17:00:13Z | 2025-03-25T22:28:37Z | null | LuSrodri |
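One way to inspect or repair the reported output (all pixels at alpha 127) is to rescale the alpha channel, forcing foreground pixels fully opaque. A sketch on plain RGBA tuples standing in for the image buffer; this is a workaround, not a fix for the underlying model output:

```python
# Force alpha to 255 for any pixel above a threshold, leave background at 0.
def force_opaque(pixels, threshold=0):
    return [(r, g, b, 255 if a > threshold else 0) for (r, g, b, a) in pixels]

print(force_opaque([(10, 20, 30, 127), (0, 0, 0, 0)]))
# [(10, 20, 30, 255), (0, 0, 0, 0)]
```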
huggingface/lerobot | 858 | DATASET conversion from V.16 to V2.0 ❌❌❌ |
Hi @aliberts @Cadene
Thanks for your amazing work. I have one question: I forked the lerobot repo and am training some policies. Now I want to convert my dataset from v1.6 to v2.0, but my episodes are in .pth format, not parquet. I checked the existing issues but didn't find anything; right now the conversion only accepts parquet format.... | https://github.com/huggingface/lerobot/issues/858 | closed | [
"question",
"dataset",
"stale"
] | 2025-03-13T15:22:51Z | 2025-10-07T02:26:46Z | null | Kacchan16 |
huggingface/optimum | 2,215 | not able to convert DeepSeek-R1 into Onnx using optimum-cli | ### System Info
```shell
v1.24.0
```
### Who can help?
@michaelbenayoun
I'm trying to convert DeepSeek-R1 into a onnx format, but i'm being presented with
> ValueError: Loading deepseek-ai/DeepSeek-R1 requires you to execute the configuration file in that repo on your local machine. Make sure you have read the c... | https://github.com/huggingface/optimum/issues/2215 | open | [
"bug"
] | 2025-03-13T07:07:10Z | 2025-05-13T11:13:36Z | 1 | volcano619 |
huggingface/trl | 3,066 | How to switch on the multi-GPU for GRPOTrainer? | Issue:
OOM errors during GRPO training - Need multi-GPU support for combined VRAM
Problem Description:
I'm encountering Out-of-Memory (OOM) errors while using GRPOTrainer to train reasoning capabilities similar to DeepSeek R1.
My Question:
How to switch on multi-GPU support for GRPOTrainer to utilize the combined VR... | https://github.com/huggingface/trl/issues/3066 | closed | [
"🏋 GRPO"
] | 2025-03-13T05:01:12Z | 2025-04-05T17:04:50Z | null | tjoymeed |
huggingface/agents-course | 314 | [QUESTION] agent.run(stream=True) How get finall result | agent = CodeAgent(
tools=[],
model=model,
max_steps=10,
verbosity_level=2
)
response = agent.run(
"""
describe image
""",
images=image_urls,
stream=True
)
print()??? | https://github.com/huggingface/agents-course/issues/314 | open | [
"question"
] | 2025-03-13T02:32:47Z | 2025-03-13T02:32:47Z | null | via007 |
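The question above (how to get the final result out of `agent.run(..., stream=True)`) comes down to exhausting the returned generator and keeping the last yielded step. A dummy generator stands in for the agent here; whether the last yield is the final answer is an assumption about the streaming API:

```python
# Stand-in for agent.run(..., stream=True), which yields intermediate steps.
def fake_stream():
    yield "step 1"
    yield "step 2"
    yield "final answer"

final = None
for step in fake_stream():   # with smolagents: for step in agent.run(..., stream=True)
    final = step             # intermediate steps could be printed/logged here
print(final)                 # final answer
```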
huggingface/diffusers | 11,046 | flux pipeline inference with controlnet, inpainting, plus ip-adapter | ### Describe the bug
Hi, I would like to use the Flux pipeline, but for now I have GPU memory issues with the original Flux pipeline.
If I use the NF4 version, how can I set up the inference file with ControlNet, inpainting, and IP-Adapter?
Do I use Fluxcontrol depth or canny and mask, ip-adapter model? or fluxcontrol, flu... | https://github.com/huggingface/diffusers/issues/11046 | open | [
"bug",
"stale"
] | 2025-03-12T20:14:01Z | 2025-04-12T15:02:52Z | 1 | john09282922 |
huggingface/lerobot | 854 | How to train diffusion policy in only state space, no images? | I have been having a lot of trouble trying to only train a model on purely a state space task so there are no images involved. I have already looked through every tutorial and most source code files and just can not get this working.
I have a script that creates a LeRobotDataset through human demonstrations. The scrip... | https://github.com/huggingface/lerobot/issues/854 | closed | [
"question",
"policies",
"stale"
] | 2025-03-12T16:01:19Z | 2025-10-26T02:30:57Z | null | Nicholas-Baldassini |
huggingface/diffusers | 11,045 | Crash when loading Flux Schnell 1 model with train_dreambooth_lora_flux | ### Describe the bug
When using the `Diffusers/example/dreambooth/train_dreambooth_lora_flux` script with the Flux Schnell 1 model, the process consistently crashes during the transformer shard loading at 33% (1/3), causing my entire Google JupyterLab kernel to crash.
**Question:** Is this related to using the Flux S... | https://github.com/huggingface/diffusers/issues/11045 | closed | [
"bug",
"stale"
] | 2025-03-12T15:08:11Z | 2025-05-07T15:18:15Z | 4 | rleygonie |
huggingface/diffusers | 11,043 | When will we be getting Quanto support for Wan 2.1? | The diffusers library for quantizers currently doesn't contain an entry for Quantro:
https://github.com/huggingface/diffusers/tree/main/src/diffusers/quantizers
Isn't this needed to perform requantization on a quantized Transformer for WAN 2.1?
Currently we can't do this due to missing Quanto quantizer after we've q... | https://github.com/huggingface/diffusers/issues/11043 | closed | [] | 2025-03-12T12:43:59Z | 2025-03-23T18:17:53Z | 2 | ukaprch |
huggingface/lerobot | 853 | How to customize adding other robot and manipulator? | Thanks for your great work! Now I got a problem how to customize adding other robot and manipulator.
I have 7DOF bimanual manipulators robot, which is powered by servo-motor. I want to add it to lerobot so I can use this fantastic platform to collect data and train. Specially the ACT and diffusion policy.
I have the... | https://github.com/huggingface/lerobot/issues/853 | closed | [
"question",
"robots"
] | 2025-03-12T11:39:19Z | 2025-10-08T20:16:23Z | null | meijie-jesse |
huggingface/smollm | 65 | How to set video size when fine tuning | Hi,
I've tried a bunch of variants but I can't seem to figure out how to set the video size. Currently, I have:
```py
processor.video_size = { "longest_edge": 128 }
processor.do_image_splitting = False
def sample_indices_fn(metadata, num_frames=None, fps=None, **kwargs):
return np.arange(0, 20, dtype=int)
m... | https://github.com/huggingface/smollm/issues/65 | open | [
"Video"
] | 2025-03-12T11:20:28Z | 2025-07-29T13:12:05Z | null | FredrikNoren |
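The `sample_indices_fn` in the snippet above uses a fixed `np.arange(0, 20)`. What such a function usually computes is `num_frames` indices spread evenly over the clip; a pure-Python stand-in for `np.linspace(0, total - 1, num_frames).astype(int)`:

```python
# Evenly sample `num_frames` frame indices from a clip of `total` frames.
def even_indices(total: int, num_frames: int):
    if num_frames == 1:
        return [0]
    step = (total - 1) / (num_frames - 1)
    return [round(i * step) for i in range(num_frames)]

print(even_indices(20, 5))  # [0, 5, 10, 14, 19]
```

(Whether the processor also honors `video_size` is a separate question about the smolvlm processor API.)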
huggingface/accelerate | 3,437 | Need help on how to disable enable_model_cpu_offload / enable_sequential_cpu_offload | So during my testing when used individually, I observed that
enable_sequential_cpu_offload require- 11 GB VRAM
enable_model_cpu_offload require - 8 GB VRAM
I am using Diffusers + nunchaku + sd_embed
Problem: sd_embed does not support enable_sequential_cpu_offload but supports enable_model_cpu_offload
Requirement: ... | https://github.com/huggingface/accelerate/issues/3437 | closed | [] | 2025-03-12T09:29:08Z | 2025-03-12T10:10:33Z | null | nitinmukesh |
huggingface/diffusers | 11,042 | ZeroDivisionError when performing forward pass with UNet3DConditionModel | ### Describe the bug
# ZeroDivisionError when performing forward pass with UNet3DConditionModel
I'm encountering a ZeroDivisionError when attempting to perform a forward pass with the UNet3DConditionModel. This seems to be related to the num_attention_heads parameter being None, which causes self.inner_dim to be 0.
... | https://github.com/huggingface/diffusers/issues/11042 | closed | [
"bug"
] | 2025-03-12T09:26:01Z | 2025-03-13T02:00:12Z | 2 | txz32102 |
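The ZeroDivisionError above is the classic symptom of an unvalidated config field reaching arithmetic. A hedged sketch of the defensive pattern (not diffusers code): fall back to a default, or raise a clear error, before the value is used in a division or product.

```python
# Guard a possibly-None config value before it feeds arithmetic.
def inner_dim(channels: int, num_attention_heads=None, default_heads=8) -> int:
    heads = num_attention_heads if num_attention_heads is not None else default_heads
    if heads <= 0:
        raise ValueError(f"num_attention_heads must be positive, got {heads}")
    return (channels // heads) * heads  # per-head dim times heads

print(inner_dim(320))      # 320
print(inner_dim(320, 10))  # 320
```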
huggingface/lerobot | 851 | Hello, I would like to ask if I can use my ROS2 MoveIt2 robotic arm? | Can it support ROS training? I believe this would be beneficial for ecosystem development. | https://github.com/huggingface/lerobot/issues/851 | open | [
"question"
] | 2025-03-12T07:39:51Z | 2025-08-04T19:29:03Z | null | Gates-456 |
huggingface/open-r1 | 502 | How to use vllm with 2 GPUs? | Just as GRPO OOM #475 stated, the vllm kv init is so large that 1 A100 80GB could not hold it, while I have 8*A100 in total.
However, only 1 GPU can be assigned to vllm, via `vllm_device: auto` or `ib/python3.10/site-packages/trl/trainer/grpo_trainer.py`.
How should I solve the issue? Would anybody know?
| https://github.com/huggingface/open-r1/issues/502 | open | [] | 2025-03-12T03:36:18Z | 2025-06-03T11:55:47Z | null | greatxue |
huggingface/diffusers | 11,036 | Why perform the following operations on the latent condition? | in the code :https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan_i2v.py
line 395-404:
```
latents_mean = (
torch.tensor(self.vae.config.latents_mean)
.view(1, self.vae.config.z_dim, 1, 1, 1)
.to(latents.device, latents.dtype)
)
latents_std = 1.0 / torch.tensor(self.va... | https://github.com/huggingface/diffusers/issues/11036 | closed | [] | 2025-03-12T02:32:09Z | 2025-03-15T02:40:13Z | 2 | trouble-maker007 |
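What the snippet above applies, reduced to scalars: Wan's conditioning latents are normalized per channel with the stored VAE statistics, i.e. latents become (latents - mean) / std (the code precomputes 1/std and multiplies). A pure-Python illustration, with floats standing in for the tensors:

```python
# Per-channel normalization matching latents <- (latents - mean) * (1 / std).
def normalize(latent, mean, std):
    return [(x - m) / s for x, m, s in zip(latent, mean, std)]

def denormalize(latent, mean, std):
    return [x * s + m for x, m, s in zip(latent, mean, std)]

z = normalize([1.0, 2.0], mean=[0.5, 0.5], std=[0.5, 1.5])
print(z)                                        # [1.0, 1.0]
print(denormalize(z, [0.5, 0.5], [0.5, 1.5]))   # [1.0, 2.0]
```

The likely purpose is to bring the VAE latents to the distribution the transformer was trained on; denormalization undoes it before decoding.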
huggingface/lerobot | 847 | Is there a way Merge | Convert | Edit datasets function or a way how we can train model using different datasets ? | Hey, everyone.
At the moment, we have this problem: we have recorded datasets with around 100 episodes each, but we would like to train our model with 1000 episodes. Unfortunately, we didn't find a way to load multiple datasets into a single policy training job. Is it even possible? If not, is there a way to merge a ... | https://github.com/huggingface/lerobot/issues/847 | closed | [
"question",
"policies",
"dataset"
] | 2025-03-11T17:25:08Z | 2025-10-17T12:09:32Z | null | runmaget |
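Whatever the library-level answer, merging recorded datasets mostly comes down to bookkeeping: concatenating episodes while re-offsetting `episode_index` so indices stay unique. A hypothetical sketch (not a lerobot API):

```python
# Merge several episode lists, shifting episode_index by the running offset.
def merge_episodes(*datasets):
    merged, offset = [], 0
    for ds in datasets:
        for ep in ds:
            merged.append({**ep, "episode_index": ep["episode_index"] + offset})
        offset += len(ds)
    return merged

a = [{"episode_index": 0}, {"episode_index": 1}]
b = [{"episode_index": 0}]
print([e["episode_index"] for e in merge_episodes(a, b)])  # [0, 1, 2]
```

A real merge would also need matching feature schemas, fps, and re-offset frame indices.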
huggingface/lerobot | 846 | How to convert my own dataset to LerobotDataset format? | Hi, I am new to Lerobot and have a dataset in my own format. I would like to convert it to the LerobotDataset format.
I referred to `lerobot/scripts/push_dataset_to_hub.py`, but it seems to be deprecated. Could you provide guidance or an updated method for converting custom datasets?
Thanks in advance! | https://github.com/huggingface/lerobot/issues/846 | closed | [
"question",
"dataset"
] | 2025-03-11T09:17:23Z | 2025-04-15T00:59:10Z | null | yilin404 |
huggingface/open-r1 | 498 | How to Enable enforce_eager or Disable CUDA Graph in Evaluation | Evaluation code is currently using lighteval and vLLM for inference, and I would like to disable CUDA Graph by enabling options like ```enforce_eager```. However, I could not find a command-line argument for this in ```$MODEL_ARGS```. Additionally, setting it as an environment variable (e.g., VLLM_ENFORCE_EAGER) does n... | https://github.com/huggingface/open-r1/issues/498 | closed | [] | 2025-03-11T00:25:49Z | 2025-03-11T04:54:02Z | null | superdocker |
huggingface/diffusers | 11,020 | Multi-gpus Context Parallel training support? | Nowadays, the number of parameters in video generation models is increasing, and the video length is increasing. When training video models, it is difficult to fit a complete video sequence(200k~ tokens) on a single GPU. Some sequence parallel training technologies can solve this problem, such as the [fastvideo](https:... | https://github.com/huggingface/diffusers/issues/11020 | open | [] | 2025-03-10T11:45:30Z | 2025-07-18T13:05:08Z | 2 | yinian-lw |
huggingface/blog | 2,728 | Open In "02_how_to_generate", code cell 1 has an outdated version of tensorflow | The notebook 02_how_to_generate.ipynb currently specifies tensorflow==2.1, which is no longer available.
If we run that cell, we get the error: Could not find a version that satisfies the requirement tensorflow==2.1 (from versions: 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.12.1, 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0, 2.13.1, 2.... | https://github.com/huggingface/blog/issues/2728 | open | [] | 2025-03-09T18:05:55Z | 2025-03-09T18:06:11Z | null | Umashankar86 |
huggingface/blog | 2,727 | Open In "02_how_to_generate", code cell 1 has an outdated version of tensorflow | The notebook 02_how_to_generate.ipynb currently specifies tensorflow==2.1, which is no longer available.
If we run that cell, we get the error: Could not find a version that satisfies the requirement tensorflow==2.1 (from versions: 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.12.1, 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0, 2.13.1, 2.... | https://github.com/huggingface/blog/issues/2727 | closed | [] | 2025-03-09T18:04:48Z | 2025-03-09T18:05:03Z | null | Umashankar86 |
huggingface/datasets | 7,442 | Flexible Loader | ### Feature request
Can we have a utility function that will use `load_from_disk` when given the local path and `load_dataset` if given an HF dataset?
It can be something as simple as this one:
```
def load_hf_dataset(path_or_name):
if os.path.exists(path_or_name):
return load_from_disk(path_or_name)
... | https://github.com/huggingface/datasets/issues/7442 | open | [
"enhancement"
] | 2025-03-09T16:55:03Z | 2025-03-27T23:58:17Z | 3 | dipta007 |
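A runnable sketch of the utility proposed above. The real version would call `datasets.load_from_disk` and `datasets.load_dataset` directly; they are injected here as parameters so the dispatch logic itself can be shown without the `datasets` package installed:

```python
import os

# Dispatch: local path -> load_from_disk, anything else -> load_dataset.
def load_hf_dataset(path_or_name, load_from_disk, load_dataset):
    if os.path.exists(path_or_name):
        return load_from_disk(path_or_name)
    return load_dataset(path_or_name)

result = load_hf_dataset(
    "org/some-hub-dataset",                 # not a local path (hypothetical name)
    load_from_disk=lambda p: f"disk:{p}",
    load_dataset=lambda p: f"hub:{p}",
)
print(result)  # hub:org/some-hub-dataset
```

One caveat the dispatch inherits: a Hub id that happens to collide with a relative local path would silently take the disk branch.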
huggingface/chat-ui | 1,751 | Analyze uploaded PDF files through OpenAI API | When I upload a PDF file and leverage it, I will get the base64 data. But I didn't find the code to process it in endpoints/openai, while it can handle the image base64 data. Besides, I failed to transfer it back to text. How can I analyze the file through OpenAI API?
 that the latest version of hf-hub is 0.4.2, but I can't find the 0.4.2 tag on GitHub. Could you tell me what is the commit ID corresponding to this version?
Sincerely suggest that you add a corresponding tag for each version release, which can effectively ... | https://github.com/huggingface/hf-hub/issues/99 | closed | [] | 2025-03-08T12:43:18Z | 2025-06-16T09:41:15Z | null | HairlessVillager |
huggingface/transformers | 36,613 | In "02_how_to_generate", code cell 1 has an error message | ### System Info
In "02_how_to_generate", code cell 1 has an error message but the rest works fine: ERROR: Could not find a version that satisfies the requirement tensorflow==2.1 (from versions: 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.12.1, 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0, 2.13.1, 2.14.0rc0, 2.14.0rc1, 2.14.0, 2.14.1, ... | https://github.com/huggingface/transformers/issues/36613 | closed | [
"bug"
] | 2025-03-08T07:46:39Z | 2025-04-16T08:03:04Z | null | kvutien |
huggingface/diffusers | 11,008 | Support wan2.1 video model? | ### Did you like the remote VAE solution?
Yes.
### What can be improved about the current solution?
Wan2.1 video model support is appreciated!
### What other VAEs you would like to see if the pilot goes well?
Wan2.1 video model support is appreciated!
### Notify the members of the team
@hlky @sayakpaul | https://github.com/huggingface/diffusers/issues/11008 | open | [
"stale"
] | 2025-03-08T04:21:33Z | 2025-05-09T15:03:47Z | 6 | kexul |
huggingface/trl | 3,028 | Distill teacher models where the vocab size of teacher and student is different | I am trying to distill a Qwen2.5-7B-Instruct to Qwen2.5-5B-Instruct using a sample code
```from datasets import Dataset
from trl import GKDConfig, GKDTrainer
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
)
NUM_DUMMY_SAMPLES = 100
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B... | https://github.com/huggingface/trl/issues/3028 | open | [
"🏋 GKD"
] | 2025-03-08T00:29:01Z | 2025-10-29T04:15:50Z | null | shaunakjoshi12 |
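The vocab-mismatch problem above, in conceptual form: a KL loss between teacher and student logits is only defined over a shared vocabulary, so one crude workaround is truncating both to the overlap. This is an illustration of the problem, not what GKDTrainer does, and it is only sound when the shared token ids map to the same tokens; in practice distillation generally wants teacher and student to share a tokenizer.

```python
# Truncate two logit vectors to their common vocabulary size.
def align_logits(teacher, student):
    n = min(len(teacher), len(student))
    return teacher[:n], student[:n]

t, s = align_logits([0.1, 0.2, 0.3, 0.4], [1.0, 2.0, 3.0])
print(t, s)  # [0.1, 0.2, 0.3] [1.0, 2.0, 3.0]
```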
huggingface/diffusers | 11,005 | pipeline_wan_i2v.py: minor discrepancy between arg default and docstring | ### Describe the bug
https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan_i2v.py
Line 447 (arg default):
```output_type: Optional[str] = "np",```
Line 496 (docstring):
```output_type (`str`, *optional*, defaults to `"pil"`):```
### Reproduction
n/a
### Logs
```shell
```
#... | https://github.com/huggingface/diffusers/issues/11005 | closed | [
"bug",
"good first issue",
"help wanted",
"contributions-welcome"
] | 2025-03-07T16:37:48Z | 2025-04-24T18:49:38Z | 2 | rolux |
huggingface/finetrainers | 301 | How to train text-to-video generation model on different generation models using Disney dataset? | The current repository does not explicitly describe ho to change training methods between t2v or i2v.
| https://github.com/huggingface/finetrainers/issues/301 | closed | [] | 2025-03-07T16:02:42Z | 2025-03-07T16:08:06Z | null | kjosh925 |
huggingface/speech-to-speech | 159 | What is `from df.enhance import enhance, init_df` in vad_handler? | https://github.com/huggingface/speech-to-speech/issues/159 | open | [] | 2025-03-07T15:07:53Z | 2025-03-07T15:07:53Z | null | Manukrishna2K |
huggingface/diffusers | 11,002 | Any chance class members like self._interrupt could be defined in __init__ across pipelines? | ### Describe the bug
I think there is no benefit to late initializing here and it puts a burden on the library user that could be easily avoided. Also leads to some confusion as it is uncommon, code inspection flags this. Let me know if I'm missing something.
### Reproduction
```
class WanImageToVideoPipeline:
def ... | https://github.com/huggingface/diffusers/issues/11002 | open | [
"bug",
"help wanted",
"contributions-welcome"
] | 2025-03-07T11:28:27Z | 2025-05-26T07:21:47Z | 9 | spezialspezial |
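The change the issue requests, in miniature (a toy class, not diffusers code): initialize the attribute in `__init__` so readers, linters, and code inspection see it, instead of creating it lazily inside `__call__`.

```python
# Eager attribute initialization: no late surprises, no AttributeError.
class Pipeline:
    def __init__(self):
        self._interrupt = False   # defined up front

    @property
    def interrupt(self):
        return self._interrupt

p = Pipeline()
print(p.interrupt)  # False
```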
huggingface/diffusers | 10,993 | f-divergence | Is there a plan to implement the f-divergence scheduler ? I would like to contribute that to the library. | https://github.com/huggingface/diffusers/issues/10993 | open | [
"stale"
] | 2025-03-06T22:46:13Z | 2025-04-06T15:02:55Z | 5 | manmeet3591 |
huggingface/smolagents | 902 | How to populate custom variables in prompt template? | I'm trying to configure custom template variables in my system prompt.
**Current Implementation:**
1. I have a system prompt template with custom variables:
```python
CUSTOM_CODE_SYSTEM_PROMPT = """You are {{ bot_name }}, a customer support assistant...
{{ formatting_guidelines }}
```
2. Agent creation and configura... | https://github.com/huggingface/smolagents/issues/902 | closed | [] | 2025-03-06T20:45:51Z | 2025-03-07T08:54:22Z | null | Luisotee |
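A minimal sketch of what filling `{{ var }}` placeholders in a prompt template involves. The real library renders its templates with Jinja-style syntax; this regex stand-in just shows the substitution itself, and the function name is hypothetical:

```python
import re

# Replace {{ name }} placeholders with supplied keyword values;
# unknown placeholders are left untouched.
def render(template: str, **variables) -> str:
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

prompt = "You are {{ bot_name }}, a customer support assistant. {{ formatting_guidelines }}"
print(render(prompt, bot_name="Ada", formatting_guidelines="Be brief."))
# You are Ada, a customer support assistant. Be brief.
```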