| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers | 11,914 | Loading multiple LoRAs to 1 pipeline in parallel, 1 LoRA to 2-pipelines on 2-GPUs | Hi everyone,
I have the following scenario.
I have a machine with 2 GPUs and a running service that keeps two pipelines loaded on their corresponding devices. Also I have a list of LoRAs (say 10). On each request I split the batch into 2 parts (each request also has the corresponding information about the LoRA), load LoRA... | https://github.com/huggingface/diffusers/issues/11914 | closed | [] | 2025-07-12T15:54:44Z | 2025-07-15T19:40:11Z | 5 | vahe-toffee |
huggingface/lerobot | 1,494 | release the code for reproducing the performance on the LIBERO dataset reported in the SmolVLA paper? | Has anyone been able to reproduce the performance on the LIBERO dataset reported in the SmolVLA paper? I'd appreciate any guidelines or tips to help with reproducing the results. | https://github.com/huggingface/lerobot/issues/1494 | closed | [
"question",
"policies",
"simulation"
] | 2025-07-12T09:35:00Z | 2025-09-23T09:44:59Z | null | JustinKai0527 |
huggingface/datasets | 7,680 | Question about iterable dataset and streaming | In the doc, I found the following example: https://github.com/huggingface/datasets/blob/611f5a592359ebac6f858f515c776aa7d99838b2/docs/source/stream.mdx?plain=1#L65-L78
I am confused:
1. If we have already loaded the dataset, why call `to_iterable_dataset`? Does it go through the dataset faster than map-style datase... | https://github.com/huggingface/datasets/issues/7680 | open | [] | 2025-07-12T04:48:30Z | 2025-08-01T13:01:48Z | 8 | Tavish9 |
huggingface/transformers | 39,377 | FlashAttention2 support for GSAI-ML / LLaDA-8B-Instruct? | Hi there,
I attempted to use flash attention 2 with this model but it seems like it isn't supported, based on this error:
```
ValueError: LLaDAModelLM does not support Flash Attention 2.0 yet. Please request to add support where the model is hosted, on its model hub page: https://huggingface.co/GSAI-ML/LLaDA-8B-Instru... | https://github.com/huggingface/transformers/issues/39377 | closed | [] | 2025-07-12T02:48:36Z | 2025-08-19T08:03:26Z | 2 | lbertge |
huggingface/lerobot | 1,492 | Is there any plan to add a validation loss in the training pipeline, which is not dependent on simulation env? | Can we have a dataset split in the training code to run the model on a holdout validation episode to check loss on it? | https://github.com/huggingface/lerobot/issues/1492 | open | [
"enhancement",
"question",
"policies"
] | 2025-07-11T20:43:04Z | 2025-12-30T07:12:20Z | null | mohitydv09 |
huggingface/peft | 2,642 | Prompt_Tuning.ipynb example doesn't seem to train the model | Hello! I am running the Prompt-Tuning notebook example from the PEFT lib examples [here](https://github.com/huggingface/peft/blob/main/examples/sequence_classification/Prompt_Tuning.ipynb). I did **not** change any line of code, and I ran the code blocks sequentially.
However, the performance under metrics remain exactly the *... | https://github.com/huggingface/peft/issues/2642 | closed | [] | 2025-07-11T18:26:58Z | 2025-08-23T15:03:47Z | 8 | ruixing76 |
huggingface/transformers | 39,366 | RuntimeError when loading llmcompressor W8A8 quantized model: int8 dtype in weight initialization | I'm trying to load the quantized model `RedHatAI/Qwen2.5-VL-7B-Instruct-quantized.w8a8` but encountering a dtype compatibility issue during model initialization. The model appears to be quantized using `llmcompressor` with W8A8 quantization scheme.
**Note**: I need to load this model without vLLM because I may need to... | https://github.com/huggingface/transformers/issues/39366 | closed | [
"Good First Issue"
] | 2025-07-11T15:15:09Z | 2025-12-08T13:30:10Z | 10 | AdelineXinyi |
huggingface/lerobot | 1,483 | How can I set `max_relative_target` to get safe action? | I saw this in function `send_action` in `src/lerobot/robots/so100_follower/so100_follower.py`
```python
def send_action(self, action: dict[str, Any]) -> dict[str, Any]:
"""Command arm to move to a target joint configuration.
The relative action magnitude may be clipped depending on the configura... | https://github.com/huggingface/lerobot/issues/1483 | open | [
"question",
"robots"
] | 2025-07-11T02:46:02Z | 2025-08-12T09:34:51Z | null | milong26 |
huggingface/peft | 2,640 | Why does peft.utils.other.fsdp_auto_wrap_policy not wrap modules that do not require grad? | In https://github.com/huggingface/peft/blob/main/src/peft/utils/other.py#L977,
```
def fsdp_auto_wrap_policy(model):
if hasattr(FullyShardedDataParallelPlugin, "get_module_class_from_name"):
get_module_class_from_name = FullyShardedDataParallelPlugin.get_module_class_from_name
else:
from accel... | https://github.com/huggingface/peft/issues/2640 | closed | [] | 2025-07-10T12:07:13Z | 2025-08-18T15:05:03Z | 4 | Changlin-Lee |
huggingface/transformers | 39,336 | TypeError: GenerationMixin._extract_past_from_model_output() got an unexpected keyword argument 'standardize_cache_format' | I am using the CogVLM2 video captioning model.
It works with transformers==4.43.4 at the latest;
with transformers==4.44.0 and onward I get the error below,
but I need to use the latest version of transformers since 4-bit quantization currently fails on some GPUs and platforms.
How can I fix this issue?
`TypeError: GenerationMixin._extr... | https://github.com/huggingface/transformers/issues/39336 | closed | [
"bug"
] | 2025-07-10T11:49:02Z | 2025-08-18T08:03:13Z | 4 | FurkanGozukara |
huggingface/lerobot | 1,476 | Here is an interactive gym to play with the robot (I still need some help) | ### First the good news:
This is an interactive gym where you can experiment with pre-trained policies to control the robot in real time.
Here is how to use it:
- `Double-click` on a body to select it.
- `Ctrl + left` drag applies a torque to the selected object, resulting in rotation.
- `Ctrl + right` drag applies a ... | https://github.com/huggingface/lerobot/issues/1476 | open | [
"question",
"simulation"
] | 2025-07-09T14:59:22Z | 2025-12-16T13:41:00Z | null | raul-machine-learning |
huggingface/lerobot | 1,475 | [Question] What does each number in the predicted action (SmolVLA) stand for? | Hi, I'm trying to load SmolVLA and test it on my simulation env.
After passing the observations to the model using "policy.select_action(obs)" I got a 6-dimensional action, but I'm quite confused about what exactly they are. And if there are three for position translation and three for rotation, how could I control ... | https://github.com/huggingface/lerobot/issues/1475 | open | [
"question",
"policies"
] | 2025-07-09T13:39:25Z | 2025-08-12T10:08:26Z | null | Calvert0921 |
huggingface/lerobot | 1,471 | where is 7_get_started_with_real_robot.md? | I didn't find 7_get_started_with_real_robot.md | https://github.com/huggingface/lerobot/issues/1471 | closed | [
"documentation",
"question"
] | 2025-07-09T08:02:32Z | 2025-10-08T08:42:21Z | null | von63 |
huggingface/alignment-handbook | 218 | Will you release SmolLM 3 recipe? | First off, thank you so much for sharing these training resources.
I was wondering if, with the recent release of SmolLM3, you have plans to also share its training recipe.
Have a nice day! | https://github.com/huggingface/alignment-handbook/issues/218 | closed | [] | 2025-07-08T19:47:20Z | 2025-07-15T14:16:11Z | 1 | ouhenio |
huggingface/sentence-transformers | 3,433 | How to use a custom batch sampler? | `SentenceTransformerTrainer.__init__` will check the type of args, so I have to write a class inheriting from `SentenceTransformerTrainingArgs` rather than `TransformerTrainingArgs`. The problem is that `SentenceTransformerTrainingArgs.__post_init__` forces the use of `BatchSampler` to initialize a batch sampler. Is there... | https://github.com/huggingface/sentence-transformers/issues/3433 | open | [] | 2025-07-08T09:35:24Z | 2025-07-08T12:36:33Z | null | Hypothesis-Z |
huggingface/transformers | 39,266 | Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. | ### System Info
```bash
Traceback (most recent call last):
File "/home/cx/miniconda3/envs/demo/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 767, in convert_to_tensors
tensor = as_tensor(value)
File "/home/cx/miniconda3/envs/demo/lib/python3.10/site-packages/transformers/tokenizat... | https://github.com/huggingface/transformers/issues/39266 | closed | [
"bug"
] | 2025-07-08T05:19:35Z | 2025-07-08T06:50:47Z | 0 | mumu029 |
huggingface/lerobot | 1,460 | How to support dataloading with historical cues? | As I see it, the `__getitem__` function of LerobotDataset currently returns single-frame data; how can I stack the historical frames and make use of batch data with historical information, like UniVLA?
| https://github.com/huggingface/lerobot/issues/1460 | open | [
"question",
"dataset"
] | 2025-07-08T01:49:11Z | 2025-08-12T09:44:02Z | null | joeyxin-del |
huggingface/lerobot | 1,458 | how to control a real robot arm-101 with my own pretrained model? | I don't see an instruction or example script in this repository.
Please help.
Thanks,
| https://github.com/huggingface/lerobot/issues/1458 | open | [
"question",
"policies"
] | 2025-07-08T01:19:50Z | 2025-08-12T09:45:13Z | null | jcl2023 |
huggingface/candle | 3,016 | Build fails on Maxwell GPU due to __dp4a undefined in quantized.cu | I'm trying to build a Rust project locally that depends on candle-kernels on my laptop with an NVIDIA GeForce 940MX (Maxwell, compute capability 5.0). The build fails with errors like:
```
src/quantized.cu(1997): error: identifier "__dp4a" is undefined
...
18 errors detected in the compilation of "src/quantized.cu".
... | https://github.com/huggingface/candle/issues/3016 | open | [] | 2025-07-07T14:41:53Z | 2025-07-07T14:41:53Z | 0 | fishonamos |
huggingface/text-generation-inference | 3,289 | How to detect watermark? | Hi,
Thanks for the great work.
I saw that the KGW watermark is implemented in the current code, but it seems to lack code to evaluate and detect whether the generated text contains a watermark.
Could anyone suggest whether such code exists? It would be very helpful.
Thanks | https://github.com/huggingface/text-generation-inference/issues/3289 | open | [] | 2025-07-07T11:42:54Z | 2025-07-07T11:42:54Z | null | Allencheng97 |
huggingface/lerobot | 1,448 | How to specify both policy.type and pretrained path at the same time? | Hi, I am adding custom configs to a PreTrainedConfig, and I also want to load it from a pretrained path. However, if I specify the pretrained path (with policy.path), I won't be able to modify the fields inside the new PreTrainedConfig subclass. If I use policy.type="myNewModel" instead, I am able to call the fields (s... | https://github.com/huggingface/lerobot/issues/1448 | open | [
"enhancement",
"configuration"
] | 2025-07-07T03:33:15Z | 2025-08-12T09:45:58Z | null | branyang02 |
huggingface/lerobot | 1,447 | SmolVLA input/output clarification | I'm now trying to load the SmolVLA to control the Franka arm in simulation. I found that there could be three image inputs (Observation.image, 1 and 2) and I have top, wrist and side views. Is there a fixed order for those camera views?
And the predicted action has 6 dimensions, does that mean it doesn't include the g... | https://github.com/huggingface/lerobot/issues/1447 | closed | [
"question",
"policies"
] | 2025-07-06T21:56:43Z | 2025-10-09T21:59:17Z | null | Calvert0921 |
huggingface/lerobot | 1,446 | How to evaluate finetuned SmolVLA model | Dear authors, thank you for your wonderful work.
I have fine-tuned the smolvla model based on a customized lerobot format dataset. My dataset is picking up a banana and placing it on a box. How can I evaluate the performance of the model? I tried eval.py in the scripts directory, but env_type=pusht doesn't work. I think this env_... | https://github.com/huggingface/lerobot/issues/1446 | closed | [
"question",
"policies"
] | 2025-07-06T15:27:22Z | 2025-10-17T11:57:49Z | null | BintaoBryant |
huggingface/diffusers | 11,865 | AttributeError: type object 'CosmosTransformer3DModel' has no attribute 'from_single_file' | ### Describe the bug
I would like to run the Cosmos-Predict2-14B-Text2Image model, but it is too large to fit in 24GB of VRAM normally, so I tried to load a Q8_0 GGUF quantization. I copied some code from the [HiDreamImageTransformer2DModel](https://huggingface.co/docs/diffusers/en/api/models/hidream_image_transformer... | https://github.com/huggingface/diffusers/issues/11865 | closed | [
"bug"
] | 2025-07-05T12:14:50Z | 2025-07-11T07:15:23Z | 9 | mingyi456 |
huggingface/diffusers | 11,864 | AutoencoderDC.encode fails with torch.compile(fullgraph=True) - "name 'torch' is not defined" | ### Describe the bug
I'm trying to optimize my data preprocessing pipeline for the Sana model by using `torch.compile` on the DC-AE encoder. Following PyTorch's best practices, I attempted to compile only the `encode` method with `fullgraph=True` for better performance, but I'm encountering an error.
When I try:
```p... | https://github.com/huggingface/diffusers/issues/11864 | closed | [
"bug"
] | 2025-07-05T06:15:11Z | 2025-07-09T01:32:39Z | 6 | SingleBicycle |
huggingface/datasets | 7,669 | How can I add my custom data to huggingface datasets | I want to add my custom dataset to huggingface datasets. Please guide me on how to achieve that. | https://github.com/huggingface/datasets/issues/7669 | open | [] | 2025-07-04T19:19:54Z | 2025-07-05T18:19:37Z | null | xiagod |
huggingface/lerobot | 1,442 | Trained pi0 policy ignores visual cues | I am having an issue in which my trained pi0 policy looks smooth but it completely ignores the camera input. I have tried covering up a camera and the policy still looks smooth! This seems very wrong. I wonder if it is because my images are not normalized correctly? Has anyone else seen this?
Do I need to change the ... | https://github.com/huggingface/lerobot/issues/1442 | open | [
"question",
"policies"
] | 2025-07-03T20:13:08Z | 2025-08-12T09:47:09Z | null | kumarhans |
huggingface/lerobot | 1,439 | [QUESTION] run a policy on a real robot | Hi there, in the documentation, scripts to teleoperate, record, replay or evaluate a policy are provided, **but how do you run a policy for inference only on a real robot?** I did not find such a script.
Besides, it would be useful to add such a script to the documentation as well.
Thank you very much for your help
| https://github.com/huggingface/lerobot/issues/1439 | open | [
"question",
"policies"
] | 2025-07-03T18:09:10Z | 2025-08-12T09:47:27Z | null | FaboNo |
huggingface/smolagents | 1,512 | How can we use this benchmark to evaluate local models? | examples/smolagents_benchmark/run.py
| https://github.com/huggingface/smolagents/issues/1512 | closed | [
"enhancement"
] | 2025-07-03T06:17:58Z | 2025-07-03T08:07:26Z | null | OoOPenN |
huggingface/diffusers | 11,849 | Can not load fusionx_lora into original wan2.1-14b | Hello, I am adding the fusionx_lora into the original wan2.1-14b-i2v; my code is as follows:
> pipe = WanImageToVideoPipeline.from_pretrained(my_local_path + "Wan2.1-I2V-14B-480P-Diffusers", vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16)
> pipe.load_lora_weights(
> my_local_path + "Wan14BT2VFusio... | https://github.com/huggingface/diffusers/issues/11849 | open | [] | 2025-07-02T13:48:17Z | 2025-07-02T13:48:17Z | 0 | fzuo1230 |
huggingface/transformers | 39,169 | Using Gemma3n with text-only generation requires image dependencies | ### System Info
- `transformers` version: 4.53.0
- Platform: macOS-15.5-arm64-arm-64bit
- Python version: 3.12.8
- Huggingface_hub version: 0.33.2
- Safetensors version: 0.5.3
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1 (... | https://github.com/huggingface/transformers/issues/39169 | closed | [
"bug"
] | 2025-07-02T07:46:43Z | 2025-08-01T08:14:26Z | 6 | marianheinsen |
huggingface/lerobot | 1,429 | When will SmolVLA (2.25B & 0.24b) be released? | Hi dear authors,
thanks for all the wonderful work - SmolVLA!
I wonder, will you release **SmolVLA (2.25B)**? I want to compare its performance with your released version (0.45B) | https://github.com/huggingface/lerobot/issues/1429 | closed | [
"question",
"policies"
] | 2025-07-02T03:39:06Z | 2025-10-11T07:21:57Z | null | JuilieZ |
huggingface/sentence-transformers | 3,416 | How to calculate prompt tokens for embedding model encode? | I want to calculate the input prompt tokens, which are returned to the user to let them know how many tokens they consumed. How can I do that? Could you give me an example? | https://github.com/huggingface/sentence-transformers/issues/3416 | open | [] | 2025-07-02T03:27:11Z | 2025-07-03T07:02:55Z | null | gaoxt1983 |
huggingface/sentence-transformers | 3,414 | How to fine tune multimodal embedding model? | Hi @tomaarsen and Team - hope all is well & thanks for the work.
I used to fine-tune some pure text-based embedding models using this package, and now I would like to fine-tune multimodal embedding models such as `llamaindex/vdr-2b-multi-v1` and `jinaai/jina-embeddings-v4`.
I wonder if you can share some insights / re... | https://github.com/huggingface/sentence-transformers/issues/3414 | open | [] | 2025-07-01T23:45:04Z | 2025-07-03T10:25:29Z | null | groklab |
huggingface/lerobot | 1,424 | evaluated trained policy reports 14 pc_success only | Trained act policy using
```
python lerobot/scripts/train.py \
--policy.type=act \
--dataset.repo_id=lerobot/act_aloha_sim_insertion_human \
--env.type=aloha \
--output_dir=outputs/train/act_aloha_insertion
```
Question: I think I mistakenly used the prefix `act_` in the `repo_id` but if I don't use ... | https://github.com/huggingface/lerobot/issues/1424 | open | [
"question",
"policies"
] | 2025-07-01T12:16:38Z | 2025-08-12T09:49:05Z | null | raul-machine-learning |
huggingface/lerobot | 1,421 | It would help to have a description for the lerobot datasets: | For example, [lerobot/aloha_sim_insertion_human](https://huggingface.co/datasets/lerobot/aloha_sim_insertion_human) comes with no description at all.
It'd help to know
- What makes this data special/interesting
- How to train different models in the simulator
- What should we expect
- what does the `_human` means, ... | https://github.com/huggingface/lerobot/issues/1421 | open | [
"question",
"dataset"
] | 2025-07-01T10:14:45Z | 2025-08-12T09:49:27Z | null | raul-machine-learning |
huggingface/lerobot | 1,419 | simulator should allow pushing objects around with the mouse interactively | Not having this is preventing us from testing, debugging and playing with the robots.
According to the MuJoCo documentation this feature is available in their simulator, but it is not exposed in lerobot:
```
A related usability feature is the ability to "reach into" the simulation, push objects around and see how the
physic... | https://github.com/huggingface/lerobot/issues/1419 | open | [
"question",
"simulation"
] | 2025-07-01T09:47:02Z | 2025-08-12T09:50:18Z | null | raul-machine-learning |
huggingface/lerobot | 1,418 | Robot tries to transfer cube even if it failed to pick it up, shouldn't it retry? | I am evaluating the following policy:
```
python lerobot/scripts/eval.py --policy.path=lerobot/act_aloha_sim_transfer_cube_human --env.type=aloha --env.task=AlohaTransferCube-v0 --eval.n_episodes=1 --eval.batch_size=1
```
However the robot fails to pick up the cube but carries on with the task, shouldn't the robot kee... | https://github.com/huggingface/lerobot/issues/1418 | closed | [
"question",
"simulation"
] | 2025-07-01T09:18:38Z | 2025-10-17T11:57:34Z | null | raul-machine-learning |
huggingface/transformers | 39,137 | ImportError: cannot import name 'pipeline' from 'transformers' | ### System Info
I am using Databricks notebook.
Databricks runtime: 13.3 LTS (includes Apache Spark 3.4.1, Scala 2.12)
### Who can help?
@Rocketknight1 @SunMarc @zach-huggingface
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the ... | https://github.com/huggingface/transformers/issues/39137 | closed | [
"Usage",
"bug"
] | 2025-06-30T18:49:54Z | 2025-10-23T00:53:19Z | 14 | atabari-bci |
huggingface/lerobot | 1,407 | Can the current signals be read from the lerobot? | Can a user read the current signals from the LeRobot? | https://github.com/huggingface/lerobot/issues/1407 | open | [
"question",
"sensors"
] | 2025-06-30T10:05:26Z | 2025-08-12T09:51:06Z | null | Frank-ZY-Dou |
huggingface/optimum | 2,314 | How to set the dynamic input sizes for decoder_with_past_model.onnx of NLLB | Dear author,
I'm a beginner in optimum, so this question may be an elementary one. I used optimum to export decoder_with_past_model.onnx from nllb-200-distilled-600M. The resulting ONNX has many inputs with dynamic shapes. Now I intend to overwrite the inputs with static sizes. However, I'm not sure about the correct set... | https://github.com/huggingface/optimum/issues/2314 | closed | [
"Stale"
] | 2025-06-30T06:37:50Z | 2025-08-07T02:17:43Z | null | liamsun2019 |
huggingface/transformers | 39,114 | Is there a way to force it to use an ASCII-based progress bar and not the ipython widget one? | When loading models, I prefer to have an ASCII-based progress bar and not an IPython one | https://github.com/huggingface/transformers/issues/39114 | open | [
"Feature request"
] | 2025-06-29T22:41:19Z | 2025-07-07T13:20:13Z | 0 | weathon |
huggingface/transformers | 39,105 | How to use other acceleration apis of npu? | ### Feature request
I noticed that transformers now support using flash attention directly in the npu by [```npu_flash_attention.py```](https://github.com/huggingface/transformers/pull/36696). There are many other acceleration apis that can be used in npu, such as shown in [doc](https://www.hiascend.com/document/detai... | https://github.com/huggingface/transformers/issues/39105 | closed | [
"Feature request"
] | 2025-06-29T08:26:29Z | 2026-01-04T07:23:26Z | null | zheliuyu |
huggingface/candle | 3,013 | Word Timestamp for whisper | Hi, is there no way to get word timestamps using Whisper in Candle?
The example successfully demonstrates the retrieval of segment timestamps, but how would one retrieve word timestamps?
When I look into the Python code, they seem to pass this `word_timestamp=True` argument while transcribing and get the result with `base...
huggingface/trl | 3,662 | What is the point of steps_per_gen in GRPO Trainer | Hello, can you please explain what is the point of steps_per_gen in GRPO Training config when we already have num_iterations? The policy update logic can then simply be:
if num_iterations = 1, generations and model update are on_policy (per_token_logps = old_per_token_logps)
When num_iterations > 1, then the same gen... | https://github.com/huggingface/trl/issues/3662 | open | [
"question",
"GRPO"
] | 2025-06-28T20:08:01Z | 2025-07-25T08:05:50Z | null | ankur6ue |
huggingface/lerobot | 1,399 | calibrate.py for only follower | The calibrate.py file doesn't work for setting up the motors for the follower arm, as there aren't enough parameters for the function to run. Has anyone made an adaptation of the calibrate file that doesn't take the teleop into consideration? | https://github.com/huggingface/lerobot/issues/1399 | open | [
"question",
"teleoperators"
] | 2025-06-27T20:53:47Z | 2025-08-12T09:51:53Z | null | ramallis |
huggingface/transformers | 39,091 | `transformers`' dependency on `sentencepiece` blocks use on windows in python 3.13 | ### System Info
Due to
* changes in Python 3.13,
* an incompatibility in `sentencepiece`,
* `transformers` dependency on `sentencepiece`,
`transformers` cannot be easily installed under windows + py3.13, and does not work as a dependency of other packages in this environment
There are multiple issues and a merged P... | https://github.com/huggingface/transformers/issues/39091 | closed | [
"Usage"
] | 2025-06-27T15:23:57Z | 2025-07-03T16:02:47Z | 5 | leondz |
huggingface/transformers | 39,073 | Inefficient default GELU implementation in GPT2 | While profiling the HuggingFace GPT2 model, I found that the default GELU backend used is NewGELUActivation, which is inefficient in most cases. Instead of using a fused CUDA kernel, NewGELUActivation executes multiple separate PyTorch-level operators, leading to unnecessary kernel launches and memory overhead.
```pyt... | https://github.com/huggingface/transformers/issues/39073 | closed | [] | 2025-06-27T09:07:39Z | 2025-08-12T03:35:13Z | 4 | null-pointer-access |
huggingface/diffusers | 11,816 | set_adapters performance degrades with the number of inactive adapters | ### Describe the bug
### Goal
Build an image-generation service with `StableDiffusionXLPipeline` that:
1. Keeps ~50 LoRA adapters resident in GPU VRAM.
2. For each request:
• activate **≤ 5** specific LoRAs via `pipeline.set_adapters(...)`
• run inference
• deactivate them (ready for the next request).
... | https://github.com/huggingface/diffusers/issues/11816 | closed | [
"bug"
] | 2025-06-26T22:27:54Z | 2025-09-29T14:33:13Z | 27 | hrazjan |
huggingface/lerobot | 1,393 | motor configuration request - one motor at a time like configure_motors | I like the new process generally, but I think the ability to configure a single motor was valuable (e.g., re-configuring a single problematic motor rather than having to go through the full configuration).
In addition to the current process, it would be nice if we could bring that per-motor functionality forward,... | https://github.com/huggingface/lerobot/issues/1393 | open | [
"question",
"robots"
] | 2025-06-26T19:27:36Z | 2025-08-12T09:52:30Z | null | brainwavecoder9 |
huggingface/text-generation-inference | 3,277 | Rubbish responses by Llama-3.3-70B-Instruct when message API is enabled. | ### System Info
TGI endpoint deployed on AWS SageMaker using the 3.2.3 image version.
The image URI is `763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-tgi-inference:2.6.0-tgi3.2.3-gpu-py311-cu124-ubuntu22.04`
The environment is:
```python
env = {'HF_MODEL_ID': 'meta-llama/Llama-3.3-70B-Instruct',
... | https://github.com/huggingface/text-generation-inference/issues/3277 | open | [] | 2025-06-26T06:49:31Z | 2025-06-26T06:56:22Z | 0 | alexshtf |
huggingface/peft | 2,615 | How can I fine-tune the linear layers of the LLM part in Qwen2.5_VL 3B? | I only want to fine-tune the linear layers in the LLM part of Qwen2.5_VL 3B. The LoRA target modules are as follows:
```
target_modules: List[str] = field(default_factory=lambda: [
'self_attn.q_proj',
'self_attn.k_proj',
'self_attn.v_proj',
'self_attn.o_proj',
'mlp.gate_proj',
'mlp.up_proj',
... | https://github.com/huggingface/peft/issues/2615 | closed | [] | 2025-06-26T02:08:43Z | 2025-07-18T16:04:27Z | 7 | guoguo1314 |
huggingface/lerobot | 1,383 | Can multiple Lerobot datasets be mixed to pre-train a VLA model? | Hello, I would like to know if multiple independent Lerobot datasets can be mixed to achieve large-scale pre-training of a VLA model. Just like OpenVLA, it can mix multiple RLDS datasets to pre-train models. | https://github.com/huggingface/lerobot/issues/1383 | open | [
"enhancement",
"question",
"dataset"
] | 2025-06-25T08:45:48Z | 2025-08-12T09:55:48Z | null | xliu0105 |
huggingface/transformers | 39,023 | Does Gemma 3 need position ids to be 1-indexed explicitly? | Hi Team
At some point `Gemma3ForConditionalGeneration` used to impose a 1-indexing of `position_ids`, [see here](https://github.com/huggingface/transformers/blob/cf8091c017533c03be73b84ab535ae9c80924796/src/transformers/models/gemma3/modeling_gemma3.py#L1430). However you won't find this in the latest main anymore, [s... | https://github.com/huggingface/transformers/issues/39023 | closed | [] | 2025-06-25T00:00:14Z | 2025-07-25T17:27:26Z | 2 | krypticmouse |
huggingface/transformers | 39,017 | Not able to use flash attention with torch.compile with model like BERT | ### System Info
when using torch.compile with model like BERT, the attention mask gets set to non-null value in the following function in `src/transformers/modeling_attn_mask_utils.py`. Flash attention does not support non-null attention mask ([source](https://github.com/pytorch/pytorch/blob/b09bd414a6ccba158c09f586a2... | https://github.com/huggingface/transformers/issues/39017 | closed | [
"bug"
] | 2025-06-24T19:09:07Z | 2025-10-09T23:03:45Z | 3 | gambiTarun |
huggingface/lerobot | 1,379 | New motor configuration doesn't center servo motors for so100 | I was used to using the previously existing `configure_motor.py` script to set the baudrate, ID and center the servo. And I used to do this before attempting assembly.
This script was also useful for configuring individual motors whenever I had to replace one in case they broke for some reason.
I just pulled the lates... | https://github.com/huggingface/lerobot/issues/1379 | open | [
"question",
"robots"
] | 2025-06-24T15:43:16Z | 2025-08-12T09:56:02Z | null | Esser50K |
huggingface/datasets | 7,637 | Introduce subset_name as an alias of config_name | ### Feature request
Add support for `subset_name` as an alias for `config_name` in the datasets library and related tools (such as loading scripts, documentation, and metadata).
### Motivation
The Hugging Face Hub dataset viewer displays a column named **"Subset"**, which refers to what is currently technically call... | https://github.com/huggingface/datasets/issues/7637 | open | [
"enhancement"
] | 2025-06-24T12:49:01Z | 2025-07-01T16:08:33Z | 4 | albertvillanova |
huggingface/candle | 3,003 | Build for multiple arch? | CUDA_COMPUTE_CAP="90,100,121" ?? | https://github.com/huggingface/candle/issues/3003 | open | [] | 2025-06-23T13:17:45Z | 2025-06-23T13:17:45Z | 0 | johnnynunez |
huggingface/transformers | 38,984 | QA pipeline prediction generates wrong response when `top_k` param > 1 | ### System Info
- `transformers` version: 4.53.0.dev0
- Platform: Linux-5.4.0-1128-aws-fips-x86_64-with-glibc2.31
- Python version: 3.11.11
- Huggingface_hub version: 0.33.0
- Safetensors version: 0.5.3
- Accelerate version: 1.8.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (ac... | https://github.com/huggingface/transformers/issues/38984 | closed | [
"bug"
] | 2025-06-23T13:09:23Z | 2025-07-17T08:24:31Z | 4 | WeichenXu123 |
huggingface/lighteval | 822 | Documenting how to launch multilingual tasks | Atm, need to use custom tasks to launch them, must be documented | https://github.com/huggingface/lighteval/issues/822 | open | [] | 2025-06-23T11:10:13Z | 2025-09-03T15:28:42Z | null | clefourrier |
huggingface/candle | 3,002 | Is there a roadmap or intention to support CUDA Graph? | vLLM v1 uses CUDA Graph to capture the execution workflow of the entire model, resulting in significant performance improvements compared to the previous version. I'm wondering if there are any plans to support CUDA Graph in Candle. Would it be possible to add `start_capture`, `end_capture`, and `replay` to the `Module... | https://github.com/huggingface/candle/issues/3002 | open | [] | 2025-06-23T10:11:12Z | 2025-09-06T14:04:53Z | 4 | guoqingbao |
huggingface/transformers | 38,977 | LMHead is processing redundant tokens in prefill | While using `GPT2LMHeadModel.generate()` and comparing its performance with vLLM, I noticed a significant inefficiency in the `forward()` implementation of many huggingface models. For example, in `GPT2LMHeadModel.forward`, `self.lm_head` is applied to all token hidden states, even when called from the `generate()` m... | https://github.com/huggingface/transformers/issues/38977 | closed | [] | 2025-06-23T08:32:22Z | 2025-06-25T08:29:02Z | 3 | null-pointer-access |
huggingface/lerobot | 1,369 | The performance of SmolVLA on LIBERO cannot be replicated | I trained SmolVLA from scratch on the LIBERO dataset (the LIBERO dataset under Lerobot), but during testing I couldn't reproduce the results from the paper. Could there be a problem with my reproduction code or process? Could you provide a reproduction tutorial? | https://github.com/huggingface/lerobot/issues/1369 | closed | [
"question",
"policies"
] | 2025-06-23T07:38:52Z | 2025-10-07T19:58:50Z | null | hahans |
huggingface/transformers | 38,970 | Global and Local Anomaly co-Synthesis Strategy (GLASS) | ### Model description
Hi 🤗 Transformers team,
I would like to contribute a new model to the library:
GLASS β A Unified Anomaly Synthesis Strategy with Gradient Ascent for Industrial Anomaly Detection and Localization
📄 Paper: https://arxiv.org/abs/2407.09359
💻 Code: https://github.com/cqylunlun/GLASS
GLASS is a... | https://github.com/huggingface/transformers/issues/38970 | closed | [
"New model"
] | 2025-06-22T12:28:19Z | 2025-06-23T20:55:16Z | 2 | sbrzz |
huggingface/smolagents | 1,467 | How can I most elegantly add prompt words so that the agents' final answer is in Chinese, or so that all the reasoning text displayed on Gradio is in a specific language? | How can I most elegantly add prompt words so that the agents' final answer is in Chinese, or so that all the reasoning text displayed on Gradio is in a specific language? | https://github.com/huggingface/smolagents/issues/1467 | closed | [
"enhancement"
] | 2025-06-22T07:34:13Z | 2025-06-22T10:49:30Z | null | ShelterWFF |
huggingface/transformers | 38,965 | ModernBERT implementation with TensorFlow | Hi all!
I've noticed that ModernBERT [does not have an implementation in tensorflow](https://github.com/huggingface/transformers/issues/37128#issuecomment-2766235185) and I was looking into it.
I'm checking this https://huggingface.co/docs/transformers/main/add_tensorflow_model and I noticed that it's talking abo... | https://github.com/huggingface/transformers/issues/38965 | closed | [
"Feature request"
] | 2025-06-21T18:52:50Z | 2025-06-23T15:17:50Z | 2 | lfoppiano |
huggingface/lerobot | 1,361 | Nvidia Gr00t | Hi,
Are there any plans to integrate Nvidia Gr00t policy? | https://github.com/huggingface/lerobot/issues/1361 | open | [
"enhancement",
"question",
"policies"
] | 2025-06-21T10:42:07Z | 2025-08-20T13:34:30Z | null | AbdElRahmanFarhan |
huggingface/lerobot | 1,360 | Homing offset not taken into account during calibration | ### System Info
```Shell
As of lerobot commit `c940676bdda5ab92e3f9446a72fafca5c550b505`. Other system information is irrelevant for this issue.
```
### Information
- [x] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
In `lerobot/common/moto... | https://github.com/huggingface/lerobot/issues/1360 | open | [
"question",
"robots"
] | 2025-06-21T01:28:04Z | 2025-08-12T09:57:27Z | null | godardt |
huggingface/lerobot | 1,359 | Not clear how to setup a basic interactive simulator demo | Before buying the real robot most people would want to run a visual, interactive demo in the simulator.
A demo should provide:
- A trained model on the Franka robot
- an intuitive way to interact with the cube using the mouse (e.g. drag, move, or βkickβ it around) so we can see the robot chasing the cube.
Many th... | https://github.com/huggingface/lerobot/issues/1359 | closed | [
"question",
"simulation"
] | 2025-06-20T14:12:17Z | 2025-10-09T21:49:19Z | null | aguaviva |
huggingface/optimum | 2,300 | Support for EuroBERT models | ### Feature request
I would like to export and optimize the [EuroBERT models](https://huggingface.co/collections/EuroBERT/eurobert-67ceb6c01804878b1f7999c6).
Currently, it doesn't seem to be possible. When I run :
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
onnx_model = ORTModelForSe... | https://github.com/huggingface/optimum/issues/2300 | closed | [
"Stale"
] | 2025-06-20T12:35:46Z | 2025-08-21T02:11:39Z | 2 | antonioloison |
huggingface/peft | 2,601 | How to Load Adapters with Per-Layer Variable Shapes in `PeftModel.from_pretrained` | ### Feature request
Hi PEFT team,
Thank you for the great work on the PEFT library!
I'm working on an extension to LoKrConfig that supports layer-wise adapters with different internal shapes. Specifically:
- Each **adapter assigned to a layer** (e.g., adapter for layer A vs. layer B) may have a different shape.
- T... | https://github.com/huggingface/peft/issues/2601 | closed | [] | 2025-06-20T11:11:19Z | 2025-06-21T05:42:58Z | null | yuxuan-z19 |
huggingface/diffusers | 11,762 | Could you help fix the backdoor vulnerability caused by two risky pre-trained models used in this repo? | ### Describe the bug
Hi, @patrickvonplaten, @sayakpaul, I'd like to report that two potentially risky pretrained models are being used in this project, which may pose **backdoor threats**. Please check the following code example:
### Reproduction
• **tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_upsc... | https://github.com/huggingface/diffusers/issues/11762 | open | [
"bug"
] | 2025-06-20T09:31:50Z | 2025-06-23T05:25:22Z | 2 | Rockstar292 |
huggingface/transformers | 38,927 | Can't load my LoRA checkpoint after gemma3 refactor | ### System Info
- `transformers` version: 4.52.4
- Platform: Linux-6.8.0-1029-aws-x86_64-with-glibc2.35
- Python version: 3.10.15
- Huggingface_hub version: 0.32.2
- Safetensors version: 0.4.3
- Accelerate version: 1.6.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0... | https://github.com/huggingface/transformers/issues/38927 | closed | [
"bug"
] | 2025-06-20T06:59:34Z | 2025-10-07T18:53:15Z | 12 | jood-canva |
huggingface/mcp-course | 119 | How to preview the project locally? | I'm trying to preview the project locally to see my changes and contribute to the project. But when executing the script the following error is triggered.
Error:

Preview:
Here I'm using ±30000 PIL images from MNIST data; however, it is taking around 12 min to execute, which is a lot!
From what I understand, it is loading the images into cache then buil... | https://github.com/huggingface/datasets/issues/7627 | closed | [] | 2025-06-19T14:28:41Z | 2025-06-23T12:39:10Z | 1 | Thunderhead-exe |
huggingface/lerobot | 1,351 | Need help about dataset and train. | # What this for
Attracted by smolvla, and new to smolvla_base, and i am now trying to ask few questions before a try with this model.
Several parts:
1) dataset
2) simulation
3) real world
## dataset
### Two cameras ?
I have read three datasets, including
https://huggingface.co/datasets/lerobot/svla_so101_pickplac... | https://github.com/huggingface/lerobot/issues/1351 | closed | [
"question",
"policies",
"dataset"
] | 2025-06-19T04:03:43Z | 2025-10-17T11:47:56Z | null | hbj52152 |
huggingface/candle | 2,997 | Implement Conv3D support for compatibility with Qwen-VL and similar models | Several vision-language models such as Qwen-VL and its variants make use of 3D convolution layers (Conv3D) in their architecture, especially for handling video or temporal spatial data. Currently, Candle does not support Conv3D operations, which makes it impossible to run or port such models natively.
In order to supp... | https://github.com/huggingface/candle/issues/2997 | open | [] | 2025-06-19T02:57:20Z | 2025-10-10T16:51:20Z | 1 | maximizemaxwell |
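For context on the requested op, a naive valid-mode 3D convolution can be sketched in a few lines of pure Python (an illustrative reference only — it ignores channels, stride, and padding, and is unrelated to candle's actual Rust API):

```python
def conv3d_naive(x, k):
    """Valid 3D cross-correlation of a (D, H, W) volume with a (kd, kh, kw) kernel."""
    D, H, W = len(x), len(x[0]), len(x[0][0])
    kd, kh, kw = len(k), len(k[0]), len(k[0][0])
    out = []
    for d in range(D - kd + 1):
        plane = []
        for h in range(H - kh + 1):
            row = []
            for w in range(W - kw + 1):
                acc = 0.0  # dot product of the kernel with the current window
                for dd in range(kd):
                    for hh in range(kh):
                        for ww in range(kw):
                            acc += x[d + dd][h + hh][w + ww] * k[dd][hh][ww]
                row.append(acc)
            plane.append(row)
        out.append(plane)
    return out

volume = [[[1.0] * 2 for _ in range(2)] for _ in range(2)]  # 2x2x2 of ones
kernel = [[[1.0] * 2 for _ in range(2)] for _ in range(2)]  # 2x2x2 of ones
print(conv3d_naive(volume, kernel))  # -> [[[8.0]]]
```

Models like Qwen-VL use this kind of kernel over the temporal axis of video patches, which is why plain Conv2D is not enough.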
huggingface/accelerate | 3,633 | how to save a model with FSDP2 ? | Hello everyone, Iβm confused about how to save model weights using FSDP2. I keep running into OOM (out-of-memory) issues when trying to save a trained 8B model with FSDP2. Interestingly, memory is sufficient during training, but saving the model requires too much memory.
I would like each rank to save only its own wei... | https://github.com/huggingface/accelerate/issues/3633 | closed | [] | 2025-06-18T11:41:05Z | 2025-06-18T15:36:37Z | null | colinzhaoxp |
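A toy sketch of the rank-local saving idea this question asks about — each rank persists only its own shard, and a separate offline step merges them. This is conceptual only: real FSDP2 sharded checkpoints go through `torch.distributed.checkpoint`, and the parameter names and shapes below are made up.

```python
# Pretend state dict with one flat parameter, evenly shardable across ranks.
full_state = {"layer.weight": [0.1, 0.2, 0.3, 0.4]}

def shard_for_rank(state, rank, world_size):
    """Slice out the contiguous shard a given rank would own and save."""
    out = {}
    for name, flat in state.items():
        n = len(flat) // world_size
        out[name] = flat[rank * n:(rank + 1) * n]
    return out

# Each "rank" saves only its shard — no full gather, so no save-time OOM.
saved = [shard_for_rank(full_state, r, 2) for r in range(2)]

def merge(shards):
    """Offline merge of per-rank shards back into a full state dict."""
    merged = {}
    for name in shards[0]:
        merged[name] = [v for s in shards for v in s[name]]
    return merged

assert merge(saved) == full_state
```

The point of the sketch: gathering a full state dict on one rank is what blows up memory, while shard-per-rank saving keeps each process's footprint at 1/world_size of the model.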
huggingface/datasets | 7,624 | #Dataset Make "image" column appear first in dataset preview UI | Hi!
#Dataset
I'm currently uploading a dataset that includes an `"image"` column (PNG files), along with some metadata columns. The dataset is loaded from a .jsonl file. My goal is to have the "image" column appear as the first column in the dataset card preview UI on the :hugs: Hub.
However, at the moment, the `"im... | https://github.com/huggingface/datasets/issues/7624 | closed | [] | 2025-06-18T09:25:19Z | 2025-06-20T07:46:43Z | 2 | jcerveto |
huggingface/agents-course | 550 | [QUESTION] Diagram of the multi-agent architecture | [Unit 2.1 Multi-Agent Systems](https://huggingface.co/learn/agents-course/unit2/smolagents/multi_agent_systems#multi-agent-systems) contains [an image](https://mermaid.ink/img/pako:eNp1kc1qhTAQRl9FUiQb8wIpdNO76eKubrmFks1oRg3VSYgjpYjv3lFL_2hnMWQOJwn5sqgmelRWleUSKLAtFs09jqhtoWuYUFfFAa6QA9QDTnpzamheuhxn8pt40-6l13UtS0ddhtQ... | https://github.com/huggingface/agents-course/issues/550 | open | [
"question"
] | 2025-06-18T08:58:58Z | 2025-06-18T08:58:58Z | null | st143575 |
huggingface/lerobot | 1,337 | How to work with a UR robot, collect the data, and fine-tune the model? | https://github.com/huggingface/lerobot/issues/1337 | closed | [
"question",
"policies",
"dataset"
] | 2025-06-17T09:51:16Z | 2025-10-17T11:49:17Z | null | mmlingyu | |
huggingface/diffusers | 11,730 | Add `--lora_alpha` and metadata handling in training scripts follow up | With #11707, #11723 we pushed some small changes to the way we save and parse metadata for trained LoRAs, which also allow us to add a `--lora_alpha` arg to the Dreambooth LoRA training scripts, making LoRA alpha also configurable.
This issue is to ask for help from the community to bring these changes to the other t... | https://github.com/huggingface/diffusers/issues/11730 | closed | [
"good first issue",
"contributions-welcome"
] | 2025-06-17T09:29:24Z | 2025-06-24T10:58:54Z | 8 | linoytsaban |
huggingface/trl | 3,605 | How to convert my multiturn dialogue dataset? | I have created a multiturn dialogue dataset. During the training process, the assistant's reply needs to be based on the user's reply and historical records in the previous round. First, the user's reply is labeled, and then the corresponding reply sentence is generated. In other words, the assistant's reply needs to r... | https://github.com/huggingface/trl/issues/3605 | closed | [
"π Reward"
] | 2025-06-17T09:07:47Z | 2025-09-22T17:46:35Z | null | Miaoqinghong |
huggingface/lerobot | 1,333 | SO-100 Follower: Severe wrist_roll motor instability causing unwanted rotation during teleoperation | ## Problem Description
The SO-100 Follower robot arm experiences severe instability in the `wrist_roll` motor during teleoperation, causing unwanted and uncontrollable rotation that significantly impacts usability. The motor exhibits extreme sensitivity and appears to be completely out of control in the default config... | https://github.com/huggingface/lerobot/issues/1333 | open | [
"question",
"policies"
] | 2025-06-17T07:10:23Z | 2025-12-05T12:17:16Z | null | TKDRYU104 |
huggingface/safetensors | 624 | Interest in Parallel Model Training and Xformers Saving Support (Bug?) (SOLVED) | ### Feature request
I would like to request official support for xformers (link: https://github.com/facebookresearch/xformers) and parallel model training: https://huggingface.co/docs/transformers/v4.13.0/en/parallelism for the safetensor saving file format if this does not currently exist. This safetensors saving err... | https://github.com/huggingface/safetensors/issues/624 | closed | [] | 2025-06-17T03:20:15Z | 2025-06-18T22:01:11Z | 1 | viasky657 |
huggingface/lerobot | 1,330 | Could you update the repository to enable the evaluation of SmolVLA's performance? | Could you update the repository to enable the evaluation of SmolVLA's performance? | https://github.com/huggingface/lerobot/issues/1330 | closed | [
"question",
"policies"
] | 2025-06-17T02:38:22Z | 2025-10-17T11:50:22Z | null | Pandapan01 |
huggingface/transformers | 38,851 | Should `compute_metrics` only run on the main process when doing DDP? | Hi, I want to know when doing training and evaluation on a multi-GPU setup (DDP using trainer and accelerate), does `compute_metrics` only need to be run on the main process?
The reason being that `trainer` itself already does `gather_for_metrics` ([here](https://github.com/huggingface/transformers/blob/v4.51-release... | https://github.com/huggingface/transformers/issues/38851 | closed | [] | 2025-06-17T00:09:43Z | 2025-07-25T08:02:33Z | 2 | TIE666 |
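A toy model of the flow behind this question — per-rank prediction shards are gathered so every process holds the full set, and the metric is then computed (and typically logged) only on the main process. This is a conceptual sketch; the function names below are stand-ins, not the accelerate API.

```python
def gather(shards):
    # stand-in for an all-gather across DDP ranks
    return [item for shard in shards for item in shard]

def accuracy(preds, labels):
    return sum(p == l for p, l in zip(preds, labels)) / len(preds)

# predictions/labels sharded across 2 "ranks"
pred_shards = [[1, 0], [1, 1]]
label_shards = [[1, 1], [1, 0]]

preds = gather(pred_shards)    # after gathering, every rank sees the full set
labels = gather(label_shards)

is_main_process = True  # compute_metrics only needs to run where results are logged
if is_main_process:
    print(accuracy(preds, labels))  # -> 0.5
```

Since the gather already runs collectively on every rank, computing the metric again on non-main ranks is redundant work whose result is discarded.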
huggingface/lerobot | 1,324 | Where is the control_robot.py script? | It is mentioned in the README, in the Walkthrough section, that there is a script called control_robot.py. However, I cannot see it in the main branch | https://github.com/huggingface/lerobot/issues/1324 | closed | [] | 2025-06-16T15:57:34Z | 2025-06-18T11:06:11Z | null | AbdElRahmanFarhan |
huggingface/agents-course | 547 | [QUESTION] Possible mistake in transformers size in terms of parameters | Hey,
Thanks for the great course!
I have a question on what looks to me like an inconsistency.
In the [unit1/what-are-llms](https://huggingface.co/learn/agents-course/unit1/what-are-llms) section, when explaining the 3 types of transformers, in the Typical Size, we can see:
Decoders:
Typical Size: Billions (in the U... | https://github.com/huggingface/agents-course/issues/547 | open | [
"question"
] | 2025-06-16T14:43:29Z | 2025-06-16T14:43:29Z | null | jonoillar |
huggingface/transformers.js | 1,341 | FireFox compatible models | ### Question
I am fairly new to everything here and kind of just vibe code while I learn JS, but I use Zen browser and enjoy making it more like Arc over my summer. I was wondering if it was possible to expose the native Firefox AI and be able to prompt it, which I was able to do [here](https://github.com/Anoms12/Fire... | https://github.com/huggingface/transformers.js/issues/1341 | open | [
"question"
] | 2025-06-16T12:43:39Z | 2025-06-16T12:47:44Z | null | 12th-devs |
huggingface/lerobot | 1,319 | How to debug or inspect the health of Feetech servos in so101 setup? | Hi, I'm working with the `so101` robot and running into issues with the Feetech servos.
I would like to ask:
1. Are there any recommended tools or procedures for debugging Feetech servos?
2. How can I check the health of a servo (e.g. temperature, load, internal error)?
Any help or pointers would be greatly apprecia... | https://github.com/huggingface/lerobot/issues/1319 | open | [
"question",
"robots"
] | 2025-06-16T08:58:32Z | 2025-08-12T10:01:41Z | null | DIMARIA123 |
huggingface/lerobot | 1,318 | How to use my own dataset to train pi0 or smolVLA | I have a dataset that I collected and converted to Lerobot format. This dataset has not been uploaded to huggingface. I want to use this dataset to train `pi0` or `smolvla`. How should I set it up?
I have tried to use only `dataset.root`, but it prompts that `dataset.repo_id` needs to be entered. What should I do? | https://github.com/huggingface/lerobot/issues/1318 | closed | [
"question",
"policies"
] | 2025-06-16T08:40:50Z | 2025-10-17T11:51:54Z | null | xliu0105 |
huggingface/lerobot | 1,316 | [Question] SmolVLA LIBERO / MetaWorld evaluation | Hello, thank you for open sourcing this wonderful repository. I was impressed by the SmolVLA paper and tried to run some evaluations.

In Section 4.5 of the paper, under Simulation Evaluation, it seems that you have fine-tu... | https://github.com/huggingface/lerobot/issues/1316 | closed | [
"question",
"policies",
"simulation"
] | 2025-06-16T06:28:50Z | 2025-12-10T22:11:17Z | null | tykim0507 |
huggingface/agents-course | 546 | [QUESTION] Can I solve this final assignment with free versions? | First, the **best way to get a response fast is to ask the community** in our Discord server: https://www.hf.co/join/discord
However, if you prefer, you can ask here, please **be specific**.
I'd like to solve the final assignment, but I failed with free tools. I tried to take inspiration from leaderboard toppers; they us... | https://github.com/huggingface/agents-course/issues/546 | open | [
"question"
] | 2025-06-16T06:13:37Z | 2025-06-16T06:13:37Z | null | mehdinathani |
huggingface/datasets | 7,617 | Unwanted column padding in nested lists of dicts | ```python
from datasets import Dataset
dataset = Dataset.from_dict({
"messages": [
[
{"a": "...",},
{"b": "...",},
],
]
})
print(dataset[0])
```
What I get:
```
{'messages': [{'a': '...', 'b': None}, {'a': None, 'b': '...'}]}
```
What I want:
```
{'messages': [{'a': '... | https://github.com/huggingface/datasets/issues/7617 | closed | [] | 2025-06-15T22:06:17Z | 2025-06-16T13:43:31Z | 1 | qgallouedec |
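The padding reported here comes from the nested list of dicts being stored as a single Arrow struct type, so each dict is widened to the union of all keys. A pure-Python sketch of that widening effect (an illustration of the behavior, not the library's implementation):

```python
def widen_to_union(dicts):
    """Mimic struct-type unification: every dict gets every key, missing -> None."""
    keys = []
    for d in dicts:  # preserve first-seen key order
        for k in d:
            if k not in keys:
                keys.append(k)
    return [{k: d.get(k) for k in keys} for d in dicts]

messages = [{"a": "..."}, {"b": "..."}]
print(widen_to_union(messages))
# -> [{'a': '...', 'b': None}, {'a': None, 'b': '...'}]
```

The sketch reproduces the "What I get" output above: because a struct has a fixed set of fields, per-row differences in keys can only be represented by null-filling.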
huggingface/transformers.js | 1,340 | Audio-to-Audio task | ### Question
Hi there.
I would like to know how to run **Audio-to-Audio models** with _transformers.js_.
I haven't had any success finding material about this. If there is no way, is there a schedule to add this?
Thanks! | https://github.com/huggingface/transformers.js/issues/1340 | open | [
"question"
] | 2025-06-15T17:58:54Z | 2025-10-13T04:45:39Z | null | LuSrodri |
huggingface/open-r1 | 677 | Error from E2B executor: cannot access local variable 'sandbox' where it is not associated with a value | Hi there,
I encountered a bug while following the sandbox setup instructions exactly as provided. Here's what I'm seeing:

Has anyone experienced this before? Any advice on how to resolve it would be greatly appreciated!
Thank ... | https://github.com/huggingface/open-r1/issues/677 | closed | [] | 2025-06-14T19:08:22Z | 2025-07-22T06:55:38Z | null | juyongjiang |
huggingface/agents-course | 536 | [QUESTION] Llama-3.3-70B-Instruct model request denied | My request for access to the Llama-3.3-70B-Instruct model was denied. However, it was accepted for the Llama 4 models. Is it possible that Meta is limiting access after the release of Llama 4 in April?
Could the course be updated to reflect this change? | https://github.com/huggingface/agents-course/issues/536 | open | [
"question"
] | 2025-06-12T00:29:48Z | 2025-06-12T00:29:48Z | null | BookDisorder |