repo: string (147 distinct values)
number: int64 (1 to 172k)
title: string (2 to 476 chars)
body: string (0 to 5k chars)
url: string (39 to 70 chars)
state: string (2 distinct values)
labels: list (0 to 9 items)
created_at: timestamp[ns, tz=UTC] (2017-01-18 18:50:08 to 2026-01-06 07:33:18)
updated_at: timestamp[ns, tz=UTC] (2017-01-18 19:20:07 to 2026-01-06 08:03:39)
comments: int64 (0 to 58, nullable)
user: string (2 to 28 chars)
huggingface/lerobot
1,497
ValueError: 'policy.repo_id' argument missing. Please specify it to push the model to the hub.
### System Info ```Shell lerobot commit version: https://github.com/huggingface/lerobot/tree/69901b9b6a2300914ca3de0ea14b6fa6e0203bd4 ``` ### Information - [ ] One of the scripts in the examples/ folder of LeRobot - [ ] My own task or dataset (give details below) ### Reproduction (lerobot) robot@robot-Legion-Y9000...
https://github.com/huggingface/lerobot/issues/1497
open
[ "question", "policies", "configuration" ]
2025-07-13T04:33:14Z
2025-08-12T09:32:36Z
null
dbdxnuliba
huggingface/trl
3,730
How to design stable reward functions for open-ended text generation tasks in GRPO?
I'm using GRPO for a text generation task where there's no single correct answer. I currently compute the reward using cosine similarity between the model output and a reference response. However, during training (around 400 steps), the reward values are quite unstable and fluctuate significantly. I'm wondering: Is c...
https://github.com/huggingface/trl/issues/3730
open
[ "❓ question", "πŸ‹ Reward", "πŸ‹ GRPO" ]
2025-07-12T18:39:37Z
2025-07-12T18:40:05Z
null
Jax922
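The instability described above often comes from using raw cosine similarity (range [-1, 1]) directly as the reward. As a hedged sketch, not taken from the issue and independent of any particular embedding model, one common stabilizer is to squash the similarity into [0, 1] and subtract an exponential-moving-average baseline so advantages stay centered:

```python
import math

def cosine_similarity(a, b):
    """Plain cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class BaselinedReward:
    """Map cosine similarity from [-1, 1] into [0, 1], then subtract an
    EMA baseline to keep the reward signal centered during training."""

    def __init__(self, decay=0.99):
        self.decay = decay
        self.baseline = 0.0
        self.initialized = False

    def __call__(self, output_vec, reference_vec):
        reward = 0.5 * (cosine_similarity(output_vec, reference_vec) + 1.0)
        if not self.initialized:
            self.baseline, self.initialized = reward, True
        else:
            self.baseline = self.decay * self.baseline + (1 - self.decay) * reward
        return reward - self.baseline
```

With `decay` close to 1 the baseline moves slowly, so short-term fluctuations in similarity translate into small, centered advantages rather than large reward swings.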
huggingface/diffusers
11,915
Create modular pipeline from existing pipeline
The new concept of modular pipelines added via #9672 is a very flexible way of creating custom pipelines, and one of the best early use-cases is the new concept of modular guiders added via #11311. However, this would require a complete rewrite of existing user apps/codebases to use the new concepts and would likely signifi...
https://github.com/huggingface/diffusers/issues/11915
closed
[]
2025-07-12T16:08:30Z
2025-08-28T08:18:08Z
6
vladmandic
huggingface/diffusers
11,914
Loading multiple LoRAs to 1 pipeline in parallel, 1 LoRA to 2-pipelines on 2-GPUs
Hi everyone, I have the following scenario. I have a machine with 2 GPUs and a running service that keeps two pipelines loaded on their corresponding devices. I also have a list of LoRAs (say 10). On each request I split the batch into 2 parts (the request also carries the corresponding information about the LoRA), load LoRA...
https://github.com/huggingface/diffusers/issues/11914
closed
[]
2025-07-12T15:54:44Z
2025-07-15T19:40:11Z
5
vahe-toffee
huggingface/lerobot
1,494
release the code for reproducing the performance on the LIBERO dataset reported in the SmolVLA paper?
Has anyone been able to reproduce the performance on the LIBERO dataset reported in the SmolVLA paper? I’d appreciate any guidelines or tips to help with reproducing the results.
https://github.com/huggingface/lerobot/issues/1494
closed
[ "question", "policies", "simulation" ]
2025-07-12T09:35:00Z
2025-09-23T09:44:59Z
null
JustinKai0527
huggingface/datasets
7,680
Question about iterable dataset and streaming
In the doc, I found the following example: https://github.com/huggingface/datasets/blob/611f5a592359ebac6f858f515c776aa7d99838b2/docs/source/stream.mdx?plain=1#L65-L78 I am confused, 1. If we have already loaded the dataset, why doing `to_iterable_dataset`? Does it go through the dataset faster than map-style datase...
https://github.com/huggingface/datasets/issues/7680
open
[]
2025-07-12T04:48:30Z
2025-08-01T13:01:48Z
8
Tavish9
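The conceptual difference the question is about can be sketched without the `datasets` API at all: a map-style dataset supports random access via `__getitem__`, while an iterable one only streams rows in order, which is the trade `to_iterable_dataset` makes. The classes below are plain-Python stand-ins, not the library's types:

```python
class MapStyle:
    """Map-style: random access by index, like a loaded Dataset."""
    def __init__(self, rows):
        self.rows = rows
    def __len__(self):
        return len(self.rows)
    def __getitem__(self, i):
        return self.rows[i]

def as_iterable(map_ds):
    """Iterable view: rows stream sequentially, no random access.
    Sequential streaming is why iterable access can be faster for a
    full pass: no per-index lookup or shuffling bookkeeping."""
    for i in range(len(map_ds)):
        yield map_ds[i]
```

Streaming with `load_dataset(..., streaming=True)` avoids downloading the full dataset, whereas calling `to_iterable_dataset()` on an already-loaded dataset only changes the access pattern, not what is on disk.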
huggingface/transformers
39,377
FlashAttention2 support for GSAI-ML / LLaDA-8B-Instruct?
Hi there, I attempted to use flash attention 2 with this model but it seems like it isn't supported, based on this error: ``` ValueError: LLaDAModelLM does not support Flash Attention 2.0 yet. Please request to add support where the model is hosted, on its model hub page: https://huggingface.co/GSAI-ML/LLaDA-8B-Instru...
https://github.com/huggingface/transformers/issues/39377
closed
[]
2025-07-12T02:48:36Z
2025-08-19T08:03:26Z
2
lbertge
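When a model class does not support a given attention backend, the usual workaround is to fall back to another `attn_implementation` value. The helper below is a hypothetical sketch of that selection logic; the backend names match the strings transformers accepts, but `pick_attn_implementation` itself is not a transformers API:

```python
def pick_attn_implementation(supported, preferred=("flash_attention_2", "sdpa", "eager")):
    """Return the first preferred attention backend that the model
    supports. `supported` is whatever collection of backend names the
    model reports; `eager` is the universal fallback."""
    for impl in preferred:
        if impl in supported:
            return impl
    raise ValueError(f"none of {preferred} is supported")
```

For a model without Flash Attention 2 support, passing `attn_implementation="sdpa"` (or omitting the argument) to `from_pretrained` typically avoids the `ValueError` quoted above.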
huggingface/lerobot
1,492
Is there any plan to add a validation loss in the training pipeline, which is not dependent on simulation env.
Can we have a dataset split in the training code to run the model on a holdout validation episode to check loss on it?
https://github.com/huggingface/lerobot/issues/1492
open
[ "enhancement", "question", "policies" ]
2025-07-11T20:43:04Z
2025-12-30T07:12:20Z
null
mohitydv09
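A simulation-free validation loss only needs a deterministic episode split. A minimal sketch, assuming episodes are identified by integer ids; the function name is illustrative, not a LeRobot API:

```python
import random

def split_episodes(episode_ids, val_fraction=0.1, seed=0):
    """Deterministically hold out a fraction of episodes for validation,
    so a validation loss can be tracked without any simulation env.
    Returns (train_ids, val_ids), both sorted."""
    ids = sorted(episode_ids)
    rng = random.Random(seed)       # fixed seed => same split every run
    rng.shuffle(ids)
    n_val = max(1, int(len(ids) * val_fraction))
    return sorted(ids[n_val:]), sorted(ids[:n_val])
```

Computing the usual training loss on batches drawn only from the held-out episode ids then gives a validation curve that is comparable across checkpoints.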
huggingface/peft
2,642
Prompt_Tuning.ipynb example doesn't seem to train the model
Hello! I am running Prompt-Tuning notebook example from PEFT lib examples [here](https://github.com/huggingface/peft/blob/main/examples/sequence_classification/Prompt_Tuning.ipynb). I did **not** change any line of code and I ran the code block sequentially. However, the performance under metrics remain exactly the *...
https://github.com/huggingface/peft/issues/2642
closed
[]
2025-07-11T18:26:58Z
2025-08-23T15:03:47Z
8
ruixing76
huggingface/transformers
39,366
RuntimeError when loading llmcompressor W8A8 quantized model: int8 dtype in weight initialization
I'm trying to load the quantized model `RedHatAI/Qwen2.5-VL-7B-Instruct-quantized.w8a8` but encountering a dtype compatibility issue during model initialization. The model appears to be quantized using `llmcompressor` with W8A8 quantization scheme. **Note**: I need to load this model without vLLM because I may need to...
https://github.com/huggingface/transformers/issues/39366
closed
[ "Good First Issue" ]
2025-07-11T15:15:09Z
2025-12-08T13:30:10Z
10
AdelineXinyi
pytorch/vision
9,146
https://github.com/pytorch/vision/blob/b818d320a14a2e6d9d9f28853e9e7beae703e52e/torchvision/io/video.py#L274
### πŸ› Describe the bug https://github.com/pytorch/vision/blob/b818d320a14a2e6d9d9f28853e9e7beae703e52e/torchvision/io/video.py#L274 this function warning infinite. and we don't know how to find the equalent code in torchcodec as well/...... ### Versions dsf
https://github.com/pytorch/vision/issues/9146
open
[]
2025-07-11T14:46:36Z
2025-08-07T14:22:22Z
2
OpenJarvisAI
huggingface/lerobot
1,483
How can I set `max_relative_target` to get safe action?
I saw this in function `send_action` in `src/lerobot/robots/so100_follower/so100_follower.py` ```python def send_action(self, action: dict[str, Any]) -> dict[str, Any]: """Command arm to move to a target joint configuration. The relative action magnitude may be clipped depending on the configura...
https://github.com/huggingface/lerobot/issues/1483
open
[ "question", "robots" ]
2025-07-11T02:46:02Z
2025-08-12T09:34:51Z
null
milong26
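The clipping the quoted docstring refers to can be sketched in plain Python: clamp each joint's commanded move to at most `max_relative_target` away from its present position. This is an illustration of the idea, not LeRobot's actual implementation:

```python
def clamp_goal(present, goal, max_relative_target):
    """Limit each joint's commanded move to at most max_relative_target
    away from its present position, per control step."""
    safe = {}
    for joint, pos in present.items():
        delta = goal[joint] - pos
        delta = max(-max_relative_target, min(max_relative_target, delta))
        safe[joint] = pos + delta
    return safe
```

A large commanded jump is thus turned into a sequence of bounded steps, which is what makes the action "safe" in practice.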
huggingface/peft
2,640
Why does peft.utils.other.fsdp_auto_wrap_policy not wrap modules that do not require grad?
In https://github.com/huggingface/peft/blob/main/src/peft/utils/other.py#L977, ``` def fsdp_auto_wrap_policy(model): if hasattr(FullyShardedDataParallelPlugin, "get_module_class_from_name"): get_module_class_from_name = FullyShardedDataParallelPlugin.get_module_class_from_name else: from accel...
https://github.com/huggingface/peft/issues/2640
closed
[]
2025-07-10T12:07:13Z
2025-08-18T15:05:03Z
4
Changlin-Lee
huggingface/transformers
39,336
TypeError: GenerationMixin._extract_past_from_model_output() got an unexpected keyword argument 'standardize_cache_format'
I am using the CogVLM2 video captioning model. It works with transformers==4.43.4; with transformers==4.44.0 and later I get the error below. But I need to use the latest version of transformers, since 4-bit quantization currently fails on some GPUs and platforms. How can I fix this issue? `TypeError: GenerationMixin._extr...
https://github.com/huggingface/transformers/issues/39336
closed
[ "bug" ]
2025-07-10T11:49:02Z
2025-08-18T08:03:13Z
4
FurkanGozukara
huggingface/lerobot
1,476
Here is an interactive gym to play with the robot (I still need some help)
### First the good news: This is an interactive gym where you can experiment with pre-trained policies to control the robot in real time. Here is how to use it: - `Double-click` on a body to select it. - `Ctrl + left` drag applies a torque to the selected object, resulting in rotation. - `Ctrl + right` drag applies a ...
https://github.com/huggingface/lerobot/issues/1476
open
[ "question", "simulation" ]
2025-07-09T14:59:22Z
2025-12-16T13:41:00Z
null
raul-machine-learning
huggingface/lerobot
1,475
[Question] What does each number in predicted action(SmolVLA) stand for?
Hi, I'm trying to load the SmolVLA and test on my simulation env. After passing the observations to the model using "policy.select_action(obs)" I got a 6-dimensional action, but I'm quite confused about what exactly they are. And if there are three for position translation and three for rotation, how could I control ...
https://github.com/huggingface/lerobot/issues/1475
open
[ "question", "policies" ]
2025-07-09T13:39:25Z
2025-08-12T10:08:26Z
null
Calvert0921
huggingface/lerobot
1,471
where is 7_get_started_with_real_robot.md?
I didn't find 7_get_started_with_real_robot.md
https://github.com/huggingface/lerobot/issues/1471
closed
[ "documentation", "question" ]
2025-07-09T08:02:32Z
2025-10-08T08:42:21Z
null
von63
huggingface/alignment-handbook
218
Will you release SmolLM 3 recipe?
First off, thank you so much for sharing these training resources. I was wondering if, with the recent release of SmolLM3, you have plans to also share its training recipe. Have a nice day!
https://github.com/huggingface/alignment-handbook/issues/218
closed
[]
2025-07-08T19:47:20Z
2025-07-15T14:16:11Z
1
ouhenio
huggingface/sentence-transformers
3,433
How to use a custom batch sampler?
`SentenceTransformerTrainer.__init__` will check the type of args, so I have to write a class inheriting from `SentenceTransformerTrainingArgs` rather than `TransformerTrainingArgs`. The problem is that `SentenceTransformerTrainingArgs.__post_init__` forces the use of `BatchSampler` to initialize a batch sampler. Is there...
https://github.com/huggingface/sentence-transformers/issues/3433
open
[]
2025-07-08T09:35:24Z
2025-07-08T12:36:33Z
null
Hypothesis-Z
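Independent of how the trainer wires it in, a custom batch sampler is just an object that yields lists of indices. A minimal sketch of that protocol, here grouping items of similar length, which is a common reason to customize:

```python
class LengthGroupedBatchSampler:
    """A minimal custom batch sampler: yields lists of dataset indices,
    grouping items of similar length into the same batch."""

    def __init__(self, lengths, batch_size):
        # Sort indices by item length so each batch has similar lengths.
        self.order = sorted(range(len(lengths)), key=lambda i: lengths[i])
        self.batch_size = batch_size

    def __iter__(self):
        for start in range(0, len(self.order), self.batch_size):
            yield self.order[start:start + self.batch_size]

    def __len__(self):
        return (len(self.order) + self.batch_size - 1) // self.batch_size
```

Anything with this iterate-batches-of-indices shape satisfies the torch `BatchSampler` contract, whatever hook the trainer exposes for plugging it in.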
huggingface/transformers
39,266
Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
### System Info ```bash Traceback (most recent call last): File "/home/cx/miniconda3/envs/demo/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 767, in convert_to_tensors tensor = as_tensor(value) File "/home/cx/miniconda3/envs/demo/lib/python3.10/site-packages/transformers/tokenizat...
https://github.com/huggingface/transformers/issues/39266
closed
[ "bug" ]
2025-07-08T05:19:35Z
2025-07-08T06:50:47Z
0
mumu029
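The error above means sequences of different lengths reached tensor creation. Conceptually, `padding=True` right-pads every sequence to the batch maximum and builds an attention mask, which the sketch below reproduces in plain Python:

```python
def pad_batch(sequences, pad_id=0):
    """What `padding=True` does conceptually: right-pad each list of
    token ids to the batch's maximum length and build an attention mask
    (1 for real tokens, 0 for padding)."""
    max_len = max(len(s) for s in sequences)
    input_ids = [s + [pad_id] * (max_len - len(s)) for s in sequences]
    attention_mask = [[1] * len(s) + [0] * (max_len - len(s)) for s in sequences]
    return {"input_ids": input_ids, "attention_mask": attention_mask}
```

Passing `padding=True, truncation=True, return_tensors="pt"` to the tokenizer performs the equivalent alignment before tensors are built, which is exactly what the error message suggests.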
huggingface/lerobot
1,460
How to support dataloading with historical cue?
As I see it, the `__getitem__` function of LerobotDataset currently returns single-frame data. How can I stack historical frames and make use of batched data with historical information, like univla?
https://github.com/huggingface/lerobot/issues/1460
open
[ "question", "dataset" ]
2025-07-08T01:49:11Z
2025-08-12T09:44:02Z
null
joeyxin-del
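One way to get historical context without changing the underlying dataset is a wrapper whose `__getitem__` returns a window of frames. A plain-Python sketch; the class name is illustrative, not a LeRobot API:

```python
class HistoryWrapper:
    """Wrap a single-frame dataset so __getitem__ returns the current
    frame plus the previous `history` frames, edge-padded at the start."""

    def __init__(self, frames, history=2):
        self.frames = frames
        self.history = history

    def __len__(self):
        return len(self.frames)

    def __getitem__(self, idx):
        lo = max(0, idx - self.history)
        window = self.frames[lo:idx + 1]
        # Edge-pad by repeating the first frame near the beginning.
        pad = [window[0]] * (self.history + 1 - len(window))
        return pad + window
```

A standard dataloader over this wrapper then yields batches of shape (batch, history + 1, ...) ready for models that consume frame stacks.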
huggingface/lerobot
1,458
how to control a real robot arm-101 with my own pretrained model?
I don't see an instruction or script example in this repository. Please help. Thanks,
https://github.com/huggingface/lerobot/issues/1458
open
[ "question", "policies" ]
2025-07-08T01:19:50Z
2025-08-12T09:45:13Z
null
jcl2023
pytorch/torchtitan
1,369
Puzzling collectives in TP ( SP to be exact)
### Bug description On running 1 step of a modified Llama3 debug_model ( n_layer=1) on 2 ranks with TP=2 , noticed 12 alleduce's ( reduce_scatter+allgather) of expected size , 8 * 2048 * 256 / 2 = 2097152 . There should be 8 allreduce's altogether, right ? One each for SelfAttention and FFN/MLP in the forward a...
https://github.com/pytorch/torchtitan/issues/1369
open
[ "question" ]
2025-07-07T22:12:46Z
2025-07-10T01:28:07Z
null
githubsgi
pytorch/tutorials
3,429
[BUG] - Broken link in intro of 'Learn the Basics' tutorial
### Add Link https://docs.pytorch.org/tutorials/beginner/basics/intro.html ### Describe the bug In the 'How to Use This Guide' section, the text reads: ``` If you’re new to deep learning frameworks, head right into the first section of our step-by-step guide: [1. Tensors](https://docs.pytorch.org/tutorials/beginne...
https://github.com/pytorch/tutorials/issues/3429
closed
[ "bug" ]
2025-07-07T19:52:10Z
2025-07-07T22:18:19Z
0
pankajkakkar
huggingface/candle
3,016
Build fails on Maxwell GPU due to __dp4a undefined in quantized.cu
I’m trying to build a Rust project locally that depends on candle-kernels on my laptop with an NVIDIA GeForce 940MX (Maxwell, compute capability 5.0). The build fails with errors like: ``` src/quantized.cu(1997): error: identifier "__dp4a" is undefined ... 18 errors detected in the compilation of "src/quantized.cu". ...
https://github.com/huggingface/candle/issues/3016
open
[]
2025-07-07T14:41:53Z
2025-07-07T14:41:53Z
0
fishonamos
huggingface/text-generation-inference
3,289
How to detect watermark?
Hi, thanks for the great work. I saw that the KGW watermark is implemented in the current code. But it seems to lack code to evaluate and detect whether the generated text contains a watermark. Could anyone suggest whether this code exists? It would be very helpful. Thanks
https://github.com/huggingface/text-generation-inference/issues/3289
open
[]
2025-07-07T11:42:54Z
2025-07-07T11:42:54Z
null
Allencheng97
pytorch/xla
9,447
[RFC] Controller for SPMD+MPMD
# [RFC] Controller for SPMD+MPMD ## Background Current work is being done to design a solution for making `mark_sharding` first trace the model before it is loaded into devices (https://github.com/pytorch/xla/issues/9341). Together with [Local SPMD](https://github.com/pytorch/xla/issues/9181), this should enable us to...
https://github.com/pytorch/xla/issues/9447
open
[ "distributed", "RFC" ]
2025-07-07T05:22:59Z
2025-07-09T02:01:27Z
2
pgmoka
huggingface/lerobot
1,448
How to specify both policy.type and pretrained path at the same time?
Hi, I am adding custom configs to a PreTrainedConfig, and I also want to load it from a pretrained path. However, if I specify the pretrained path (with policy.path), I won't be able to modify the fields inside the new PreTrainedConfig subclass. If I use policy.type="myNewModel" instead, I am able to call the fields (s...
https://github.com/huggingface/lerobot/issues/1448
open
[ "enhancement", "configuration" ]
2025-07-07T03:33:15Z
2025-08-12T09:45:58Z
null
branyang02
huggingface/lerobot
1,447
SmolVLA input/output clarification
I'm now trying to load the SmolVLA to control the Franka arm in simulation. I found that there could be three image inputs (Observation.image, 1 and 2), and I have top, wrist and side views. Is there a fixed order for those camera views? And the predicted action has 6 dimensions; does that mean it doesn't include the g...
https://github.com/huggingface/lerobot/issues/1447
closed
[ "question", "policies" ]
2025-07-06T21:56:43Z
2025-10-09T21:59:17Z
null
Calvert0921
pytorch/ao
2,496
[Feature Req] Can you add *args and **kwargs to improve extensibility ?
**Description:** The current class implementations have not _*args_ and _**kwargs_ and this reduces extensibility. **Example:** > Current ```python class AdamW4bit(_AdamBase): def __init__( self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=1e-2, ...
https://github.com/pytorch/ao/issues/2496
open
[ "triaged" ]
2025-07-06T17:29:19Z
2025-08-01T02:52:20Z
3
Musa-Sina-Ertugrul
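The requested pattern is plain Python: accept `**kwargs` in the base class and forward them from subclasses, so extensions can add parameters without touching the base. A minimal sketch with illustrative class names:

```python
class Base:
    def __init__(self, lr=1e-3, *args, **kwargs):
        # Accepting *args/**kwargs lets subclasses introduce their own
        # parameters without Base having to know about them.
        self.lr = lr
        self.extra_args = args
        self.extra_kwargs = kwargs

class Extended(Base):
    def __init__(self, lr=1e-3, warmup=0, **kwargs):
        super().__init__(lr, **kwargs)   # unknown options flow through
        self.warmup = warmup
```

The cost of this flexibility is weaker validation: typos in option names silently land in `extra_kwargs` instead of raising `TypeError`, which is one reason libraries sometimes keep signatures explicit.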
huggingface/lerobot
1,446
How to evaluate finetuned SmolVLA model
Dear authors and your wonderful work. I have fine-tuned the smolvla model based on a customized lerobot format dataset. My dataset is picking up a banana and placing it on a box. How can I evaluate the performance of the model? I tried eval.py in the scripes directory, but env_type=pusht doesn't work. I think this env_...
https://github.com/huggingface/lerobot/issues/1446
closed
[ "question", "policies" ]
2025-07-06T15:27:22Z
2025-10-17T11:57:49Z
null
BintaoBryant
huggingface/diffusers
11,865
AttributeError: type object 'CosmosTransformer3DModel' has no attribute 'from_single_file'
### Describe the bug I would like to run the Cosmos-Predict2-14B-Text2Image model, but it is too large to fit in 24GB of VRAM normally, so I tried to load a Q8_0 GGUF quantization. I copied some code from the [HiDreamImageTransformer2DModel](https://huggingface.co/docs/diffusers/en/api/models/hidream_image_transformer...
https://github.com/huggingface/diffusers/issues/11865
closed
[ "bug" ]
2025-07-05T12:14:50Z
2025-07-11T07:15:23Z
9
mingyi456
huggingface/diffusers
11,864
AutoencoderDC.encode fails with torch.compile(fullgraph=True) - "name 'torch' is not defined"
### Describe the bug I'm trying to optimize my data preprocessing pipeline for the Sana model by using `torch.compile` on the DC-AE encoder. Following PyTorch's best practices, I attempted to compile only the `encode` method with `fullgraph=True` for better performance, but I'm encountering an error. When I try: ```p...
https://github.com/huggingface/diffusers/issues/11864
closed
[ "bug" ]
2025-07-05T06:15:11Z
2025-07-09T01:32:39Z
6
SingleBicycle
huggingface/datasets
7,669
How can I add my custom data to huggingface datasets
I want to add my custom dataset to Hugging Face Datasets. Please guide me on how to achieve that.
https://github.com/huggingface/datasets/issues/7669
open
[]
2025-07-04T19:19:54Z
2025-07-05T18:19:37Z
null
xiagod
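The usual route is `datasets.Dataset.from_dict` on a dict of equal-length columns, followed by `push_to_hub`. The sketch below checks that column-aligned shape in plain Python; the actual upload calls are left as comments since they need the `datasets` library and a Hub token, and the repo name shown is hypothetical:

```python
def validate_columns(data):
    """Check that a dict-of-lists has equal-length columns -- the shape
    `datasets.Dataset.from_dict` expects. Returns the row count."""
    if not data:
        raise ValueError("no columns")
    lengths = {name: len(col) for name, col in data.items()}
    if len(set(lengths.values())) != 1:
        raise ValueError(f"column lengths differ: {lengths}")
    return next(iter(lengths.values()))

# Hypothetical custom data in the expected shape:
my_data = {"text": ["hello", "world"], "label": [0, 1]}
n_rows = validate_columns(my_data)
# With the `datasets` library installed, the real calls would be:
#   ds = datasets.Dataset.from_dict(my_data)
#   ds.push_to_hub("your-username/your-dataset")
```

Once pushed, the dataset loads anywhere via `load_dataset("your-username/your-dataset")`.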
pytorch/executorch
12,221
How to build executorch with ANDROID_ABI=armeabi-v7a
### πŸš€ The feature, motivation and pitch https://github.com/pytorch/executorch/blob/main/tools/cmake/Utils.cmake#L89 Here there is no "ANDROID_ABI=armeabi-v7a" option, so if I want to build executorch for ANDROID_ABI=armeabi-v7a, how do I do it? Thank you very much. ### Alternatives _No response_ ### Additional context ...
https://github.com/pytorch/executorch/issues/12221
open
[ "module: build/install", "triaged" ]
2025-07-04T02:22:51Z
2025-12-01T07:52:13Z
null
barbecacov
huggingface/lerobot
1,442
Trained pi0 policy ignores visual cues
I am having an issue in which my trained pi0 policy looks smooth but completely ignores the camera input. I have tried covering up a camera and the policy still looks smooth! This seems very wrong. I wonder if it is because my images are not normalized correctly? Has anyone else seen this? Do I need to change the ...
https://github.com/huggingface/lerobot/issues/1442
open
[ "question", "policies" ]
2025-07-03T20:13:08Z
2025-08-12T09:47:09Z
null
kumarhans
huggingface/lerobot
1,439
[QUESTION] run a policy on a real robot
Hi there, in the documentation, scripts to teleoperate, record, replay or evaluate a policy are provided, **but how do you run a policy for inference only on a real robot**? I did not find such a script. Besides, it may be interesting to add such a script to the documentation as well. Thank you very much for your help
https://github.com/huggingface/lerobot/issues/1439
open
[ "question", "policies" ]
2025-07-03T18:09:10Z
2025-08-12T09:47:27Z
null
FaboNo
huggingface/smolagents
1,512
How can we use this benchmark to evaluate local models?
examples/smolagents_benchmark/run.py
https://github.com/huggingface/smolagents/issues/1512
closed
[ "enhancement" ]
2025-07-03T06:17:58Z
2025-07-03T08:07:26Z
null
OoOPenN
pytorch/ao
2,477
Support running multi-device tests in CI
For float8 training, the test_everything.sh script requires multiple GPUs for FSDP/TP tests, so we currently don't run in CI as it's not configured for multi-device jobs. We should figure out how to run these multi-device tests in CI. This would also be useful for some of our new MoE training parallelism tests.
https://github.com/pytorch/ao/issues/2477
closed
[ "ci", "float8" ]
2025-07-02T16:29:47Z
2025-07-16T16:31:06Z
2
danielvegamyhre
huggingface/diffusers
11,849
Can not load fusionx_lora into original wan2.1-14b
Hello, I am adding fusionx_lora into the original wan2.1-14b-i2v. My code is as follows: > pipe = WanImageToVideoPipeline.from_pretrained(my_local_path + "Wan2.1-I2V-14B-480P-Diffusers", vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16) > pipe.load_lora_weights( > my_local_path + "Wan14BT2VFusio...
https://github.com/huggingface/diffusers/issues/11849
open
[]
2025-07-02T13:48:17Z
2025-07-02T13:48:17Z
0
fzuo1230
huggingface/transformers
39,169
Using Gemma3n with text-only generation requires image dependencies
### System Info - `transformers` version: 4.53.0 - Platform: macOS-15.5-arm64-arm-64bit - Python version: 3.12.8 - Huggingface_hub version: 0.33.2 - Safetensors version: 0.5.3 - Accelerate version: not installed - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2.7.1 (...
https://github.com/huggingface/transformers/issues/39169
closed
[ "bug" ]
2025-07-02T07:46:43Z
2025-08-01T08:14:26Z
6
marianheinsen
huggingface/lerobot
1,429
When will release the SmolVLA(2.25B & 0.24b)
Hi dear authors, thanks for all your wonderful work - SmolVLA! I wonder, will you release the **SmolVLA (2.25B)**? I want to compare its performance with your released version (0.45B).
https://github.com/huggingface/lerobot/issues/1429
closed
[ "question", "policies" ]
2025-07-02T03:39:06Z
2025-10-11T07:21:57Z
null
JuilieZ
huggingface/sentence-transformers
3,416
How to calculate prompt tokens for embedding model encode?
I want to calculate the input prompt tokens, which are returned to the user to let them know how many tokens they consumed. How can I do that? Could you give me an example?
https://github.com/huggingface/sentence-transformers/issues/3416
open
[]
2025-07-02T03:27:11Z
2025-07-03T07:02:55Z
null
gaoxt1983
huggingface/sentence-transformers
3,414
How to fine tune multimodal embedding model?
Hi @tomaarsen and Team - hope all is well & thanks for the work. I used to fine tune some pure text based embedding models using this package and now I would like to fine tune multimodal embedding models such as `llamaindex/vdr-2b-multi-v1` and `jinaai/jina-embeddings-v4`. I wonder if you can share some insights / re...
https://github.com/huggingface/sentence-transformers/issues/3414
open
[]
2025-07-01T23:45:04Z
2025-07-03T10:25:29Z
null
groklab
pytorch/pytorch
157,393
How to compose HSDP with CP?
### πŸ› Describe the bug We're trying to compose HSDP with CP following the [torchtitan blog post](https://discuss.pytorch.org/t/distributed-w-torchtitan-breaking-barriers-training-long-context-llms-with-1m-sequence-length-in-pytorch-using-context-parallel/215082) but are running into some issues and it's unclear to us...
https://github.com/pytorch/pytorch/issues/157393
closed
[ "oncall: distributed", "triaged" ]
2025-07-01T20:45:27Z
2025-07-09T00:10:23Z
null
EugenHotaj
huggingface/lerobot
1,424
evaluated trained policy reports 14 pc_success only
Trained act policy using ``` python lerobot/scripts/train.py \ --policy.type=act \ --dataset.repo_id=lerobot/act_aloha_sim_insertion_human \ --env.type=aloha \ --output_dir=outputs/train/act_aloha_insertion ``` Question: I think I mistakenly used the prefix `act_` in the `repo_id` but if I don't use ...
https://github.com/huggingface/lerobot/issues/1424
open
[ "question", "policies" ]
2025-07-01T12:16:38Z
2025-08-12T09:49:05Z
null
raul-machine-learning
huggingface/lerobot
1,421
It would help to have a description for the lerobots datasets:
for example, [lerobot/aloha_sim_insertion_human](https://huggingface.co/datasets/lerobot/aloha_sim_insertion_human) comes with no description at all. It'd help to know - What makes this data special/interesting - How to train different models in the simulator - What we should expect - What the `_human` suffix means, ...
https://github.com/huggingface/lerobot/issues/1421
open
[ "question", "dataset" ]
2025-07-01T10:14:45Z
2025-08-12T09:49:27Z
null
raul-machine-learning
huggingface/lerobot
1,419
simulator should allow pushing objects around with the mouse interactively
Not having this is preventing us from testing, debugging and playing with the robots. According to the Mujoco documentation this feature is available in their simulator, but it is not exposed in lerobot: ``` A related usability feature is the ability to "reach into" the simulation, push objects around and see how the physic...
https://github.com/huggingface/lerobot/issues/1419
open
[ "question", "simulation" ]
2025-07-01T09:47:02Z
2025-08-12T09:50:18Z
null
raul-machine-learning
huggingface/lerobot
1,418
Robot tries to transfer cube even if it failed to pick it up, shouldn't it retry?
I am evaluating the following policy: ``` python lerobot/scripts/eval.py --policy.path=lerobot/act_aloha_sim_transfer_cube_human --env.type=aloha --env.task=AlohaTransferCube-v0 --eval.n_episodes=1 --eval.batch_size=1 ``` However the robot fails to pick up the cube but carries on with the task, shouldn't the robot kee...
https://github.com/huggingface/lerobot/issues/1418
closed
[ "question", "simulation" ]
2025-07-01T09:18:38Z
2025-10-17T11:57:34Z
null
raul-machine-learning
pytorch/pytorch
157,352
[aot_compile]Explanation: Dynamo does not know how to trace the builtin `time.time.`
### πŸ› Describe the bug Graph break error happened when I compile yolov5 with torch._export.aot_compile interface. I also try with torch.compile and graph breaks also happened. but it compile normally. I am not sure whether this is dynamo bug and how can I resolve this issue. ### Error logs # code example: ``` class...
https://github.com/pytorch/pytorch/issues/157352
closed
[ "oncall: pt2", "module: dynamo", "oncall: export" ]
2025-07-01T05:58:10Z
2025-07-04T06:23:42Z
null
duanmu0228
pytorch/examples
1,362
ResNet50 on a single node with 8 GPUs, all parameters default: why is the result different?
Hello, I use the command "python main.py -a resnet50 --dist-url 'tcp://127.0.0.1:60000/' --dist-backend 'nccl' --multiprocessing-distributed --world-size 1 --rank 0 /my_data_dir/" train and test resnet50 on a single node with 8 GPUs. But I got Acc@1 75.694 Acc@5 92.704, this is different from the result presented on h...
https://github.com/pytorch/examples/issues/1362
open
[]
2025-07-01T04:37:58Z
2025-07-01T04:37:58Z
0
sdwhzh
huggingface/transformers
39,137
ImportError: cannot import name 'pipeline' from 'transformers'
### System Info I am using Databricks notebook. Databricks runtime: 13.3 LTS (includes Apache Spark 3.4.1, Scala 2.12) ### Who can help? @Rocketknight1 @SunMarc @zach-huggingface ### Information - [x] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the ...
https://github.com/huggingface/transformers/issues/39137
closed
[ "Usage", "bug" ]
2025-06-30T18:49:54Z
2025-10-23T00:53:19Z
14
atabari-bci
huggingface/lerobot
1,407
Can a user read the current signals from the LeRobot?
Can a user read the current signals from the LeRobot?
https://github.com/huggingface/lerobot/issues/1407
open
[ "question", "sensors" ]
2025-06-30T10:05:26Z
2025-08-12T09:51:06Z
null
Frank-ZY-Dou
huggingface/optimum
2,314
How to set the dynamic input sizes for decoder_with_past_model.onnx of NLLB
Dear author, I'm a beginner in optimum, so this question may be an elementary one. I used optimum to export decoder_with_past_model.onnx from nllb-200-distilled-600M. The resulting onnx has many inputs with dynamic shapes. Now I intend to overwrite the inputs with static sizes. However, I'm not sure about the correct set...
https://github.com/huggingface/optimum/issues/2314
closed
[ "Stale" ]
2025-06-30T06:37:50Z
2025-08-07T02:17:43Z
null
liamsun2019
pytorch/TensorRT
3,637
❓ [Question] Why is `torch.bfloat16` excluded from the `allowed_casts` set ?
https://github.com/pytorch/TensorRT/blob/a66241158dc33a96138ac768a9e1facf0cae3594/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py#L1030-L1037 Is there a specific reason why `torch.bfloat16` is not included in the `allowed_casts` set within the `to_copy_dtype_validator` function? Plus, this causes graph pa...
https://github.com/pytorch/TensorRT/issues/3637
closed
[ "question" ]
2025-06-30T02:24:47Z
2025-07-04T00:01:16Z
null
junstar92
huggingface/transformers
39,114
Is there a way to force it to use ASCII based progress bar and not the ipython widget one?
When loading models, I prefer an ASCII-based progress bar rather than an IPython one
https://github.com/huggingface/transformers/issues/39114
open
[ "Feature request" ]
2025-06-29T22:41:19Z
2025-07-07T13:20:13Z
0
weathon
huggingface/transformers
39,105
How to use other acceleration apis of npu?
### Feature request I noticed that transformers now supports using flash attention directly on the NPU via [```npu_flash_attention.py```](https://github.com/huggingface/transformers/pull/36696). There are many other acceleration APIs that can be used on the NPU, such as those shown in the [doc](https://www.hiascend.com/document/detai...
https://github.com/huggingface/transformers/issues/39105
closed
[ "Feature request" ]
2025-06-29T08:26:29Z
2026-01-04T07:23:26Z
null
zheliuyu
huggingface/candle
3,013
Word Timestamp for whisper
Hi, is there no way to get word timestamps using Whisper in candle? The example successfully demonstrates the retrieval of segment timestamps, but how would one retrieve word timestamps? When I look into the Python code, they seem to pass this `word_timestamp=True` argument while transcribing and get the result with `base...
https://github.com/huggingface/candle/issues/3013
open
[]
2025-06-29T01:16:38Z
2025-06-29T23:47:39Z
2
bp7968h
huggingface/trl
3,662
What is the point of steps_per_gen in GRPO Trainer
Hello, can you please explain what is the point of steps_per_gen in GRPO Training config when we already have num_iterations? The policy update logic can then simply be: if num_iterations = 1, generations and model update are on_policy (per_token_logps = old_per_token_logps) When num_iterations > 1, then the same gen...
https://github.com/huggingface/trl/issues/3662
open
[ "❓ question", "πŸ‹ GRPO" ]
2025-06-28T20:08:01Z
2025-07-25T08:05:50Z
null
ankur6ue
pytorch/torchtitan
1,355
Llama4 TP bug: DTensor local tensor dtype does not match DTensorSpec tensor meta dtype, causing meta registration error
### Bug description When I apply FSDP+TP to the Llama4 debug model using plain eager bf16 training, the MoE routed experts weights are DTensors. The local tensor dtype is bf16, but the Dtensor spec tensor meta dtype (`self.w1._spec.tensor_meta.dtype`) is fp32. This mismatch seems to cause the meta registration error b...
https://github.com/pytorch/torchtitan/issues/1355
closed
[ "bug" ]
2025-06-28T05:31:22Z
2025-08-21T03:23:49Z
2
danielvegamyhre
pytorch/ao
2,456
How to not decompose the choose_qparams_affine call_func
Hi, In the current v0.11.0, after torch.export.export() I have the graph below: ``` (Pdb) print(ep.graph) graph(): %linear1_weight : [num_users=1] = get_attr[target=linear1.weight] %x : [num_users=2] = placeholder[target=x] %choose_qparams_affine : [num_users=2] = call_function[target=torch.ops.torchao.choo...
https://github.com/pytorch/ao/issues/2456
open
[]
2025-06-27T22:23:33Z
2025-07-25T18:26:32Z
null
lanluo-nvidia
huggingface/lerobot
1,399
calibrate.py for only follower
The calibrate.py file doesn't work for setting up the motors for the follower arm, as there aren't enough parameters for the function to run. Has anyone made an adaptation of the calibrate file that doesn't take the teleop into consideration?
https://github.com/huggingface/lerobot/issues/1399
open
[ "question", "teleoperators" ]
2025-06-27T20:53:47Z
2025-08-12T09:51:53Z
null
ramallis
huggingface/transformers
39,091
`transformers`' dependency on `sentencepiece` blocks use on windows in python 3.13
### System Info Due to * changes in Python 3.13, * an incompatibility in `sentencepiece`, * `transformers` dependency on `sentencepiece`, `transformers` cannot be easily installed under windows + py3.13, and does not work as a dependency of other packages in this environment There are multiple issues and a merged P...
https://github.com/huggingface/transformers/issues/39091
closed
[ "Usage" ]
2025-06-27T15:23:57Z
2025-07-03T16:02:47Z
5
leondz
huggingface/transformers
39,073
Inefficient default GELU implementation in GPT2
While profiling the HuggingFace GPT2 model, I found that the default GELU backend used is NewGELUActivation, which is inefficient in most cases. Instead of using a fused CUDA kernel, NewGELUActivation executes multiple separate PyTorch-level operators, leading to unnecessary kernel launches and memory overhead. ```pyt...
https://github.com/huggingface/transformers/issues/39073
closed
[]
2025-06-27T09:07:39Z
2025-08-12T03:35:13Z
4
null-pointer-access
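The inefficiency reported above is that `NewGELUActivation` computes the tanh approximation of GELU through several separate PyTorch ops instead of one fused kernel. The formula itself can be sketched in plain Python and checked against the exact erf-based GELU (a minimal illustration of why the approximate form is safe to route to a fused kernel, not the fused implementation itself):

```python
import math

def gelu_exact(x: float) -> float:
    # Exact GELU: 0.5 * x * (1 + erf(x / sqrt(2)))
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x: float) -> float:
    # The tanh approximation that NewGELUActivation builds from
    # several separate tensor ops (pow, mul, add, tanh, ...)
    return 0.5 * x * (1.0 + math.tanh(
        math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

# The two forms agree to ~1e-3 over typical activation ranges,
# which is why a single fused tanh-GELU kernel loses no accuracy.
for v in (-2.0, -0.5, 0.0, 0.5, 1.0, 2.0):
    assert abs(gelu_exact(v) - gelu_tanh(v)) < 2e-3
```

In PyTorch this corresponds to preferring `nn.GELU(approximate="tanh")`, which dispatches to one kernel, over an op-by-op reimplementation of the same formula.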
huggingface/diffusers
11,816
set_adapters performance degrades with the number of inactive adapters
### Describe the bug ### Goal Build an image-generation service with `StableDiffusionXLPipeline` that: 1. Keeps ~50 LoRA adapters resident in GPU VRAM. 2. For each request: • activate **≤ 5** specific LoRAs via `pipeline.set_adapters(...)` • run inference • deactivate them (ready for the next request). ...
https://github.com/huggingface/diffusers/issues/11816
closed
[ "bug" ]
2025-06-26T22:27:54Z
2025-09-29T14:33:13Z
27
hrazjan
huggingface/lerobot
1,393
motor configuration request - one motor at a time like configure_motors
I like the new process generally but I think the ability to configure a single motor was valuable (e.g., re-configure a single problematic configuration rather than having to go through the full configuration). In addition to the current process, it would be nice if we could bring that per-motor functionality forward,...
https://github.com/huggingface/lerobot/issues/1393
open
[ "question", "robots" ]
2025-06-26T19:27:36Z
2025-08-12T09:52:30Z
null
brainwavecoder9
huggingface/text-generation-inference
3,277
Rubbish responses by Llama-3.3-70B-Instruct when message API is enabled.
### System Info TGI endpoint deployed on AWS SageMaker using the 3.2.3 image version. The image URI is `763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-tgi-inference:2.6.0-tgi3.2.3-gpu-py311-cu124-ubuntu22.04` The environment is: ```python env = {'HF_MODEL_ID': 'meta-llama/Llama-3.3-70B-Instruct', ...
https://github.com/huggingface/text-generation-inference/issues/3277
open
[]
2025-06-26T06:49:31Z
2025-06-26T06:56:22Z
0
alexshtf
pytorch/torchtitan
1,344
Issue reproducing Float8 performance benchmark
### Bug description I'm looking at https://github.com/pytorch/torchtitan/blob/main/benchmarks/llama3_h100_202412_torchtitan.md. Specifically, this table: <img width="1170" alt="Image" src="https://github.com/user-attachments/assets/a1d26639-1d79-4992-ae17-9f37c86828f2" /> I'm not certain what the repro command for t...
https://github.com/pytorch/torchtitan/issues/1344
open
[ "documentation" ]
2025-06-26T04:22:28Z
2025-07-10T01:53:47Z
6
xmfan
huggingface/peft
2,615
How can I fine-tune the linear layers of the LLM part in Qwen2.5_VL 3B?
I only want to fine-tune the linear layers in the LLM part of Qwen2.5_VL 3B. The LoRA target modules are as follows: ``` target_modules: List[str] = field(default_factory=lambda: [ 'self_attn.q_proj', 'self_attn.k_proj', 'self_attn.v_proj', 'self_attn.o_proj', 'mlp.gate_proj', 'mlp.up_proj', ...
https://github.com/huggingface/peft/issues/2615
closed
[]
2025-06-26T02:08:43Z
2025-07-18T16:04:27Z
7
guoguo1314
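Restricting LoRA to the LLM trunk of a multimodal model usually comes down to filtering fully-qualified module names by prefix and suffix. A minimal sketch follows; the module names here are placeholders, and the real prefixes must be checked against `model.named_modules()` for the actual checkpoint:

```python
# Hypothetical module names for illustration only.
ALL_MODULES = [
    "visual.blocks.0.attn.qkv",            # vision tower: excluded
    "model.layers.0.self_attn.q_proj",     # LLM trunk: included
    "model.layers.0.self_attn.v_proj",
    "model.layers.0.mlp.gate_proj",
]

LINEAR_SUFFIXES = ("q_proj", "k_proj", "v_proj", "o_proj",
                   "gate_proj", "up_proj", "down_proj")

def llm_only_targets(names, llm_prefix="model.layers."):
    # Keep only the linear layers under the language-model trunk,
    # dropping anything in the vision tower.
    return [n for n in names
            if n.startswith(llm_prefix) and n.endswith(LINEAR_SUFFIXES)]

print(llm_only_targets(ALL_MODULES))
```

The resulting list can then be passed as `target_modules`, so the vision encoder's projections are never wrapped.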
pytorch/xla
9,405
Cannot mark sharding or print values of a SPMD tensor in a scanned function
## πŸ› Bug Cannot mark sharding or print values of a SPMD tensor in a scanned function ## To Reproduce ```python import torch_xla.core.xla_model as xm import torch_xla.runtime as xr import torch_xla.distributed.spmd as xs from torch_xla.experimental.scan import scan import torch from torch import nn import numpy as...
https://github.com/pytorch/xla/issues/9405
closed
[ "bug" ]
2025-06-25T10:36:50Z
2025-06-27T12:31:38Z
3
Topologized
huggingface/lerobot
1,383
Can multiple Lerobot datasets be mixed to pre-train a VLA model?
Hello, I would like to know if multiple independent Lerobot datasets can be mixed to achieve large-scale pre-training of a VLA model. Just like OpenVLA, it can mix multiple RLDS datasets to pre-train models.
https://github.com/huggingface/lerobot/issues/1383
open
[ "enhancement", "question", "dataset" ]
2025-06-25T08:45:48Z
2025-08-12T09:55:48Z
null
xliu0105
pytorch/pytorch
156,797
How to use compile cache?
According to the documentation at https://docs.pytorch.org/tutorials/recipes/torch_compile_caching_tutorial.html, we can use torch.compiler.save_cache_artifacts() and torch.compiler.load_cache_artifacts() to reduce compilation time. However, when exactly should we save the cache, and when should we load it? Is there ...
https://github.com/pytorch/pytorch/issues/156797
closed
[ "module: docs", "oncall: pt2" ]
2025-06-25T06:15:38Z
2025-06-30T03:32:22Z
null
jhl13
huggingface/transformers
39,023
Does Gemma 3 need positions ids to be 1-indexed explicitly?
Hi Team At some point `Gemma3ForConditionalGeneration` used to impose a 1-indexing of `position_ids`, [see here](https://github.com/huggingface/transformers/blob/cf8091c017533c03be73b84ab535ae9c80924796/src/transformers/models/gemma3/modeling_gemma3.py#L1430). However you won't find this in the latest main anymore, [s...
https://github.com/huggingface/transformers/issues/39023
closed
[]
2025-06-25T00:00:14Z
2025-07-25T17:27:26Z
2
krypticmouse
pytorch/torchtitan
1,334
[Low-bit Optimizers] Does torchtitan plan to integrate AdamW8bit or AdamWFP8 from TorchAO
Currently, using low-bit optimizers from [TorchAO](https://github.com/pytorch/ao) such as AdamW8bit and AdamWFP8 is not supported in this repo. Low-bit optimizers could significantly reduce memory usage and improve training efficiency. It would be a great enhancement to support them natively. Is there any plan to supp...
https://github.com/pytorch/torchtitan/issues/1334
open
[]
2025-06-24T21:28:20Z
2025-06-25T03:13:35Z
4
haochengxi
huggingface/transformers
39,017
Not able to use flash attention with torch.compile with model like BERT
### System Info when using torch.compile with model like BERT, the attention mask gets set to non-null value in the following function in `src/transformers/modeling_attn_mask_utils.py`. Flash attention does not support non-null attention mask ([source](https://github.com/pytorch/pytorch/blob/b09bd414a6ccba158c09f586a2...
https://github.com/huggingface/transformers/issues/39017
closed
[ "bug" ]
2025-06-24T19:09:07Z
2025-10-09T23:03:45Z
3
gambiTarun
huggingface/lerobot
1,379
New motor configuration doesn't center servo motors for so100
I was used to using the previously existing `configure_motor.py` script to set the baudrate, ID and center the servo. And I used to do this before attempting assembly. This script was also useful for configuring individual motors whenever I had to replace one in case they broke for some reason. I just pulled the lates...
https://github.com/huggingface/lerobot/issues/1379
open
[ "question", "robots" ]
2025-06-24T15:43:16Z
2025-08-12T09:56:02Z
null
Esser50K
huggingface/datasets
7,637
Introduce subset_name as an alias of config_name
### Feature request Add support for `subset_name` as an alias for `config_name` in the datasets library and related tools (such as loading scripts, documentation, and metadata). ### Motivation The Hugging Face Hub dataset viewer displays a column named **"Subset"**, which refers to what is currently technically call...
https://github.com/huggingface/datasets/issues/7637
open
[ "enhancement" ]
2025-06-24T12:49:01Z
2025-07-01T16:08:33Z
4
albertvillanova
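The alias requested above amounts to resolving two keyword arguments into one, with a conflict check. A minimal sketch of that resolution logic, assuming a helper name that is not part of the real `datasets` API:

```python
def resolve_config_name(config_name=None, subset_name=None):
    # Hypothetical helper showing how `subset_name` could alias
    # `config_name`; the real library would do this inside
    # `load_dataset` and related entry points.
    if (config_name is not None and subset_name is not None
            and config_name != subset_name):
        raise ValueError(
            "config_name and subset_name are aliases; "
            "pass one of them, or identical values.")
    return config_name if config_name is not None else subset_name

# Either spelling resolves to the same configuration name.
assert resolve_config_name(config_name="en") == "en"
assert resolve_config_name(subset_name="en") == "en"
```

Keeping both spellings valid (rather than deprecating one) matches the Hub viewer's "Subset" terminology without breaking existing `config_name` callers.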
pytorch/pytorch
156,673
[Onnx] How to do torch-dynamo based onnx exports for SAM-like models with optional inputs?
### πŸ› Describe the bug I like to generate an onnx model with torch-dynamo for SAM. How can I work with conditional inputs, like so: ``` from typing import Optional import torch from torch import Tensor class Model(torch.nn.Module): def __init__(self): super().__init__() def foward(self, image, poi...
https://github.com/pytorch/pytorch/issues/156673
closed
[ "module: onnx", "oncall: pt2", "oncall: export" ]
2025-06-24T04:20:24Z
2025-09-11T04:37:42Z
null
FabianSchuetze
pytorch/torchtitan
1,329
OOM recovery under multi-node FSDP/HSDP
### Bug description Does torchtitan provide any recipes of how to implement batch skipping / OOM recovery in multi-node FSDP setup? In RL/GRPO training this is very pertinent (where we don't know response seqlens a-priori to do packing / clipping): - https://github.com/volcengine/verl/issues/2159 One thing I could t...
https://github.com/pytorch/torchtitan/issues/1329
open
[ "question", "post training" ]
2025-06-23T16:22:58Z
2025-10-02T02:33:20Z
null
vadimkantorov
huggingface/candle
3,003
Build for multiple arch?
Is building for multiple architectures at once supported, e.g. `CUDA_COMPUTE_CAP="90,100,121"`?
https://github.com/huggingface/candle/issues/3003
open
[]
2025-06-23T13:17:45Z
2025-06-23T13:17:45Z
0
johnnynunez
huggingface/transformers
38,984
QA pipeline prediction generates wrong response when `top_k` param > 1
### System Info - `transformers` version: 4.53.0.dev0 - Platform: Linux-5.4.0-1128-aws-fips-x86_64-with-glibc2.31 - Python version: 3.11.11 - Huggingface_hub version: 0.33.0 - Safetensors version: 0.5.3 - Accelerate version: 1.8.1 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (ac...
https://github.com/huggingface/transformers/issues/38984
closed
[ "bug" ]
2025-06-23T13:09:23Z
2025-07-17T08:24:31Z
4
WeichenXu123
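Independent of the bug report above, the expected `top_k` behavior of a QA pipeline is just "rank candidate spans by score and keep the k best". A generic sketch of that contract (not the transformers implementation, and the candidate dicts are illustrative):

```python
def top_k_answers(candidates, k):
    # Rank (answer, score) candidates by score, highest first,
    # and return at most k of them (always at least one).
    ranked = sorted(candidates, key=lambda c: c["score"], reverse=True)
    return ranked[: max(1, k)]

cands = [
    {"answer": "Paris", "score": 0.91},
    {"answer": "Lyon",  "score": 0.05},
    {"answer": "paris", "score": 0.31},
]
print(top_k_answers(cands, 2))  # two highest-scoring candidates
```

With `top_k=1` this degenerates to the single best answer, so any divergence between the `top_k=1` and `top_k>1` paths, as the issue describes, points at a pipeline bug rather than at this selection step.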
huggingface/lighteval
822
Documenting how to launch multilingual tasks
At the moment, multilingual tasks can only be launched via custom tasks; this must be documented.
https://github.com/huggingface/lighteval/issues/822
open
[]
2025-06-23T11:10:13Z
2025-09-03T15:28:42Z
null
clefourrier
huggingface/candle
3,002
Is there a roadmap or intention to support CUDA Graph?
vLLM v1 uses CUDA Graph to capture the execution workflow of the entire model, resulting in significant performance improvements compared to the previous version. I'm wondering if there are any plans to support CUDA Graph in Candle. Would it be possible to add `start_capture`, `end_capture`, and `replay` to the `Module...
https://github.com/huggingface/candle/issues/3002
open
[]
2025-06-23T10:11:12Z
2025-09-06T14:04:53Z
4
guoqingbao
huggingface/transformers
38,977
LMHead is processing redundant tokens in prefill
While using `GPT2LMHeadModel.generate()` and compare its performance with vLLM, I noticed a significant inefficiency in the `forward()` implementation of many huggingface models. For example, in the `GPT2LMHeadModel.forward`, `self.lm_head` is applied to all token hidden states, even when called from the `generate()` m...
https://github.com/huggingface/transformers/issues/38977
closed
[]
2025-06-23T08:32:22Z
2025-06-25T08:29:02Z
3
null-pointer-access
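The optimization the issue asks for is to project only the final position's hidden state through the LM head during prefill, since `generate()` only samples from the last token's logits. A toy sketch with plain Python lists (illustrative shapes, not the transformers code):

```python
def project_all(hidden, w):
    # hidden: [seq][d], w: [vocab][d] -> logits for every position.
    # This is what applying lm_head to the full sequence does.
    return [[sum(h[i] * row[i] for i in range(len(h))) for row in w]
            for h in hidden]

def project_last(hidden, w):
    # During prefill, generate() only needs the last position,
    # so one matvec replaces seq_len of them.
    h = hidden[-1]
    return [sum(h[i] * row[i] for i in range(len(h))) for row in w]

hidden = [[1.0, 0.0], [0.0, 2.0], [3.0, 1.0]]   # seq_len=3, d=2
w = [[1.0, 1.0], [0.5, -0.5]]                   # vocab=2
assert project_last(hidden, w) == project_all(hidden, w)[-1]
```

For a real vocabulary (tens of thousands of rows) and long prompts, skipping the redundant positions saves a `seq_len x vocab` matmul and the memory for its output.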
huggingface/lerobot
1,369
The performance of SmolVLA on LIBERO cannot be replicated
I trained SmolVLA from scratch on the LIBERO dataset (the LIBERO dataset under Lerobot), but during testing I couldn't reproduce the results reported in the paper. Could there be a problem with my reproduction code or process? Could you provide a reproduction tutorial?
https://github.com/huggingface/lerobot/issues/1369
closed
[ "question", "policies" ]
2025-06-23T07:38:52Z
2025-10-07T19:58:50Z
null
hahans
huggingface/transformers
38,970
Global and Local Anomaly co-Synthesis Strategy (GLASS)
### Model description Hi 🤗 Transformers team, I would like to contribute a new model to the library: GLASS – A Unified Anomaly Synthesis Strategy with Gradient Ascent for Industrial Anomaly Detection and Localization 📄 Paper: https://arxiv.org/abs/2407.09359 💻 Code: https://github.com/cqylunlun/GLASS GLASS is a...
https://github.com/huggingface/transformers/issues/38970
closed
[ "New model" ]
2025-06-22T12:28:19Z
2025-06-23T20:55:16Z
2
sbrzz
huggingface/smolagents
1,467
What is the most elegant way to add prompt text so that the agent's final answer, or all reasoning text displayed in Gradio, is in Chinese or another specific language
What is the most elegant way to add prompt text so that the agent's final answer, or all reasoning text displayed in Gradio, is in Chinese or another specific language?
https://github.com/huggingface/smolagents/issues/1467
closed
[ "enhancement" ]
2025-06-22T07:34:13Z
2025-06-22T10:49:30Z
null
ShelterWFF
huggingface/transformers
38,965
Modernbert implementation with Tensorflow
Hi all! I've noticed that ModernBERT [does not have an implementation in tensorflow](https://github.com/huggingface/transformers/issues/37128#issuecomment-2766235185) and I was looking into it. I'm checking this https://huggingface.co/docs/transformers/main/add_tensorflow_model and I noticed that it's talking abo...
https://github.com/huggingface/transformers/issues/38965
closed
[ "Feature request" ]
2025-06-21T18:52:50Z
2025-06-23T15:17:50Z
2
lfoppiano
huggingface/lerobot
1,361
Nvidia Gr00t
Hi, Are there any plans to integrate Nvidia Gr00t policy?
https://github.com/huggingface/lerobot/issues/1361
open
[ "enhancement", "question", "policies" ]
2025-06-21T10:42:07Z
2025-08-20T13:34:30Z
null
AbdElRahmanFarhan
huggingface/lerobot
1,360
Homing offset not taken into account during calibration
### System Info ```Shell As of lerobot commit `c940676bdda5ab92e3f9446a72fafca5c550b505`. Other system information is irrelevant for this issue. ``` ### Information - [x] One of the scripts in the examples/ folder of LeRobot - [ ] My own task or dataset (give details below) ### Reproduction In `lerobot/common/moto...
https://github.com/huggingface/lerobot/issues/1360
open
[ "question", "robots" ]
2025-06-21T01:28:04Z
2025-08-12T09:57:27Z
null
godardt
pytorch/ao
2,419
Benefits of Using QAT Before GGUF Quantization?
Hi, thank you for the amazing project. I have a question regarding quantization workflows. Does applying QAT before converting to GGUF format (e.g. using `Q4, Q4_K_M`) result in better quality compared to directly quantizing with GGUF alone? I'm planning to serve my model using llama.cpp, so converting to GGUF is requi...
https://github.com/pytorch/ao/issues/2419
closed
[]
2025-06-21T01:22:49Z
2025-06-25T11:56:11Z
5
kiyoonyoo
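The mechanism behind the question above is that QAT simulates post-training rounding during the forward pass ("fake quantization"), so the weights learn to tolerate the grid that a later int4 export such as GGUF Q4 will impose. A minimal symmetric round-to-nearest sketch in plain Python (an illustration of the idea, not TorchAO's or llama.cpp's actual scheme):

```python
def fake_quant(weights, bits=4):
    # Symmetric fake quantization: snap each weight to a signed
    # int grid, then immediately dequantize. QAT inserts this into
    # the forward pass so training sees the rounding error.
    qmax = 2 ** (bits - 1) - 1            # 7 for 4-bit
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) * scale for w in weights]

w = [0.7, -0.35, 0.1, -0.02]
dq = fake_quant(w)
# Every dequantized value lies within half a quantization step.
step = max(abs(x) for x in w) / 7
assert all(abs(a - b) <= step / 2 + 1e-12 for a, b in zip(w, dq))
```

Whether training against this simulated grid actually transfers to GGUF's block-wise formats depends on how closely the QAT scheme matches the export scheme, which is exactly what the issue is asking about.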
pytorch/torchtitan
1,323
Why `preserve_rng_state=False` in activation checkpointing
Why does torchtitan set `preserve_rng_state=False` for activation checkpointing? E.g.: https://github.com/pytorch/torchtitan/blob/f4048f8e1b36827156c4dc861c9680333a8542f9/torchtitan/models/llama3/infra/parallelize.py#L238
https://github.com/pytorch/torchtitan/issues/1323
open
[ "question", "high priority", "triage review", "module: activation checkpointing" ]
2025-06-20T20:22:42Z
2025-08-25T04:58:04Z
null
awgu
pytorch/torchtitan
1,322
How to adapt HuggingFace or other models for TorchTitan
Is there any thought on how to adapt HuggingFace or other models for pre-training with TorchTitan?
https://github.com/pytorch/torchtitan/issues/1322
open
[ "duplicate" ]
2025-06-20T19:39:54Z
2025-08-21T03:22:37Z
null
githubsgi
huggingface/lerobot
1,359
Not clear how to setup a basic interactive simulator demo
Before buying the real robot most people would want to run a visual, interactive demo in the simulator. A demo should provide: - A trained model on the Franka robot - an intuitive way to interact with the cube using the mouse (e.g. drag, move, or “kick” it around) so we can see the robot chasing the cube. Many th...
https://github.com/huggingface/lerobot/issues/1359
closed
[ "question", "simulation" ]
2025-06-20T14:12:17Z
2025-10-09T21:49:19Z
null
aguaviva
huggingface/optimum
2,300
Support for EuroBERT models
### Feature request I would like to export and optimize the [EuroBERT models](https://huggingface.co/collections/EuroBERT/eurobert-67ceb6c01804878b1f7999c6). Currently, it doesn't seem to be possible. When I run : ```python from optimum.onnxruntime import ORTModelForSequenceClassification onnx_model = ORTModelForSe...
https://github.com/huggingface/optimum/issues/2300
closed
[ "Stale" ]
2025-06-20T12:35:46Z
2025-08-21T02:11:39Z
2
antonioloison
huggingface/peft
2,601
How to Load Adapters with Per-Layer Variable Shapes in `PeftModel.from_pretrained`
### Feature request Hi PEFT team, Thank you for the great work on the PEFT library! I'm working on an extension to LoKrConfig that supports layer-wise adapters with different internal shapes. Specifically: - Each **adapter assigned to a layer** (e.g., adapter for layer A vs. layer B) may have a different shape. - T...
https://github.com/huggingface/peft/issues/2601
closed
[]
2025-06-20T11:11:19Z
2025-06-21T05:42:58Z
null
yuxuan-z19
huggingface/diffusers
11,762
Could you help fix the backdoor vulnerability caused by two risky pre-trained models used in this repo?
### Describe the bug Hi, @patrickvonplaten, @sayakpaul, I'd like to report that two potentially risky pretrained models are being used in this project, which may pose **backdoor threats**. Please check the following code example: ### Reproduction • **tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_upsc...
https://github.com/huggingface/diffusers/issues/11762
open
[ "bug" ]
2025-06-20T09:31:50Z
2025-06-23T05:25:22Z
2
Rockstar292
huggingface/transformers
38,927
Can't load my LoRA checkpoint after gemma3 refactor
### System Info - `transformers` version: 4.52.4 - Platform: Linux-6.8.0-1029-aws-x86_64-with-glibc2.35 - Python version: 3.10.15 - Huggingface_hub version: 0.32.2 - Safetensors version: 0.4.3 - Accelerate version: 1.6.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (GPU?): 2.6.0...
https://github.com/huggingface/transformers/issues/38927
closed
[ "bug" ]
2025-06-20T06:59:34Z
2025-10-07T18:53:15Z
12
jood-canva
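When a refactor renames a model's submodules, adapter checkpoints saved under the old names fail to load because every parameter key carries the old prefix. One common workaround is a prefix remap over the state dict. The prefixes below are hypothetical; the real ones must be read off the old and new `named_parameters()` output:

```python
def remap_lora_keys(state_dict, old="model.", new="language_model.model."):
    # Hypothetical prefix remap for adapter checkpoints saved
    # before a refactor renamed the submodules. Keys that do not
    # carry the old prefix pass through unchanged.
    return {
        (new + k[len(old):]) if k.startswith(old) else k: v
        for k, v in state_dict.items()
    }

old_sd = {"model.layers.0.q_proj.lora_A.weight": [0.1]}
new_sd = remap_lora_keys(old_sd)
assert "language_model.model.layers.0.q_proj.lora_A.weight" in new_sd
```

After remapping, the dict can be saved back out and loaded as a normal adapter checkpoint against the refactored model.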
huggingface/mcp-course
119
How to preview the project locally?
I'm trying to preview the project locally to see my changes and contribute to the project. But when executing the script the following errors is triggered. Error: ![Image](https://github.com/user-attachments/assets/b9a47af1-e28e-4175-8c33-7ed2aac9121b) Preview: ![Image](https://github.com/user-attachments/assets/2b14...
https://github.com/huggingface/mcp-course/issues/119
closed
[]
2025-06-20T01:05:46Z
2025-09-23T17:29:13Z
null
arimariojesus
huggingface/transformers
38,924
Exporting Llava decoder into ONNX format
I am working on exporting Llava into ONNX format. I came across this previous issue: https://github.com/huggingface/transformers/issues/33637 which had a notebook that outlined how to export in three separate parts. I noticed there wasn't any actual code on how the decoder was exported unlike the other two components. ...
https://github.com/huggingface/transformers/issues/38924
closed
[]
2025-06-19T23:32:47Z
2025-08-12T08:03:14Z
10
EricJi150