| repo (stringclasses, 147 values) | number (int64, 1–172k) | title (stringlengths, 2–476) | body (stringlengths, 0–5k) | url (stringlengths, 39–70) | state (stringclasses, 2 values) | labels (listlengths, 0–9) | created_at (timestamp[ns, tz=UTC], 2017-01-18 18:50:08 – 2026-01-06 07:33:18) | updated_at (timestamp[ns, tz=UTC], 2017-01-18 19:20:07 – 2026-01-06 08:03:39) | comments (int64, 0–58, nullable) | user (stringlengths, 2–28) |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/lerobot | 2,224 | Can I just modify the JSON of the pretrained policy to adapt it to my own robot? | I just want to know if I can just modify the config JSON (shape of state, size of image, etc.) to adapt the model for inference on my modified robot (which has a different number of features and a different image resolution)? | https://github.com/huggingface/lerobot/issues/2224 | open | [
"question",
"policies"
] | 2025-10-17T01:33:32Z | 2025-10-20T16:40:26Z | null | shs822 |
huggingface/lerobot | 2,221 | Question about pre-trained weights usability and performance on Hugging Face models | Hello,
I would like to ask whether the weights provided on Hugging Face (for example, under the lerobot author page) can be directly downloaded and used for inference, or if they must be fine-tuned before achieving reasonable performance.
When I directly load and evaluate the models (e.g., lerobot/smolvla_base or ler... | https://github.com/huggingface/lerobot/issues/2221 | closed | [
"question"
] | 2025-10-16T14:14:39Z | 2025-10-31T16:26:45Z | null | MichaelWu99-lab |
vllm-project/vllm | 27,021 | [Usage]: Need guidance reproducing benchmark results from PR #25337 — results differ significantly from reported data | ## Background
Recently, we have been working on optimizing the position computation for multimodal models in vLLM.
During benchmarking, we noticed that our results were not as expected.
To investigate, we decided to reproduce the benchmark results from [PR #25337](https://github.com/vllm-project/vllm/pull/25337), com... | https://github.com/vllm-project/vllm/issues/27021 | open | [
"usage"
] | 2025-10-16T12:31:03Z | 2025-10-17T05:46:32Z | 5 | deitxfge |
vllm-project/vllm | 27,017 | [Doc]: KV Cache Memory allocations | ### 📚 The doc issue
Hello,
When serving a model via vLLM for text(token) generation:
1. Before a new request gets scheduled, does vLLM check if KV cache for a sequence length of `max_model_len` is available for that new request or does it check if KV cache for a sequence length of `input prompt + max_tokens` (if it'... | https://github.com/vllm-project/vllm/issues/27017 | closed | [
"documentation"
] | 2025-10-16T11:43:43Z | 2025-11-04T11:08:02Z | 7 | sneha5gsm |
vllm-project/vllm | 27,011 | [Usage]: Running GLM-4.5-Air with Speculative Decoding | ### Your current environment
```
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to run inference on [GLM-4.5-Air](https://huggingface.co/zai-org/GLM-4.5-Air-FP8) with speculative decoding. The [GLM 4.5](https://huggingface.co/zai-org/GLM-4.5) page mentions `All models use MT...
"usage"
] | 2025-10-16T10:17:54Z | 2025-10-16T10:23:01Z | 0 | aqx95 |
vllm-project/vllm | 27,006 | [Usage]: In vLLM version 0.8.5, when I send an HTTP image URL directly, the model cannot recognize the image content, but it works correctly when I use a base64-encoded image. I’d like to understand why this happens. | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues... | https://github.com/vllm-project/vllm/issues/27006 | open | [
"usage"
] | 2025-10-16T08:09:29Z | 2025-10-16T10:33:49Z | 4 | Lislttt |
huggingface/lerobot | 2,218 | image pad value in pi0/pi05 | ### System Info
```Shell
the latest lerobot version
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
def resize_with_pad_torch( # see openpi `resize_with_pad_torch` (exact copy)
images: torch.Tensor,
height: ... | https://github.com/huggingface/lerobot/issues/2218 | open | [
"bug",
"question",
"policies"
] | 2025-10-16T06:48:13Z | 2025-10-17T09:58:49Z | null | Tgzz666 |
huggingface/transformers | 41,640 | AttributeError: BartTokenizerFast has no attribute image_token. Did you mean: 'mask_token'? | ### System Info
Ubuntu
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
im... | https://github.com/huggingface/transformers/issues/41640 | closed | [
"bug"
] | 2025-10-16T06:34:02Z | 2025-10-17T09:00:36Z | 5 | conceptofmind |
huggingface/transformers.js | 1,439 | Integration to a CLI application created using PKG | ### Question
I'm trying to bundle a Node.js CLI tool that uses `@xenova/transformers` into a single executable using [pkg](https://github.com/vercel/pkg).
The build works fine, but when I run the packaged executable, I get this error:
```
Error: Cannot find module '../bin/napi-v3/linux/x64/onnxruntime_binding.node'
R... | https://github.com/huggingface/transformers.js/issues/1439 | open | [
"question"
] | 2025-10-16T05:30:32Z | 2025-10-26T23:32:41Z | null | JosephJibi |
huggingface/lerobot | 2,216 | GPU memory required to finetune pi05 | I tried to finetune pi05 with an RTX A6000 (48GB) and got an insufficient-memory error. Does anyone know how much GPU memory is needed to finetune a pi05 policy?
Thanks, | https://github.com/huggingface/lerobot/issues/2216 | open | [
"question",
"policies",
"performance"
] | 2025-10-16T04:46:21Z | 2025-12-22T07:42:45Z | null | jcl2023 |
vllm-project/vllm | 26,981 | [Usage]: Does vllm support use TokensPrompt for Qwen3VL model | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
My truncation strategy differs slightly from the standard approach (I wish to preserve the system prompt and the final suffix, only truncating the middle portion). It seems that the current version of v... | https://github.com/vllm-project/vllm/issues/26981 | open | [
"usage"
] | 2025-10-16T03:22:09Z | 2025-10-27T03:33:53Z | 10 | afalf |
huggingface/lerobot | 2,214 | Potential Scale Imbalance in smolVLA Embedding Pipeline | Hi, I noticed a potential scale inconsistency in the embedding pipeline.
Specifically, state_emb is not normalized, while both img_emb and lang_emb are explicitly scaled by math.sqrt(emb_dim):
https://github.com/huggingface/lerobot/blob/a6ff3cfebb0304f2c378515dd30ea06fff8f473f/src/lerobot/policies/smolvla/modeling_smo... | https://github.com/huggingface/lerobot/issues/2214 | open | [
"question",
"policies"
] | 2025-10-16T02:11:24Z | 2025-10-17T11:29:36Z | null | kkTkk012 |
vllm-project/vllm | 26,964 | [Bug]: Issue with Deepseek Reasoning parser with Qwen3 2507 chat templates | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
# wget https://raw.githubusercontent.com/vllm-project/vllm/main/vllm/collect_env.py
# For security purposes, please feel free to check the contents of collect_env.py before running it.
python collect_env... | https://github.com/vllm-project/vllm/issues/26964 | open | [
"bug"
] | 2025-10-16T00:39:12Z | 2025-10-20T17:47:02Z | 1 | MikeNatC |
vllm-project/vllm | 26,949 | [Bug]: RuntimeError: CUDA driver error: invalid device ordinal when symmetric memory (symm_mem) is enabled in multi-GPU vLLM setup with 4×H100 PCIe | ### My current environment
Environment:
Model: RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic
vLLM Version: latest main (installed via pip)
Hardware: 4× NVIDIA H100 PCIe (80GB)
Driver: 550.xx
CUDA: 12.2
PyTorch: 2.4.0
OS: Ubuntu 22.04
Launch Command:
python3 -m vllm.entrypoints.api_server \
--model /ephemeral... | https://github.com/vllm-project/vllm/issues/26949 | open | [
"bug"
] | 2025-10-15T22:08:34Z | 2025-12-25T03:42:49Z | 2 | vadapallij |
vllm-project/vllm | 26,940 | [Feature]: Support `inf` value for burstiness in benchmarks | ### 🚀 The feature, motivation and pitch
In the benchmarks, the burstiness value is used in a gamma distribution to sample the delays between consecutive requests.
```
theta = 1.0 / (current_request_rate * burstiness)
delay_ts.append(np.random.gamma(shape=burstiness, scale=theta))
```
[Theoretically ](https://en.wik... | https://github.com/vllm-project/vllm/issues/26940 | closed | [
"feature request"
] | 2025-10-15T19:39:03Z | 2025-11-03T18:33:19Z | 0 | sducouedic |
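For context on why an `inf` option makes sense, here is a minimal sketch (assuming `request_rate` plays the role of `current_request_rate` in the quoted snippet) showing that the gamma delays keep a fixed mean of 1/rate while their spread vanishes as burstiness grows, so `inf` would simply mean constant delays:
```python
import numpy as np

def sample_delays(request_rate: float, burstiness: float, n: int = 100_000) -> np.ndarray:
    # Same parameterization as the quoted benchmark code:
    # shape=burstiness, scale=1/(rate*burstiness) keeps the mean at 1/rate.
    theta = 1.0 / (request_rate * burstiness)
    return np.random.gamma(shape=burstiness, scale=theta, size=n)

for b in (0.5, 1.0, 10.0, 1000.0):
    d = sample_delays(request_rate=2.0, burstiness=b)
    print(f"burstiness={b:>7}: mean={d.mean():.3f}s std={d.std():.3f}s")
# std shrinks like 1/sqrt(burstiness); in the limit every delay is exactly
# 1/rate, which an explicit `inf` value could emit without sampling at all.
```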
vllm-project/vllm | 26,914 | [Usage]: Why can't I see communication operators in the collected profiling? | ### Your current environment
```text
The output of `python collect_env.py`
```
I collected profiling data via llm.start_profile and stop_profile, but I cannot see any communication operators in kernel_details.
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitti... | https://github.com/vllm-project/vllm/issues/26914 | open | [
"usage"
] | 2025-10-15T13:38:14Z | 2025-10-15T13:38:14Z | 0 | sheep94lion |
vllm-project/vllm | 26,903 | [Usage]: vLLM for video input | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to run inference of qwen2.5-vl or qwen2.5-omni.
When I convert the video to base64 for api calls (e.g. openai format), I found that vLLM seems to use all the video frames by checking the number... | https://github.com/vllm-project/vllm/issues/26903 | open | [
"usage"
] | 2025-10-15T09:29:23Z | 2025-12-11T03:26:33Z | 6 | King-king424 |
huggingface/diffusers | 12,492 | module transformers has no attribute CLIPFeatureExtractor | ### System Info
latest main
### Who can help?
@SunMarc
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
f... | https://github.com/huggingface/diffusers/issues/12492 | closed | [
"bug"
] | 2025-10-15T08:26:05Z | 2025-11-03T05:02:54Z | 3 | jiqing-feng |
vllm-project/vllm | 26,858 | [RFC]: Top-level CLI interface for KV cache offloading | ### Motivation.
CPU (and tier-2 storage) offloading is an important feature in many cases (multi-round QA, document analysis, agent workflow, and reinforcement learning). With the recent advancement in the offloading connector, we already have the vLLM native CPU offloading implemented via the connector API. Also, the... | https://github.com/vllm-project/vllm/issues/26858 | closed | [
"RFC"
] | 2025-10-15T00:11:15Z | 2025-11-01T07:17:08Z | 8 | ApostaC |
huggingface/diffusers | 12,485 | How to enable Context Parallelism for training | Hi @a-r-r-o-w , I would like to ask you for tips on using Context Parallelism for distributed training.
**Is your feature request related to a problem? Please describe.**
Here is the minimal code for adapting Context Parallelism into diffusion model training
```python
# Diffusers Version: 0.36.0.dev0
from diffusers.m... | https://github.com/huggingface/diffusers/issues/12485 | closed | [] | 2025-10-14T21:48:35Z | 2025-10-15T20:33:30Z | null | liming-ai |
vllm-project/vllm | 26,840 | [Doc]: Update AWQ Guide | ### 📚 The doc issue
Situation: AutoAWQ functionality was adopted by llm-compressor but vllm [docs](https://docs.vllm.ai/en/latest/features/quantization/auto_awq.html) point to AutoAWQ which is deprecated
### Suggest a potential alternative/fix
1) Update the [AutoAWQ guide](https://github.com/vllm-project/vllm/blob... | https://github.com/vllm-project/vllm/issues/26840 | closed | [
"documentation"
] | 2025-10-14T20:02:21Z | 2025-11-03T15:39:12Z | 0 | HDCharles |
vllm-project/vllm | 26,838 | [Performance]: RTX 6000 PRO - FP8 in sglang is faster | ### Proposal to improve performance
Can we have a discussion about the sglang FP8 performance vs VLLM performance -
I'm able to get 133 tokens/sec with sglang GLM-4.5-Air-FP8 vs 78 tokens/sec in VLLM
```PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True USE_TRITON_W8A8_FP8_KERNEL=1 SGL_ENABLE_JIT_DEEPGEMM=0 python -... | https://github.com/vllm-project/vllm/issues/26838 | open | [
"performance"
] | 2025-10-14T19:41:14Z | 2025-12-29T14:52:57Z | 10 | voipmonitor |
vllm-project/vllm | 26,817 | [Feature]: Add process_weights_after_loading to AttentionImpl | ### 🚀 The feature, motivation and pitch
Currently, in the `Attention` layer, we check if `process_weights_after_loading` exists and then call it conditionally, and after that we apply flashinfer-specific logic.
Instead, we should just add a `process_weights_after_loading` method to AttentionImpl (no-op) by default, ... | https://github.com/vllm-project/vllm/issues/26817 | closed | [
"help wanted",
"good first issue",
"feature request"
] | 2025-10-14T15:59:54Z | 2025-10-16T15:02:31Z | 2 | ProExpertProg |
vllm-project/vllm | 26,806 | [Usage]: MCP-USE with VLLM gpt-oss:20b via ChatOpenAI | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I am trying to create an agent using gpt-oss:20B with mcp-use.
Most of the time the model returns "Agent completed the task successfully.", and only sometimes the proper output that is required.
### code
`vllm ... | https://github.com/vllm-project/vllm/issues/26806 | open | [
"usage"
] | 2025-10-14T13:00:38Z | 2025-11-20T06:33:29Z | 2 | Tahirc1 |
vllm-project/vllm | 26,786 | [Usage]: CUDA 12.8 Docker 0.11.0: error occurs when launching the model, NCCL error: unhandled cuda error | When I use only a single graphics card, the system can start up normally.
Below are Docker configuration files, logs, and environment information.
I encountered this issue when upgrading from version 10.1.1 to 10.2.
[The system generates an error when using dual graphics cards; version 10.1.1 functions correctly, but... | https://github.com/vllm-project/vllm/issues/26786 | closed | [
"usage"
] | 2025-10-14T09:01:39Z | 2025-11-07T17:17:32Z | 3 | ooodwbooo |
vllm-project/vllm | 26,774 | [Usage]: how to use vllm on CUDA 12.9 | ### Your current environment
```text
Traceback (most recent call last):
File "/vllm-workspace/collect_env.py", line 825, in <module>
main()
File "/vllm-workspace/collect_env.py", line 804, in main
output = get_pretty_env_info()
^^^^^^^^^^^^^^^^^^^^^
File "/vllm-workspace/collect_env.py", lin... | https://github.com/vllm-project/vllm/issues/26774 | open | [
"usage"
] | 2025-10-14T07:30:56Z | 2025-10-14T07:40:08Z | 1 | Mrpingdan |
vllm-project/vllm | 26,772 | [Feature]: Option kv_event default config | ### 🚀 The feature, motivation and pitch
The current kv_event config's publisher defaults to null, but the endpoint is a zmq endpoint, so when the publisher config is not set, vllm cannot start and raises an error: `EventPublisher.__init__() got an unexpected keyword argument 'endpoint'`.
Can we change this default publisher to zmq, so that when start enable_...
"feature request"
] | 2025-10-14T07:08:58Z | 2025-10-22T19:19:34Z | 5 | lengrongfu |
vllm-project/vllm | 26,762 | [Usage]: about curl http://ip:8000/metrics | ### Your current environment
When I run this command, I get the following results:
# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 12286.0
python_gc_objects_collected_total{generation="1"} 1244.0
py... | https://github.com/vllm-project/vllm/issues/26762 | open | [
"usage"
] | 2025-10-14T05:13:30Z | 2025-10-14T05:13:30Z | 0 | Renoshen |
huggingface/lerobot | 2,194 | During training with PI0, the loss is very low. Is this normal, and is the training proceeding correctly? | I am currently training with PI05.
<img width="1039" height="355" alt="Image" src="https://github.com/user-attachments/assets/5ab3f3e0-82bc-403c-8124-416b330dab14" />
`INFO 2025-10-14 04:57:11 ot_train.py:299 step:10 smpl:320 ep:0 epch:0.00 loss:0.468 grdn:3.522 lr:1.6e-07 updt_s:4.906 data_s:4.874 INFO 2025-10-14 04... | https://github.com/huggingface/lerobot/issues/2194 | closed | [
"question",
"policies"
] | 2025-10-14T05:04:31Z | 2025-10-14T08:19:29Z | null | pparkgyuhyeon |
huggingface/peft | 2,832 | Gradient checkpoint with multiple adapters | I'm not sure if it can be considered as a bug since I might be using the library differently from how it's supposed to be used.
**Context:**
I have a PeftModel that needs to run inference with 2 different inputs.
For each input I have a pretrained adapter that is frozen and a new adapter for finetuning.
My forward doe... | https://github.com/huggingface/peft/issues/2832 | closed | [] | 2025-10-14T03:53:10Z | 2025-12-15T08:24:03Z | 3 | NguyenRichard |
huggingface/lerobot | 2,192 | How to test PI0's output | I use this code to test pi0's output:
def main():
# Create a directory to store the training checkpoint.
output_directory = Path("outputs/example_aloha_static_coffee")
output_directory.mkdir(parents=True, exist_ok=True)
# # Select your device
device = torch.device("cuda")
# Number of offline ... | https://github.com/huggingface/lerobot/issues/2192 | open | [
"question",
"policies"
] | 2025-10-14T03:36:43Z | 2025-10-17T09:56:46Z | null | Addog666 |
vllm-project/vllm | 26,749 | [Bug]: InternVL: passing image embeddings triggers TypeError: can only concatenate tuple (not "Tensor") to tuple in get_multimodal_embeddings, and v1 sanity check then expects a sequence of 2D tensors | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
# Title
InternVL: passing image **embeddings** triggers `TypeError: can only concatenate tuple (not "Tensor") to tup... | https://github.com/vllm-project/vllm/issues/26749 | closed | [
"bug"
] | 2025-10-14T03:01:33Z | 2025-10-14T09:36:22Z | 1 | BlueBlueFF |
huggingface/transformers | 41,554 | model.from_pretrained( . . . ) not loading needed weights/parameters | I am performing quantization of a PatchTSTForPrediction model and attempting to load a saved quantized model for testing. Model is saved using `model.save_pretrained( . . . )`. Testing proceeds perfectly once performed immediately after QAT (Hugging face trainer's handles loading at the end of training); however, when ... | https://github.com/huggingface/transformers/issues/41554 | closed | [] | 2025-10-13T23:20:20Z | 2025-11-24T08:03:05Z | 5 | lorsonblair |
huggingface/lerobot | 2,186 | How to load pi0? | I use this code to load pi0:
```python
from lerobot.policies.pi0.modeling_pi0 import PI0Policy
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pretrained_policy_path = "lerobot/pi0_libero_base"
policy = PI0Policy.from_pretrained(pretrained_policy_path).to(device)
```
but throws a... | https://github.com/huggingface/lerobot/issues/2186 | closed | [
"question",
"policies",
"python"
] | 2025-10-13T12:24:32Z | 2025-10-17T09:53:02Z | null | Addog666 |
huggingface/accelerate | 3,812 | RuntimeError during load_state | ### System Info
This issue is related to [prior issue 3101](https://github.com/huggingface/accelerate/issues/3101), but it hasn’t been fully resolved yet. The current workaround is to avoid using `safetensors`.
@Narsil suggested using [`load_file/save_file`](https://github.com/huggingface/safetensors/issues/657#issue... | https://github.com/huggingface/accelerate/issues/3812 | closed | [] | 2025-10-13T11:25:17Z | 2025-11-21T15:07:49Z | 2 | Silverster98 |
huggingface/lerobot | 2,185 | Has the lerobot data format been modified after June this year? | Has the lerobot data format been modified after June this year? The original data can no longer be used. | https://github.com/huggingface/lerobot/issues/2185 | closed | [
"question",
"dataset"
] | 2025-10-13T10:07:41Z | 2025-10-14T08:05:04Z | null | Addog666 |
huggingface/transformers | 41,539 | All POETRY operations fail on latest version 4.57.0 | ### System Info
I import transformers (always latest) in my poetry project.
I use poetry 2.1.2
After this transformers release (4.57.0) I regenerated the poetry lock with command: `poetry lock`
Then when retrying to generate the lock again after other updates - it fails with message:
`Could not parse constrains ver... | https://github.com/huggingface/transformers/issues/41539 | closed | [
"bug"
] | 2025-10-13T08:40:49Z | 2025-10-13T14:18:02Z | 1 | bfuia |
vllm-project/vllm | 26,692 | [Usage]: How to release KVCache? | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version : Cou... | https://github.com/vllm-project/vllm/issues/26692 | open | [
"usage"
] | 2025-10-13T08:28:20Z | 2025-10-13T08:28:20Z | 0 | shenxf1205 |
huggingface/lerobot | 2,184 | How to let an episode realize it has finished the task? | I have successfully trained my real-world lerobot to do several simple tasks from human demonstrations. Say, push an object from point A to point B. I noticed that after the robot arm has finished the task, it would return to its initial pose (same as the human demonstration) and stay idle for the remainder of the epis... | https://github.com/huggingface/lerobot/issues/2184 | open | [] | 2025-10-13T06:27:36Z | 2025-12-22T07:56:00Z | null | genkv |
vllm-project/vllm | 26,660 | [Usage]: Is there any way to enable beam search in online inference? | ### Your current environment
Is there any way to enable beam search in the `vllm serve` command? Or is beam search only available in offline inference code?
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submi... | https://github.com/vllm-project/vllm/issues/26660 | closed | [
"usage"
] | 2025-10-12T13:55:07Z | 2025-10-17T17:12:45Z | 1 | tiesanguaixia |
huggingface/transformers | 41,533 | add_special_tokens and resize_token_embeddings result in an error | ### System Info
I want to add a few special tokens to my Qwen2.5VL model as separators, and after executing the following code, I received the following error message. I don't know how to solve this problem.
``` bash
[rank1]: Traceback (most recent call last):
[rank1]: RuntimeError: shape '[-1, 151936]' is invalid fo... | https://github.com/huggingface/transformers/issues/41533 | closed | [
"bug"
] | 2025-10-12T13:50:40Z | 2025-10-13T14:09:29Z | 3 | jialiangZ |
huggingface/lerobot | 2,181 | How to change SmolVLA action_chunk_size? | I want to change 'action_chunk_size' from 50 to 10. I ran the command like this:
'''
python lerobot/scripts/train.py --policy.path=lerobot/smolvla_base --dataset.repo_id=Datasets/grasp_put --batch_size=16 --steps=40000 --output_dir=outputs/train/vla_chunk10 --job_name=smolvla_training --policy.device=cu... | https://github.com/huggingface/lerobot/issues/2181 | closed | [
"question",
"policies",
"python"
] | 2025-10-12T13:29:35Z | 2025-10-17T11:25:55Z | null | CCCY-0304 |
huggingface/transformers | 41,532 | Where is examples/rag from the original paper? | ### System Info
https://arxiv.org/pdf/2005.11401 mentions https://github.com/huggingface/transformers/blob/main/examples/rag but it is not there. Add redirect if possible
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially... | https://github.com/huggingface/transformers/issues/41532 | closed | [
"bug"
] | 2025-10-12T13:17:53Z | 2025-10-17T09:34:15Z | null | IgorKasianenko |
vllm-project/vllm | 26,653 | [Usage]: Qwen3VL image coordinates issue | ### Your current environment
Hi, I found that with the same image and the same prompt, vLLM serving qwen3vl always returns wrong coordinates.
this is vllm return:
Response: "{\"click_type\": \"left_click\", \"coordinate\": [815, 961]}"
<img width="1093" height="549" alt="Image" src="https://github.com/user-attachments/assets/f55c... | https://github.com/vllm-project/vllm/issues/26653 | closed | [
"usage"
] | 2025-10-12T07:02:29Z | 2025-10-13T03:56:53Z | 2 | lucasjinreal |
huggingface/accelerate | 3,811 | ValueError: Could not find the transformer layer class QwenImageTransformerBlock in the model. | Hi, I am trying to fine-tune qwen-image-edit using accelerate in FSDP mode. I want to wrap the ``QwenImageTransformerBlock`` in the transformer and ``Qwen2_5_VLVisionBlock,Qwen2_5_VLDecoderLayer`` in the text_encoder. I set the environment param
```
def set_fsdp_env():
os.environ["ACCELERATE_USE_FSDP"] = 'true'
os.en... | https://github.com/huggingface/accelerate/issues/3811 | closed | [] | 2025-10-11T10:13:14Z | 2025-11-22T15:06:54Z | 2 | garychan22 |
huggingface/lerobot | 2,172 | Add support for remote GPUs (with async inference!) | Hello,
I'm a student in a non-first-world country, and unfortunately, I don't own a PC with an NVIDIA GPU - it costs about $1200 for a decent setup. On the other hand, it costs only $0.12-0.24/hr to rent RTX 4090 instances, so it's pretty cheap to simply rent a computer whenever I need to data collect/tra...
"enhancement",
"question"
] | 2025-10-11T08:49:32Z | 2025-12-19T06:35:21Z | null | MRiabov |
huggingface/transformers | 41,518 | Add Structured Prompt Templates Registry for LLM / VLM / Diffusion Tasks | ### Feature request
Introduce transformers.prompt_templates — a YAML-based registry and accessor API:
```
from transformers import PromptTemplates
PromptTemplates.get("summarization") # "Summarize the following text:"
PromptTemplates.list_tasks() # ["summarization","vqa","ocr",...]
```
- Templates... | https://github.com/huggingface/transformers/issues/41518 | open | [
"Feature request"
] | 2025-10-11T08:10:20Z | 2025-10-13T15:06:20Z | 2 | Aki-07 |
vllm-project/vllm | 26,616 | [Usage]: How to enable MTP when using Qwen3-Next in local infer ( not vllm serve) | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.2 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Cou... | https://github.com/vllm-project/vllm/issues/26616 | open | [
"usage"
] | 2025-10-11T03:58:14Z | 2025-10-16T08:45:35Z | 1 | Kimagure7 |
vllm-project/vllm | 26,614 | [Usage]: attn_metadata.seq_lens is not equal to attn_metadata.num_actual_tokens | ### Your current environment
```
Collecting environment information...
uv is set
==============================
System Info
==============================
OS : Ubuntu 20.04.6 LTS (x86_64)
GCC version : (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version ... | https://github.com/vllm-project/vllm/issues/26614 | open | [
"usage"
] | 2025-10-11T03:35:38Z | 2025-10-11T03:36:31Z | 0 | betacatZ |
vllm-project/vllm | 26,612 | [Usage]: qwen3vl 30 A3B error when starting the vllm service | ### 📚 The doc issue
A_A800-SXM4-80GB.json']
(Worker pid=1939690) INFO 10-11 10:42:13 [monitor.py:34] torch.compile takes 85.33 s in total
(Worker pid=1939690) INFO 10-11 10:42:14 [gpu_worker.py:298] Available KV cache memory: 13.69 GiB
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] EngineCore failed ... | https://github.com/vllm-project/vllm/issues/26612 | closed | [
"usage"
] | 2025-10-11T02:45:20Z | 2025-10-16T23:00:39Z | 1 | renkexuan369 |
huggingface/lerobot | 2,171 | Data diffusion and data format conversion | 1. Can datasets collected in Lerobot format be disseminated?
2. Can data formats between different Lerobot versions be converted? I noticed that the data format collected in version 0.2.0 is different from the latest data format.
Thank you! | https://github.com/huggingface/lerobot/issues/2171 | open | [
"question",
"dataset"
] | 2025-10-11T02:16:55Z | 2025-10-17T02:02:36Z | null | FALCONYU |
vllm-project/vllm | 26,607 | [Bug]: Since version 0.9.2 comes with nccl built-in, using PCIE causes sys errors. How to disable nccl in vllm for versions after 0.9.2? | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
<img width="833" height="138" alt="Image" src="https://github.com/user-attachments/assets/a42c415b-8c5b-4698-aa6f-879edc44d512" />
### 🐛 De... | https://github.com/vllm-project/vllm/issues/26607 | open | [
"bug"
] | 2025-10-11T01:48:50Z | 2025-10-17T01:09:03Z | 0 | tina0852 |
huggingface/hf-hub | 131 | InvalidCertificate and how to fix it | I am trying to install a DuckDB extension written in Rust (https://github.com/martin-conur/quackformers) that uses the library.
During the install, I am getting a
```
HfHub(RequestError(Transport(Transport { kind: ConnectionFailed, message: Some("tls connection init failed"), url: Some(Url { scheme: "https", cannot_be... | https://github.com/huggingface/hf-hub/issues/131 | open | [] | 2025-10-10T14:42:12Z | 2025-10-10T18:18:28Z | null | sahuguet |
vllm-project/vllm | 26,585 | [Usage]: use vllm embedding to extract last token hidden states? | ### Your current environment
```/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
import p... | https://github.com/vllm-project/vllm/issues/26585 | closed | [
"usage"
] | 2025-10-10T13:01:42Z | 2025-12-15T06:54:05Z | 2 | rxqy |
vllm-project/vllm | 26,582 | [Bug]: which triton-kernels version for MXFP4 Triton backend? | ### Your current environment
vllm v0.11.0 installed via `uv pip install vllm --torch-backend=auto`
triton + triton-kernels at different commits installed from source
### 🐛 Describe the bug
**Which triton + triton-kernels version does one have to install to run GPT-OSS with the MXFP4 Triton backend?**
No matter wh... | https://github.com/vllm-project/vllm/issues/26582 | closed | [
"bug"
] | 2025-10-10T11:51:59Z | 2025-12-12T20:30:06Z | 8 | matkle |
huggingface/lerobot | 2,162 | [Question] How to suppress verbose Svt[info] logs from video encoding during save_episode()? | Hi, thank you for this fantastic library!
I am currently using lerobot (Version: 0.3.3) to record and save robotics data. When I use the `dataset.save_episode() method`, I get a large number of verbose log messages prefixed with Svt[info]:
```shell
Svt[info]: ------------------------------------------- ... | https://github.com/huggingface/lerobot/issues/2162 | closed | [
"question",
"dataset"
] | 2025-10-10T08:56:52Z | 2025-10-13T05:43:01Z | null | zxytql |
huggingface/transformers | 41,494 | Incorrect tokenizer created for gemma gguf files | ### System Info
- `transformers` version: 4.57.0
- Platform: Linux-5.15.0-144-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.3
- Accelerate version: 0.34.2
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accel... | https://github.com/huggingface/transformers/issues/41494 | closed | [
"bug"
] | 2025-10-09T23:27:25Z | 2025-11-29T08:02:57Z | 4 | amychen85 |
vllm-project/vllm | 26,530 | [Bug]: Fix CVE-2023-48022 in docker image | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
Not required for this.
</details>
### 🐛 Describe the bug
The vllm/vllm-openai:v0.10.2 image seems to be affected by the [CVE-2023-48022](https://avd.aquasec.com/nvd/2023/cve-2023-48022/) **Critical** CVE wi... | https://github.com/vllm-project/vllm/issues/26530 | closed | [
"bug"
] | 2025-10-09T20:16:02Z | 2025-10-10T21:14:49Z | 3 | geodavic |
huggingface/lerobot | 2,156 | How to reproduce lerobot/pi0_libero_finetuned? | Thanks for the great work!
I evaluated lerobot/pi0_libero_finetuned on libero goal datasets.
When using n_action_steps=50, the success rate is ~ 75%
When using n_action_steps=10, the success rate is ~ 90%
I tried to reproduce the training results, so I mainly referred to [train_config.json](https://huggingface.co/lero...
"question",
"policies",
"simulation"
] | 2025-10-09T18:11:47Z | 2025-10-22T09:27:03Z | null | PuzhenYuan |
huggingface/lerobot | 2,153 | Why can’t I find something like train_expert_only in the latest version of pi0? Do the current versions of pi0 and pi0.5 only support full-parameter training? | Why can’t I find something like “train_expert_only” in the latest version of pi0?
Do the current versions of pi0 and pi0.5 only support full-parameter training? | https://github.com/huggingface/lerobot/issues/2153 | closed | [
"enhancement",
"question",
"policies",
"good first issue"
] | 2025-10-09T13:08:10Z | 2025-12-31T14:54:29Z | null | ZHHhang |
huggingface/datasets | 7,802 | [Docs] Missing documentation for `Dataset.from_dict` | Documentation link: https://huggingface.co/docs/datasets/en/package_reference/main_classes
Link to method (docstring present): https://github.com/huggingface/datasets/blob/6f2502c5a026caa89839713f6f7c8b958e5e83eb/src/datasets/arrow_dataset.py#L1029
The docstring is present for the function, but seems missing from the... | https://github.com/huggingface/datasets/issues/7802 | open | [] | 2025-10-09T02:54:41Z | 2025-10-19T16:09:33Z | 2 | aaronshenhao |
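For reference, the undocumented method itself is simple to use; a minimal sketch:
```python
from datasets import Dataset

# Build an in-memory dataset from a column-oriented dict:
# each key becomes a column, each list holds that column's values.
ds = Dataset.from_dict({
    "text": ["hello", "world"],
    "label": [0, 1],
})
print(ds.num_rows)   # 2
print(ds[0])         # {'text': 'hello', 'label': 0}
```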
huggingface/transformers | 41,431 | gradient scaling occurs even though total gradient remains < max_grad_norm in trainer.py | Even though gradients remain < max_grad_norm throughout training, the gradient still goes through a scaling process. For instance, I set max_grad_norm = 1, and grad_norm consistently remains <= 0.33. Because the trainer takes you through the grad clip process if max_grad_norm > 0 or not None, this operation always gets... | https://github.com/huggingface/transformers/issues/41431 | closed | [] | 2025-10-07T22:13:08Z | 2025-11-15T08:02:51Z | 7 | lorsonblair |
huggingface/candle | 3,120 | AutoModel / PreTrainedModel equivalent magic ? | Hello all, first, thanks a lot for this wonderful crate.
I was wondering if it's on the roadmap, or if there is a solution, to have the same magic as in Python with `AutoModel.from_pretrained("the_model_name_string")`.
As I'm prototyping and am often changing models... which requires changing the architecture everyti...
huggingface/lerobot | 2,134 | what is the transformers version for latest lerobot pi0? | ### System Info
```Shell
- lerobot version: 0.3.4
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31
- Python version: 3.10.18
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 1.26.4
- PyTorch version: 2.7.1+cu126
- Is PyTorch built with CUDA support?: True
- Cuda version: 12.6
- GPU... | https://github.com/huggingface/lerobot/issues/2134 | closed | [] | 2025-10-07T12:06:52Z | 2025-11-14T20:04:50Z | null | PuzhenYuan |
huggingface/diffusers | 12,441 | Support Wan2.2-Animate | [Wan2.2-Animate-14B](https://humanaigc.github.io/wan-animate), it's a unified model for character animation and replacement, with holistic movement and expression replication.
https://github.com/user-attachments/assets/351227d0-4edc-4f6c-9bf9-053e53f218e4
We would like to open this to the community; if anyone is interested, ... | https://github.com/huggingface/diffusers/issues/12441 | closed | [
"help wanted",
"contributions-welcome"
] | 2025-10-06T18:08:21Z | 2025-11-13T02:52:32Z | 0 | asomoza |
huggingface/lerobot | 2,124 | Question regarding downsampling and resizing dataset | Hi,
Thank you for providing this wonderful library! I was curious about how one can take an existing dataset (collected or downloaded) and modify the fps (downsample), resize images, or delete specific episodes (for v3) prior to policy training. I am finding this tricky to do, particularly when the dataset is not loaded...
"question",
"dataset",
"good first issue"
] | 2025-10-06T16:07:47Z | 2025-10-07T20:25:20Z | null | karthikm-0 |
huggingface/transformers | 41,363 | RT-Detr docs should reflect fixed 640x640 input size | The authors of RT-Detr mention that the model was trained on 640x640 images and was meant to be used for inference on 640x640 images. Also, the current implementation has certain quirks that make training/inferring on images of different sizes problematic. For example, the pixel masks used for batching images of varyin... | https://github.com/huggingface/transformers/issues/41363 | closed | [
"Documentation"
] | 2025-10-06T11:04:37Z | 2025-11-06T13:24:01Z | 4 | konstantinos-p |
huggingface/tokenizers | 1,873 | Why is my Python implementation faster than the Rust implementation? | I am comparing my pure-Python tokenizer and the Hugging Face (Rust-backed) implementation as follows
```python
import json
import time
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
[... Define and save the texts as data.json]
with open('./data.json', 'w', encoding='utf-8') a... | https://github.com/huggingface/tokenizers/issues/1873 | closed | [] | 2025-10-05T08:02:47Z | 2025-10-08T17:41:28Z | 4 | sambaPython24 |
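A common cause of such results is timing one short text per call, where per-call overhead dominates; a minimal sketch (the corpus is hypothetical) comparing a per-text loop against a single batched call, where the Rust backend can parallelize:
```python
import time
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-cased")  # Rust-backed fast tokenizer
texts = ["some example sentence to tokenize"] * 10_000   # hypothetical corpus

# Per-call loop: dominated by Python<->Rust call overhead per text.
start = time.perf_counter()
for t in texts:
    tok.encode(t)
loop_s = time.perf_counter() - start

# One batched call: the Rust side processes the whole batch at once.
start = time.perf_counter()
tok(texts)
batch_s = time.perf_counter() - start

print(f"loop: {loop_s:.2f}s  batch: {batch_s:.2f}s")
```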
huggingface/transformers | 41,336 | Is there a bug in group_videos_by_shape for qwenvl video preprocessing? | ### System Info
in src/transformers/video_utils.py,
group_videos_by_shape
grouped_videos = {shape: torch.stack(videos, dim=0) for shape, videos in grouped_videos.items()}, where each video is of shape BTCHW. This will create a new dimension.
However, in qwenvl video preprocess
batch_size, grid_t, channel = patches.... | https://github.com/huggingface/transformers/issues/41336 | closed | [
"bug"
] | 2025-10-03T22:26:26Z | 2025-10-03T22:44:43Z | 1 | dichencd |
huggingface/lerobot | 2,111 | frame deletion | Great work on this project! I have a quick question - does LeRobotDataset support frame deletion? For example, in the DROID_lerobot dataset, the first few frames have an action value of 0 and I need to remove them.
I'd appreciate any insights you can provide. Thank you for your time and help! | https://github.com/huggingface/lerobot/issues/2111 | closed | [
"question",
"dataset"
] | 2025-10-03T13:05:12Z | 2025-10-10T12:17:53Z | null | Yysrc |
huggingface/lerobot | 2,108 | HIL-SERL Transform order for (tanh → rescale) is reversed | In `TanhMultivariateNormalDiag`:
```
transforms = [TanhTransform(cache_size=1)]
if low is not None and high is not None:
transforms.insert(0, RescaleFromTanh(low, high)) # puts Rescale *before* tanh
```
This applies RescaleFromTanh then Tanh, which is backwards. Should we change it to tanh first, then rescale?
... | https://github.com/huggingface/lerobot/issues/2108 | open | [
"question",
"policies"
] | 2025-10-02T21:44:22Z | 2025-10-07T20:36:31Z | null | priest-yang |
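For reference, `torch.distributions.TransformedDistribution` applies its transform list left to right, so squash-then-rescale is expressed as tanh first; a minimal sketch using the built-in `AffineTransform` in place of lerobot's `RescaleFromTanh`:
```python
import torch
from torch.distributions import Normal, TransformedDistribution
from torch.distributions.transforms import AffineTransform, TanhTransform

low, high = torch.tensor([-2.0]), torch.tensor([2.0])

# Transforms apply in list order: tanh squashes to (-1, 1),
# then the affine map rescales (-1, 1) onto (low, high).
dist = TransformedDistribution(
    Normal(torch.zeros(1), torch.ones(1)),
    [
        TanhTransform(cache_size=1),
        AffineTransform(loc=(high + low) / 2, scale=(high - low) / 2),
    ],
)
sample = dist.rsample()
assert (low < sample).all() and (sample < high).all()
```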
huggingface/lerobot | 2,107 | Low Success Rate When Training SmolVLA-0.24B on LIBERO | Hi folks, I'm trying to replicate the 0.24B SmolVLA model on the LIBERO dataset. Intuitively, I just changed the base model `vlm_model_name: str = "HuggingFaceTB/SmolVLM2-256M-Video-Instruct"`. Here is the command I used to train.
`lerobot-train --policy.type=smolvla --policy.load_vlm_weights=true --dataset.repo_id=H... | https://github.com/huggingface/lerobot/issues/2107 | open | [
"question",
"policies",
"simulation"
] | 2025-10-02T19:11:55Z | 2025-12-20T09:30:58Z | null | zimgong |
huggingface/optimum-onnx | 66 | How to export a stateless whisper model via optimum-cli? | I observe that when exporting a Whisper model via Python API, the resulting model is stateless, i.e. the decoder is split into two models.
```python
import os
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
ORTModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny", export=True).save_pretrained("./whisper/... | https://github.com/huggingface/optimum-onnx/issues/66 | closed | [
"question"
] | 2025-10-02T09:50:03Z | 2025-10-13T05:33:25Z | null | nikita-savelyevv |
huggingface/lerobot | 2,104 | Select the VLM backbone for SmolVLA | Hi, may I ask about vlm_model_name: is there any model more powerful than HuggingFaceTB/SmolVLM2-500M-Video-Instruct that can be used to train SmolVLA for the LeRobot SO101? | https://github.com/huggingface/lerobot/issues/2104 | open | [
"question",
"policies",
"good first issue"
] | 2025-10-02T07:35:29Z | 2025-10-11T16:53:59Z | null | Llkhhb |
huggingface/diffusers | 12,415 | SVG 2 kernels | Can we support the new sparse kernels from SVG2 (NeurIPS 2025)?
https://svg-project.github.io/v2/ | https://github.com/huggingface/diffusers/issues/12415 | open | [] | 2025-10-01T10:52:50Z | 2025-10-01T10:52:50Z | 0 | bhack |
huggingface/lerobot | 2,096 | How can I change the task name of already recorded episodes? | I recorded the dataset using:
--dataset.single_task="slice the clay until it becomes 4 pieces"
Now I want to update those recorded episodes to a different task name. How can I do that? | https://github.com/huggingface/lerobot/issues/2096 | open | [
"question",
"dataset",
"good first issue"
] | 2025-10-01T02:15:49Z | 2025-10-30T03:48:47Z | null | pparkgyuhyeon |
huggingface/transformers | 41,235 | I want to request demo code for StatefulDataLoader: I want to use a data checkpoint to recover the training stage's data state, not only the model state. How can I use StatefulDataLoader or some code to achieve this? | I want to request demo code for StatefulDataLoader: I want to use a data checkpoint to recover the training stage's data state, not only the model state. How can I use StatefulDataLoader or some code to achieve this?
I want to recover the data state, not only the model state; I hope I have stated my request clearly.
How to use accelerate + transforme... | https://github.com/huggingface/transformers/issues/41235 | closed | [
"bug"
] | 2025-09-30T17:07:07Z | 2025-11-08T08:04:40Z | null | ldh127 |
huggingface/accelerate | 3,802 | I want to request demo code for StatefulDataLoader: I want to use a data checkpoint to recover the training stage's data state, not only the model state. How can I use StatefulDataLoader or some code to achieve this? | I want to request demo code for StatefulDataLoader: I want to use a data checkpoint to recover the training stage's data state, not only the model state. How can I use StatefulDataLoader or some code to achieve this?
I want to recover the data state, not only the model state; I hope I have stated my request clearly.
How to use accelerate + transfor... | https://github.com/huggingface/accelerate/issues/3802 | closed | [] | 2025-09-30T15:58:32Z | 2025-11-09T15:06:58Z | null | ldh127 |
huggingface/transformers | 41,211 | Add DEIMv2 | ### Model description
It would be nice to integrate DEIMv2, a new state-of-the-art model for real-time object detection based on DINOv3. The weights are released under Apache 2.0.
Related thread: https://github.com/Intellindust-AI-Lab/DEIMv2/issues/20
### Open source status
- [x] The model implementation is availab... | https://github.com/huggingface/transformers/issues/41211 | open | [
"New model"
] | 2025-09-30T09:43:07Z | 2025-10-04T18:44:06Z | 4 | NielsRogge |
huggingface/transformers | 41,208 | Integrate mamba SSM kernels from the hub | ### Feature request
Currently, mamba kernels are imported via the main source package ex, for [GraniteMoeHybrid](https://github.com/huggingface/transformers/blob/main/src/transformers/models/granitemoehybrid/modeling_granitemoehybrid.py#L44-L46)
Can we migrate this to use the kernels-hub (`kernels-community/mamba-ssm... | https://github.com/huggingface/transformers/issues/41208 | closed | [
"Feature request"
] | 2025-09-30T07:50:52Z | 2025-12-18T10:17:06Z | 15 | romitjain |
huggingface/tokenizers | 1,870 | How can I convert a trained tokenizer into `transformers` format | Hi guys,
I have trained a tokenizer which works pretty well and it is stored in a single `.json` file. Is there any method / API to convert it into a `transformers` tokenizer format?
If there's no such implementation I am happy to contribute. | https://github.com/huggingface/tokenizers/issues/1870 | closed | [] | 2025-09-30T06:09:52Z | 2025-09-30T13:53:53Z | 1 | dibbla |
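For reference, a trained `tokenizers` JSON file can be wrapped directly; a minimal sketch (the special-token names are assumptions that depend on how the tokenizer was trained):
```python
from transformers import PreTrainedTokenizerFast

# Wrap the raw tokenizers JSON in a transformers-compatible tokenizer.
tok = PreTrainedTokenizerFast(
    tokenizer_file="tokenizer.json",  # the trained single-file tokenizer
    unk_token="[UNK]",                # assumed special tokens
    pad_token="[PAD]",
)
tok.save_pretrained("./my_tokenizer")  # now loadable with AutoTokenizer
```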
huggingface/lighteval | 999 | How to print all pass@k scores when generating 16 samples? | Hi,
I want to print all results of pass@k metrics when generating 16 samples. (e.g., k=1, 2, 4, 8, 16)
```python
math_500_pass_k_at_16 = LightevalTaskConfig(
name="math_500_pass_k_at_16",
suite=["custom"],
prompt_function=math_500_prompt_fn,
hf_repo="HuggingFaceH4/MATH-500",
hf_subset="default",
... | https://github.com/huggingface/lighteval/issues/999 | open | [] | 2025-09-29T21:49:44Z | 2025-10-14T08:04:17Z | null | passing2961 |
huggingface/lerobot | 2,083 | How to train this RL model with my trained data | I want this model to load the trained model that I have already generated. So, I modified the output_dir and set resume to true, but then the problem shown in the figure occurred. How can I solve it?
`{ "output_dir": "outputs/train/2025-09-28/17-28-55_default",
"job_name": "default", "resume": true,
"seed": 1000, "nu... | https://github.com/huggingface/lerobot/issues/2083 | open | [] | 2025-09-29T07:22:08Z | 2025-10-07T20:32:04Z | null | 993984583 |
huggingface/lerobot | 2,082 | How to train this RL model with my model data | I want this model to load the trained model that I have already generated. So, I modified the output_dir and set resume to true, but then the problem shown in the figure occurred. How can I solve it?
`{
"output_dir": "outputs/train/2025-09-28/17-28-55_default",
"job_name": "default",
"resume": true,
"se... | https://github.com/huggingface/lerobot/issues/2082 | closed | [] | 2025-09-29T07:18:52Z | 2025-10-07T20:33:11Z | null | 993984583 |
huggingface/sentence-transformers | 3,532 | What is the proper way to use prompts? Do we have to format/render them ourselves? | Hi. First time using the Sentence Transformers library and I had a question regarding using prompts. Specifically, it seems like the [`SentenceTransformer.encode_document`](https://sbert.net/docs/package_reference/sentence_transformer/SentenceTransformer.html#sentence_transformers.SentenceTransformer.encode_document) m... | https://github.com/huggingface/sentence-transformers/issues/3532 | closed | [] | 2025-09-28T06:32:51Z | 2025-09-30T10:59:24Z | null | seanswyi |
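For context, prompts registered on the model are prepended automatically at encode time, so no manual formatting is needed; a minimal sketch (the model name and prompt strings are illustrative):
```python
from sentence_transformers import SentenceTransformer

# Prompts registered here are prepended to the inputs automatically.
model = SentenceTransformer(
    "all-MiniLM-L6-v2",
    prompts={"query": "query: ", "document": "document: "},
)
q_emb = model.encode(["what is a transformer?"], prompt_name="query")
d_emb = model.encode(["A transformer is a neural network..."], prompt_name="document")
```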
huggingface/transformers | 41,186 | Qwen2.5-VL restore tensor multi-image form |
Hello, I have recently been experimenting with qwen2.5-vl (https://github.com/huggingface/transformers/blob/v4.52-release/src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py). I noticed that multiple images are pre-merged here,
```
image_embeds = self.get_image_features(pixel_values, image_grid_thw)
```
but I w... | https://github.com/huggingface/transformers/issues/41186 | closed | [] | 2025-09-28T03:36:24Z | 2025-11-05T08:02:55Z | 2 | NiFangBaAGe |
huggingface/peft | 2,802 | Guide on training that requires both LoRA and base model forward calls ? | Hi, I'm working on some training variants that require hidden states from the base model and the hidden states produced with LoRA. I'm currently initializing two separate model objects:
```
from peft import get_peft_model
m1=AutoModelForCausalLM.from_pretrained(model_path)
m2=AutoModelForCausalL... | https://github.com/huggingface/peft/issues/2802 | closed | [] | 2025-09-27T23:12:23Z | 2025-10-15T10:26:15Z | 3 | thangld201 |
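For reference, a single PeftModel can usually produce both sets of hidden states by toggling the adapter, avoiding two model copies; a minimal sketch (the base model and LoRA config are assumptions):
```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")            # assumed base model
model = get_peft_model(base, LoraConfig(task_type="CAUSAL_LM"))

inputs = {"input_ids": torch.tensor([[1, 2, 3]])}
lora_out = model(**inputs, output_hidden_states=True)          # LoRA-adapted states
with model.disable_adapter():                                  # base weights only
    base_out = model(**inputs, output_hidden_states=True)
```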
huggingface/lerobot | 2,072 | How to run lerobot with RTX 5090? If not possible, please add support | ### System Info
```Shell
- lerobot version: 0.3.4
- Platform: Linux-6.14.0-32-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface Hub version: 0.35.1
- Datasets version: 4.1.1
- Numpy version: 2.2.6
- PyTorch version: 2.8.0+cu128
- Is PyTorch built with CUDA support?: True
- Cuda version: 12.8
- GPU m... | https://github.com/huggingface/lerobot/issues/2072 | closed | [] | 2025-09-27T19:52:42Z | 2025-11-08T07:53:00Z | null | cijerezg |
huggingface/text-generation-inference | 3,333 | How to use prefix caching | Hi
I can't find a way to turn on the prefix caching
When I run any model, I always get:
Using prefix caching = False
Thanks a lot | https://github.com/huggingface/text-generation-inference/issues/3333 | open | [] | 2025-09-27T14:14:37Z | 2025-09-29T11:52:48Z | null | Noha-Magdy |
huggingface/smol-course | 259 | [QUESTION] Is this a bug in smollmv3's chat template? |
Hi
I am reading this
https://huggingface.co/learn/smol-course/unit1/2#chat-templates-with-tools
I feel like there is a bug in `HuggingFaceTB/SmolLM3-3B`'s chat template
from the example
```
# Conversation with tool usage
messages = [
{"role": "system", "content": "You are a helpful assistant with access to ... | https://github.com/huggingface/smol-course/issues/259 | closed | [
"question"
] | 2025-09-27T10:19:37Z | 2025-11-24T18:40:09Z | null | Nevermetyou65 |
huggingface/accelerate | 3,797 | Question: ReduceLROnPlateau wrapped by AcceleratedScheduler in DDP may multiply LR by num_processes? | Hi,
I’m using ReduceLROnPlateau wrapped by AcceleratedScheduler in a multi-GPU / DDP setup (num_processes=8).
My main process calls:
```
lr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
optimizer, mode="min", factor=self.hyper_params['lr_decay_factor'], patience=self.hyper_params['lr_reduce_patient']
)
... | https://github.com/huggingface/accelerate/issues/3797 | closed | [] | 2025-09-26T10:02:20Z | 2025-11-03T15:08:09Z | 1 | nicelulu |
huggingface/lerobot | 2,050 | I wonder how to use RL on so101 within sim environment? | | https://github.com/huggingface/lerobot/issues/2050 | closed | [
"question",
"simulation",
"good first issue"
] | 2025-09-26T06:52:38Z | 2025-10-08T18:04:44Z | null | Temmp1e |
huggingface/lerobot | 2,045 | I would appreciate it if you could explain how to train the slicing clay model | I am planning to conduct a clay-cutting task using pi0. Since this type of task is not typically included among pi0’s foundation model tasks, I would like to inquire how many episodes (and the approximate duration of each) would generally be required for such a custom task.
The task I have in mind involves cutting cla... | https://github.com/huggingface/lerobot/issues/2045 | open | [] | 2025-09-26T00:51:59Z | 2025-09-26T00:51:59Z | null | pparkgyuhyeon |
huggingface/lerobot | 2,042 | Question: How to train to get Task Recovery behavior? | We would need the robot to be able to detect a failure (like dropping an object) and attempt to correct it to continue with the task.
How would the training data would look like for this?
Thanks | https://github.com/huggingface/lerobot/issues/2042 | open | [] | 2025-09-25T15:52:55Z | 2025-09-25T15:52:55Z | null | raul-machine-learning |
huggingface/accelerate | 3,794 | Error when evaluating with multi-gpu | I met a problem when evaluating Llada-8B with multi-gpu ( **Nvidia V100** ) using accelerate+lm_eval. Error occurs when **num_processes>1**.
But there is no problem with a single GPU; all the other configs are the same.
How can I solve this problem?
I use this command to evaluate
accelerate launch --config_file config... | https://github.com/huggingface/accelerate/issues/3794 | closed | [] | 2025-09-25T14:42:29Z | 2025-11-03T15:08:12Z | 1 | adfad1 |
huggingface/text-embeddings-inference | 728 | Compile error in multiple environments for CPU backend | ### System Info
TEI source code:
- Latest main branch(0c1009bfc49b759fe75eed4fd377b4fbad534ad5);
- Latest release `v1.8.2`;
- Release `v1.8.1`
Tested platform:
- Win: AMD 7950X+Windows 10 x64 Version 10.0.19045.6332;
- WSL2: AMD 7950X+Debian 13 on wsl2 (Linux DESKTOP 5.15.167.4-microsoft-standard-WSL2 # 1 SMP ... | https://github.com/huggingface/text-embeddings-inference/issues/728 | open | [
"documentation",
"question"
] | 2025-09-25T11:52:16Z | 2025-11-18T14:49:01Z | null | nkh0472 |
huggingface/transformers | 41,141 | Need a concise example of Tensor Parallelism (TP) training using Trainer/SFTTrainer. | ### Feature request
I have checked the code and there are a few places that talk about TP. I saw that the model's from_pretrained method accepts tp_plan and device_mesh. I also checked that TrainingArguments can take parallelism_config, which defines the TP/CP plan along with FSDP. However, I am not able to successfully st...
"Documentation",
"Feature request",
"Tensor Parallel"
] | 2025-09-25T03:01:02Z | 2026-01-04T14:05:36Z | 10 | meet-minimalist |
huggingface/lerobot | 2,034 | dataset v2.1 and groot n1.5 | For now, groot does not support dataset v3.0 for fine-tuning? In this case, should we continue to use v2.1? And if we already collected data with v3, how can we convert it back to v2.1? | https://github.com/huggingface/lerobot/issues/2034 | open | [
"question",
"policies",
"dataset"
] | 2025-09-24T21:12:26Z | 2025-12-24T00:05:45Z | null | zujian-y |
huggingface/tokenizers | 1,868 | How to set the cache_dir in the Rust implementation? | Hey, thank you for your great work with these tokenizers.
When I use the tokenizers through the Python API via transformers, I can set a specific cache_dir like this
```
from transformers import AutoTokenizer
self.tokenizer = AutoTokenizer.from_pretrained(self.tokenizer_name,cache_dir = self.cache_dir)
```
How can ... | https://github.com/huggingface/tokenizers/issues/1868 | open | [] | 2025-09-24T18:50:38Z | 2025-10-06T04:25:46Z | null | sambaPython24 |
huggingface/diffusers | 12,386 | Implement missing features on ModularPipeline | As I'm looking to take advantage of the new `ModularPipeline`, the ask is to implement some currently missing features.
My use case is to convert an existing loaded model using the standard pipeline into a modular pipeline; that functionality was provided via #11915 and is now working.
The first minor obstacle is that modular pipeline does...
"roadmap"
] | 2025-09-24T15:49:23Z | 2025-09-29T05:46:29Z | 0 | vladmandic |