Dataset columns (from the dataset viewer summary):

repo        string (147 distinct values)
number      int64 (range 1 to 172k)
title       string (length 2 to 476)
body        string (length 0 to 5k)
url         string (length 39 to 70)
state       string (2 distinct values)
labels      list (length 0 to 9)
created_at  timestamp[ns, tz=UTC] (2017-01-18 18:50:08 to 2026-01-06 07:33:18)
updated_at  timestamp[ns, tz=UTC] (2017-01-18 19:20:07 to 2026-01-06 08:03:39)
comments    int64 (range 0 to 58)
user        string (length 2 to 28)
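The schema above can be sketched as a small validation helper. This is a minimal illustration, assuming each record is a flat dict keyed by these column names; the `validate_row` function and the `example` row are hypothetical, not part of any dataset tooling:

```python
from datetime import datetime, timezone

# Expected column types, taken from the schema summary above.
SCHEMA = {
    "repo": str, "number": int, "title": str, "body": str, "url": str,
    "state": str, "labels": list, "created_at": datetime,
    "updated_at": datetime, "comments": int, "user": str,
}

def validate_row(row):
    """Return True if a record has every column with the expected type.
    `comments` may be None, since several rows below show it as null."""
    for col, typ in SCHEMA.items():
        if col not in row:
            return False
        if row[col] is None and col == "comments":
            continue  # null comment counts appear in the data
        if not isinstance(row[col], typ):
            return False
    return row["state"] in {"open", "closed"}  # the 2 observed state values

# Hypothetical example mirroring the first record below.
example = {
    "repo": "vllm-project/vllm",
    "number": 28564,
    "title": "[Usage]: Can't get ModernBert models to run in vllm serve",
    "body": "...",
    "url": "https://github.com/vllm-project/vllm/issues/28564",
    "state": "open",
    "labels": ["usage"],
    "created_at": datetime(2025, 11, 12, 15, 51, 18, tzinfo=timezone.utc),
    "updated_at": datetime(2025, 11, 12, 15, 51, 18, tzinfo=timezone.utc),
    "comments": 0,
    "user": "Logikschleifen",
}
```

A row with an unknown `state` or a missing column would fail this check.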
vllm-project/vllm
28,564
[Usage]: Can't get ModernBert models to run in vllm serve
### Your current environment I am trying to download and use ModernBertModel with the vllm serve feature. At first I thought it was an issue with the model so I switched from trying to use BertEmbed with Alibaba-NLP/gte-modernbert-base since it appears in the docs as a model that supports embedding. Source: https://...
https://github.com/vllm-project/vllm/issues/28564
open
[ "usage" ]
2025-11-12T15:51:18Z
2025-11-12T15:51:18Z
0
Logikschleifen
vllm-project/vllm
28,527
💡 Bounty Platform for vLLM
Hi vLLM team! 👋 I wanted to share **Roxonn** - a decentralized bounty platform for accelerating AI/ML development. **What is Roxonn?** ✅ Fund GitHub issues with crypto bounties (XDC, USDC, ROXN) ✅ Notify 300+ AI/ML developers ✅ Auto-pay when PRs merge via blockchain ✅ Zero crypto setup needed **Quick flow:** 1. Reg...
https://github.com/vllm-project/vllm/issues/28527
closed
[]
2025-11-12T07:50:33Z
2025-11-13T12:36:15Z
0
dineshroxonn
huggingface/transformers
42,154
💡 Bounty Platform for Hugging Face Transformers
Hi Hugging Face Transformers team! 👋 I wanted to share **Roxonn** - a decentralized bounty platform for accelerating AI/ML development. **What is Roxonn?** ✅ Fund GitHub issues with crypto bounties (XDC, USDC, ROXN) ✅ Notify 300+ AI/ML developers ✅ Auto-pay when PRs merge via blockchain ✅ Zero crypto setup needed *...
https://github.com/huggingface/transformers/issues/42154
closed
[]
2025-11-12T07:49:59Z
2025-11-17T11:40:10Z
2
dineshroxonn
vllm-project/vllm
28,508
[Usage]: KVCacheManager Parameter question
I noticed that the parameter “self.req_to_block_hashes” has been removed from KVCacheManager since version v0.10.0. But this parameter is still preserved in the official documentation. Could you please provide an explanation of this change? - [Document Description](https://docs.vllm.ai/en/v0.9.2/api/vllm/v1/core/kv...
https://github.com/vllm-project/vllm/issues/28508
closed
[ "usage" ]
2025-11-12T03:10:18Z
2025-11-16T08:33:45Z
1
Liziqi-77
huggingface/diffusers
12,638
How to design network with DiT blocks that are friendly to Tensorrt fp16 conversion?
We have a network structured as `a convnet pre-encoder -> DiT blocks -> final block for last sampling`. It worked well in torch format and onnx format, but when we tried to convert it to tensorrt fp16 format, inference hit value overflow. We had seen the data difference [between onnx and trt fp16, wit...
https://github.com/huggingface/diffusers/issues/12638
open
[]
2025-11-12T02:23:37Z
2025-11-12T02:23:37Z
null
JohnHerry
huggingface/lerobot
2,428
how to eval the real world recorded dataset?
Can lerobot eval a real-world recorded dataset with a metric such as MSE? I checked the eval script and found that it can currently only eval sim-env datasets.
https://github.com/huggingface/lerobot/issues/2428
open
[ "question", "evaluation" ]
2025-11-12T02:08:44Z
2025-11-19T16:55:42Z
null
shs822
vllm-project/vllm
28,505
[Feature]: Is there a plan to introduce the new feature nano-pearl, a new engineering effort in speculative reasoning.
### 🚀 The feature, motivation and pitch Nano-pearl can support speculative inference with higher concurrency (larger batch sizes) and is seamlessly compatible with algorithms like Eagle. Is there a plan to introduce it? github:https://github.com/smart-lty/nano-PEARL ### Alternatives _No response_ ### Additional c...
https://github.com/vllm-project/vllm/issues/28505
open
[ "feature request" ]
2025-11-12T01:34:22Z
2025-11-17T06:14:09Z
1
Lexlum
vllm-project/vllm
28,498
[Bug][RL]: Port Conflict
### Your current environment - bug report: ``` Hello vLLM team, I'm running into a suspicious ZMQ socket bug with my 2P 4D configuration for DeepSeek-V3 (see below). I thought it is caused by reusing same nodes for many vLLM launches, but now it happened also at a clean node. Seems like a DP bug of sorts. Please find...
https://github.com/vllm-project/vllm/issues/28498
open
[ "bug", "help wanted", "good first issue" ]
2025-11-11T22:51:35Z
2025-12-04T07:35:31Z
13
robertgshaw2-redhat
vllm-project/vllm
28,489
[Usage]: Online continuous batching
### Current environment ``` ============================== System Info ============================== OS : macOS 26.1 (arm64) GCC version : Could not collect Clang version : 17.0.0 (clang-1700.4.4.1) CMake version : Could not collect Libc...
https://github.com/vllm-project/vllm/issues/28489
open
[ "usage" ]
2025-11-11T20:51:58Z
2025-11-11T20:53:47Z
0
GenVr
huggingface/trl
4,507
Can a multimodal model like Gemma be trained in the same way as a text-only model like Qwen, but with the goal of improving only its text capabilities?
As stated in the title, I hope to improve only the text capabilities of Gemma 3, but it doesn’t seem to have worked as expected. The model I used is gemma-3-4b-it, and I conducted the following simple tests: ```python dataset = Dataset.from_list( [ {"prompt": "What is 2+2?", "task": "math"}, ...
https://github.com/huggingface/trl/issues/4507
open
[ "🐛 bug", "⏳ needs more info" ]
2025-11-11T15:59:51Z
2025-11-21T05:58:50Z
0
Tuziking
vllm-project/vllm
28,472
[Usage]: Will the reasoning_content in the chat template still be applied correctly after switching reasoning_content to reasoning
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm Will the message.reasoning_content for (which exists in default chat_template for qwen3-next-thinking qwen3-vl-thinking or other qwen3-thinking series or glm4.5 or kimi-k2-thinking or other models) in t...
https://github.com/vllm-project/vllm/issues/28472
closed
[ "usage" ]
2025-11-11T15:04:11Z
2025-11-13T06:25:29Z
4
zhcn000000
vllm-project/vllm
28,456
[Usage]: benchmark_moe Usage
### Your current environment ```text (EngineCore_DP0 pid=7498) INFO 11-10 11:42:48 [shm_broadcast.py:466] No available shared memory broadcast block found in 60 seconds. This typically happens when some processes are hanging or doing some time-consuming work (e.g. compilation). (APIServer pid=7416) INFO 11-10 11:42:50...
https://github.com/vllm-project/vllm/issues/28456
open
[ "usage" ]
2025-11-11T09:22:33Z
2025-11-21T01:43:41Z
6
ekmekovski
huggingface/lerobot
2,422
Running inference on Libero with pi0
Hello, I am trying to run inference with pi0 but the commands referenced in this issue #683 are outdated I believe. What would the commands be to run inference in Lerobot, and also running inference with pi0 in Libero? Additionally, if there is any documentation for these commands in general for fine-tuning and eval, ...
https://github.com/huggingface/lerobot/issues/2422
open
[ "question", "policies", "evaluation" ]
2025-11-11T09:22:25Z
2025-11-19T16:53:27Z
null
thomasdeng2027
huggingface/lerobot
2,421
Seeking assistance with tactile data acquisition
I want to simultaneously collect tactile and visual data, with tactile data sampled at 150 fps and visual data at 30 fps. Each time an image frame is saved, I also want to store all tactile data collected during that time interval as additional features associated with the image. What would be the best approach to imp...
https://github.com/huggingface/lerobot/issues/2421
open
[ "question" ]
2025-11-11T02:49:57Z
2025-11-19T16:53:05Z
null
zhoushaoxiang
vllm-project/vllm
28,438
[Usage]: How do I install vLLM nightly?
### Your current environment The output of collect_env.py ```text ============================== System Info ============================== OS : Ubuntu 20.04.5 LTS (x86_64) GCC version : (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 Clang version : Could not co...
https://github.com/vllm-project/vllm/issues/28438
closed
[ "usage" ]
2025-11-11T02:24:47Z
2025-11-12T01:54:42Z
2
LittleLucifer1
vllm-project/vllm
28,425
[Feature][RL]: Fix Fp8 Weight Loading for RL
### 🚀 The feature, motivation and pitch Feedback from RL community that vLLM weight loading in fp8 is bad for RL - https://vllm-dev.slack.com/archives/C07UUL8E61Z/p1762811441757529 The cause is clear: in [fp8.py](https://github.com/vllm-project/vllm/blob/bf6a3d0ff5a69e0a30567f2ad417530c002eaa4e/vllm/model_executor/l...
https://github.com/vllm-project/vllm/issues/28425
open
[ "feature request" ]
2025-11-10T21:59:02Z
2025-11-10T23:25:37Z
1
robertgshaw2-redhat
huggingface/transformers.js
1,450
SmolVLM2 500M Video Instruct - Video inference
### Question Hey, is it possible to setup **video** inference through **transformers.js** (may be somehow else?) for the model SmolVLM2 500M Video Instruct? I can't make it work, but I saw, that it is possible in py transformers. I want to create something similar to https://huggingface.co/spaces/HuggingFaceTB/SmolVL...
https://github.com/huggingface/transformers.js/issues/1450
open
[ "question" ]
2025-11-10T19:51:07Z
2025-11-12T07:46:32Z
null
youchi1
vllm-project/vllm
28,409
[Usage]: There is any performance benchmark between running vLLM server via docker image and python?
### Your current environment ```text I mean, if I run a service with the vLLM docker image, it has any performance upgrade if comparing with running it as a python service (e.g., importing vllm package, setting up vllm inference, handling payload/responses, etc)? ``` ### How would you like to use vllm _No respons...
https://github.com/vllm-project/vllm/issues/28409
open
[ "usage" ]
2025-11-10T17:56:14Z
2025-11-10T17:56:14Z
0
rafaelsandroni
vllm-project/vllm
28,393
[Feature]: Does vllm-jax plan to support GPU acceleration?
### 🚀 The feature, motivation and pitch Does vllm-jax plan to support GPU acceleration? ### Alternatives _No response_ ### Additional context _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of th...
https://github.com/vllm-project/vllm/issues/28393
closed
[ "feature request" ]
2025-11-10T12:28:20Z
2025-11-10T21:44:57Z
2
south-ocean
vllm-project/vllm
28,388
[Bug]: The new vLLM has deprecated the v0 code, but support for the qwen-omni series of models is limited to v0; apparently for this reason, we cannot run inference on qwen-omni models with the latest vLLM
### Your current environment Name: vllm Version: 0.10.2 ### 🐛 Describe the bug The official sample code below does not seem to run; it raises an error on the audio parameter "mm_processor_kwargs": { "use_audio_in_video": True, }: ```python # SPDX-License-Identifier: Apache-2.0 # SPDX-FileCopyrightText: Copyright contributors to the vLLM project ...
https://github.com/vllm-project/vllm/issues/28388
open
[ "bug" ]
2025-11-10T09:23:33Z
2025-11-16T05:51:42Z
1
Lee-xeo
huggingface/accelerate
3,836
When using gradient accumulation, does the order of optimizer.zero_grad() affect training?
if I use accelerate+deepspeed to train a model, and I set `deepspeed_config: gradient_accumulation_steps: 8 offload_optimizer_device: cpu offload_param_device: cpu zero3_init_flag: false zero_stage: 2` does the order of the order of backward(), step(), zero_grad() affect training? For example: `for batch in...
https://github.com/huggingface/accelerate/issues/3836
closed
[]
2025-11-10T03:11:21Z
2025-12-20T15:24:00Z
3
polestarss
huggingface/transformers
42,113
Add AutoMergeAdapters: Official Utility to Combine Multiple LoRA Adapters into One Unified Model
### Feature request Introduce a new built-in class AutoMergeAdapters to the Transformers/PEFT ecosystem that enables users to merge multiple LoRA adapters trained on different domains or datasets into a single model. This feature simplifies the process of creating multi-domain fine-tuned models for inference and depl...
https://github.com/huggingface/transformers/issues/42113
closed
[ "Feature request" ]
2025-11-09T18:43:20Z
2025-11-10T16:58:34Z
1
3015pavan
huggingface/transformers
42,111
Add thinking-budget support (max_thinking_tokens) for reasoning-capable chat models
### Feature request A built-in way to cap how many tokens a reasoning model spends inside its ``<think> … </think>`` block. Today, we can only control the total response length via ``max_new_tokens``. No parameter limits the internal reasoning segment when ``enable_thinking=True``. ### Motivation - Reasoning models ...
https://github.com/huggingface/transformers/issues/42111
open
[ "Feature request" ]
2025-11-09T10:09:11Z
2025-11-09T10:09:11Z
0
AndresAlgaba
vllm-project/vllm
28,362
[Usage]: Can't get vLLM to run on an Intel 125H with XPU and Arc graphics
### Your current environment ```text Collecting environment information... ...
https://github.com/vllm-project/vllm/issues/28362
open
[ "usage", "intel-gpu" ]
2025-11-09T09:45:05Z
2025-11-12T00:19:39Z
2
phlibi
vllm-project/vllm
28,350
[Doc]: Running VLLM via Docker Swarm With Support for Tensor Parallelism
### 📚 Running VLLM via Docker Swarm With Support for Tensor Parallelism There's no documentation that I have found outlining how to run VLLM in a docker swarm when utilizing tensor parallelism. The issue is that ```ipc=host``` is not an available option within docker swarm. Consulting the AI feature on the VLLM we...
https://github.com/vllm-project/vllm/issues/28350
closed
[ "documentation" ]
2025-11-08T21:11:15Z
2025-11-19T16:37:31Z
2
ep5000
vllm-project/vllm
28,348
[Usage]: Does vllm support max_pixels in prompt on Qwen3-VL reasoning?
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm I want to run inference of Qwen3-VL-A3B-Instruct, I tried to set max_pixels but it doesn't work. import json import base64 import requests img_path = r".\images\MMMU\735_1.jpg" base64_str = base64.b64e...
https://github.com/vllm-project/vllm/issues/28348
open
[ "usage" ]
2025-11-08T16:06:07Z
2025-11-08T16:56:17Z
1
leijie-ww
vllm-project/vllm
28,344
[Usage]: Function calling Request's sampling_params.structured_outputs is None?
Hi, I used openai server API to build a LLM backend when I tried to deploy a MCP server. I discovered that the prompt of vllm engine combined system prompt, tool lists and user prompt. but i saw sampling_params.structured_outputs is None. Although the result seemed correct, I think it's important to use structured ou...
https://github.com/vllm-project/vllm/issues/28344
closed
[ "usage" ]
2025-11-08T08:57:17Z
2025-11-10T07:51:51Z
5
wtr0504
vllm-project/vllm
28,340
[Installation]: Need offline wheel for vLLM 0.11.0rc2 (pip download fails) to deploy qwen3_vl_235b_a22b_instruct_i18n
### Your current environment I need to install vLLM 0.11.0rc2 in an offline environment. Is there an official wheel (.whl) available for vLLM==0.11.0rc2 that I can download directly? Running: ``` pip download vllm==0.11.0rc2 --pre --extra-index-url https://wheels.vllm.ai/nightly -d wheels ``` fails with an error: L...
https://github.com/vllm-project/vllm/issues/28340
closed
[ "installation" ]
2025-11-08T06:05:31Z
2025-11-08T06:08:37Z
0
FateForever0222
vllm-project/vllm
28,310
[Doc]: Update GPU requirements to include AMD gfx1150/gfx1151
### 📚 The doc issue Summary: The documentation for GPU requirements does not list AMD gfx1150 and gfx1151 architectures, which are now supported. Background: Support for AMD gfx1150 and gfx1151 GPUs was added in https://github.com/vllm-project/vllm/pull/25908. The GPU requirements page should be updated to reflect t...
https://github.com/vllm-project/vllm/issues/28310
closed
[ "documentation", "rocm" ]
2025-11-07T17:26:47Z
2025-11-08T03:01:08Z
1
hammmmy
huggingface/transformers
42,093
Mbart decoder ignoring index 0 from labels | index 1 from dec in
### System Info I am creating a ocr model using VisionEncoderDecoderModel class by connecting plm vision tower and donut base decoder (mbart model). I am using teacher forcing method to train the model ( default training and i found out that the model is ignoring index 0 from the target ( index 1 from the decoder_i...
https://github.com/huggingface/transformers/issues/42093
closed
[ "bug" ]
2025-11-07T15:46:08Z
2025-11-07T16:27:10Z
1
jaaabir
vllm-project/vllm
28,292
[Usage]: Failure to Deploy Llama-3.2-11B-Vision-Instruct Locally via vllm Due to OOM
### Your current environment The output of <code>python collect_env.py</code> ```text ============================== System Info ============================== OS : Ubuntu 20.04.5 LTS (x86_64) GCC version : (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 Clang version ...
https://github.com/vllm-project/vllm/issues/28292
closed
[ "usage" ]
2025-11-07T12:01:04Z
2026-01-06T00:06:43Z
5
LittleLucifer1
huggingface/transformers
42,086
Does Trainer uses grad scaler for training?
I am not able to see the grad scaler usage in Trainer code. If not using it then I need to understand how are we using mixed precision training with fp16 precision without grad scaler.
https://github.com/huggingface/transformers/issues/42086
closed
[]
2025-11-07T10:10:16Z
2025-11-13T07:58:33Z
2
quic-meetkuma
vllm-project/vllm
28,283
[Bug]: nccl stuck issue
### Your current environment <details> <summary>The output of <code>python collect_env.py</code></summary> ```text Your output of `python collect_env.py` here ``` </details> ### 🐛 Describe the bug I am using a docker container for vLLM. I noticed that when I use `nvidia/cuda:13.0.X-cudnn-devel-ubuntu24.04` with ...
https://github.com/vllm-project/vllm/issues/28283
open
[ "bug" ]
2025-11-07T09:36:01Z
2025-11-07T09:40:17Z
1
seindum
vllm-project/vllm
28,262
[Bug]: [gpt-oss] Responses API incorrect input/output handling
### Your current environment Any env ### 🐛 Describe the bug There is currently an implementation issue with gpt-oss on the Responses API in vLLM. This can be seen clearly in the [test which continues a conversation between API requests here](https://github.com/vllm-project/vllm/blob/4bf56c79cc252d285d0cb4f5edf323f0...
https://github.com/vllm-project/vllm/issues/28262
open
[ "bug" ]
2025-11-07T02:51:56Z
2025-11-08T19:39:06Z
1
alecsolder
huggingface/lerobot
2,399
Are there plans to support LoRa fine-tuning?
https://github.com/huggingface/lerobot/issues/2399
open
[ "question", "performance", "training" ]
2025-11-07T02:37:45Z
2025-11-10T10:23:33Z
null
Hukongtao
huggingface/candle
3,167
Qwen 3-1.7b looks like something is wrong and doesn't stop properly.
Candle version: main Platform: Mac Studio Max M1 Model: Qwen 3-1.7b (downloaded by huggingface-cli) Execute cmd: git clone https://github.com/huggingface/candle.git cd candle-examples cargo run --release --example qwen -- \ --prompt "What is the speed of light?" \ --model 3-1.7b \ --tokenizer-file ../../models/qwen3-1.7...
https://github.com/huggingface/candle/issues/3167
open
[]
2025-11-07T02:23:05Z
2025-11-08T07:52:18Z
6
xiuno
huggingface/lerobot
2,398
how to accelerate the iteration in dataset
hi, i want to get the frames of specific episode index when `episode_index_target` is large, like 100, it takes a lot of time to run. any solution to improve the iteration speed ? thanks. `lerobot.__version__ == '0.1.0'` ```python dataset = LeRobotDataset('yananchen/robomimic_lift') frames = [] for sample in datas...
https://github.com/huggingface/lerobot/issues/2398
closed
[ "question" ]
2025-11-06T21:37:33Z
2025-11-10T20:52:57Z
null
yanan1116
vllm-project/vllm
28,246
[Bug]: Return Token Ids not returning Gen Token Ids for GPT-OSS-120b
### Your current environment <details> Using docker image vllm/vllm-openai:latest </details> ### 🐛 Describe the bug When passing in return_token_ids flag to v1/chat/completions endpoint for GPTOSS-120b, only prompt_token_ids are returned and not token_ids. We have not seen this happen with any other model except ...
https://github.com/vllm-project/vllm/issues/28246
open
[ "bug" ]
2025-11-06T21:08:16Z
2025-11-07T00:18:25Z
1
sophies-cerebras
vllm-project/vllm
28,236
[Feature]: Implement naive prepare/finalize class to replace naive dispatching in fused_moe/layer.py
### 🚀 The feature, motivation and pitch The `FusedMoE` layer has a special case dispatch/combine for EP+DP when there is no specific all2all backend specified. This makes the code in `layer.py` a bit confusing and hard to follow. One way to simplify this is to implement a proper `FusedMoEPrepareAndFinalize` subclas...
https://github.com/vllm-project/vllm/issues/28236
open
[ "help wanted", "good first issue", "feature request" ]
2025-11-06T18:38:38Z
2025-11-12T06:36:29Z
4
bnellnm
vllm-project/vllm
28,233
[Usage]: LogitProcessor vLLM 0.9.1 run the same prompt 50 times with batching, apply logitprocessor independently on each
### Your current environment Goal Run the same prompt 50 times through vLLM 0.9.1, generating independent outputs with a custom LogitsProcessor that forces a comma token after some pattern "xyz" appears in each generation. What You Want Batched execution: Process all 50 generations efficiently in parallel Independent...
https://github.com/vllm-project/vllm/issues/28233
open
[ "usage" ]
2025-11-06T18:11:32Z
2025-11-06T18:11:32Z
0
jindalankush28
vllm-project/vllm
28,230
[Bug]: GPU VRAM continuously increase during Qwen3-VL usage over days until OOM
### Your current environment Setup: docker run -d \ --runtime nvidia \ --gpus '"device=3,4,5,6"' \ -e TRANSFORMERS_OFFLINE=1 \ -e DEBUG="true" \ -p 8000:8000 \ --ipc=host \ vllm/vllm-openai:v0.11.0 \ --gpu-memory-utilization 0.95 \ --model Qwen/Qwen3-VL-235B-A22B-Instruct-FP8 \ --tensor-parallel-si...
https://github.com/vllm-project/vllm/issues/28230
open
[ "bug" ]
2025-11-06T17:19:18Z
2025-12-02T16:50:26Z
15
yz342
huggingface/datasets
7,852
Problems with NifTI
### Describe the bug There are currently 2 problems with the new NifTI feature: 1. dealing with zipped files, this is mentioned and explained [here](https://github.com/huggingface/datasets/pull/7815#issuecomment-3496199503) 2. when uploading via the `niftifolder` feature, the resulting parquet only contains relative p...
https://github.com/huggingface/datasets/issues/7852
closed
[]
2025-11-06T11:46:33Z
2025-11-06T16:20:38Z
2
CloseChoice
huggingface/peft
2,901
AttributeError: 'float' object has no attribute 'meta'
### System Info peft== 0.17.1 torch== 2.5.1+cu118 transformers==4.57.0 python==3.12.7 ### Who can help? I am trying to use LoRA with DINOv3 (so a slightly modified vit-b). However, I am hitting after a random number of iterations this error. It is sadly difficult to reproduce. Maybe someone can hint at what is going...
https://github.com/huggingface/peft/issues/2901
closed
[]
2025-11-06T11:24:18Z
2025-11-17T15:34:08Z
6
Karol-G
vllm-project/vllm
28,192
[RFC]: Support separate NICs for KV cache traffic and MoE traffic
### Motivation. In MoE models with large KV caches, KV cache all-to-all and MoE expert communication share the same RNIC, causing congestion and degrading performance. Using dedicated NICs for each traffic type can improve bandwidth utilization and reduce interference. ### Proposed Change. Does vLLM currently suppor...
https://github.com/vllm-project/vllm/issues/28192
open
[ "RFC" ]
2025-11-06T07:31:17Z
2025-11-06T08:19:56Z
1
JayFzh
vllm-project/vllm
28,186
[Bug] Cannot load qwen3-vl series with lora adapter
I fine-tuned the `Qwen3-VL-8B-Instruct` model using Unsloth. I moved the saved QLoRA adapter and the `Qwen3-VL-2B-Instruct` model to my vLLM server. Then I ran a command to start model serving with vLLM as shown below. (For reference, the vLLM server has no issues—it was already serving official Qwen3-VL models.) ``` ...
https://github.com/vllm-project/vllm/issues/28186
open
[ "bug" ]
2025-11-06T06:02:33Z
2025-11-09T11:16:27Z
4
deepNoah
huggingface/trl
4,481
DPOTrainer._prepare_dataset() adds an extra eos_token to conversationally formatted inputs
## Overview The DPOTrainer unconditionally appends the eos_token to both the "chosen" and "rejected" sequences. Because conversationally formatted inputs will already have the chat template applied, this causes them to have duplicate eos_tokens (Ex. `...<|im_end|><|im_end|>`). A related problem was reported for the [...
https://github.com/huggingface/trl/issues/4481
open
[ "🐛 bug", "🏋 DPO" ]
2025-11-06T01:17:05Z
2025-11-06T18:40:39Z
0
DevonPeroutky
huggingface/trl
4,468
Move RLOOTrainer to trl.experimental
## Context Part of #4223 and #4374 - Moving trainers to experimental submodule for V1. ## Task Move RLOOTrainer from main trl module to trl.experimental: - [ ] Move trainer file to trl/experimental/ - [ ] Update imports in __init__.py files - [ ] Update documentation - [ ] Add deprecation warning in old location - ...
https://github.com/huggingface/trl/issues/4468
closed
[ "📚 documentation", "✨ enhancement" ]
2025-11-05T21:30:15Z
2025-12-05T18:21:41Z
2
behroozazarkhalili
huggingface/trl
4,466
Move PPOTrainer to trl.experimental
## Context Part of #4223 and #4374 - Moving trainers to experimental submodule for V1. ## Task Move PPOTrainer from main trl module to trl.experimental: - [ ] Move trainer file to trl/experimental/ - [ ] Update imports in __init__.py files - [ ] Update documentation - [ ] Add deprecation warning in old location - [...
https://github.com/huggingface/trl/issues/4466
closed
[ "📚 documentation", "✨ enhancement", "🏋 PPO" ]
2025-11-05T21:29:54Z
2025-11-13T19:01:20Z
0
behroozazarkhalili
huggingface/trl
4,465
Move ORPOTrainer to trl.experimental
## Context Part of #4223 and #4374 - Moving trainers to experimental submodule for V1. ## Task Move ORPOTrainer from main trl module to trl.experimental: - [ ] Move trainer file to trl/experimental/ - [ ] Update imports in __init__.py files - [ ] Update documentation - [ ] Add deprecation warning in old location - ...
https://github.com/huggingface/trl/issues/4465
closed
[ "📚 documentation", "✨ enhancement", "🏋 ORPO" ]
2025-11-05T21:29:44Z
2025-11-21T06:36:32Z
0
behroozazarkhalili
huggingface/trl
4,463
Move KTOTrainer to trl.experimental
## Context Part of #4223 and #4374 - Moving trainers to experimental submodule for V1. ## Task Move KTOTrainer from main trl module to trl.experimental: - [ ] Move trainer file to trl/experimental/ - [ ] Update imports in __init__.py files - [ ] Update documentation - [ ] Add deprecation warning in old location - [...
https://github.com/huggingface/trl/issues/4463
open
[ "📚 documentation", "✨ enhancement", "🏋 KTO" ]
2025-11-05T21:29:25Z
2025-11-05T21:29:50Z
0
behroozazarkhalili
huggingface/trl
4,461
Move OnlineDPOTrainer to trl.experimental
## Context Part of #4223 and #4374 - Moving trainers to experimental submodule for V1. ## Task Move OnlineDPOTrainer from main trl module to trl.experimental: - [ ] Move trainer file to trl/experimental/ - [ ] Update imports in __init__.py files - [ ] Update documentation - [ ] Add deprecation warning in old locati...
https://github.com/huggingface/trl/issues/4461
closed
[ "📚 documentation", "✨ enhancement", "🏋 Online DPO" ]
2025-11-05T21:28:08Z
2025-11-24T01:13:07Z
1
behroozazarkhalili
vllm-project/vllm
28,152
[Feature]: Factor out `zero_expert_num` from `FusedMoE`
### 🚀 The feature, motivation and pitch We have many special cases in `FusedMoE` for `zero_expert_num` This parameter is used exclusively for `LongCatFlash`. We should factor this out of `FusedMoe` and put the complexity into the model file. ### Alternatives _No response_ ### Additional context _No response_ ##...
https://github.com/vllm-project/vllm/issues/28152
open
[ "help wanted", "feature request" ]
2025-11-05T19:05:54Z
2025-11-06T20:08:23Z
0
robertgshaw2-redhat
vllm-project/vllm
28,150
[Bug]: -O.mode=NONE (or -cc.mode=NONE) should work
### Your current environment main ### 🐛 Describe the bug Right now -O.mode only accepts integer levels. Ideally it would accept ints and the string. `vllm serve -O.mode=NONE` # doesn't work `vllm serve -O.mode=0` # does work ### Before submitting a new issue... - [x] Make sure you already searched for relevant...
https://github.com/vllm-project/vllm/issues/28150
closed
[ "bug", "help wanted", "good first issue", "torch.compile" ]
2025-11-05T18:28:23Z
2025-11-12T00:46:20Z
1
zou3519
vllm-project/vllm
28,137
[Feature]: Refactor `aiter_shared_expert_fusion`
### 🚀 The feature, motivation and pitch We have a special case in `FusedMoE` layer for `aiter_shared_expert_fusion` which creates various if branches spattered across the layer We should factor this out ### Alternatives _No response_ ### Additional context _No response_ ### Before submitting a new issue... - [...
https://github.com/vllm-project/vllm/issues/28137
open
[ "help wanted" ]
2025-11-05T15:54:09Z
2025-12-20T22:00:55Z
3
robertgshaw2-redhat
vllm-project/vllm
28,132
[Usage]: How do I assign a specific GPU to a vLLM docker container?
### Your current environment stock vllm-openai:v0.11.0 docker image rootless Docker v.27.5.1 on Ubuntu 22.04.5 LTS on physical hardware Nvidia Driver Version: 570.133.20 CUDA Version: 12.8 GPUs: 4x H100 (NVLink), numbered 0,1,2,3 ### How would you like to use vllm I want to run inference of [SmolLM3-3B](https://hugg...
https://github.com/vllm-project/vllm/issues/28132
closed
[ "usage" ]
2025-11-05T14:42:17Z
2025-11-06T14:54:41Z
1
lindner-tj
huggingface/lerobot
2,389
How to resolve the issue that GROOT cannot train properly? Below is my training configuration and error log.
How to resolve the issue that GROOT cannot train properly? Below is my training configuration and error log. accelerate launch \ --multi_gpu \ --num_processes=2 \ $(which lerobot-train) \ --output_dir=./outputs/groot_training \ --save_checkpoint=true \ --batch_size=8 \ --steps=200000 \ --save_freq=2000...
https://github.com/huggingface/lerobot/issues/2389
open
[ "training" ]
2025-11-05T10:17:59Z
2025-11-07T17:47:50Z
null
wuxiaolianggit
huggingface/lerobot
2,388
how to improve the generalization of the vla model like gr00t
After fine-tuning gr00t, I found that it only works for prompts within the dataset; it has difficulty understanding new words and new items to grab. Is there a method to preserve generalization? Could I add a new layer to map the model's output to a new dimensionality?
https://github.com/huggingface/lerobot/issues/2388
open
[]
2025-11-05T10:06:11Z
2025-11-05T10:44:38Z
null
Temmp1e
vllm-project/vllm
28,119
[Feature]: Will we support async scheduler for pipeline parallel?
### 🚀 The feature, motivation and pitch SGLang already have https://github.com/sgl-project/sglang/pull/11852 And I see huge perf gap on SM120 PP because of this. ### Alternatives _No response_ ### Additional context _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for rel...
https://github.com/vllm-project/vllm/issues/28119
closed
[ "feature request" ]
2025-11-05T09:55:57Z
2025-11-07T06:14:19Z
4
weireweire
huggingface/gsplat.js
122
I want to add an object (such as a robot) to move around in the model. How can this be achieved?
I want to add an object (such as a robot) to move around in the model. How can this be achieved?
https://github.com/huggingface/gsplat.js/issues/122
open
[]
2025-11-05T09:16:39Z
2025-11-05T09:16:39Z
null
ThinkingInGIS
vllm-project/vllm
28,104
[Usage]: vllm bench serve cannot use the ShareGPT dataset
### Your current environment ```text I ran the following benchmarks command: vllm bench serve --model Qwen3 --tokenizer /mnt/workspace/models --host 127.0.0.1 --port 80 --num-prompts 400 --percentile-metrics ttft,tpot,itl,e2el --metric-percentiles 90,95,99 --dataset-name sharegpt --dataset-path /mnt/workspace/benchmarks/sharegpt/ShareGPT_...
https://github.com/vllm-project/vllm/issues/28104
open
[ "usage" ]
2025-11-05T06:18:02Z
2025-11-06T14:24:46Z
1
uOnePiece
vllm-project/vllm
28,070
[Usage]: Is there a way to control default thinking behaviour of a model?
### Your current environment Is there a way to control default thinking behaviour for models deployed through vllm. As per https://docs.vllm.ai/en/stable/features/reasoning_outputs.html, IBM Grantie 3.2 reasoning is disabled by default. Qwen3, GLM 4.6, Deepseek V3.1 all have reasoning enabled by default. It would be g...
https://github.com/vllm-project/vllm/issues/28070
closed
[ "usage" ]
2025-11-04T22:03:32Z
2025-12-30T03:38:48Z
0
yz342
vllm-project/vllm
28,056
[Bug]: Missing libarm_compute.so in Arm CPU pip installed wheels
### Your current environment <details> <summary>The output of <code>python collect_env.py</code></summary> ```text Your output of `python collect_env.py` here ``` </details> ### 🐛 Describe the bug We now have vllm wheels for Arm CPUs in pypi thanks to https://github.com/vllm-project/vllm/pull/26931 and https://g...
https://github.com/vllm-project/vllm/issues/28056
closed
[ "bug" ]
2025-11-04T17:22:55Z
2025-11-13T05:43:10Z
2
fadara01
vllm-project/vllm
28,046
Qwen3-Omni model inference: ValueError: Either SamplingParams or PoolingParams must be provided.
### Your current environment ```text The output of `python web_demo.py` ``` The above mentioned method provides the error below ``` qwen/Qwen3-Omni/collect_env.py", line 287, in get_vllm_version from vllm import __version__, __version_tuple__ ImportError: cannot import name '__version__' from 'vllm' (unknown lo...
https://github.com/vllm-project/vllm/issues/28046
closed
[ "usage" ]
2025-11-04T13:59:57Z
2025-11-24T19:24:39Z
22
Tortoise17
vllm-project/vllm
28,045
[Doc]: Any detailed documentation about how to load_weights in customized vllm model?
### 📚 The doc issue I don't know how to modify the attention or how load_model works. The documentation says too little, and I find it hard to understand. Does anyone have more detailed experience? Thank you! ### Suggest a potential alternative/fix _No response_ ### Before submitting a new issue... - [x] Make su...
https://github.com/vllm-project/vllm/issues/28045
open
[ "documentation" ]
2025-11-04T13:23:25Z
2025-11-05T02:07:55Z
0
sleepwalker2017
vllm-project/vllm
28,035
[Usage]: deepseek-ocr: the output token count is too low and unstable
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm python3 -m vllm.entrypoints.openai.api_server --served-model-name deepseek-ocr --model deepseekocr --tensor-parallel-size 1 --gpu-memory-utilization 0.95 --disable-log-requests --logits_processors vllm....
https://github.com/vllm-project/vllm/issues/28035
open
[ "usage" ]
2025-11-04T09:50:53Z
2025-11-04T09:50:53Z
0
sixgod-666
vllm-project/vllm
28,031
[Usage]: Error: Failed to initialize the TMA descriptor 700
### Your current environment vllm0.11.0 to train Qwen3-vl-8B The following error message appears intermittently during training. ``` [36m(WorkerDict pid=82555) TMA Desc Addr: 0x7f4e2736b080 (WorkerDict pid=82555) format 9 (WorkerDict pid=82555) dim 4 (WorkerDict pid=...
https://github.com/vllm-project/vllm/issues/28031
open
[ "usage" ]
2025-11-04T08:13:45Z
2025-12-11T08:18:15Z
4
DBMing
vllm-project/vllm
28,016
[Usage]: How to recognize PDFs in DeepSeek-OCR with openai
### Your current environment ``` vllm serve deepseek-ai/DeepSeek-OCR --logits_processors vllm.model_executor.models.deepseek_ocr.NGramPerReqLogitsProcessor --no-enable-prefix-caching --mm-processor-cache-gb 0 ``` ### How would you like to use vllm How to recognize PDFs and convert PDFs to Markdown with DeepSeek-OCR...
https://github.com/vllm-project/vllm/issues/28016
open
[ "usage" ]
2025-11-04T03:35:38Z
2025-11-04T07:33:07Z
2
shoted
vllm-project/vllm
28,003
[Usage]:
### Your current environment ```text Collecting environment information... ============================== System Info ============================== OS : Ubuntu 22.04.5 LTS (x86_64) GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version : Cou...
https://github.com/vllm-project/vllm/issues/28003
open
[ "usage" ]
2025-11-03T21:19:15Z
2025-11-26T15:32:40Z
1
amitmvyas
vllm-project/vllm
27,995
[RFC]: Make PassConfig flags less verbose
### Motivation. Almost all `PassConfig` field names have `enable_` in the name, which is unnecessarily verbose. They are also pretty long, and sometimes not descriptive enough. Finally, `enable_fusion` should be split into rmsnorm+quant and activation+quant flags as we want to control these flags separately. ### Prop...
https://github.com/vllm-project/vllm/issues/27995
closed
[ "help wanted", "good first issue", "RFC", "torch.compile" ]
2025-11-03T17:49:29Z
2025-12-03T19:53:01Z
7
ProExpertProg
huggingface/peft
2,888
Potential remote code execution via untrusted tokenizer_kwargs in PromptEmbedding
### Description A remote code execution vector exists in the PEFT prompt-tuning flow. A remote `adapter_config.json` can inject loader kwargs that are forwarded to `AutoTokenizer.from_pretrained` calls. If an attacker sets `"tokenizer_kwargs": {"trust_remote_code": true}` and points `tokenizer_name_or_path` at an atta...
https://github.com/huggingface/peft/issues/2888
closed
[]
2025-11-03T16:04:52Z
2025-11-04T17:50:28Z
3
Vancir
huggingface/lerobot
2,371
memory increase continuously during training Groot
### System Info ```Shell - lerobot version: 0.4.1 - Platform: Linux-5.4.250-2-velinux1u3-amd64-x86_64-with-glibc2.31 - Python version: 3.10.15 - Huggingface Hub version: 0.35.3 - Datasets version: 4.1.1 - Numpy version: 2.1.3 - PyTorch version: 2.7.1+cu126 - Is PyTorch built with CUDA support?: True - Cuda version: 12...
https://github.com/huggingface/lerobot/issues/2371
open
[ "question", "policies", "performance" ]
2025-11-03T14:38:52Z
2025-12-31T13:17:11Z
null
caoran2025
vllm-project/vllm
27,982
[Usage]: How can I access or return hidden states (representations) after generation?
### Your current environment In my training pipeline (GRPO), I need to access hidden-state representations of all layers and store prompt representations alongside generated sequences. Is there any supported way to extract or return hidden states from the vLLM inference engine? Environment vllm==0.11.0 Python 3.12 #...
https://github.com/vllm-project/vllm/issues/27982
open
[ "usage" ]
2025-11-03T13:01:51Z
2025-11-04T03:07:40Z
1
hakbari14
huggingface/lerobot
2,368
Release 0.5.0
A Github Issue created for the upcoming release to discuss the planned features & changes: * Audio PR #967 * Bump transformers dependency to +v5
https://github.com/huggingface/lerobot/issues/2368
open
[ "bug", "question", "dependencies" ]
2025-11-03T12:46:51Z
2025-12-24T00:08:16Z
null
imstevenpmwork
vllm-project/vllm
27,981
[Usage]: How to specify max_pixels for qwenvl2.5
### Your current environment As the title says: I tried ``--mm-processor-kwargs {"max_pixels": $MAX_PIXELS}`` but it had no effect. ### How would you like to use vllm I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm. ### Before submitting a new issue... - [x] Make sure you already searched for rel...
https://github.com/vllm-project/vllm/issues/27981
open
[ "usage" ]
2025-11-03T12:38:34Z
2025-11-04T08:19:54Z
3
aJupyter
huggingface/accelerate
3,829
Does Accelerate automatically set the DataLoader’s sampler to a DistributedSampler?
```python from accelerate import Accelerator accelerator = Accelerator() device = accelerator.device model, optimizer, training_dataloader, scheduler = accelerator.prepare( model, optimizer, training_dataloader, scheduler ) for batch in training_dataloader: optimizer.zero_grad() inputs, targets = batch ...
https://github.com/huggingface/accelerate/issues/3829
closed
[]
2025-11-03T07:17:29Z
2025-12-16T15:09:43Z
2
caixxiong
vllm-project/vllm
27,957
[Usage]: What is the difference between embedding task and pooler task?
### Your current environment Any document about this? ### How would you like to use vllm I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot l...
https://github.com/vllm-project/vllm/issues/27957
closed
[ "usage" ]
2025-11-03T03:38:39Z
2025-11-03T10:20:18Z
1
sleepwalker2017
vllm-project/vllm
27,949
[Usage]: How do I deploy GGUF models with vLLM via Docker correctly?
### Your current environment ```text The output of `python collect_env.py` ``` Here is the output from `sudo python3 collect_env.py` ``` Traceback (most recent call last): File "/export/nvme/vllm/collect_env.py", line 18, in <module> import regex as re ModuleNotFoundError: No module named 'regex' ``` ### How w...
https://github.com/vllm-project/vllm/issues/27949
open
[ "usage" ]
2025-11-02T23:33:49Z
2025-11-02T23:36:44Z
1
alpha754293
huggingface/xet-core
549
How to get the "Xet backed hash"?
Hi, On HuggingFace, every page has a "Xet backed hash" (I've attached an example below) and I am trying to figure out how to compute that locally. I've read the documentation and it says there are 4 types of different hashes but it's not really clear how a "Xet backed hash" is calculated. So I was just wondering if ...
https://github.com/huggingface/xet-core/issues/549
closed
[]
2025-11-02T09:40:39Z
2025-11-06T16:20:25Z
null
arch-btw
huggingface/lerobot
2,360
diffusion transformer
Has anyone replaced the diffusion UNet with a DiT in lerobot?
https://github.com/huggingface/lerobot/issues/2360
open
[ "question", "policies" ]
2025-11-02T09:05:30Z
2025-11-12T09:01:59Z
null
Benxiaogu
vllm-project/vllm
27,928
[Bug]: What happened to /get_world_size ?
### Your current environment vllm 0.11.0 trl 0.24.0 python 3.12 linux amd64 ### 🐛 Describe the bug TRL is expecting a `/get_world_size` route https://github.com/huggingface/trl/blob/main/trl/extras/vllm_client.py#L279 for its GRPO trainer. That gives a 404 on the latest version of vLLM. Was this changed to anothe...
https://github.com/vllm-project/vllm/issues/27928
open
[ "bug" ]
2025-11-01T22:56:45Z
2025-11-03T02:42:14Z
1
pbarker-synth
huggingface/lerobot
2,356
AsyncInference only running one action chunk
I have my SO101 arms connected to my computer, and I'm running an asynchronous server on a cloud GPU with a RTX 4090. When I start running Pi0.5, the model is loaded and the SO101 makes its first move by setting the robot to be at its middle position, but then no further actions are made although the server logs new o...
https://github.com/huggingface/lerobot/issues/2356
open
[ "question", "robots" ]
2025-11-01T20:31:10Z
2025-12-23T01:10:35Z
null
kevinjosethomas
vllm-project/vllm
27,916
[Feature]: Does the latest version support LoRA for visual models?
### 🚀 The feature, motivation and pitch When I loaded a Qwen2.5-VL model fine-tuned with LoRA using vLLM version 0.8.4, I encountered the following message: > Regarding multimodal models, vLLM currently only supports adding LoRA to language model, visual.blocks.31.mlp.up_proj will be ignored. I found an issue https:...
https://github.com/vllm-project/vllm/issues/27916
closed
[ "feature request" ]
2025-11-01T12:23:36Z
2025-12-26T12:48:22Z
1
SmartNight-cc
huggingface/lerobot
2,354
Cannot reproduce SmolVLA results on LIBERO benchmark
Hello, I am trying to reproduce LIBERO benchmark results of [SmolVLA](https://huggingface.co/HuggingFaceVLA/smolvla_libero). However, I can't reproduce results on neither [leaderboard](https://huggingface.co/spaces/HuggingFaceVLA/libero-vla-leaderboard) and [paper](https://arxiv.org/abs/2506.01844) I am working on NV...
https://github.com/huggingface/lerobot/issues/2354
open
[ "question", "policies", "simulation" ]
2025-11-01T11:20:05Z
2026-01-05T08:38:48Z
null
Hesh0629
huggingface/trl
4,419
GRPO with reward model. CUDA out of memory. How to fix? Thank you very much.
train_grpo.py: ```python import argparse import os from typing import Callable, Dict, List, Optional import torch from datasets import Dataset, load_dataset from transformers import ( AutoModelForCausalLM, AutoTokenizer, AutoModelForSequenceClassification, pipeline, set_seed, ) from trl import GRPO...
https://github.com/huggingface/trl/issues/4419
open
[ "🏋 Reward", "🏋 GRPO" ]
2025-11-01T10:29:28Z
2025-11-20T12:26:50Z
null
guotong1988
vllm-project/vllm
27,912
[Usage]: How should I deploy Qwen3-VL-30B-A3B on CPU?
### Your current environment ```text The output of `python collect_env.py` ``` (APIServer pid=1033476) Traceback (most recent call last): (APIServer pid=1033476) File "/home/maxgameone/anaconda3/bin/vllm", line 33, in <module> (APIServer pid=1033476) sys.exit(load_entry_point('vllm==0.11.1rc6.dev33+g3a5de7d2d.cp...
https://github.com/vllm-project/vllm/issues/27912
open
[ "usage" ]
2025-11-01T07:40:04Z
2025-11-01T07:40:04Z
0
maxgameone
vllm-project/vllm
27,899
[Bug]: Inductor specialize after 2.9 rebase
### Your current environment NA ### 🐛 Describe the bug Could you or someone have a look at compile ranges [PR](https://github.com/vllm-project/vllm/pull/24252) again? It seems to stop working with the update to pytorch 2.9. We started getting failed assertions in generated code like it was compiled for a single sha...
https://github.com/vllm-project/vllm/issues/27899
closed
[ "bug" ]
2025-10-31T22:16:27Z
2025-11-07T00:03:25Z
7
laithsakka
vllm-project/vllm
27,898
[Doc]: Multi-node EP on EFA (i.e. no IBGDA/DeepEP)
### 📚 The doc issue Usecase: On AWS we have EFA for high bandwidth interconnect, not Infiniband, so no IBGDA. The [documentation](https://docs.vllm.ai/en/latest/serving/expert_parallel_deployment.html#backend-selection-guide) indicates that the DeepEP kernels should be used for multi/inter-node EP, and pplx for sing...
https://github.com/vllm-project/vllm/issues/27898
open
[ "documentation" ]
2025-10-31T21:22:28Z
2025-11-06T19:50:07Z
1
nathan-az
huggingface/peft
2,884
[Question/Bug] How to safely continue LoRA fine-tuning under DeepSpeed ZeRO-3 (multi-stage training with modules_to_save)
Hi, I’m trying to perform multi-stage LoRA fine-tuning under DeepSpeed ZeRO-3 using PEFT. However, continuing training on an existing LoRA checkpoint without merging causes a series of errors and conflicts. Problem When I load the LoRA from Stage 1 and attempt to continue training: • load_state_dict() throws shape ...
https://github.com/huggingface/peft/issues/2884
closed
[]
2025-10-31T20:13:12Z
2025-12-09T15:05:26Z
null
XiangZhang-zx
huggingface/lerobot
2,351
Details of adapting SmolVLA to other robotic arms with different configurations
I want to deploy the untuned `smolvla_base` model directly onto my AgileX PIPER robotic arm.I ran into the following two issues along the way: 1. Missing normalization parameters in the metadata. ``` File "/home/zwt/miniconda3/envs/lerobot/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate...
https://github.com/huggingface/lerobot/issues/2351
closed
[ "question", "policies" ]
2025-10-31T14:55:35Z
2025-12-14T14:47:04Z
null
yquanli
vllm-project/vllm
27,880
[Installation]: [HELP] How to install the latest main version of vllm
### Your current environment I cloned the vllm code and ran the install commands, but it fails. Help!! ### How you are installing vllm ```sh VLLM_USE_PRECOMPILED=1 uv pip install --editable . Using Python 3.10.12 environment at: /home/alice/.venv × No solution found when resolving dependencies: ╰─▶ Because there is ...
https://github.com/vllm-project/vllm/issues/27880
closed
[ "installation" ]
2025-10-31T13:57:20Z
2025-11-13T07:25:13Z
7
sleepwalker2017
vllm-project/vllm
27,877
[Usage]: How to install the nightly version? Why doesn't this command work?
### Your current environment I ran this to install vllm with the latest code, but the installed vllm doesn't include the code I need. I checked the `siglip.py` file; it was modified 4 days ago, but the installed vllm doesn't contain this commit! https://github.com/vllm-project/vllm/pull/27566/files#diff-ca771...
https://github.com/vllm-project/vllm/issues/27877
open
[ "usage" ]
2025-10-31T12:29:51Z
2025-10-31T12:38:19Z
0
sleepwalker2017
vllm-project/vllm
27,875
[Usage]: how to get profiler on OpenAI server
### Your current environment ```text INFO 10-31 10:27:06 [importing.py:17] Triton not installed or not compatible; certain GPU-related functions will not be available. WARNING 10-31 10:27:06 [importing.py:29] Triton is not installed. Using dummy decorators. Install it via `pip install triton` to enable kernel compilat...
https://github.com/vllm-project/vllm/issues/27875
closed
[ "usage" ]
2025-10-31T10:33:49Z
2025-10-31T14:38:04Z
1
zhaohaixu
vllm-project/vllm
27,872
[Feature]: AFD: support loading a custom connector model from a local path
### 🚀 The feature, motivation and pitch Add an `afd_connector_module_path` field to AFDConfig so users can implement a custom AFD connector without needing to change vllm code. https://github.com/vllm-project/vllm/pull/25162 merge after. ### Alternatives _No response_ ### Additional context _No response_ ### Before subm...
https://github.com/vllm-project/vllm/issues/27872
open
[ "feature request" ]
2025-10-31T09:08:50Z
2025-12-08T03:32:33Z
1
lengrongfu
huggingface/trl
4,413
What is the default value of num_processes?
Based on the documentation in docs/source/grpo_trainer.md, num_processes is used, but nowhere does the documentation define what num_processes is or what its default value is.
https://github.com/huggingface/trl/issues/4413
closed
[ "📚 documentation", "❓ question", "🏋 GRPO" ]
2025-10-31T05:01:23Z
2025-10-31T17:31:33Z
null
thisisraghavkumar
huggingface/diffusers
12,564
[Proposals Welcome] Fal Flashpack integration for faster model loading
Hey! 👋 We've had a request to explore integrating Fal's Flashpack for faster DiT and Text Encoder loading (https://github.com/huggingface/diffusers/issues/12550). Before we jump into implementation, we wanted to open this up to the community to gather ideas and hear from anyone who's experimented with this. We'd lov...
https://github.com/huggingface/diffusers/issues/12564
open
[ "help wanted", "contributions-welcome" ]
2025-10-31T02:25:55Z
2025-10-31T12:26:13Z
2
yiyixuxu
vllm-project/vllm
27,832
[RFC]: Remap `CompilationConfig` from `-O` to `-cc` in CLI
### Motivation. With #20283 (and #26847), we're repurposing `-O0`/`-O1`/`-O2`/`-O3` to map to `optimization_level` instead of `CompilationConfig.level`/`CompilationConfig.mode`. This leaves us in a slightly confusing state where `-O` can refer to optimization level or compilation config depending on what follows it: -...
https://github.com/vllm-project/vllm/issues/27832
closed
[ "help wanted", "good first issue", "RFC", "torch.compile" ]
2025-10-30T20:29:31Z
2025-11-28T21:51:13Z
3
ProExpertProg
huggingface/trl
4,407
Complete paper index
These are the papers mentioned at least one in the codebase. - [ ] https://huggingface.co/papers/1707.06347 - [x] https://huggingface.co/papers/1909.08593 (only mentioned in notebook, no need to have in paper index) - [x] https://huggingface.co/papers/1910.02054 #4551 - [ ] https://huggingface.co/papers/1910.10683 - [...
https://github.com/huggingface/trl/issues/4407
open
[ "📚 documentation" ]
2025-10-30T20:23:26Z
2025-12-24T05:50:21Z
4
qgallouedec
vllm-project/vllm
27,830
[Usage]: GPT OSS 120b on L40S (Ada)
### Your current environment (Just a general question) ### How would you like to use vllm I want to run inference of a GPT OSS 120b with multiple L40S. I read the [docs](https://docs.vllm.ai/projects/recipes/en/latest/OpenAI/GPT-OSS.html) as it clearly says it is not natively supported yet. After I had no success w...
https://github.com/vllm-project/vllm/issues/27830
closed
[ "usage" ]
2025-10-30T20:07:42Z
2025-11-17T12:46:43Z
6
Hansehart
vllm-project/vllm
27,823
[Doc]: Multi-node distributed guide issues
### 📚 The doc issue For context, see a recent issue (https://github.com/ROCm/ROCm/issues/5567) where a user was trying to set up distributed inference with `ray` by following guidance at https://docs.vllm.ai/en/v0.8.0/serving/distributed_serving.html#running-vllm-on-multiple-nodes. I ran into several issues setting t...
https://github.com/vllm-project/vllm/issues/27823
open
[ "documentation" ]
2025-10-30T18:33:04Z
2025-10-30T18:33:04Z
0
schung-amd
huggingface/trl
4,399
Update or remove some of the notebooks
I suspect these notebooks are outdated; if so, they should be either updated or removed. - gpt2-sentiment-control.ipynb - best_of_n.ipynb - gpt2-sentiment.ipynb
https://github.com/huggingface/trl/issues/4399
closed
[ "📚 documentation" ]
2025-10-30T15:34:36Z
2025-11-04T23:52:50Z
0
qgallouedec