| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
vllm-project/vllm | 30,832 | [Performance]: DeepSeek-V3.2 on 8xH20 30 decode tokens/sec | ### Proposal to improve performance
**My Env:**
vllm 0.13.0rc2.dev178+g676db55ee
deep_gemm 2.1.1+c9f8b34
cuda: 12.9
python: 3.10.18
**command** is the same as:
vllm serve mypath/DeepSeek-V3.2 \
--tensor-parallel-size 8 \
--tokenizer-mode deepseek_v32 \
-... | https://github.com/vllm-project/vllm/issues/30832 | open | [
"performance"
] | 2025-12-17T03:08:52Z | 2025-12-18T08:01:30Z | 1 | lisp2025 |
huggingface/candle | 3,247 | Parakeet V3 support? | Any plans to support Parakeet V3 by any chance? Thank you 🙏 | https://github.com/huggingface/candle/issues/3247 | open | [] | 2025-12-16T19:05:33Z | 2025-12-16T19:05:33Z | 0 | mobicham |
vllm-project/vllm | 30,798 | [Usage]: vllm offline server lora model | ### Your current environment
Hi team,
I have a question about deploying LoRA models with a vLLM offline server.
Currently, we have a base model **A**. After LoRA training, we obtain adapter parameters **P**. When we serve model A with vLLM (offline server) and enable LoRA, we can select either the **base model A**... | https://github.com/vllm-project/vllm/issues/30798 | open | [
"usage"
] | 2025-12-16T16:38:49Z | 2025-12-18T11:52:39Z | 4 | zapqqqwe |
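For reference, selecting between the base model **A** and the adapter **P** in offline mode follows vLLM's documented `LoRARequest` pattern — a minimal sketch, assuming placeholder local paths:
```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Offline engine with LoRA enabled; model/adapter paths are placeholders.
llm = LLM(model="path/to/base-model-A", enable_lora=True)
params = SamplingParams(temperature=0.0, max_tokens=64)

# Select the base model A: simply omit lora_request.
base_out = llm.generate(["Hello"], params)

# Select adapter P: pass a LoRARequest(name, unique int id, adapter path).
lora_out = llm.generate(
    ["Hello"],
    params,
    lora_request=LoRARequest("adapter-P", 1, "path/to/adapter-P"),
)
```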
sgl-project/sglang | 15,266 | Multi-Adapter Support for Embed Qwen3 8B Embedding Model | ### Checklist
- [x] If this is not a feature request but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- [x] Please use English. Otherwise, it will be closed.
### Motivation
Hi Team, do we currently support multi-adapter (LoRA) suppo... | https://github.com/sgl-project/sglang/issues/15266 | open | [] | 2025-12-16T14:14:16Z | 2025-12-16T14:14:22Z | 0 | dawnik17 |
vllm-project/vllm | 30,776 | [Usage]: Qwen3-omni's offline usage | ### Your current environment
I used the code below with vllm==0.12.0, but it failed.
```
import os
import torch
from vllm import LLM, SamplingParams
from transformers import Qwen3OmniMoeProcessor
from qwen_omni_utils import process_mm_info
def build_input(processor, messages, use_audio_in_video):
text = processor.app... | https://github.com/vllm-project/vllm/issues/30776 | open | [
"bug",
"usage"
] | 2025-12-16T12:30:18Z | 2025-12-17T17:03:34Z | 50 | Auraithm |
sgl-project/sglang | 15,260 | SGLang installs newer PyTorch automatically – is there an official SGLang ↔ PyTorch compatibility guide? | Hi SGLang team, thank you for the great project!
I have a question regarding **PyTorch version compatibility and installation**.
Currently, the recommended installation command from the website is:
```bash
uv pip install "sglang" --prerelease=allow
```
However, when using this command, `pip/uv` automatically upgrad... | https://github.com/sgl-project/sglang/issues/15260 | open | [] | 2025-12-16T12:27:59Z | 2025-12-16T12:27:59Z | 0 | David-19940718 |
vllm-project/vllm | 30,757 | [Performance]: Async sched: Why is AsyncGPUModelRunnerOutput only returned after sample_tokens? | ### Proposal to improve performance
Why is AsyncGPUModelRunnerOutput returned only after sample_tokens, not immediately after execute_model?
https://github.com/vllm-project/vllm/blob/0d0c929f2360cde5bae6817ad0f555641329e79d/vllm/v1/engine/core.py#L420-L422
If we defer returning AsyncGPUModelRunnerOutput until after sa... | https://github.com/vllm-project/vllm/issues/30757 | open | [
"performance"
] | 2025-12-16T08:26:08Z | 2025-12-16T08:26:49Z | 0 | iwzbi |
vllm-project/vllm | 30,736 | [Bug] DCP/DBO: 'NoneType' error building attention_metadata during DeepSeek-V3.1 deployment dummy run | ### Your current environment
```bash
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version : Could not collect
CMake version ... | https://github.com/vllm-project/vllm/issues/30736 | open | [
"bug",
"help wanted"
] | 2025-12-16T03:07:59Z | 2025-12-22T17:11:48Z | 3 | Butterfingrz |
huggingface/transformers.js | 1,487 | License clarification for some of the converted models | ### Question
Hello!
I want to use [Xenova/whisper-small](https://huggingface.co/Xenova/whisper-small) and [Xenova/UAE-Large-V1](https://huggingface.co/Xenova/UAE-Large-V1) in a project, but I noticed that these model cards on Hugging Face do not have a license specified in their metadata or README.
Since the origina... | https://github.com/huggingface/transformers.js/issues/1487 | closed | [
"question"
] | 2025-12-16T00:27:16Z | 2025-12-16T19:13:09Z | null | rmahdav |
vllm-project/vllm | 30,722 | [Bug]: llama4_pythonic tool parser fails with SyntaxError on nested list parameters | ### Your current environment
I don't have direct access to the cluster the model is running in. But it's running on 8x H100 GPUs using TP 8, expert parallel.
This is the fp8 model from Huggingface.
These are the vllm serve args I'm using:
VLLM Version: 0.11.0
```
--port 8002
--model /config/models/maverick
--de... | https://github.com/vllm-project/vllm/issues/30722 | open | [
"bug"
] | 2025-12-15T21:26:24Z | 2025-12-15T21:26:24Z | 0 | mphilippnv |
huggingface/tokenizers | 1,913 | Wrong and unsuppressable print when instantiating BPE | I am running Python code that is of the form
```python
from transformers import PreTrainedTokenizerFast
from tokenizers import Tokenizer
from tokenizers.models import BPE
vocab = {"a": 5, "b": 6, "ab": 7}
merges = [("a","b")]
backend_of_backend_of_backend = BPE(vocab=vocab, merges=merges, dropout=None)
backend_of_ba... | https://github.com/huggingface/tokenizers/issues/1913 | closed | [] | 2025-12-15T16:30:46Z | 2026-01-05T13:02:45Z | 4 | bauwenst |
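A self-contained completion of the truncated snippet above (vocab and merges taken from the issue; the reported print fires at the `BPE(...)` call):
```python
from tokenizers import Tokenizer
from tokenizers.models import BPE

vocab = {"a": 5, "b": 6, "ab": 7}
merges = [("a", "b")]

# The unexpected, unsuppressable print reported in the issue happens here.
model = BPE(vocab=vocab, merges=merges, dropout=None)
tokenizer = Tokenizer(model)
print(tokenizer.encode("ab").tokens)
```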
vllm-project/vllm | 30,694 | [Feature]: CompressedTensors: NVFP4A16 not supported for MoE models | ### 🚀 The feature, motivation and pitch
NVFP4A16 (W4A16 FP4) quantization via compressed_tensors works for dense models but fails on MoE models like Qwen3-30B-A3B.
Looking at `compressed_tensors_moe.py`, `_is_fp4a16_nvfp4` is checked for Linear layers but not in `get_moe_method()` for FusedMoE. Only W4A4 has a MoE m... | https://github.com/vllm-project/vllm/issues/30694 | open | [
"feature request"
] | 2025-12-15T13:29:09Z | 2025-12-21T09:27:38Z | 2 | zhangyimi |
vllm-project/vllm | 30,685 | [Feature]: fp8 kv cache for finer-grained scaling factors (e.g., per channel). | ### 🚀 The feature, motivation and pitch
Currently, the FP8 KV cache feature (in the FlashMLA interface) only supports per-tensor (scalar) scaling factors. Are you developing support for finer-grained scaling factors (e.g., per-channel)? If so, when can we expect the FP8 KV cache with such finer-grained scaling factor... | https://github.com/vllm-project/vllm/issues/30685 | open | [
"feature request"
] | 2025-12-15T09:32:48Z | 2025-12-15T09:32:48Z | 0 | zx-ai |
huggingface/transformers | 42,868 | sdpa_paged: How does it handle paged cache without padding? | Hi @ArthurZucker ,
I was analyzing the [sdpa_paged](https://github.com/huggingface/transformers/blob/main/src/transformers/integrations/sdpa_paged.py#L18) implementation and found the approach quite fascinating. I have a question regarding how the input shapes are handled.
If I have a batch of 4 sequences with length... | https://github.com/huggingface/transformers/issues/42868 | closed | [] | 2025-12-15T08:39:00Z | 2025-12-16T03:08:27Z | 4 | jiqing-feng |
huggingface/trl | 4,692 | LLVM error during GRPO training with Apple M4 Max | I have the below error while doing GRPO training. I am using HuggingFace example codes for GRPO. I couldn't run the model on MPS because of this issue.
How can I run GRPO on MPS?
loc("mps_matmul"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/4~B_wkugAG-524HdEQLaK0kvU7Y_D8Jtm6UxMaIoY/Library/Caches/com.apple.xbs/S... | https://github.com/huggingface/trl/issues/4692 | open | [
"🐛 bug",
"🏋 GRPO"
] | 2025-12-14T23:01:49Z | 2025-12-14T23:02:11Z | 0 | neslihaneti |
vllm-project/vllm | 30,654 | [Feature][Attention][UX]: Incorporate Features into Attention Selection | ### 🚀 The feature, motivation and pitch
SUMMARY:
* we have default attention backends by priority and a notion of which backend supports what hw
* however, certain features are not considered in this (e.g. fp8 kv cache, e.g. attention sinks)
Recent example, we had test failures because we updated the logic to load k... | https://github.com/vllm-project/vllm/issues/30654 | open | [
"help wanted",
"good first issue",
"feature request"
] | 2025-12-14T18:04:14Z | 2025-12-30T05:38:40Z | 11 | robertgshaw2-redhat |
huggingface/diffusers | 12,838 | Merge Loras for FLUX | The issue is based on https://huggingface.co/docs/diffusers/main/using-diffusers/merge_loras
Is there a similar procedure for merging loras for FLUX models? The guide seems to be specific for UNet based methods. I'm working on FLUX-dev and I would like to perform a linear merge of my loras. | https://github.com/huggingface/diffusers/issues/12838 | open | [] | 2025-12-14T12:39:41Z | 2025-12-14T12:39:41Z | 0 | shrikrishnalolla |
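For Flux pipelines, a linear merge can likely be expressed through the PEFT-backed `set_adapters` API rather than the UNet-specific guide — a sketch, where the adapter repos, names, and weights are placeholders:
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

# set_adapters performs the linear combination through the PEFT backend
# on the Flux transformer; adapter repos/names below are placeholders.
pipe.load_lora_weights("user/flux-lora-one", adapter_name="one")
pipe.load_lora_weights("user/flux-lora-two", adapter_name="two")
pipe.set_adapters(["one", "two"], adapter_weights=[0.7, 0.3])
```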
vllm-project/vllm | 30,633 | [Installation]: How to install vLLM 0.11.0 with CUDA < 12.9 (Driver 535)? No matching wheels found | ### Your current environment
I’m trying to install vLLM 0.11.0 on a machine with NVIDIA Driver 535, and I ran into issues related to CUDA version compatibility.
Environment
OS: Linux (Ubuntu 20.04 / 22.04)
GPU: NVIDIA GPU H20
NVIDIA Driver: 535.xx
Python: 3.10
vLLM version: 0.11.0
Problem
According to the rel... | https://github.com/vllm-project/vllm/issues/30633 | open | [
"installation"
] | 2025-12-14T04:29:41Z | 2026-01-01T16:50:50Z | 1 | whu125 |
vllm-project/vllm | 30,630 | [Usage]: SymmMemCommunicator: Device capability 10.3 not supported | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
Hi, I am seeing following warning using vllm serve on B300 instances.
```
WARNING 12-13 16:31:15 [symm_mem.py:67] SymmMemCommunicator: Device capability 10.3 not supported, communicator is not available... | https://github.com/vllm-project/vllm/issues/30630 | open | [
"usage",
"nvidia"
] | 2025-12-14T01:00:34Z | 2025-12-18T21:17:42Z | 4 | navmarri14 |
huggingface/transformers.js | 1,484 | Should npm @xenova/transformers be deleted or marked deprecated? | ### Question
Hello,
I was surprised that none of the models I tried were supported by transformers.js, even though they were using transformers.js in their README, until I realized that I was using the old npm package.
Shouldn't this package be removed? Or marked as deprecated in favour of huggingface's?
Best, | https://github.com/huggingface/transformers.js/issues/1484 | open | [
"question"
] | 2025-12-13T19:49:08Z | 2025-12-17T12:21:12Z | null | matthieu-talbot-ergonomia |
huggingface/tokenizers | 1,910 | [Docs] `Visualizer` dead links | It seems like documentation for `Visualizer` is out of date and all the links return 404.
Docs: https://huggingface.co/docs/tokenizers/api/visualizer
Github Source: https://github.com/huggingface/tokenizers/blob/main/bindings/python/py_src/tokenizers/tools/visualizer.py | https://github.com/huggingface/tokenizers/issues/1910 | open | [] | 2025-12-13T19:23:33Z | 2025-12-13T19:23:33Z | 0 | dudeperf3ct |
vllm-project/vllm | 30,621 | [Feature]: Remove MXFP4 Logic From `fused_experts` | ### 🚀 The feature, motivation and pitch
SUMMARY:
* as part of effort to refactor MoE, trying to reduce cruft
* we currently only have MX emulation in vLLM
* the logic for this emulation should be moved into quark
https://github.com/vllm-project/vllm/blame/main/vllm/model_executor/layers/fused_moe/fused_moe.py#L1866-... | https://github.com/vllm-project/vllm/issues/30621 | open | [
"help wanted",
"good first issue",
"feature request"
] | 2025-12-13T18:30:30Z | 2026-01-04T14:47:45Z | 13 | robertgshaw2-redhat |
vllm-project/vllm | 30,620 | [Feature]: Remove Chunking From FusedMoE | ### 🚀 The feature, motivation and pitch
* we have some chunking logic in the triton kernels to avoid IMA: https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/layers/fused_moe/fused_moe.py#L1807
* we chunk in ~65k tokens
* this case does not happen anymore because of chunked prefill
We should remove th... | https://github.com/vllm-project/vllm/issues/30620 | open | [
"help wanted",
"good first issue",
"feature request"
] | 2025-12-13T18:22:30Z | 2025-12-13T23:27:22Z | 3 | robertgshaw2-redhat |
vllm-project/vllm | 30,570 | [Usage]: Why is VLLM still using SSE at all for mcp? | ### Your current environment
This is a broad question: Why is vllm still using/hardcoding sse usage at all, when it's been deprecated for well over six months at this point?
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
#... | https://github.com/vllm-project/vllm/issues/30570 | open | [
"usage"
] | 2025-12-12T20:02:08Z | 2025-12-18T10:50:37Z | 1 | bags307 |
sgl-project/sglang | 14,984 | Can sgl-kernel be compiled and installed from source for SM86 with CUDA 12.9? | ### Checklist
- [x] I searched related issues but found no solution.
- [ ] The bug persists in the latest version.
- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [ ] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/14984 | open | [] | 2025-12-12T10:29:50Z | 2025-12-15T09:41:18Z | 1 | zwt-1234 |
vllm-project/vllm | 30,548 | [Feature]: Support for Q.ANT Photonic Computing ? | ### 🚀 The feature, motivation and pitch
https://qant.com/
https://qant.com/wp-content/uploads/2025/11/20251111_QANT-Photonic-AI-Accelerator-Gen-2.pdf
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues... | https://github.com/vllm-project/vllm/issues/30548 | open | [
"feature request"
] | 2025-12-12T10:16:53Z | 2025-12-12T14:45:53Z | 2 | plitc |
huggingface/tokenizers | 1,909 | [Docs] `Encode Inputs` rendering issues | It seems like the documentation for Encode Inputs is not rendered properly.
Official URL: https://huggingface.co/docs/tokenizers/main/en/api/encode-inputs?code=python
GitHub URL: https://github.com/huggingface/tokenizers/blob/main/docs/source-doc-builder/api/encode-inputs.mdx | https://github.com/huggingface/tokenizers/issues/1909 | open | [] | 2025-12-12T09:47:48Z | 2025-12-12T09:47:48Z | 0 | ariG23498 |
vllm-project/vllm | 30,541 | [Usage]: missing dsml token "| DSML | " with DeepSeek-V3.2 tools call | ### Your current environment
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not c... | https://github.com/vllm-project/vllm/issues/30541 | open | [
"usage"
] | 2025-12-12T06:47:03Z | 2025-12-12T20:59:40Z | 1 | crischeng |
vllm-project/vllm | 30,511 | Potential Deadlock? | Consider using proper synchronization primitives like threading.Event or queue.Queue.get(timeout=...) | https://github.com/vllm-project/vllm/issues/30511 | closed | [] | 2025-12-11T19:57:43Z | 2025-12-12T18:00:20Z | 1 | ChuanLi1101 |
sgl-project/sglang | 14,903 | Does the current Qwen3-VL (or Qwen3-VL-MoE) officially support TBO? | Hi team,
I noticed that Qwen3-VL and Qwen3-MoE adopt different model architectures.
When profiling the execution path, I found that:
Qwen3-MoE eventually falls back to the Qwen2-MoE implementation, which explicitly supports TBO (Two-Batch Overlap).
However, Qwen3-VL takes the path of Qwen3-VL-MoE, and I did not find... | https://github.com/sgl-project/sglang/issues/14903 | open | [] | 2025-12-11T13:26:50Z | 2025-12-11T13:26:50Z | 0 | jerry-dream-fu |
huggingface/transformers | 42,804 | [`Quantization FP8`] Native `from_config` support | ### Feature request
Related to https://github.com/huggingface/transformers/pull/42028#discussion_r2592235170
Since FP8 is becoming more and more standard, it would be nice to create fp8 native models via config or more like using `from_config`. Atm, quant configs are not respected apparently - either that or we need ... | https://github.com/huggingface/transformers/issues/42804 | open | [
"Feature request"
] | 2025-12-11T10:17:47Z | 2025-12-14T22:49:48Z | 3 | vasqu |
huggingface/trl | 4,679 | [SFT] High vRAM consumption during eval loop | ### Reproduction
### Unexpected behavior
When training a model on large sequences (>=20k tokens) with `PEFT LoRA` + `SFTTrainer` + `liger-kernel`, the vRAM usage spikes during the evaluation loop, consuming way more vRAM than during the training.
The size of this vRAM spike seems to scale with the length of the input...
"🐛 bug",
"🏋 SFT",
"⚡ PEFT"
] | 2025-12-11T10:01:49Z | 2026-01-02T09:23:17Z | 3 | Khreas |
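Until the root cause is found, a hedged mitigation is to keep eval batches small and avoid accumulating logits on the GPU; these are standard `TrainingArguments` fields inherited by `SFTConfig`, not a fix for the spike itself:
```python
from trl import SFTConfig

# Mitigation knobs only — they reduce eval-time memory pressure,
# they do not fix the underlying issue.
args = SFTConfig(
    output_dir="out",
    per_device_eval_batch_size=1,
    eval_accumulation_steps=1,   # move accumulated tensors off GPU each step
    prediction_loss_only=True,   # don't keep per-token logits for metrics
)
```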
vllm-project/vllm | 30,477 | [Usage]: How to disable thinking for Qwen-8B | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Cou... | https://github.com/vllm-project/vllm/issues/30477 | closed | [
"usage"
] | 2025-12-11T09:28:40Z | 2025-12-22T06:10:43Z | 3 | fancyerii |
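Assuming this refers to Qwen3-8B served by vLLM, the Qwen3 chat template exposes an `enable_thinking` switch that can be passed per request via `chat_template_kwargs`:
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="Qwen/Qwen3-8B",
    messages=[{"role": "user", "content": "Hello"}],
    # The Qwen3 chat template's thinking switch, forwarded by vLLM:
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)
print(resp.choices[0].message.content)
```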
huggingface/diffusers | 12,823 | How to use quantizer after pipeline loaded? | How to use quantizer after pipeline loaded?
- Currently
```python
# Quantization occurs at load time.
pipe = QwenImagePipeline.from_pretrained(
(
args.model_path
if args.model_path is not None
else os.environ.get(
"QWEN_IMAGE_DIR",
"Qwen/Qwen-Image",
)
... | https://github.com/huggingface/diffusers/issues/12823 | open | [] | 2025-12-11T06:32:38Z | 2025-12-11T14:18:28Z | null | DefTruth |
huggingface/transformers | 42,794 | `decoder_start_token_id` or `bos_token_id` has to be defined for encoder-decoder generation. | ### System Info
latest transformers
### Who can help?
@zucchini-nlp
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction... | https://github.com/huggingface/transformers/issues/42794 | closed | [
"bug"
] | 2025-12-11T06:22:58Z | 2025-12-18T18:33:40Z | 1 | jiqing-feng |
vllm-project/vllm | 30,464 | [Usage]: How can I use the local pre-compiled wheel of vllm | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
Every time I use `VLLM_USE_PRECOMPILED=1 uv pip install --editable .` to build vllm, it always takes a long time to download the pre-compiled wheel. Would it be possible to build it by using a locally dow...
"usage"
] | 2025-12-11T06:22:43Z | 2025-12-12T01:02:22Z | 1 | gcanlin |
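If your vLLM version supports it, the build can be pointed at an already-downloaded wheel via `VLLM_PRECOMPILED_WHEEL_LOCATION` — a sketch with a placeholder path; check your version's build-from-source docs:
```bash
# Placeholder path: reuse a wheel you already downloaded instead of
# fetching it again on every editable install.
export VLLM_USE_PRECOMPILED=1
export VLLM_PRECOMPILED_WHEEL_LOCATION=/path/to/vllm-precompiled.whl
uv pip install --editable .
```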
huggingface/transformers | 42,791 | Add support for GPT_OSS with tp_plan or enable native tensor parallelism | ### Model description
https://huggingface.co/docs/transformers/main/perf_infer_gpu_multi?tp_plan=auto+plan
> https://github.com/huggingface/transformers/issues/41819
There is a list of supported models here, but GPT-OSS is not one of them. Please add support for GPT_OSS too to enable `tp_plan`. Please help...
"New model"
] | 2025-12-11T04:31:19Z | 2025-12-19T08:38:31Z | 1 | quic-akuruvil |
sgl-project/sglang | 14,868 | How to train vicuna EAGLE3 model? | I have carefully reviewed the official tutorials and source code, but I was unable to find the relevant config and template files specific to Vicuna.
Could you please provide an example, specifically regarding the template structure? | https://github.com/sgl-project/sglang/issues/14868 | open | [] | 2025-12-11T03:59:39Z | 2025-12-11T03:59:39Z | 0 | Sylvan820 |
vllm-project/vllm | 30,447 | [Usage]: how to load kv cache data into local file | ### Your current environment
pthon3.10+vllm0.10.0
### How would you like to use vllm
I want to get int8 kv cache data from [qwen-int8](https://www.modelscope.cn/models/Qwen/Qwen-7B-Chat-Int8). I don't know if vllm can do that. Thank you.
### Before submitting a new issue...
- [x] Make sure you already searched... | https://github.com/vllm-project/vllm/issues/30447 | open | [
"usage"
] | 2025-12-11T01:43:58Z | 2025-12-12T15:11:50Z | 1 | chx725 |
vllm-project/vllm | 30,441 | [Usage]: vllm serve setup issues on B300 | ### Your current environment
The output of `python collect_env.py`
```text
Collecting environment information...
uv is set
==============================
System Info
==============================
OS : Amazon Linux 2023.9.20251208 (x86_64)
GCC version : (GCC) 11.5.0... | https://github.com/vllm-project/vllm/issues/30441 | open | [
"usage"
] | 2025-12-10T23:50:27Z | 2025-12-13T02:01:04Z | 1 | navmarri14 |
sgl-project/sglang | 14,824 | Throughput degradation on Qwen3-30B-A3B with EAGLE3 | I observed a throughput degradation when trying to use EAGLE3 to speed up Qwen3-30B-A3B (on 2x H100).
I suspect the overhead might be overshadowing the gains. It would be great if we could have some profiling analysis to pinpoint exactly where the cost is coming from.
Also, tuning parameters for MoE models feels much... | https://github.com/sgl-project/sglang/issues/14824 | open | [] | 2025-12-10T14:22:05Z | 2025-12-19T21:36:54Z | 1 | Zzsf11 |
vllm-project/vllm | 30,392 | [Bug]: Docker image v0.12.0 fails to serve via Docker image | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version : C... | https://github.com/vllm-project/vllm/issues/30392 | open | [
"usage"
] | 2025-12-10T13:43:59Z | 2026-01-04T14:24:56Z | 7 | kuopching |
huggingface/transformers | 42,771 | FSDP of Trainer does not work well with Accelerate | ### System Info
- `transformers` version: 4.57.3
- Platform: Linux-6.6.97+-x86_64-with-glibc2.35
- Python version: 3.11.11
- Huggingface_hub version: 0.36.0
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2... | https://github.com/huggingface/transformers/issues/42771 | open | [
"bug"
] | 2025-12-10T12:54:49Z | 2025-12-11T07:07:19Z | 2 | gouchangjiang |
vllm-project/vllm | 30,381 | [Usage]: | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues... | https://github.com/vllm-project/vllm/issues/30381 | closed | [
"usage"
] | 2025-12-10T09:27:51Z | 2025-12-10T09:28:26Z | 0 | tobeprozy |
vllm-project/vllm | 30,380 | [Usage]: How does everyone usually use vllm/tests? | ### Your current environment
anywhere
### How would you like to use vllm
I don't know how to use vllm test.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/lat... | https://github.com/vllm-project/vllm/issues/30380 | open | [
"usage"
] | 2025-12-10T09:27:46Z | 2025-12-10T13:19:18Z | 1 | tobeprozy |
vllm-project/vllm | 30,379 | [Usage]: how to use vllm/tests/? | ### Your current environment
How does everyone usually use [vllm](https://github.com/vllm-project/vllm/tree/main)/[tests](https://github.com/vllm-project/vllm/tree/main/tests)?
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitt... | https://github.com/vllm-project/vllm/issues/30379 | closed | [
"usage"
] | 2025-12-10T09:25:52Z | 2025-12-10T09:26:25Z | 0 | tobeprozy |
vllm-project/vllm | 30,375 | [Bug]: [TPU] ShapeDtypeStruct error when loading custom safetensors checkpoint on TPU v5litepod | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
PyTorch version: 2.9.0+cu128
vLLM version: 0.12.0 (vllm-tpu)
JAX version: 0.8.0
Python version: 3.12.8 (main, Jan 14 2025, 22:49:14) [Clang 19.1.6]
TPU: v5litepod-4 (4 chips, single host)
OS: Amazon Linux 2023 ... | https://github.com/vllm-project/vllm/issues/30375 | open | [
"bug"
] | 2025-12-10T08:12:57Z | 2025-12-11T05:34:19Z | 1 | Baltsat |
sgl-project/sglang | 14,800 | How should we set piecewise-cuda-graph-max-tokens according to TP DP and chunked-prefill-size? | How should we set piecewise-cuda-graph-max-tokens according to TP DP and chunked-prefill-size?
For TP only, should we set piecewise-cuda-graph-max-tokens = chunked-prefill-size?
and for DP attention DP<=TP, should we set piecewise-cuda-graph-max-tokens = chunked-prefill-size/DP?
Thanks. | https://github.com/sgl-project/sglang/issues/14800 | open | [] | 2025-12-10T07:26:36Z | 2025-12-10T07:26:36Z | 0 | llc-kc |
sgl-project/sglang | 14,783 | [Bug][ConvertLinalgRToBinary] encounters error: bishengir-compile: Unknown command line argument '--target=Ascend910B2C'. Try: '/usr/local/Ascend/ascend-toolkit/latest/bin/bishengir-compile --help' bishengir-compile: Did you mean '--pgso=Ascend910B2C'? | ### Checklist
- [x] I searched related issues but found no solution.
- [ ] The bug persists in the latest version.
- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [ ] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/14783 | closed | [
"npu"
] | 2025-12-10T03:54:50Z | 2025-12-13T12:28:26Z | 1 | rsy-hub4121 |
huggingface/transformers | 42,757 | cannot import name 'is_offline_mode' from 'huggingface_hub' | ### System Info
- transformers-5.0.0
- huggingface_hub-1.2.1
```
ImportError: cannot import name 'is_offline_mode' from 'huggingface_hub' (/root/miniconda3/envs/transformers/lib/python3.10/site-packages/huggingface_hub/__init__.py)
```
### Who can help?
_No response_
### Information
- [ ] The official example scri... | https://github.com/huggingface/transformers/issues/42757 | closed | [
"bug"
] | 2025-12-10T02:43:43Z | 2025-12-23T17:15:20Z | 0 | dollarser |
vllm-project/vllm | 30,359 | [RFC] [QeRL]: Online Quantization and Model Reloading | ### Motivation.
## What is Quantized Model Reloading and Why is it Useful?
vLLM serves not only as an inference runtime for serving requests from end users, but also as a means of serving requests for large language model post-training. One particularly important use case is using vLLM to serve rollouts (required by ...
"RFC"
] | 2025-12-09T21:24:20Z | 2025-12-19T18:19:22Z | 8 | kylesayrs |
vllm-project/vllm | 30,358 | [Bug]: NIXL PD disaggregate with host_buffer has accuracy issue - Prefill scheduled num_block mismatch at update_state_after_alloc and request_finished | ### Your current environment
vllm-commit-id: 73a484caa1ad320d6e695f098c25c479a71e6774
Tested with A100
### 🐛 Describe the bug
How to reproduce
```
PREFILL_BLOCK_SIZE=16 DECODE_BLOCK_SIZE=16 bash tests/v1/kv_connector/nixl_integration/run_accuracy_test.sh --kv_buffer_device cpu
```
accuracy is ~0.3 much lower tha... | https://github.com/vllm-project/vllm/issues/30358 | open | [
"bug"
] | 2025-12-09T20:15:48Z | 2025-12-10T17:07:38Z | 3 | xuechendi |
huggingface/datasets | 7,900 | `Permission denied` when sharing cache between users | ### Describe the bug
We want to use `datasets` and `transformers` on a shared machine. Right now, each user has a separate HF_HOME in their home directory. To reduce duplicates of the datasets, we want to share that cache. While experimenting, we are running into `Permission denied` errors.
It looks like this was sup... | https://github.com/huggingface/datasets/issues/7900 | open | [] | 2025-12-09T16:41:47Z | 2025-12-16T15:39:06Z | 2 | qthequartermasterman |
sgl-project/sglang | 14,746 | Cannot join SGL slack Channel | same issue with [#3929](https://github.com/sgl-project/sglang/issues/3929) and [#11983](https://github.com/sgl-project/sglang/issues/11983)
Can we get a new invitation link? Thanks a lot! | https://github.com/sgl-project/sglang/issues/14746 | closed | [] | 2025-12-09T15:43:51Z | 2025-12-10T08:33:01Z | 2 | alphabetc1 |
huggingface/transformers | 42,740 | How to train TrOCR with transformers 4.57+? | I trained TrOCR with transformers 4.15 and the results were correct, but with 4.57.1 the accuracy is always 0. I couldn't find the reason. Can TrOCR be trained successfully with the latest transformers? | https://github.com/huggingface/transformers/issues/42740 | open | [] | 2025-12-09T14:07:50Z | 2026-01-05T06:46:34Z | null | cqray1990 |
huggingface/transformers | 42,739 | How about adding local kernel loading to `transformers.KernelConfig()` | ### Feature request
As title.
### Motivation
Currently, the class `KernelConfig()` creates the `kernel_mapping` through the `LayerRepository` provided by `huggingface/kernels`. The `LayerRepository` downloads and loads kernels from the hub. I think adding the ability for it to load kernels locally should be very helpf...
"Feature request"
] | 2025-12-09T12:22:41Z | 2025-12-17T01:21:57Z | null | zheliuyu |
huggingface/peft | 2,945 | Return base model state_dict with original keys | ### Feature request
TL;DR: `from peft import get_base_model_state_dict`
Hi!
I'm looking for a way to get the state dict of the base model after it has been wrapped in a `PeftModel` while preserving the original model's state dict keys. To the best of my knowledge, the only way this can be done right now is getting t... | https://github.com/huggingface/peft/issues/2945 | open | [] | 2025-12-09T11:23:52Z | 2025-12-09T17:06:13Z | 6 | dvmazur |
vllm-project/vllm | 30,325 | [Performance]: Can we enable triton_kernels on sm120 | ### Proposal to improve performance
Since PR (https://github.com/triton-lang/triton/pull/8498) has been merged, we may enable triton_kernels on sm120.
https://github.com/vllm-project/vllm/blob/67475a6e81abea915857f82e6f10d80b03b842c9/vllm/model_executor/layers/quantization/mxfp4.py#L153-L160
Although I haven't looke... | https://github.com/vllm-project/vllm/issues/30325 | open | [
"performance"
] | 2025-12-09T09:21:04Z | 2025-12-10T10:16:18Z | 2 | ijpq |
vllm-project/vllm | 30,296 | [Usage]: Is it possible to configure P2P kv-cache in multi-machine and multi-gpu scenarios? | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Cou... | https://github.com/vllm-project/vllm/issues/30296 | open | [
"usage"
] | 2025-12-09T03:29:48Z | 2025-12-09T03:29:48Z | 0 | lululu-1997 |
huggingface/trl | 4,641 | Further improving `GRPOTrainer` doc to include Qwen SAPO in Loss Types | ### Feature request
Hello,
I'd like to further document the Qwen SAPO implementation from @pramodith , not in the `paper_index` (he already did a good job) but in the `loss-types` subsection of the `GRPOTrainer`: https://huggingface.co/docs/trl/main/en/grpo_trainer#loss-types.
I'd like to add the formula, a short pa... | https://github.com/huggingface/trl/issues/4641 | closed | [
"📚 documentation",
"✨ enhancement",
"🏋 GRPO"
] | 2025-12-08T20:06:59Z | 2025-12-12T17:28:06Z | 1 | casinca |
huggingface/transformers | 42,713 | Multimodal forward pass for Ministral 3 family | ### System Info
https://github.com/huggingface/transformers/blob/main/src/transformers/models/ministral3/modeling_ministral3.py#L505
It seems like here we are using a generic class which takes only the input ids as input, ignoring the pixel values. When can we expect this to be implemented?
### Who can help?
@Cyrilvallez
... | https://github.com/huggingface/transformers/issues/42713 | closed | [
"bug"
] | 2025-12-08T18:46:14Z | 2025-12-15T11:21:08Z | 4 | rishavranaut |
vllm-project/vllm | 30,271 | [Usage]: Qwen 3 VL Embedding | ### Your current environment
Hi I would like to ask if there is a way to extract Qwen 3 VL multimodal embeddings, similar to Jina Embeddings V4, for retrieval purposes?
I've tried to initialize the model this way but it doesn't work:
```
model = LLM(
model="Qwen/Qwen3-VL-8B-Instruct",
task="embed",
trust_... | https://github.com/vllm-project/vllm/issues/30271 | closed | [
"usage"
] | 2025-12-08T17:26:41Z | 2025-12-09T07:18:35Z | 2 | MingFengC |
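For comparison, the pooling path that vLLM does support for text embedding models looks like the sketch below; per this thread, Qwen3-VL is not yet wired into it (the model name here is illustrative):
```python
from vllm import LLM

# Pooling usage for models that support task="embed"; per the thread,
# Qwen3-VL does not go through this path yet.
llm = LLM(model="Qwen/Qwen3-Embedding-0.6B", task="embed")
outputs = llm.embed(["a photo of a cat"])
print(len(outputs[0].outputs.embedding))  # embedding dimensionality
```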
huggingface/optimum | 2,390 | Request for input shapes to be specified | ### Feature request
Currently,
optimum-cli does not provide a way to specify static input shapes; it defaults to dynamic shapes. Is there a way to make it possible to specify the input shape? If not, why do we not allow this?
An example would be:
`optimum-cli export openvino --model microsoft/resnet-50 graph_convert... | https://github.com/huggingface/optimum/issues/2390 | open | [] | 2025-12-08T15:24:04Z | 2025-12-20T19:38:02Z | 3 | danielliuce |
huggingface/transformers | 42,698 | parse_response must not accept detokenized text | ### System Info
[parse_response](https://github.com/huggingface/transformers/blob/5ee9ffe386c5ecc77d8009ab648b8c4c109931ea/src/transformers/tokenization_utils_base.py#L3525) function must only accept raw tokens, but never detokenized text. Parsing from text is a vulnerability and therefore must not be possible.
Once ... | https://github.com/huggingface/transformers/issues/42698 | open | [
"bug"
] | 2025-12-08T12:20:39Z | 2025-12-08T15:59:19Z | 2 | kibergus |
vllm-project/vllm | 30,248 | [Feature]: any plan to support Relaxed Acceptance in v1? | ### 🚀 The feature, motivation and pitch
[NV Relaxed Acceptance](https://github.com/NVIDIA/TensorRT-LLM/blob/main/docs/source/blogs/tech_blog/blog2_DeepSeek_R1_MTP_Implementation_and_Optimization.md#relaxed-acceptance)
There are PRs ([vllm](https://github.com/vllm-project/vllm/pull/21506), [vllm](https://github.com/vl... | https://github.com/vllm-project/vllm/issues/30248 | open | [
"feature request"
] | 2025-12-08T08:45:20Z | 2025-12-09T10:18:22Z | 4 | chengda-wu |
vllm-project/vllm | 30,246 | [Usage]: How to disable reasoning for gpt-oss-120b | ### Your current environment
```
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version : Could not collect
CMake version ... | https://github.com/vllm-project/vllm/issues/30246 | open | [
"usage"
] | 2025-12-08T08:23:58Z | 2025-12-08T08:23:58Z | 0 | WiiliamC |
huggingface/transformers | 42,690 | How to run Phi4MultimodalProcessor | ### System Info
transformers version: 4.57.1
python version: 3.9
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give deta... | https://github.com/huggingface/transformers/issues/42690 | open | [
"bug"
] | 2025-12-08T03:27:02Z | 2025-12-09T12:30:27Z | null | wcrzlh |
vllm-project/vllm | 30,222 | [Bug]: gpt-oss response api: streaming + code interpreter has bugs | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
Gpt-oss in streaming mode cannot see internal code interpreter output
the problem is with https://github.com/vllm-... | https://github.com/vllm-project/vllm/issues/30222 | open | [
"bug"
] | 2025-12-08T01:32:35Z | 2025-12-08T09:49:55Z | 4 | jordane95 |
vllm-project/vllm | 30,211 | [Bug]: How to make vLLM support multi stream torch compile and each stream capture cuda graph. | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
SGLang now supports multi-stream torch compile where each stream captures a CUDA graph. The code link is
https://git... | https://github.com/vllm-project/vllm/issues/30211 | open | [
"bug",
"feature request",
"nvidia"
] | 2025-12-07T15:12:04Z | 2025-12-15T05:39:39Z | 3 | lambda7xx |
vllm-project/vllm | 30,193 | [Bug]: Behavioral Difference in hidden_states[-1] between vLLM and Transformers for Qwen3VLForConditionalGeneration | ### Your current environment
- vLLM Version: 0.11.2
- Transformers Version: 4.57
- Model: Qwen3VLForConditionalGeneration
### 🐛 Describe the bug
I have observed an inconsistency in the output of the forward method for the `Qwen3VLForConditionalGeneration` class between vLLM (version 0.11.2) and Transformers (version ... | https://github.com/vllm-project/vllm/issues/30193 | closed | [
"bug"
] | 2025-12-07T04:50:11Z | 2025-12-16T03:24:00Z | 3 | guodongxiaren |
huggingface/transformers | 42,674 | Missing imports for DetrLoss and DetrHungarianMatcher | Previously, I was able to import these classes as
```
from transformers.models.detr.modeling_detr import DetrLoss, DetrObjectDetectionOutput, DetrHungarianMatcher
```
In v4.57.3, the import fails and I also cannot find DetrLoss or DetrHungarianMatcher anywhere in the codebase. Have they been removed/replaced with an ... | https://github.com/huggingface/transformers/issues/42674 | open | [] | 2025-12-06T15:32:14Z | 2026-01-06T08:02:43Z | 1 | sammlapp |
vllm-project/vllm | 30,163 | [Usage]: Help Running NVFP4 model on 2x DGX Spark with vLLM + Ray (multi-node) | ### Your current environment
# Help: Running NVFP4 model on 2x DGX Spark with vLLM + Ray (multi-node)
## Hardware
- **2x DGX Spark** (GB10 GPU each, sm_121a / compute capability 12.1)
- Connected via 200GbE ConnectX-7/Ethernet
- Driver: 580.95.05, Host CUDA: 13.0
## Goal
Run `lukealonso/GLM-4.6-NVFP4` (357B MoE mode... | https://github.com/vllm-project/vllm/issues/30163 | open | [
"usage"
] | 2025-12-06T00:24:52Z | 2025-12-07T16:22:40Z | 2 | letsrock85 |
huggingface/accelerate | 3,876 | Why TP can't be used with pure DP? | As per [this](https://github.com/huggingface/accelerate/blob/b9ca0de682f25f15357a3f9f1a4d94374a1d451d/src/accelerate/parallelism_config.py#L332), we can not be use TP along with pure DP (or DDP). We need to shard the model across further nodes by specifying dp_shard_size as well. Why this limitation exists? Is it just ... | https://github.com/huggingface/accelerate/issues/3876 | open | [] | 2025-12-05T16:11:22Z | 2025-12-26T10:07:09Z | 3 | quic-meetkuma |
huggingface/lerobot | 2,589 | Clarification on XVLA folding checkpoint | Hi Lerobot team, great work on the XVLA release!
I have tried finetuning on my custom dataset and have a few clarifications:
1. Is the [lerobot/xvla-folding](https://huggingface.co/lerobot/xvla-folding) checkpoint finetuned on [lerobot/xvla-soft-fold](https://huggingface.co/datasets/lerobot/xvla-soft-fold)?
- I a... | https://github.com/huggingface/lerobot/issues/2589 | open | [
"question",
"policies"
] | 2025-12-05T11:42:46Z | 2025-12-22T08:43:05Z | null | brycegoh |
vllm-project/vllm | 30,129 | [Feature]: About video input for qwen3vl | ### 🚀 The feature, motivation and pitch
I tried using base64 encoding to provide video input for vllm inference, but it seems this input method is not yet supported by Qwen3VL (I've seen similar issues reported elsewhere). Currently, I can only specify parameters like fps/maximum frames and then pass the local path o... | https://github.com/vllm-project/vllm/issues/30129 | open | [
"feature request"
] | 2025-12-05T10:32:06Z | 2025-12-19T03:32:30Z | 4 | lingcco |
huggingface/sentence-transformers | 3,585 | How to choose negative instance when using MultipleNegativesRankingLoss train embedding model? | Firstly, I am still confused how to choose negative instance if I use MultipleNegativesRankingLoss, in https://github.com/huggingface/sentence-transformers/blob/main/sentence_transformers/losses/MultipleNegativesRankingLoss.py# L113
`embeddings = [self.model(sentence_feature)["sentence_embedding"] for sentence_feature ... | https://github.com/huggingface/sentence-transformers/issues/3585 | open | [] | 2025-12-05T09:50:26Z | 2025-12-09T11:49:26Z | null | 4daJKong |
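The short answer in that code path is that MNRL uses in-batch negatives: for each anchor, the positives of the other examples in the batch serve as its negatives, so no explicit negatives need to be mined. A minimal sketch with the classic `InputExample`/`model.fit` API:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Only (anchor, positive) pairs are needed: within a batch, every other
# example's positive acts as a negative for this anchor.
train_examples = [
    InputExample(texts=["what is the capital of france",
                        "Paris is the capital of France."]),
    InputExample(texts=["tallest mountain on earth",
                        "Mount Everest is the tallest mountain."]),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=2)
loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=0)
```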
vllm-project/vllm | 30,124 | [Bug]: How to run DeepSeek-V3.2 on 2 H100 nodes? |
### 🐛 Describe the bug
How to run DeepSeek-V3.2 on 2 H100 nodes?
I only found the cmd for H200/B200:
vllm serve deepseek-ai/DeepSeek-V3.2 -tp 8
but it does not work in multi-node scenarios (e.g., 2 H100 nodes).
So what should the cmd be for two H100 nodes?
how should params --tp/--dp/--pp be configured?
### Befo... | https://github.com/vllm-project/vllm/issues/30124 | open | [
"bug"
] | 2025-12-05T09:40:45Z | 2025-12-14T08:57:52Z | 2 | XQZ1120 |
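A common (unofficial) layout for two nodes is TP within each node and PP across nodes, with Ray as the distributed backend; whether DeepSeek-V3.2 actually fits in 2x8 H100 memory is a separate question:
```bash
# Unofficial sketch: start a Ray cluster across the two nodes first
# (ray start --head on node 0, ray start --address=... on node 1), then:
vllm serve deepseek-ai/DeepSeek-V3.2 \
  --tensor-parallel-size 8 \
  --pipeline-parallel-size 2 \
  --distributed-executor-backend ray
```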
vllm-project/vllm | 30,121 | [Feature]: Could you please provide Chinese documentation for vLLM? 😊 | ### 🚀 The feature, motivation and pitch
Could you please provide Chinese documentation for vLLM? 😊
### Alternatives
Could you please provide Chinese documentation for vLLM? 😊
### Additional context
Could you please provide Chinese documentation for vLLM? 😊
### Before submitting a new issue...
- [x] Make su... | https://github.com/vllm-project/vllm/issues/30121 | open | [
"feature request"
] | 2025-12-05T08:13:46Z | 2025-12-08T04:31:05Z | 4 | moshilangzi |
huggingface/transformers | 42,641 | Cannot inference llava-next with transformers==4.57.1 on dtype="auto" bug | ### System Info
```
- `transformers` version: 4.57.1
- Platform: Linux-5.15.0-161-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.35.3
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (a... | https://github.com/huggingface/transformers/issues/42641 | open | [
"bug"
] | 2025-12-05T04:39:35Z | 2025-12-23T11:08:56Z | 5 | rebel-seinpark |
vllm-project/vllm | 30,098 | [Doc]: Misleading Logic & Docstring in `block_quant_to_tensor_quant` (Block FP8) | ### 📚 The doc issue
The docstring and implementation of the `block_quant_to_tensor_quant` function have a critical mismatch regarding the dequantization process, leading to numerical errors when used outside of specific fused kernel backends.
### Problematic Function
The function is currently implemented as:
```py... | https://github.com/vllm-project/vllm/issues/30098 | closed | [
"documentation"
] | 2025-12-05T02:12:07Z | 2025-12-24T17:22:50Z | 0 | xqoasis |
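For framing, here is a generic sketch of what block-to-tensor requantization means — dequantize with per-block scales, then requantize with one per-tensor scale under the usual fp8 e4m3 max of 448. This is an illustration of the concept only, not vLLM's implementation:
```python
import torch

def block_quant_to_tensor_quant_sketch(x_q: torch.Tensor,
                                       block_scales: torch.Tensor,
                                       block: int = 128):
    """Illustration: per-block dequantize, then per-tensor requantize."""
    # Expand the (M//block, N//block) scale grid to elementwise scales.
    scales = (block_scales.repeat_interleave(block, dim=0)
                          .repeat_interleave(block, dim=1))
    x = x_q.float() * scales[: x_q.shape[0], : x_q.shape[1]]  # dequantize
    tensor_scale = x.abs().max() / 448.0  # 448 = fp8 e4m3 max magnitude
    x_tensor_q = (x / tensor_scale).clamp(-448.0, 448.0)
    return x_tensor_q, tensor_scale
```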
huggingface/transformers | 42,638 | Routing Replay for MoEs | ### Feature request
Recent RL approaches for training MoE models increasingly rely on **Routing Replay**, as described in the following papers:
- https://huggingface.co/papers/2507.18071
- https://huggingface.co/papers/2510.11370
- https://huggingface.co/papers/2512.01374
Without going into the training details, Rout... | https://github.com/huggingface/transformers/issues/42638 | open | [
"Feature request"
] | 2025-12-04T23:58:14Z | 2025-12-05T16:29:05Z | 2 | qgallouedec |
vllm-project/vllm | 30,084 | [Performance]: Should I expect linear scaling with pure DP? | ### Proposal to improve performance
_No response_
### Report of performance regression
_No response_
### Misc discussion on performance
I decided to benchmark vLLM 0.11.2 with pure DP of Qwen/Qwen2.5-32B-Instruct deployment (before benchmarking DP+EP with Qwen/Qwen3-30B-A3B-Instruct-2507) on DP1 vs DP8 (H200):
DP1... | https://github.com/vllm-project/vllm/issues/30084 | open | [
"performance"
] | 2025-12-04T19:52:45Z | 2025-12-16T04:09:24Z | 7 | pbelevich |
vllm-project/vllm | 30,082 | [Usage]: Turn off reasoning for Kimi-K2-Thinking? | ### Your current environment
```text
Output of collect_env.py-
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang... | https://github.com/vllm-project/vllm/issues/30082 | open | [
"usage"
] | 2025-12-04T19:32:13Z | 2025-12-08T23:02:58Z | 2 | vikrantdeshpande09876 |
vllm-project/vllm | 30,075 | [Feature]: Default eplb num_redundant_experts to the lowest valid value if unspecified | ### 🚀 The feature, motivation and pitch
EPLB requires the number of experts to be chosen up front and there is a known minimum valid value that can be derived from the vllm startup configuration. Since extra EPLB experts trades kv cache memory for potential performance improvements, but that is not guaranteed to pay... | https://github.com/vllm-project/vllm/issues/30075 | open | [
"help wanted",
"good first issue",
"feature request"
] | 2025-12-04T18:19:03Z | 2025-12-20T21:00:23Z | 4 | smarterclayton |
vllm-project/vllm | 30,058 | [Feature]: Multi-Adapter Support for Embed Qwen3 8B Embedding Model | ### 🚀 The feature, motivation and pitch
Hi Team, do we currently have multi-adapter (LoRA) support for embedding models, specifically the Qwen3 8B Embedding model? If not, when can we expect the support? Thanks :)
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issu... | https://github.com/vllm-project/vllm/issues/30058 | open | [
"feature request"
] | 2025-12-04T12:05:15Z | 2025-12-04T19:42:04Z | 4 | dawnik17 |
huggingface/accelerate | 3,873 | How to specify accelerate launch yaml config item when running with torchrun | I've read the doc [Launching Accelerate scripts](https://huggingface.co/docs/accelerate/basic_tutorials/launch), and would like to launch with torchrun. However, the doc does not mention how to specify configs like `distribute_type` when using torchrun.
What are the equivalents of these configurations when using torchr...
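A rough, unofficial mapping: process-count and rendezvous settings move to torchrun flags, while options like mixed precision move into the script via `Accelerator(...)`:
```bash
# Rough equivalents (an assumption, not an official table):
#   num_processes / num_machines  ->  --nproc_per_node / --nnodes
#   main_process_ip + port        ->  --rdzv_endpoint
# distributed_type is inferred from the launched environment, and options
# like mixed_precision move into the script, e.g.
#   accelerator = Accelerator(mixed_precision="bf16")
torchrun --nnodes=2 --nproc_per_node=8 \
  --rdzv_backend=c10d --rdzv_endpoint=$MASTER_ADDR:29500 \
  train.py
```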
huggingface/lerobot | 2,580 | How can the leader arm be synchronized to follow the follower arm during inference? | https://github.com/huggingface/lerobot/issues/2580 | open | [] | 2025-12-04T07:22:07Z | 2025-12-11T02:53:11Z | null | zhoushaoxiang | |
vllm-project/vllm | 30,023 | [Feature]: Support qwen3next with GGUF? | ### 🚀 The feature, motivation and pitch
With v0.11.0, `vllm` reports:
```
vllm | (APIServer pid=1) ValueError: GGUF model with architecture qwen3next is not supported yet.
```
https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking-GGUF
I did a simple dig for this, seems the vllm has support of `Qwen3-Next` as ar... | https://github.com/vllm-project/vllm/issues/30023 | open | [
"feature request"
] | 2025-12-04T03:40:26Z | 2025-12-18T05:31:57Z | 0 | zeerd |
vllm-project/vllm | 29,998 | [Bug]: cannot send two POSTs to /v1/chat/completions endpoint with identical tool function name with model GPT-OSS-120B | ### Your current environment
<details>
<summary>The bug is reproducible with docker image vllm/vllm-openai:v0.12.0</summary>
```yaml
services:
vllm-gptoss-large:
image: vllm/vllm-openai:v0.12.0
restart: always
shm_size: '64gb'
deploy:
resources:
reservations:
devices:
... | https://github.com/vllm-project/vllm/issues/29998 | open | [
"bug"
] | 2025-12-03T21:41:35Z | 2025-12-19T15:53:43Z | 14 | pd-t |
huggingface/transformers | 42,589 | Incorrect tokenization `tokenizers` for escaped strings / Mismatch with `mistral_common` | ### System Info
```
In [3]: mistral_common.__version__
Out[3]: '1.8.6'
```
```
In [4]: import transformers; transformers.__version__
Out[4]: '5.0.0.dev0'
```
```
In [5]: import tokenizers; tokenizers.__version__
Out[5]: '0.22.1'
```
### Who can help?
@ArthurZucker @itazap
### Information
- [ ] The official exam... | https://github.com/huggingface/transformers/issues/42589 | closed | [
"bug"
] | 2025-12-03T10:57:35Z | 2025-12-16T10:45:35Z | 5 | patrickvonplaten |
huggingface/diffusers | 12,781 | Impossible to log into Huggingface/Diffusers Discord | ### Describe the bug
When trying to verify my Discord/Huggingface account, no matter what I do, I end up with this message:
<img width="512" height="217" alt="Image" src="https://github.com/user-attachments/assets/d1d0f18b-c80f-4862-abde-fb49ee505ddd" />
Has the HF Discord died? If that is the case, what alternative... | https://github.com/huggingface/diffusers/issues/12781 | closed | [
"bug"
] | 2025-12-03T09:42:55Z | 2025-12-04T15:11:42Z | 4 | tin2tin |
vllm-project/vllm | 29,944 | [Usage]: It seems that the prefix cache has not brought about any performance benefits. | ### Your current environment
```
root@ubuntu:/vllm-workspace# python3 collect_env.py
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~... | https://github.com/vllm-project/vllm/issues/29944 | open | [
"usage"
] | 2025-12-03T07:03:49Z | 2025-12-03T07:04:37Z | 0 | wenba0 |
vllm-project/vllm | 29,940 | [Usage]: Qwen2-Audio-7B support | ### Your current environment
We encountered numerous peculiar issues during the Qwen2-Audio-7B conversion process. Do we currently support Qwen2-Audio-7B? If so, could you provide a demo?
Thank you very much!
### 🐛 Describe the bug
Refer to Whisper's demo
### Before submitting a new issue...
- [x] Make sure you ... | https://github.com/vllm-project/vllm/issues/29940 | closed | [
"usage"
] | 2025-12-03T06:04:07Z | 2025-12-04T14:23:05Z | 1 | freedom-cui |
huggingface/datasets | 7,893 | push_to_hub OOM: _push_parquet_shards_to_hub accumulates all shard bytes in memory | ## Summary
Large dataset uploads crash or hang due to memory exhaustion. This appears to be the root cause of several long-standing issues.
### Related Issues
This is the root cause of:
- #5990 - Pushing a large dataset on the hub consistently hangs (46 comments, open since 2023)
- #7400 - 504 Gateway Timeout when u... | https://github.com/huggingface/datasets/issues/7893 | closed | [] | 2025-12-03T04:19:34Z | 2025-12-05T22:45:59Z | 2 | The-Obstacle-Is-The-Way |
vllm-project/vllm | 29,920 | [Feature]: Add support for fused fp8 output to FlashAttention 3 | ### 🚀 The feature, motivation and pitch
On Hopper, we use FlashAttention as the default attention backend. When o-proj is quantized to fp8, we are leaving performance on the table as FA3 does not support fused output fp8 quant. With Triton/ROCm/AITER backends we saw up to 8% speedups with attention+quant fusion.
vLL... | https://github.com/vllm-project/vllm/issues/29920 | open | [
"help wanted",
"performance",
"feature request",
"torch.compile"
] | 2025-12-02T20:16:31Z | 2026-01-05T20:53:11Z | 4 | ProExpertProg |
vllm-project/vllm | 29,917 | [Feature]: VLLM_DISABLE_COMPILE_CACHE should be a config flag | ### 🚀 The feature, motivation and pitch
`vllm serve` does a nice printout of non-default config flags. VLLM_DISABLE_COMPILE_CACHE gets used enough that it should have an equivalent config flag for it
Offline @ProExpertProg mentioned we can treat it like VLLM_DEBUG_DUMP_PATH where we have both and the env var overrid... | https://github.com/vllm-project/vllm/issues/29917 | open | [
"help wanted",
"feature request",
"torch.compile"
] | 2025-12-02T20:06:01Z | 2025-12-05T05:19:12Z | 6 | zou3519 |
huggingface/inference-playground | 102 | How to know when a model is outdated? | I'm testing https://huggingface.co/chat/models/openai/gpt-oss-20b and there I asked this:
```
do you know any github repository created in 2025?
<p>Sure! Here are a few GitHub repositories that were created in 2025 (all with their public “created date” and a short description):</p>
Repository | Created | Short descri... | https://github.com/huggingface/inference-playground/issues/102 | open | [] | 2025-12-02T17:10:51Z | 2025-12-02T17:10:51Z | null | mingodad |
vllm-project/vllm | 29,875 | [Usage]: Is there a way to inject the grammar into the docker directly | ### Your current environment
```text
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version ... | https://github.com/vllm-project/vllm/issues/29875 | open | [
"usage"
] | 2025-12-02T12:30:56Z | 2025-12-03T11:53:43Z | 1 | chwundermsft |
vllm-project/vllm | 29,871 | [Usage]: Extremely low token input speed for DeepSeek-R1-Distill-Llama-70B | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version ... | https://github.com/vllm-project/vllm/issues/29871 | open | [
"usage"
] | 2025-12-02T11:25:25Z | 2025-12-02T15:30:53Z | 2 | muelphil |
vllm-project/vllm | 29,866 | [Doc]: | ### 📚 The doc issue
# Installing the XAI libraries
!pip install shap
!pip install lime
!pip install alibi
!pip install interpret
!pip install dalex
!pip install eli5
### Suggest a potential alternative/fix
# Installing the XAI libraries
!pip install shap
!pip install lime
!pip install alibi
!pip instal... | https://github.com/vllm-project/vllm/issues/29866 | closed | [
"documentation"
] | 2025-12-02T10:43:04Z | 2025-12-02T10:50:10Z | 0 | hassaballahmahamatahmat5-cpu |