| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
vllm-project/vllm | 29,865 | [Doc]: | ### 📚 The doc issue
# Installing the XAI libraries
!pip install shap
!pip install lime
!pip install alibi
!pip install interpret
!pip install dalex
!pip install eli5
### Suggest a potential alternative/fix
# Installing the XAI libraries
!pip install shap
!pip install lime
!pip install alibi
!pip instal... | https://github.com/vllm-project/vllm/issues/29865 | closed | [
"documentation"
] | 2025-12-02T10:43:01Z | 2025-12-02T10:50:00Z | 0 | hassaballahmahamatahmat5-cpu |
vllm-project/vllm | 29,864 | [Usage]: I am unable to run the GLM-4.5-Air-REAP-82B-A12B-nvfp4 model on an RTX 5090. | ### Your current environment
I am unable to run the GLM-4.5-Air-REAP-82B-A12B-nvfp4 model on an RTX 5090.
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version ... | https://github.com/vllm-project/vllm/issues/29864 | open | [
"usage"
] | 2025-12-02T10:13:31Z | 2025-12-05T17:06:30Z | 2 | east612-ai |
huggingface/diffusers | 12,772 | How to convert a diffusers model to wan2.2 format | I see convert_wan_to_diffusers.py in the diffusers repo, but no convert_diffusers_to_wan.py. Do you have plans to upload a conversion script?
| https://github.com/huggingface/diffusers/issues/12772 | open | [] | 2025-12-02T09:19:29Z | 2025-12-02T09:19:29Z | null | wikiwen |
huggingface/diffusers | 12,764 | When will the img2img pipeline of FLUX.2-dev be released? | I see that the current version(0.36.0-dev) only updated the text-to-image pipeline for Flux2. We are looking forward to the update of the image-to-image pipeline!
| https://github.com/huggingface/diffusers/issues/12764 | open | [] | 2025-12-01T11:25:35Z | 2025-12-01T11:41:56Z | 1 | guanxyu |
huggingface/smolagents | 1,890 | Question: how to use server-side tools provided by Google Gemini or OpenAI GPT? | Gemini has some server-side tools like google_search (https://ai.google.dev/gemini-api/docs/google-search) or google_map. OpenAI also has server-side tools like web_search. Does Smolagents support using such server-side tools from agents? If so, how? | https://github.com/huggingface/smolagents/issues/1890 | open | [] | 2025-12-01T05:16:01Z | 2025-12-23T10:49:45Z | null | victorx-deckard |
huggingface/agents-course | 623 | Message: Submission received, but no valid/matching task IDs were found in the 1 answers provided. Score did not improve previous record, leaderboard not updated. | I am correctly downloading the GAIA 2023 Level 1 validation dataset using snapshot_download and load_dataset. This submission is for Unit 4 Agent Course.
data_dir = snapshot_download(
repo_id="gaia-benchmark/GAIA",
repo_type="dataset"
)
dataset = load_dataset(data_dir, "2023_level1", spl... | https://github.com/huggingface/agents-course/issues/623 | open | [
"question"
] | 2025-12-01T02:09:21Z | 2025-12-01T02:09:21Z | null | ShwetaBorole |
huggingface/tokenizers | 1,902 | Guide: Compiling `tokenizers` on Android/Termux | Hello Hugging Face team and fellow developers,
This is a guide for anyone trying to install `tokenizers` (or packages that depend on it, like `transformers` or `docling`) on an Android device using [Termux](https://termux.dev/). Currently, there are no other issues mentioning Termux, so hopefully, this guide can help ... | https://github.com/huggingface/tokenizers/issues/1902 | open | [] | 2025-12-01T00:46:42Z | 2025-12-01T00:46:42Z | 0 | Manamama-Gemini-Cloud-AI-01 |
vllm-project/vllm | 29,747 | [Bug]: --scheduling-policy=priority & n>1 crashes engine | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
When running with priority scheduling, e.g.:
```bash
vllm serve Qwen/Qwen3-0.6B --scheduling-policy=priority
```
an... | https://github.com/vllm-project/vllm/issues/29747 | closed | [
"bug"
] | 2025-11-30T13:20:23Z | 2025-12-02T22:42:30Z | 3 | hibukipanim |
vllm-project/vllm | 29,735 | [Usage]:Accessing free_blocks count from LLMEngine or LLM ? | ### Your current environment
```text
None
```
### How would you like to use vllm
I'm doing research on key-value caching optimization. I want to know how to determine the number of free blocks during runtime. I tried manually creating the engine, but I couldn't find the method after searching through the code.
AI ke... | https://github.com/vllm-project/vllm/issues/29735 | closed | [
"usage"
] | 2025-11-29T19:21:50Z | 2025-12-05T14:01:42Z | 4 | H-T-H |
vllm-project/vllm | 29,722 | [RFC]: Add Balance Scheduling | ### Motivation.
**Limitations of the current vLLM v1 scheduling strategy**
vLLM v1 scheduling currently enables chunked prefill by default, which processes prefill and decode requests simultaneously in a single scheduling session. This can impact the overall system throughput and performance in some scenarios.
Balance... | https://github.com/vllm-project/vllm/issues/29722 | open | [
"RFC"
] | 2025-11-29T09:28:43Z | 2025-12-02T08:23:33Z | 0 | GDzhu01 |
vllm-project/vllm | 29,707 | [Usage]: Workaround to run model on GPUs with Compute Capability < 8.0? | ### Your current environment
Problem:
I am unable to run the Qwen3-VL-32B-Instruct-AWQ-4bit model due to a CUDA compute capability requirement. My hardware consists of two NVIDIA QUADRO RTX 5000 cards (16GB each, 32GB total) with a compute capability of 7.5. The software framework (likely a recent version of PyTorch o... | https://github.com/vllm-project/vllm/issues/29707 | closed | [
"usage"
] | 2025-11-29T00:47:39Z | 2025-11-30T06:04:29Z | 5 | seasoncool |
vllm-project/vllm | 29,679 | [Usage]: Get request total time | ### Your current environment
```text
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version ... | https://github.com/vllm-project/vllm/issues/29679 | closed | [
"usage"
] | 2025-11-28T14:03:16Z | 2025-12-01T09:34:12Z | 5 | chwundermsft |
huggingface/lerobot | 2,543 | Different finetune loss given policy.type=pi0 / policy.path=lerobot/pi0_base. What is the difference? | Hi, I have two different configurations:
1. ` --dataset.repo_id=BBBBBBob/libero_goal_lerobot \
--dataset.root=/home/j84403411/data/libero/libero_goal_lerobot \
--policy.path=lerobot/pi0_base \
--policy.push_to_hub=false \
--policy.use_proprio=true \
--output_dir=/home/j84403411/checkpoint/libero/pi0/libero_goal_pr... | https://github.com/huggingface/lerobot/issues/2543 | closed | [] | 2025-11-28T12:34:38Z | 2025-12-01T11:25:17Z | null | BBBBBBob |
huggingface/transformers.js | 1,467 | Missing the following inputs: input_points, input_labels (or input_boxes) | ### Question
Thanks for your excellent work!
I just wrote test code for the SlimSAM model powered by transformers.js, referring to this example (with some improvements): https://github.com/huggingface/transformers.js-examples/blob/main/segment-anything-webgpu/index.js
My code for the `decode` method:
```js
// Decode segment... | https://github.com/huggingface/transformers.js/issues/1467 | closed | [
"question"
] | 2025-11-28T10:01:04Z | 2025-12-01T04:04:59Z | null | sherlockchou86 |
vllm-project/vllm | 29,643 | [Usage]: Enabling Tool call in the Python SDK | ### Your current environment
Hi Team,
I am currently exploring VLLM to enable tool calling, and I need some support with this. It would be very helpful if you could provide the corresponding Python code.
What I’m trying to achieve is to configure the Python package with the same settings that I use when starting the... | https://github.com/vllm-project/vllm/issues/29643 | open | [
"usage"
] | 2025-11-28T04:39:47Z | 2025-12-01T14:54:47Z | 2 | Madan1215 |
vllm-project/vllm | 29,641 | [Bug]: Max Tokens not being honoured in Chat Completions for GPTOSS model | ### Your current environment
It seems that in the latest version of vllm (0.11+), Chat Completions has stopped honouring `max_tokens` with the GPTOSS 120B model. The request payload below has stopped working with `max_tokens`; earlier, the same payload would provide an output up to the limit of the `max_tokens` provided.
Inter... | https://github.com/vllm-project/vllm/issues/29641 | closed | [
"bug"
] | 2025-11-28T03:39:34Z | 2025-12-21T02:39:32Z | 16 | soodrohit |
huggingface/transformers | 42,464 | Add SAM 3D Objects Encoder | ### Model description
## Model Description
SAM 3D Objects is Meta AI's foundation model for 3D object reconstruction from single images. I'm proposing to add the **encoder component** (DINOv2-based Vision Transformer) to Transformers.
**Scope**: Encoder only, not the full 3D generation pipeline (which includes Gauss... | https://github.com/huggingface/transformers/issues/42464 | open | [
"New model"
] | 2025-11-27T19:48:28Z | 2025-12-05T10:32:33Z | 1 | Aznix07 |
vllm-project/vllm | 29,584 | [Usage]: Can KV Cache be disabled in non-autoregressive generation tasks? | ### Your current environment
```text
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version ... | https://github.com/vllm-project/vllm/issues/29584 | open | [
"usage"
] | 2025-11-27T05:30:08Z | 2025-12-05T02:40:28Z | 5 | GitEventhandler |
vllm-project/vllm | 29,574 | [Performance]: Using vLLM to accelerate VLM models, does the vision encoding part currently support parallel processing, or is it still being processed serially? | ### Proposal to improve performance
I found that currently, images of different sizes are processed sequentially, which significantly slows down the processing speed. How can we adapt to parallel processing? Should we resize or pad all images to the same size for batch processing, or can we run multiple encoder models... | https://github.com/vllm-project/vllm/issues/29574 | open | [
"performance"
] | 2025-11-27T03:51:36Z | 2025-11-27T10:54:09Z | 2 | NewZxy |
vllm-project/vllm | 29,564 | [Doc]: Make PyTorch profiler gzip and CUDA time dump configurable | ### 📚 The doc issue
We observed that enabling both use_gzip and dump_self_cuda_time_total in the vLLM torch profiler introduces significant overhead during profiling.
For example, when profiling 10 randomly generated requests (1000 input tokens, 200 output tokens) on an A100 using the Qwen3-32B model, we found that ... | https://github.com/vllm-project/vllm/issues/29564 | closed | [
"documentation"
] | 2025-11-27T02:21:20Z | 2025-12-01T04:30:48Z | 1 | zhangruoxu |
vllm-project/vllm | 29,562 | [Bug]: "\n\n" content between reasoning and tool_call content when tool_call and stream mode | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
https://github.com/QwenLM/Qwen3/issues/1755
When stream mode true, the response contains content "\n\n" between rea... | https://github.com/vllm-project/vllm/issues/29562 | open | [
"bug"
] | 2025-11-27T01:49:04Z | 2025-11-27T01:49:04Z | 0 | NiuBlibing |
vllm-project/vllm | 29,560 | [Doc]: Batch Invariance on Ampere Platforms | ### 📚 The doc issue
Does the batch invariance feature released in vllm 0.11.2 support the Ampere architecture? If adaptations are required, what modifications need to be made?
### Suggest a potential alternative/fix
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for releva... | https://github.com/vllm-project/vllm/issues/29560 | closed | [
"documentation"
] | 2025-11-27T01:06:49Z | 2025-11-27T14:21:30Z | 0 | luo1206 |
huggingface/trl | 4,582 | Does the GRPO Trainer support multi-image input for Qwen3-VL? | Does the GRPO Trainer support multi-image input for Qwen3-VL? | https://github.com/huggingface/trl/issues/4582 | open | [
"🏋 GRPO"
] | 2025-11-26T14:03:57Z | 2025-11-27T08:08:25Z | 1 | Lestoky |
huggingface/diffusers | 12,722 | How to run qwen-image in kaggle gpu T4 * 2 successfully? | ```python3
!python3 -m pip install -U diffusers peft bitsandbytes
import diffusers, torch, math
qwen = diffusers.QwenImagePipeline.from_pretrained('Qwen/Qwen-Image', torch_dtype=torch.float16, low_cpu_mem_usage=True, quantization_config=diffusers.PipelineQuantizationConfig(quant_backend='bitsandbytes_4bit', quant_kwarg... | https://github.com/huggingface/diffusers/issues/12722 | open | [] | 2025-11-26T12:53:30Z | 2025-11-28T03:54:07Z | null | chaowenguo |
vllm-project/vllm | 29,494 | [Doc]: Documentation inconsistency: Blog mentions append_slots() but codebase uses allocate_slots() | ### 📚 The doc issue
The Automatic Prefix Caching blog post mentions:
> "The scheduler calls kv_cache_manager.append_slots()"
However, the actual codebase uses a unified `kv_cache_manager.allocate_slots()` method that handles both prefill and decode requests.
**Location:**
- Blog: [[link to blog post](https://docs.v... | https://github.com/vllm-project/vllm/issues/29494 | closed | [
"documentation"
] | 2025-11-26T11:37:40Z | 2025-11-26T11:46:08Z | 1 | pradsgit |
huggingface/transformers | 42,418 | Custom nn.Parameter initialization in PreTrainedModel subclasses is overwritten by post_init()/from_pretrained() causing NaNs/Zeros | ### System Info
- `transformers` version: 4.57.1
- Platform: Linux-4.18.0-147.mt20200626.413.el8_1.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.14
- Huggingface_hub version: 0.35.3
- Safetensors version: 0.6.2
- Accelerate version: 1.11.0
- Accelerate config: not found
- DeepSpeed version: 0.18.2
- PyTorch v... | https://github.com/huggingface/transformers/issues/42418 | open | [
"Usage",
"Feature request",
"bug"
] | 2025-11-26T10:29:57Z | 2025-12-01T15:10:32Z | 10 | Noietch |
huggingface/diffusers | 12,720 | how to quantize wan 2.2 vace after loading lora? | ```python3
diffusers.WanVACEPipeline.from_pretrained('linoyts/Wan2.2-VACE-Fun-14B-diffusers', vae=diffusers.AutoencoderKLWan.from_pretrained('linoyts/Wan2.2-VACE-Fun-14B-diffusers', subfolder='vae', torch_dtype=torch.float32), torch_dtype=torch.bfloat16, quantization_config=diffusers.PipelineQuantizationConfig(quant_ba... | https://github.com/huggingface/diffusers/issues/12720 | open | [] | 2025-11-26T10:11:38Z | 2025-12-11T17:29:30Z | null | chaowenguo |
vllm-project/vllm | 29,489 | [Usage]: Removing last generated token from output and kv cache | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Cou... | https://github.com/vllm-project/vllm/issues/29489 | open | [
"usage"
] | 2025-11-26T09:35:37Z | 2025-11-26T09:36:37Z | 0 | josefdra |
huggingface/diffusers | 12,719 | how to use quantization and device_map=balance to run qwen-image on kaggle T4 * 2 | ```python3
!python3 -m pip install -U diffusers peft bitsandbytes protobuf
import diffusers, torch, math
qwen = diffusers.QwenImagePipeline.from_pretrained('Qwen/Qwen-Image', quantization_config=diffusers.PipelineQuantizationConfig(quant_backend='bitsandbytes_4bit', quant_kwargs={'load_in_4bit':True, 'bnb_4bit_quant_ty... | https://github.com/huggingface/diffusers/issues/12719 | open | [] | 2025-11-26T08:35:46Z | 2025-11-26T09:15:54Z | null | chaowenguo |
vllm-project/vllm | 29,474 | [P/D][Metrics] Consider combined/summed metrics (e.g. ttft and e2e_request_latency) for prefill and decode instances | ### Your current environment
<details>
<summary>Env info snipped</summary>
```
Collecting environment information...
uv is set
==============================
System Info
==============================
OS : Ubuntu 24.04.1 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ub... | https://github.com/vllm-project/vllm/issues/29474 | open | [
"usage",
"kv-connector"
] | 2025-11-26T02:50:17Z | 2025-11-26T08:31:18Z | 1 | mgw2168-1 |
vllm-project/vllm | 29,472 | [Installation]: how to Install vllm on dell promax gb10 | ### Your current environment
I failed to install vllm on dell promax gb10; messages as follows:
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Wed_Aug_20_01:57:39_PM_PDT_2025
Cuda compilation tools, release 13.0, V13.0.88
Build cuda_13.0.r13.0/compiler.3642471... | https://github.com/vllm-project/vllm/issues/29472 | open | [
"installation"
] | 2025-11-26T02:41:18Z | 2026-01-01T12:28:29Z | 2 | goactiongo |
vllm-project/vllm | 29,436 | [Bug]: vLLM Serve with LMCache enabled produces wrong output for GPT-OSS-20B | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
vLLM serve command with LMCache enabled produces wrong output with GPT OSS 20B for subsequent invocations with the s... | https://github.com/vllm-project/vllm/issues/29436 | open | [
"bug"
] | 2025-11-25T19:27:24Z | 2025-11-25T19:27:24Z | 0 | ksuma2109 |
vllm-project/vllm | 29,409 | [Usage]: Custom Logits Processors V1 how to get tokenizer into processor | ### Problem with tokenizer
For the second day now, I've been unable to figure out how to get a tokenizer inside a custom processor. I used the processor from the documentation as an example. I examined each object through debug, but couldn't find where to extract the tokenizer. In v0, this was done simply at the reque... | https://github.com/vllm-project/vllm/issues/29409 | closed | [
"usage"
] | 2025-11-25T13:24:17Z | 2025-12-02T10:33:18Z | 6 | cvadim130 |
vllm-project/vllm | 29,389 | [Bug]: race condition in shm_broadcast.py | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
# Problem
`ShmRingBuffer` is a lock-free queue, the implementation of which https://github.com/vllm-project/vllm/blo... | https://github.com/vllm-project/vllm/issues/29389 | open | [
"bug"
] | 2025-11-25T09:25:52Z | 2025-11-25T09:25:52Z | 0 | nvjullin |
vllm-project/vllm | 29,382 | [Doc]: Expert Parallel Deployment says "Tensor parallel size (always 1 for now)" is confusing | ### 📚 The doc issue
On page https://docs.vllm.ai/en/latest/serving/expert_parallel_deployment/#single-node-deployment it says Tensor parallel size can only be 1 but didn't mention the behavior of Attention Layers
On page https://docs.vllm.ai/en/latest/serving/data_parallel_deployment/ it says The expert layers will ... | https://github.com/vllm-project/vllm/issues/29382 | closed | [
"documentation"
] | 2025-11-25T07:54:42Z | 2025-12-13T17:38:01Z | 0 | xeonliu |
huggingface/transformers | 42,375 | SAM3 single image inference with multiple text prompt | Hi
I'm trying to run inference on a single image, aiming to get the bbox of objects from several different categories (e.g. "a person" and "a car").
The only example I found for prompting with multiple categories is in the "Batched Inference with Text Prompts" example, but then I need to unnecessarily duplicate my imag... | https://github.com/huggingface/transformers/issues/42375 | open | [] | 2025-11-25T06:20:09Z | 2026-01-05T16:16:01Z | 9 | iariav |
huggingface/trl | 4,569 | [doc issue] doc on "GRPO with replay buffer" buggy | ### Reproduction
The code example in [doc for "GRPO with replay buffer"](https://huggingface.co/docs/trl/main/en/experimental#grpo-with-replay-buffer) is kind of buggy.
- It imports `GRPOWithReplayBufferTrainer` but never used.
- It uses `GRPOWithReplayBufferConfig` but never imported
- The code is apparently not e... | https://github.com/huggingface/trl/issues/4569 | closed | [
"🐛 bug",
"📚 documentation",
"🏋 GRPO"
] | 2025-11-25T01:30:28Z | 2025-11-25T21:28:00Z | 2 | DNXie |
vllm-project/vllm | 29,306 | [Usage]: dots.llm.inst is not running due to a type error | ### Your current environment
I'm trying to run dots llm on 4xH100
```
vllm serve \
--uvicorn-log-level=info \
rednote-hilab/dots.llm1.inst \
--dtype auto \
--api-key xxx \
--host 0.0.0.0 \
--port 8000 \
--tensor-parallel-size 4
--ipc=host \
--trust-remote-code
```
It failed to run, I got the following crash... | https://github.com/vllm-project/vllm/issues/29306 | closed | [
"usage"
] | 2025-11-24T09:48:08Z | 2025-11-28T23:25:27Z | 1 | rain-1 |
huggingface/transformers | 42,353 | SAM3 point mode is not supported yet? | In [SAM3 official example](https://github.com/facebookresearch/sam3/blob/main/examples/sam3_for_sam1_task_example.ipynb
), they also support point mode. But it seems that transformers does not support it yet?
| https://github.com/huggingface/transformers/issues/42353 | closed | [] | 2025-11-24T07:16:52Z | 2025-11-26T15:16:25Z | 1 | haofanwang |
vllm-project/vllm | 29,297 | [Bug]: What should the image embedding input be like? I have tested with multiple cases but it all fails | ### Your current environment
```text
==============================
System Info
==============================
OS : Red Hat Enterprise Linux release 8.10 (Ootpa) (x86_64)
GCC version : (GCC) 8.5.0 20210514 (Red Hat 8.5.0-26)
Clang version : Could not co... | https://github.com/vllm-project/vllm/issues/29297 | closed | [
"usage"
] | 2025-11-24T06:02:09Z | 2025-11-26T13:00:17Z | 2 | DamonZhao-sfu |
vllm-project/vllm | 29,294 | [CPU Backend] [Doc]: Update Installation Docs for Arm CPUs | ### 📚 The doc issue
This page https://docs.vllm.ai/en/stable/getting_started/installation/cpu/#arm-aarch64 is very out-dated.
We now release Arm CPU wheels and images thanks to #26931 and #27331
We need to update that page to reflect that :)
### Suggest a potential alternative/fix
_No response_
### Before submitt... | https://github.com/vllm-project/vllm/issues/29294 | closed | [
"documentation",
"cpu"
] | 2025-11-24T05:33:46Z | 2025-12-15T19:46:26Z | 5 | fadara01 |
vllm-project/vllm | 29,286 | [Performance]: cache system prompt token ids | ### Proposal to improve performance
As system prompts can be very long now, tokenizing the system prompt can be slow.
Using an H20, tokenizing 5000 tokens costs about 10 ms, as shown below:

System prompts are usually fixed and reusable, so ca... | https://github.com/vllm-project/vllm/issues/29286 | open | [
"performance"
] | 2025-11-24T01:55:32Z | 2025-11-28T08:57:06Z | 2 | Eviannn |
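The proposal above amounts to memoizing the tokenization of a fixed, reusable system prompt. A minimal sketch of that idea outside vLLM, assuming a Hugging Face tokenizer; the helper names are illustrative, not existing vLLM APIs:

```python
from functools import lru_cache

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")

@lru_cache(maxsize=32)
def system_prompt_ids(system_prompt: str) -> tuple[int, ...]:
    # Tokenize each distinct system prompt once; later requests reuse the cached ids.
    return tuple(tokenizer.encode(system_prompt, add_special_tokens=False))

def build_prompt_ids(system_prompt: str, user_message: str) -> list[int]:
    # Only the (usually short) user message is tokenized per request.
    return list(system_prompt_ids(system_prompt)) + tokenizer.encode(
        user_message, add_special_tokens=False
    )
```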
vllm-project/vllm | 29,281 | [Usage]: Removing last generated token from output and kv cache | ### Your current environment
```text
vLLM 0.11.2
```
### How would you like to use vllm
Hey guys,
I am currently working on a research project where I load a MoE-like model and I want to do routing based on the sequence state.
The goal is to let expert 0 generate until it reaches the eos token, then remove the eos... | https://github.com/vllm-project/vllm/issues/29281 | closed | [
"usage"
] | 2025-11-23T22:39:16Z | 2025-11-26T09:33:53Z | 0 | josefdra |
vllm-project/vllm | 29,277 | [Usage]: Creating and accessing per request arguments inside vLLM model | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to implement token compression techniques on the output embeddings of Qwen-2.5VL, which would occur dynamically as the number of requests changes. Is there any way to implement this in vLLM? I see t... | https://github.com/vllm-project/vllm/issues/29277 | open | [
"usage"
] | 2025-11-23T21:59:31Z | 2025-11-23T21:59:31Z | 0 | minlu21 |
huggingface/transformers | 42,344 | How to fine-tune SAM 3D models? | ### Model description
The recently released SAM 3D work is truly remarkable. Do you plan to integrate it into Transformers and enable fine-tuning?
https://huggingface.co/facebook/sam-3d-objects
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide usefu... | https://github.com/huggingface/transformers/issues/42344 | open | [
"New model"
] | 2025-11-23T17:40:57Z | 2025-11-23T17:40:57Z | null | bruno686 |
vllm-project/vllm | 29,264 | [Usage]: Monkey Patching SamplingParams | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Cou... | https://github.com/vllm-project/vllm/issues/29264 | closed | [
"usage"
] | 2025-11-23T11:45:54Z | 2025-11-24T13:03:50Z | 2 | josefdra |
vllm-project/vllm | 29,263 | [Feature]: Enable flash attention (and/or FlashMLA) for AMD GPUs | ### 🚀 The feature, motivation and pitch
In [this page from flash-attention](https://github.com/Dao-AILab/flash-attention?tab=readme-ov-file#amd-rocm-support), I checked that the upstream `flash-attention` currently has composable_kernel (for newer AMD GPUs) and WIP triton (for older RNDA GPUs, etc.) implementations. ... | https://github.com/vllm-project/vllm/issues/29263 | closed | [
"feature request",
"rocm"
] | 2025-11-23T11:28:47Z | 2025-12-05T01:54:08Z | 4 | Inokinoki |
vllm-project/vllm | 29,245 | [Usage]: Starting qwen3 vl is extremely, extremely slow, while sglang starts quickly; what could be the cause? | ### Your current environment
Even running python collect_env.py is very slow; the environment was installed directly with uv.
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.2 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04... | https://github.com/vllm-project/vllm/issues/29245 | open | [
"usage"
] | 2025-11-22T20:41:27Z | 2025-12-11T11:23:54Z | 3 | hucorz |
huggingface/candle | 3,208 | `cudarc` dynamic loading support | Currently, `candle` uses `cudarc` with the `dynamic-linking` feature, which requires the executable to find the DLLs or SOs at startup. However, it would be more convenient if `candle` also supported the `dynamic-loading` feature from `cudarc` to load DLLs or SOs at runtime.
Is it possible for `candle` to support it? | https://github.com/huggingface/candle/issues/3208 | open | [] | 2025-11-22T18:18:25Z | 2025-11-25T09:00:27Z | 7 | mayocream |
huggingface/transformers | 42,331 | SAM3 does not support custom inference resolutions | ### System Info
Note: I am running the latest git version, sys Info should not be relevant to the issue
$ transformers env
Traceback (most recent call last):
File "/home/master-andreas/panopticon/test_env/bin/transformers", line 3, in <module>
from transformers.cli.transformers import main
File "/home/master... | https://github.com/huggingface/transformers/issues/42331 | closed | [
"bug"
] | 2025-11-21T22:17:08Z | 2025-12-10T22:46:39Z | 3 | Kallinteris-Andreas |
huggingface/lerobot | 2,500 | question about the gr00t policy | hi,
I see here https://huggingface.co/docs/lerobot/en/groot that gr00t is intergrated into lerobot.
is it in sync with the original repo: https://github.com/NVIDIA/Isaac-GR00T ?
I see in original repo that the dataset used to fine-tune, is a bit different from the original lerobot format, like libero dataset (https... | https://github.com/huggingface/lerobot/issues/2500 | open | [
"question",
"policies"
] | 2025-11-21T21:45:19Z | 2025-12-03T14:03:34Z | null | yanan1116 |
vllm-project/vllm | 29,192 | Tool Calling Parsers Fail to Populate tool_calls Array for Qwen2.5-Coder Models | # Tool Calling Parsers Fail to Populate `tool_calls` Array for Qwen2.5-Coder Models
## Environment
- **vLLM Version**: v0.11.2.dev115+g56669c1f2 (Blackwell build)
- **Model**: Qwen/Qwen2.5-Coder-14B-Instruct-AWQ
- **Quantization**: AWQ
- **Python Version**: 3.x (Docker container)
- **GPU**: NVIDIA GeForce RTX 5080 (16... | https://github.com/vllm-project/vllm/issues/29192 | open | [] | 2025-11-21T18:31:19Z | 2025-11-21T18:31:19Z | 0 | Platano78 |
vllm-project/vllm | 29,180 | [Bug]: Recorded `EngineCoreEventType.QUEUED` time is off | ### Your current environment
<details>
</details>
### 🐛 Describe the bug
When running benchmarking with the CLI:
- on one side the serving point `vllm serve ...`
- on the other side the benchmarking client : `vllm bench serve...`
(note that the two are running on the same machine, there is no networking delay)
I ... | https://github.com/vllm-project/vllm/issues/29180 | closed | [
"bug"
] | 2025-11-21T12:58:36Z | 2025-11-30T20:56:44Z | 4 | sducouedic |
vllm-project/vllm | 29,177 | [Usage]: vLLM + InternVL model, local infra: image preprocessing / request adding becomes the bottleneck even with more CPU cores; how to accelerate? | ### Your current environment
vllm 0.11.0
### How would you like to use vllm
### current phenomenon
When doing **batched image classification** (64 images per batch) with InternVL3_5-1B, the bottleneck is clearly in the **"Adding requests"** phase (image preprocessing).
Even after increasing CPU cores and setting ... | https://github.com/vllm-project/vllm/issues/29177 | open | [
"usage"
] | 2025-11-21T10:56:29Z | 2025-12-01T14:08:22Z | 3 | Passenger12138 |
huggingface/trl | 4,554 | Better packing of data with best-fit decrease strategy | Hello,
When using packing with the bfd strategy, it looks like too much truncation is done when the seq_length is smaller than the average length of the sequences we want to pack.
For example :
```python
from datasets import Dataset
from trl import pack_dataset
examples = {
"input_ids": [[1, 2, 3, 4], [5, 6], [... | https://github.com/huggingface/trl/issues/4554 | closed | [
"✨ enhancement",
"❓ question"
] | 2025-11-21T07:53:55Z | 2025-12-16T20:37:02Z | 3 | ntnq4 |
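For readers following the report above, here is a small self-contained sketch of the call being discussed; the keyword names (`seq_length`, `strategy="bfd"`) are inferred from the issue text and should be checked against the installed TRL version, so treat them as assumptions:

```python
from datasets import Dataset
from trl import pack_dataset

examples = {
    "input_ids": [[1, 2, 3, 4], [5, 6], [7, 8, 9, 10, 11, 12]],
}
dataset = Dataset.from_dict(examples)

# A seq_length below the average example length is where the reporter observes heavy truncation.
packed = pack_dataset(dataset, seq_length=4, strategy="bfd")
print(packed["input_ids"])
```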
vllm-project/vllm | 29,148 | [Usage]: Deployment of the embedding models | ### Your current environment
```text
==============================
System Info ... | https://github.com/vllm-project/vllm/issues/29148 | closed | [
"usage"
] | 2025-11-21T03:57:59Z | 2025-11-21T06:17:18Z | 3 | Root970103 |
vllm-project/vllm | 29,139 | [Feature]: Optimize collectives in TP MoE case using torch.compile pass | ### 🚀 The feature, motivation and pitch
To avoid redundant work in MoE models in the TP case, sequence parallelism was added to the Deepseek model definition in #24134 and expanded to other models in #24982. However, to avoid performing surgery on the linear layer, the current approach performs more communication tha... | https://github.com/vllm-project/vllm/issues/29139 | open | [
"help wanted",
"good first issue",
"performance",
"feature request",
"torch.compile"
] | 2025-11-21T01:36:06Z | 2025-12-07T15:39:48Z | 19 | ProExpertProg |
vllm-project/vllm | 29,097 | [Docs] Feedback for `/en/latest/` | ### 📚 The doc issue
no
### Suggest a potential alternative/fix
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of... | https://github.com/vllm-project/vllm/issues/29097 | closed | [
"documentation"
] | 2025-11-20T14:53:44Z | 2025-11-21T07:51:57Z | 2 | ch950684-svg |
vllm-project/vllm | 29,089 | [Performance]: Can we use CUDA graph to accelerate the Qwen2_5omniAudioEncoder in Qwen2.5-Omni-3B? | ### Proposal to improve performance
<img width="3088" height="1264" alt="Image" src="https://github.com/user-attachments/assets/535d7854-b9db-4e40-8f85-1abe08b4d35e" />
The trace graph shows that Qwen2_5omniAudioEncoder has a large number of small kernel startups, indicating significant room for optimization.
Can we u... | https://github.com/vllm-project/vllm/issues/29089 | open | [
"performance"
] | 2025-11-20T12:13:58Z | 2025-11-20T12:13:58Z | 0 | xq25478 |
vllm-project/vllm | 29,078 | [Performance]: Excessive CPU usage caused by multiple instances | ### Your current environment
GPU: RTX4090
cuda version: cuda12.8
vllm version: 0.11.0
I started 4 instances of the minerU2.5 model using the vllm backend of triton server. My server has 2 GPUs, and I launched 1 instance per GPU. I found that the CPU load is sometimes extremely high, nearly saturating my server, which has 96 cores. The vllm backend uses AsyncLLMEngine. I observed that with one instance on a single GPU, when I send 200 small text images for OCR, the fps reaches its maximum, i.e. 200 images processed per second, with CPU load around 40-50%. To further increase performance,... | https://github.com/vllm-project/vllm/issues/29078 | closed | [
"usage"
] | 2025-11-20T08:26:35Z | 2025-11-21T02:17:51Z | 4 | zjq1996518 |
huggingface/transformers | 42,291 | Can we disable IPython progress bar and use normal tqdm bar? | I like the normal tqdm bar much better, it is lighter, cleaner, simpler, and less stress on my eyes (no green color). I would love to have an option to use tqdm bar and not IPython bar. | https://github.com/huggingface/transformers/issues/42291 | closed | [] | 2025-11-20T01:26:11Z | 2025-12-28T08:02:45Z | 1 | weathon |
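One possible answer to the question above (a sketch, assuming the Trainer auto-selected the notebook callback): swap the notebook progress callback for the plain console one.

```python
from transformers.trainer_callback import ProgressCallback

def use_plain_tqdm(trainer) -> None:
    """Replace the notebook progress bar with the plain console tqdm bar, if present."""
    try:
        from transformers.utils.notebook import NotebookProgressCallback
        trainer.remove_callback(NotebookProgressCallback)
    except ImportError:
        pass  # notebook callback not available, nothing to remove
    trainer.add_callback(ProgressCallback)
```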
vllm-project/vllm | 29,023 | [Feature]: Disable logging `/metrics` | ### 🚀 The feature, motivation and pitch
- IGW hits `/metrics` continuously to understand the current load on the system
- This leads to an overload of logs
- We can disable this with `--disable-uvicorn-access-log`, but lose access to all access logs
We should have `--disable-uvicorn-metrics-access-log` to avoid logg... | https://github.com/vllm-project/vllm/issues/29023 | open | [
"help wanted",
"good first issue",
"feature request"
] | 2025-11-19T18:25:48Z | 2025-11-19T21:57:34Z | 5 | robertgshaw2-redhat |
huggingface/sentence-transformers | 3,575 | How to override model's `max_seq_length`? | It seems that it is impossible to override the model's max length from `sentence_bert_config.json`.
```python
from sentence_transformers import SentenceTransformer
m = SentenceTransformer("intfloat/e5-small", tokenizer_kwargs={"model_max_length":3})
print(m.tokenize(["hi hi hi hi hi hi hi hi hi hi hi hi hi"]))
# {'input_ids':... | https://github.com/huggingface/sentence-transformers/issues/3575 | open | [] | 2025-11-19T16:42:27Z | 2025-11-20T13:47:13Z | null | Samoed |
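A workaround that may apply to the question above (a sketch, not a confirmed maintainer answer): the `max_seq_length` attribute of a loaded `SentenceTransformer` can be assigned after loading, which takes precedence over the value read from `sentence_bert_config.json`.

```python
from sentence_transformers import SentenceTransformer

m = SentenceTransformer("intfloat/e5-small")
m.max_seq_length = 3   # override after loading
print(m.max_seq_length)
print(m.tokenize(["hi hi hi hi hi hi hi hi hi hi hi hi hi"])["input_ids"].shape)
```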
huggingface/trl | 4,546 | Does TRL support PipelineRL for compute efficiency? | Hi 👋,
I'm trying to understand whether TRL currently supports (or plans to support) the PipelineRL approach described here:
- Paper: [https://arxiv.org/pdf/2509.19128v2](https://arxiv.org/pdf/2509.19128v2?utm_source=chatgpt.com)
- Overview: [https://arxiv.org/html/2509.19128](https://arxiv.org/html/2509.19128?utm_so... | https://github.com/huggingface/trl/issues/4546 | open | [
"✨ enhancement",
"❓ question"
] | 2025-11-19T12:39:29Z | 2025-11-22T12:43:54Z | 3 | harisarang |
vllm-project/vllm | 28,996 | [Usage]: How to run a single data parallel deployment across multiple nodes without ray | ### Your current environment
2 Nodes, each node has 8 H20 GPUs.
### How would you like to use vllm
According to https://docs.vllm.ai/en/latest/serving/data_parallel_deployment/#internal-load-balancing
```shell
# node0
vllm serve Qwen3-Coder-480B-A35B-Instruct --trust-remote-code --max-num-seqs 64 --max-model-len 13... | https://github.com/vllm-project/vllm/issues/28996 | closed | [
"usage"
] | 2025-11-19T06:47:22Z | 2025-11-27T06:17:22Z | 3 | crystalww |
vllm-project/vllm | 28,986 | [Feature]: Fused Kernel for GPT-OSS Router | ### 🚀 The feature, motivation and pitch
<img width="1257" height="250" alt="Image" src="https://github.com/user-attachments/assets/31eba061-522c-4521-b0a9-9f25bb36c3df" />
- Right now, we spend ~3.5% of the layer in the expert selection
- The operation is unfused
Write a fused kernel like we have for deepseek group... | https://github.com/vllm-project/vllm/issues/28986 | open | [
"help wanted",
"good first issue",
"feature request"
] | 2025-11-19T03:18:25Z | 2025-12-12T16:16:37Z | 7 | robertgshaw2-redhat |
huggingface/transformers.js | 1,458 | ONNX Backend Env variable | ### Question
Hi,
For some context, I'm building an application that uses some of the models on huggingface as an annotation tool that helps create annotations for training a specialised model.
As for the specialised model, I am able to export them to onnx, and I was able to run this model in the same application, b... | https://github.com/huggingface/transformers.js/issues/1458 | open | [
"question"
] | 2025-11-19T01:26:02Z | 2025-11-25T15:36:13Z | null | Heinrik-20 |
vllm-project/vllm | 28,956 | [Bug]: OOM when profiling multimodal model with multiple images | ### Your current environment
vLLM 0.11.0
### 🐛 Describe the bug
As per title.
The error log is as follows:
```
[multiproc_executor.py:671] Traceback (most recent call last):
[multiproc_executor.py:671] File "/root/miniconda3/lib/python3.11/site-packages/vllm/v1/executor/multiproc_executor.py", line 666, in work... | https://github.com/vllm-project/vllm/issues/28956 | closed | [
"bug"
] | 2025-11-18T17:36:55Z | 2025-11-25T12:38:37Z | 7 | imShZh |
huggingface/lerobot | 2,475 | Why is there a difference between async inference and local inference in image resizing? | I read the code in `src/lerobot/async_inference/policy_server.py` and `src/lerobot/scripts/lerobot_record.py`. I found a difference between these two code paths for inference, which causes different image shapes.
1. `src/lerobot/scripts/lerobot_record.py` uses this to deal with the observation
And `prepare_observation_for_inference` is li... | https://github.com/huggingface/lerobot/issues/2475 | open | [
"question"
] | 2025-11-18T14:32:17Z | 2025-11-24T02:23:13Z | null | milong26 |
vllm-project/vllm | 28,943 | [Usage]: what's the right way to run embedding model in vllm 0.11.0 | ### Your current environment
```text
The output of `python collect_env.py`
```
In vllm 0.8.7, I used the following code to run local vllm, and everything was right:
```
self.engine_args = EngineArgs(
model=self.model_path,
dtype='half',
task="embed",
trust_remote_code=True,
... | https://github.com/vllm-project/vllm/issues/28943 | open | [
"usage"
] | 2025-11-18T13:47:57Z | 2025-11-20T10:49:12Z | 3 | neverneverendup |
huggingface/trl | 4,541 | Is attn_implementation=sdpa not supported when using SFTTrainer with mllama? | When trying to use `sdpa` with mllama I get an error using the default collator. Upon writing my own collator it works.
When using `eager` implementation it gives cuda oom error. Is `sdpa` not supported? | https://github.com/huggingface/trl/issues/4541 | open | [] | 2025-11-18T11:57:01Z | 2025-11-18T11:57:01Z | 0 | osaidr |
vllm-project/vllm | 28,930 | [Usage]: How to build a qwen3vl embedding model with a custom MLP layer on top using vllm? | ### Your current environment
```text
The output of `python collect_env.py`
```
Hi friends! I trained an SFT model built upon the qwen3vl 2b model; we put an MLP layer on top of it to compress the embedding size of the backbone model. Now I want to use vllm 0.11.0 to serve it, but I have run into some confusion. Here is my custom class code
`... | https://github.com/vllm-project/vllm/issues/28930 | closed | [
"usage"
] | 2025-11-18T10:32:07Z | 2025-12-23T04:49:30Z | 10 | neverneverendup |
vllm-project/vllm | 28,929 | [Usage]: How | = | https://github.com/vllm-project/vllm/issues/28929 | closed | [
"usage"
] | 2025-11-18T10:26:17Z | 2025-11-18T10:30:53Z | 0 | neverneverendup |
huggingface/datasets | 7,869 | Why does dataset merge fail when tools have different parameters? | Hi, I have a question about SFT (Supervised Fine-tuning) for an agent model.
Suppose I want to fine-tune an agent model that may receive two different tools: tool1 and tool2. These tools have different parameters and types in their schema definitions.
When I try to merge datasets containing different tool definitions... | https://github.com/huggingface/datasets/issues/7869 | open | [] | 2025-11-18T08:33:04Z | 2025-11-30T03:52:07Z | 1 | hitszxs |
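The failure described above is typically an Arrow schema mismatch: each dataset infers a different struct type for the `tools` column, so the features cannot be aligned. A minimal sketch of one common workaround, serializing the mismatched column to JSON strings before concatenating (the toy records below are made up for illustration):

```python
import json

from datasets import Dataset, concatenate_datasets

# Two toy datasets whose tool schemas have different parameters.
ds1 = Dataset.from_dict({"tools": [[{"name": "tool1", "parameters": {"x": 1}}]]})
ds2 = Dataset.from_dict({"tools": [[{"name": "tool2", "parameters": {"query": "q", "k": 3}}]]})

def tools_to_json(example):
    # Store the tool list as a JSON string so both datasets end up with the same string feature.
    example["tools"] = json.dumps(example["tools"])
    return example

merged = concatenate_datasets([ds1.map(tools_to_json), ds2.map(tools_to_json)])
print(merged["tools"])
```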
vllm-project/vllm | 28,903 | [Bug]: vllm inference on qwen3-vl when use_upstream_fa is False | ### Your current environment
pip show torch vllm flash-attn
Name: torch
Version: 2.8.0
---
Name: vllm
Version: 0.11.0
Name: flash_attn
Version: 2.8.3
### 🐛 Describe the bug
Unit-test code is as follows;
the simple qwen3-0.6B can run, but qwen3-vl-4b cannot run.
```python
#coding=utf-8
"""
Write unit tests to verify the availability and compati... | https://github.com/vllm-project/vllm/issues/28903 | closed | [
"bug"
] | 2025-11-18T03:54:11Z | 2025-11-18T08:18:09Z | 1 | hedes1992 |
huggingface/lerobot | 2,465 | loss:nan grdn:nan How to solve the gradient explosion problem in PI05 training? | When training Pi05 using Lerobot, has anyone encountered a situation where gradients explode immediately after training? Errors occur when the batch_size is set to 64 or 32. How can this be resolved?
Below are my training commands and error logs.
python src/lerobot/scripts/lerobot_train.py --dataset.repo_id=aa_merge... | https://github.com/huggingface/lerobot/issues/2465 | open | [
"bug",
"policies",
"training"
] | 2025-11-18T03:46:28Z | 2025-12-03T16:13:56Z | null | Lilgeneric |
huggingface/lerobot | 2,464 | Questions about Pi0.5 Model Training Details and High Level Planning Implementation | Hello, while studying the Pi0.5 model, I have two questions regarding the model implementation that I would like to ask you:
1. The paper mentions that the model adopts two-stage pre-training and designs a comprehensive loss function. However, when checking the compute_loss part in the open-source code, it is found that... | https://github.com/huggingface/lerobot/issues/2464 | open | [
"question",
"training"
] | 2025-11-18T01:27:59Z | 2025-11-20T10:45:34Z | null | Ginldaj |
vllm-project/vllm | 28,876 | [CI Failure]: should test_cumem.py use spawn or fork in cuda? | ### Name of failing test
tests/basic_correctness/test_cumem.py
### Basic information
- [ ] Flaky test
- [x] Can reproduce locally
- [ ] Caused by external libraries (e.g. bug in `transformers`)
### 🧪 Describe the failing test
The test only fails locally for me when I use vllm main branch and on the CI of my PR, e... | https://github.com/vllm-project/vllm/issues/28876 | open | [
"ci-failure"
] | 2025-11-17T18:58:08Z | 2025-11-17T20:59:14Z | 1 | jerryzh168 |
vllm-project/vllm | 28,868 | [Bug]: When compiling with ranges, we should pass the range information to Inductor | ### Your current environment
main
### 🐛 Describe the bug
Might be more of a feature request. Context is that https://github.com/vllm-project/vllm/pull/24248 adds a new compile ranges API, where a user can specify which ranges to compile on.
We should tell Inductor how to constrain the compilation on the symints of... | https://github.com/vllm-project/vllm/issues/28868 | open | [
"bug",
"torch.compile"
] | 2025-11-17T15:41:50Z | 2026-01-05T23:37:12Z | 1 | zou3519 |
vllm-project/vllm | 28,866 | [Usage]: When is going to be the next release? | Hi everyone,
Thank you for developing such a great tool!
I was wondering when the next release is scheduled. I’m interested in running Gemma3-text type architecture GGUF quantized models with VLLM. Are there any alternatives to do this with the latest release (v0.11.0)?
I also noticed that you merged this PR with th... | https://github.com/vllm-project/vllm/issues/28866 | open | [
"usage"
] | 2025-11-17T15:24:47Z | 2025-11-19T10:51:47Z | 1 | Invalid-coder |
huggingface/transformers | 42,241 | How to use padding with Mistral? | I'm trying to understand how to use Mistral with `batch_size` > 1. One aspect of this is setting `padding="longest"` in, e.g., `MistralCommonTokenizer.encode()`. But I'm getting `TypeError: 'set' object is not callable` when I try this. Example:
```python
import torch
from transformers import MistralForCausalLM, Mistra... | https://github.com/huggingface/transformers/issues/42241 | closed | [] | 2025-11-17T12:54:21Z | 2025-11-19T06:11:44Z | null | TopCoder2K |
huggingface/chat-ui | 1,986 | Hi, I would like to use default_headers={ "X-HF-Bill-To": "org-name" } in my local chat-ui deployment. How can I do that? | Hi,
So I want to bill my Inference usage to my organization, and I would like to pass the default_headers={
"X-HF-Bill-To": "org-name"
} parameter. How can I do that? | https://github.com/huggingface/chat-ui/issues/1986 | open | [
"support"
] | 2025-11-17T08:33:41Z | 2025-11-17T08:33:41Z | null | aditya-oss-prog |
huggingface/diffusers | 12,672 | How to set pipe "requires_grad=true"? | I have set the variable and the model "requires_grad=true" with the following:
` pipe.transformer.requires_grad = True
pipe.vae.requires_grad = True`
`prev_sample = prev_sample.detach().requires_grad_(True)`
but the "requires_grad" of result by the pipe is still not true:
`image_tar = pipe.vae.decode(prev_sampl... | https://github.com/huggingface/diffusers/issues/12672 | closed | [] | 2025-11-17T03:36:43Z | 2025-11-20T12:19:20Z | null | micklexqg |
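A note on the snippet above: assigning `module.requires_grad = True` only sets a plain Python attribute on the module object and does not touch its parameters, which is why the pipeline output still has `requires_grad=False`. The `nn.Module.requires_grad_()` method is what propagates to every parameter. A self-contained sketch with a stand-in module instead of the actual pipe:

```python
import torch.nn as nn

vae = nn.Linear(4, 4)        # stand-in for pipe.vae
for p in vae.parameters():
    p.requires_grad = False  # simulate an inference-only pipeline

vae.requires_grad = True     # only sets an attribute on the module object
print(any(p.requires_grad for p in vae.parameters()))  # False: parameters unchanged

vae.requires_grad_(True)     # nn.Module method: flips every parameter
print(all(p.requires_grad for p in vae.parameters()))  # True
```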
huggingface/diffusers | 12,669 | Flux1-Dev inference with single file ComfyUI/SD-Forge Safetensors | Is it possible to run inference with diffusers using a single-file safetensors created for ComfyUI/SD-Forge?
It looks like FluxPipeline.from_single_file() might be intended for this purpose, but I'm getting the following errors:
```
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_single_fil... | https://github.com/huggingface/diffusers/issues/12669 | open | [] | 2025-11-16T11:57:48Z | 2025-12-03T16:53:58Z | 12 | ddpasa |
huggingface/ai-deadlines | 41 | How to indicate ARR deadlines | Right now the yaml format assumes conferences with locations and dates, but ACL ARR has rolling deadlines not tied to a physical conference. We are largely operating around these deadlines. How can we incorporate these into this system? | https://github.com/huggingface/ai-deadlines/issues/41 | open | [] | 2025-11-15T00:26:33Z | 2025-11-15T00:26:33Z | null | morrisalp |
huggingface/diffusers | 12,662 | Question on stable_audio_transformer.py | Excuse me, I am learning the code of `class StableAudioDiTModel`, and I do not know what the argument ` global_states_input_dim` is used for. It seems to be a required component that should be packed before the hidden_states sequence, and its default dim seems larger than the transformer inner_dim. What is that... | https://github.com/huggingface/diffusers/issues/12662 | open | [] | 2025-11-14T09:26:01Z | 2025-11-25T08:53:39Z | 1 | JohnHerry |
vllm-project/vllm | 28,717 | [Usage]: Errors running vLLM docker in a closed environment with gpt-oss-120b on RTX 6000 Pro | ### Your current environment
Can't get vLLM to start with the below configuration. Seems to have issues loading in the model .safetensors. Any ideas on what could be causing it?
vllm version: 0.11.1
CPU: Intel Xeon w7-2595X
GPU: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition
Model: https://huggingface.co/... | https://github.com/vllm-project/vllm/issues/28717 | open | [
"usage"
] | 2025-11-14T08:49:48Z | 2025-11-20T15:45:21Z | 3 | antonkarlsson1 |
huggingface/trl | 4,525 | How to modify the advantage computation in GRPOTrainer | I’m looking to customize the advantage computation used in the DAPO algorithm. Do I need to subclass the full GRPOTrainer to do this, or is it sufficient to overwrite the logic in _generate_and_score_completions, since that method appears to handle the advantage calculation? | https://github.com/huggingface/trl/issues/4525 | open | [
"❓ question",
"🏋 GRPO"
] | 2025-11-14T03:48:17Z | 2025-11-14T11:37:18Z | null | Tuziking |
huggingface/transformers | 42,200 | Request of rewriting implementation of prediction_step in trainer.py | ### System Info
Any system. Because it's a problem coming from source code.
### Who can help?
@SunMarc
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (gi... | https://github.com/huggingface/transformers/issues/42200 | open | [
"Good Second Issue",
"bug"
] | 2025-11-14T00:13:40Z | 2025-12-18T14:29:32Z | 3 | Yacklin |
huggingface/transformers | 42,197 | Attempt to access socket despite HF_HUB_OFFLINE = 1 if cache warmed outside current process | ### System Info
- `transformers` version: 4.57.1
- Platform: Linux-6.6.84.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.13.0
- Huggingface_hub version: 0.36.0
- Safetensors version: 0.6.2
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTor... | https://github.com/huggingface/transformers/issues/42197 | closed | [
"Good Second Issue",
"bug"
] | 2025-11-13T21:38:29Z | 2025-11-24T09:33:54Z | 6 | fr1ll |
vllm-project/vllm | 28,646 | [Feature][P2]: Implement CI Build Time and Size Guards | ### 🚀 The feature, motivation and pitch
### Description
Once we optimize the Docker build, we need to prevent regressions. Create CI checks that fail if build time exceeds thresholds or if image size grows beyond acceptable limits. Also set up monitoring dashboards.
### What You'll Do
1. Create Python scripts to che... | https://github.com/vllm-project/vllm/issues/28646 | open | [
"feature request",
"ci/build"
] | 2025-11-13T12:50:34Z | 2025-11-13T18:55:29Z | 0 | rzabarazesh |
huggingface/diffusers | 12,650 | Question about the `# Copied from` system | Hi team! 👋
While working on improving docstrings and type hints across scheduler files (issue #9567), I've noticed the `# Copied from` pattern used extensively throughout the codebase.
Examples:
- Functions like `betas_for_alpha_bar` are duplicated across multiple schedulers
- Output classes like `DDPMSchedulerOutpu... | https://github.com/huggingface/diffusers/issues/12650 | open | [] | 2025-11-13T11:53:22Z | 2025-12-21T22:44:03Z | 3 | delmalih |
huggingface/transformers | 42,179 | Add TileLang Kernel Support | ### Feature request
I would like to propose adding support for TileLang kernel in the transformers library. TileLang is a modular approach for writing attention kernels that could provide flexibility and performance benefits.
github link: https://github.com/tile-ai/tilelang
- Add TileLang as an optional attention back... | https://github.com/huggingface/transformers/issues/42179 | open | [
"Feature request"
] | 2025-11-13T11:38:33Z | 2025-11-13T11:38:33Z | 0 | crownz248 |
huggingface/tokenizers | 1,885 | Feature request: Character delimiter argument | I wish to develop a k-mer-character-based BPE tokenizer using your beautiful Rust package, for genomic applications. Unfortunately, it doesn't seem to support defining a character delimiter. As I see it, it is a pretty straightforward change: instead of iterating over a word by character, first split it by the delimiter an... | https://github.com/huggingface/tokenizers/issues/1885 | open | [] | 2025-11-13T10:40:29Z | 2025-11-28T07:51:07Z | 1 | VasLem |
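Until such an argument exists, one workaround for the request above (a sketch using the Python bindings, assuming the k-mers are already joined by an explicit delimiter such as a space) is to put a `Split` pre-tokenizer in front of BPE so merges never cross delimiter boundaries:

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
# Split on the delimiter first; BPE merges then stay inside each k-mer "word".
tokenizer.pre_tokenizer = pre_tokenizers.Split(" ", behavior="removed")

trainer = trainers.BpeTrainer(vocab_size=100, special_tokens=["[UNK]"])
tokenizer.train_from_iterator(["ACG TGA CCA", "TGA TGA ACG"], trainer=trainer)
print(tokenizer.encode("ACG TGA").tokens)
```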
vllm-project/vllm | 28,629 | [Usage]: TPOT per request information was not collected by vllm bench serve | ### Your current environment
```text
The output of `python collect_env.py`
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.2 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.... | https://github.com/vllm-project/vllm/issues/28629 | open | [
"usage"
] | 2025-11-13T09:20:19Z | 2025-11-13T09:20:19Z | 0 | jlwang1996 |
vllm-project/vllm | 28,626 | [Bug]:Qwen3-VL-32B-AWQ model memory usage: 8k context limit with 40GB VRAM? | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
Running models on the latest stable vLLM release: https://huggingface.co/QuantTrio/Qwen3-VL-32B-Instruct-AWQ
The mod... | https://github.com/vllm-project/vllm/issues/28626 | open | [
"bug"
] | 2025-11-13T08:00:20Z | 2025-11-17T07:08:47Z | 3 | maxin9966 |
vllm-project/vllm | 28,622 | [Bug]: Can we benchmark quantized MoE models, either W8A8 or W8A16? | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.2 LTS (x86_64)
GCC version ... | https://github.com/vllm-project/vllm/issues/28622 | open | [
"bug"
] | 2025-11-13T07:26:56Z | 2025-11-13T07:27:06Z | 0 | logesh13 |
vllm-project/vllm | 28,610 | [Usage]: Does 0.11.0 support tree attention with eagle? | ### Your current environment
Does 0.11.0 support tree attention with eagle? Do I need to enable it manually?
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already s... | https://github.com/vllm-project/vllm/issues/28610 | open | [
"usage"
] | 2025-11-13T03:35:02Z | 2025-12-03T17:08:16Z | 1 | wincle |
huggingface/datasets | 7,864 | add_column and add_item erroneously(?) require new_fingerprint parameter | ### Describe the bug
Contradicting their documentation (which doesn't mention the parameter at all), both Dataset.add_column and Dataset.add_item require a new_fingerprint string. This parameter is passed directly to the dataset constructor, which has the fingerprint parameter listed as optional; is there any reason i... | https://github.com/huggingface/datasets/issues/7864 | open | [] | 2025-11-13T02:56:49Z | 2025-12-07T14:41:40Z | 2 | echthesia |
vllm-project/vllm | 28,566 | [Usage]: In the PD disagg scenario, I discovered that the decoder also performs a prefill operation. Is this normal? | ### Your current environment
When num_computed_tokens is less than num_prompt_tokens, it will enter the prefill operation.
<img width="633" height="149" alt="Image" src="https://github.com/user-attachments/assets/bab96187-37c8-4ea2-ba68-9f52dda07f6b" />
And I found that num_computed_tokens can be less than num_prompt_t... | https://github.com/vllm-project/vllm/issues/28566 | open | [
"usage"
] | 2025-11-12T16:18:53Z | 2025-11-12T16:18:53Z | 0 | yangshanjun |