| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
vllm-project/vllm | 31,567 | [RFC]: Why is custom_mask not exposed on FlashInfer to enable more flexible use cases? | ### Motivation.
Like what tensorrt-llm does https://github.com/NVIDIA/TensorRT-LLM/blob/6c1abf2d45c77d04121ebe10f6b29abf89373c60/tensorrt_llm/_torch/attention_backend/flashinfer.py#L411C17-L411C28
### Proposed Change.
expose the custom_weight to support use cases like relative attention bias
### Feedback Period.
_N... | https://github.com/vllm-project/vllm/issues/31567 | open | [
"RFC"
] | 2025-12-31T06:00:07Z | 2025-12-31T06:00:07Z | 0 | npuichigo |
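For context on the custom_mask RFC above: a relative attention bias is simply an additive term on the attention scores before softmax. A minimal plain-PyTorch sketch of the idea (this is not FlashInfer's or vLLM's API; all names and shapes are illustrative):

```python
import torch
import torch.nn.functional as F

# Minimal sketch: attention with an additive custom bias, which is what a
# relative attention bias (e.g. T5-style) reduces to. Not FlashInfer's API.
def attention_with_bias(q, k, v, bias):
    # q, k, v: (batch, heads, seq, head_dim); bias broadcastable to (batch, heads, seq, seq)
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    scores = scores + bias                      # the "custom mask / bias" hook
    return F.softmax(scores, dim=-1) @ v

seq = 8
# Toy relative-position bias shared across batch and heads.
rel = -(torch.arange(seq)[:, None] - torch.arange(seq)[None, :]).abs().float()
q = k = v = torch.randn(1, 2, seq, 16)
out = attention_with_bias(q, k, v, rel[None, None])
print(out.shape)  # torch.Size([1, 2, 8, 16])
```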
vllm-project/vllm | 31,564 | [Bug]: Qwen3-VL-8B-Instruct has accuracy issue - Multi modal accuracy issue | ### Your current environment
**Current input format:**
messages = [
{"role": "system", "content": system_prompt},
{
"role": "user",
"content": [
{"type": "text", "text": user_prompt},
{
"type": "ima... | https://github.com/vllm-project/vllm/issues/31564 | open | [
"bug"
] | 2025-12-31T05:13:32Z | 2026-01-02T04:29:14Z | 3 | Dineshkumar-Anandan-ZS0367 |
huggingface/lerobot | 2,737 | SARM WITH PI05: Why is the training loss getting noisier? | ### Ticket Type
❓ Technical Question
### Environment & System Info
```Shell
```
### Description
[SARM with pi05 training for folding towel task _ fold_towel_v3_0 – Weights & Biases.pdf](https://github.com/user-attachments/files/24389716/SARM.with.pi05.training.for.folding.towel.task._.fold_towel_v3_0.Weights.Bias... | https://github.com/huggingface/lerobot/issues/2737 | closed | [
"question",
"training"
] | 2025-12-31T03:20:16Z | 2026-01-02T08:01:25Z | null | xianglunkai |
huggingface/lerobot | 2,736 | Questions about VLA multi-task training. | ### Ticket Type
💡 Feature Request / Improvement
### Environment & System Info
```Shell
- LeRobot version: 0.4.2
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.31
- Python version: 3.10.18
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 2.2.6
- FFmpeg version: 6.1.1
- PyTorch ver... | https://github.com/huggingface/lerobot/issues/2736 | open | [
"enhancement",
"question",
"examples",
"training"
] | 2025-12-31T03:12:02Z | 2026-01-04T20:02:02Z | null | yquanli |
vllm-project/vllm | 31,555 | [Docs] Feedback for `/en/stable/`MONSTERDOG | ### 📚 The doc issue
[Projets (1).csv](https://github.com/user-attachments/files/24389184/Projets.1.csv)
[Projets.csv](https://github.com/user-attachments/files/24389185/Projets.csv)
[MonsterDog_Pilot_ROI_ISO42001_Report.pdf](https://github.com/user-attachments/files/24389187/MonsterDog_Pilot_ROI_ISO42001_Report.pdf)
... | https://github.com/vllm-project/vllm/issues/31555 | closed | [
"documentation"
] | 2025-12-31T01:20:55Z | 2025-12-31T05:18:48Z | 0 | s33765387-cpu |
huggingface/lerobot | 2,735 | Buy the camera? | Hi! Where do I buy the camera and the whole SO-ARM101 kit?
I found the kit at a Chinese website like WoWRobo Robotics, with only PayPal payment. But is that it? How do I buy the camera otherwise? | https://github.com/huggingface/lerobot/issues/2735 | open | [
"question",
"sensors"
] | 2025-12-30T22:32:42Z | 2025-12-30T22:51:39Z | null | JFI12 |
huggingface/candle | 3,272 | Added support for Vulkan, any interest? | I have an Intel Arc A770 16GB GPU and wanted to use it with candle.
I cherry-picked niklasha's work from the niklas-vulkan-2 branch into the current main branch.
I (when I say I, I mean I was the navigator, Codex 5.2 max did the work) added the following:
Added Vulkan queue-family selection and synchronize() so VulkanD... | https://github.com/huggingface/candle/issues/3272 | open | [] | 2025-12-30T02:58:27Z | 2025-12-30T03:00:12Z | 0 | davidwynter |
vllm-project/vllm | 31,515 | [Feature]: need scheduler solution with high priority to process prefill | ### 🚀 The feature, motivation and pitch
I have a model situation where the model only cares about throughput, not latency, so I need a scheduling solution that gives high priority to prefill and only processes decode after all prefills in the batch are finished, this sol... | https://github.com/vllm-project/vllm/issues/31515 | open | [
"feature request"
] | 2025-12-30T02:09:35Z | 2025-12-30T02:09:35Z | 0 | 184603418 |
vllm-project/vllm | 31,486 | [Feature]: GLM 4.7 vocab padding feature | ### 🚀 The feature, motivation and pitch
The number of attention heads in GLM-4.7 is 96, so I’m trying to run the FP8 version with 6× H20 GPUs using tensor parallelism (tp=6).
However, vllm serve fails due to `151552 cannot be divided by 6`.
This seems to be caused by the vocab size 151552 not being divisible by... | https://github.com/vllm-project/vllm/issues/31486 | open | [
"feature request"
] | 2025-12-29T09:30:35Z | 2026-01-06T02:45:22Z | 3 | H100-H200-B200 |
vllm-project/vllm | 31,484 | [Usage]: RuntimeError when running Qwen2.5-VL-7B-Instruct with vllm: Potential version incompatibility | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.2 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Cou... | https://github.com/vllm-project/vllm/issues/31484 | open | [
"usage"
] | 2025-12-29T08:36:11Z | 2025-12-30T02:40:38Z | 1 | puyuan1996 |
huggingface/diffusers | 12,899 | Training script of z-image controlnet? | Can diffusers provide training script of z-image controlnet? | https://github.com/huggingface/diffusers/issues/12899 | open | [] | 2025-12-29T08:30:09Z | 2025-12-29T08:30:09Z | 0 | universewill |
vllm-project/vllm | 31,480 | [Usage]: run deepseek v3.2 failed | ### Your current environment
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not c... | https://github.com/vllm-project/vllm/issues/31480 | open | [
"usage"
] | 2025-12-29T07:33:04Z | 2025-12-29T07:33:04Z | 0 | ljwps |
vllm-project/vllm | 31,479 | [Feature]: Enable LoRA support for tower and connector in more MM models | ### 🚀 The feature, motivation and pitch
Regarding multi-modal models, we have supported adding LoRA to the tower encoder and connector, see #26674, but have only implemented it for a few models (`Qwen VL series` and `idefics3`). There is no reason not to support other multi-modal models.
### Solution
For the remai... | https://github.com/vllm-project/vllm/issues/31479 | open | [
"help wanted",
"feature request"
] | 2025-12-29T07:28:52Z | 2026-01-06T02:03:29Z | 4 | jeejeelee |
vllm-project/vllm | 31,474 | [Feature]: GLM 4.7 vocab padding feature | ### 🚀 The feature, motivation and pitch
The number of attention heads in GLM-4.7 is 96, so I’m trying to run the FP8 version with 6× H20 GPUs using tensor parallelism (tp=6).
However, vllm serve fails due to `151552 cannot be divided by 6`.
This seems to be caused by the vocab size 151552 not being divisible by... | https://github.com/vllm-project/vllm/issues/31474 | closed | [
"feature request"
] | 2025-12-29T04:55:28Z | 2025-12-29T09:28:17Z | 0 | H100-H200-B200 |
vllm-project/vllm | 31,469 | [Feature]: Optimize the definition of the fake function in the code. | ### 🚀 The feature, motivation and pitch
The current code contains some fake function definitions, which are placed together with the main logic, such as `all_reduce_fake`. In the `parallel_state.py` file, can we define a file called `parallel_state_fake.py` and move all the corresponding fake functions to this file, ... | https://github.com/vllm-project/vllm/issues/31469 | open | [
"feature request"
] | 2025-12-29T03:14:26Z | 2025-12-29T06:16:08Z | 3 | lengrongfu |
vllm-project/vllm | 31,467 | [RFC]: A Triton operator dispatch mechanism through modified `CustomOp` | ### Motivation.
Triton is becoming increasingly important in vLLM, and we've noticed its use in many models, quantization processes, and general workflows. Meanwhile, vLLM supports various backends. Typically, to achieve high performance, **different implementations of the Triton kernels** are used on different hardwa... | https://github.com/vllm-project/vllm/issues/31467 | open | [
"RFC"
] | 2025-12-29T02:44:13Z | 2026-01-06T07:38:29Z | 12 | MengqingCao |
vllm-project/vllm | 31,437 | [Bug]: Streaming tool calls missing id/type/name in finish chunk | ### Your current environment
vLLM 0.14.0rc1.dev3 (but also affects main branch as of today)
### Model
GLM-4.7-AWQ with `--tool-call-parser glm47` (also affects other parsers that emit complete tool calls)
### What is the issue?
When streaming tool calls, the finish chunk code in `serving_chat.py` overwrites the to... | https://github.com/vllm-project/vllm/issues/31437 | closed | [] | 2025-12-27T23:54:20Z | 2025-12-29T13:10:54Z | 0 | amittell |
vllm-project/vllm | 31,414 | [Feature][Cleanup]: Unify `vllm.utils.flashinfer` and `vllm.model_executor.layers.quantization.utils.flashinfer_utils` | ### 🚀 The feature, motivation and pitch
It's confusing to have both.
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page... | https://github.com/vllm-project/vllm/issues/31414 | open | [
"help wanted",
"good first issue",
"feature request"
] | 2025-12-27T18:27:00Z | 2025-12-31T22:25:36Z | 4 | robertgshaw2-redhat |
vllm-project/vllm | 31,398 | [Doc]: Eagle3 with tensor parallelism | ### 📚 The doc issue
According to https://docs.vllm.ai/en/latest/features/spec_decode/#speculating-using-eagle-based-draft-models:
> The EAGLE based draft models need to be run without tensor parallelism (i.e. draft_tensor_parallel_size is set to 1 in speculative_config), although it is possible to run the main mode... | https://github.com/vllm-project/vllm/issues/31398 | open | [
"documentation"
] | 2025-12-27T03:10:50Z | 2026-01-04T01:21:07Z | 3 | JSYRD |
huggingface/transformers | 43,048 | Need to understand the difference between TP support via transformers code vs. PyTorch's native parallelize_module API. | Based on the existing transformers code base, the sequence of operations below is performed on the model object to make it TP compatible.
- TP Plan for Llama: https://github.com/huggingface/transformers/blob/a7f29523361b2cc12e51c1f5133d95f122f6f45c/src/transformers/models/llama/configuration_llama.py#L113
- self._tp_plan ... | https://github.com/huggingface/transformers/issues/43048 | open | [] | 2025-12-26T10:05:38Z | 2026-01-05T15:35:13Z | 1 | quic-meetkuma |
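For context on the TP question above: transformers' string-based tp_plan entries ("colwise", "rowwise") conceptually correspond to the parallel styles used by PyTorch's native `parallelize_module`. A hedged sketch of the native API on a toy MLP (not transformers' internal code path; assumes a multi-GPU launch, e.g. `torchrun --nproc_per_node=2 toy_tp.py` with a hypothetical file name):

```python
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import (
    ColwiseParallel, RowwiseParallel, parallelize_module,
)

# Toy MLP sharded with PyTorch's native tensor-parallel API. The plan keys are
# submodule names, analogous to the FQN patterns in transformers' tp_plan.
class ToyMLP(nn.Module):
    def __init__(self, dim=1024):
        super().__init__()
        self.up = nn.Linear(dim, 4 * dim)
        self.down = nn.Linear(4 * dim, dim)

    def forward(self, x):
        return self.down(torch.relu(self.up(x)))

def main():
    dist.init_process_group("nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    mesh = init_device_mesh("cuda", (dist.get_world_size(),))
    model = ToyMLP().cuda()
    # Column-shard the up projection and row-shard the down projection, so the
    # intermediate activation stays sharded and one all-reduce happens at the end.
    parallelize_module(model, mesh, {"up": ColwiseParallel(), "down": RowwiseParallel()})
    out = model(torch.randn(2, 1024, device="cuda"))
    print(out.shape)

if __name__ == "__main__":
    main()
```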
huggingface/lerobot | 2,721 | The virtual machine is unable to recognize the keyboard. | ### Ticket Type
❓ Technical Question
### Environment & System Info
```Shell
(base) tom@tom-VMware-Virtual-Platform:~/lerobot_alohamini$ python check_lerobot.py
Using existing DISPLAY: :0
=== Environment diagnostics ===
Python version: 3.12.12 | packaged by conda-forge | (main, Oct 22 2025, 23:25:55) [GCC 14.3.0]
DISPLAY environment variable: :0
XDG_SESSION_TYPE environment... | https://github.com/huggingface/lerobot/issues/2721 | open | [
"question"
] | 2025-12-26T08:02:27Z | 2025-12-26T08:02:37Z | null | ht202 |
huggingface/transformers | 43,045 | Multimodal chat sample | ### Feature request
Add a sample covering chat scenario including images, videos or audio.
### Motivation
`AutoModelForCausalLM`'s `use_cache` is barely documented.
Describe a pattern handling the following cases
1. Tokenizer replaces tokens that are already in kv cache with a different token. For example, the model... | https://github.com/huggingface/transformers/issues/43045 | closed | [
"Feature request"
] | 2025-12-26T06:16:53Z | 2025-12-31T10:36:38Z | 9 | Wovchena |
sgl-project/sglang | 15,860 | [Ask for help] How to deploy GLM-4.7 | Hi, can anyone help me to deploy GLM-4.7? I encounter a bug when using `sglang==0.5.6.post2` (which is latest on `https://github.com/sgl-project/sglang`). What is the correct version for GLM-4.7?
```
launch_server.py: error: argument --tool-call-parser: invalid choice: 'glm47' (choose from 'deepseekv3', 'deepseekv31', ... | https://github.com/sgl-project/sglang/issues/15860 | open | [] | 2025-12-26T02:59:06Z | 2025-12-28T21:21:17Z | 2 | sunjie279 |
huggingface/tokenizers | 1,919 | De/tokenization on CUDA | Could at least de-tokenization be done directly on CUDA? Like in my hack `bpedecode_vec` in https://github.com/pytorch/pytorch/issues/135704#issue-2520180382 which indexes into a detokenization vocab byte table via `repeat_interleave`
Also, maybe for better CUDAGraph-ability / no CPU syncs, there should be some static... | https://github.com/huggingface/tokenizers/issues/1919 | open | [] | 2025-12-26T02:20:49Z | 2026-01-05T10:51:17Z | 1 | vadimkantorov |
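On the GPU detokenization idea above, here is a minimal sketch of the byte-table approach in plain PyTorch. It uses a padded byte table plus a boolean mask rather than `repeat_interleave`, and it assumes the tokenizer can expose the raw byte string of every token; the names are illustrative, not the linked `bpedecode_vec` hack.

```python
import torch

def build_byte_table(vocab_bytes, device="cuda"):
    # vocab_bytes[i] is the raw byte string of token id i.
    max_len = max(len(b) for b in vocab_bytes)
    table = torch.zeros(len(vocab_bytes), max_len, dtype=torch.uint8, device=device)
    lengths = torch.zeros(len(vocab_bytes), dtype=torch.long, device=device)
    for i, b in enumerate(vocab_bytes):
        table[i, : len(b)] = torch.tensor(list(b), dtype=torch.uint8, device=device)
        lengths[i] = len(b)
    return table, lengths

def detokenize_on_gpu(token_ids, table, lengths):
    rows = table[token_ids]                       # (n_tokens, max_len), padded bytes
    n_bytes = lengths[token_ids]                  # (n_tokens,)
    keep = torch.arange(rows.size(1), device=rows.device)[None, :] < n_bytes[:, None]
    return rows[keep]                             # 1-D uint8 stream of decoded bytes

# Usage with a hypothetical vocab: bytes(out.cpu().numpy()).decode("utf-8", errors="replace")
```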
vllm-project/vllm | 31,361 | [Usage]: Question about the dummy run. It seems the dummy run uses a different precision? | ### Question
I am trying to modify vLLM, especially the **tp** communication; I'm trying to **break all-reduce into reduce-scatter + all-gather**.
However, I encountered a precision problem. After printing the hidden states, it seems each layer has around +-0.01 diff; when it accumulates over all the layers, the result... | https://github.com/vllm-project/vllm/issues/31361 | closed | [
"usage"
] | 2025-12-25T16:38:03Z | 2025-12-27T03:41:27Z | 0 | Dingjifeng |
vllm-project/vllm | 31,353 | [Bug]: KV Cache grows continuously with just one chat completion request using meta-llama/Llama-3.2-1B on L40 GPU with Flash Attention and finally completed after 10 minutes | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version ... | https://github.com/vllm-project/vllm/issues/31353 | open | [
"bug",
"help wanted"
] | 2025-12-25T13:56:52Z | 2025-12-27T15:55:34Z | 1 | aravilli |
sgl-project/sglang | 15,825 | Is it normal that Qwen3-30B-A3B runs slower than Qwen3-8B? | I serve two models on the Ascend 910 platform (following sglang's ascend examples) with the same tp2dp8 and benchmarked them.
Before testing, I supposed A3B would be faster than 8B because it activates fewer tensor blocks.
But the result is different:
### qwen 30B A3B
```
export SGLANG_SET_CPU_AFFINITY=1
export PYTORCH_NPU_AL... | https://github.com/sgl-project/sglang/issues/15825 | open | [] | 2025-12-25T11:26:10Z | 2025-12-25T11:26:10Z | 0 | yucc-leon |
vllm-project/vllm | 31,344 | [Usage]: how to pass param logits_processors in AsyncEngineArgs? | ### Your current environment
import torch
from transformers import LogitsProcessor
from transformers.generation.logits_process import _calc_banned_ngram_tokens
from typing import List, Set
class NoRepeatNGramLogitsProcessor(LogitsProcessor):
def __init__(self, ngra... | https://github.com/vllm-project/vllm/issues/31344 | open | [
"usage"
] | 2025-12-25T10:12:02Z | 2025-12-25T13:30:54Z | 0 | cqray1990 |
huggingface/diffusers | 12,889 | Question about qwen-image-edit-2511 loading warning | When loading the model qwen-image-edit-2511 using the diffusers library, I encounter the following warning:
The config attributes {'zero_cond_t': True} were passed to QwenImageTransformer2DModel, but are not expected and will be ignored. Please verify your config.json configuration file.
This suggests that the zero_c... | https://github.com/huggingface/diffusers/issues/12889 | closed | [] | 2025-12-25T07:06:28Z | 2025-12-25T08:56:28Z | 2 | wizardbob |
sgl-project/sglang | 15,810 | [Bug] hicache 3fs backend global metadata much instance deploy bug | ### Checklist
- [x] I searched related issues but found no solution.
- [x] The bug persists in the latest version.
- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [ ] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/15810 | open | [] | 2025-12-25T06:52:45Z | 2025-12-25T09:42:30Z | 4 | weibingo |
vllm-project/vllm | 31,319 | [Bug]: GLM-4.7-FP8 missing beginning <think> tag | ### Your current environment
I am on docker nightly vLLM API server version 0.14.0rc1.dev104+g8ee90c83f
### 🐛 Describe the bug
I hosted the model via vLLM, already without a reasoning_parser, and found that the model output starts directly without the opening <think> tag but includes the closing </think> tag later.
```
root@iv-ydzbs5zs... | https://github.com/vllm-project/vllm/issues/31319 | open | [
"bug"
] | 2025-12-24T18:45:34Z | 2026-01-06T07:59:45Z | 16 | Nemo-G |
vllm-project/vllm | 31,278 | [Usage]: Does Qwen3-VL in local-loading mode support loading a LoRA separately? | Does Qwen3-VL in local-loading mode support loading a LoRA separately? | https://github.com/vllm-project/vllm/issues/31278 | open | [
"usage"
] | 2025-12-24T11:33:08Z | 2025-12-25T03:52:16Z | 3 | dengdeng-cat |
vllm-project/vllm | 31,272 | [Performance]: b200x8 deepseek-ai/DeepSeek-V3.2-Exp max perf | ### Proposal to improve performance
_No response_
### Report of performance regression
Do you have any ideas on how to increase TPS? I have two servers — one with H200 ×8 and another with B200 ×8. They use the same startup script, but the performance is almost identical. In my opinion, B200 should be faster than H20... | https://github.com/vllm-project/vllm/issues/31272 | open | [
"performance"
] | 2025-12-24T09:48:01Z | 2025-12-24T10:09:29Z | 0 | evgeniiperepelkin |
huggingface/trl | 4,747 | Addition of Supervised Reinforcement Learning | ### Feature request
https://arxiv.org/pdf/2510.25992 Can I work on its implementation?
### Motivation
A better approach than previous RL methods.
### Your contribution
I can work on it following reference paper | https://github.com/huggingface/trl/issues/4747 | open | [] | 2025-12-24T09:20:32Z | 2025-12-24T09:20:32Z | 0 | kushalgarg101 |
vllm-project/vllm | 31,270 | [Bug]: Can speculative decode run with PP >2? | ### Your current environment
vllm:0.12.0
### 🐛 Describe the bug
I run vllm:0.12.0 with start args like this:
`python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 --port 8080 --dtype bfloat16 --model /Qwen3-32B \
--pipeline-parallel-size 2 \
--gpu-memory-utilization 0.9 --max-model-len 32768 --max-num-b... | https://github.com/vllm-project/vllm/issues/31270 | open | [
"bug"
] | 2025-12-24T09:10:05Z | 2025-12-26T07:27:11Z | 1 | frankie-ys |
sgl-project/sglang | 15,739 | [Bug] Failed to deploy DeepSeek-V3.2 with LMCache | ### Checklist
- [x] I searched related issues but found no solution.
- [x] The bug persists in the latest version.
- [x] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [x] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/15739 | open | [] | 2025-12-24T08:45:29Z | 2025-12-29T22:55:27Z | 1 | niceallen |
sgl-project/sglang | 15,710 | [Bug] Using TBO, but no overlap in decoding phase? | ### Checklist
- [x] I searched related issues but found no solution.
- [x] The bug persists in the latest version.
- [x] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [x] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/15710 | open | [] | 2025-12-24T02:22:19Z | 2025-12-24T02:22:19Z | 0 | ziyuhuang123 |
sgl-project/sglang | 15,707 | [Feature] diffusion: TurboDiffusion achieves a 200x speedup on a single GPU, bringing video into the second-level era | ### Checklist
- [ ] If this is not a feature request but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- [ ] Please use English. Otherwise, it will be closed.
### Motivation
https://github.com/thu-ml/TurboDiffusion
When can it be in... | https://github.com/sgl-project/sglang/issues/15707 | open | [] | 2025-12-24T01:50:02Z | 2025-12-30T08:45:43Z | 1 | xiaolin8 |
huggingface/transformers | 43,023 | How to investigate "CAS service error" during model downloading? | ### System Info
(nm) PS C:\Users\myuser\AppData\Local\anaconda3\envs\nm\Lib\site-packages\transformers\commands> python .\transformers_cli.py env
```
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.57.3
- Platform: Windows-10-10.0.19045-SP0
- Python v... | https://github.com/huggingface/transformers/issues/43023 | open | [
"bug"
] | 2025-12-23T14:48:51Z | 2025-12-25T14:36:42Z | null | satyrmipt |
vllm-project/vllm | 31,217 | [Usage]: suffix decoding | ### Your current environment
Does suffix decoding necessarily require a repetition penalty of 1?
### How would you like to use vllm
Does suffix decoding necessarily require a repetition penalty of 1?
In suffix decoding, I found that when the repetition penalty is not equal to 1, the acceleration is not significant. ... | https://github.com/vllm-project/vllm/issues/31217 | open | [
"usage"
] | 2025-12-23T10:43:45Z | 2025-12-24T02:56:35Z | 1 | jiangix-paper |
huggingface/lerobot | 2,707 | Transformers dependency | ### Ticket Type
🐛 Bug Report (Something isn't working)
### Environment & System Info
```Shell
- lerobot version: 0.4.3
- Platform: Linux-5.14.0-570.26.1.el9_6.x86_64-x86_64-with-glibc2.34
- Python version: 3.12.12
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 2.3.5
- PyTorch version: ... | https://github.com/huggingface/lerobot/issues/2707 | closed | [
"bug",
"question",
"dependencies"
] | 2025-12-23T10:37:53Z | 2025-12-23T23:43:10Z | null | RomDeffayet |
vllm-project/vllm | 31,216 | [RFC]: Sampling Optimization: move gather of logits after argmax. | ### Motivation.
As shown in the left part of the following picture, in the original sampling procedure we perform `llm_head` and `gather` first, then perform `argmax` on the full `logits`. However, we can in fact move `gather` after `argmax` to reduce both the communication volume of `gather` and the computation load of `... | https://github.com/vllm-project/vllm/issues/31216 | open | [
"RFC"
] | 2025-12-23T10:23:34Z | 2025-12-26T03:33:04Z | 2 | whx-sjtu |
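For context on the sampling RFC above, a hedged sketch of the idea for the greedy (argmax) case only, written against plain `torch.distributed` rather than vLLM's code: each TP rank takes the argmax over its vocab shard, and only the per-rank winners are gathered instead of the full logits.

```python
import torch
import torch.distributed as dist

def sharded_greedy_sample(local_logits, vocab_shard_offset, group=None):
    # local_logits: (batch, local_vocab) shard held by this rank.
    local_max, local_idx = local_logits.max(dim=-1)
    local_idx = local_idx + vocab_shard_offset            # map to global token ids
    world = dist.get_world_size(group)
    max_buf = [torch.empty_like(local_max) for _ in range(world)]
    idx_buf = [torch.empty_like(local_idx) for _ in range(world)]
    dist.all_gather(max_buf, local_max, group=group)      # (batch,) per rank, tiny
    dist.all_gather(idx_buf, local_idx, group=group)      # instead of gathering full logits
    all_max = torch.stack(max_buf, dim=-1)                # (batch, world)
    all_idx = torch.stack(idx_buf, dim=-1)
    winner = all_max.argmax(dim=-1, keepdim=True)         # rank holding the global max
    return all_idx.gather(-1, winner).squeeze(-1)         # global argmax token ids
```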
huggingface/diffusers | 12,884 | Compatibility issues regarding checkpoint/VAE dependency conflicts when Diffusers load Civitai LoRA | Hello everyone, I'm currently learning to use diffusers and would like to ask all my friends a question. I saw a good lora on Civitai, but this lora has requirements for checkpoint and vea. So I downloaded both models as the author requested. However, when I ran the following code, an error occurred.
The specific code ... | https://github.com/huggingface/diffusers/issues/12884 | closed | [] | 2025-12-23T10:11:27Z | 2025-12-23T13:41:47Z | 1 | hhhFuture |
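For the Civitai question above, one common loading pattern for a single-file checkpoint plus its required VAE and a LoRA looks roughly like the sketch below (assuming an SD 1.x checkpoint; an SDXL checkpoint would use StableDiffusionXLPipeline instead). File paths are placeholders, and whether this resolves the specific error depends on the traceback that was omitted.

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Hedged sketch: load the VAE and checkpoint the author requires, then the LoRA.
vae = AutoencoderKL.from_single_file(
    "path/to/required_vae.safetensors", torch_dtype=torch.float16
)
pipe = StableDiffusionPipeline.from_single_file(
    "path/to/checkpoint.safetensors", vae=vae, torch_dtype=torch.float16
)
pipe.load_lora_weights("path/to/lora.safetensors")
pipe.to("cuda")

image = pipe("a prompt using the LoRA's trigger words").images[0]
image.save("out.png")
```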
vllm-project/vllm | 31,211 | [Doc]: Add missing GPT-OSS tool calling instructions | ### 📚 The doc issue
Currently the `openai` tool calling format is not documented in [the tool calling documentation](https://docs.vllm.ai/en/stable/features/tool_calling/). However it is documented in the [cookbook](https://docs.vllm.ai/projects/recipes/en/latest/OpenAI/GPT-OSS.html#tool-use)
### Suggest a potential... | https://github.com/vllm-project/vllm/issues/31211 | closed | [
"documentation"
] | 2025-12-23T08:35:09Z | 2025-12-25T05:29:11Z | 0 | amithkk |
huggingface/lerobot | 2,704 | Training XVLA: IndexError with auto mode; size mismatch with joint mode on 14D joint-action dataset | ### Ticket Type
🐛 Bug Report (Something isn't working)
### Environment & System Info
```Shell
```
### Description
I am trying to train XVLA with base and folding checkpoint on a 14D joint-action dataset.
When I set --policy.action_mode=auto
lerobot-train \
--dataset.repo_id= \
--output_dir=./outputs/xvla_bim... | https://github.com/huggingface/lerobot/issues/2704 | closed | [
"bug",
"documentation",
"question",
"policies",
"dataset",
"CI",
"examples",
"training"
] | 2025-12-23T07:20:25Z | 2025-12-23T08:54:21Z | null | DaKhanh |
vllm-project/vllm | 31,205 | ValueError: Qwen3OmniMoeThinkerForConditionalGeneration does not support LoRA yet. |
Hi, I have trained the qwen3-omni thinker via ms-swift. However, when I tried to run inference on qwen3-omni with the LoRA ckpt, an error occurred:
```
ValueError: Qwen3OmniMoeThinkerForConditionalGeneration does not support LoRA yet.
```
I have tried many versions of vllm, including 0.9.2, 0.11.0 and 0.12.0
here is my script:
```
CUD... | https://github.com/vllm-project/vllm/issues/31205 | open | [
"usage"
] | 2025-12-23T06:52:11Z | 2025-12-29T14:50:37Z | 2 | VJJJJJJ1 |
vllm-project/vllm | 31,204 | [RFC]: Supporting Multi MTP layers in Speculative Decoding (EagleProposer) | ### Motivation.
The EagleProposer for speculative decoding is only able to utilize the first MTP layer.
However, the model [XiaomiMiMo/MiMo-V2-Flash](https://huggingface.co/XiaomiMiMo/MiMo-V2-Flash) has 3 MTP layers.
Is there any plan or ongoing PR to extend support for multi MTP layers in speculative decoding?
btw, [... | https://github.com/vllm-project/vllm/issues/31204 | open | [
"RFC"
] | 2025-12-23T03:34:05Z | 2025-12-23T03:34:05Z | 0 | DingYibin |
huggingface/lerobot | 2,701 | Image keys with underscores not supported when migrating to v0.4.x | ### Ticket Type
🐛 Bug Report (Something isn't working)
### Environment & System Info
```Shell
Python 3.12.3, LeRobot versions 0.3.4 and 0.4.2
From v0.4.2:
lerobot version: 0.4.2
- Platform: Linux-6.14.0-37-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface Hub version: 0.35.3
- Datasets version:... | https://github.com/huggingface/lerobot/issues/2701 | open | [
"bug",
"question",
"policies",
"sensors",
"processor"
] | 2025-12-23T03:27:41Z | 2025-12-23T03:27:50Z | null | dangr |
huggingface/lerobot | 2,700 | Training a SmolVLA model on the lerobot/aloha_sim_insertion_human dataset does not converge | ### Ticket Type
❓ Technical Question
### Environment & System Info
```Shell
Ubuntu 22.04
lerobot 0.4.1
python 3.10
lerobot-train \
--job_name aloha_smolvla \
--output_dir $OUTPUT_DIR \
--env.type=aloha \
--env.task="AlohaInsertion-v0" \
--policy.type=smolvla \
--policy.load_vlm_weights=true \
--steps=... | https://github.com/huggingface/lerobot/issues/2700 | open | [
"question",
"policies",
"dataset",
"simulation",
"robots",
"training"
] | 2025-12-23T03:13:47Z | 2025-12-30T21:05:50Z | null | sslndora0612-max |
vllm-project/vllm | 31,202 | [Bug]: Mixtral Fp8 Accuracy is Degraded | ### Your current environment
H200
### 🐛 Describe the bug
- launch
```bash
vllm serve amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV --enforce-eager -tp 2
```
- eval
```bash
lm_eval \
--model local-completions \
--tasks gsm8k \
--model_args "model=amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV,base_url=http://localhost:8000/v1/co... | https://github.com/vllm-project/vllm/issues/31202 | closed | [
"bug",
"help wanted"
] | 2025-12-23T02:27:28Z | 2025-12-23T02:42:58Z | 1 | robertgshaw2-redhat |
vllm-project/vllm | 31,200 | [Bug]: class Request and block_hasher have a circular reference, which may cause a memory leak. | ### Your current environment
<summary> Running a multimodal network with prefix caching will cause a memory leak. </summary>
<details>
<code>
class Request:
def __init__(
...
self.block_hashes: list[BlockHash] = []
self.get_hash_new_full_blocks: Callable[[], list[BlockHash]] | None = None
... | https://github.com/vllm-project/vllm/issues/31200 | open | [
"bug"
] | 2025-12-23T01:55:47Z | 2025-12-23T15:02:37Z | 1 | frelam |
huggingface/diffusers | 12,881 | Is that a bug in the prompt2prompt pipeline with a replace-word prompt? | ### Describe the bug
It performs the same when returning different cross-attention maps; is this an implementation error or just a problem with prompt2prompt?
### Reproduction
Use stable-diffusion-2-1:
`images = pipe(["A turtle playing with a ball", "A monkey playing with a ball"],
generator=torch.Generator("cu... | https://github.com/huggingface/diffusers/issues/12881 | open | [
"bug"
] | 2025-12-23T01:55:06Z | 2025-12-23T01:55:06Z | 0 | lincion |
sgl-project/sglang | 15,641 | [Feature] In the event_loop_overlap function of the scheduler, can the recv operation be processed asynchronously? | ### Checklist
- [x] If this is not a feature request but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- [x] Please use English. Otherwise, it will be closed.
### Motivation
In the _offline large-scale high-concurrency multimodal det... | https://github.com/sgl-project/sglang/issues/15641 | open | [] | 2025-12-22T14:04:10Z | 2025-12-22T14:04:10Z | 0 | titanium-temu |
sgl-project/sglang | 15,634 | [Bug] sgl-kernel does not support fa3??? | ### Checklist
- [ ] I searched related issues but found no solution.
- [x] The bug persists in the latest version.
- [x] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [x] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/15634 | open | [] | 2025-12-22T10:50:36Z | 2025-12-22T10:50:55Z | 0 | ziyuhuang123 |
huggingface/lerobot | 2,697 | Run pi0.5 on Libero, incorrect version of transformers | ### Ticket Type
🐛 Bug Report (Something isn't working)
### Environment & System Info
```Shell
Copy-and-paste the text below in your GitHub issue and FILL OUT the last point.
- lerobot version: 0.4.0
- Platform: Linux-6.8.0-87-generic-x86_64-with-glibc2.35
- Python version: 3.10.19
- Huggingface Hub version: 0.35.3... | https://github.com/huggingface/lerobot/issues/2697 | open | [
"bug",
"question",
"evaluation"
] | 2025-12-22T08:54:56Z | 2025-12-22T16:20:01Z | null | yqi19 |
huggingface/lerobot | 2,696 | RTC does not work. | ### Ticket Type
🐛 Bug Report (Something isn't working)
### Environment & System Info
```Shell
- lerobot version: 0.4.3
- Platform: Linux-5.10.134-17.3.al8.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.19
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 2.2.6
- PyTorch version: 2.7.... | https://github.com/huggingface/lerobot/issues/2696 | closed | [
"bug",
"question",
"policies",
"dataset",
"CI",
"python",
"examples",
"training"
] | 2025-12-22T03:22:23Z | 2025-12-22T05:20:39Z | null | xiaozhisky1 |
huggingface/sentence-transformers | 3,601 | how to fine-tune a bi-encoder embedding model with multimodal input | I want to cluster e-commerce products with a bi-encoder. Each product has a name (text) and an image. Can I use sentence-transformers to fine-tune a bi-encoder model? The training dataset contains product clusters, like:
```
product1_name, product1_img, cluster_id1
product2_name, product2_img, cluster_id1
product3_nam... | https://github.com/huggingface/sentence-transformers/issues/3601 | open | [] | 2025-12-22T02:46:43Z | 2025-12-22T09:09:31Z | null | fancyerii |
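On the bi-encoder question above: sentence-transformers does ship CLIP-based models, but a multimodal product encoder can also be sketched directly in PyTorch on top of a CLIP backbone. Below is a heavily hedged sketch, not a sentence-transformers recipe; the checkpoint name, fusion layer, and loss formulation are all assumptions. Products sharing a cluster_id are treated as positives in a supervised contrastive loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import CLIPModel

# Two-tower product encoder: fuse CLIP text and image features into one embedding.
class ProductEncoder(nn.Module):
    def __init__(self, clip_name="openai/clip-vit-base-patch32", dim=256):
        super().__init__()
        self.clip = CLIPModel.from_pretrained(clip_name)
        self.proj = nn.Linear(self.clip.config.projection_dim * 2, dim)

    def forward(self, input_ids, attention_mask, pixel_values):
        t = self.clip.get_text_features(input_ids=input_ids, attention_mask=attention_mask)
        v = self.clip.get_image_features(pixel_values=pixel_values)
        return F.normalize(self.proj(torch.cat([t, v], dim=-1)), dim=-1)

def supervised_contrastive_loss(emb, cluster_ids, temperature=0.07):
    # Products with the same cluster_id are positives; assumes each batch
    # contains at least one in-cluster pair.
    sim = emb @ emb.t() / temperature
    sim.fill_diagonal_(float("-inf"))
    pos = cluster_ids[:, None] == cluster_ids[None, :]
    pos.fill_diagonal_(False)
    log_prob = sim - sim.logsumexp(dim=-1, keepdim=True)
    return -log_prob[pos].mean()
```

Batches would be built with `CLIPProcessor` on the product name and image; at inference time the resulting embeddings can be clustered with any off-the-shelf algorithm.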
vllm-project/vllm | 31,096 | [Usage]: Qwen3-Next: Both Instruct and Thinking models don't support function calling |
Does the Qwen3-Next model not support the function calling feature? Test results show some common error scenarios:
1. The tools should be called, but content returned something like the following:
```
{
"choices": [
{
"message": {
"content": "</think>\n{\"name\": \"send_email\", \"arguments\": {\"u... | https://github.com/vllm-project/vllm/issues/31096 | open | [
"usage"
] | 2025-12-21T12:02:08Z | 2025-12-23T03:02:02Z | 0 | PHOEBEMOON0802 |
huggingface/lerobot | 2,694 | The GR00T algorithm simply won't run and throws the following error. Could someone please help me fix it? | The GR00T algorithm simply won't run and throws the following error. Could someone please help me fix it?
n_model.post_layernorm.bias', 'backbone.eagle_model.vision_model.vision_model.post_layernorm.weight']
Traceback (most recent call last):
File "/home/ruijia/miniconda3/envs/lerobot/bin/lerobot-train", line 7, i... | https://github.com/huggingface/lerobot/issues/2694 | open | [
"bug",
"question",
"policies",
"CI",
"python",
"processor",
"examples",
"training"
] | 2025-12-21T09:12:14Z | 2025-12-24T00:06:08Z | null | wuxiaolianggit |
huggingface/lerobot | 2,693 | Wrist Roll motor not responding | ### Ticket Type
🐛 Bug Report (Something isn't working)
### Environment & System Info
```Shell
lerobot version 0.4.0
```
### Description
I connected to the lerobot so101 bot -> set up motors -> calibrated -> tested teleoperation,
everything went fine. But after a few hours, when recalibration is done in some other syste... | https://github.com/huggingface/lerobot/issues/2693 | open | [
"bug",
"question",
"teleoperators"
] | 2025-12-21T09:01:51Z | 2025-12-26T10:19:17Z | null | CHIRANJEET1729DAS |
huggingface/lerobot | 2,692 | [Bug] Too many errors when training RL in simulation | ### Ticket Type
🐛 Bug Report (Something isn't working)
### Environment & System Info
```Shell
`
- LeRobot version: 0.4.3
- Platform: Linux-6.8.0-90-generic-x86_64-with-glibc2.35
- Python version: 3.10.19
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 2.2.6
- FFmpeg version: N/A
- PyTor... | https://github.com/huggingface/lerobot/issues/2692 | open | [
"bug",
"documentation",
"question",
"dataset",
"simulation",
"tests",
"examples",
"training"
] | 2025-12-21T08:22:16Z | 2026-01-04T06:19:05Z | null | Hukongtao |
huggingface/accelerate | 3,894 | How to specify different number of process per node | I've 2 node. First node has 8 gpus while second node has 2 GPUs. I want to specify the number of process to be 8 and 2 respectively in both nodes. I'm using this config in both node. But it always tries to divide equal number of process in both node. With below config file, it's starting 5 process in both nodes:-
Node... | https://github.com/huggingface/accelerate/issues/3894 | open | [] | 2025-12-21T07:09:15Z | 2025-12-21T07:09:15Z | null | AIML001 |
vllm-project/vllm | 31,091 | [Usage]: Image Embedding Models (CLIP, Siglip, etc) | ### Your current environment
```text
root@3904bdeddb91:/vllm-workspace# python3 collect_env.py
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0... | https://github.com/vllm-project/vllm/issues/31091 | closed | [
"usage"
] | 2025-12-21T04:10:10Z | 2025-12-23T03:26:40Z | 2 | JamesDConley |
huggingface/lerobot | 2,690 | [Bug] Pi0 Inference RuntimeError: Dimension mismatch in Gemma eager_attention_forward (Causal Mask vs Attn Weights) | https://github.com/huggingface/lerobot/issues/2690 | closed | [
"bug",
"question",
"policies",
"dataset",
"CI",
"performance",
"robots",
"examples",
"training"
] | 2025-12-20T16:08:36Z | 2025-12-22T09:34:57Z | null | SMWTDDY | |
huggingface/lerobot | 2,689 | problem regarding updating the aloha sim dataset from version v2.1 to v3.0 | ### Ticket Type
🐛 Bug Report (Something isn't working)
### Environment & System Info
```Shell
lerobot version 3.0, h100 gpu, openpi repository, training aloha simulation with pi0.5
```
### Description
While training the aloha simulation, I updated the lerobot aloha sim insertion dataset from being compatible with 2.1 to 3.0, ... | https://github.com/huggingface/lerobot/issues/2689 | open | [
"bug",
"question",
"dataset",
"simulation",
"CI",
"robots",
"training"
] | 2025-12-20T13:42:39Z | 2025-12-24T00:06:09Z | null | conscious-choi |
sgl-project/sglang | 15,524 | [Bug] Deepseek R1 multi-turn tool calling not working | ### Checklist
- [x] I searched related issues but found no solution.
- [x] The bug persists in the latest version.
- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [x] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/15524 | closed | [] | 2025-12-20T10:31:36Z | 2025-12-21T01:29:43Z | 2 | ynwang007 |
vllm-project/vllm | 31,066 | [Doc]: Formatting issue in markdown file | ### 📚 The doc issue
in [paged_attention.md](https://github.com/vllm-project/vllm/blob/ff2168bca3a195b835c64a5c9012d7b6a9f34e61/docs/design/paged_attention.md#query), there is an issue where pictures aren't formatted correctly and only show the HTML link.
For example, specifically, in the Query subsection, we can se... | https://github.com/vllm-project/vllm/issues/31066 | closed | [
"documentation"
] | 2025-12-20T06:23:44Z | 2025-12-22T01:38:56Z | 1 | ssaketh-ch |
vllm-project/vllm | 31,044 | [CI Failure]: Blackwell Fusion Tests | ### Name of failing test
FAILED tests/compile/test_fusion_attn.py::test_attention_quant_pattern[AttentionBackendEnum.TRITON_ATTN-nvidia/Llama-4-Scout-17B-16E-Instruct-FP8-TestAttentionFp8StaticQuantPatternModel--quant_fp8-dtype1-533-128-40-8] - AssertionError: Tensor-likes are not close!
### Basic information
- [x] ... | https://github.com/vllm-project/vllm/issues/31044 | open | [
"help wanted",
"torch.compile",
"ci-failure"
] | 2025-12-19T18:49:59Z | 2025-12-26T21:58:25Z | 3 | robertgshaw2-redhat |
vllm-project/vllm | 31,043 | [BugFix]: move torch.Size across graphs in split_graph | ### 🚀 The feature, motivation and pitch
When fixing a moe x cudagraph issue (see #30914), we found that `split_graph` may generate a submodule that returns a torch.Size and later another submodule that takes torch.Size. This errors since pt2 somehow does not support `torch.Size` as output yet.
One fix is to manuall... | https://github.com/vllm-project/vllm/issues/31043 | open | [
"help wanted",
"feature request",
"torch.compile"
] | 2025-12-19T18:24:58Z | 2025-12-22T21:23:04Z | 1 | BoyuanFeng |
vllm-project/vllm | 31,039 | [Feature]: Integrate Sonic MoE | ### 🚀 The feature, motivation and pitch
https://x.com/wentaoguo7/status/2001773245318541324?s=46&t=jLcDgQXDbYe6HgFmTNYgpg
https://github.com/Dao-AILab/sonic-moe
Curious to see benchmarks!
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure yo... | https://github.com/vllm-project/vllm/issues/31039 | open | [
"help wanted",
"good first issue",
"feature request"
] | 2025-12-19T17:29:59Z | 2026-01-04T14:10:21Z | 4 | robertgshaw2-redhat |
sgl-project/sglang | 15,481 | [Bug] Seeded Deterministic/Batch Invariant Inference Not Working on v1/completions endpoint | ### Checklist
- [x] I searched related issues but found no solution.
- [x] The bug persists in the latest version.
- [x] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [x] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/15481 | closed | [
"bug",
"high priority"
] | 2025-12-19T15:04:26Z | 2025-12-20T04:32:15Z | 8 | jamesheavey |
huggingface/lerobot | 2,684 | How to manually push a dataset | Say you `lerobot-record` a dataset with the flag `--dataset.push_to_hub=False`, or you encounter any problem at uploading time.
Is using `hf upload` enough, or do `lerobot` datasets need additional steps? | https://github.com/huggingface/lerobot/issues/2684 | open | [
"documentation",
"question",
"dataset"
] | 2025-12-19T13:00:20Z | 2025-12-19T15:41:42Z | null | mcres |
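For the dataset-upload question above, the plain Hub upload path (roughly what `hf upload` does) looks like the sketch below; the repo id and local path are placeholders, and this does not settle whether LeRobot expects any extra metadata steps on top of it.

```python
from huggingface_hub import HfApi

api = HfApi()
api.create_repo("your-user/your-dataset", repo_type="dataset", exist_ok=True)
api.upload_folder(
    folder_path="/path/to/local/lerobot/dataset",  # the locally recorded dataset root
    repo_id="your-user/your-dataset",
    repo_type="dataset",
)
```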
vllm-project/vllm | 31,023 | [Doc]: FP8 KV Cache: Does softmax output multiply with FP8 V directly or after dequantization? | ### 📚 The doc issue
https://docs.vllm.ai/en/v0.8.5.post1/features/quantization/quantized_kvcache.html
Question:
In the FP8 KV Cache implementation, after computing attention scores and softmax at higher precision (FP16/BF16), is the resulting attention weight matrix:
Quantized to FP8 and multiplied directly with FP8 ... | https://github.com/vllm-project/vllm/issues/31023 | closed | [
"documentation"
] | 2025-12-19T10:33:22Z | 2025-12-22T00:41:38Z | 0 | jorjiang |
vllm-project/vllm | 31,019 | [Bug]: Qwen3-VL 2:4 sparsity llm-compressor RuntimeError: shape mismatch (0.12, 0.13rc2) | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version ... | https://github.com/vllm-project/vllm/issues/31019 | open | [
"bug",
"help wanted",
"good first issue"
] | 2025-12-19T09:18:00Z | 2025-12-24T12:16:01Z | 4 | SorenDreano |
vllm-project/vllm | 31,016 | [Bug]: FlashInfer Incompatible with Sleep Mode | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
Here is a script to reproduce the bug:
I use vllm=v0.10.1 and flashinfer-python=v0.5.3.
```
from vllm import LLM, S... | https://github.com/vllm-project/vllm/issues/31016 | open | [
"bug",
"help wanted"
] | 2025-12-19T08:04:19Z | 2025-12-19T23:17:47Z | 1 | xiaoxiaosuaxuan |
huggingface/transformers.js | 1,490 | Example models for each pipeline | ### Question
Right now, I sorta use the docs and some searches to find good default models for https://workglow.dev/ for each pipeline that transformerjs has to offer. But they are not really the best, either in size or performance.
It would be great to have a list for each pipeline for fast and effective, best of br... | https://github.com/huggingface/transformers.js/issues/1490 | open | [
"question"
] | 2025-12-19T07:37:16Z | 2025-12-19T17:41:01Z | null | sroussey |
vllm-project/vllm | 31,004 | [New Model]: T5Gemma 2 | ### The model to consider.
https://huggingface.co/collections/google/t5gemma-2
### The closest model vllm already supports.
_No response_
### What's your difficulty of supporting the model you want?
I know vLLM dropped encoder-decoder support, but can we bring it back?
https://huggingface.co/docs/transformers/mo... | https://github.com/vllm-project/vllm/issues/31004 | open | [
"new-model"
] | 2025-12-19T03:55:00Z | 2025-12-20T21:37:34Z | 1 | ducviet00-h2 |
sgl-project/sglang | 15,443 | SGLang Diffusion Cookbook Proposal | # 🎨 [Community Contribution] Create SGLang Diffusion Models Cookbook
## 🎯 Goal
Create a comprehensive cookbook for diffusion models in SGLang, demonstrating SGLang's performance advantages for image and video generation workloads.
## 📋 Scope
### Models to Cover
**Image Generation:**
- Flux-1 Dev
- Flux-2
- SDX... | https://github.com/sgl-project/sglang/issues/15443 | open | [] | 2025-12-19T03:44:33Z | 2025-12-23T13:09:31Z | 1 | Richardczl98 |
vllm-project/vllm | 30,969 | [Bug]: SmolLM3-3B FP8 Fails to Load [`compressed-tensors` and `transformers-impl` compatibility issue] | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
Running in official Docker image: vllm/vllm-openai:v0.11.1
GPU: NVIDIA L4 (GCP g2-standard-8)
`| NVIDIA-SMI 570.195.03 Driver Version: 570.195.03 CUDA Version: 12.9 |`
vLLM version: 0.11.1
`... | https://github.com/vllm-project/vllm/issues/30969 | closed | [
"bug",
"help wanted",
"good first issue"
] | 2025-12-18T14:36:30Z | 2025-12-20T21:54:47Z | 3 | GauthierRoy |
huggingface/lerobot | 2,680 | Invalid frame index when training on merged datasets [RuntimeError] | ### Ticket Type
🐛 Bug Report (Something isn't working)
### Environment & System Info
```Shell
- LeRobot version: 0.4.3
- Platform: Linux-5.4.0-165-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 2.2.6
- FFmpeg version: 4.4.2-0ubunt... | https://github.com/huggingface/lerobot/issues/2680 | open | [
"bug",
"question",
"dataset",
"visualization",
"examples",
"training"
] | 2025-12-18T13:29:50Z | 2025-12-26T06:26:37Z | null | RiccardoIzzo |
huggingface/trl | 4,719 | Loss calculation of `GKDTrainer` may be inaccurate when performing gradient accumulation? | It seems that `GKDTrainer` averages the loss over the tokens of each micro-batch before accumulation?
https://github.com/huggingface/trl/blob/8918c9836a3e0b43a6851c08d01b69072f56ca52/trl/experimental/gkd/gkd_trainer.py#L284 | https://github.com/huggingface/trl/issues/4719 | open | [
"🐛 bug",
"🏋 GKD"
] | 2025-12-18T12:50:05Z | 2025-12-18T12:50:49Z | 0 | jue-jue-zi |
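For context on the GKD question above, a toy illustration (not TRL code) of why averaging the loss per micro-batch and then across accumulation steps can differ from a true per-token mean when micro-batches contain different numbers of tokens:

```python
import torch

per_token_loss = [
    torch.tensor([1.0, 1.0, 1.0, 1.0]),  # micro-batch with 4 valid tokens
    torch.tensor([3.0]),                 # micro-batch with 1 valid token
]

per_micro_then_mean = torch.stack([l.mean() for l in per_token_loss]).mean()
true_token_mean = torch.cat(per_token_loss).mean()

print(per_micro_then_mean)  # (1.0 + 3.0) / 2 = 2.0: every micro-batch weighted equally
print(true_token_mean)      # (4 * 1.0 + 3.0) / 5 = 1.4: every token weighted equally
```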
huggingface/lerobot | 2,679 | Merging datasets removes fps from scalar features | ### Ticket Type
🐛 Bug Report (Something isn't working)
### Environment & System Info
```Shell
- LeRobot version: 0.4.3
- Platform: Linux-6.17.9-arch1-1-x86_64-with-glibc2.42
- Python version: 3.12.11
- Huggingface Hub version: 0.34.4
- Datasets version: 4.1.1
- Numpy version: 2.3.5
- FFmpeg version: n8.0.1
- PyTorc... | https://github.com/huggingface/lerobot/issues/2679 | open | [
"bug",
"enhancement",
"question",
"dataset",
"performance",
"examples"
] | 2025-12-18T12:47:14Z | 2025-12-18T15:25:12Z | null | reeceomahoney |
vllm-project/vllm | 30,956 | [Feature]: could vLLM output logs through a user-defined logger? | ### 🚀 The feature, motivation and pitch
Hi,
I have defined a logger in a Python script, e.g. logger_utils.py.
Could I run the serve command from a shell with that logger,
such as:
`vllm serve qwen3-embedding-0.6b --logger_file logger_utils.py`
Thanks, I really need your help.
### Alternatives
_No response_
#... | https://github.com/vllm-project/vllm/issues/30956 | open | [
"feature request"
] | 2025-12-18T09:35:22Z | 2025-12-19T01:52:41Z | 5 | ucas010 |
huggingface/lerobot | 2,678 | Bug: lerobot-dataset-viz IndexError when visualizing specific episodes | # Bug Report: `lerobot-dataset-viz` IndexError when visualizing specific episodes
## Description
The `lerobot-dataset-viz` command fails with an `IndexError` when trying to visualize a specific episode using the `--episode-index` parameter. The issue is caused by `EpisodeSampler` using global dataset indices while th... | https://github.com/huggingface/lerobot/issues/2678 | open | [
"bug",
"question",
"dataset",
"visualization",
"python",
"examples"
] | 2025-12-18T08:45:05Z | 2025-12-24T08:31:00Z | null | apeSh1t |
vllm-project/vllm | 30,941 | [Performance]: Why Does Latency Remain Unchanged in vLLM 0.11.0 When Input Token Count Decreases for qwen3-vl-30b-a3b? | ### Proposal to improve performance
_No response_
### Report of performance regression
_No response_
### Misc discussion on performance
Using vLLM version 0.11.0 to run the qwen3-vl-30b-a3b model, the stress test results show that although the number of input tokens decreases, the latency does not change.
The mod... | https://github.com/vllm-project/vllm/issues/30941 | open | [
"performance"
] | 2025-12-18T07:40:35Z | 2025-12-18T07:40:35Z | 0 | Hormoney |
vllm-project/vllm | 30,933 | [Usage]: What is the latest instruction to run DeepSeek V3.2? | ### Your current environment
vLLM 0.12.0
### How would you like to use vllm
I am following the guidelines here https://docs.vllm.ai/projects/recipes/en/latest/DeepSeek/DeepSeek-V3_2.html for running DeepSeek v3.2. By following the instructions I installed vLLM 0.12.0 on my H200 node. However, when I try to run it wi... | https://github.com/vllm-project/vllm/issues/30933 | open | [
"usage"
] | 2025-12-18T06:18:29Z | 2025-12-18T15:50:29Z | 1 | IKACE |
vllm-project/vllm | 30,923 | [Bug]: Using the official document's vllm online method to deploy DeepSeek-OCR, the result is very bad, but when I use the offline method the result is normal. Why? | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
I use https://github.com/vllm-project/recipes/blob/main/DeepSeek/DeepSeek-OCR.md
the offline and online method is wo... | https://github.com/vllm-project/vllm/issues/30923 | closed | [
"bug"
] | 2025-12-18T04:14:33Z | 2025-12-18T04:25:20Z | 0 | git-liweichao |
vllm-project/vllm | 30,922 | [Bug]: Using the official document's vllm online method to deploy DeepSeek-OCR, the result is very bad, but when I use the offline method the result is normal. Why? | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
I use https://github.com/vllm-project/recipes/blob/main/DeepSeek/DeepSeek-OCR.md
the offline and online method is wo... | https://github.com/vllm-project/vllm/issues/30922 | open | [
"bug"
] | 2025-12-18T04:08:46Z | 2025-12-18T04:25:36Z | 1 | git-liweichao |
sgl-project/sglang | 15,359 | [Bug] The handling logic for tool_choice = 'auto' in the DeepseekV3.2 model may be incorrect. | ### Checklist
- [ ] I searched related issues but found no solution.
- [ ] The bug persists in the latest version.
- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [ ] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/15359 | closed | [] | 2025-12-18T02:47:26Z | 2025-12-18T03:36:38Z | 4 | JerryKwan |
huggingface/lerobot | 2,673 | Dataset v2 not working anymore | ### Ticket Type
Feature
### Environment & System Info
```Shell
- LeRobot version: 0.4.3
- Platform: macOS-26.2-arm64-arm-64bit
- Python version: 3.10.19
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 2.2.6
- FFmpeg version: 7.1.1
- PyTorch version: 2.7.1
- Is PyTorch built with CUDA sup... | https://github.com/huggingface/lerobot/issues/2673 | closed | [
"enhancement",
"question",
"dataset",
"dependencies",
"training"
] | 2025-12-17T21:35:31Z | 2025-12-17T23:26:54Z | null | imstevenpmwork |
huggingface/lerobot | 2,670 | Async inference for simulation (libero benchmark) | ### Issue Type
{"label" => "❓ Technical Question"}
### Environment & System Info
```Shell
```
### Description
Is there any way that we can support async inference for simulator (e.g., libero)? This makes it possible to test RTC with simulators.
### Context & Reproduction
A question re a feature.
### Expected... | https://github.com/huggingface/lerobot/issues/2670 | open | [
"question",
"simulation",
"performance",
"evaluation"
] | 2025-12-17T18:57:07Z | 2026-01-02T05:40:18Z | null | dywsjtu |
huggingface/transformers | 42,930 | Inconsistent handling of video_metadata in Qwen3VLVideoProcessor usage example | ### System Info
transformers==4.57.3
### Who can help?
@zucchini-nlp @yonigozlan @molbap
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details belo... | https://github.com/huggingface/transformers/issues/42930 | closed | [
"bug"
] | 2025-12-17T17:21:00Z | 2025-12-18T10:32:23Z | 3 | wagoriginal |
vllm-project/vllm | 30,882 | [Bug]: Marlin Fp8 Block Quant Failure | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
```bash
MODEL := "Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8"
#MODEL := "RedHatAI/Mixtral-8x7B-Instruct-v0.1-FP8"
launch... | https://github.com/vllm-project/vllm/issues/30882 | closed | [
"bug",
"help wanted",
"good first issue"
] | 2025-12-17T15:55:18Z | 2025-12-17T16:02:54Z | 2 | robertgshaw2-redhat |
vllm-project/vllm | 30,879 | [Doc]: Add some documentation about encoder compilation | ### 📚 The doc issue
I want something like a design doc for encoder compilation. For example:
- It uses support_torch_compile and set_model_tag to avoid cache collisions
- it supports or doesn't support the following features that VllmBackend does: cudagraphs, compile_ranges, and a high-level explanation for how these... | https://github.com/vllm-project/vllm/issues/30879 | open | [
"documentation",
"torch.compile"
] | 2025-12-17T15:44:50Z | 2025-12-17T16:27:38Z | 1 | zou3519 |
vllm-project/vllm | 30,865 | [Usage]: Tools GLM4.6v with vLLM | ### Your current environment
Hello,
I am running tests on this model, which I find excellent. However, I am encountering a few issues and would like to know whether it is possible to fix them or if I am simply asking for the impossible.
First of all, here is my vLLM configuration:
`docker run -d \ --name vllm-llm \... | https://github.com/vllm-project/vllm/issues/30865 | open | [
"usage"
] | 2025-12-17T10:51:34Z | 2025-12-18T08:33:44Z | 1 | qBrabus |
sgl-project/sglang | 15,321 | [Feature][VLM] Support ViT Piecewise CUDA Graph for VLMs | ### Checklist
- [ ] If this is not a feature request but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- [ ] Please use English. Otherwise, it will be closed.
### Motivation
Support ViT Piecewise CUDA Graph for VLMs can improve prefi... | https://github.com/sgl-project/sglang/issues/15321 | open | [
"performance",
"Multi-modal",
"vlm"
] | 2025-12-17T09:17:18Z | 2026-01-04T02:09:13Z | 0 | yuan-luo |
vllm-project/vllm | 30,859 | [Bug]: set_current_vllm_config() is only done during the initialization stage but not the runtime stage | ### Your current environment
Any env
### 🐛 Describe the bug
# Issue Statement
Currently, `set_current_vllm_config()` is only done during the initialization stage but not the runtime stage. If the code tries to call `get_current_vllm_config()`, vLLM prints a warning "Current vLLM config is not set." and returns a d... | https://github.com/vllm-project/vllm/issues/30859 | open | [
"bug"
] | 2025-12-17T08:59:49Z | 2025-12-22T18:09:55Z | 7 | nvpohanh |
sgl-project/sglang | 15,319 | [Feature] RFC: AutoSpec, Automatic Runtime Speculative Inference Parameter Tuning | ### Checklist
- [x] If this is not a feature request but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- [x] Please use English. Otherwise, it will be closed.
### Motivation
## Summary
This proposal introduces automatic runtime tuni... | https://github.com/sgl-project/sglang/issues/15319 | open | [] | 2025-12-17T08:53:57Z | 2025-12-22T03:37:45Z | 3 | maodoudou168 |
vllm-project/vllm | 30,855 | [Usage]: Qwen3-30B-A3B-NVFP4 fails on Dell Pro Max GB10 with "no kernel image is available for execution on the device" | ### Your current environment
```
Hardware: Dell Pro Max GB10
OS: Ubuntu 24
CUDA: cuda_13.0.r13.0
Cuda compilation tools, release 13.0, V13.0.88;
vllm: V0.12.0
torch_version: 2.9.0+cu128
model: RedHatAI/Qwen3-30B-A3B-NVFP4 or nvidia/Qwen3-30B-A3B-NVFP4 or nvidia/Qwen3-30B-A3B-FP4
```
### How would you like to use... | https://github.com/vllm-project/vllm/issues/30855 | open | [
"usage"
] | 2025-12-17T08:44:11Z | 2025-12-17T08:44:11Z | 0 | nanbogong |
vllm-project/vllm | 30,847 | [Bug]: Using Qwen3-VL with Efficient Video Sampling (EVS) to trim video embeddings, the number of tokens after the timestamp in the prompt is not aligned with the actual number of tokens after pruning? | ### Your current environment
<details>
vllm serve Qwen3-VL-8B --video-pruning-rate=0.75
messages=[
{
"role": "user",
"content": [
# {"type": "text", "text": "What's in this video?"},
{"type": "text", "text": "这个视频和图片分别描述的是什么内容?"},
... | https://github.com/vllm-project/vllm/issues/30847 | open | [
"bug"
] | 2025-12-17T06:46:15Z | 2026-01-04T07:39:17Z | 5 | xshqhua |