| repo (string) | number (int64) | title (string) | body (string) | url (string) | state (string) | labels (list) | created_at (timestamp, UTC) | updated_at (timestamp, UTC) | comments (int64) | user (string) |
|---|---|---|---|---|---|---|---|---|---|---|
vllm-project/vllm | 28,564 | [Usage]: Can't get ModernBert models to run in vllm serve | ### Your current environment
I am trying to download and use ModernBertModel with the vllm serve feature.
At first I thought it was an issue with the model itself, so I switched from BertEmbed to Alibaba-NLP/gte-modernbert-base, since it appears in the docs as a model that supports embedding.
Source: https://docs.vllm.ai/en/latest/models/supported_models/#pooling-models
I download and run it like this.
Download:
`huggingface-cli download Alibaba-NLP/gte-modernbert-base --local-dir models/bert --local-dir-use-symlinks False`
Serve (example, I have used many iterations):
`vllm serve models/bert2 --host 0.0.0.0 --port 8003 --task embed --trust-remote-code --gpu-memory-utilization 0.3`
No matter what, I get this: Assertion failed, The model should be a generative or pooling model when task is set to 'embedding'. [type=assertion_error, input_value=ArgsKwargs((), {'model': ...rocessor_plugin': None}), input_type=ArgsKwargs]
I tried setting the runner option, but that didn't do a thing. I really have no clue why the docs say this model is supported. I have searched through other issues and the documentation and tried a bunch of solutions, but none have worked so far. Been trying to figure this out for hours now and I am losing my mind (not relevant ig, need to vent).
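For reference, this is the kind of request I want to serve once the model loads (a plain OpenAI-style embeddings call; the model name below is just the local path I pass to `vllm serve`, so treat it as a placeholder):
```python
# Hypothetical client call I want to make once the server is up.
# The endpoint follows the OpenAI-compatible embeddings API exposed by
# `vllm serve`; the model name is a placeholder for whatever the server
# was started with.
import requests

resp = requests.post(
    "http://localhost:8003/v1/embeddings",
    json={"model": "models/bert2", "input": "The quick brown fox"},
)
print(resp.json())
```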
### How would you like to use vllm
I want to run inference on Alibaba-NLP/gte-modernbert-base or any ModernBertModel. I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28564 | open | [
"usage"
] | 2025-11-12T15:51:18Z | 2025-11-12T15:51:18Z | 0 | Logikschleifen |
vllm-project/vllm | 28,527 | 💡 Bounty Platform for vLLM | Hi vLLM team! 👋
I wanted to share **Roxonn** - a decentralized bounty platform for accelerating AI/ML development.
**What is Roxonn?**
✅ Fund GitHub issues with crypto bounties (XDC, USDC, ROXN)
✅ Notify 300+ AI/ML developers
✅ Auto-pay when PRs merge via blockchain
✅ Zero crypto setup needed
**Quick flow:**
1. Register repo (GitHub App)
2. Fund pool with USDC (stable pricing)
3. Assign bounties to features
4. PR merged → automatic payment
**Perfect for AI/ML:**
- Access to research community
- **Only 1% total platform fee**
- Transparent payments
Learn more: **https://roxonn.com**
*No pressure - sharing a resource!* | https://github.com/vllm-project/vllm/issues/28527 | closed | [] | 2025-11-12T07:50:33Z | 2025-11-13T12:36:15Z | 0 | dineshroxonn |
huggingface/transformers | 42,154 | 💡 Bounty Platform for Hugging Face Transformers | Hi Hugging Face Transformers team! 👋
I wanted to share **Roxonn** - a decentralized bounty platform for accelerating AI/ML development.
**What is Roxonn?**
✅ Fund GitHub issues with crypto bounties (XDC, USDC, ROXN)
✅ Notify 300+ AI/ML developers
✅ Auto-pay when PRs merge via blockchain
✅ Zero crypto setup needed
**Quick flow:**
1. Register repo (GitHub App)
2. Fund pool with USDC (stable pricing)
3. Assign bounties to features
4. PR merged → automatic payment
**Perfect for AI/ML:**
- Access to research community
- **Only 1% total platform fee**
- Transparent payments
Learn more: **https://roxonn.com**
*No pressure - sharing a resource!* | https://github.com/huggingface/transformers/issues/42154 | closed | [] | 2025-11-12T07:49:59Z | 2025-11-17T11:40:10Z | 2 | dineshroxonn |
vllm-project/vllm | 28,508 | [Usage]: KVCacheManager Parameter question |
I noticed that the parameter “self.req_to_block_hashes” has been removed from KVCacheManager since version v0.10.0. But this parameter is still preserved in the official documentation. Could you please provide an explanation of this change?
- [Document Description](https://docs.vllm.ai/en/v0.9.2/api/vllm/v1/core/kv_cache_manager.html)
- [Version v0.10.0 code](https://github.com/vllm-project/vllm/blob/v0.10.0/vllm/v1/core/kv_cache_manager.py)
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28508 | closed | [
"usage"
] | 2025-11-12T03:10:18Z | 2025-11-16T08:33:45Z | 1 | Liziqi-77 |
huggingface/diffusers | 12,638 | How to design a network with DiT blocks that is friendly to TensorRT fp16 conversion? | We had a network structured as `a convnet pre-encoder -> DiT blocks -> final block for last sampling`. It worked well in torch format and ONNX format, but when we tried to convert it to TensorRT fp16 format, the inference got value overflow. We have seen the data difference (between ONNX and TRT fp16, measured with polygraphy) get larger and larger through those DiT blocks. My question is, how do we make the whole model design more friendly to mixed-precision inference, so that the DiT blocks are less sensitive to value precision? Should I make the convnet pre-encoder and final blocks more complex, or simpler? Thanks | https://github.com/huggingface/diffusers/issues/12638 | open | [] | 2025-11-12T02:23:37Z | 2025-11-12T02:23:37Z | null | JohnHerry |
huggingface/lerobot | 2,428 | How to eval a real-world recorded dataset? | Can lerobot evaluate a real-world dataset with metrics such as MSE? I checked the eval script and found that it can currently only evaluate sim env datasets. | https://github.com/huggingface/lerobot/issues/2428 | open | [
"question",
"evaluation"
] | 2025-11-12T02:08:44Z | 2025-11-19T16:55:42Z | null | shs822 |
vllm-project/vllm | 28,505 | [Feature]: Is there a plan to introduce nano-PEARL, a new engineering effort in speculative inference? | ### 🚀 The feature, motivation and pitch
Nano-pearl can support speculative inference with higher concurrency (larger batch sizes) and is seamlessly compatible with algorithms like Eagle. Is there a plan to introduce it?
GitHub: https://github.com/smart-lty/nano-PEARL
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28505 | open | [
"feature request"
] | 2025-11-12T01:34:22Z | 2025-11-17T06:14:09Z | 1 | Lexlum |
vllm-project/vllm | 28,498 | [Bug][RL]: Port Conflict | ### Your current environment
- bug report:
```
Hello vLLM team, I'm running into a suspicious ZMQ socket bug with my 2P 4D configuration for DeepSeek-V3 (see below). I thought it was caused by reusing the same nodes for many vLLM launches, but now it has also happened on a clean node. Seems like a DP bug of sorts. Please find logs attached. vllm==0.11.0.
```
```bash
[1;36m(APIServer pid=670293)[0;0m File "XXX/.venv/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py", line 134, in __init__
[1;36m(APIServer pid=670293)[0;0m self.engine_core = EngineCoreClient.make_async_mp_client(
[1;36m(APIServer pid=670293)[0;0m ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(APIServer pid=670293)[0;0m File "XXX/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 101, in make_async_mp_client
[1;36m(APIServer pid=670293)[0;0m return DPLBAsyncMPClient(*client_args)
[1;36m(APIServer pid=670293)[0;0m ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(APIServer pid=670293)[0;0m File "XXX/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 1125, in __init__
[1;36m(APIServer pid=670293)[0;0m super().__init__(vllm_config, executor_class, log_stats,
[1;36m(APIServer pid=670293)[0;0m File "XXX/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 975, in __init__
[1;36m(APIServer pid=670293)[0;0m super().__init__(vllm_config, executor_class, log_stats,
[1;36m(APIServer pid=670293)[0;0m File "XXX/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 769, in __init__
[1;36m(APIServer pid=670293)[0;0m super().__init__(
[1;36m(APIServer pid=670293)[0;0m File "XXX/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 466, in __init__
[1;36m(APIServer pid=670293)[0;0m self.resources.output_socket = make_zmq_socket(
[1;36m(APIServer pid=670293)[0;0m ^^^^^^^^^^^^^^^^
[1;36m(APIServer pid=670293)[0;0m File "XXX/.venv/lib/python3.12/site-packages/vllm/utils/__init__.py", line 2983, in make_zmq_socket
[1;36m(APIServer pid=670293)[0;0m socket.bind(path)
[1;36m(APIServer pid=670293)[0;0m File "XXX/.venv/lib/python3.12/site-packages/zmq/sugar/socket.py", line 320, in bind
[1;36m(APIServer pid=670293)[0;0m super().bind(addr)
[1;36m(APIServer pid=670293)[0;0m File "zmq/backend/cython/_zmq.py", line 1009, in zmq.backend.cython._zmq.Socket.bind
[1;36m(APIServer pid=670293)[0;0m File "zmq/backend/cython/_zmq.py", line 190, in zmq.backend.cython._zmq._check_rc
[1;36m(APIServer pid=670293)[0;0m zmq.error.ZMQError: Address already in use (addr='tcp://slurm-h200-206-017:59251')
```
### 🐛 Describe the bug
From Nick:
```
I think the problem is that each DP worker finds/assigns free ports dynamically/independently, so there is a race condition. I'm not sure of an immediate workaround apart from just re-attempting to start things when this happens. We'll have to look at how to catch and re-find a port if possible (though I have a memory this might be nontrivial).
```
From Reporter:
```
Received init message: EngineHandshakeMetadata(addresses=EngineZmqAddresses(inputs=['tcp://slurm-h200-207-083:60613'], outputs=['tcp://slurm-h200-207-083:36865'], coordinator_input='tcp://slurm-h200-207-083:34575', coordinator_output='tcp://slurm-h200-207-083:48025', frontend_stats_publish_address='ipc:///tmp/88ec875f-3de9-46ec-9947-6d1d6573b910'), parallel_config={'data_parallel_master_ip': 'slurm-h200-207-083', 'data_parallel_master_port': 41917, '_data_parallel_master_port_list': [60545, 36835, 47971, 37001], 'data_parallel_size': 32})
```
I'm looking at the code and I see that all code paths for getting ports eventually go to _get_open_port, and that in _get_open_port there is basically no defence against choosing the same port twice. Can you please confirm my understanding?
_get_open_port in main is here: https://github.com/vllm-project/vllm/blob/main/vllm/utils/network_utils.py#L177
UPD: I imagine the assumption here is that once a code path gets a port, that code path will use it immediately, and thus the port will become busy. It doesn't seem to hold though.
Even when all sockets that vLLM chose for itself are unique, I get the stack trace below.
I have the following explanation in mind:
- vLLM chooses zmq ports before launching the engines
- launching the engines takes ~5 mins
- by the time the engines are launched, something can listen on this port, like for example Ray
- **It looks like the right solution is to hold on to the chosen ports immediately as they are chosen.**
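A minimal sketch of what I mean by holding on to a port (plain Python sockets, not vLLM's actual code; the helper name and return convention are made up for this example):
```python
# Sketch of the "reserve the port by keeping the socket bound" idea.
# Illustrative only -- not vLLM's implementation. While `holder` stays
# open, no other process can grab the same port; the handoff to the real
# consumer still needs care (e.g. passing the fd or using SO_REUSEPORT).
import socket

def reserve_free_port(host: str = "0.0.0.0") -> tuple[int, socket.socket]:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((host, 0))             # the OS picks a currently free port
    port = sock.getsockname()[1]
    return port, sock                # caller keeps `sock` open to hold the port

port, holder = reserve_free_port()
print(f"reserved port {port}")
holder.close()                       # release only when handing the port off
```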
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28498 | open | [
"bug",
"help wanted",
"good first issue"
] | 2025-11-11T22:51:35Z | 2025-12-04T07:35:31Z | 13 | robertgshaw2-redhat |
vllm-project/vllm | 28,489 | [Usage]: Online continuous batching | ### Current environment
```
==============================
System Info
==============================
OS : macOS 26.1 (arm64)
GCC version : Could not collect
Clang version : 17.0.0 (clang-1700.4.4.1)
CMake version : Could not collect
Libc version : N/A
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0
Is debug build : False
CUDA used to build PyTorch : None
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.6 (v3.12.6:a4a2d2b0d85, Sep 6 2024, 16:08:03) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)
Python platform : macOS-26.1-arm64-arm-64bit
==============================
CUDA / GPU Info
==============================
Is CUDA available : False
CUDA runtime version : No CUDA
CUDA_MODULE_LOADING set to : N/A
GPU models and configuration : No CUDA
Nvidia driver version : No CUDA
cuDNN version : No CUDA
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Apple M2
==============================
Versions of relevant libraries
==============================
[pip3] numpy==2.2.6
[pip3] nvidia-ml-py==13.580.82
[pip3] pyzmq==27.0.0
[pip3] sentence-transformers==5.1.2
[pip3] spacy-transformers==1.3.9
[pip3] torch==2.8.0
[pip3] torchaudio==2.8.0
[pip3] torchvision==0.23.0
[pip3] transformers==4.57.1
[conda] Could not collect
==============================
vLLM Info
==============================
ROCM Version : Could not collect
vLLM Version : 0.11.0
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled
GPU Topology:
Could not collect
==============================
Environment Variables
==============================
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
```
Hello,
I am looking to run an LLM (using vLLM) within a FastAPI application. My goal is to achieve online, continuous batching.
I want the application to continuously receive requests from external clients, and have vLLM automatically batch them up for parallel inference.
In the past, I used the LLM() engine wrapped in RayServe. While this worked, it seemed to create a new internal deployment each time, which I want to avoid.
I am now trying to achieve this without RayServe, using the AsyncLLMEngine directly (I don't know if I need the async version; this is what I read online).
Here is an example of my current code. I'm running on a CPU for test purposes, but I have another issue on GPU (very long inference time, on the order of minutes; with Ray it was only 2-3 seconds).
```
from vllm import AsyncEngineArgs, AsyncLLMEngine, SamplingParams  # imports needed for this snippet

# Model:
engine_args = AsyncEngineArgs(
model=path,
tensor_parallel_size=1,
gpu_memory_utilization=0.7,
enforce_eager=False,
disable_custom_all_reduce=False,
max_model_len=2048,
trust_remote_code=True,
enable_log_requests=False,
max_num_seqs=10
)
model_ = AsyncLLMEngine.from_engine_args(engine_args)
# Params
sampling_params = SamplingParams(
n=1,
best_of=None,
presence_penalty=0.0,
frequency_penalty=0.0,
temperature=0,
top_p=1.0,
top_k=1,
stop=my_stop_token,
stop_token_ids=[my_eos_token_id],
ignore_eos=False,
max_tokens=2048,
logprobs=None,
skip_special_tokens=True
)
outputs_generator = model_.generate(prompt, sampling_params, request_id)
final_output = None
async for request_output in outputs_generator:
if request_output.finished:
final_output = request_output
break
if final_output and final_output.outputs:
result = final_output.outputs[0].text
```
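For context, here is a rough sketch of how I am wiring this into FastAPI (it reuses `model_` and `sampling_params` from the snippet above; the endpoint name and request schema are just placeholders I chose, not anything prescribed by vLLM):
```python
# Rough sketch of the FastAPI wiring I am aiming for. Each incoming HTTP
# request calls `model_.generate()` with its own request_id, in the hope
# that the engine batches concurrent requests internally.
import uuid
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str

@app.post("/generate")
async def generate(req: GenerateRequest):
    request_id = str(uuid.uuid4())
    final_output = None
    async for request_output in model_.generate(req.prompt, sampling_params, request_id):
        if request_output.finished:
            final_output = request_output
    return {"text": final_output.outputs[0].text if final_output else ""}
```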
In my local test, I got the error when I tried, for example, 3 inferences: calling self.model.generate() 3 times with 1 input each, rather than calling self.model.generate() once with 3 inputs.
Error: `Assertion failed: !_current_out (src/router.cpp:166)`
Is it possible to achieve what I'm asking by always calling generate() per request and letting vLLM batch internally, or is the only solution to "collect" the prompts myself and then call a single centralized generate()?
Thanks
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28489 | open | [
"usage"
] | 2025-11-11T20:51:58Z | 2025-11-11T20:53:47Z | 0 | GenVr |
huggingface/trl | 4,507 | Can a multimodal model like Gemma be trained in the same way as a text-only model like Qwen, but with the goal of improving only its text capabilities? | As stated in the title, I hope to improve only the text capabilities of Gemma 3, but it doesn’t seem to have worked as expected. The model I used is gemma-3-4b-it, and I conducted the following simple tests:
```python
dataset = Dataset.from_list(
[
{"prompt": "What is 2+2?", "task": "math"},
{"prompt": "Write a function that returns the sum of two numbers.", "task": "code"},
{"prompt": "What is 3*4?", "task": "math"},
{"prompt": "Write a function that returns the product of two numbers.", "task": "code"},
]
)
```
These data shouldn’t cause Gemma to generate excessively long responses, but according to the logs, its output length is quite large: ```'completions/mean_length': 4096.0, 'completions/min_length': 4096.0, 'completions/max_length': 4096```
This doesn’t seem normal.
| https://github.com/huggingface/trl/issues/4507 | open | [
"🐛 bug",
"⏳ needs more info"
] | 2025-11-11T15:59:51Z | 2025-11-21T05:58:50Z | 0 | Tuziking |
vllm-project/vllm | 28,472 | [Usage]: Will the reasoning_content in the chat template still be applied correctly after switching reasoning_content to reasoning | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
Will message.reasoning_content (which exists in the default chat_template for qwen3-next-thinking, qwen3-vl-thinking and other qwen3-thinking series models, as well as glm4.5, kimi-k2-thinking and others) still be applied correctly in the chat template after switching reasoning_content to reasoning (i.e., mapping the reasoning field on the AI message to reasoning_content in the chat template)?
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28472 | closed | [
"usage"
] | 2025-11-11T15:04:11Z | 2025-11-13T06:25:29Z | 4 | zhcn000000 |
vllm-project/vllm | 28,456 | [Usage]: benchmark_moe Usage | ### Your current environment
```text
(EngineCore_DP0 pid=7498) INFO 11-10 11:42:48 [shm_broadcast.py:466] No available shared memory broadcast block found in 60 seconds. This typically happens when some processes are hanging or doing some time-consuming work (e.g. compilation).
(APIServer pid=7416) INFO 11-10 11:42:50 [loggers.py:127] Engine 000: Avg prompt throughput: 104162.6 tokens/s, Avg generation throughput: 10.0 tokens/s, Running: 100 reqs, Waiting: 0 reqs, GPU KV cache usage: 10.1%, Prefix cache hit rate: 98.6%
(APIServer pid=7416) INFO 11-10 11:43:00 [loggers.py:127] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 100 reqs, Waiting: 0 reqs, GPU KV cache usage: 10.1%, Prefix cache hit rate: 98.6%
(APIServer pid=7416) INFO 11-10 11:43:20 [loggers.py:127] Engine 000: Avg prompt throughput: 5.1 tokens/s, Avg generation throughput: 0.1 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.1%, Prefix cache hit rate: 98.6%
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : version 3.28.3
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime)
Python platform : Linux-6.8.0-85-generic-x86_64-with-glibc2.39
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.93
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: Tesla V100-PCIE-16GB
GPU 1: Tesla V100-PCIE-16GB
Nvidia driver version : 570.195.03
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7402P 24-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 1
Core(s) per socket: 24
Socket(s): 1
Stepping: 0
Frequency boost: disabled
CPU(s) scaling MHz: 74%
CPU max MHz: 2800.0000
CPU min MHz: 1500.0000
BogoMIPS: 5599.64
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 12 MiB (24 instances)
L3 cache: 1 | https://github.com/vllm-project/vllm/issues/28456 | open | [
"usage"
] | 2025-11-11T09:22:33Z | 2025-11-21T01:43:41Z | 6 | ekmekovski |
huggingface/lerobot | 2,422 | Running inference on Libero with pi0 | Hello, I am trying to run inference with pi0, but the commands referenced in issue #683 are outdated, I believe. What would the commands be to run inference in LeRobot, and also to run inference with pi0 in Libero? Additionally, if there is any documentation for these commands in general for fine-tuning and eval, that would be great! | https://github.com/huggingface/lerobot/issues/2422 | open | [
"question",
"policies",
"evaluation"
] | 2025-11-11T09:22:25Z | 2025-11-19T16:53:27Z | null | thomasdeng2027 |
huggingface/lerobot | 2,421 | Seeking assistance with tactile data acquisition | I want to simultaneously collect tactile and visual data, with tactile data sampled at 150 fps and visual data at 30 fps. Each time an image frame is saved, I also want to store all tactile data collected during that time interval as additional features associated with the image.
What would be the best approach to implement this? Which parts of the source code should I modify? | https://github.com/huggingface/lerobot/issues/2421 | open | [
"question"
] | 2025-11-11T02:49:57Z | 2025-11-19T16:53:05Z | null | zhoushaoxiang |
vllm-project/vllm | 28,438 | [Usage]: How do I install vLLM nightly? | ### Your current environment
The output of collect_env.py
```text
==============================
System Info
==============================
OS : Ubuntu 20.04.5 LTS (x86_64)
GCC version : (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version : Could not collect
CMake version : version 3.16.3
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.5.1+cu121
Is debug build : False
CUDA used to build PyTorch : 12.1
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.10.18 (main, Jun 5 2025, 13:14:17) [GCC 11.2.0] (64-bit runtime)
Python platform : Linux-5.4.250-2-velinux1u1-amd64-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.4.131
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version : 535.129.03
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 112
On-line CPU(s) list: 0-108
Off-line CPU(s) list: 109-111
Thread(s) per core: 1
Core(s) per socket: 28
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8336C CPU @ 2.30GHz
Stepping: 6
CPU MHz: 2294.608
BogoMIPS: 4589.21
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.3 MiB
L1i cache: 896 KiB
L2 cache: 35 MiB
L3 cache: 54 MiB
NUMA node0 CPU(s): 0-55
NUMA node1 CPU(s): 56-111
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq av | https://github.com/vllm-project/vllm/issues/28438 | closed | [
"usage"
] | 2025-11-11T02:24:47Z | 2025-11-12T01:54:42Z | 2 | LittleLucifer1 |
vllm-project/vllm | 28,425 | [Feature][RL]: Fix Fp8 Weight Loading for RL | ### 🚀 The feature, motivation and pitch
Feedback from the RL community is that vLLM weight loading in fp8 is bad for RL
- https://vllm-dev.slack.com/archives/C07UUL8E61Z/p1762811441757529
The cause is clear: in [fp8.py](https://github.com/vllm-project/vllm/blob/bf6a3d0ff5a69e0a30567f2ad417530c002eaa4e/vllm/model_executor/layers/quantization/fp8.py#L490), process_weights_after_loading does a lot of parameter wrapping that drops the .weight_loader attribute.
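A minimal illustration of that failure mode (generic PyTorch, not vLLM code; the lambda is a stand-in for the real loader hook): re-wrapping a Parameter drops whatever extra attributes were attached to the original.
```python
# Generic illustration (not vLLM code): creating a new Parameter from an
# existing one, e.g. to transpose or repack it for a kernel, loses custom
# attributes such as a `weight_loader` hook attached to the original tensor.
import torch

p = torch.nn.Parameter(torch.randn(4, 4))
p.weight_loader = lambda *args, **kwargs: None   # stand-in for the loader hook

q = torch.nn.Parameter(p.data.t().contiguous())  # e.g. transposed for a kernel

print(hasattr(p, "weight_loader"))  # True
print(hasattr(q, "weight_loader"))  # False -- the hook is gone after wrapping
```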
There's a patch from the Moonshot team that fixes this issue and there's a [PR](https://github.com/vllm-project/vllm/pull/24488) with this patch that never got any comments. The [patch](https://github.com/MoonshotAI/checkpoint-engine/blob/main/patches/vllm_fp8.patch) only works on top of v0.10.2rc1. Shortly after that tag, this [PR](https://github.com/vllm-project/vllm/pull/23280) made fp8 weight updates even trickier by transposing weight_inv_scale parameter for CUTLASS.
I don't know how to patch any vLLM version after this PR to be able to call model.load_weights after the engine has started. It is a bummer, because DeepSeek wide EP inference is quite a bit faster in v0.11.0.
We need to fix this ASAP
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28425 | open | [
"feature request"
] | 2025-11-10T21:59:02Z | 2025-11-10T23:25:37Z | 1 | robertgshaw2-redhat |
huggingface/transformers.js | 1,450 | SmolVLM2 500M Video Instruct - Video inference | ### Question
Hey, is it possible to set up **video** inference through **transformers.js** (or maybe some other way?) for the model SmolVLM2 500M Video Instruct? I can't make it work, but I saw that it is possible in the Python transformers library.
I want to create something similar to https://huggingface.co/spaces/HuggingFaceTB/SmolVLM2-HighlightGenerator/tree/main but with fully local WebGPU inference.
Thanks in advance. cc: @xenova | https://github.com/huggingface/transformers.js/issues/1450 | open | [
"question"
] | 2025-11-10T19:51:07Z | 2025-11-12T07:46:32Z | null | youchi1 |
vllm-project/vllm | 28,409 | [Usage]: Is there any performance benchmark between running the vLLM server via the Docker image and via Python? | ### Your current environment
```text
I mean, if I run a service with the vLLM Docker image, does it have any performance advantage compared with running it as a Python service (e.g., importing the vllm package, setting up vLLM inference, handling payloads/responses, etc.)?
```
### How would you like to use vllm
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28409 | open | [
"usage"
] | 2025-11-10T17:56:14Z | 2025-11-10T17:56:14Z | 0 | rafaelsandroni |
vllm-project/vllm | 28,393 | [Feature]: Does vllm-jax plan to support GPU acceleration? | ### 🚀 The feature, motivation and pitch
Does vllm-jax plan to support GPU acceleration?
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28393 | closed | [
"feature request"
] | 2025-11-10T12:28:20Z | 2025-11-10T21:44:57Z | 2 | south-ocean |
vllm-project/vllm | 28,388 | [Bug]: The new version of vLLM has deprecated the V0 code, but support for the qwen-omni series models is limited to V0; it seems that for this reason we cannot run inference on qwen-omni models with the latest vLLM | ### Your current environment
Name: vllm
Version: 0.10.2
### 🐛 Describe the bug
The official sample code below does not seem to run; it raises an error on the following audio parameter:
"mm_processor_kwargs": {
"use_audio_in_video": True,
},
The full example:
```python
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright contributors to the vLLM project
"""
This example shows how to use vLLM for running offline inference
with the correct prompt format on Qwen2.5-Omni (thinker only).
"""
from typing import NamedTuple
import vllm.envs as envs
from vllm import LLM, SamplingParams
from vllm.assets.audio import AudioAsset
from vllm.assets.image import ImageAsset
from vllm.assets.video import VideoAsset
from vllm.multimodal.image import convert_image_mode
from vllm.utils import FlexibleArgumentParser
class QueryResult(NamedTuple):
inputs: dict
limit_mm_per_prompt: dict[str, int]
# NOTE: The default `max_num_seqs` and `max_model_len` may result in OOM on
# lower-end GPUs.
# Unless specified, these settings have been tested to work on a single L4.
default_system = (
"You are Qwen, a virtual human developed by the Qwen Team, Alibaba "
"Group, capable of perceiving auditory and visual inputs, as well as "
"generating text and speech."
)
def get_mixed_modalities_query() -> QueryResult:
question = (
"What is recited in the audio? "
"What is the content of this image? Why is this video funny?"
)
prompt = (
f"<|im_start|>system\n{default_system}<|im_end|>\n"
"<|im_start|>user\n<|audio_bos|><|AUDIO|><|audio_eos|>"
"<|vision_bos|><|IMAGE|><|vision_eos|>"
"<|vision_bos|><|VIDEO|><|vision_eos|>"
f"{question}<|im_end|>\n"
f"<|im_start|>assistant\n"
)
return QueryResult(
inputs={
"prompt": prompt,
"multi_modal_data": {
"audio": AudioAsset("mary_had_lamb").audio_and_sample_rate,
"image": convert_image_mode(
ImageAsset("cherry_blossom").pil_image, "RGB"
),
"video": VideoAsset(name="baby_reading", num_frames=16).np_ndarrays,
},
},
limit_mm_per_prompt={"audio": 1, "image": 1, "video": 1},
)
def get_use_audio_in_video_query() -> QueryResult:
question = (
"Describe the content of the video, then convert what the baby say into text."
)
prompt = (
f"<|im_start|>system\n{default_system}<|im_end|>\n"
"<|im_start|>user\n<|vision_bos|><|VIDEO|><|vision_eos|>"
f"{question}<|im_end|>\n"
f"<|im_start|>assistant\n"
)
asset = VideoAsset(name="baby_reading", num_frames=16)
audio = asset.get_audio(sampling_rate=16000)
assert not envs.VLLM_USE_V1, (
"V1 does not support use_audio_in_video. "
"Please launch this example with "
"`VLLM_USE_V1=0`."
)
return QueryResult(
inputs={
"prompt": prompt,
"multi_modal_data": {
"video": asset.np_ndarrays,
"audio": audio,
},
"mm_processor_kwargs": {
"use_audio_in_video": True,
},
},
limit_mm_per_prompt={"audio": 1, "video": 1},
)
def get_multi_audios_query() -> QueryResult:
question = "Are these two audio clips the same?"
prompt = (
f"<|im_start|>system\n{default_system}<|im_end|>\n"
"<|im_start|>user\n<|audio_bos|><|AUDIO|><|audio_eos|>"
"<|audio_bos|><|AUDIO|><|audio_eos|>"
f"{question}<|im_end|>\n"
f"<|im_start|>assistant\n"
)
return QueryResult(
inputs={
"prompt": prompt,
"multi_modal_data": {
"audio": [
AudioAsset("winning_call").audio_and_sample_rate,
AudioAsset("mary_had_lamb").audio_and_sample_rate,
],
},
},
limit_mm_per_prompt={
"audio": 2,
},
)
query_map = {
"mixed_modalities": get_mixed_modalities_query,
"use_audio_in_video": get_use_audio_in_video_query,
"multi_audios": get_multi_audios_query,
}
def main(args):
model_name = "Qwen/Qwen2.5-Omni-7B"
query_result = query_map[args.query_type]()
llm = LLM(
model=model_name,
max_model_len=5632,
max_num_seqs=5,
limit_mm_per_prompt=query_result.limit_mm_per_prompt,
seed=args.seed,
)
# We set temperature to 0.2 so that outputs can be different
# even when all prompts are identical when running batch inference.
sampling_params = SamplingParams(temperature=0.2, max_tokens=64)
outputs = llm.generate(query_result.inputs, sampling_params=sampling_params)
for o in outputs:
generated_text = o.outputs[0].text
print(generated_text)
def parse_args():
parser = FlexibleArgumentParser(
description="Demo on using vLLM for offline inference with "
"audio language models"
)
| https://github.com/vllm-project/vllm/issues/28388 | open | [
"bug"
] | 2025-11-10T09:23:33Z | 2025-11-16T05:51:42Z | 1 | Lee-xeo |
huggingface/accelerate | 3,836 | When using gradient accumulation, does the order of optimizer.zero_grad() affect training? | if I use accelerate+deepspeed to train a model, and I set
```yaml
deepspeed_config:
  gradient_accumulation_steps: 8
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_init_flag: false
  zero_stage: 2
```
does the order of backward(), step(), and zero_grad() affect training?
For example:
```python
for batch in training_dataloader:
    with accelerator.accumulate(model):
        inputs, targets = batch
        outputs = model(inputs)
        loss = loss_function(outputs, targets)
        accelerator.backward(loss)
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
```
and
```python
for batch in training_dataloader:
    with accelerator.accumulate(model):
        optimizer.zero_grad()
        inputs, targets = batch
        outputs = model(inputs)
        loss = loss_function(outputs, targets)
        accelerator.backward(loss)
        optimizer.step()
        scheduler.step()
```
I want to know whether the two situations will yield the same result. During gradient-accumulation training, when the model needs to update the parameters and `accelerator.sync_gradients=True`, will the second method clear the gradients and make the gradient accumulation incorrect, so that at that point there is effectively only one sample?
huggingface/transformers | 42,113 | Add AutoMergeAdapters: Official Utility to Combine Multiple LoRA Adapters into One Unified Model | ### Feature request
Introduce a new built-in class AutoMergeAdapters to the Transformers/PEFT ecosystem that enables users to merge multiple LoRA adapters trained on different domains or datasets into a single model.
This feature simplifies the process of creating multi-domain fine-tuned models for inference and deployment, without manual merging scripts
### Motivation
Today, users can fine-tune models with LoRA adapters easily using PEFT, but they face a major bottleneck when trying to combine more than one adapter.
Current limitations:
Only one LoRA adapter can be merged using merge_and_unload()
Manual merges are error-prone and undocumented
Model config alignment must be handled manually
No built-in CLI or user-friendly API for adapter composition
A high-level API for multi-adapter merging would:
Promote adapter reusability across domains
Simplify deployment of multi-domain, multi-skill models
Reduce code duplication across community projects
### Your contribution
I would like to implement this feature and contribute the following:
Develop the AutoMergeAdapters class under src/transformers/adapters/auto_merge_adapters.py to support merging multiple LoRA adapters with optional weighted combination and compatibility validation.
Extend transformers-cli by adding a new merge-adapters command for CLI-based merging and model export.
Add unit and integration tests in tests/adapters/test_auto_merge_adapters.py to ensure correctness for weighted merges, config mismatches, and adapter integrity.
Provide documentation including a usage guide and a sample notebook under examples/adapters/merge_multiple_adapters.ipynb.
Publish a demo merged model to the Hugging Face Hub for reproducibility and reference.
Open a clean, well-tested PR and iterate based on maintainer feedback.
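A hypothetical usage sketch of the proposed API (every name here is part of the proposal and does not exist in Transformers or PEFT today):
```python
# Hypothetical usage of the proposed AutoMergeAdapters class -- all names
# (module path, class, arguments, adapter ids) are illustrative only and
# are part of this proposal, not an existing API.
from transformers import AutoModelForCausalLM
from transformers.adapters import AutoMergeAdapters  # proposed module

base = AutoModelForCausalLM.from_pretrained("base-model-id")
merged = AutoMergeAdapters.merge(
    base_model=base,
    adapters=["org/lora-domain-a", "org/lora-domain-b"],  # placeholder adapter ids
    weights=[0.6, 0.4],        # optional weighted combination
    validate_compat=True,      # config/shape compatibility checks
)
merged.save_pretrained("merged-multidomain-model")
```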
Happy to start implementation once the approach is approved. Looking forward to guidance if any adjustments are required. | https://github.com/huggingface/transformers/issues/42113 | closed | [
"Feature request"
] | 2025-11-09T18:43:20Z | 2025-11-10T16:58:34Z | 1 | 3015pavan |
huggingface/transformers | 42,111 | Add thinking-budget support (max_thinking_tokens) for reasoning-capable chat models | ### Feature request
A built-in way to cap how many tokens a reasoning model spends inside its ``<think> … </think>`` block. Today, we can only control the total response length via ``max_new_tokens``. No parameter limits the internal reasoning segment when ``enable_thinking=True``.
### Motivation
- Reasoning models (e.g., Qwen3 series) often produce very long thought blocks, which can blow past latency budgets before the final answer starts.
- Users need a simple, model-agnostic control to bound that “thinking” cost without disabling reasoning entirely.
- The Qwen docs (https://qwen.readthedocs.io/en/latest/getting_started/quickstart.html#thinking-budget) already describe a brute-force approach (two-step generation) to implement “thinking budgets”.
### Your contribution
I want to submit a PR that:
- Extends ``GenerationConfig`` with:
``max_thinking_tokens``: integer budget for reasoning tokens.
``begin_thinking_token_id / end_thinking_token_id``: marker IDs so generation knows where the thinking span begins/ends.
- Add a ``MaxThinkingTokensLogitsProcessor`` that watches the active ``<think>`` block. Once the budget is reached, it forces end_thinking_token_id, ensuring the model exits reasoning and continues with the final response.
- Document the new parameter in reasoning-model guides (EXAONE, CWM, etc.) and show how to wire the thinking-token IDs until configs do it automatically.
- Provide unit coverage so ``_get_logits_processor`` injects the new processor whenever the config is fully specified. | https://github.com/huggingface/transformers/issues/42111 | open | [
"Feature request"
] | 2025-11-09T10:09:11Z | 2025-11-09T10:09:11Z | 0 | AndresAlgaba |
vllm-project/vllm | 28,362 | [Usage]: Can't get vLLM to run on an Intel 125H with XPU and Arc graphics | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : version 4.1.2
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+xpu
Is debug build : False | https://github.com/vllm-project/vllm/issues/28362 | open | [
"usage",
"intel-gpu"
] | 2025-11-09T09:45:05Z | 2025-11-12T00:19:39Z | 2 | phlibi |
vllm-project/vllm | 28,350 | [Doc]: Running VLLM via Docker Swarm With Support for Tensor Parallelism | ### 📚 Running VLLM via Docker Swarm With Support for Tensor Parallelism
There's no documentation that I have found outlining how to run vLLM in a Docker swarm when utilizing tensor parallelism. The issue is that ```ipc=host``` is not an available option within Docker swarm. Consulting the AI feature on the vLLM website suggests using the ```shm``` option, which is available in swarm, but this produces continued failures on startup.
Please advise how to run VLLM via docker swarm utilizing tensor parallelism. thx
| https://github.com/vllm-project/vllm/issues/28350 | closed | [
"documentation"
] | 2025-11-08T21:11:15Z | 2025-11-19T16:37:31Z | 2 | ep5000 |
vllm-project/vllm | 28,348 | [Usage]: Does vllm support max_pixels in prompt on Qwen3-VL reasoning? | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to run inference on Qwen3-VL-A3B-Instruct. I tried to set max_pixels, but it doesn't work.
```python
import json
import base64
import requests
img_path = r".\images\MMMU\735_1.jpg"
base64_str = base64.b64encode(open(img_path, 'rb').read()).decode()
url = "http://71.10.29.136:8000/v1/chat/completions"
payload = json.dumps(
{
"model": "qwen3-vl-30b",
"messages": [
{
"role": "system",
"content": ""
},
{
"role": "user",
"content": [
{
"type": "text",
"text": "Question: "
},
{
"type": "image_url",
"image_url": {
"url": f"data:image/jpg;base64,{base64_str}"
},
"max_pixels": 192 * 96 ## this is not work.... ##
},
{
"type": "text",
"text": " How does the green and photosynthesising mistletoe impact the tree it is hosting? Options:\\nA. It will grow down into the roots and kill the tree.\\nB. Mistletoe is beneficial and increases the growth of the plant.\\nC. It just uses the tree for support and does not damage it.\\nD. I don't know and don't want to guess.\\nE. It has a very damaging impact on the health of the plant but localised to the place of infection.\\n Please select the correct answer from the options above. \\n Only answer with the option letter, e.g. A, B, C, D, E, F, G, H, I. *DO NOT output any other information*. \\n"
}
]
}
],
"n": 1,
"top_p": 0.001,
"top_k": 1,
"temperature": 0.01,
"max_tokens": 8192
}
)
headers = {
'Content-Type': 'application/json',
'Authorization': 'Bearer EMPTY'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28348 | open | [
"usage"
] | 2025-11-08T16:06:07Z | 2025-11-08T16:56:17Z | 1 | leijie-ww |
vllm-project/vllm | 28,344 | [Usage]: Function calling Request's sampling_params.structured_outputs is None? |
Hi, I used the OpenAI-compatible server API to build an LLM backend while trying to deploy an MCP server. I discovered that the prompt of the vLLM engine combines the system prompt, tool list and user prompt, but I saw that sampling_params.structured_outputs is None. Although the result seemed correct, I think it's important to use structured output when generating function calls. So why isn't structured output used when generating the JSON? Please explain, thanks a lot.
Below is how I start a vLLM backend.
```
python -m vllm.entrypoints.openai.api_server \
--model /workspace/models/qwen-2.5B/models--Qwen--Qwen2.5-1.5B-Instruct/snapshots/989aa7980e4cf806f80c7fef2b1adb7bc71aa306/ \
--served-model-name "qwen-2.5b" \
--port 8000 \
--trust-remote-code \
--enable-auto-tool-choice \
--tool-call-parser hermes
```
Below is the input of the vLLM engine.
```
(APIServer pid=703600) > /workspace/vllm/vllm/entrypoints/openai/serving_chat.py(326)create_chat_completion()
(APIServer pid=703600) -> generator = self.engine_client.generate(
(APIServer pid=703600) ['<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{"type": "function", "function": {"name": "weather", "description": "城市天气查询", "parameters": {"type": "object", "properties": {"city": {"type": "string"}}, "required": ["city"]}}}\n{"type": "function", "function": {"name": "stock", "description": "股票价格查询", "parameters": {"type": "object", "properties": {"code": {"type": "string"}}, "required": ["code"]}}}\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{"name": <function-name>, "arguments": <args-json-object>}\n</tool_call><|im_end|>\n<|im_start|>user\n查询北京天气和贵州茅台股价<|im_end|>\n<|im_start|>assistant\n']
(Pdb) sampling_params.structured_outputs
(Pdb) sampling_params
(APIServer pid=703600) SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=32549, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=False, spaces_between_special_tokens=True, truncate_prompt_tokens=None, **structured_outputs=None,** extra_args=None)
```
Below is the output of the vLLM engine.
```
(APIServer pid=703600) > /workspace/vllm/vllm/entrypoints/openai/serving_chat.py(1290)chat_completion_full_generator()
(APIServer pid=703600) -> async for res in result_generator:
(Pdb) final_res
(APIServer pid=703600) RequestOutput(request_id=chatcmpl-573ea011c8894432bf8aa9d1468cae60, prompt='<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{"type": "function", "function": {"name": "weather", "description": "城市天气查询", "parameters": {"type": "object", "properties": {"city": {"type": "string"}}, "required": ["city"]}}}\n{"type": "function", "function": {"name": "stock", "description": "股票价格查询", "parameters": {"type": "object", "properties": {"code": {"type": "string"}}, "required": ["code"]}}}\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{"name": <function-name>, "arguments": <args-json-object>}\n</tool_call><|im_end|>\n<|im_start|>user\n查询北京天气和贵州茅台股价<|im_end|>\n<|im_start|>assistant\n', prompt_token_ids=[151644, 8948, 198, 2610, 525, 1207, 16948, 11, 3465, 553, 54364, 14817, 13, 1446, 525, 264, 10950, 17847, 382, 2, 13852, 271, 2610, 1231, 1618, 825, 476, 803, 5746, 311, 7789, 448, 279, 1196, 3239, 382, 2610, 525, 3897, 448, 729, 32628, 2878, 366, 15918, 1472, 15918, 29, 11874, 9492, 510, 27, 15918, 397, 4913, 1313, 788, 330, 1688, 497, 330, 1688, 788, 5212, 606, 788, 330, 15206, 497, 330, 4684, 788, 330, 99490, 104307, 51154, 497, 330, 13786, 788, 5212, 1313, 788, 330, 1700, 497, 330, 13193, 788, 5212, 8926, 788, 5212, 1313, 788, 330, 917, 9207, 2137, 330, 6279, 788, 4383, 8926, 1341, 3417, 532, 4913, 1313, 788, 330, 1688, 497, 330, 1688, 788, 5212, 606, 788, 330, 13479, 497, 330, 4684, 788, 330, 104023, 97480, 51154, 497, 330, 13786, 788, 5212, 1313, 788, 330, 1700, 497, 330, 13193, 788, 5212, 1851, 788, 5212, 1313, 788, 330, 917, 9207, 2137, 330, 6279, 788, 4383, 1851, 1341, 3417, 532, 522, 15918, 1339, 2461, 1817, 729, 1618, 11, 470, 264, 2951, 1633, 448, 729, 829, 323, 5977, 2878, 220, 151657, 151658, 11874, 9492, 510, 151657, 198, 4913, 606, 788, 366, 1688, 11494, 8066, 330, 16370, 788, 366, 2116, 56080, 40432, 31296, 151658, 151645, 198, 151644, 872, 198, 51154, 68990, 104307, 33108, 102345, 109625, 105281, 151645, 198, 151644, 77091 | https://github.com/vllm-project/vllm/issues/28344 | closed | [
"usage"
] | 2025-11-08T08:57:17Z | 2025-11-10T07:51:51Z | 5 | wtr0504 |
vllm-project/vllm | 28,340 | [Installation]: Need offline wheel for vLLM 0.11.0rc2 (pip download fails) to deploy qwen3_vl_235b_a22b_instruct_i18n | ### Your current environment
I need to install vLLM 0.11.0rc2 in an offline environment.
Is there an official wheel (.whl) available for vLLM==0.11.0rc2 that I can download directly?
Running:
```
pip download vllm==0.11.0rc2 --pre --extra-index-url https://wheels.vllm.ai/nightly -d wheels
```
fails with an error:
Looking in indexes: https://bytedpypi.byted.org/simple/, https://wheels.vllm.ai/nightly
ERROR: Ignored the following yanked versions: 0.2.1
ERROR: Could not find a version that satisfies the requirement vllm==0.11.0rc2 (from versions: 0.0.1, 0.1.0, 0.1.1, 0.1.2, 0.1.3, 0.1.4, 0.1.5, 0.1.6, 0.1.7, 0.2.0, 0.2.1.post1, 0.2.2, 0.2.3, 0.2.4, 0.2.5, 0.2.6, 0.2.7, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.4.0, 0.4.0.post1, 0.4.1, 0.4.2, 0.4.3, 0.5.0, 0.5.0.post1, 0.5.1, 0.5.2, 0.5.3, 0.5.3.post1, 0.5.4, 0.5.5, 0.6.0, 0.6.1, 0.6.1.post1, 0.6.1.post2, 0.6.2, 0.6.3, 0.6.3.post1, 0.6.4, 0.6.4.post1, 0.6.5, 0.6.6, 0.6.6.post1, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.8.4, 0.8.5, 0.8.5.post1, 0.9.0, 0.9.0.1, 0.9.1, 0.9.2, 0.10.0, 0.10.1, 0.10.1.1, 0.10.2, 0.11.0, 0.11.1rc6.dev210+g70af44fd1.cu129)
ERROR: No matching distribution found for vllm==0.11.0rc2.
### How you are installing vllm
```sh
pip download vllm==0.11.0rc2 --pre --extra-index-url https://wheels.vllm.ai/nightly -d wheels
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28340 | closed | [
"installation"
] | 2025-11-08T06:05:31Z | 2025-11-08T06:08:37Z | 0 | FateForever0222 |
vllm-project/vllm | 28,310 | [Doc]: Update GPU requirements to include AMD gfx1150/gfx1151 | ### 📚 The doc issue
Summary: The documentation for GPU requirements does not list AMD gfx1150 and gfx1151 architectures, which are now supported.
Background: Support for AMD gfx1150 and gfx1151 GPUs was added in https://github.com/vllm-project/vllm/pull/25908. The GPU requirements page should be updated to reflect this.
Affected page: https://docs.vllm.ai/en/latest/getting_started/installation/gpu/index.html#requirements
Expected behavior: The GPU requirements page lists AMD gfx1150 and gfx1151 as supported architectures.
### Suggest a potential alternative/fix
Proposed fix: https://github.com/vllm-project/vllm/pull/28308
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28310 | closed | [
"documentation",
"rocm"
] | 2025-11-07T17:26:47Z | 2025-11-08T03:01:08Z | 1 | hammmmy |
huggingface/transformers | 42,093 | Mbart decoder ignoring index 0 from labels | index 1 from dec in | ### System Info
I am creating an OCR model using the VisionEncoderDecoderModel class by connecting a plm vision tower and the Donut base decoder (an MBart model).
I am using the teacher-forcing method to train the model (default training), and I found out that the model is ignoring index 0 of the target (index 1 of the decoder_input_ids).
I read the documentation for MBart and it says the lang_code should be the BOS for the target labels, but unlike the traditional setup where MBart is used for a translation task, I am using it for an image-to-text task.
When I use the Seq2SeqTrainer to train the model, I notice that the model is skipping index 0 no matter what token is present there.
I made my trainer print the labels, decoder input ("dec in", my own shift-right just for display) and predictions. This is how it looks:
```python
label: [985, 735, 8, 690, 28264, 1448, 15320, 8, 4467, 18823, 258, 30606, 5965, 2164, 451, 8, 4467, 18823, 35, 2, -100, -100, -100, -100, -100, -100, -100, -100, -100]
decin: [2, 985, 735, 8, 690, 28264, 1448, 15320, 8, 4467, 18823, 258, 30606, 5965, 2164, 451, 8, 4467, 18823, 35, 2, 1, 1, 1, 1, 1, 1, 1, 1]
preds: [735, 8, 690, 28264, 1448, 15320, 8, 4467, 18823, 258, 30606, 5965, 2164, 451, 8, 4467, 18823, 35, 2, 4467, 2, 2, 2, 185, 2, 2, 2, 2]
label: [15418, 417, 893, 7271, 12, 8, 6583, 13, 46, 6549, 5538, 3632, 388, 8, 3633, 11, 34, 5221, 8, 188, 28, 2234, 8, 22, 11, 8, 26, 8340, 2]
decin: [2, 15418, 417, 893, 7271, 12, 8, 6583, 13, 46, 6549, 5538, 3632, 388, 8, 3633, 11, 34, 5221, 8, 188, 28, 2234, 8, 22, 11, 8, 26, 8340]
preds: [417, 893, 7271, 12, 8, 6583, 13, 46, 6549, 5538, 3632, 388, 8, 3633, 11, 34, 5221, 8, 188, 28, 2234, 8, 22, 11, 8, 26, 8340, 2]
label: [877, 8, 13, 397, 8, 3038, 10180, 7049, 88, 8, 13, 5348, 9, 36, 208, 123, 11, 12311, 148, 2696, 2, -100, -100, -100, -100, -100, -100, -100, -100]
decin: [2, 877, 8, 13, 397, 8, 3038, 10180, 7049, 88, 8, 13, 5348, 9, 36, 208, 123, 11, 12311, 148, 2696, 2, 1, 1, 1, 1, 1, 1, 1]
preds: [8, 13, 397, 8, 3038, 10180, 7049, 88, 8, 13, 5348, 9, 36, 208, 123, 11, 12311, 148, 2696, 2, 2, 2, 2, 2, 2, 2696, 2, 2]
```
Let's assume that the language code is 0 and it's at the beginning; that will be ignored too. How do I make the model not ignore index 0 of the labels?
### Who can help?
@ArthurZucker
@Cyrilvallez
@yonigozlan
@molbap
@zucchini-nlp
@itazap
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1nLCDlFyKhqCGu7dhlxJ0JiCRYjG24vbO?usp=sharing
### Expected behavior
I would like the decoder model to not ignore index 0 of the labels, so that it will be:
<img width="182" height="136" alt="Image" src="https://github.com/user-attachments/assets/18a8e465-c235-4ac5-a9e2-f13d41bec964" />
| https://github.com/huggingface/transformers/issues/42093 | closed | [
"bug"
] | 2025-11-07T15:46:08Z | 2025-11-07T16:27:10Z | 1 | jaaabir |
vllm-project/vllm | 28,292 | [Usage]: Failure to Deploy Llama-3.2-11B-Vision-Instruct Locally via vllm Due to OOM | ### Your current environment
The output of <code>python collect_env.py</code>
```text
==============================
System Info
==============================
OS : Ubuntu 20.04.5 LTS (x86_64)
GCC version : (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version : Could not collect
CMake version : version 3.16.3
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.5.1+cu121
Is debug build : False
CUDA used to build PyTorch : 12.1
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.10.18 (main, Jun 5 2025, 13:14:17) [GCC 11.2.0] (64-bit runtime)
Python platform : Linux-5.4.250-2-velinux1u1-amd64-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.4.131
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version : 535.129.03
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 112
On-line CPU(s) list: 0-108
Off-line CPU(s) list: 109-111
Thread(s) per core: 1
Core(s) per socket: 28
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8336C CPU @ 2.30GHz
Stepping: 6
CPU MHz: 2294.608
BogoMIPS: 4589.21
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.3 MiB
L1i cache: 896 KiB
L2 cache: 35 MiB
L3 cache: 54 MiB
NUMA node0 CPU(s): 0-55
NUMA node1 CPU(s): 56-111
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gf | https://github.com/vllm-project/vllm/issues/28292 | closed | [
"usage"
] | 2025-11-07T12:01:04Z | 2026-01-06T00:06:43Z | 5 | LittleLucifer1 |
huggingface/transformers | 42,086 | Does Trainer use grad scaler for training? | I am not able to see any grad scaler usage in the Trainer code. If it is not used, then I need to understand how we are doing mixed-precision training with fp16 without a grad scaler. | https://github.com/huggingface/transformers/issues/42086 | closed | [] | 2025-11-07T10:10:16Z | 2025-11-13T07:58:33Z | 2 | quic-meetkuma |
vllm-project/vllm | 28,283 | [Bug]: nccl stuck issue | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
I am using a docker container for vLLM. I noticed that when I use `nvidia/cuda:13.0.X-cudnn-devel-ubuntu24.04` with `tp > 1`, it gets stuck here: `INFO 11-07 09:24:25 [pynccl.py:111] vLLM is using nccl==2.27.5`. But it works fine with `nvidia/cuda:12.9.X-cudnn-devel-ubuntu24.04` because I assume `12.9` is the current default now.
My question is: why does the CUDA image version really matter with vLLM? Just asking since I'm not experiencing this with SGLang, where `tp > 1` still works well even if I use either `12.8`, `12.9`, or even `13.0` `nvidia/cuda` image.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28283 | open | [
"bug"
] | 2025-11-07T09:36:01Z | 2025-11-07T09:40:17Z | 1 | seindum |
vllm-project/vllm | 28,262 | [Bug]: [gpt-oss] Responses API incorrect input/output handling | ### Your current environment
Any env
### 🐛 Describe the bug
There is currently an implementation issue with gpt-oss on the Responses API in vLLM. This can be seen clearly in the [test which continues a conversation between API requests here](https://github.com/vllm-project/vllm/blob/4bf56c79cc252d285d0cb4f5edf323f02af735ca/tests/entrypoints/openai/test_response_api_with_harmony.py#L715).
From the first request, the model outputs the following tokens (whitespace added for clarity):
```
<|channel|>analysis<|message|>
User asks for weather in Paris today. We have no direct API call yet, but we can use get_weather function. Coordinates for Paris: latitude 48.8566, longitude 2.3522. We'll call get_weather.
<|end|>
<|start|>assistant<|channel|>commentary to=functions.get_weather <|constrain|>json<|message|>
{"latitude":48.8566,"longitude":2.3522}
<|call|>
```
When the output items from the first request are passed in as input to the second request, the tokens look like this (whitespace added for clarity):
```
<|start|>user<|message|>
What's the weather like in Paris today?
<|end|>
<|start|>assistant<|message|>
User asks for weather in Paris today. We have no direct API call yet, but we can use get_weather function. Coordinates for Paris: latitude 48.8566, longitude 2.3522. We'll call get_weather.
<|end|>
<|start|>assistant to=functions.get_weather<|channel|>commentary json<|message|>
{"latitude":48.8566,"longitude":2.3522}
<|call|>
<|start|>functions.get_weather<|message|>
20
<|end|>
```
We lose `<|channel|>analysis` on the reasoning message, and we do not set `<|channel|>commentary` on the tool call output ([documentation reference](https://cookbook.openai.com/articles/openai-harmony#handling-tool-calls)).
There are a lot of edge cases and challenges to properly represent Harmony Message metadata when the Responses API input/output types do not include that metadata, but we can improve on the current implementation.
The changes we can make are:
- A reasoning message should use the channel of the message that follows it. For example:
- The reasoning message prior to a function tool call should be on the commentary channel
- If the commentary channel is not enabled (no function tools enabled), all reasoning messages are on the analysis channel
- All other reasoning messages are on the analysis channel
- Set the content_type for function tools to be `<|constrain|>json` always
- Input items which are FunctionCallOutput should be set to be on the commentary channel
- Other types of tool related input types should be on the analysis channel
These changes would be made to [serving_responses.py](https://github.com/vllm-project/vllm/blob/main/vllm/entrypoints/openai/serving_responses.py) and [harmony_utils.py](https://github.com/vllm-project/vllm/blob/main/vllm/entrypoints/harmony_utils.py). Similar changes can be made for the chat completions path as well, but that should be out of scope for this issue.
With the changes described above, gpt-oss should have a significantly reduced error rate when outputting header tokens on longer conversations involving tools.
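To make the intended behaviour concrete, below is a minimal sketch of the channel-selection rules listed above. The helper names and item-type strings are illustrative assumptions, not the actual code in serving_responses.py or harmony_utils.py.
```python
# Illustrative sketch only: function names and item-type strings are assumed,
# not the real vLLM serving/harmony APIs.
def reasoning_channel(next_item_type: str, has_function_tools: bool) -> str:
    # A reasoning message takes the channel of the message that follows it.
    if has_function_tools and next_item_type == "function_call":
        return "commentary"
    # Without function tools the commentary channel is not enabled at all.
    return "analysis"

def input_item_channel(item_type: str) -> str:
    # FunctionCallOutput inputs go on the commentary channel; other
    # tool-related inputs stay on the analysis channel.
    return "commentary" if item_type == "function_call_output" else "analysis"

# Function tool calls should always constrain their payload to JSON.
FUNCTION_TOOL_CONTENT_TYPE = "<|constrain|>json"
```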
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28262 | open | [
"bug"
] | 2025-11-07T02:51:56Z | 2025-11-08T19:39:06Z | 1 | alecsolder |
huggingface/lerobot | 2,399 | Are there plans to support LoRA fine-tuning? | https://github.com/huggingface/lerobot/issues/2399 | open | [
"question",
"performance",
"training"
] | 2025-11-07T02:37:45Z | 2025-11-10T10:23:33Z | null | Hukongtao | |
huggingface/candle | 3,167 | Qwen 3-1.7b looks like something is wrong and doesn't stop properly. | Candle version: main
Platform: Mac Studio Max M1
Model: Qwen3-1.7B (downloaded via huggingface-cli)
Execute cmd:
git clone https://github.com/huggingface/candle.git
cd candle-examples
cargo run --release --example qwen -- \
--prompt "What is the speed of light?" \
--model 3-1.7b \
--tokenizer-file ../../models/qwen3-1.7b/tokenizer.json \
--weight-files "../../models/qwen3-1.7b/model-00001-of-00002.safetensors,../../models/qwen3-1.7b/model-00002-of-00002.safetensors" \
--temperature 0.3 \
--top-p 0.5 \
--repeat-penalty 1.5 \
--repeat-last-n 16
Got:
```
Qwen 3-1.7B
Running `target/release/examples/qwen --prompt 'What is the speed of light?' --model 3-1.7b --tokenizer-file ../../models/qwen3-1.7b/tokenizer.json --weight-files ../../models/qwen3-1.7b/model-00001-of-00002.safetensors,../../models/qwen3-1.7b/model-00002-of-00002.safetensors --temperature 0.3 --top-p 0.5 --repeat-penalty 1.5 --repeat-last-n 16`
avx: false, neon: true, simd128: false, f16c: false
temp: 0.30 repeat-penalty: 1.50 repeat-last-n: 16
retrieved the files in 300.917µs
Running on CPU, to run on GPU(metal), build this example with `--features metal`
loaded the model in 7.719477208s
What is the speed of light? What are its properties?
The Speed Of Light
What is the speed of light? What are its properties?
The Speed Of Light
What is the speed of light? What are its properties?
The Speed Of Light
What is the speed of light? What are its properties?
The Speed Of Light
What is the speed of light? What are its properties?
The Speed Of Light
What is the speed of light? What are its properties?
The Speed...
^C
``` | https://github.com/huggingface/candle/issues/3167 | open | [] | 2025-11-07T02:23:05Z | 2025-11-08T07:52:18Z | 6 | xiuno |
huggingface/lerobot | 2,398 | how to accelerate the iteration in dataset | hi, i want to get the frames of specific episode index
when `episode_index_target` is large, like 100, it takes a lot of time to run.
any solution to improve the iteration speed ?
thanks.
`lerobot.__version__ == '0.1.0'`
```python
dataset = LeRobotDataset('yananchen/robomimic_lift')
frames = []
for sample in dataset:
if sample["episode_index"] == episode_index_target:
frames.append(sample)
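# A possibly faster alternative (added for illustration, not part of the original
# snippet): this version of LeRobotDataset appears to expose `episode_data_index`
# with per-episode "from"/"to" row indices (worth verifying for 0.1.0), so the rows
# of one episode can be indexed directly instead of scanning the whole dataset.
from_idx = dataset.episode_data_index["from"][episode_index_target].item()
to_idx = dataset.episode_data_index["to"][episode_index_target].item()
frames_fast = [dataset[i] for i in range(from_idx, to_idx)]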
``` | https://github.com/huggingface/lerobot/issues/2398 | closed | [
"question"
] | 2025-11-06T21:37:33Z | 2025-11-10T20:52:57Z | null | yanan1116 |
vllm-project/vllm | 28,246 | [Bug]: Return Token Ids not returning Gen Token Ids for GPT-OSS-120b | ### Your current environment
<details>
Using docker image vllm/vllm-openai:latest
</details>
### 🐛 Describe the bug
When passing the return_token_ids flag to the v1/chat/completions endpoint for GPT-OSS-120B, only prompt_token_ids are returned and not token_ids. We have not seen this happen with any model other than GPT-OSS-120B.
```
curl --location 'http://localhost:8015/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
"model": "gpt-oss-120b",
"messages": [{"content": "Hello!", "role": "user"}],
"temperature": 0,
"return_token_ids": true
}'
```
`{"id":"chatcmpl-a19161b8131141e2a79495025adb40eb","object":"chat.completion","created":1762462711,"model":"gpt-oss-120b","choices":[{"index":0,"message":{"role":"assistant","content":"Hello! How can I help you today?","refusal":null,"annotations":null,"audio":null,"function_call":null,"tool_calls":[],"reasoning_content":"The user says \"Hello!\" We should respond politely. No special instructions. Just greet back."},"logprobs":null,"finish_reason":"stop","stop_reason":null,"token_ids":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":71,"total_tokens":109,"completion_tokens":38,"prompt_tokens_details":null},"prompt_logprobs":null,"prompt_token_ids":[200006,17360,200008,3575,553,17554,162016,11,261,4410,6439,2359,22203,656,7788,17527,558,87447,100594,25,220,1323,19,12,3218,198,6576,3521,25,220,1323,20,12,994,12,3218,279,30377,289,25,14093,279,2,13888,18403,25,8450,11,1721,13,21030,2804,413,7360,395,1753,3176,13,200007,200006,77944,200008,200007,200006,1428,200008,13225,0,200007,200006,173781],"kv_transfer_params":null}`
I've also included the docker container setup:
```
docker run --rm -d --name vllm-gpt-oss-120b \
--gpus '"device=4,5"' \
--shm-size=16g \
-e TORCH_CUDA_ARCH_LIST="9.0" \
-v /mlf1-shared/user/gpt-oss-120b:/opt/model \
-p ${PORT}:${PORT} \
vllm/vllm-openai:latest\
--model /opt/model \
--served-model-name "${SERVED_MODEL_NAME}" \
--tensor-parallel-size "${TP_SIZE}" \
--gpu-memory-utilization "${GPU_UTIL}" \
--max-num-seqs 64 \
--port ${PORT}
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28246 | open | [
"bug"
] | 2025-11-06T21:08:16Z | 2025-11-07T00:18:25Z | 1 | sophies-cerebras |
vllm-project/vllm | 28,236 | [Feature]: Implement naive prepare/finalize class to replace naive dispatching in fused_moe/layer.py | ### 🚀 The feature, motivation and pitch
The `FusedMoE` layer has a special case dispatch/combine for EP+DP when there is no specific all2all backend specified. This makes the code in `layer.py` a bit confusing and hard to follow. One way to simplify this is to implement a proper `FusedMoEPrepareAndFinalize` subclass for naive dispatch/combine.
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28236 | open | [
"help wanted",
"good first issue",
"feature request"
] | 2025-11-06T18:38:38Z | 2025-11-12T06:36:29Z | 4 | bnellnm |
vllm-project/vllm | 28,233 | [Usage]: LogitsProcessor in vLLM 0.9.1: run the same prompt 50 times with batching, applying the logits processor independently to each | ### Your current environment
Goal
Run the same prompt 50 times through vLLM 0.9.1, generating independent outputs with a custom LogitsProcessor that forces a comma token after some pattern "xyz" appears in each generation.
What You Want
Batched execution: Process all 50 generations efficiently in parallel
Independent state: Each of the 50 generations should have its own state in the logits processor
Pattern detection: When text ends with "xyz", mask all tokens except comma },
One-time application: Each generation should only apply the comma mask once
Current Hurdles
1. Processor Signature Confusion
vLLM V0 (0.9.1) uses signature: __call__(prompt_token_ids, generated_token_ids, logits)
prompt_token_ids: The input prompt tokens (same for all 50)
generated_token_ids: Tokens generated so far (different per generation)
Problem: No built-in request ID to distinguish between the 50 generations
2. State Management
When using the same prompt 50 times:
All generations share identical prompt_token_ids
Can't use prompt as unique identifier
Using generated_token_ids as key works initially, but becomes complex as sequences diverge
State dictionary grows indefinitely without cleanup
3. Batching vs Sequential
Batching (llm.generate([prompt]*50)): Processor is called for all 50 in interleaved order, making state tracking difficult
Sequential (50 separate calls): Works reliably but loses parallel efficiency
Working Solution (Sequential)
```python
for i in range(50):
    processor = LookAheadProcessor(tokenizer)  # Fresh processor each time
    sampling_params = SamplingParams(..., logits_processors=[processor])
    output = llm.generate([prompt], sampling_params)
```
This works because each generation gets its own processor instance.
The Core Problem
vLLM V0's logits processor API doesn't provide per-request identifiers in batched scenarios, making it impossible to maintain independent state for identical prompts without workarounds like using (prompt_tokens, generated_tokens) tuples as keys - which still fails when generations produce identical token sequences early on. Does anyone know a solution to this problem?
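One workaround worth sketching (hedged: it assumes vLLM 0.9.1's `LLM.generate` accepts a list of `SamplingParams`, one per prompt, and it reuses the `LookAheadProcessor` from the snippet above): give each copy of the prompt its own `SamplingParams` carrying its own processor instance, then submit everything in a single batched call. State stays per-instance while the engine still batches the 50 generations.
```python
# Sketch: per-request processor instances inside one batched generate() call.
# Assumes LLM.generate accepts one SamplingParams per prompt (list form) and
# that LookAheadProcessor is the user's own class from the example above.
prompts = [prompt] * 50
params_per_request = [
    SamplingParams(max_tokens=256, logits_processors=[LookAheadProcessor(tokenizer)])
    for _ in range(50)
]
outputs = llm.generate(prompts, params_per_request)
```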
### How would you like to use vllm
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28233 | open | [
"usage"
] | 2025-11-06T18:11:32Z | 2025-11-06T18:11:32Z | 0 | jindalankush28 |
vllm-project/vllm | 28,230 | [Bug]: GPU VRAM continuously increase during Qwen3-VL usage over days until OOM | ### Your current environment
Setup:
docker run -d \
--runtime nvidia \
--gpus '"device=3,4,5,6"' \
-e TRANSFORMERS_OFFLINE=1 \
-e DEBUG="true" \
-p 8000:8000 \
--ipc=host \
vllm/vllm-openai:v0.11.0 \
--gpu-memory-utilization 0.95 \
--model Qwen/Qwen3-VL-235B-A22B-Instruct-FP8 \
--tensor-parallel-size 4 \
--mm-encoder-tp-mode data \
--enable-auto-tool-choice \
--tool-call-parser hermes \
--limit-mm-per-prompt.video 0
Server: 8*H200 with CUDA=12.6.
### 🐛 Describe the bug
This is the same issue described in
https://github.com/vllm-project/vllm/issues/27466
https://github.com/vllm-project/vllm/issues/27452
VRAM continuously increases over days of usage with vision. When available VRAM drops below 500MB, OOM occurs on new requests.
As described in other posts, removing mm_encoder_tp_mode="data" or --enforce-eager does not work either.
There is currently no acceptable solution.
Is there a memory leak? It is understood that VRAM usage may go up during vision tasks, but that memory should be released afterwards; VRAM should not continuously increase and eventually hit OOM.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28230 | open | [
"bug"
] | 2025-11-06T17:19:18Z | 2025-12-02T16:50:26Z | 15 | yz342 |
huggingface/datasets | 7,852 | Problems with NifTI | ### Describe the bug
There are currently 2 problems with the new NifTI feature:
1. dealing with zipped files, this is mentioned and explained [here](https://github.com/huggingface/datasets/pull/7815#issuecomment-3496199503)
2. when uploading via the `niftifolder` feature, the resulting parquet only contains relative paths to the nifti files:
```bash
table['nifti']
<pyarrow.lib.ChunkedArray object at 0x798245d37d60>
[
-- is_valid: all not null
-- child 0 type: binary
[
null,
null,
null,
null,
null,
null
]
-- child 1 type: string
[
"/home/tobias/programming/github/datasets/nifti_extracted/T1.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2-interleaved.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2_-interleaved.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2_.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/fieldmap.nii"
]
]
```
instead of containing bytes. The code is copy pasted from PDF, so I wonder what is going wrong here.
### Steps to reproduce the bug
see the linked comment
### Expected behavior
downloading should work as smoothly as for pdf
### Environment info
- `datasets` version: 4.4.2.dev0
- Platform: Linux-6.14.0-33-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.35.3
- PyArrow version: 21.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.9.0
| https://github.com/huggingface/datasets/issues/7852 | closed | [] | 2025-11-06T11:46:33Z | 2025-11-06T16:20:38Z | 2 | CloseChoice |
huggingface/peft | 2,901 | AttributeError: 'float' object has no attribute 'meta' | ### System Info
peft== 0.17.1
torch== 2.5.1+cu118
transformers==4.57.0
python==3.12.7
### Who can help?
I am trying to use LoRA with DINOv3 (so a slightly modified ViT-B). However, after a random number of iterations I am hitting this error. It is sadly difficult to reproduce. Maybe someone can hint at what is going on?
```
Traceback (most recent call last):
File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1446, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_dynamo/repro/after_dynamo.py", line 129, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/__init__.py", line 2234, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1521, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_dynamo/backends/common.py", line 72, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1071, in aot_module_simplified
compiled_fn = dispatch_and_compile()
^^^^^^^^^^^^^^^^^^^^^^
File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1056, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 522, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 759, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 179, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1350, in fw_compiler_base
return _fw_compiler_base(model, example_inputs, is_inference)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1421, in _fw_compiler_base
return inner_compile(
^^^^^^^^^^^^^^
File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 475, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_dynamo/repro/after_aot.py", line 85, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 661, in _compile_fx_inner
compiled_graph = FxGraphCache.load(
^^^^^^^^^^^^^^^^^^
File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 1334, in load
compiled_graph = compile_fx_fn(
^^^^^^^^^^^^^^
File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 570, in codegen_and_compile
compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/t | https://github.com/huggingface/peft/issues/2901 | closed | [] | 2025-11-06T11:24:18Z | 2025-11-17T15:34:08Z | 6 | Karol-G |
vllm-project/vllm | 28,192 | [RFC]: Support separate NICs for KV cache traffic and MoE traffic | ### Motivation.
In MoE models with large KV caches, KV cache all-to-all and MoE expert communication share the same RNIC, causing congestion and degrading performance. Using dedicated NICs for each traffic type can improve bandwidth utilization and reduce interference.
### Proposed Change.
Does vLLM currently support routing KV cache traffic and MoE traffic through different NICs?
### Feedback Period.
_No response_
### CC List.
_No response_
### Any Other Things.
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28192 | open | [
"RFC"
] | 2025-11-06T07:31:17Z | 2025-11-06T08:19:56Z | 1 | JayFzh |
vllm-project/vllm | 28,186 | [Bug] Cannot load qwen3-vl series with lora adapter | I fine-tuned the `Qwen3-VL-8B-Instruct` model using Unsloth.
I moved the saved QLoRA adapter and the `Qwen3-VL-2B-Instruct` model to my vLLM server.
Then I ran a command to start model serving with vLLM as shown below. (For reference, the vLLM server has no issues—it was already serving official Qwen3-VL models.)
```
command = [
sys.executable,
"-m", "vllm.entrypoints.openai.api_server",
"--model", "./Qwen3-VL-2B-Instruct",
"--max_model_len", "3500",
"--gpu_memory_utilization", "0.85",
"--trust-remote-code",
"--host", "0.0.0.0",
"--port", "8888",
# for lora adapter
"--enable-lora",
"--max-lora-rank", "16", # LoRA rank
"--max-loras", "1",
"--max-cpu-loras", "1",
"--lora-modules", "adapter0=./my_lora_adapter"
]
```
I waited for vLLM to properly load the QLoRA adapter, but the following problem occurred:
https://github.com/vllm-project/vllm/issues/26991
When I was feeling hopeless, I tried merging the model instead of saving the LoRA adapter separately by using the `save_pretrained_merged()` function as shown below, and then vLLM was able to load and perform inference normally:
```
save_pretrained_merged( f"my_16bit_model", tokenizer, save_method="merged_16bit")
```
However, I don't want to merge the models; I want to load the VL model with a **LoRA** adapter.
I’ve seen many posts from others experiencing the same error.
As of now, what can I do to resolve this issue? | https://github.com/vllm-project/vllm/issues/28186 | open | [
"bug"
] | 2025-11-06T06:02:33Z | 2025-11-09T11:16:27Z | 4 | deepNoah |
huggingface/trl | 4,481 | DPOTrainer._prepare_dataset() adds an extra eos_token to conversationally formatted inputs | ## Overview
The DPOTrainer unconditionally appends the eos_token to both the "chosen" and "rejected" sequences. Because conversationally formatted inputs will already have the chat template applied, this causes them to have duplicate eos_tokens (Ex. `...<|im_end|><|im_end|>`).
A related problem was reported for the [SFTTrainer](https://github.com/huggingface/trl/issues/3318), where Qwen2.5’s chat template confused the trainer’s logic for detecting whether a sequence already ended with an eos_token_id. The DPO case is slightly different: [DPOTrainer.tokenize_row](https://github.com/huggingface/trl/blob/main/trl/trainer/dpo_trainer.py#L738-L739) explicitly appends tokenizer.eos_token_id to both chosen_input_ids and rejected_input_ids, regardless of whether the text is standard or conversational. Even if the chat template already added the token, it will be added again.
## Repro
```python
import trl
from trl import DPOTrainer, DPOConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from datasets import Dataset
import torch
MODEL_ID = "Qwen/Qwen2.5-0.5B-Instruct"
# Conversational format
sample_data = {
"prompt": [[{"role": "user", "content": "What is 2+2?"}]],
"chosen": [[{"role": "assistant", "content": "2+2 equals 4."}]],
"rejected": [[{"role": "assistant", "content": "I don't know math."}]]
}
# Convert to dataset
train_dataset = Dataset.from_dict(sample_data)
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
MODEL_ID,
dtype=torch.bfloat16,
device_map="auto"
)
# Setup DPO config
dpo_config = DPOConfig(
output_dir="./dpo_output",
per_device_train_batch_size=2,
num_train_epochs=1,
logging_steps=1,
remove_unused_columns=False,
)
# Initialize DPOTrainer
trainer = DPOTrainer(
model=model,
args=dpo_config,
train_dataset=train_dataset,
processing_class=tokenizer,
)
# Get the processed batch
train_dataloader = trainer.get_train_dataloader()
batch = next(iter(train_dataloader))
# Decode and display the preprocessed sequences
for idx in range(len(batch["chosen_input_ids"])):
# Show prompt if available
if "prompt_input_ids" in batch:
prompt_tokens = batch["prompt_input_ids"][idx]
print("-"*80)
print(f"PROMPT:")
print("-"*80)
print(tokenizer.decode(prompt_tokens, skip_special_tokens=False))
print("-"*80)
# Show full chosen sequence
chosen_tokens = batch["chosen_input_ids"][idx]
print(f"CHOSEN SEQUENCE:")
print("-"*80)
print(tokenizer.decode(chosen_tokens, skip_special_tokens=False))
print("-"*80 + "\n")
# Show full rejected sequence
rejected_tokens = batch["rejected_input_ids"][idx]
print(f"REJECTED SEQUENCE:")
print("-"*80)
print(tokenizer.decode(rejected_tokens, skip_special_tokens=False))
print("-"*80)
```
## Outputs:
Notice the double `<|im_end|>` tokens for the 'chosen' and 'rejected' columns.
```
--------------------------------------------------------------------------------
PROMPT:
--------------------------------------------------------------------------------
<|im_start|>system
You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>
<|im_start|>user
What is 2+2?<|im_end|>
<|im_start|>assistant
--------------------------------------------------------------------------------
CHOSEN SEQUENCE:
--------------------------------------------------------------------------------
2+2 equals 4.<|im_end|>
<|im_end|>
--------------------------------------------------------------------------------
REJECTED SEQUENCE:
--------------------------------------------------------------------------------
I don't know math.<|im_end|>
<|im_end|>
--------------------------------------------------------------------------------
```
### System Info
- Platform: Linux-6.11.0-1016-nvidia-x86_64-with-glibc2.39
- Python version: 3.12.11
- TRL version: 0.24.0
- PyTorch version: 2.7.1+cu128
- accelerator(s): NVIDIA H200
- Transformers version: 4.57.1
- Accelerate version: 1.11.0
- Accelerate config: not found
- Datasets version: 4.4.1
- HF Hub version: 0.36.0
- bitsandbytes version: not installed
- DeepSpeed version: not installed
- Liger-Kernel version: not installed
- LLM-Blender version: not installed
- OpenAI version: not installed
- PEFT version: not installed
- vLLM version: not installed
### Checklist
- [x] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue))
- [x] I have included my system information
- [x] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
- [x] Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/wo | https://github.com/huggingface/trl/issues/4481 | open | [
"🐛 bug",
"🏋 DPO"
] | 2025-11-06T01:17:05Z | 2025-11-06T18:40:39Z | 0 | DevonPeroutky |
huggingface/trl | 4,468 | Move RLOOTrainer to trl.experimental | ## Context
Part of #4223 and #4374 - Moving trainers to experimental submodule for V1.
## Task
Move RLOOTrainer from main trl module to trl.experimental:
- [ ] Move trainer file to trl/experimental/
- [ ] Update imports in __init__.py files
- [ ] Update documentation
- [ ] Add deprecation warning in old location
- [ ] Update tests
- [ ] Verify examples still work
## Post-V1 Plan
May stay in trl.experimental as maintenance cost is low.
## Related
- Parent tracking issue: #4374
- RFC: #4223
- BCO migration (completed): #4312 | https://github.com/huggingface/trl/issues/4468 | closed | [
"📚 documentation",
"✨ enhancement"
] | 2025-11-05T21:30:15Z | 2025-12-05T18:21:41Z | 2 | behroozazarkhalili |
huggingface/trl | 4,466 | Move PPOTrainer to trl.experimental | ## Context
Part of #4223 and #4374 - Moving trainers to experimental submodule for V1.
## Task
Move PPOTrainer from main trl module to trl.experimental:
- [ ] Move trainer file to trl/experimental/
- [ ] Update imports in __init__.py files
- [ ] Update documentation
- [ ] Add deprecation warning in old location
- [ ] Update tests
- [ ] Verify examples still work
## Post-V1 Plan
May stay in trl.experimental as it's an important baseline but requires heavy refactoring.
## Related
- Parent tracking issue: #4374
- RFC: #4223
- BCO migration (completed): #4312 | https://github.com/huggingface/trl/issues/4466 | closed | [
"📚 documentation",
"✨ enhancement",
"🏋 PPO"
] | 2025-11-05T21:29:54Z | 2025-11-13T19:01:20Z | 0 | behroozazarkhalili |
huggingface/trl | 4,465 | Move ORPOTrainer to trl.experimental | ## Context
Part of #4223 and #4374 - Moving trainers to experimental submodule for V1.
## Task
Move ORPOTrainer from main trl module to trl.experimental:
- [ ] Move trainer file to trl/experimental/
- [ ] Update imports in __init__.py files
- [ ] Update documentation
- [ ] Add deprecation warning in old location
- [ ] Update tests
- [ ] Verify examples still work
## Post-V1 Plan
May stay in trl.experimental.
## Related
- Parent tracking issue: #4374
- RFC: #4223
- BCO migration (completed): #4312 | https://github.com/huggingface/trl/issues/4465 | closed | [
"📚 documentation",
"✨ enhancement",
"🏋 ORPO"
] | 2025-11-05T21:29:44Z | 2025-11-21T06:36:32Z | 0 | behroozazarkhalili |
huggingface/trl | 4,463 | Move KTOTrainer to trl.experimental | ## Context
Part of #4223 and #4374 - Moving trainers to experimental submodule for V1.
## Task
Move KTOTrainer from main trl module to trl.experimental:
- [ ] Move trainer file to trl/experimental/
- [ ] Update imports in __init__.py files
- [ ] Update documentation
- [ ] Add deprecation warning in old location
- [ ] Update tests
- [ ] Verify examples still work
## Post-V1 Plan
May be promoted to main codebase after refactoring.
## Related
- Parent tracking issue: #4374
- RFC: #4223
- BCO migration (completed): #4312 | https://github.com/huggingface/trl/issues/4463 | open | [
"📚 documentation",
"✨ enhancement",
"🏋 KTO"
] | 2025-11-05T21:29:25Z | 2025-11-05T21:29:50Z | 0 | behroozazarkhalili |
huggingface/trl | 4,461 | Move OnlineDPOTrainer to trl.experimental | ## Context
Part of #4223 and #4374 - Moving trainers to experimental submodule for V1.
## Task
Move OnlineDPOTrainer from main trl module to trl.experimental:
- [ ] Move trainer file to trl/experimental/
- [ ] Update imports in __init__.py files
- [ ] Update documentation
- [ ] Add deprecation warning in old location
- [ ] Update tests
- [ ] Verify examples still work
## Post-V1 Plan
May be removed based on usage and maintenance requirements.
## Related
- Parent tracking issue: #4374
- RFC: #4223
- BCO migration (completed): #4312 | https://github.com/huggingface/trl/issues/4461 | closed | [
"📚 documentation",
"✨ enhancement",
"🏋 Online DPO"
] | 2025-11-05T21:28:08Z | 2025-11-24T01:13:07Z | 1 | behroozazarkhalili |
vllm-project/vllm | 28,152 | [Feature]: Factor out `zero_expert_num` from `FusedMoE` | ### 🚀 The feature, motivation and pitch
We have many special cases in `FusedMoE` for `zero_expert_num`.
This parameter is used exclusively for `LongCatFlash`. We should factor this out of `FusedMoE` and put the complexity into the model file.
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28152 | open | [
"help wanted",
"feature request"
] | 2025-11-05T19:05:54Z | 2025-11-06T20:08:23Z | 0 | robertgshaw2-redhat |
vllm-project/vllm | 28,150 | [Bug]: -O.mode=NONE (or -cc.mode=NONE) should work | ### Your current environment
main
### 🐛 Describe the bug
Right now -O.mode only accepts integer levels. Ideally it would accept both the integer levels and the string names.
`vllm serve -O.mode=NONE` # doesn't work
`vllm serve -O.mode=0` # does work
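A minimal sketch of the kind of coercion that would allow both spellings; the `CompilationMode` enum and its members below are assumptions for illustration, not vLLM's actual definitions:
```python
import enum

# Hypothetical stand-in for vLLM's compilation mode enum; the real names may differ.
class CompilationMode(enum.IntEnum):
    NONE = 0
    STOCK_TORCH_COMPILE = 1
    DYNAMO_TRACE_ONCE = 2
    VLLM_COMPILE = 3

def coerce_mode(value: str | int) -> CompilationMode:
    """Accept either an integer level ('-O.mode=0') or a name ('-O.mode=NONE')."""
    if isinstance(value, str) and not value.isdigit():
        return CompilationMode[value.upper()]
    return CompilationMode(int(value))

assert coerce_mode("NONE") == coerce_mode(0)
```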
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28150 | closed | [
"bug",
"help wanted",
"good first issue",
"torch.compile"
] | 2025-11-05T18:28:23Z | 2025-11-12T00:46:20Z | 1 | zou3519 |
vllm-project/vllm | 28,137 | [Feature]: Refactor `aiter_shared_expert_fusion` | ### 🚀 The feature, motivation and pitch
We have a special case in the `FusedMoE` layer for `aiter_shared_expert_fusion` which creates various if branches scattered across the layer.
We should factor this out
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28137 | open | [
"help wanted"
] | 2025-11-05T15:54:09Z | 2025-12-20T22:00:55Z | 3 | robertgshaw2-redhat |
vllm-project/vllm | 28,132 | [Usage]: How do I assign a specific GPU to a vLLM docker container? | ### Your current environment
stock vllm-openai:v0.11.0 docker image
rootless Docker v.27.5.1 on Ubuntu 22.04.5 LTS on physical hardware
Nvidia Driver Version: 570.133.20
CUDA Version: 12.8
GPUs: 4x H100 (NVLink), numbered 0,1,2,3
### How would you like to use vllm
I want to run inference of [SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B). The exact model doesn't matter, this happens with other models as well.
I want to run this model using Docker. This basically works. However, it always picks a different GPU than what I specify in CUDA_VISIBLE_DEVICES. Out of my four GPUs, 0 and 1 are idle. I would like the container to use GPU 0. But no matter what I try, it always decides to run on GPU 1. I can verify this using `nvtop`.
This is my compose file:
```yaml
services:
vllm-smol:
container_name: smollm-3b
image: vllm/vllm-openai:v0.11.0
volumes:
- ./smollm-3b/models:/models
gpus: "all"
environment:
HF_HOME: "/models"
CUDA_VISIBLE_DEVICES: "0"
command: >
--model HuggingFaceTB/SmolLM3-3B
--enable-auto-tool-choice
--tool-call-parser=hermes
--gpu-memory-utilization 0.1875
labels:
```
This way, the vLLM container starts and inference runs fine, but it decides to use GPU 1 instead of GPU 0.
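One way to see what is actually happening (a diagnostic sketch, not a confirmed fix): CUDA's default device enumeration ("fastest first") does not have to match nvidia-smi's PCI-bus order, so the device that CUDA calls 0 inside the container can be a different physical card. Printing the device UUID from inside the container and comparing it with `nvidia-smi -L` on the host shows which GPU was really picked; the two environment variables below can equally be set in the compose `environment:` block.
```python
import os

# Ask CUDA to enumerate devices in PCI-bus order (the order nvidia-smi uses).
# Both variables must be set before CUDA is initialized, i.e. before importing torch.
os.environ.setdefault("CUDA_DEVICE_ORDER", "PCI_BUS_ID")
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "0")

import torch

# Compare this UUID with `nvidia-smi -L` on the host to identify the physical GPU
# behind the container's "device 0".
props = torch.cuda.get_device_properties(0)
print(props.name, props.uuid)
```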
I have also tried this, as docker compose will only accept `gpus: "all"`:
```bash
docker run -d \
--name smollm-3b \
-v "$(pwd)/smollm-3b/models:/models" \
--gpus "device=0" \
-e HF_HOME="/models" \
-e CUDA_VISIBLE_DEVICES="0" \
vllm/vllm-openai:v0.11.0 \
--model HuggingFaceTB/SmolLM3-3B \
--enable-auto-tool-choice \
--tool-call-parser=hermes \
--gpu-memory-utilization 0.1875
```
This gives me an error during container startup: `RuntimeError: No CUDA GPUs are available`
Omitting `CUDA_VISIBLE_DEVICES` gives the same error.
And finally, there is also this attempt:
```yaml
services:
vllm-smol:
container_name: smollm-3b
image: vllm/vllm-openai:v0.11.0
volumes:
- ./smollm-3b/models:/models
deploy:
resources:
reservations:
devices:
- driver: nvidia
device_ids: ['0']
capabilities: [gpu]
environment:
HF_HOME: "/models"
# CUDA_VISIBLE_DEVICES: "0"
command: >
--model HuggingFaceTB/SmolLM3-3B
--enable-auto-tool-choice
--tool-call-parser=hermes
--gpu-memory-utilization 0.1875
```
Errors are, once again, identical with and without `CUDA_VISIBLE_DEVICES`: `RuntimeError: No CUDA GPUs are available`
Am I doing something fundamentally wrong here? All I want is to use a specific GPU (GPU 0 in my case).
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28132 | closed | [
"usage"
] | 2025-11-05T14:42:17Z | 2025-11-06T14:54:41Z | 1 | lindner-tj |
huggingface/lerobot | 2,389 | How to resolve the issue that GROOT cannot train properly? Below is my training configuration and error log. | How to resolve the issue that GROOT cannot train properly? Below is my training configuration and error log.
accelerate launch \
--multi_gpu \
--num_processes=2 \
$(which lerobot-train) \
--output_dir=./outputs/groot_training \
--save_checkpoint=true \
--batch_size=8 \
--steps=200000 \
--save_freq=20000 \
--log_freq=200 \
--policy.type=groot \
--policy.push_to_hub=false \
--policy.repo_id=your_repo_id \
--dataset.root=/home/ruijia/wxl/data/train_segdata_wrist_20251028_200/ \
--dataset.repo_id=ur_wrist_data \
--wandb.enable=false \
--wandb.disable_artifact=false \
--job_name=grapdata
[rank1]:[W1105 18:09:16.255729052 CUDAGuardImpl.h:119] Warning: CUDA warning: an illegal memory access was encountered (function destroyEvent)
terminate called after throwing an instance of 'c10::Error'
[rank1]:[E1105 18:09:16.257152106 ProcessGroupNCCL.cpp:1899] [PG ID 0 PG GUID 0(default_pg) Rank 1] Process group watchdog thread terminated with exception: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at /pytorch/c10/cuda/CUDAException.cpp:43 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x7c3dcab785e8 in /home/ruijia/miniconda3/envs/lerobot_pi05/lib/python3.10/site-packages/torch/lib/libc10.so) | https://github.com/huggingface/lerobot/issues/2389 | open | [
"training"
] | 2025-11-05T10:17:59Z | 2025-11-07T17:47:50Z | null | wuxiaolianggit |
huggingface/lerobot | 2,388 | How to improve the generalization of a VLA model like GR00T | After fine-tuning GR00T, I found that it only works for the prompts within the dataset; it is difficult for it to understand new words and new items that need to be grasped.
So is there a method to preserve generalization, for example by creating a new layer to map the output of the model to a new dimensionality? | https://github.com/huggingface/lerobot/issues/2388 | open | [] | 2025-11-05T10:06:11Z | 2025-11-05T10:44:38Z | null | Temmp1e |
vllm-project/vllm | 28,119 | [Feature]: Will we support async scheduler for pipeline parallel? | ### 🚀 The feature, motivation and pitch
SGLang already have https://github.com/sgl-project/sglang/pull/11852
And I see a huge perf gap on SM120 with PP because of this.
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28119 | closed | [
"feature request"
] | 2025-11-05T09:55:57Z | 2025-11-07T06:14:19Z | 4 | weireweire |
huggingface/gsplat.js | 122 | I want to add an object (such as a robot) to move around in the model. How can this be achieved? | I want to add an object (such as a robot) to move around in the model. How can this be achieved? | https://github.com/huggingface/gsplat.js/issues/122 | open | [] | 2025-11-05T09:16:39Z | 2025-11-05T09:16:39Z | null | ThinkingInGIS |
vllm-project/vllm | 28,104 | [Usage]: vllm bench serve cannot use the sharegpt dataset | ### Your current environment
```text
I run the following benchmarks command: vllm bench serve --model Qwen3 --tokenizer /mnt/workspace/models --host 127.0.0.1 --port 80 --num-prompts 400 --percentile-metrics ttft,tpot,itl,e2el --metric-percentiles 90,95,99 --dataset-name sharegpt --data
set-path /mnt/workspace/benchmarks/sharegpt/ShareGPT_V3_unfiltered_cleaned_split.json --sharegpt-output-len 512
It reports the following error: /usr/local/lib/python3.12/dist-packages/torch/cuda/init.py:61: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
import pynvml # type: ignore[import]
INFO 11-04 22:14:30 [init.py:243] Automatically detected platform cuda.
INFO 11-04 22:14:32 [init.py:31] Available plugins for group vllm.general_plugins:
INFO 11-04 22:14:32 [init.py:33] - lora_filesystem_resolver -> vllm.plugins.lora_resolvers.filesystem_resolver:register_filesystem_resolver
INFO 11-04 22:14:32 [init.py:36] All plugins in this group will be loaded. Set `VLLM_PLUGINS` to control which plugins to load.
usage: vllm bench serve [options]
vllm bench <bench_type> [options] serve: error: argument --dataset-name: invalid choice: 'sharegpt' (choose from random). Why am I getting this error???
```
### How would you like to use vllm
How can I solve this?
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28104 | open | [
"usage"
] | 2025-11-05T06:18:02Z | 2025-11-06T14:24:46Z | 1 | uOnePiece |
vllm-project/vllm | 28,070 | [Usage]: Is there a way to control default thinking behaviour of a model? | ### Your current environment
Is there a way to control the default thinking behaviour for models deployed through vLLM?
As per https://docs.vllm.ai/en/stable/features/reasoning_outputs.html,
IBM Granite 3.2 reasoning is disabled by default.
Qwen3, GLM 4.6, Deepseek V3.1 all have reasoning enabled by default.
It would be great if there is a way to control this from vllm.
--override-generation-config allows user to override temperature and other params at deployment.
But this does not work for reasoning.
I have tried
`docker run -d --runtime nvidia -e TRANSFORMERS_OFFLINE=1 -e DEBUG="true" -p 8000:8000 --ipc=host vllm/vllm-openai:v0.11.0 --reasoning-parser qwen3 --model Qwen/Qwen3-4B --override-generation-config '{"chat_template_kwargs": {"enable_thinking": false}}'`
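For completeness, a per-request workaround that should work (it does not answer the server-side default question): vLLM's OpenAI-compatible server accepts `chat_template_kwargs` in the request body, which the OpenAI client can pass via `extra_body`:
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Qwen/Qwen3-4B",
    messages=[{"role": "user", "content": "Hello"}],
    # Forwarded to the chat template on the server; disables Qwen3 thinking
    # for this single request rather than as a deployment-wide default.
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)
print(resp.choices[0].message.content)
```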
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28070 | closed | [
"usage"
] | 2025-11-04T22:03:32Z | 2025-12-30T03:38:48Z | 0 | yz342 |
vllm-project/vllm | 28,056 | [Bug]: Missing libarm_compute.so in Arm CPU pip installed wheels | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
We now have vllm wheels for Arm CPUs in pypi thanks to https://github.com/vllm-project/vllm/pull/26931 and https://github.com/vllm-project/vllm/pull/27331
You can install Arm CPU wheels with:
```
pip install --pre vllm==0.11.1rc3+cpu --extra-index-url https://wheels.vllm.ai/0.11.1rc3%2Bcpu/
```
However, it will currently fail unless you LD_PRELOAD ACL:
```
WARNING 10-29 12:33:18 [interface.py:171] Failed to import from vllm._C: ImportError('libarm_compute.so: cannot open shared object file: No such file or directory')
```
We need to figure out how to package libarm_compute.so in the wheel.
Best way to reproduce this locally is:
- build vllm from main locally with `VLLM_TARGET_DEVICE=cpu python3 setup.py bdist_wheel`
- remove `vllm/deps` which contains the libarm_compute.so
- pip install the wheel you built
then you will run into the issue (because it will try to load libarm_compute.so under vllm/.deps/arm_compute-src/build/)
Note: ACL/oneDNN are built in vllm here:
We need to figure out how to bundle `libarm_compute.so` in the wheel to avoid this.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28056 | closed | [
"bug"
] | 2025-11-04T17:22:55Z | 2025-11-13T05:43:10Z | 2 | fadara01 |
vllm-project/vllm | 28,046 | Qwen3-Omni model inference : ValueError: Either SamplingParams or PoolingParams must be provided. | ### Your current environment
```text
The output of `python web_demo.py`
```
The above-mentioned method produces the error below:
```
qwen/Qwen3-Omni/collect_env.py", line 287, in get_vllm_version
from vllm import __version__, __version_tuple__
ImportError: cannot import name '__version__' from 'vllm' (unknown location)
```
while the installed packages are listed below:
```
pip list
Package Version Editable project location
--------------------------------- --------------------------------- ----------------------------------------------------------
accelerate 1.11.0
aiofiles 24.1.0
aiohappyeyeballs 2.6.1
aiohttp 3.13.2
aiosignal 1.4.0
airportsdata 20250909
annotated-doc 0.0.3
annotated-types 0.7.0
anyio 4.11.0
astor 0.8.1
async-timeout 5.0.1
attrs 25.4.0
audioread 3.1.0
av 16.0.1
blake3 1.0.8
Brotli 1.1.0
cachetools 6.2.1
certifi 2025.10.5
cffi 2.0.0
charset-normalizer 3.4.4
click 8.2.1
cloudpickle 3.1.2
cmake 4.1.2
compressed-tensors 0.10.2
cupy-cuda12x 13.6.0
decorator 5.2.1
depyf 0.18.0
dill 0.4.0
diskcache 5.6.3
distro 1.9.0
dnspython 2.8.0
einops 0.8.1
email-validator 2.3.0
exceptiongroup 1.3.0
fastapi 0.121.0
fastapi-cli 0.0.14
fastapi-cloud-cli 0.3.1
fastrlock 0.8.3
ffmpy 0.6.4
filelock 3.20.0
flash_attn 2.8.3
frozenlist 1.8.0
fsspec 2025.10.0
gguf 0.17.1
gradio 5.44.1
gradio_client 1.12.1
groovy 0.1.2
h11 0.16.0
hf-xet 1.2.0
httpcore 1.0.9
httptools 0.7.1
httpx 0.28.1
huggingface-hub 0.36.0
idna 3.11
interegular 0.3.3
Jinja2 3.1.6
jiter 0.11.1
joblib 1.5.2
jsonschema 4.25.1
jsonschema-specifications 2025.9.1
lark 1.2.2
lazy_loader 0.4
librosa 0.11.0
llguidance 0.7.30
llvmlite 0.44.0
lm-format-enforcer 0.10.12
markdown-it-py 4.0.0
MarkupSafe 3.0.3
mdurl 0.1.2
mistral_common 1.8.5
mpmath 1.3.0
msgpack 1.1.2
msgspec 0.19.0
multidict 6.7.0
nest-asyncio 1.6.0
networkx 3.4.2
ninja 1.13.0
numba 0.61.2
numpy 2.2.6
nvidia-cublas-cu12 12.6.4.1
nvidia-cuda-cupti-cu12 12.6.80
nvidia-cuda-nvrtc-cu12 12.6.77
nvidia-cuda-runtime-cu12 12.6.77
nvidia-cudnn-cu12 9.5.1.17
nvidia-cufft-cu12 11.3.0.4
nvidia-cufile-cu12 1.11.1.6
nvidia-curand-cu12 10.3.7.77
nvidia-cusolver-cu12 11.7.1.2
nvidia-cusparse-cu12 12.5.4.2
nvidia-cusparselt-cu12 0.6.3
nvidia-nccl-cu12 2.26.2
nvidia-nvjitlink-cu12 12.6.85
nvidia-nvtx-cu12 12.6.77
openai 1.90.0
opencv-python-headless 4.12.0.88
orjson 3.11.4
outlines 0.1.11
outlines_core 0.1.26
packaging 25.0
pandas 2.3.3
partial-json-parser 0.2.1.1.post6
pillow 11.3.0
pip 25.2
platformdirs 4.5.0
pooch 1.8.2
prometheus_client 0.23.1
prometheus-fastapi-instrumentator 7.1.0
propcache | https://github.com/vllm-project/vllm/issues/28046 | closed | [
"usage"
] | 2025-11-04T13:59:57Z | 2025-11-24T19:24:39Z | 22 | Tortoise17 |
vllm-project/vllm | 28,045 | [Doc]: Any detailed documentation about how to load_weights in customized vllm model? | ### 📚 The doc issue
I don't know how to modify the attention and how the load_model works.
The documentation says too few, I find it's hard to understand.
Anyone has some more detailed experience? Thank you!
### Suggest a potential alternative/fix
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28045 | open | [
"documentation"
] | 2025-11-04T13:23:25Z | 2025-11-05T02:07:55Z | 0 | sleepwalker2017 |
vllm-project/vllm | 28,035 | [Usage]: deepseek-ocr The output token count is too low and unstable. | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
python3 -m vllm.entrypoints.openai.api_server --served-model-name deepseek-ocr --model deepseekocr --tensor-parallel-size 1 --gpu-memory-utilization 0.95 --disable-log-requests --logits_processors vllm.model_executor.models.deepseek_ocr:NGramPerReqLogitsProcessor
{
"model": "DeepSeek-OCR",
"messages": [{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{self.image_to_base64(image_path)}"}
},
{"type": "text", "text": ”<image>\nFree OCR.“}
]
}],
"vllm_xargs": {
"ngram_size": 30,
"window_size": 100,
"whitelist_token_ids": "[128821, 128822]"
},
"temperature": 0.0,
"max_tokens": 4096
}
"finish_reason":"stop" but "completion_tokens":200+ ,cannot output the complete image content.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28035 | open | [
"usage"
] | 2025-11-04T09:50:53Z | 2025-11-04T09:50:53Z | 0 | sixgod-666 |
vllm-project/vllm | 28,031 | [Usage]: Error: Failed to initialize the TMA descriptor 700 | ### Your current environment
Using vLLM 0.11.0 to train Qwen3-VL-8B.
The following error message appears intermittently during training.
```
[36m(WorkerDict pid=82555)[0m TMA Desc Addr: 0x7f4e2736b080
[36m(WorkerDict pid=82555)[0m format 9
[36m(WorkerDict pid=82555)[0m dim 4
[36m(WorkerDict pid=82555)[0m gmem_address 0xa9bdcd0000
[36m(WorkerDict pid=82555)[0m globalDim (128,415,2,1,1)
[36m(WorkerDict pid=82555)[0m globalStrides (2,2048,1024,0,0)
[36m(WorkerDict pid=82555)[0m boxDim (64,128,1,1,1)
[36m(WorkerDict pid=82555)[0m elementStrides (1,1,1,1,1)
[36m(WorkerDict pid=82555)[0m interleave 0
[36m(WorkerDict pid=82555)[0m swizzle 3
[36m(WorkerDict pid=82555)[0m l2Promotion 2
[36m(WorkerDict pid=82555)[0m oobFill 0
[36m(WorkerDict pid=82555)[0m Error: Failed to initialize the TMA descriptor 700
[36m(WorkerDict pid=82555)[0m TMA Desc Addr: 0x7f4e2736b080
[36m(WorkerDict pid=82555)[0m format 9
[36m(WorkerDict pid=82555)[0m dim 4
[36m(WorkerDict pid=82555)[0m gmem_address 0xa46a000000
[36m(WorkerDict pid=82555)[0m globalDim (128,16,2,61647,1)
[36m(WorkerDict pid=82555)[0m globalStrides (2,512,256,8192,0)
[36m(WorkerDict pid=82555)[0m boxDim (64,128,1,1,1)
[36m(WorkerDict pid=82555)[0m elementStrides (1,1,1,1,1)
[36m(WorkerDict pid=82555)[0m interleave 0
[36m(WorkerDict pid=82555)[0m swizzle 3
[36m(WorkerDict pid=82555)[0m l2Promotion 2
[36m(WorkerDict pid=82555)[0m oobFill 0
[36m(WorkerDict pid=82555)[0m Error: Failed to initialize the TMA descriptor 700
[36m(WorkerDict pid=82555)[0m TMA Desc Addr: 0x7f4e2736b080
[36m(WorkerDict pid=82555)[0m format 9
[36m(WorkerDict pid=82555)[0m dim 4
[36m(WorkerDict pid=82555)[0m gmem_address 0xa48819e000
[36m(WorkerDict pid=82555)[0m globalDim (128,16,2,61647,1)
[36m(WorkerDict pid=82555)[0m globalStrides (2,512,256,8192,0)
[36m(WorkerDict pid=82555)[0m boxDim (64,128,1,1,1)
[36m(WorkerDict pid=82555)[0m elementStrides (1,1,1,1,1)
[36m(WorkerDict pid=82555)[0m interleave 0
[36m(WorkerDict pid=82555)[0m swizzle 3
[36m(WorkerDict pid=82555)[0m l2Promotion 2
[36m(WorkerDict pid=82555)[0m oobFill 0
[36m(WorkerDict pid=82555)[0m Error: Failed to initialize the TMA descriptor 700
[36m(WorkerDict pid=82555)[0m TMA Desc Addr: 0x7f4e2736b080
[36m(WorkerDict pid=82555)[0m format 9
[36m(WorkerDict pid=82555)[0m dim 4
[36m(WorkerDict pid=82555)[0m gmem_address 0xa46a000000
[36m(WorkerDict pid=82555)[0m globalDim (128,16,2,61647,1)
[36m(WorkerDict pid=82555)[0m globalStrides (2,512,256,8192,0)
[36m(WorkerDict pid=82555)[0m boxDim (64,128,1,1,1)
[36m(WorkerDict pid=82555)[0m elementStrides (1,1,1,1,1)
[36m(WorkerDict pid=82555)[0m interleave 0
[36m(WorkerDict pid=82555)[0m swizzle 3
[36m(WorkerDict pid=82555)[0m l2Promotion 2
[36m(WorkerDict pid=82555)[0m oobFill 0
[36m(WorkerDict pid=82555)[0m Error: Failed to initialize the TMA descriptor 700
[36m(WorkerDict pid=82555)[0m TMA Desc Addr: 0x7f4e2736b080
[36m(WorkerDict pid=82555)[0m format 9
[36m(WorkerDict pid=82555)[0m dim 4
[36m(WorkerDict pid=82555)[0m gmem_address 0xa48819e000
[36m(WorkerDict pid=82555)[0m globalDim (128,16,2,61647,1)
[36m(WorkerDict pid=82555)[0m globalStrides (2,512,256,8192,0)
[36m(WorkerDict pid=82555)[0m boxDim (64,128,1,1,1)
[36m(WorkerDict pid=82555)[0m elementStrides (1,1,1,1,1)
[36m(WorkerDict pid=82555)[0m interleave 0
[36m(WorkerDict pid=82555)[0m swizzle 3
[36m(WorkerDict pid=82555)[0m l2Promotion 2
[36m(WorkerDict pid=82555)[0m oobFill 0
[36m(WorkerDict pid=82555)[0m Error: Failed to initialize the TMA descriptor 700
[36m(WorkerDict pid=82555)[0m CUDA error (/workspace/.deps/vllm-flash-attn-src/hopper/flash_fwd_launch_template.h:191): an illegal memory access was encountered
[36m(WorkerDict pid=82558)[0m l2Promotion 2
[36m(WorkerDict pid=82558)[0m l2Promotion 2
[36m(WorkerDict pid=82558)[0m l2Promotion 2
[36m(WorkerDict pid=82558)[0m l2Promotion 2
[36m(WorkerDict pid=82558)[0m l2Promotion 2
```
Then the error message below keeps repeating, but training has not stopped.
```
[36m(WorkerDict pid=134586)[0m [rank7]:[W1104 07:52:01.751088784 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=90, addr=[train-kubeflow-72-46805-20251104102107-master-0]:49384, remote=[train-kubeflow-72-46805-20251104102107-master-0]:32991): Connection reset by peer[32m [repeated 6x across cluster][0m
[36m(WorkerDict pid=134586)[0m Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:679 (most recent call first):[32m [repeated 6x across cluster][0m
[36m(WorkerDict pid=134580)[0m frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::ba | https://github.com/vllm-project/vllm/issues/28031 | open | [
"usage"
] | 2025-11-04T08:13:45Z | 2025-12-11T08:18:15Z | 4 | DBMing |
vllm-project/vllm | 28,016 | [Usage]: How to recognize PDFs in DeepSeek-OCR with openai | ### Your current environment
```
vllm serve deepseek-ai/DeepSeek-OCR --logits_processors vllm.model_executor.models.deepseek_ocr.NGramPerReqLogitsProcessor --no-enable-prefix-caching --mm-processor-cache-gb 0
```
### How would you like to use vllm
How to recognize PDFs and convert PDFs to Markdown with DeepSeek-OCR via an OpenAI-compatible API?
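There doesn't appear to be a built-in PDF route, but one workable pattern is to rasterize each page to an image and send the pages one at a time to the OpenAI-compatible endpoint started above. A minimal sketch (assumptions: `pdf2image` with poppler and the `openai` client are installed, the server runs on localhost:8000, and the same prompt style that works for single images also works here):
```python
import base64
import io

from openai import OpenAI
from pdf2image import convert_from_path  # requires poppler

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def page_to_data_url(page) -> str:
    buf = io.BytesIO()
    page.save(buf, format="PNG")
    return "data:image/png;base64," + base64.b64encode(buf.getvalue()).decode()

markdown_pages = []
for page in convert_from_path("document.pdf", dpi=200):  # one PIL image per page
    resp = client.chat.completions.create(
        model="deepseek-ai/DeepSeek-OCR",
        messages=[{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": page_to_data_url(page)}},
                # Prompt text is an assumption; use whatever prompt works for plain images.
                {"type": "text", "text": "<image>\nConvert the document to markdown."},
            ],
        }],
        temperature=0.0,
        max_tokens=4096,
    )
    markdown_pages.append(resp.choices[0].message.content)

print("\n\n".join(markdown_pages))
```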
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28016 | open | [
"usage"
] | 2025-11-04T03:35:38Z | 2025-11-04T07:33:07Z | 2 | shoted |
vllm-project/vllm | 28,003 | [Usage]: | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : version 4.1.0
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.11 (main, Jun 4 2025, 08:56:18) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-6.8.0-54-generic-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.93
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration : GPU 0: NVIDIA H100 NVL
Nvidia driver version : 570.86.10
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 48
Stepping: 1
BogoMIPS: 4799.59
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core ssbd ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean flushbyasid pausefilter pfthreshold v_vmsave_vmload vgif vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid fsrm flush_l1d arch_capabilities
Virtualization: AMD-V
L1d cache: 3 MiB (48 instances)
L1i cache: 3 MiB (48 instances)
L2 cache: 24 MiB (48 instances)
L3 cache: 768 MiB (48 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] flashinfer-python==0.3.1
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cudnn-frontend==1.14.1
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-cufile-cu12==1.13.1.3
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3] nvidia-cusparse-cu12==12.5.8.93
[pip3] nvidia-cuspa | https://github.com/vllm-project/vllm/issues/28003 | open | [
"usage"
] | 2025-11-03T21:19:15Z | 2025-11-26T15:32:40Z | 1 | amitmvyas |
vllm-project/vllm | 27,995 | [RFC]: Make PassConfig flags less verbose | ### Motivation.
Almost all `PassConfig` field names have `enable_` in the name, which is unnecessarily verbose. They are also pretty long, and sometimes not descriptive enough. Finally, `enable_fusion` should be split into rmsnorm+quant and activation+quant flags as we want to control these flags separately.
### Proposed Change.
We should rename the flags:
- `enable_async_tp` -> `fuse_gemm_comms`
- `enable_attn_fusion` -> `fuse_attn_quant`
- `enable_fi_allreduce_fusion` -> `fuse_allreduce_rms`
- `enable_fusion` -> `fuse_norm_quant`, `fuse_act_quant`
- `enable_noop` -> `eliminate_noops`
- `enable_sequence_parallelism` -> `enable_sp`
For future RoPE-based fusion passes, the flags will look like:
- `enable_qknorm_rope_fusion` -> `fuse_qknorm_rope`
- `enable_rope_cache_fusion` -> `fuse_rope_cache`
- ...
We can deprecate the original flags in the next release and map them to the new ones, and remove them 1 or even 2 releases later (shouldn't be hard to support). These flags will be used less commonly after `-O` optimization levels land anyway.
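To make the deprecation path concrete, a rough sketch of the remapping (illustrative only; the helper and dict names are made up and this is not the proposed implementation):
```python
import warnings

# Hypothetical map from deprecated PassConfig flags to their replacements.
_DEPRECATED_PASS_FLAGS = {
    "enable_async_tp": "fuse_gemm_comms",
    "enable_attn_fusion": "fuse_attn_quant",
    "enable_fi_allreduce_fusion": "fuse_allreduce_rms",
    "enable_noop": "eliminate_noops",
    "enable_sequence_parallelism": "enable_sp",
}
# enable_fusion fans out into two independent flags.
_DEPRECATED_FANOUT = {"enable_fusion": ("fuse_norm_quant", "fuse_act_quant")}


def remap_pass_config_kwargs(kwargs: dict) -> dict:
    """Translate deprecated PassConfig kwargs to the new names, with a warning."""
    out = dict(kwargs)
    for old, new in _DEPRECATED_PASS_FLAGS.items():
        if old in out:
            warnings.warn(f"PassConfig.{old} is deprecated; use {new}", DeprecationWarning)
            out.setdefault(new, out.pop(old))
    for old, news in _DEPRECATED_FANOUT.items():
        if old in out:
            warnings.warn(f"PassConfig.{old} is deprecated; use {'/'.join(news)}", DeprecationWarning)
            value = out.pop(old)
            for new in news:
                out.setdefault(new, value)
    return out
```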
### Feedback Period.
1 week, 11/3 - 11/7
### CC List.
@zou3519 @youkaichao @mgoin @ilmarkov @nvpohanh @pavanimajety
### Any Other Things.
With passes following a common construction convention, we can also add a `full_pass_pipeline` arg where users can control the exact order of the passes if necessary, but that is less likely to be needed urgently and can be added later.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/27995 | closed | [
"help wanted",
"good first issue",
"RFC",
"torch.compile"
] | 2025-11-03T17:49:29Z | 2025-12-03T19:53:01Z | 7 | ProExpertProg |
huggingface/peft | 2,888 | Potential remote code execution via untrusted tokenizer_kwargs in PromptEmbedding | ### Description
A remote code execution vector exists in the PEFT prompt-tuning flow. A remote `adapter_config.json` can inject loader kwargs that are forwarded to `AutoTokenizer.from_pretrained` calls. If an attacker sets `"tokenizer_kwargs": {"trust_remote_code": true}` and points `tokenizer_name_or_path` at an attacker-controlled repo, constructing the prompt embedding will cause `AutoTokenizer.from_pretrained(...)` to import and run code from that repo. This happens during normal initialization and requires no further user interaction.
### Root Cause
`PromptEmbedding` trusts and forwards fields from config into `AutoTokenizer.from_pretrained` without validating or sanitizing them:
https://github.com/huggingface/peft/blob/30a19a08f9ef85ce1095b9ac69e78269121525e2/src/peft/tuners/prompt_tuning/model.py#L78-L84
### Impact
This issue turns remote configuration files into attack vectors. Any user who loads a malicious adapter config can have arbitrary code executed on their machine. The compromise is silent, requires no extra user action beyond `from_pretrained`, and is easy to weaponize by publishing a seemingly legitimate config that explicitly sets `trust_remote_code=True` and points to attacker code. Consequences include command execution, credential and data theft, file tampering, and worm infection if environment tokens or write permissions are present. This should be fixed urgently by treating config-supplied kwargs as untrusted: filter or reject sensitive parameters such as `trust_remote_code`.
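To illustrate the suggested mitigation (a sketch only, not a vetted patch; the exact integration point inside `PromptEmbedding.__init__` may differ), config-supplied kwargs could be filtered before they reach the tokenizer loader:
```python
from typing import Optional

# Illustrative filter for untrusted, config-supplied tokenizer kwargs.
_FORBIDDEN_TOKENIZER_KWARGS = {"trust_remote_code"}


def sanitize_tokenizer_kwargs(tokenizer_kwargs: Optional[dict]) -> dict:
    tokenizer_kwargs = dict(tokenizer_kwargs or {})
    for key in _FORBIDDEN_TOKENIZER_KWARGS & tokenizer_kwargs.keys():
        raise ValueError(
            f"Refusing config-supplied tokenizer kwarg {key!r}; pass it explicitly "
            "in code if you really intend to trust remote code."
        )
    return tokenizer_kwargs


# Usage inside PromptEmbedding would then look roughly like:
# tokenizer = AutoTokenizer.from_pretrained(
#     config.tokenizer_name_or_path,
#     **sanitize_tokenizer_kwargs(tokenizer_kwargs),
# )
```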
### Who can help?
@benjaminbossan @githubnemo
### Reproduction
A malicious remote config can look like:
```json
{
"base_model_name_or_path": "XManFromXlab/peft-prompt-embedding-rce",
"tokenizer_name_or_path": "XManFromXlab/peft-prompt-embedding-rce"
"tokenizer_kwargs": { "trust_remote_code": true }
}
```
When users are lured to the repo and use PEFT to load the config from the remote repo:
```python
from peft import PromptEmbedding, PromptTuningConfig
from transformers import AutoModelForSeq2SeqLM
t5_model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
example_model = "XManFromXlab/peft-prompt-embedding-rce"
config = PromptTuningConfig.from_pretrained(example_model, trust_remote_code=False)
prompt_embedding = PromptEmbedding(config, t5_model.shared)
```
During `PromptEmbedding` initialization the code reads `tokenizer_kwargs` from the remote config and calls `AutoTokenizer.from_pretrained(config.tokenizer_name_or_path, **tokenizer_kwargs)`. Because `trust_remote_code` was injected via the config, the loader imports and executes the attacker’s backend code, demonstrating RCE.
### Expected behavior
In my example, the above code will print the message 'Execute Malicious Payload!!!!!!', which indicates the execution of malicious scripts.
```bash
$ python3 main.py
Execute Malicious Payload!!!!!!
Execute Malicious Payload!!!!!!
Execute Malicious Payload!!!!!!
``` | https://github.com/huggingface/peft/issues/2888 | closed | [] | 2025-11-03T16:04:52Z | 2025-11-04T17:50:28Z | 3 | Vancir |
huggingface/lerobot | 2,371 | memory increase continuously during training Groot | ### System Info
```Shell
- lerobot version: 0.4.1
- Platform: Linux-5.4.250-2-velinux1u3-amd64-x86_64-with-glibc2.31
- Python version: 3.10.15
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 2.1.3
- PyTorch version: 2.7.1+cu126
- Is PyTorch built with CUDA support?: True
- Cuda version: 12.6
- GPU model: NVIDIA GeForce RTX 4090
- Using GPU in script?: <fill in>
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
run
```
lerobot-train \
  --output_dir=$OUTPUT_DIR \
  --save_checkpoint=true \
  --batch_size=64 \
  --steps=10000 \
  --save_freq=1000 \
  --log_freq=100 \
  --policy.push_to_hub=false \
  --policy.type=groot \
  --dataset.repo_id=$DATASET_ID \
  --dataset.root=$DATASET_ROOT_DIR \
  --dataset.streaming=false \
  --dataset.image_transforms.enable=true \
  --wandb.enable=true \
  --wandb.mode=offline \
  --wandb.project=groot_test \
  --job_name=$JOB_NAME
```
### Expected behavior
Memory keeps increasing until it runs out of memory. | https://github.com/huggingface/lerobot/issues/2371 | open | [
"question",
"policies",
"performance"
] | 2025-11-03T14:38:52Z | 2025-12-31T13:17:11Z | null | caoran2025 |
vllm-project/vllm | 27,982 | [Usage]: How can I access or return hidden states (representations) after generation? | ### Your current environment
In my training pipeline (GRPO), I need to access hidden-state representations of all layers and store prompt representations alongside generated sequences.
Is there any supported way to extract or return hidden states from the vLLM inference engine?
Environment
vllm==0.11.0
Python 3.12
### How would you like to use vllm
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/27982 | open | [
"usage"
] | 2025-11-03T13:01:51Z | 2025-11-04T03:07:40Z | 1 | hakbari14 |
huggingface/lerobot | 2,368 | Release 0.5.0 | A Github Issue created for the upcoming release to discuss the planned features & changes:
* Audio PR #967
* Bump transformers dependency to +v5 | https://github.com/huggingface/lerobot/issues/2368 | open | [
"bug",
"question",
"dependencies"
] | 2025-11-03T12:46:51Z | 2025-12-24T00:08:16Z | null | imstevenpmwork |
vllm-project/vllm | 27,981 | [Usage]: How to specify max_pixels for Qwen2.5-VL | ### Your current environment
As the title says, I tried ``--mm-processor-kwargs {"max_pixels": $MAX_PIXELS}`` but it had no effect.
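(For reference, the pattern usually shown for Qwen2/2.5-VL quotes the whole JSON and uses literal integers; a sketch with placeholder values:)
```sh
vllm serve Qwen/Qwen2.5-VL-7B-Instruct \
  --mm-processor-kwargs '{"min_pixels": 784, "max_pixels": 1003520}'
```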
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/27981 | open | [
"usage"
] | 2025-11-03T12:38:34Z | 2025-11-04T08:19:54Z | 3 | aJupyter |
huggingface/accelerate | 3,829 | Does Accelerate automatically set the DataLoader’s sampler to a DistributedSampler? | ```python
from accelerate import Accelerator
accelerator = Accelerator()
device = accelerator.device
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
model, optimizer, training_dataloader, scheduler
)
for batch in training_dataloader:
optimizer.zero_grad()
inputs, targets = batch
outputs = model(inputs)
loss = loss_function(outputs, targets)
accelerator.backward(loss)
optimizer.step()
scheduler.step()
```
We know that in PyTorch DDP training the DataLoader must use torch.utils.data.DistributedSampler. In this code, when using Accelerate, do we need to manually set DistributedSampler when constructing the `training_dataloader`, or will Accelerate automatically modify the dataloader’s sampler to support DDP later? (In other words, when we build the dataloader for Accelerate, can we completely ignore DistributedSampler and just leave it as we would for single‑GPU training?) | https://github.com/huggingface/accelerate/issues/3829 | closed | [] | 2025-11-03T07:17:29Z | 2025-12-16T15:09:43Z | 2 | caixxiong |
vllm-project/vllm | 27,957 | [Usage]: What is the difference between embedding task and pooler task? | ### Your current environment
Is there any documentation about this?
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/27957 | closed | [
"usage"
] | 2025-11-03T03:38:39Z | 2025-11-03T10:20:18Z | 1 | sleepwalker2017 |
vllm-project/vllm | 27,949 | [Usage]: How do I deploy GGUF models with vLLM via Docker correct? | ### Your current environment
```text
The output of `python collect_env.py`
```
Here is the output from `sudo python3 collect_env.py`
```
Traceback (most recent call last):
File "/export/nvme/vllm/collect_env.py", line 18, in <module>
import regex as re
ModuleNotFoundError: No module named 'regex'
```
### How would you like to use vllm
I am using an Ubuntu 22.04 LTS LXC in Proxmox.
I have Docker installed.
I downloaded `https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-70B-Q4_K_M.gguf?download=true` to `/export/nvme/huggingface/DeepSeek-R1-Distill-Llama-70B-GGUF/DeepSeek-R1-Distill-Llama-70B-Q4_K_M.gguf` via `wget`.
The command that I am trying to use to start said Docker container is:
```
sudo docker run --runtime nvidia --gpus all \
--name vllm \
-v /export/nvme/huggingface/DeepSeek-R1-Distill-Llama-70B-Q4_K_M-GGUF:/root/.cache/huggingface/DeepSeek-R1-Distill-Llama-70B-Q4_K_M-GGUF \
-v /export/nvme/vllm:/export/nvme/vllm \
-e TRANSFORMERS_OFFLINE=1 \
--shm-size=16G \
-v /dev/shm:/dev/shm \
-p 0.0.0.0:8000:8000 \
--security-opt apparmor:unconfined \
vllm/vllm-openai:v0.8.5 \
--model /root/.cache/huggingface/DeepSeek-R1-Distill-Llama-70B-Q4_K_M-GGUF/DeepSeek-R1-Distill-Llama-70B-Q4_K_M.gguf \
--tokenizer /root/.cache/huggingface/DeepSeek-R1-Distill-Llama-70B \
--tensor-parallel-size 2 \
--max-model-len=32K \
--chat-template=/export/nvme/vllm/examples/tool_chat_template_deepseekr1.jinja
```
But this is the error message that I get:
```
INFO 11-02 15:21:55 [__init__.py:239] Automatically detected platform cuda.
INFO 11-02 15:21:59 [api_server.py:1043] vLLM API server version 0.8.5
INFO 11-02 15:21:59 [api_server.py:1044] args: Namespace(host=None, port=8000, uvicorn_log_level='info', disable_uvicorn_access_log=False, allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template='/export/nvme/vllm/examples/tool_chat_template_deepseekr1.jinja', chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, enable_ssl_refresh=False, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='/root/.cache/huggingface/DeepSeek-R1-Distill-Llama-70B-Q4_K_M-GGUF/DeepSeek-R1-Distill-Llama-70B-Q4_K_M.gguf', task='auto', tokenizer='/root/.cache/huggingface/DeepSeek-R1-Distill-Llama-70B', hf_config_path=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, load_format='auto', download_dir=None, model_loader_extra_config={}, use_tqdm_on_load=True, config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', max_model_len=32768, guided_decoding_backend='auto', reasoning_parser=None, logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=2, data_parallel_size=1, enable_expert_parallel=False, max_parallel_loading_workers=None, ray_workers_use_nsight=False, disable_custom_all_reduce=False, block_size=None, gpu_memory_utilization=0.9, swap_space=4, kv_cache_dtype='auto', num_gpu_blocks_override=None, enable_prefix_caching=None, prefix_caching_hash_algo='builtin', cpu_offload_gb=0, calculate_kv_scales=False, disable_sliding_window=False, use_v2_block_manager=True, seed=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_token=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config={}, limit_mm_per_prompt={}, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=None, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=None, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', speculative_config=None, ignore_patterns=[], served_model_name=None, qlora_adapter_name_or_path=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, max_num_batched_tokens=None, max_num_seqs=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, num_lookahead_slots=0, scheduler_delay_factor=0.0, preemption_mode=None, num_scheduler_steps=1, multi_step_stream_outputs=True, scheduling_policy='fcfs', enable_chunked_prefill=None, disable_chunked_mm_input=False, scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilati | https://github.com/vllm-project/vllm/issues/27949 | open | [
"usage"
] | 2025-11-02T23:33:49Z | 2025-11-02T23:36:44Z | 1 | alpha754293 |
huggingface/xet-core | 549 | How to get the "Xet backed hash"? | Hi,
On HuggingFace, every page has a "Xet backed hash" (I've attached an example below) and I am trying to figure out how to compute that locally.
I've read the documentation and it says there are 4 types of different hashes but it's not really clear how a "Xet backed hash" is calculated.
So I was just wondering if you can tell me how I can get the "Xet backed hash" of a local file?
Thank you for your time.
<img width="630" height="308" alt="Image" src="https://github.com/user-attachments/assets/9fad42a3-e15b-4734-b57a-a769b5b77577" /> | https://github.com/huggingface/xet-core/issues/549 | closed | [] | 2025-11-02T09:40:39Z | 2025-11-06T16:20:25Z | null | arch-btw |
huggingface/lerobot | 2,360 | diffusion transformer | Has anyone replaced the diffusion UNet with a DiT (diffusion transformer) in lerobot? | https://github.com/huggingface/lerobot/issues/2360 | open | [
"question",
"policies"
] | 2025-11-02T09:05:30Z | 2025-11-12T09:01:59Z | null | Benxiaogu |
vllm-project/vllm | 27,928 | [Bug]: What happened to /get_world_size ? | ### Your current environment
vllm 0.11.0
trl 0.24.0
python 3.12
linux amd64
### 🐛 Describe the bug
TRL is expecting a `/get_world_size` route https://github.com/huggingface/trl/blob/main/trl/extras/vllm_client.py#L279 for its GRPO trainer. That gives a 404 on the latest version of vLLM.
Was this changed to another route? I can't seem to find it
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/27928 | open | [
"bug"
] | 2025-11-01T22:56:45Z | 2025-11-03T02:42:14Z | 1 | pbarker-synth |
huggingface/lerobot | 2,356 | AsyncInference only running one action chunk | I have my SO101 arms connected to my computer, and I'm running an asynchronous server on a cloud GPU with an RTX 4090.
When I start running Pi0.5, the model loads and the SO101 makes its first move by setting the robot to its middle position, but then no further actions are executed, even though the server keeps logging new observations and generated action sequences.
The robot moves to this position and doesn't move further:
<img width="332" height="413" alt="Image" src="https://github.com/user-attachments/assets/0499680b-4072-4c90-acda-e4fc1af18e64" />
I have one wrist camera and one top-down view camera. Here is my client command:
```
python3 -m lerobot.async_inference.robot_client \
--server_address=ip:port \
--robot.type=so101_follower \
--robot.port=/dev/ttyACM0 \
--robot.id=arm \
--robot.cameras="{ base_0_rgb: {type: opencv, index_or_path: \"/dev/video2\", width: 640, height: 480, fps: 30}, left_wrist_0_rgb: {type: opencv, index_or_path: \"/dev/video0\", width: 640, height: 480, fps: 30}}" \
--policy_device=cuda \
--aggregate_fn_name=weighted_average \
--debug_visualize_queue_size=True \
--task="Pick up the orange and place it on the plate" \
--policy_type=pi05 \
--pretrained_name_or_path=lerobot/pi05_base \
--actions_per_chunk=50 \
--chunk_size_threshold=0.0 \
--debug_visualize_queue_size=True
```
Here are my server logs:
```
(lerobot) root@eff66f201198:/workspace/arm-x64# ./robot.sh runpod async-server
INFO 2025-11-01 20:17:34 y_server.py:421 {'fps': 30,
'host': '0.0.0.0',
'inference_latency': 0.03333333333333333,
'obs_queue_timeout': 2,
'port': 8080}
INFO 2025-11-01 20:17:34 y_server.py:431 PolicyServer started on 0.0.0.0:8080
INFO 2025-11-01 20:18:03 y_server.py:112 Client ipv4:129.97.131.28:23025 connected and ready
INFO 2025-11-01 20:18:03 y_server.py:138 Receiving policy instructions from ipv4:129.97.131.28:23025 | Policy type: pi05 | Pretrained name or path: lerobot/pi05_base | Actions per chunk: 50 | Device: cuda
The PI05 model is a direct port of the OpenPI implementation.
This implementation follows the original OpenPI structure for compatibility.
Original implementation: https://github.com/Physical-Intelligence/openpi
INFO 2025-11-01 20:18:03 ils/utils.py:43 Cuda backend detected, using cuda.
WARNING 2025-11-01 20:18:03 /policies.py:82 Device 'mps' is not available. Switching to 'cuda'.
INFO 2025-11-01 20:18:03 ils/utils.py:43 Cuda backend detected, using cuda.
WARNING 2025-11-01 20:18:03 /policies.py:82 Device 'mps' is not available. Switching to 'cuda'.
Loading model from: lerobot/pi05_base
✓ Loaded state dict from model.safetensors
WARNING 2025-11-01 20:19:08 ng_pi05.py:1023 Vision embedding key might need handling: paligemma_with_expert.paligemma.model.vision_tower.vision_model.embeddings.patch_embedding.bias
WARNING 2025-11-01 20:19:08 ng_pi05.py:1023 Vision embedding key might need handling: paligemma_with_expert.paligemma.model.vision_tower.vision_model.embeddings.patch_embedding.weight
Remapped: action_in_proj.bias -> model.action_in_proj.bias
Remapped: action_in_proj.weight -> model.action_in_proj.weight
Remapped: action_out_proj.bias -> model.action_out_proj.bias
Remapped: action_out_proj.weight -> model.action_out_proj.weight
Remapped: paligemma_with_expert.gemma_expert.lm_head.weight -> model.paligemma_with_expert.gemma_expert.lm_head.weight
Remapped: paligemma_with_expert.gemma_expert.model.layers.0.input_layernorm.dense.bias -> model.paligemma_with_expert.gemma_expert.model.layers.0.input_layernorm.dense.bias
Remapped: paligemma_with_expert.gemma_expert.model.layers.0.input_layernorm.dense.weight -> model.paligemma_with_expert.gemma_expert.model.layers.0.input_layernorm.dense.weight
Remapped: paligemma_with_expert.gemma_expert.model.layers.0.mlp.down_proj.weight -> model.paligemma_with_expert.gemma_expert.model.layers.0.mlp.down_proj.weight
Remapped: paligemma_with_expert.gemma_expert.model.layers.0.mlp.gate_proj.weight -> model.paligemma_with_expert.gemma_expert.model.layers.0.mlp.gate_proj.weight
Remapped: paligemma_with_expert.gemma_expert.model.layers.0.mlp.up_proj.weight -> model.paligemma_with_expert.gemma_expert.model.layers.0.mlp.up_proj.weight
Remapped 812 state dict keys
Warning: Could not remap state dict keys: Error(s) in loading state_dict for PI05Policy:
Missing key(s) in state_dict: "model.paligemma_with_expert.paligemma.model.language_model.embed_tokens.weight".
INFO 2025-11-01 20:19:43 y_server.py:171 Time taken to put policy on cuda: 99.9787 seconds
INFO 2025-11-01 20:19:43 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver
INFO 2025-11-01 20:20:02 y_server.py:226 Running inference for observation #0 (must_go: True)
INFO 2025-11-01 20:20:03 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver
INFO 2025-11-01 20:20:04 y_server.py:362 Preprocessing and inference took 1.3530s, action shape: torch.Size([1, 50, 32])
INFO 2025-11-01 20:20:04 y_server.py:392 Observation | https://github.com/huggingface/lerobot/issues/2356 | open | [
"question",
"robots"
] | 2025-11-01T20:31:10Z | 2025-12-23T01:10:35Z | null | kevinjosethomas |
vllm-project/vllm | 27,916 | [Feature]: Does the latest version support LoRa for visual models? | ### 🚀 The feature, motivation and pitch
When I loaded a Qwen2.5-VL model fine-tuned with LoRA using vLLM version 0.8.4, I encountered the following prompt:
> Regarding multimodal models, vLLM currently only supports adding LoRA to language model, visual.blocks.31.mlp.up_proj will be ignored.
I found an issue https://github.com/vllm-project/vllm/issues/26422 with a similar problem, but it seems the PR hasn't been merged into master. How can I enable loading the visual-side LoRA parameters and use vLLM to accelerate inference?
Looking forward to your reply
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/27916 | closed | [
"feature request"
] | 2025-11-01T12:23:36Z | 2025-12-26T12:48:22Z | 1 | SmartNight-cc |
huggingface/lerobot | 2,354 | Cannot reproduce SmolVLA results on LIBERO benchmark | Hello,
I am trying to reproduce the LIBERO benchmark results of [SmolVLA](https://huggingface.co/HuggingFaceVLA/smolvla_libero).
However, I can't reproduce the results on either the [leaderboard](https://huggingface.co/spaces/HuggingFaceVLA/libero-vla-leaderboard) or the [paper](https://arxiv.org/abs/2506.01844).
I am working on an NVIDIA Jetson AGX Orin Developer Kit (JetPack 6.2.1, Jetson Linux 36.4.4),
and below is my pip list.
<details>
<summary>pip list</summary>
```
absl-py==2.3.1
accelerate==1.10.1
aiohappyeyeballs==2.6.1
aiohttp==3.13.0
aiosignal==1.4.0
annotated-types==0.7.0
antlr4-python3-runtime==4.9.3
anyio==4.9.0
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
arrow==1.3.0
asttokens==3.0.0
async-lru==2.0.5
attrs==23.2.0
av==15.1.0
babel==2.17.0
bddl==1.0.1
beautifulsoup4==4.13.4
bleach==6.2.0
blinker==1.7.0
certifi==2025.1.31
cffi==1.17.1
charset-normalizer==3.4.1
click==8.3.0
cloudpickle==3.1.1
cmake==3.31.6
comm==0.2.2
contourpy==1.3.2
cryptography==41.0.7
cuda-bindings==12.8.0
cuda-python==12.8.0
cycler==0.12.1
Cython==3.0.12
dataclasses==0.6
datasets==4.1.1
dbus-python==1.3.2
debugpy==1.8.14
decorator==5.2.1
deepdiff==8.6.1
defusedxml==0.7.1
diffusers @ file:///opt/diffusers-0.34.0.dev0-py3-none-any.whl#sha256=cf07a8004c994f02e0d41e9bface90486f53a98cd3abdda39972c5ffe7009d87
dill==0.4.0
distro==1.9.0
docopt==0.6.2
docutils==0.21.2
draccus==0.10.0
easydict==1.13
egl_probe @ git+https://github.com/huggingface/egl_probe.git@eb5e5f882236a5668e43a0e78121aaa10cdf2243
einops==0.8.1
etils==1.13.0
evdev==1.9.2
executing==2.2.0
Farama-Notifications==0.0.4
fastjsonschema==2.21.1
filelock==3.18.0
fonttools==4.57.0
fqdn==1.5.1
frozenlist==1.8.0
fsspec==2025.3.2
future==1.0.0
gitdb==4.0.12
GitPython==3.1.45
glfw==2.10.0
grpcio==1.75.1
gym==0.26.2
gym-notices==0.1.0
gymnasium==0.29.1
h11==0.14.0
h5py==3.13.0
hf-xet==1.1.10
hf_transfer==0.1.9
httpcore==1.0.8
httplib2==0.20.4
httpx==0.28.1
huggingface-hub==0.35.3
hydra-core==1.3.2
id==1.5.0
idna==3.10
imageio==2.37.0
imageio-ffmpeg==0.6.0
importlib_metadata==8.6.1
importlib_resources==6.5.2
iniconfig==2.1.0
inquirerpy==0.3.4
ipykernel==6.29.5
ipython==9.1.0
ipython_pygments_lexers==1.1.1
ipywidgets==8.1.6
isoduration==20.11.0
jaraco.classes==3.4.0
jaraco.context==6.0.1
jaraco.functools==4.1.0
jedi==0.19.2
jeepney==0.9.0
Jinja2==3.1.6
json5==0.12.0
jsonlines==4.0.0
jsonpointer==3.0.0
jsonschema==4.23.0
jsonschema-specifications==2025.4.1
jupyter==1.1.1
jupyter-console==6.6.3
jupyter-events==0.12.0
jupyter-lsp==2.2.5
jupyter_client==8.6.3
jupyter_core==5.7.2
jupyter_server==2.15.0
jupyter_server_terminals==0.5.3
jupyterlab==4.4.1
jupyterlab_myst==2.4.2
jupyterlab_pygments==0.3.0
jupyterlab_server==2.27.3
jupyterlab_widgets==3.0.14
jupytext==1.17.3
keyring==25.6.0
kiwisolver==1.4.8
launchpadlib==1.11.0
lazr.restfulclient==0.14.6
lazr.uri==1.0.6
-e git+https://github.com/huggingface/lerobot@6f5bb4d4a49fbdb47acfeaa2c190b5fa125f645a#egg=lerobot
libero @ git+https://github.com/huggingface/lerobot-libero.git@b053a4b0de70a3f2d736abe0f9a9ee64477365df
llvmlite==0.45.1
Mako==1.3.10
Markdown==3.9
markdown-it-py==3.0.0
MarkupSafe==3.0.2
matplotlib==3.10.1
matplotlib-inline==0.1.7
mdit-py-plugins==0.5.0
mdurl==0.1.2
mergedeep==1.3.4
mistune==3.1.3
more-itertools==10.7.0
mpmath==1.3.0
mujoco==3.3.2
multidict==6.7.0
multiprocess==0.70.16
mypy_extensions==1.1.0
nbclient==0.10.2
nbconvert==7.16.6
nbformat==5.10.4
nest-asyncio==1.6.0
networkx==3.4.2
nh3==0.2.21
ninja==1.11.1.4
notebook==7.4.1
notebook_shim==0.2.4
num2words==0.5.14
numba==0.62.1
numpy==2.2.5
oauthlib==3.2.2
omegaconf==2.3.0
onnx==1.17.0
opencv-contrib-python==4.11.0.86
opencv-python==4.11.0
opencv-python-headless==4.12.0.88
optimum==1.24.0
orderly-set==5.5.0
overrides==7.7.0
packaging==25.0
pandas==2.3.3
pandocfilters==1.5.1
parso==0.8.4
pexpect==4.9.0
pfzy==0.3.4
pillow==11.2.1
pkginfo==1.12.1.2
platformdirs==4.3.7
pluggy==1.6.0
prometheus_client==0.21.1
prompt_toolkit==3.0.51
propcache==0.4.1
protobuf==6.30.2
psutil==7.0.0
ptyprocess==0.7.0
pure_eval==0.2.3
pyarrow==21.0.0
pyav==14.2.1
pycparser==2.22
pycuda==2025.1
pydantic==2.12.1
pydantic_core==2.41.3
Pygments==2.19.1
PyGObject==3.48.2
PyJWT==2.7.0
pynput==1.8.1
PyOpenGL==3.1.10
PyOpenGL-accelerate==3.1.10
pyparsing==3.1.1
pyrsistent==0.20.0
pyserial==3.5
pytest==8.4.2
python-apt==2.7.7+ubuntu4
python-dateutil==2.9.0.post0
python-json-logger==3.3.0
python-xlib==0.33
pytools==2025.1.2
pytz==2025.2
PyYAML==6.0.2
pyyaml-include==1.4.1
pyzmq==26.4.0
readme_renderer==44.0
referencing==0.36.2
regex==2024.11.6
requests==2.32.3
requests-toolbelt= | https://github.com/huggingface/lerobot/issues/2354 | open | [
"question",
"policies",
"simulation"
] | 2025-11-01T11:20:05Z | 2026-01-05T08:38:48Z | null | Hesh0629 |
huggingface/trl | 4,419 | GRPO with reward model. CUDA out of memory. How to fix? Thank you very much. | train_grpo.py:
```python
import argparse
import os
from typing import Callable, Dict, List, Optional
import torch
from datasets import Dataset, load_dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
AutoModelForSequenceClassification,
pipeline,
set_seed,
)
from trl import GRPOConfig, GRPOTrainer
class CombinedReward:
"""Combine multiple reward sources with weights.
Each reward function follows signature:
reward_fn(completions: List[str], prompts: List[str], **kwargs) -> List[float]
"""
def __init__(
self,
reward_fns: List[Callable[[List[str], List[str]], List[float]]],
weights: Optional[List[float]] = None,
) -> None:
if not reward_fns:
raise ValueError("reward_fns must not be empty")
self.reward_fns = reward_fns
self.weights = weights or [1.0] * len(reward_fns)
if len(self.weights) != len(self.reward_fns):
raise ValueError("weights length must match reward_fns length")
def __call__(self, completions: List[str], prompts: List[str], **kwargs) -> List[float]:
if not completions:
return []
all_scores: List[List[float]] = []
for reward_fn in self.reward_fns:
scores = reward_fn(completions, prompts, **kwargs)
if len(scores) != len(completions):
raise ValueError("All reward functions must return scores for each completion")
all_scores.append(scores)
# weighted sum
totals: List[float] = [0.0] * len(completions)
for w, scores in zip(self.weights, all_scores):
for i, s in enumerate(scores):
totals[i] += w * float(s)
return totals
def build_reward_model_fn(
reward_model_name: str,
device: Optional[str] = None,
normalize: bool = True,
) -> Callable[[List[str], List[str]], List[float]]:
"""Create a reward function using a sequence classification model.
Returns a function that outputs a scalar reward per completion.
"""
rm_tokenizer = AutoTokenizer.from_pretrained(reward_model_name, use_fast=True)
# ensure padding token exists for batched inference
if rm_tokenizer.pad_token is None:
candidate = rm_tokenizer.eos_token or rm_tokenizer.sep_token or rm_tokenizer.cls_token or rm_tokenizer.unk_token
if candidate is not None:
rm_tokenizer.pad_token = candidate
else:
rm_tokenizer.add_special_tokens({"pad_token": "[PAD]"})
rm_model = AutoModelForSequenceClassification.from_pretrained(reward_model_name, torch_dtype=torch.float16,
device_map="auto")
if getattr(rm_model.config, "pad_token_id", None) is None and rm_tokenizer.pad_token_id is not None:
rm_model.config.pad_token_id = rm_tokenizer.pad_token_id
# use a pipeline for batching and device placement
pipe_device = 0 if (device == "cuda" or (device is None and torch.cuda.is_available())) else -1
rm_pipe = pipeline(
task="text-classification",
model=rm_model,
tokenizer=rm_tokenizer,
# device=pipe_device,
truncation=True,
top_k=None,
function_to_apply="none", # use raw logits so we can map scores directly
return_all_scores=True,
)
def reward_fn(completions: List[str], prompts: List[str], **kwargs) -> List[float]:
del prompts # unused here
outputs = rm_pipe(completions, batch_size=kwargs.get("batch_size", 2))
scores: List[float] = []
for out in outputs:
# If binary classifier, use logit of positive class; otherwise sum weighted by label index
if len(out) == 1:
scores.append(float(out[0]["score"]))
else:
# prefer last class as "more positive"
scores.append(float(out[-1]["score"]))
if not normalize:
return scores
# z-norm for stability (per-batch)
t = torch.tensor(scores, dtype=torch.float32)
std = float(t.std().clamp(min=1e-6))
mean = float(t.mean())
normed = ((t - mean) / std).tolist()
return [float(x) for x in normed]
return reward_fn
def build_keyword_reward_fn(keywords: List[str], case_sensitive: bool = False, bonus: float = 1.0) -> Callable[[List[str], List[str]], List[float]]:
ks = keywords if case_sensitive else [k.lower() for k in keywords]
def reward_fn(completions: List[str], prompts: List[str], **kwargs) -> List[float]:
del prompts
scores: List[float] = []
for text in completions:
t = text if case_sensitive else text.lower()
count = sum(1 for k in ks if k in t)
scores.append(bonus * float(count))
return scores
return reward_fn
def build_length_reward_fn(target_min: int, target_max: int, scale: float = 1.0) -> Callable[[List[str], List[str]], Li | https://github.com/huggingface/trl/issues/4419 | open | [
"🏋 Reward",
"🏋 GRPO"
] | 2025-11-01T10:29:28Z | 2025-11-20T12:26:50Z | null | guotong1988 |
vllm-project/vllm | 27,912 | [Usage]: How should I use the CPU to deploy QWEN3 VL 30B-A3B? | ### Your current environment
```text
The output of `python collect_env.py`
```
(APIServer pid=1033476) Traceback (most recent call last):
(APIServer pid=1033476) File "/home/maxgameone/anaconda3/bin/vllm", line 33, in <module>
(APIServer pid=1033476) sys.exit(load_entry_point('vllm==0.11.1rc6.dev33+g3a5de7d2d.cpu', 'console_scripts', 'vllm')())
(APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/cli/main.py", line 73, in main
(APIServer pid=1033476) args.dispatch_function(args)
(APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/cli/serve.py", line 59, in cmd
(APIServer pid=1033476) uvloop.run(run_server(args))
(APIServer pid=1033476) File "/home/maxgameone/.local/lib/python3.12/site-packages/uvloop/__init__.py", line 109, in run
(APIServer pid=1033476) return __asyncio.run(
(APIServer pid=1033476) ^^^^^^^^^^^^^^
(APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/asyncio/runners.py", line 194, in run
(APIServer pid=1033476) return runner.run(main)
(APIServer pid=1033476) ^^^^^^^^^^^^^^^^
(APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/asyncio/runners.py", line 118, in run
(APIServer pid=1033476) return self._loop.run_until_complete(task)
(APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1033476) File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
(APIServer pid=1033476) File "/home/maxgameone/.local/lib/python3.12/site-packages/uvloop/__init__.py", line 61, in wrapper
(APIServer pid=1033476) return await main
(APIServer pid=1033476) ^^^^^^^^^^
(APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/openai/api_server.py", line 1910, in run_server
(APIServer pid=1033476) await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
(APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/openai/api_server.py", line 1926, in run_server_worker
(APIServer pid=1033476) async with build_async_engine_client(
(APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=1033476) return await anext(self.gen)
(APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/openai/api_server.py", line 185, in build_async_engine_client
(APIServer pid=1033476) async with build_async_engine_client_from_engine_args(
(APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=1033476) return await anext(self.gen)
(APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/openai/api_server.py", line 232, in build_async_engine_client_from_engine_args
(APIServer pid=1033476) async_llm = AsyncLLM.from_vllm_config(
(APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/utils/func_utils.py", line 116, in inner
(APIServer pid=1033476) return fn(*args, **kwargs)
(APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^
(APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/v1/engine/async_llm.py", line 218, in from_vllm_config
(APIServer pid=1033476) return cls(
(APIServer pid=1033476) ^^^^
(APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/v1/engine/async_llm.py", line 140, in __init__
(APIServer pid=1033476) self.engine_core = EngineCoreClient.make_async_mp_client(
(APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1033476) File "/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm | https://github.com/vllm-project/vllm/issues/27912 | open | [
"usage"
] | 2025-11-01T07:40:04Z | 2025-11-01T07:40:04Z | 0 | maxgameone |
vllm-project/vllm | 27,899 | [Bug]: Inductor specialize after 2.9 rebase | ### Your current environment
NA
### 🐛 Describe the bug
Could you or someone have a look at the compile-ranges [PR](https://github.com/vllm-project/vllm/pull/24252) again? It seems to have stopped working with the update to PyTorch 2.9. We started getting failed assertions in generated code, as if it had been compiled for a single shape. Could you explain how to let inductor know that we are compiling for a range rather than for a single shape?
Example of the assertion. Compilation was done for a range (512, 8192)
assert_size_stride(arg0_1, (8192, s4, s94), (s4*s94, s94, 1))
Can you add quick repro instructions?
Sure, on the PR branch:
vllm serve meta-llama/Meta-Llama-3.1-70B-Instruct --disable-log-requests --no-enable-prefix-caching -tp 4 -dp 1 --max-num-seqs 256 --load-format dummy --port 8001 --compilation-config '{"pass_config":{"enable_fusion":false,"enable_attn_fusion":false,"enable_noop":true,"enable_sequence_parallelism":false,"enable_async_tp":false,"enable_fi_allreduce_fusion":true}}'
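In case it helps the discussion: the generic PyTorch-side way to tell dynamo/inductor that a dimension is dynamic within a bounded range is `torch._dynamo.mark_dynamic`. A standalone sketch is below; whether this is the right hook for the compile-ranges integration in vLLM is exactly the open question here:
```python
import torch

def f(x: torch.Tensor) -> torch.Tensor:
    return x * 2

compiled = torch.compile(f)

x = torch.randn(1024, 16, 64)
# Declare dim 0 as dynamic within [512, 8192]; inductor should then guard on
# the range instead of baking one size (e.g. 8192) into assert_size_stride.
torch._dynamo.mark_dynamic(x, 0, min=512, max=8192)
out = compiled(x)
```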
cc @ilmarkov | https://github.com/vllm-project/vllm/issues/27899 | closed | [
"bug"
] | 2025-10-31T22:16:27Z | 2025-11-07T00:03:25Z | 7 | laithsakka |
vllm-project/vllm | 27,898 | [Doc]: Multi-node EP on EFA (i.e. no IBGDA/DeepEP) | ### 📚 The doc issue
Usecase: On AWS we have EFA for high bandwidth interconnect, not Infiniband, so no IBGDA.
The [documentation](https://docs.vllm.ai/en/latest/serving/expert_parallel_deployment.html#backend-selection-guide) indicates that the DeepEP kernels should be used for multi/inter-node EP, and pplx for single node. However, [DeepEP indicates that they only support IBGDA for inter-node comms](https://github.com/deepseek-ai/DeepEP/issues/369).
pplx has good support for EFA. Is pplx for single node, DeepEP for multi-node a suggestion based on testing, or a hard requirement?
In addition, it appears that the EP size cannot be configured and is always TP x DP. Is there any way to set EP size to equal TP size (for example), so we can have each node be a DP group and limit EP alltoall's to intra-node (NVLink) only?
Thank you!
EDIT: per https://github.com/vllm-project/vllm/issues/27633 it appears this may be problematic, although since pplx supports EFA as a transport layer, this seems bizarre. Specific docs around usage on EFA would be helpful.
"documentation"
] | 2025-10-31T21:22:28Z | 2025-11-06T19:50:07Z | 1 | nathan-az |
huggingface/peft | 2,884 | [Question/Bug] How to safely continue LoRA fine-tuning under DeepSpeed ZeRO-3 (multi-stage training with modules_to_save) | Hi,
I’m trying to perform multi-stage LoRA fine-tuning under DeepSpeed ZeRO-3 using PEFT.
However, continuing training on an existing LoRA checkpoint without merging causes a series of errors and conflicts.
Problem
When I load the LoRA from Stage 1 and attempt to continue training:
• load_state_dict() throws shape mismatch (e.g. [0, hidden_size])
• resize_token_embeddings() fails (empty tensor)
• GPU memory usage explodes (batch size drops from 4 → 1)
Question
What’s the recommended practice for continuing LoRA fine-tuning under ZeRO-3?
• Should we always merge the previous adapter (merge_and_unload()) before starting Stage 2?
• Or is there a way to safely keep the existing adapter and continue training? (A rough sketch of the merge-then-continue route is included below for reference.)
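For illustration, a minimal sketch of the merge-then-continue route (assumptions: a causal-LM base model, the merge is done offline before the ZeRO-3 run is launched, and paths/hyperparameters are placeholders — this is not a statement of the officially recommended practice):
```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PeftModel, get_peft_model

# Fold the stage-1 adapter into the base weights (typically done in a plain,
# non-ZeRO-3 process), then start a fresh LoRA for stage 2.
base = AutoModelForCausalLM.from_pretrained("my-base-model", torch_dtype=torch.bfloat16)
stage1 = PeftModel.from_pretrained(base, "outputs/stage1-lora")
merged = stage1.merge_and_unload()  # plain transformers model with stage-1 deltas baked in
merged.save_pretrained("outputs/stage1-merged")

stage2_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    modules_to_save=["wte", "ff_out"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(merged, stage2_cfg)
# `model` is then handed to the Trainer with the usual DeepSpeed ZeRO-3 config.
```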
### Who can help?
_No response_
### Reproduction
Setup
• Stage 1: LoRA fine-tuning with modules_to_save=['wte','ff_out']
• Stage 2: Continue training on a new dataset (without merging)
• Using DeepSpeed ZeRO-3 (zero3_init_flag=False)
### Expected behavior
Expected Behavior
PEFT should provide a consistent way to:
• Continue fine-tuning LoRA adapters across multiple stages with ZeRO-3 enabled.
• Avoid re-initialization or memory explosion when modules_to_save is used. | https://github.com/huggingface/peft/issues/2884 | closed | [] | 2025-10-31T20:13:12Z | 2025-12-09T15:05:26Z | null | XiangZhang-zx |
huggingface/lerobot | 2,351 | Details of adapting SmolVLA to other robotic arms with different configurations | I want to deploy the untuned `smolvla_base` model directly onto my AgileX PIPER robotic arm. I ran into the following two issues along the way:
1. Missing normalization parameters in the metadata.
```
File "/home/zwt/miniconda3/envs/lerobot/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/zwt/Projects/lerobot/lerobot/common/policies/smolvla/modeling_smolvla.py", line 434, in select_action
batch = self._prepare_batch(batch)
File "/home/zwt/Projects/lerobot/lerobot/common/policies/smolvla/modeling_smolvla.py", line 412, in _prepare_batch
batch = self.normalize_inputs(batch)
File "/home/zwt/miniconda3/envs/lerobot/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/zwt/miniconda3/envs/lerobot/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/home/zwt/miniconda3/envs/lerobot/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/zwt/Projects/lerobot/lerobot/common/policies/normalize.py", line 170, in forward
assert not torch.isinf(mean).any(), _no_stats_error_str("mean")
AssertionError: `mean` is infinity. You should either initialize with `stats` as an argument, or use a pretrained model.
```
The error was resolved when I copied the normalization parameters from other training results, but I'm not sure if this is the correct way to run `smolvla_base` directly.
2. I've noticed that different robotic arms may have different degrees of freedom, and even with the same degrees of freedom, the range of rotation of the same joint can vary. I'm unsure whether this kind of rotation-range mapping is necessary when transferring the model to other robotic arms. It seems there is a similar operation for Aloha in the code.
```
def _pi_aloha_decode_state(self, state):
# Flip the joints.
for motor_idx in [1, 2, 8, 9]:
state[:, motor_idx] *= -1
# Reverse the gripper transformation that is being applied by the Aloha runtime.
for motor_idx in [6, 13]:
state[:, motor_idx] = aloha_gripper_to_angular(state[:, motor_idx])
return state
def _pi_aloha_encode_actions(self, actions):
# Flip the joints.
for motor_idx in [1, 2, 8, 9]:
actions[:, :, motor_idx] *= -1
# Reverse the gripper transformation that is being applied by the Aloha runtime.
for motor_idx in [6, 13]:
actions[:, :, motor_idx] = aloha_gripper_from_angular(actions[:, :, motor_idx])
return actions
def _pi_aloha_encode_actions_inv(self, actions):
# Flip the joints again.
for motor_idx in [1, 2, 8, 9]:
actions[:, :, motor_idx] *= -1
# Reverse the gripper transformation that is being applied by the Aloha runtime.
for motor_idx in [6, 13]:
actions[:, :, motor_idx] = aloha_gripper_from_angular_inv(actions[:, :, motor_idx])
return actions
```
By the way, is it meaningful at all to run smolvla_base directly like this? This is just a passing thought. | https://github.com/huggingface/lerobot/issues/2351 | closed | [
"question",
"policies"
] | 2025-10-31T14:55:35Z | 2025-12-14T14:47:04Z | null | yquanli |
vllm-project/vllm | 27,880 | [Installation]: [HELP]How to install the latest main version of vllm | ### Your current environment
I cloned the vLLM code and ran the install commands below, but they fail. Help!
### How you are installing vllm
```sh
VLLM_USE_PRECOMPILED=1 uv pip install --editable .
Using Python 3.10.12 environment at: /home/alice/.venv
× No solution found when resolving dependencies:
╰─▶ Because there is no version of xformers{platform_machine == 'x86_64' and sys_platform == 'linux'}==0.0.33+5d4b92a5.d20251029 and vllm==0.11.1rc6.dev16+g933cdea44.precompiled depends
on xformers{platform_machine == 'x86_64' and sys_platform == 'linux'}==0.0.33+5d4b92a5.d20251029, we can conclude that vllm==0.11.1rc6.dev16+g933cdea44.precompiled cannot be used.
And because only vllm==0.11.1rc6.dev16+g933cdea44.precompiled is available and you require vllm, we can conclude that your requirements are unsatisfiable.
(alice) alice@dc53-p31-t0-n067:~/vllm_bak$ uv pip install -e .
Using Python 3.10.12 environment at: /home/alice/.venv
× No solution found when resolving dependencies:
╰─▶ Because there is no version of xformers{platform_machine == 'x86_64' and sys_platform == 'linux'}==0.0.33+5d4b92a5.d20251029 and vllm==0.11.1rc6.dev16+g933cdea44.cu126 depends on
xformers{platform_machine == 'x86_64' and sys_platform == 'linux'}==0.0.33+5d4b92a5.d20251029, we can conclude that vllm==0.11.1rc6.dev16+g933cdea44.cu126 cannot be used.
  And because only vllm==0.11.1rc6.dev16+g933cdea44.cu126 is available and you require vllm, we can conclude that your requirements are unsatisfiable.
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/27880 | closed | [
"installation"
] | 2025-10-31T13:57:20Z | 2025-11-13T07:25:13Z | 7 | sleepwalker2017 |
vllm-project/vllm | 27,877 | [Usage]: How to install nightly version??? Why this command doesn't work? | ### Your current environment
I ran this to install vLLM with the latest code, but the installed vLLM doesn't include the change I need.
I checked the `siglip.py` file; it was modified 4 days ago.
But the installed vLLM doesn't contain this commit: https://github.com/vllm-project/vllm/pull/27566/files#diff-ca771e5a262cbf32fb481c518bea41d0e341414e021d6542e421abb98cceec61
Why is this?
I used this command:
```text
pip install -U vllm \
--pre \
--extra-index-url https://wheels.vllm.ai/nightly
```
Running it gives:
```text
pip install -U vllm \
  --pre \
  --extra-index-url https://wheels.vllm.ai/nightly
Defaulting to user installation because normal site-packages is not writeable
Looking in indexes: https://bytedpypi.byted.org/simple, https://bytedpypi.byted.org/simple, https://wheels.vllm.ai/nightly
Requirement already satisfied: vllm in /home/alice/.local/lib/python3.10/site-packages (0.11.0)
Collecting vllm
  Downloading https://wheels.vllm.ai/nightly/vllm-0.11.1rc6.dev16%2Bg933cdea44.cu129-cp38-abi3-manylinux1_x86_64.whl (479.0 MB)
     ━╺━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 17.8/479.0 MB 575.3 kB/s eta 0:13:22
```
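One hedged way to check whether the installed wheel actually contains the linked change is to inspect the installed source file directly. The file location under `vllm/model_executor/models/` and the searched identifier below are assumptions based on the linked diff, not verified facts about the wheel layout.

```python
import importlib.metadata
import pathlib

import vllm

# Which build did pip actually install?
print(importlib.metadata.version("vllm"))

# Locate the installed siglip.py (assumed path) and search it for something the PR adds.
siglip_path = pathlib.Path(vllm.__file__).parent / "model_executor" / "models" / "siglip.py"
source = siglip_path.read_text()
print("<identifier added by the PR>" in source)  # replace the placeholder before running
```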
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/27877 | open | [
"usage"
] | 2025-10-31T12:29:51Z | 2025-10-31T12:38:19Z | 0 | sleepwalker2017 |
vllm-project/vllm | 27,875 | [Usage]: how to get profiler on OpenAI server | ### Your current environment
```text
INFO 10-31 10:27:06 [importing.py:17] Triton not installed or not compatible; certain GPU-related functions will not be available.
WARNING 10-31 10:27:06 [importing.py:29] Triton is not installed. Using dummy decorators. Install it via `pip install triton` to enable kernel compilation.
INFO 10-31 10:27:08 [__init__.py:39] Available plugins for group vllm.platform_plugins:
INFO 10-31 10:27:08 [__init__.py:41] - ascend -> vllm_ascend:register
INFO 10-31 10:27:08 [__init__.py:44] All plugins in this group will be loaded. Set `VLLM_PLUGINS` to control which plugins to load.
INFO 10-31 10:27:08 [__init__.py:235] Platform plugin ascend is activated
WARNING 10-31 10:27:12 [_custom_ops.py:22] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 4.1.0
Libc version: glibc-2.35
Python version: 3.11.13 (main, Jul 26 2025, 07:27:32) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.10.0-60.18.0.50.r865_35.hce2.aarch64-aarch64-with-glibc2.35
CPU:
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: HiSilicon
BIOS Vendor ID: HiSilicon
Model name: Kunpeng-920
BIOS Model name: HUAWEI Kunpeng 920 5250
Model: 0
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 4
Stepping: 0x1
BogoMIPS: 200.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm ssbs
L1d cache: 12 MiB (192 instances)
L1i cache: 12 MiB (192 instances)
L2 cache: 96 MiB (192 instances)
L3 cache: 192 MiB (8 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-23
NUMA node1 CPU(s): 24-47
NUMA node2 CPU(s): 48-71
NUMA node3 CPU(s): 72-95
NUMA node4 CPU(s): 96-119
NUMA node5 CPU(s): 120-143
NUMA node6 CPU(s): 144-167
NUMA node7 CPU(s): 168-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pyzmq==27.0.2
[pip3] torch==2.5.1
[pip3] torch-npu==2.5.1.post1
[pip3] torchvision==0.20.1
[pip3] transformers==4.52.4
[conda] Could not collect
vLLM Version: 0.9.1
vLLM Ascend Version: 0.9.2.dev0+g0740d1021.d20251029 (git sha: 0740d1021, date: 20251029)
ENV Variables:
ATB_OPSRUNNER_KERNEL_CACHE_LOCAL_COUNT=1
ATB_STREAM_SYNC_EVERY_RUNNER_ENABLE=0
ATB_OPSRUNNER_SETUP_CACHE_ENABLE=1
ATB_WORKSPACE_MEM_ALLOC_GLOBAL=0
ATB_DEVICE_TILING_BUFFER_BLOCK_NUM=32
ATB_STREAM_SYNC_EVERY_KERNEL_ENABLE=0
VLLM_TORCH_PROFILER_DIR=/workspace/prof
ATB_OPSRUNNER_KERNEL_CACHE_GLOABL_COUNT=5
ATB_HOME_PATH=/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0
ASCEND_TOOLKIT_HOME=/usr/local/Ascend/ascend-toolkit/latest
ATB_COMPARE_TILING_EVERY_KERNEL=0
ASCEND_OPP_PATH=/usr/local/Ascend/ascend-toolkit/latest/opp
LD_LIBRARY_PATH=/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/lib:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/examples:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/tests/atbopstest:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64/plugin:/usr/local/Ascend/ascend-toolkit/latest/lib64:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/opskernel:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/nnengine:/usr/local/Ascend/ascend-toolkit/latest/opp/built-in/op_impl/ai_core/tbe/op_tiling/lib/linux/aarch64:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/lib:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/examples:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/tests/atbopstest:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64/plugin:/usr/local/Ascend/ascend-toolkit/latest/lib64:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/opskernel:/usr | https://github.com/vllm-project/vllm/issues/27875 | closed | [
"usage"
] | 2025-10-31T10:33:49Z | 2025-10-31T14:38:04Z | 1 | zhaohaixu |
vllm-project/vllm | 27,872 | [Feature]: AFD support load customer connect model from local path. | ### 🚀 The feature, motivation and pitch
Add an `afd_connector_module_path` field to `AFDConfig` so that users can implement a custom AFD connector without changing vLLM code.
To be merged after https://github.com/vllm-project/vllm/pull/25162.
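For illustration only, a minimal sketch of how a config-driven connector path could be loaded dynamically; `afd_connector_module_path` and `CustomAFDConnector` are hypothetical names from this proposal, not an existing vLLM API.

```python
import importlib.util


def load_connector_class(module_path: str, class_name: str):
    # Import a user-supplied module from an arbitrary local file path.
    spec = importlib.util.spec_from_file_location("user_afd_connector", module_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, class_name)


# e.g. connector_cls = load_connector_class("/opt/plugins/my_connector.py", "CustomAFDConnector")
```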
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/27872 | open | [
"feature request"
] | 2025-10-31T09:08:50Z | 2025-12-08T03:32:33Z | 1 | lengrongfu |
huggingface/trl | 4,413 | What is the default value of num_processes? | Based on the documentation page docs/source/grpo_trainer.md, num_processes is used, but nowhere does the documentation define what num_processes is or what its default value is. | https://github.com/huggingface/trl/issues/4413 | closed | [
"📚 documentation",
"❓ question",
"🏋 GRPO"
] | 2025-10-31T05:01:23Z | 2025-10-31T17:31:33Z | null | thisisraghavkumar |
huggingface/diffusers | 12,564 | [Proposals Welcome] Fal Flashpack integration for faster model loading | Hey! 👋
We've had a request to explore integrating Fal's Flashpack for faster DiT and Text Encoder loading (https://github.com/huggingface/diffusers/issues/12550). Before we jump into implementation, we wanted to open this up to the community to gather ideas and hear from anyone who's experimented with this.
We'd love your input on:
1. Performance: Has anyone tried it? What kind of speedups did you see? Are there any performance trade-offs?
2. Integration Design: How would you approach it if you were to integrating this into Diffusers? Describe your design at a high level - how would we support this in our existing framework and what would the API look like?
We're looking for proposals and ideas rather than PRs at this stage. We're genuinely interested in hearing different approaches and perspectives from the community on this.
Feel free to share your thoughts!
| https://github.com/huggingface/diffusers/issues/12564 | open | [
"help wanted",
"contributions-welcome"
] | 2025-10-31T02:25:55Z | 2025-10-31T12:26:13Z | 2 | yiyixuxu |
vllm-project/vllm | 27,832 | [RFC]: Remap `CompilationConfig` from `-O` to `-cc` in CLI | ### Motivation.
With #20283 (and #26847), we're repurposing `-O0`/`-O1`/`-O2`/`-O3` to map to `optimization_level` instead of `CompilationConfig.level`/`CompilationConfig.mode`. This leaves us in a slightly confusing state where `-O` can refer to optimization level or compilation config depending on what follows it:
- `-O0` -> `optimization_level=0`
- `-O 3` -> `optimization_level=3`
- `-O {"cudagraph_mode": "NONE"}` -> `CompilationConfig(cudagraph_mode="NONE")`
- `-O.use_inductor=False` -> `CompilationConfig(use_inductor=False)`
- `--compilation-config.backend=eager` -> `CompilationConfig(backend="eager")`
This is bad UX, and we should fix it. However, a CLI shorthand for `CompilationConfig` is still needed so users can easily compose different properties.
### Proposed Change.
The new shorthand for `CompilationConfig` should be `-cc`. Other options are `-c` and `-C`, but as discussed [here](https://github.com/vllm-project/vllm/pull/26847#discussion_r2439248068), single letters are not "pythonic" and capital letters are worse (extra `Shift` keystroke + less pythonic). However, the exact shorthand is up for discussion. React below to cast your vote.
Example changes:
- `-O0` -> `-O0` (unchanged)
- `-O 3` -> `-O 3` (unchanged)
- `-O {"cudagraph_mode": "NONE"}` -> `-cc {"cudagraph_mode": "NONE"}`
- `-O.use_inductor=False` -> `-cc.use_inductor=False`
- `--compilation-config.backend=eager` -> `--compilation-config.backend=eager` (unchanged)
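To make the proposed split concrete, here is a tiny illustrative parser sketch; this is not vLLM's actual argument parser, only a demonstration that an integer `-O` and a JSON-valued `-cc` can coexist unambiguously.

```python
import argparse
import json

parser = argparse.ArgumentParser()
# -O takes a plain optimization level ...
parser.add_argument("-O", "--optimization-level", type=int, choices=[0, 1, 2, 3])
# ... while -cc takes CompilationConfig overrides as JSON.
parser.add_argument("-cc", "--compilation-config", type=json.loads, default={})

args = parser.parse_args(["-O3", "-cc", '{"cudagraph_mode": "NONE"}'])
print(args.optimization_level)   # 3
print(args.compilation_config)   # {'cudagraph_mode': 'NONE'}
```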
### Feedback Period.
One week, 10/30 - 11/5
### CC List.
@hmellor @morrison-turnansky @zou3519
### Any Other Things.
Vote for your preferred shorthand:
- 👍 for `-cc`
- 👎 for `-O` (keep it the same)
- 🎉 for `-C`
- 🚀 for `-c`
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/27832 | closed | [
"help wanted",
"good first issue",
"RFC",
"torch.compile"
] | 2025-10-30T20:29:31Z | 2025-11-28T21:51:13Z | 3 | ProExpertProg |
huggingface/trl | 4,407 | Complete paper index | These are the papers mentioned at least once in the codebase.
- [ ] https://huggingface.co/papers/1707.06347
- [x] https://huggingface.co/papers/1909.08593 (only mentioned in notebook, no need to have in paper index)
- [x] https://huggingface.co/papers/1910.02054 #4551
- [ ] https://huggingface.co/papers/1910.10683
- [x] https://huggingface.co/papers/2106.09685 #4441
- [ ] https://huggingface.co/papers/2211.14275
- [x] https://huggingface.co/papers/2305.10425 #3990
- [x] https://huggingface.co/papers/2305.18290 #3937
- [ ] https://huggingface.co/papers/2306.13649
- [x] https://huggingface.co/papers/2307.09288 #4094
- [x] https://huggingface.co/papers/2309.06657 #4441
- [ ] https://huggingface.co/papers/2309.16240 #3906
- [x] https://huggingface.co/papers/2310.12036 #3990
- [ ] https://huggingface.co/papers/2312.00886
- [x] https://huggingface.co/papers/2312.09244 #4094
- [ ] https://huggingface.co/papers/2401.08417
- [x] https://huggingface.co/papers/2402.00856 #3990
- [x] https://huggingface.co/papers/2402.01306 #4440
- [x] https://huggingface.co/papers/2402.03300 #4441
- [ ] https://huggingface.co/papers/2402.04792
- [x] https://huggingface.co/papers/2402.05369 #3990
- [ ] https://huggingface.co/papers/2402.09353
- [x] https://huggingface.co/papers/2402.14740 #3801
- [x] https://huggingface.co/papers/2403.00409 #3990
- [ ] https://huggingface.co/papers/2403.07691
- [x] https://huggingface.co/papers/2403.17031 (these are implementations details, no need to have in paper index)
- [x] https://huggingface.co/papers/2404.04656 #3990
- [ ] https://huggingface.co/papers/2404.09656
- [ ] https://huggingface.co/papers/2404.19733
- [x] https://huggingface.co/papers/2405.00675 #3900
- [ ] https://huggingface.co/papers/2405.14734
- [ ] https://huggingface.co/papers/2405.16436
- [ ] https://huggingface.co/papers/2405.21046
- [x] https://huggingface.co/papers/2406.05882 #3990
- [x] https://huggingface.co/papers/2406.08414 #3990
- [ ] https://huggingface.co/papers/2406.11827 #3906
- [x] https://huggingface.co/papers/2407.21783 (LLaMA 3 paper, no need to have in paper index)
- [x] https://huggingface.co/papers/2408.06266 #3990
- [ ] https://huggingface.co/papers/2409.06411 #3906
- [ ] https://huggingface.co/papers/2409.20370
- [ ] https://huggingface.co/papers/2411.10442
- [ ] https://huggingface.co/papers/2501.03262
- [x] https://huggingface.co/papers/2501.03884 #3824
- [ ] https://huggingface.co/papers/2501.12599 (Kimi 1.5 paper mentioned in an example, no need to have in paper index)
- [ ] https://huggingface.co/papers/2501.12948
- [x] https://huggingface.co/papers/2503.14476 #3937
- [x] https://huggingface.co/papers/2503.20783 #3937
- [x] https://huggingface.co/papers/2503.24290 (link to justify beta=0 in the doc, no need to have in paper index)
- [ ] https://huggingface.co/papers/2505.07291
- [x] https://huggingface.co/papers/2506.01939 #4580
- [x] https://huggingface.co/papers/2507.18071 #3775
- [x] https://huggingface.co/papers/2508.00180 #3855
- [x] https://huggingface.co/papers/2508.05629 #4042
- [x] https://huggingface.co/papers/2508.08221 #3935
- [x] https://huggingface.co/papers/2508.09726 #3989
| https://github.com/huggingface/trl/issues/4407 | open | [
"📚 documentation"
] | 2025-10-30T20:23:26Z | 2025-12-24T05:50:21Z | 4 | qgallouedec |
vllm-project/vllm | 27,830 | [Usage]: GPT OSS 120b on L40S (Ada) | ### Your current environment
(Just a general question)
### How would you like to use vllm
I want to run inference of GPT OSS 120b on multiple L40S GPUs. I read the [docs](https://docs.vllm.ai/projects/recipes/en/latest/OpenAI/GPT-OSS.html), which clearly say it is not natively supported yet. After I had no success with vLLM, the model worked plug-and-play with Ollama. My question: is there a roadmap where I can follow the progress? Or is it even possible to contribute to solving this problem? Unfortunately I am not familiar with GPU internals, but I need to get this running, so any suggestion is highly appreciated. Even a clear description of the problem and of what would be required to solve it would be a real help. Thank you.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/27830 | closed | [
"usage"
] | 2025-10-30T20:07:42Z | 2025-11-17T12:46:43Z | 6 | Hansehart |
vllm-project/vllm | 27,823 | [Doc]: Multi-node distributed guide issues | ### 📚 The doc issue
For context, see a recent issue (https://github.com/ROCm/ROCm/issues/5567) where a user was trying to set up distributed inference with `ray` by following guidance at https://docs.vllm.ai/en/v0.8.0/serving/distributed_serving.html#running-vllm-on-multiple-nodes. I ran into several issues setting this up on AMD GPUs that I believe might be deficiencies in the vLLM docs:
- The `run_cluster.sh` script passes `--gpus all` which I believe is NVIDIA-only, needed to remove this from the script
- I had to add `--distributed_executor_backend="ray"` to the `vllm serve` command to get vLLM to use the `ray` cluster that the script sets up
- I had to set NCCL_SOCKET_IFNAME and GLOO_SOCKET_IFNAME to the appropriate network interfaces, otherwise ran into a NCCL connection error
- Relevant environment variables (NCCL_SOCKET_IFNAME, GLOO_SOCKET_IFNAME, NCCL_DEBUG) are not propagated to the Docker containers that the script creates; I worked around this by adding them to the `ray` invocation in `run_cluster.sh`, but I don't see a reason why the script shouldn't pass these to the container automatically
I also needed to set `--enforce-eager` but I believe that is an issue specific to our current rocm/vllm Docker images.
For the above issues I'm not sure which are general gaps in the documentation, which are AMD-specific, and which might have arisen from our Docker images. The image I used and got working was `rocm/vllm:latest` which at the time had vLLM 0.11.
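Putting the workarounds above together, here is a hedged sketch using vLLM's Python API instead of the `run_cluster.sh` + `vllm serve` flow; the network interface name, model, and parallel sizes are placeholder assumptions, not a verified recipe.

```python
import os

# Point NCCL/Gloo at the NIC that actually connects the nodes; otherwise the
# cross-node collectives may bind to the wrong interface and hang.
os.environ["NCCL_SOCKET_IFNAME"] = "ens1"   # placeholder interface name
os.environ["GLOO_SOCKET_IFNAME"] = "ens1"

from vllm import LLM

llm = LLM(
    model="meta-llama/Llama-3.1-70B-Instruct",  # placeholder model
    tensor_parallel_size=8,
    pipeline_parallel_size=2,
    distributed_executor_backend="ray",  # use the existing Ray cluster
    enforce_eager=True,                  # the ROCm-image-specific workaround mentioned above
)
```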
### Suggest a potential alternative/fix
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/27823 | open | [
"documentation"
] | 2025-10-30T18:33:04Z | 2025-10-30T18:33:04Z | 0 | schung-amd |
huggingface/trl | 4,399 | Update or remove some of the notebooks | I suspect these notebooks are outdated; if so, they should be either updated or removed.
- gpt2-sentiment-control.ipynb
- best_of_n.ipynb
- gpt2-sentiment.ipynb | https://github.com/huggingface/trl/issues/4399 | closed | [
"📚 documentation"
] | 2025-10-30T15:34:36Z | 2025-11-04T23:52:50Z | 0 | qgallouedec |