Dataset columns:
repo: stringclasses (147 values)
number: int64 (1 to 172k)
title: stringlengths (2 to 476)
body: stringlengths (0 to 5k)
url: stringlengths (39 to 70)
state: stringclasses (2 values)
labels: listlengths (0 to 9)
created_at: timestamp[ns, tz=UTC] (2017-01-18 18:50:08 to 2026-01-06 07:33:18)
updated_at: timestamp[ns, tz=UTC] (2017-01-18 19:20:07 to 2026-01-06 08:03:39)
comments: int64 (0 to 58)
user: stringlengths (2 to 28)
huggingface/trl
4,397
Remove or move Multi Adapter RL
I don't think it makes sense to have this as a whole section in the doc. Either remove it, or update it and move it to the PEFT integration page.
https://github.com/huggingface/trl/issues/4397
closed
[ "📚 documentation", "⚡ PEFT" ]
2025-10-30T15:12:58Z
2025-11-04T23:57:56Z
0
qgallouedec
huggingface/transformers
41,948
Does Qwen2VLImageProcessor treat two consecutive images as one group/feature?
When looking at the Qwen3-VL model's image processor (which uses Qwen2-VL's), I found the following lines of code hard to understand. `L296-300` checks the number of input images (`patches.shape[0]`) and repeats the last one to make it divisible by `temporal_patch_size`. This makes the model process two consecutive images as a single feature, due to the use of a 3D conv with temporal_patch_size=2 by default. https://github.com/huggingface/transformers/blob/76fc50a1527a7db593a6057903b749598f7000a9/src/transformers/models/qwen2_vl/image_processing_qwen2_vl.py#L293-L300 But as I understand it, the Qwen2-VL paper mentions that it repeats each input image `temporal_patch_size` times. Did I misunderstand the code? <img width="787" height="205" alt="Image" src="https://github.com/user-attachments/assets/fc697460-e0a2-49fa-99b8-ea3e733bb097" />
https://github.com/huggingface/transformers/issues/41948
closed
[]
2025-10-30T09:23:50Z
2025-10-31T01:01:09Z
3
priancho
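For the padding step discussed in issue 41948 above, here is a minimal sketch of the behavior in question (my own illustration, not the actual transformers implementation): when the number of frames is not divisible by `temporal_patch_size`, the last frame is repeated until it is, so a single image ends up as two identical frames consumed as one temporal group.
```python
import numpy as np

# Minimal sketch, assuming `patches` has shape (num_frames, channels, height, width);
# this mirrors the padding described in the issue, not the library code itself.
def pad_to_temporal_patch_size(patches: np.ndarray, temporal_patch_size: int = 2) -> np.ndarray:
    remainder = patches.shape[0] % temporal_patch_size
    if remainder != 0:
        # Repeat the last frame so the frame count becomes divisible.
        repeats = np.repeat(patches[-1][np.newaxis], temporal_patch_size - remainder, axis=0)
        patches = np.concatenate([patches, repeats], axis=0)
    return patches

single_image = np.zeros((1, 3, 224, 224))
print(pad_to_temporal_patch_size(single_image).shape)  # (2, 3, 224, 224): one image -> one 2-frame temporal group
```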
huggingface/transformers
41,947
Why is SmolVLM-256M-Instruct slower than InternVL-v2-1B?
As title, Smolvlm have smaller model size (1/4 less matrix multiplication), smaller input embedding. But, both torch.CudaEvent, timer.perf_counter with torch.sync report the slower inference time ? I wonder that does this related with the wrong implementation of Smolvlm in transformers ? inference performance comparison : internvl-1B > inp_embed : (1, 547, 896) trainable params: 17,596,416 || all params: 647,260,288 || trainable%: 2.7186 smolvlm-256M > inp_embed : (1, 171, 576) trainable params: 9,768,960 || all params: 172,742,976 || trainable%: 5.6552 --- model init (all flags turns on, especially flash attention!) : ```python if 'internvl' in self.variant.lower(): if '3_5' in self.variant: self.model = AutoModelForImageTextToText.from_pretrained(self.variant, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, use_flash_attn=True, trust_remote_code=True) # internvl3.5, lm_head is not part of language_model !? lm_head = self.model.lm_head self.model = self.model.language_model self.model.lm_head = lm_head else: self.model = AutoModel.from_pretrained("OpenGVLab/InternVL2-1B", torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, use_flash_attn=True, trust_remote_code=True) self.model = self.model.language_model try: self.model.embed_tokens = self.model.base_model.embed_tokens except: self.model.embed_tokens = self.model.model.tok_embeddings elif 'smolvlm' in self.variant.lower(): self.model = AutoModelForImageTextToText.from_pretrained("HuggingFaceTB/SmolVLM-256M-Instruct", torch_dtype=torch.bfloat16, _attn_implementation="flash_attention_2", trust_remote_code=True) lm_head = self.model.lm_head self.model = self.model.model.text_model self.model.lm_head = lm_head # self.model.embed_tokens already built-in! else: raise ValueError(f"Carefull: Variant {self.variant} not tested.") ``` code snippet to measure fps : ```python for _ in range(30): _, _, _ = self.model(model_input) print('warm up done!') prof = Tch_prof(device=self.device) #prof = CudaEvent_Tch_prof(device=self.device) with torch.no_grad(): with prof: pred_speed_wps, pred_route, language = self.model(model_input, device=self.device) # timer + sync : # internvl v2-1b, lang mode : 0.3302s > 330ms ; no-lang mode : 0.0972s > 97ms (10 FPS) ? # smolvlm 256m, 0.3974s > 390ms ; no-lang : 0.1 s > 100ms ? # CudaEvent + sync : # internvl v2-1b, no-lang : 82.55ms ? # smolvlm 256m > no-lang : 90.68ms ? print(prof.get_profile()) ``` code snippet for timer classes : ```python class Tch_prof(object): def __init__(self, device): self.device = device self.hw_type = 'gpu' self.tlt_time = { 'cpu' : 0, 'gpu' : 0 } def __enter__(self): torch.cuda.current_stream(self.device).synchronize() self.s = time.perf_counter() def __exit__(self, *exc): torch.cuda.current_stream(self.device).synchronize() self.tlt_time[self.hw_type] += time.perf_counter() - self.s def get_profile(self, hw_type='all'): if hw_type == 'all': return self.tlt_time elif hw_type in self.tlt_time.keys(): return self.tlt_time[hw_type] else: raise RuntimeError(f"No such hardware type {hw_type}") class CudaEvent_Tch_prof(object): def __init__(self, device): self.device = device self.start = torch.cuda.Event(enable_timing=True) self.end = torch.cuda.Event(enable_timing=True) def __enter__(self): self.start.record() def __exit__(self, *exc): self.end.record() torch.cuda.current_stream(self.device).synchronize() self.tlt_time = self.start.elapsed_time(self.end) def get_profile(self): return self.tlt_time ``` Any suggestion will be helpful !!
https://github.com/huggingface/transformers/issues/41947
closed
[]
2025-10-30T08:10:28Z
2025-10-31T11:47:44Z
4
HuangChiEn
huggingface/trl
4,386
Reference supported trainers in Liger Kernel integration guide
Currently, we only have an example with SFT, and it's hard to know which trainers support Liger. We should list the trainers that support Liger.
https://github.com/huggingface/trl/issues/4386
closed
[ "📚 documentation", "🏋 SFT" ]
2025-10-30T04:08:04Z
2025-11-03T18:16:04Z
0
qgallouedec
huggingface/trl
4,385
Use a common `trl-lib` namespace for the models/datasets/spaces
In the doc, we have examples using different namespaces, like `kashif/stack-llama-2`, `edbeeching/gpt-neo-125M-imdb`, etc. We should unify all these examples to use a common `trl-lib` namespace.
https://github.com/huggingface/trl/issues/4385
open
[ "📚 documentation", "✨ enhancement" ]
2025-10-30T04:04:10Z
2025-10-30T04:04:38Z
0
qgallouedec
huggingface/trl
4,384
Write the subsection "Multi-Node Training"
This subsection still needs to be written, with a simple code example and a link to the `accelerate` documentation.
https://github.com/huggingface/trl/issues/4384
open
[ "📚 documentation", "⚡accelerate" ]
2025-10-30T03:57:53Z
2025-12-08T16:23:23Z
2
qgallouedec
huggingface/trl
4,383
Add PEFT subsection to "Reducing Memory Usage"
PEFT is a major technique for reducing the memory usage of training. We should have a small section pointing to the PEFT integration guide.
https://github.com/huggingface/trl/issues/4383
closed
[ "📚 documentation", "✨ enhancement", "⚡ PEFT" ]
2025-10-30T03:55:55Z
2025-11-07T00:03:01Z
0
qgallouedec
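A hedged sketch of the kind of snippet issue 4383 above could point to: passing a `peft_config` so only LoRA adapter weights are trained, which cuts optimizer-state memory. The model and dataset names are placeholders, not part of the issue.
```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder model/dataset; the point is the `peft_config` argument, which makes
# the trainer wrap the base model with a LoRA adapter instead of training all weights.
dataset = load_dataset("trl-lib/Capybara", split="train")
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    args=SFTConfig(output_dir="out"),
    train_dataset=dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32),
)
trainer.train()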
huggingface/trl
4,382
Populate "Speeding Up Training"
Currently, this section only mentions vLLM. We should have a small guide for other methods, like flash attention. Ideally, to avoid repetition, we should have a very light example and a link to the place in the doc where it's discussed more extensively, for example vLLM pointing to the vLLM integration guide.
https://github.com/huggingface/trl/issues/4382
closed
[ "📚 documentation", "⚡accelerate" ]
2025-10-30T03:54:34Z
2025-12-01T09:47:23Z
0
qgallouedec
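As one concrete example of the "very light example" issue 4382 above asks for, a hedged sketch of enabling a flash-attention backend through `model_init_kwargs` (the same pattern used elsewhere in the TRL docs). The backend string is an assumption; on newer stacks it may instead be a `kernels` identifier, as discussed in issue 4380 below.
```python
from trl import SFTConfig

# Assumption: the trainer forwards `model_init_kwargs` to from_pretrained when the
# model is passed as a string, so the requested attention backend is applied at load time.
training_args = SFTConfig(
    output_dir="out",
    model_init_kwargs={"attn_implementation": "flash_attention_2"},
)
```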
huggingface/trl
4,380
Fully transition from `flash-attn` to `kernels`
The new recommended way to use flash attention is to use kernels. We should update our tests and documentation to use `kernels` instead of `flash_attention_2`. Eg https://github.com/huggingface/trl/blob/1eb561c3e9133892a2e907d84123b46e40cbc5a0/docs/source/reducing_memory_usage.md#L149 ```diff - training_args = DPOConfig(..., padding_free=True, model_init_kwargs={"attn_implementation": "flash_attention_2"}) + training_args = DPOConfig(..., padding_free=True, model_init_kwargs={"attn_implementation": "kernels-community/flash-attn2"}) ```
https://github.com/huggingface/trl/issues/4380
closed
[ "📚 documentation", "✨ enhancement" ]
2025-10-30T03:46:07Z
2025-11-13T04:07:35Z
0
qgallouedec
huggingface/trl
4,379
Remove or populate "Training customization"
Currently, this part of the documentation shows some possible customizations that apply to all trainers: https://huggingface.co/docs/trl/main/en/customization However, it only features a few examples. This section would make sense if it gets populated with other customizations; otherwise it should be removed. This thread can be used to discuss additional customizations.
https://github.com/huggingface/trl/issues/4379
closed
[ "📚 documentation" ]
2025-10-30T03:41:02Z
2025-12-01T09:39:09Z
0
qgallouedec
huggingface/trl
4,378
Extend basic usage example to all supported CLIs
Currently, https://huggingface.co/docs/trl/main/en/clis?command_line=Reward#basic-usage only shows basic usage examples for SFT, DPO, and Reward. We should have them for all supported CLIs (i.e., GRPO, RLOO, KTO).
https://github.com/huggingface/trl/issues/4378
closed
[ "📚 documentation", "🏋 KTO", "🏋 RLOO", "📱 cli", "🏋 GRPO" ]
2025-10-30T03:35:36Z
2025-11-14T01:13:17Z
0
qgallouedec
vllm-project/vllm
27,783
[Usage]: Model performance different from api
### Your current environment ```text vllm==0.10.0 ``` ### How would you like to use vllm I'm running the Qwen3-8B model with vllm. I also ran the same experiment using the Qwen3-8B API, but I find the results are quite different: the accuracy of the API model on my task is much higher than that of the vllm model. I use the same temperature and top_k. Has anyone else run into the same issue (the API model being stronger than the vllm model)? ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27783
open
[ "usage" ]
2025-10-30T03:30:02Z
2025-10-30T03:30:02Z
0
fny21
vllm-project/vllm
27,782
[Usage]: The same configuration reports insufficient GPU memory on v0.11.0 but not on v0.8.5
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm The server is a 4090 with 4 cards Docker runs vllm openai: v0.8.5 deployment command: "command: --model /models/Qwen3/Qwen3-30B-A3B --enable-reasoning --reasoning-parser deepseek_r1 --tensor_parallel_size 4" Can be deployed and started normally, switch the image version to v0.11.0, and run the command "command: --model /models/Qwen3/Qwen3-30B-A3B --reasoning-parser deepseek_r1 --tensor_parallel_size 4" It will report that the graphics card memory is insufficient, and the error log is: Capturing CUDA graphs (mixed prefill-decode, PIECEWISE): 100%|██████████| 67/67 [00:19<00:00, 3.43it/s] Capturing CUDA graphs (decode, FULL): 100%|██████████| 35/35 [00:07<00:00, 4.78it/s] vllm | (Worker_TP3 pid=263) INFO 10-29 19:57:20 [gpu_model_runner.py:3480] Graph capturing finished in 28 secs, took 1.88 GiB vllm | (Worker_TP1 pid=261) INFO 10-29 19:57:20 [gpu_model_runner.py:3480] Graph capturing finished in 28 secs, took 1.88 GiB vllm | (Worker_TP0 pid=260) INFO 10-29 19:57:20 [gpu_model_runner.py:3480] Graph capturing finished in 28 secs, took 1.88 GiB vllm | (Worker_TP2 pid=262) INFO 10-29 19:57:20 [gpu_model_runner.py:3480] Graph capturing finished in 28 secs, took 1.88 GiB vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] WorkerProc hit an exception. vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] Traceback (most recent call last): vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_model_runner.py", line 3217, in _dummy_sampler_run vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] sampler_output = self.sampler(logits=logits, vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^^^^^ vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] return self._call_impl(*args, **kwargs) vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1784, in _call_impl vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] return forward_call(*args, **kwargs) vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/sample/sampler.py", line 100, in forward vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] sampled, processed_logprobs = self.sample(logits, sampling_metadata) vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/sample/sampler.py", line 180, in sample vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] random_sampled, processed_logprobs = self.topk_topp_sampler( vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 
[multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^ vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] return self._call_impl(*args, **kwargs) vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1784, in _call_impl vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] return forward_call(*args, **kwargs) vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ vllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/sample/ops/topk_topp_sampler.py", line 122, in forward_cuda vllm | (Worker
https://github.com/vllm-project/vllm/issues/27782
open
[ "usage" ]
2025-10-30T03:24:54Z
2025-11-06T06:53:15Z
2
lan-qh
huggingface/trl
4,376
Rewrite `peft_integration.md`
This section of the documentation is largely outdated and relies only on PPO. Ideally, we should have clear documentation that shows how to use peft with at least SFT, DPO, and GRPO, via the `peft_config` argument. We could have additional subsections about QLoRA and prompt tuning.
https://github.com/huggingface/trl/issues/4376
closed
[]
2025-10-30T03:23:24Z
2025-11-24T10:39:27Z
0
qgallouedec
vllm-project/vllm
27,778
[Usage]: Is DP + PP a possible way to use vLLM?
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm Hi there, I wonder if we can adopt DP + PP in vLLM to form a heterogeneous inference pipeline. For example, if I have two V100 32G GPUs and one A100 80G GPU, can I utilize them in pipeline parallelism with vLLM? I might use the V100s as the first stage and the A100 as the second. Given that the V100's compute capability is lower than the A100's, this would result in imbalance, and the V100 stage would become a bottleneck. Thus I would like to use the two V100s in DP for the first PP stage. Is this possible with the currently released vLLM version? ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27778
open
[ "usage" ]
2025-10-30T02:05:06Z
2025-10-30T02:05:06Z
0
oldcpple
vllm-project/vllm
27,746
[Bug]: `strict` value in function definitions causes request error when using Mistral tokenizer
### Your current environment Tested with latest vllm source build from main ### 🐛 Describe the bug Start vLLM with a model that uses the mistral tokenizer: ``` vllm serve mistralai/Mistral-Small-24B-Instruct-2501 \ --enable-auto-tool-choice \ --tool-call-parser mistral \ --tokenizer-mode mistral ``` Send a simple tool call request with the `strict` parameter set to a value of `False`: ```python from openai import OpenAI client = OpenAI(base_url="http://localhost:8000/v1", api_key="fake") tools = [ { "type": "function", "function": { "name": "get_current_time", "description": "Get the current time in UTC", "parameters": { "type": "object", "properties": {}, "required": [] }, "strict": False, } }, ] model = client.models.list().data[0].id response = client.chat.completions.create( model=model, messages=[{"role": "user", "content": "What is the current time?"}], tools=tools, ) print("Success!") ``` The request fails with a 400 error like: `openai.BadRequestError: Error code: 400 - {'error': {'message': '1 validation error for Tool\nfunction.strict\n Extra inputs are not permitted [type=extra_forbidden, input_value=False, input_type=bool]\n For further information visit https://errors.pydantic.dev/2.12/v/extra_forbidden 1 validation error for Tool\nfunction.strict\n Extra inputs are not permitted [type=extra_forbidden, input_value=False, input_type=bool]\n For further information visit https://errors.pydantic.dev/2.12/v/extra_forbidden', 'type': 'BadRequestError', 'param': None, 'code': 400}}` Start vLLM without the mistral tokenizer and the request succeeds. Note that this is explicitly NOT about making `strict=True` actually enforce structured outputs. The scope of this is simply to not return a validation error when this parameter is passed with any valid value when the `mistral` tokenizer is in use. The current behavior breaks some client frameworks that always pass this value, even when it has a value of `False`. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27746
open
[ "bug" ]
2025-10-29T14:33:13Z
2025-10-30T19:14:50Z
4
bbrowning
huggingface/trl
4,368
GKD: multimodal inputs?
Does the Generalized Knowledge Distillation trainer (GKDTrainer) support multimodal inputs (VLMs)? If yes, what's the expected dataset format? There is no example of this in the documentation. Thanks!
https://github.com/huggingface/trl/issues/4368
closed
[ "📚 documentation", "❓ question", "🏋 GKD" ]
2025-10-29T14:08:44Z
2025-11-07T19:26:23Z
2
e-zorzi
huggingface/lerobot
2,338
Policy gr00t not found when doing async inference with gr00t
### System Info ```Shell lerobot version: 3f8c5d98 (HEAD -> main, origin/main, origin/HEAD) fix(video_key typo): fixing video_key typo in update_video_info (#2323) ``` ### Information - [ ] One of the scripts in the examples/ folder of LeRobot - [ ] My own task or dataset (give details below) ### Reproduction I have installed the following packages: pip install "torch>=2.2.1,<2.8.0" "torchvision>=0.21.0,<0.23.0" # --index-url https://download.pytorch.org/whl/cu1XX pip install ninja "packaging>=24.2,<26.0" # flash attention dependencies pip install "flash-attn>=2.5.9,<3.0.0" --no-build-isolation python -c "import flash_attn; print(f'Flash Attention {flash_attn.__version__} imported successfully')" pip install lerobot[groot] Then I ran asyn inference server: python -m lerobot.async_inference.policy_server \ --host=127.0.0.1 \ --port=8080 when the async inference client send policy gr00t, the server complains no groot as below: ERROR 2025-10-29 05:30:24 /_server.py:636 Exception calling application: Policy type groot not supported. Supported policies: ['act', 'smolvla', 'diffusion', 'tdmpc', 'vqbet', 'pi0', 'pi05'] Finetuning a pi05 model is OK on the same code Any idea why this happens? ### Expected behavior Should not complain about no groot policy
https://github.com/huggingface/lerobot/issues/2338
closed
[ "bug", "question", "policies" ]
2025-10-29T05:36:20Z
2025-11-21T15:34:21Z
null
jcl2023
huggingface/lerobot
2,337
Can I continue reinforcement learning in HIL-SERL using a pi0
Can I continue reinforcement learning in HIL-SERL using a pi0 model from LeRobot that has been fine-tuned via imitation learning?
https://github.com/huggingface/lerobot/issues/2337
open
[ "question", "policies" ]
2025-10-29T04:30:26Z
2025-11-11T03:13:23Z
null
pparkgyuhyeon
huggingface/peft
2,878
peft `target_modules='all-linear'` has different behavior between x86 and aarch64?
### System Info I have tested on two architectures (x86, arm) and found this bug. Both have peft==0.17.1. ### Who can help? @benjaminbossan @githubnemo ### Reproduction Reproduction script: bug_reprod.py ```python from transformers import AutoModelForImageTextToText model = AutoModelForImageTextToText.from_pretrained("OpenGVLab/InternVL3_5-1B-HF", trust_remote_code=True) lm_head = model.lm_head model = model.language_model model.lm_head = lm_head from peft import get_peft_model from peft import LoraConfig peft_config = LoraConfig( inference_mode=False, r=12, target_modules="all-linear", ) bug_model = get_peft_model(model, peft_config) bug_model.print_trainable_parameters() breakpoint() # p bug_model, you will find lm_head has different results ``` Put bug_reprod.py on x86 and aarch64 and run it; you will find different results for lm_head! The following figures show the error: #### x86 <img width="978" height="567" alt="Image" src="https://github.com/user-attachments/assets/b33df3f2-15bc-4855-b6cb-c1b84e7ba9d9" /> #### aarch <img width="1067" height="911" alt="Image" src="https://github.com/user-attachments/assets/1bfcd649-9bc9-44ff-a74e-5a26a7070c49" /> ### Expected behavior `target_modules='all-linear'` should exclude lm_head from LoRA tuning. At the very least, the x86 and arm architectures should have identical behavior.
https://github.com/huggingface/peft/issues/2878
closed
[]
2025-10-29T03:43:02Z
2025-12-07T15:03:33Z
4
HuangChiEn
huggingface/peft
2,877
peft config 'all-linear' includes lm_head; is there any way to remove it?
I'm not sure whether this is a bug or whether my modification affects peft. > since some issues indicate that 'all-linear' will not include the lm_head ```python if 'internvl' in self.variant.lower(): if '3_5' in self.variant: self.model = AutoModelForImageTextToText.from_pretrained(self.variant, trust_remote_code=True) # internvl3.5, lm_head is not part of language_model !? lm_head = self.model.lm_head self.model = self.model.language_model self.model.lm_head = lm_head # then from peft import get_peft_model from peft import LoraConfig print('Using PEFT model') peft_config = LoraConfig( inference_mode=False, r=self.lora_r, lora_alpha=self.lora_alpha, lora_dropout=self.lora_dropout, target_modules="all-linear", ) self.model = get_peft_model(self.model, peft_config) ``` If the modification does affect the peft config, is there any way to exclude the lm_head by setting LoraConfig? peft version: 0.17.0 Can anyone kindly give me some suggestions? Many thanks! --- Update: peft has different behavior between x86 and aarch? Error message while loading the pretrained weights: <img width="1702" height="186" alt="Image" src="https://github.com/user-attachments/assets/e13b167f-4215-446a-9f7b-42ba7d690029" /> #### x86 arch, normal <img width="978" height="567" alt="Image" src="https://github.com/user-attachments/assets/ee758c1e-46b7-4edd-9b2a-21d8f6fbfa5b" /> #### aarch, bug occurs <img width="1067" height="911" alt="Image" src="https://github.com/user-attachments/assets/94755fc8-67da-4200-a408-7929cec0f6f4" />
https://github.com/huggingface/peft/issues/2877
closed
[]
2025-10-29T02:19:21Z
2025-10-29T03:43:20Z
1
HuangChiEn
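Relevant to issues 2877 and 2878 above, a hedged sketch of two ways to keep `lm_head` out of the adapter. The `exclude_modules` argument should exist in peft 0.17.x, but treat that, and the example model name, as assumptions; listing the target projections explicitly always works.
```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")  # placeholder model

# Option 1: keep "all-linear" but explicitly exclude the output head
# (assumption: `exclude_modules` is available in the installed peft version).
config = LoraConfig(r=12, target_modules="all-linear", exclude_modules=["lm_head"])

# Option 2: name the projections explicitly instead of using "all-linear".
# config = LoraConfig(r=12, target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])

peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()  # lm_head should not appear among the LoRA targets
```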
huggingface/lerobot
2,335
How to Visualize All Episodes of a LeRobot Dataset Locally?
Hi everyone, I have a question about LeRobot datasets. I'd like to inspect my data locally, but using the command _lerobot-dataset-viz --repo-id=${HF_USER}/record-test --episode-index=0_ only allows me to view one episode at a time, which is quite cumbersome. Is there a way to visualize all episodes of a dataset locally—similar to [visualize dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset), where I can easily browse through all episodes? Thanks!
https://github.com/huggingface/lerobot/issues/2335
open
[ "question", "dataset" ]
2025-10-29T02:01:01Z
2025-12-29T12:18:57Z
null
Vacuame
vllm-project/vllm
27,692
It runs on an RTX 5060 Ti 16 GB
### Your current environment https://github.com/bokkob556644-coder/suc-vllm-rtx-5060-ti-16-gb/blob/main/suc_vllm.txt ### How would you like to use vllm [I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm. ](https://github.com/bokkob556644-coder/suc-vllm-rtx-5060-ti-16-gb/blob/main/suc_vllm.txt) ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27692
open
[ "usage" ]
2025-10-28T21:43:00Z
2025-10-28T21:43:16Z
1
bokkob556644-coder
huggingface/transformers
41,919
LFM2 image_processing_lfm2_vl_fast.py Mean Std swapped?
### System Info In LFM2-VL's image_processing_lfm2_vl_fast.py, from line 212 onwards, the ImageNet MEAN and STD are used for preprocessing. However, it seems like they are swapped: image_mean = IMAGENET_STANDARD_STD image_std = IMAGENET_STANDARD_MEAN Or is this correct? ### Who can help? @Cyrilvallez ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Have a look at https://github.com/huggingface/transformers/blob/main/src/transformers/models/lfm2_vl/image_processing_lfm2_vl_fast.py ### Expected behavior Not optimized VLM behaviour
https://github.com/huggingface/transformers/issues/41919
closed
[ "bug" ]
2025-10-28T16:17:44Z
2025-10-31T15:02:40Z
4
florianvoss-commit
vllm-project/vllm
27,667
[Usage]: DeepseekOCR on CPU missing implementation for fused_topk
### Your current environment Try to test if it is possible to run DeepseekOCR on CPU using current git main branch. Fails because there is no implementation of `fused_topk` for CPU. ``` INFO 10-28 15:41:18 [v1/worker/cpu_model_runner.py:77] Warming up model for the compilation... ERROR: Traceback (most recent call last): File "/opt/venv/lib/python3.12/site-packages/starlette/routing.py", line 677, in lifespan async with self.lifespan_context(app) as maybe_state: ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/venv/lib/python3.12/site-packages/starlette/routing.py", line 566, in __aenter__ await self._router.startup() File "/opt/venv/lib/python3.12/site-packages/starlette/routing.py", line 654, in startup await handler() File "/app/start_server.py", line 161, in startup_event initialize_model() File "/app/start_server.py", line 84, in initialize_model llm = LLM( ^^^^ File "/opt/venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 336, in __init__ self.llm_engine = LLMEngine.from_engine_args( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py", line 188, in from_engine_args return cls( ^^^^ File "/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py", line 122, in __init__ self.engine_core = EngineCoreClient.make_client( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 95, in make_client return InprocClient(vllm_config, executor_class, log_stats) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 264, in __init__ self.engine_core = EngineCore(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 109, in __init__ num_gpu_blocks, num_cpu_blocks, kv_cache_config = self._initialize_kv_caches( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 234, in _initialize_kv_caches self.model_executor.initialize_from_config(kv_cache_configs) File "/opt/venv/lib/python3.12/site-packages/vllm/v1/executor/abstract.py", line 113, in initialize_from_config self.collective_rpc("compile_or_warm_up_model") File "/opt/venv/lib/python3.12/site-packages/vllm/v1/executor/uniproc_executor.py", line 73, in collective_rpc return [run_method(self.driver_worker, method, args, kwargs)] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/venv/lib/python3.12/site-packages/vllm/v1/serial_utils.py", line 459, in run_method return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/opt/venv/lib/python3.12/site-packages/vllm/v1/worker/cpu_worker.py", line 105, in compile_or_warm_up_model self.model_runner.warming_up_model() File "/opt/venv/lib/python3.12/site-packages/vllm/v1/worker/cpu_model_runner.py", line 80, in warming_up_model self._dummy_run( File "/opt/venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/opt/venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 3464, in _dummy_run outputs = self.model( ^^^^^^^^^^^ File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/opt/venv/lib/python3.12/site-packages/vllm/model_executor/models/deepseek_ocr.py", line 582, in forward hidden_states = self.language_model( ^^^^^^^^^^^^^^^^^^^^ File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/venv/lib/python3.12/site-packages/vllm/model_executor/models/deepseek.py", line 495, in forward hidden_states = self.model( ^^^^^^^^^^^ File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl return forward_call(*args, **kwargs)
https://github.com/vllm-project/vllm/issues/27667
open
[ "usage" ]
2025-10-28T16:14:40Z
2025-10-28T16:14:40Z
0
brainlag
vllm-project/vllm
27,661
[RFC]: Consolidated tool call parser implementations by type (JSON, Python, XML, Harmony)
### Motivation. When someone wants to add a new tool call parser today, they typically choose an existing tool call parser that looks close to what is needed, copy it into a new file, and adjust things here and there as needed for their specific model. Sometimes tests get added, and sometimes not. Sometimes the changes to the copied parser make meaningful fixes, and sometimes the changes to the copied parser add bugs. Generally, we have a few buckets of tool call parsers based on the format the models are trained to output - JSON, Python, XML, or Hamony style tool calls. But, we have N different implementations of streaming partial JSON parsing, N different python parsing, and so on. Instead of multiple copies of each of those, ideally we'd maintain one high quality implementation for streaming partial JSON parsing that's extensible enough to handle the needs of individual model differences. ### Proposed Change. The overall change I propose is a refactoring of the existing tool call parsers, lowering the burden to add a new tool call parser, reducing the maintenance and bug permutations possible, and providing us higher test coverage of all tool call parsers so we can systematically track and fix bugs as reported in one place. General steps proposed: **Test coverage** Before starting any refactor, the focus will be on building confidence in the existing state of all our tool call parsers by focusing on adding and extending their test suites. - [ ] Add a new common tool call parser unit test suite for all tool call parsers lacking any tests - #27599 - [ ] Reorganize existing tool call parser tests to cleanly separate unit tests that just need a tokenizer from integration tests that need actual running inference servers. - Today we have `tests/tool_use` that is mostly integration tests, and `tests/entrypoints/openai/tool_parsers` that is mostly unit tests, but there's a mix of each in both. The plan is to move integration tests to `tests/tool_use` since that's where most of those live, and unit tests in `tests/entrypoints/openai/tool_parsers` that can all be run without an accelerator and execute quickly. - [ ] Review the history of each tool call parser, bugs filed against that tool call parser, and special statements in the code of each tool parser to identify special case handling. Create a test for each of these special cases. - [ ] Refactor existing tool call parser tests to use the common test suite for all tool call parsers while retaining any model-specific tests required by the previous review of parsers. - [ ] File issues of type bug for every test in the common suite that is marked as "expected fail" for various tool call parsers. There will be a number of these, with tool call parsers that do not meet the standards of the common suite today. These represent low-hanging fruit for us to find and fix for each parser. - Some fixes may be trivial, and can happen before consolidating implementations just to incrementally raise the quality of our parsers. Some fixes may not be trivial, and may only happen after consolidating implementations. **Refactoring and consolidation** After we have the expanded test suite, we'll have the confidence to undertake this refactor without introducing a lot of new bugs as each parser has some bespoke logic today that needs to be accounted for. - [ ] Consolidate all the partial and streaming JSON parsing logic into a central place that every JSON-style tool call parser consumes. 
Ensure no test regressions - [ ] Consolidate all the partial and streaming Python parsing logic into a central place that every Python-style tool call parser consumes. **Post-consolidation bug squashing and docs** - [ ] Remove any remaining `xfail` markers in the test suite across all tool parser test suites. - [ ] Update contributor docs that discuss how to add a new tool call parser, how to reuse the common logic for JSON, Python, XML, etc parsing instead of writing new, and how to use the new common test suite to simplify testing of the new parser. ### Feedback Period. This is ongoing work and feedback is accepted at any time while this issue is open. Initial stages of expanding our test coverage have already started, but there's at least a couple of weeks to provide feedback before work gets to the point of actual refactoring and consolidating of the tool call parsers. ### CC List. _No response_ ### Any Other Things. _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27661
open
[ "RFC" ]
2025-10-28T14:54:10Z
2025-10-30T16:14:09Z
2
bbrowning
huggingface/lerobot
2,329
Change the SmolVLA base model (the VLM part) to another model
Can I change the SmolVLA base model (the VLM part) to another model? What should I do? Thanks
https://github.com/huggingface/lerobot/issues/2329
closed
[ "question", "policies" ]
2025-10-28T12:28:44Z
2025-10-31T15:09:12Z
null
smartparrot
vllm-project/vllm
27,649
[Usage]: Qwen3-32B on RTX PRO 6000 (55s First Token Delay and 15t/s)
Why does the Qwen3-32B model take 55 seconds before producing the first token, and why is the generation speed only 15t/s? My vLLM configuration: Device: GB202GL [RTX PRO 6000 Blackwell Server Edition] Nvidia Driver Version:580.95.05 CUDA Version:13.0 Docker configuration: ```sh PORT=8085 MODEL_PATH=Qwen/Qwen3-32B SERVED_MODEL_NAME=vLLM-Qwen3-32B docker run -d \ --runtime nvidia \ --gpus all \ -v /data/projects/docker/vllm/.cache/huggingface:/root/.cache/huggingface \ -p $PORT:8000 \ --env "HUGGING_FACE_HUB_TOKEN=$HUGGING_FACE_HUB_TOKEN" \ --name $SERVED_MODEL_NAME \ --restart unless-stopped \ --ipc=host \ vllm/vllm-openai:v0.11.0 \ --model /root/.cache/huggingface/$MODEL_PATH \ --served-model-name $SERVED_MODEL_NAME \ --dtype bfloat16 \ --gpu-memory-utilization 0.92 \ --max-model-len 32768 \ --max-num-seqs 64 \ --tensor-parallel-size 1 \ --api-key sk-vx023nmlrtTmlC ```
https://github.com/vllm-project/vllm/issues/27649
open
[ "usage" ]
2025-10-28T10:49:43Z
2025-11-07T02:30:26Z
4
yizhitangtongxue
vllm-project/vllm
27,646
[Usage]: How to use vllm bench serve to benchmark remotely deployed vLLM models (can't bench when EP is enabled)
### Your current environment I deployed dpskv3 in a remote server using: ``` export VLLM_USE_V1=1 export VLLM_ALL2ALL_BACKEND=deepep_low_latency vllm serve /models/hf/models--deepseek-ai--DeepSeek-V3 --tensor-parallel-size 1 --data-parallel-size 8 --enable-expert-parallel --no-enforce-eager --load-format dummy ``` And on another server: ``` VLLM_USE_V1=1 vllm bench serve --model /models/hf/models--deepseek-ai--DeepSeek-V3/ --endpoint /v1/completions --dataset-name sharegpt --dataset-path /datasets/ShareGPT/ShareGPT_V3_unfiltered_cleaned_split.json --num-prompts 10 --ready-check-timeout-sec 0 --ip 10.102.212.22 --port 8000 ``` where 10.102.212.22 is the server ip, 8000 is the default port And I got this below error on server: ``` "POST /v1/completions HTTP/1.1" 404 Not Found ``` ### How would you like to use vllm I want to run inference of a deepseekv3. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27646
open
[ "usage" ]
2025-10-28T09:56:37Z
2025-10-28T15:23:06Z
3
Valerianding
huggingface/transformers
41,910
Breaking change about AWQ Fused modules due to Attention Refactor
### System Info transformers==5.0.0dev autoawq==0.2.9 autoawq_kernels==0.0.9 torch==2.6.0+cu124 ### Who can help? Due to PR #35235, the `past_key_values` is no longer a returned value of attention modules. However, when using AWQ models with Fused modules [AWQ Fused modules docs](https://huggingface.co/docs/transformers/main/en/quantization/awq#fused-modules), there will be an error like issue #38554 ```bash hidden_states, _ = self.self_attn( ValueError: too many values to unpack (expected 2) ``` So we can hack the `awq.modules.fused.attn.QuantAttentionFused` to avoid returning `past_key_values`. Therefore, I create a primary PR #41909 to fix it. However, for special `rope_type` such as LLaMA3, the RoPE implementation in AutoAWQ will cause error, since `awq.modules.fused.attn.RoPE` supports default RoPE only. Maybe we can implement and maintain `AwqRoPE` and `AwqQuantAttentionFused` in `transformers.integrations.awq`? Or we can maintain `huggingface/AutoAWQ` as `casper-hansen/AutoAWQ` is archived. I'd like to refine my PR to help transformers fix this bug! @SunMarc @MekkCyber ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ```python from transformers import AwqConfig, AutoModelForCausalLM, AutoTokenizer # model_path = "./llama-3.1-8b-instruct-awq" model_path = "./qwen2.5-7b-instruct-awq" # model_path = "./qwen3-8b-awq" awq_config = AwqConfig( bits=4, do_fuse=True, fuse_max_seq_len=8192 ) model = AutoModelForCausalLM.from_pretrained(model_path, quantization_config=awq_config).to("cuda:0") print(model) tokenizer = AutoTokenizer.from_pretrained(model_path) max_new_tokens = 1024 if "qwen3" in model_path else 32 messages = [] prompt1 = "What is the result of 3+5?" messages.append({"role": "user", "content": prompt1}) text1 = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) inputs1 = tokenizer(text1, return_tensors="pt").to("cuda:0") generated_ids1 = model.generate(**inputs1, max_new_tokens=max_new_tokens) output_ids1 = generated_ids1[0, len(inputs1.input_ids[0]) :].tolist() output1 = tokenizer.decode(output_ids1, skip_special_tokens=True) messages.append({"role": "assistant", "content": output1}) print("Output 1:", output1) prompt2 = "What about adding 10 to that result?" messages.append({"role": "user", "content": prompt2}) text2 = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) inputs2 = tokenizer(text2, return_tensors="pt").to("cuda:0") generated_ids2 = model.generate(**inputs2, max_new_tokens=max_new_tokens) output_ids2 = generated_ids2[0, len(inputs2.input_ids[0]) :].tolist() output2 = tokenizer.decode(output_ids2, skip_special_tokens=True) messages.append({"role": "assistant", "content": output2}) print("Output 2:", output2) ``` ### Expected behavior There is no error.
https://github.com/huggingface/transformers/issues/41910
closed
[ "bug" ]
2025-10-28T08:29:03Z
2025-11-20T13:41:34Z
3
fanqiNO1
vllm-project/vllm
27,636
[Usage]: How to keep the qwen3-vl special tokens in vLLM output
### Your current environment The grounding format of my fine-tuned qwen3-vl model is: <|object_ref_start|>图片<|object_ref_end|><|box_start|>(x1,y1),(x2,y2)<|box_end|>. When running inference with vllm serve, the output format is: 图片(460,66),(683,252). Are the special tokens simply being dropped, and is there a way to keep them? ### How would you like to use vllm I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27636
open
[ "usage" ]
2025-10-28T06:52:16Z
2025-10-28T06:52:16Z
0
qfs666
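For issue 27636 above, a hedged sketch of one way to keep special tokens in the generated text when querying the OpenAI-compatible server: pass `skip_special_tokens: false` via `extra_body`. Treat the exact pass-through behavior as an assumption for your vLLM version; the served model name below is a placeholder.
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="qwen3-vl",  # placeholder served model name
    messages=[{"role": "user", "content": "Locate the object in the image."}],
    # Assumption: vLLM forwards this extra field to SamplingParams(skip_special_tokens=False),
    # so tokens like <|box_start|> are kept in the returned text.
    extra_body={"skip_special_tokens": False},
)
print(response.choices[0].message.content)
```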
huggingface/diffusers
12,553
Reason to move from OpenCV to ffmpeg
I see that `diffusers.utils.export_to_video()` encourages ffmpeg usage instead of OpenCV. Can you share the reason? I'm looking for a way to add video decoding to my project so I'm collecting arguments.
https://github.com/huggingface/diffusers/issues/12553
open
[]
2025-10-28T06:49:48Z
2025-11-07T13:27:03Z
10
Wovchena
vllm-project/vllm
27,634
[Usage]: how to use --quantization option of `vllm serve`?
### Your current environment ```text ============================== System Info ============================== OS : Ubuntu 22.04.5 LTS (x86_64) GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0 Clang version : Could not collect CMake version : Could not collect Libc version : glibc-2.35 ============================== PyTorch Info ============================== PyTorch version : 2.8.0+cu129 Is debug build : False CUDA used to build PyTorch : 12.9 ROCM used to build PyTorch : N/A ============================== Python Environment ============================== Python version : 3.10.12 (main, Aug 15 2025, 14:32:43) [GCC 11.4.0] (64-bit runtime) Python platform : Linux-5.15.0-160-generic-x86_64-with-glibc2.35 ============================== CUDA / GPU Info ============================== Is CUDA available : True CUDA runtime version : 11.5.119 CUDA_MODULE_LOADING set to : LAZY GPU models and configuration : GPU 0: NVIDIA GeForce RTX 4090 D Nvidia driver version : 570.195.03 cuDNN version : Could not collect HIP runtime version : N/A MIOpen runtime version : N/A Is XNNPACK available : True ============================== CPU Info ============================== Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 48 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 32 On-line CPU(s) list: 0-31 Vendor ID: AuthenticAMD Model name: AMD Ryzen 9 9950X3D 16-Core Processor CPU family: 26 Model: 68 Thread(s) per core: 2 Core(s) per socket: 16 Socket(s): 1 Stepping: 0 Frequency boost: enabled CPU max MHz: 8839.3555 CPU min MHz: 3000.0000 BogoMIPS: 8583.32 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse 4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm ex tapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tc e topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l 3 hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adj ust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx _vnni avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbr v svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefi lter pfthreshold v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcnt dq rdpid bus_lock_detect movdiri movdir64b overflow_recov succor smca fsrm avx512_vp2intersect flush_l1d Virtualization: AMD-V L1d cache: 768 KiB (16 instances) L1i cache: 512 KiB (16 instances) L2 cache: 16 MiB (16 instances) L3 cache: 128 MiB (2 instances) NUMA node(s): 1 NUMA node0 CPU(s): 0-31 Vulnerability Gather data sampling: Not affected Vulnerability Indirect target selection: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: 
Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced / Automatic I BRS; IBPB conditional; STIBP always-on; PBRSB-eIBRS Not affected; BHI Not affected Vulnerability Srbds: Not affected Vulnerability Tsa: Not
https://github.com/vllm-project/vllm/issues/27634
open
[ "usage" ]
2025-10-28T06:24:38Z
2025-10-28T15:57:47Z
3
Septemberlemon
huggingface/candle
3,151
Tensor conversion to_vec1() failing on 0.9.2-alpha.1 - Metal
Dependencies ```toml candle-core = { git = "https://github.com/huggingface/candle", rev = "df618f8", features = ["metal"] } candle-nn = { git = "https://github.com/huggingface/candle", rev = "df618f8", features = ["metal"] } candle-transformers = { git = "https://github.com/huggingface/candle", rev = "df618f8", features = ["metal"] } ``` Running on Macbook M2 Pro - Metal - Tahoe 26.0.1 Since upgrading to 0.9.2-alpha.1, BERT operations on Metal have started hanging when converting rank-1 tensor to Vec<32>. This seems to be affecting any ops that attempt to synchronize or move data from GPU to CPU. Not sure if this is directly related to the update but rolling back to 0.9.1 or using CPU as device fixes the issue. Some example of ops that are failing... ```rust tensor.device().synchronize() tensor.to_device() tensor.to_vec1() ``` Actual code being run... ```rust let (token_ids, token_type_ids, attention_mask) = self.encode_text(text)?; let hidden_states = self .forward_model(&token_ids, &token_type_ids, &attention_mask) .await .map_err(|e| { log::error!("Failed to forward to model: {}", e); e })?; let embeddings = self .apply_mean_pooling(&hidden_states, &attention_mask) .map_err(|e| { log::error!("Failed to apply mean pooling: {}", e); e })?; ... fn apply_mean_pooling( &self, hidden_states: &Tensor, attention_mask: &Tensor, ) -> Result<Vec<f32>> { log::info!("Applying mean pooling to hidden states..."); let attention_mask_for_pooling = attention_mask .to_dtype(hidden_states.dtype())? .unsqueeze(2)?; let sum_mask = attention_mask_for_pooling.sum(1)?; let pooled = (hidden_states.broadcast_mul(&attention_mask_for_pooling)?).sum(1)?; let sum_mask_safe = sum_mask.clamp(NUMERICAL_STABILITY_EPSILON, f32::MAX)?; let pooled = pooled.broadcast_div(&sum_mask_safe)?; let denom = pooled .sqr()? .sum_keepdim(1)? .sqrt()? .clamp(NUMERICAL_STABILITY_EPSILON, f32::MAX)?; let pooled = pooled.broadcast_div(&denom)?; let pooled = pooled.squeeze(0)?; // HANGING HERE ... no errors // Tensor shape - Tensor[dims 1024; f32, metal:4294968337] let embeddings = pooled.to_vec1::<f32>().map_err(|e| Error::TensorOp { operation: format!("Failed to convert tensor to f32 vector: {}", e), })?; Ok(embeddings) } ```
https://github.com/huggingface/candle/issues/3151
closed
[]
2025-10-27T21:36:17Z
2025-11-06T22:44:14Z
2
si-harps
vllm-project/vllm
27,604
[Bug]: Is Flashinfer Attn backend supposed to work with FP8 KV cache on Hopper?
### Your current environment <details> <summary>The output of <code>python collect_env.py</code></summary> ```text Collecting environment information... ============================== System Info ============================== OS : Amazon Linux 2023.7.20250428 (x86_64) GCC version : (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5) Clang version : Could not collect CMake version : version 3.26.4 Libc version : glibc-2.34 ============================== PyTorch Info ============================== PyTorch version : 2.8.0+cu128 Is debug build : False CUDA used to build PyTorch : 12.8 ROCM used to build PyTorch : N/A ============================== Python Environment ============================== Python version : 3.12.6 (main, May 6 2025, 20:22:13) [GCC 11.5.0 20240719 (Red Hat 11.5.0-5)] (64-bit runtime) Python platform : Linux-6.1.134-150.224.amzn2023.x86_64-x86_64-with-glibc2.34 ============================== CUDA / GPU Info ============================== Is CUDA available : True CUDA runtime version : 12.8.93 CUDA_MODULE_LOADING set to : LAZY GPU models and configuration : GPU 0: NVIDIA H100 80GB HBM3 GPU 1: NVIDIA H100 80GB HBM3 GPU 2: NVIDIA H100 80GB HBM3 GPU 3: NVIDIA H100 80GB HBM3 GPU 4: NVIDIA H100 80GB HBM3 GPU 5: NVIDIA H100 80GB HBM3 GPU 6: NVIDIA H100 80GB HBM3 GPU 7: NVIDIA H100 80GB HBM3 Nvidia driver version : 570.133.20 cuDNN version : Could not collect HIP runtime version : N/A MIOpen runtime version : N/A Is XNNPACK available : True ============================== CPU Info ============================== Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 48 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 192 On-line CPU(s) list: 0-191 Vendor ID: AuthenticAMD Model name: AMD EPYC 7R13 Processor CPU family: 25 Model: 1 Thread(s) per core: 2 Core(s) per socket: 48 Socket(s): 2 Stepping: 1 BogoMIPS: 5299.99 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save vaes vpclmulqdq rdpid Hypervisor vendor: KVM Virtualization type: full L1d cache: 3 MiB (96 instances) L1i cache: 3 MiB (96 instances) L2 cache: 48 MiB (96 instances) L3 cache: 384 MiB (12 instances) NUMA node(s): 2 NUMA node0 CPU(s): 0-47,96-143 NUMA node1 CPU(s): 48-95,144-191 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Mitigation; safe RET Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected Vulnerability Srbds: Not affected Vulnerability Tsx 
async abort: Not affected ============================== Versions of relevant libraries ============================== [pip3] flashinfer-python==0.3.1 [pip3] numpy==2.2.6 [pip3] nvidia-cublas-cu12==12.8.4.1 [pip3] nvidia-cuda-cupti-cu12==12.8.90 [pip3] nvidia-cuda-nvrtc-cu12==12.8.93 [pip3] nvidia-cuda-runtime-cu12==12.8.90 [pip3] nvidia-cudnn-cu12==9.10.2.21 [pip3] nvidia-cudnn-frontend==1.15.0 [pip3] nvidia-cufft-cu12==11.3.3.83 [pi
https://github.com/vllm-project/vllm/issues/27604
open
[ "bug", "nvidia" ]
2025-10-27T20:22:37Z
2025-11-06T02:37:17Z
10
jmkuebler
huggingface/smolagents
1,834
Discussion: how to edit the messages sent to the underlying LLM
Hi! I'm working on a feature to allow a user to add callbacks to modify the content before it is sent to the LLM, inside the agent loop. I noticed this strange behavior where the first user message must start with "New Task:", otherwise I get this cryptic and misleading error message: "Error:\nError while parsing tool call from model output: The model output does not contain any JSON blob.\nNow let's retry: take care not to repeat previous errors! If you have retried several times, try a completely different approach.\n" So I think I have two questions (or maybe one): 1. Is my approach of controlling the message flow by wrapping the `generate` member function of a smolagents agent correct? Or do you recommend a better way to modify messages before sending them to the underlying LLM? 2. Is it expected that the first user message needs to start with "New Task:", or have I found a bug or missing assertion somewhere in the code? Thanks! https://github.com/mozilla-ai/any-agent/blob/f2475d7507c5a78e241ff5f0883b546d796d29fc/src/any_agent/callbacks/wrappers/smolagents.py#L75 I'm on smolagents==1.22.0, Python 3.13. UPDATE: I'm no longer sure that adding "New Task:" is the fix; I am still seeing intermittent errors even when I have that text added. It seems like there is some sort of race condition. I'm confused about where the "messages" content should be edited, since it seems like maybe it's being stored or referenced in multiple places? Any help appreciated!
https://github.com/huggingface/smolagents/issues/1834
closed
[]
2025-10-27T17:28:38Z
2025-10-27T19:02:39Z
null
njbrake
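For issue 1834 above, a hedged sketch of one way to intercept messages before they reach the LLM by monkey-patching the model's `generate` method. `edit_messages` is a hypothetical callback, and whether this is the recommended hook (versus agent-level callbacks) is exactly the open question in the thread.
```python
from smolagents import CodeAgent, InferenceClientModel

def edit_messages(messages):
    # Hypothetical callback: redact, rewrite, or log messages here before they are sent.
    return messages

model = InferenceClientModel()
_original_generate = model.generate

def patched_generate(messages, **kwargs):
    # Assumption: `generate` receives the full message list on each agent step.
    return _original_generate(edit_messages(messages), **kwargs)

model.generate = patched_generate
agent = CodeAgent(tools=[], model=model)
```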
huggingface/peft
2,873
Can I use LoRA fine-tuning twice?
I'm planning to work with a two-stage LoRA fine-tuning pipeline (Stage 1: SFT with code-completion outputs; Stage 2: SFT with full-code outputs; RL follows). My question is: when I continue training the same LoRA adapter in Stage 2, do I risk overwriting or degrading the knowledge learned during Stage 1? In other words, does continuing on the same adapter effectively preserve the Stage 1 capabilities, or should I be using a separate adapter (or a merging strategy) to ensure both sets of skills remain intact? Thank you for any guidance or best-practice pointers!
https://github.com/huggingface/peft/issues/2873
closed
[]
2025-10-27T12:51:45Z
2025-12-05T15:05:00Z
8
tohokulgq
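For issue 2873 above, a hedged sketch of the "continue the same adapter" option: reload the Stage-1 adapter as trainable, keep fine-tuning it, and save the result separately. Paths and the base model are placeholders; whether Stage-1 skills survive still depends on the Stage-2 data and learning rate, so keeping one adapter per stage remains the conservative alternative.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")  # placeholder base model

# Reload the Stage-1 adapter with trainable weights so Stage-2 SFT updates it in place.
model = PeftModel.from_pretrained(base, "path/to/stage1-adapter", is_trainable=True)

# ... run Stage-2 training on `model` here ...

# Save as a new adapter so the Stage-1 checkpoint remains recoverable if Stage 2 degrades it.
model.save_pretrained("path/to/stage2-adapter")
```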
vllm-project/vllm
27,572
[Bug]: chat/completions stream intermittently returns null as finish_reason
### Your current environment ``` My env: vllm 0.10.0 ``` ### 🐛 Describe the bug ``` + curl -kLsS https://127.0.0.1:7888/v1/chat/completions -H 'Content-Type: application/json' --data '{ "model": "ibm/granite-3-8b-instruct", "stream": true, "messages": [ { "role": "system", "content": "You are a helpful assistant." }, { "role": "user", "content": "What is the weather like in Warsaw?" } ], "tools": [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" }, "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] } } }, "required": ["location"] } } ], "tool_choice": "auto" }' data: {"id":"chatcmpl-6ca98c2f19c13c19f39013dfb78bcece","object":"chat.completion.chunk","created":1761566772,"model":"ibm/granite-3-8b-instruct","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]} data: {"id":"chatcmpl-6ca98c2f19c13c19f39013dfb78bcece","object":"chat.completion.chunk","created":1761566772,"model":"ibm/granite-3-8b-instruct","choices":[{"index":0,"delta":{"content":"<"},"logprobs":null,"finish_reason":null}]} data: {"id":"chatcmpl-6ca98c2f19c13c19f39013dfb78bcece","object":"chat.completion.chunk","created":1761566772,"model":"ibm/granite-3-8b-instruct","choices":[{"index":0,"delta":{"content":"tool"},"logprobs":null,"finish_reason":null}]} data: {"id":"chatcmpl-6ca98c2f19c13c19f39013dfb78bcece","object":"chat.completion.chunk","created":1761566772,"model":"ibm/granite-3-8b-instruct","choices":[{"index":0,"delta":{"content":"_"},"logprobs":null,"finish_reason":null}]} data: {"id":"chatcmpl-6ca98c2f19c13c19f39013dfb78bcece","object":"chat.completion.chunk","created":1761566772,"model":"ibm/granite-3-8b-instruct","choices":[{"index":0,"delta":{"content":"call"},"logprobs":null,"finish_reason":null}]} data: {"id":"chatcmpl-6ca98c2f19c13c19f39013dfb78bcece","object":"chat.completion.chunk","created":1761566772,"model":"ibm/granite-3-8b-instruct","choices":[{"index":0,"delta":{"content":">"},"logprobs":null,"finish_reason":null}]} data: [DONE] ``` This happens after running several requests sequentially. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27572
open
[ "bug" ]
2025-10-27T12:14:03Z
2025-11-24T20:27:24Z
13
shuynh2017
huggingface/chat-ui
1,957
Fail to use proxy
How can I make this web app go through a local proxy? I tried a few methods, none of which work.
https://github.com/huggingface/chat-ui/issues/1957
open
[ "support" ]
2025-10-27T06:31:51Z
2025-10-30T03:31:24Z
2
geek0011
huggingface/diffusers
12,547
Fine tuning Dreambooth Flux Kontext I2I Error: the following arguments are required: --instance_prompt
### Describe the bug Hello HF team, @sayakpaul @bghira I'm encountering a persistent issue when trying to fine-tune the black-forest-labs/FLUX.1-Kontext-dev model using the train_dreambooth_lora_flux_kontext.py script. I am following the [official README instructions](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md#training-kontext) for Image-to-Image (I2I) finetuning. My goal is to train a transformation on my own dataset, which is structured for I2I (condition image, target image, and text instruction). ### The Problem Every time I run the script with the correct arguments for I2I finetuning, I get a : `the following arguments are required: --instance_prompt` When I run this [Reproduction], I receive the error: `the following arguments are required: --instance_prompt.` To isolate the issue from my personal dataset, I also tested the exact example command provided in the documentation (the one using `kontext-community/relighting`). I found that this command also fails with `the identical the following arguments are required: --instance_prompt` error. Given that both my custom command and the official example command are failing in the same way, I am trying to understand the origin of this error. It seems the `--instance_prompt` argument is being required even when all I2I-specific arguments are provided. ### Environment **Script**: `examples/dreambooth/train_dreambooth_lora_flux_kontext.py` **Diffusers Version**: I am using the specific commit `05e7a854d0a5661f5b433f6dd5954c224b104f0b` (installed via `pip install -e .` from a clone), as recommended in the README. Could you please help me understand why this might be happening? Is this expected behavior, or am I perhaps missing a configuration step? Thank you for your time! 
### Reproduction ### How to Reproduce I am running the following command, which provides all the necessary arguments for I2I finetuning using my (`dataset_name`, `image_column`, `cond_image_column`, and `caption_column`) using my public dataset: ``` accelerate launch /local-git-path/train_dreambooth_lora_flux_kontext.py \ --pretrained_model_name_or_path="black-forest-labs/FLUX.1-Kontext-dev" \ --output_dir="/local-path/kontext-finetuning-v1" \ --dataset_name="MichaelMelgarejoTotto/mi-dataset-kontext" \ --image_column="output" \ --cond_image_column="file_name" \ --caption_column="instruccion" \ --mixed_precision="bf16" \ --resolution=1024 \ --train_batch_size=1 \ --guidance_scale=1 \ --gradient_accumulation_steps=4 \ --gradient_checkpointing \ --optimizer="adamw" \ --use_8bit_adam \ --cache_latents \ --learning_rate=1e-4 \ --lr_scheduler="constant" \ --lr_warmup_steps=200 \ --max_train_steps=1000 \ --rank=16 \ --seed="0" ``` ### Logs ```shell train_dreambooth_lora_flux_kontext.py: error: the following arguments are required: --instance_prompt ``` ### System Info - 🤗 Diffusers version: 0.35.0.dev0 - Platform: Linux-4.18.0-305.3.1.el8.x86_64-x86_64-with-glibc2.28 - Running on Google Colab?: No - Python version: 3.10.19 - PyTorch version (GPU?): 2.7.1+cu118 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Huggingface_hub version: 0.36.0 - Transformers version: 4.57.1 - Accelerate version: 1.11.0 - PEFT version: 0.17.1 - Bitsandbytes version: 0.48.1 - Safetensors version: 0.6.2 - xFormers version: not installed - Accelerator: NA - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> <img width="639" height="289" alt="Image" src="https://github.com/user-attachments/assets/52a5168d-0089-4aab-834e-fa39cab0034d" /> ### Who can help? _No response_
https://github.com/huggingface/diffusers/issues/12547
closed
[ "bug" ]
2025-10-27T00:21:34Z
2025-10-28T02:31:42Z
7
MichaelMelgarejoFlorez
huggingface/transformers
41,876
LlamaAttention num_heads
### System Info In older versions of transformers, LlamaAttention initialized the attribute num_heads. class LlamaAttention(nn.Module): def __init__(self, config): self.num_heads = config.num_attention_heads self.head_dim = config.hidden_size // config.num_attention_heads However, in recent versions this attribute has been removed, causing mismatches when running older code. It seems num_key_value_heads is also deprecated. This issue could be addressed by adding: self.num_heads = config.num_attention_heads # shanhx self.num_key_value_heads = config.num_key_value_heads Are there any reasons why these attributes were removed? Is it intended or a bug? ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The num_heads attribute still remained in 4.44 but is missing in 4.54. ### Expected behavior Many attributes are missing from LlamaAttention.
https://github.com/huggingface/transformers/issues/41876
closed
[ "bug" ]
2025-10-27T00:07:31Z
2025-10-31T00:13:31Z
2
shanhx2000
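For the LlamaAttention question above (huggingface/transformers #41876), a hedged workaround sketch: read the head counts from the config, which still carries them, instead of the attention module, or re-attach the attributes for legacy code. The re-attachment is a monkey-patch, not an official API, and the checkpoint name is a placeholder.
```python
# Sketch assuming a recent transformers release where LlamaAttention no longer stores
# num_heads/num_key_value_heads.
from transformers import AutoConfig, AutoModelForCausalLM

checkpoint = "meta-llama/Llama-2-7b-hf"  # placeholder
config = AutoConfig.from_pretrained(checkpoint)
num_heads = config.num_attention_heads
num_kv_heads = config.num_key_value_heads
head_dim = getattr(config, "head_dim", config.hidden_size // config.num_attention_heads)

# If third-party code insists on reading attn.num_heads, re-attach the attributes after
# loading (unofficial monkey-patch, may break again in future releases):
model = AutoModelForCausalLM.from_pretrained(checkpoint)
for layer in model.model.layers:
    layer.self_attn.num_heads = num_heads
    layer.self_attn.num_key_value_heads = num_kv_heads
```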
huggingface/transformers
41,874
Distributed training of SigLIP
https://github.com/huggingface/transformers/blob/v4.57.1/src/transformers/models/siglip/modeling_siglip.py#L983 is where the SigLIP loss is computed. In SigLIP, the different TPUs exchange data with each other. I want to know how to train a model in this way.
https://github.com/huggingface/transformers/issues/41874
closed
[]
2025-10-26T14:43:51Z
2025-12-04T08:02:55Z
1
zyk1559676097-dot
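For the SigLIP question above (huggingface/transformers #41874), a rough illustration of the idea, not the transformers implementation: each rank gathers the text features from all other ranks so its local images see every text as a negative. The paper's chunked exchange avoids materializing the full gather; the simpler all_gather version below only conveys the mechanism and assumes an initialized process group (e.g. via torchrun or accelerate).
```python
# Illustrative sketch only; shapes and helper names are assumptions.
import torch
import torch.distributed as dist
import torch.distributed.nn  # differentiable all_gather
import torch.nn.functional as F

def siglip_loss_distributed(img_emb, txt_emb, logit_scale, logit_bias):
    """img_emb, txt_emb: (local_batch, dim), already L2-normalized."""
    if dist.is_initialized() and dist.get_world_size() > 1:
        # every rank receives the text features of all ranks (gradients flow back)
        txt_all = torch.cat(torch.distributed.nn.all_gather(txt_emb), dim=0)
        rank = dist.get_rank()
    else:
        txt_all, rank = txt_emb, 0
    logits = logit_scale * img_emb @ txt_all.t() + logit_bias   # (local_b, global_b)
    local_b = img_emb.size(0)
    targets = -torch.ones_like(logits)                          # every pair is a negative...
    idx = torch.arange(local_b, device=logits.device)
    targets[idx, rank * local_b + idx] = 1.0                    # ...except this rank's diagonal
    return -F.logsigmoid(targets * logits).sum() / local_b
```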
huggingface/transformers
41,861
transformers.Adafactor is almost 2x slower on Windows than on Linux - even WSL is slow; what can the reason be?
I am training a Qwen Image model with the Kohya Musubi tuner: https://github.com/kohya-ss/musubi-tuner The exact same setup on the same machine is almost 2x faster on Linux (9.5 seconds/it on Windows vs 5.8 seconds/it on Linux). On Windows it can't utilize the GPU's power; it draws about 250 W out of 575 W. What can the culprit be? transformers==4.54.1, torch 2.8, CUDA 12.9, tested on an RTX 5090. This is what Codex suggests, but I don't know if it is true; it doesn't make sense to me. <img width="1637" height="736" alt="Image" src="https://github.com/user-attachments/assets/81b687c7-801e-4265-a2fd-6d1eae065637" /> ### Who can help? trainer: @SunMarc kernels: @MekkCyber @drbh
https://github.com/huggingface/transformers/issues/41861
closed
[ "bug" ]
2025-10-25T15:49:47Z
2025-12-03T08:02:55Z
null
FurkanGozukara
huggingface/transformers
41,859
Human Verification not working?
### System Info Hello! I need your help because I can't verify my identity via email: I receive a link, open it, but get a blank page and nothing else((( I've tried several times. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Navigate to the Hugging Face website. 2. Register or log in to your account. 3. Go to the identity verification section. 4. Submit a request for the identity verification link. 5. Get the confirmation email to arrive. 6. Follow confirmation link in email 7. Get blank page in site example https://huggingface.co/email_confirmation/zKFZszGtcabRsYOURYmCQkXdfzIY ### Expected behavior The identity verification link should work
https://github.com/huggingface/transformers/issues/41859
closed
[ "bug" ]
2025-10-25T10:48:52Z
2025-10-26T12:29:10Z
4
thefued
huggingface/lerobot
2,311
Question: How can I train online only, without a dataset?
How I can train only online? without need of dataset. Can I do it without hugging face repo id? only local? I try like that without success: ``` cat > "train_cfg.json" <<'JSON' { "job_name": "hilserl_fetch_pick_v4_cpu", "seed": 0, "env": { "type": "gymnasium-robotics", "task": "FetchPickAndPlace-v4", "episode_length": 200, "features_map": { "action": "action", "agent_pos": "observation.state", "top": "observation.image", "pixels/top": "observation.image" }, "features": { "action": { "type": "ACTION", "shape": [ 4 ] }, "agent_pos": { "type": "STATE", "shape": [ 4 ] }, "pixels/top": { "type": "VISUAL", "shape": [ 480, 480, 3 ] } } }, "policy": { "type": "sac", "device": "cpu", "concurrency": { "actor": "threads", "learner": "threads" }, "repo_id": "None", "push_to_hub": false }, "dataset": { "repo_id": "online-buffer", "root": "${{ github.workspace }}/dataset", "use_imagenet_stats": true } } JSON mkdir -p dataset/online-buffer export HF_HUB_OFFLINE=1 export HF_HUB_DISABLE_TELEMETRY=1 export HF_DATASETS_OFFLINE=1 export WANDB_MODE=disabled # Launch learner and actor (one shell) python -m lerobot.rl.learner --config_path "train_cfg.json" python -m lerobot.rl.actor --config_path "train_cfg.json" ```
https://github.com/huggingface/lerobot/issues/2311
open
[ "question", "dataset" ]
2025-10-25T05:07:48Z
2025-10-27T08:50:11Z
null
talregev
vllm-project/vllm
27,505
[Bug]: Value error, Found conflicts between 'rope_type=default' (modern field) and 'type=mrope'
### Your current environment <details> <summary>The output of <code>python collect_env.py</code></summary> ```text Your output of `python collect_env.py` here ``` </details> ### 🐛 Describe the bug vllm 0.11.0 transformers 5.0.0.dev0 torch 2.8.0+cu129 model base: Qwen2.5-VL-7B-instruct. How to solve this problem? <img width="1250" height="602" alt="Image" src="https://github.com/user-attachments/assets/c6b13dff-1d6a-4872-a959-f8076fff43e6" /> ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27505
open
[ "bug" ]
2025-10-25T04:39:53Z
2025-10-26T07:33:27Z
1
asirgogogo
vllm-project/vllm
27,504
[Usage]: `add_vision_id` ignored for Qwen 2.5-VL-32B-Instruct
### Your current environment ```text ============================== System Info ============================== OS : Ubuntu 24.04.3 LTS (x86_64) GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 Clang version : Could not collect CMake version : Could not collect Libc version : glibc-2.39 ============================== PyTorch Info ============================== PyTorch version : 2.8.0+cu128 Is debug build : False CUDA used to build PyTorch : 12.8 ROCM used to build PyTorch : N/A ============================== Python Environment ============================== Python version : 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime) Python platform : Linux-6.8.0-85-generic-x86_64-with-glibc2.39 ============================== CUDA / GPU Info ============================== Is CUDA available : True CUDA runtime version : 12.8.93 CUDA_MODULE_LOADING set to : LAZY GPU models and configuration : GPU 0: NVIDIA RTX A6000 GPU 1: NVIDIA RTX A6000 GPU 2: NVIDIA RTX A6000 GPU 3: NVIDIA RTX A6000 Nvidia driver version : 570.124.06 cuDNN version : Probably one of the following: /usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn.so.9.8.0 /usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_adv.so.9.8.0 /usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_cnn.so.9.8.0 /usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9.8.0 /usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9.8.0 /usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_graph.so.9.8.0 /usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_heuristic.so.9.8.0 /usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_ops.so.9.8.0 /usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn.so.9.8.0 /usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_adv.so.9.8.0 /usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_cnn.so.9.8.0 /usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9.8.0 /usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9.8.0 /usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_graph.so.9.8.0 /usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_heuristic.so.9.8.0 /usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_ops.so.9.8.0 HIP runtime version : N/A MIOpen runtime version : N/A Is XNNPACK available : True ============================== CPU Info ============================== Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 57 bits virtual Byte Order: Little Endian CPU(s): 32 On-line CPU(s) list: 0-31 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) Gold 6346 CPU @ 3.10GHz CPU family: 6 Model: 106 Thread(s) per core: 1 Core(s) per socket: 16 Socket(s): 2 Stepping: 6 CPU(s) scaling MHz: 23% CPU max MHz: 3600.0000 CPU min MHz: 800.0000 BogoMIPS: 6200.00 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw 
avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities Virtualization: VT-x L1d cache: 1.5 MiB (32 instances) L1i cache: 1 MiB (32 instances) L2 cache: 40 MiB (32 instances) L3 cache: 72 MiB (2 instances) NUMA node(s): 2 NUMA node0 CPU(s): 0-15 NUMA node1 CPU(s): 16-3
https://github.com/vllm-project/vllm/issues/27504
open
[ "usage" ]
2025-10-25T03:42:44Z
2025-10-26T07:32:49Z
1
justachetan
huggingface/lighteval
1,028
How to evaluate MMLU-Pro
Hi, Thank you for the wonderful work! I just want to ask how to perform the evaluation on MMLU-Pro, as I don't see any related code besides the README.
https://github.com/huggingface/lighteval/issues/1028
open
[]
2025-10-24T20:03:10Z
2025-11-04T10:40:46Z
null
qhz991029
huggingface/tokenizers
1,879
Rust tokenizer
Hello. Is there a Rust tokenizer, please? ChatGPT told me there used to be one. Best regards!
https://github.com/huggingface/tokenizers/issues/1879
open
[]
2025-10-24T17:03:04Z
2025-10-24T22:03:31Z
2
gogo2464
vllm-project/vllm
27,482
[Bug]: `return_token_ids` missing tokens when using tool calls
### Your current environment Testing with latest vLLM builds from main, as of Fri Oct 24th 2025 (when this bug was opened). ### 🐛 Describe the bug The `return_token_ids` parameter that is supposed to return all generated token ids back to the client is missing quite a few tokens for Chat Completion streaming requests that result in tool calls being generated. Exactly how many and where they are missing in the request will depend on the tool call parser in use as well as the exact request format. Here's a minimal reproducer. First, run vLLM with a tool call parser and model. I use a Granite model for testing here, but it should be roughly the same for any model with a tool call parser. ``` vllm serve ibm-granite/granite-3.3-8b-instruct \ --enable-auto-tool-choice \ --tool-call-parser granite ``` Then, send a streaming tool call request to the server and check the response for missing tokens: ```python from openai import OpenAI client = OpenAI(base_url="http://localhost:8000/v1", api_key="fake") tools = [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city, e.g. San Francisco, CA" }, "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] } }, "required": ["location"] } } }, ] response = client.chat.completions.create( model="ibm-granite/granite-3.3-8b-instruct", messages=[{"role": "user", "content": "What is the weather in Sydney in celsius?"}], tools=tools, tool_choice="auto", stream=True, stream_options={ "include_usage": True, "continuous_usage_stats": True, }, extra_body={"return_token_ids": True}, ) returned_token_ids = [] last_completion_tokens = 0 for event in response: if not getattr(event, "choices", None): continue choice = event.choices[0] usage = event.usage if hasattr(choice, "token_ids"): returned_token_ids.extend(choice.token_ids) num_token_ids = len(choice.token_ids) else: num_token_ids = 0 elapsed_completion_tokens = usage.completion_tokens - last_completion_tokens if elapsed_completion_tokens != num_token_ids: raise ValueError( "Model generated more tokens than returned by return_token_ids!\n" f"All tokens returned so far: {returned_token_ids}" ) last_completion_tokens = usage.completion_tokens ``` Running that, I get the following output: ``` python return_token_ids_test.py Traceback (most recent call last): File "/Volumes/SourceCode/vllm/return_token_ids_test.py", line 49, in <module> raise ValueError( ValueError: Model generated more tokens than returned by return_token_ids! All tokens returned so far: [49154, 48685] ``` If I add a bit of debug logging into vLLM server side and run it again, I can see the list of tokens that should have been returned: `current_token_ids: [49154, 7739, 8299, 563, 3447, 2645, 563, 313, 16716, 6161, 910, 392, 313, 2243, 563, 313, 3308, 101, 3263, 3918, 313, 426, 563, 313, 371, 81, 1700, 81, 15859, 48685]` All of the tokens between the first and last in that list were missed by `return_token_ids`. This code is not executed for every generated token when tool call parser (or reasoning parsers, most likely) are in use: https://github.com/vllm-project/vllm/blob/61089465a6101790635ed96c26df3e9a57d8d2c9/vllm/entrypoints/openai/serving_chat.py#L1090 The reason is because we return early at: https://github.com/vllm-project/vllm/blob/61089465a6101790635ed96c26df3e9a57d8d2c9/vllm/entrypoints/openai/serving_chat.py#L1063 ### Before submitting a new issue... 
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27482
closed
[ "bug" ]
2025-10-24T16:10:31Z
2025-12-04T19:09:41Z
2
bbrowning
vllm-project/vllm
27,479
[Bug]: Low GPU utilization with Embedding Model
### Your current environment <details> <summary>The output of <code>python collect_env.py</code></summary> ```text Your output of `python collect_env.py` here ``` </details> ### 🐛 Describe the bug Initializing LLM(model="Qwen/Qwen3-Embedding-0.6B", task="embed") on a single B200 (180 GB) immediately reserves ~80% GPU memory (likely PagedAttention KV block pre-allocation). During embedding, GPU-Util stays <40%, whereas a naive Transformers inference with batch_size=512 reaches >80% utilization and memory use on the same box. Is heavy KV Cache pre-allocation expected for task="embed" (prefill-only)? And is there any method to improve the GPU-Util? ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27479
open
[ "bug" ]
2025-10-24T15:18:05Z
2025-10-24T15:25:38Z
1
JhaceLam
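For the embedding-utilization question above (vllm-project/vllm #27479), the upfront reservation is governed by `gpu_memory_utilization`, since vLLM pre-allocates KV-cache blocks even for pooling tasks. Below is a hedged sketch of the usual knobs; the exact utilization gain is not guaranteed, and the argument set may differ across vLLM versions.
```python
# Hedged sketch: lower the upfront memory reservation and let the scheduler batch more
# sequences per step for prefill-only embedding work. Verify argument names against your
# installed vLLM version.
from vllm import LLM

llm = LLM(
    model="Qwen/Qwen3-Embedding-0.6B",
    task="embed",
    gpu_memory_utilization=0.5,   # reserve roughly half the card instead of ~80%
    max_num_seqs=512,             # allow larger scheduling batches
)
outputs = llm.embed(["first sentence", "second sentence"])
print(len(outputs[0].outputs.embedding))
```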
vllm-project/vllm
27,477
[Bug]: First prompt token missing when requested with "echo"
### Your current environment vllm installed from main: `vllm 0.11.1rc3.dev23+g61089465a.precompiled` ### 🐛 Describe the bug Is it expected behavior that echo isn't returning the first token of the prompt? I am trying to collect exact prompt_token_ids which went into the model served with vllm serve , so I am doing this: ```bash VLLM_LOGGING_LEVEL=DEBUG vllm serve openai/gpt-oss-20b -tp 1 --enforce-eager --return-tokens-as-token-ids --enable-log-requests --enable-prompt-tokens-details ``` and with this snippet: ```python from openai import OpenAI client = OpenAI( api_key="EMPTY", base_url="http://localhost:8000/v1" ) messages = [ {"role": "user", "content": "Continue: The quick brown fox"}, ] response = client.chat.completions.create( model="openai/gpt-oss-20b", messages=messages, temperature=0.0, max_tokens=1024, logprobs=True, extra_body={ "echo": True, } ) print(response.model_extra['prompt_logprobs']) ``` I am seeing: `[None, 17360, 200008, ...]` whereas the vllm server logs are printing this: `[200006, 17360, 200008, ...]` which is correct as the first token is and should be `200006` == `<|start|>` . Not sure why is it `None` in the ChatCompletion object ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27477
closed
[ "bug" ]
2025-10-24T14:43:50Z
2025-10-24T15:04:01Z
2
eldarkurtic
huggingface/text-generation-inference
3,336
Get inference endpoint model settings via client
### Feature request Enable commands via clients such as `OpenAI` that would get model settings from an inference endpoint. Does this exist and I just can't find it? ### Motivation There is currently no clear way to get inference model settings directly from an endpoint. Individual base models have their original settings, but this does not necessarily translate to an endpoint. As an example, [Microsoft's Phi-3 model](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) supports 128k context length as input, but if instantiated as an endpoint on a 24GB gpu the allowed input context length is less (48k). The only way I have found to access the information regarding an individual endpoint is via `huggingface_hub`, specifically: ``` from huggingface_hub import get_inference_endpoint endpoint = get_inference_endpoint(ENDPOINT_NAME, namespace=USERNAME, token=api_key) ``` To get the general settings, you can then access the `raw` dict of the endpoint's image. For example, if I want to get the context length of a specific model at an endpoint, I can do it this way: ``` # the settings/specs of the endpoint in a 'llamacpp' image settings = endpoint.raw['model']['image']['llamacpp'] # this allows me to get info like context length (via the ['ctxSize']) key >>> print(settings['ctxSize']) 48000 ``` This is problematic when sending prompts to an endpoint - if it were easier to query model properties programmatically, then I could write code to adjust queries on the fly appropriately depending on the target model. As it is, the sender needs to know the properties of a particular endpoint beforehand. IMO what is needed is to be able to get this info directly from a client. In the OpenAI client in the Huggingface Inference API there seems to be some functionality for this, i.e. I can instantiate a client: ``` client = OpenAI( base_url=endpoint, # AWS/server URL api_key=api_key, # huggingface token ) ``` Then I can get a list of models at that url: ``` print(client.models.list()) ``` But this only prints out basic information, which doesn't include such things as context length. Is there a way to get this info from the client that I'm just missing? I have noticed when there are errors related to input length, the client returns an error with the key `n_ctx`. For example, if a model I'm working with has a 12k context window and I send 13k tokens, the error is: ``` openai.BadRequestError: Error code: 400 - {'error': {'code': 400, 'message': 'the request exceeds the available context size, try increasing it', 'type': 'exceed_context_size_error', 'n_prompt_tokens': 13954, 'n_ctx': 12032}} ``` This tells me that the client has access to the overall settings, but it's not clear to me how to get them. ### Your contribution Happy to work on this if someone can point me where to look for relevant code that would pass inference endpoint settings info to the client, perhaps via the `client.models.list()` method.
https://github.com/huggingface/text-generation-inference/issues/3336
closed
[]
2025-10-24T13:07:15Z
2025-10-30T14:10:46Z
1
lingdoc
huggingface/datasets
7,829
Memory leak / Large memory usage with num_workers = 0 and numerous datasets within a DatasetDict
### Describe the bug Hi team, first off, I love the datasets library! 🥰 I'm encountering a potential memory leak / increasing memory usage when training a model on a very large DatasetDict. Setup: I have a DatasetDict containing 362 distinct datasets, which sum up to ~2.8 billion rows. Training Task: I'm performing contrastive learning with SentenceTransformer and Accelerate on a single node with 4 H100, which requires me to sample from only one dataset at a time. Training Loop: At each training step, I sample ~16,000 examples from a single dataset, and then switch to a different dataset for the next step. I iterate through all 362 datasets this way. Problem: The process's memory usage continuously increases over time, eventually causing a stale status where GPUs would stop working. It seems memory from previously sampled datasets isn't being released. I've set num_workers=0 for all experiments. Chart 1: Standard DatasetDict The memory usage grows steadily until it make the training stale (RSS memory) <img width="773" height="719" alt="Image" src="https://github.com/user-attachments/assets/6606bef5-1153-4f2d-bf08-82da249d6e8d" /> Chart 2: IterableDatasetDict I also tried to use IterableDatasetDict and IterableDataset. The memory curve is "smoother," but the result is the same: it grows indefinitely and the training become stale. <img width="339" height="705" alt="Image" src="https://github.com/user-attachments/assets/ee90c1a1-6c3b-4135-9edc-90955cb1695a" /> Any feedback or guidance on how to manage this memory would be greatly appreciated! ### Steps to reproduce the bug WIP, I'll add some code that manage to reproduce this error, but not straightforward. ### Expected behavior The memory usage should remain relatively constant or plateau after a few steps. Memory used for sampling one dataset should be released before or during the sampling of the next dataset. ### Environment info Python: 3.12 Datasets: 4.3.0 SentenceTransformers: 5.1.1
https://github.com/huggingface/datasets/issues/7829
open
[]
2025-10-24T09:51:38Z
2025-11-06T13:31:26Z
4
raphaelsty
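For the datasets memory issue above (huggingface/datasets #7829), a hypothetical reproduction sketch based only on the description (many datasets in a DatasetDict, roughly 16k rows sampled from a single dataset per step); the dataset sizes, names, and row contents are made up.
```python
# Hypothetical reproduction sketch, not the author's code: cycle over datasets in a
# DatasetDict, sample a block of rows from one dataset per step, and watch RSS grow.
import os
import random
import psutil
from datasets import Dataset, DatasetDict

dsets = DatasetDict({
    f"ds_{i}": Dataset.from_dict({"text": [f"row {j}" for j in range(100_000)]})
    for i in range(20)  # stand-in for the 362 real datasets
})

proc = psutil.Process(os.getpid())
names = list(dsets.keys())
for step in range(1_000):
    name = names[step % len(names)]
    ds = dsets[name]
    idx = random.sample(range(len(ds)), k=16_000)
    batch = ds.select(idx)          # sample a block from a single dataset
    _ = batch["text"]               # materialize the column, as a training step would
    if step % 50 == 0:
        print(step, name, f"RSS={proc.memory_info().rss / 1e9:.2f} GB")
```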
huggingface/transformers
41,842
Incorrect usage of `num_items_in_batch`?
It seems that `num_items_in_batch` is computed for all items in the batch [here](https://github.com/huggingface/transformers/blob/9c20660138830ca362533551ca978c27b48283a1/src/transformers/trainer.py#L2430). However, when loss is computed in the `training_step`, it is computed for each input in the batch one by one. Does it make sense to pass `num_items_in_batch` (for the whole batch) or should that number be for that particular input only? Right now, the entire batch's `num_items_in_batch` is used [here](https://github.com/huggingface/transformers/blob/9c20660138830ca362533551ca978c27b48283a1/src/transformers/trainer.py#L2486).
https://github.com/huggingface/transformers/issues/41842
closed
[]
2025-10-24T07:36:00Z
2025-12-01T08:02:48Z
2
gohar94
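For the `num_items_in_batch` question above (huggingface/transformers #41842), the whole-batch count appears to be intentional: with gradient accumulation, each micro-batch's summed token loss is divided by the token count of the full accumulated batch, so the accumulated gradients match those of one large batch. A simplified sketch of that idea (not the Trainer's exact code):
```python
# Simplified illustration, not the Trainer implementation: count non-ignored label tokens
# across all micro-batches of one optimizer step, and normalize each micro-batch's summed
# loss by that global count.
import torch

IGNORE_INDEX = -100

def num_items_in_batch(micro_batches):
    return sum(int((mb["labels"] != IGNORE_INDEX).sum()) for mb in micro_batches)

def micro_batch_loss(summed_token_loss, n_items):
    # summed (not mean) cross-entropy for one micro-batch, divided by the global count;
    # summing these over all micro-batches yields the correctly normalized batch loss
    return summed_token_loss / n_items
```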
vllm-project/vllm
27,463
[Usage]: How to request DeepSeek-OCR with an HTTP request
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm I want to request DeepSeek-OCR over HTTP; is there any example for it? ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27463
closed
[ "usage" ]
2025-10-24T07:07:29Z
2025-10-29T17:26:49Z
8
YosanHo
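For the DeepSeek-OCR question above (vllm-project/vllm #27463), vLLM exposes an OpenAI-compatible server, so an image-plus-prompt request can be sent to /v1/chat/completions. In the sketch below the served model name, the prompt text, and the file name are assumptions; use whatever name you passed to `vllm serve`.
```python
# Hedged sketch of an HTTP request to a vLLM OpenAI-compatible endpoint with an image.
import base64
import requests

with open("page.png", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode()

payload = {
    "model": "deepseek-ai/DeepSeek-OCR",   # assumed served model name
    "messages": [{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{img_b64}"}},
            {"type": "text", "text": "Free OCR."},   # assumed prompt
        ],
    }],
    "max_tokens": 2048,
}
resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload, timeout=300)
print(resp.json()["choices"][0]["message"]["content"])
```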
huggingface/lerobot
2,306
how to use groot without flash attention
My system is Ubuntu 20.04 with glibc 2.3.1, which does not support flash attention. Can I modify the config of groot to use it with normal attention?
https://github.com/huggingface/lerobot/issues/2306
open
[ "question", "policies", "dependencies" ]
2025-10-24T06:35:18Z
2025-11-04T01:28:38Z
null
shs822
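For the groot question above (huggingface/lerobot #2306), if the policy's backbone is loaded through transformers, the generic knob is `attn_implementation`; exactly where lerobot exposes this for GR00T is an assumption to verify in its policy config. A minimal sketch at the transformers level:
```python
# Hedged sketch: request SDPA (or eager) attention when the backbone is created, which
# avoids flash-attn kernels entirely. The checkpoint name is a placeholder.
from transformers import AutoModel

backbone = AutoModel.from_pretrained(
    "some/backbone-checkpoint",     # placeholder
    attn_implementation="sdpa",     # or "eager"
)
```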
huggingface/lerobot
2,305
Dependency error with the `transformers` library
### System Info ```Shell - lerobot version: 0.4.0 - Platform: Linux-6.14.0-29-generic-x86_64-with-glibc2.39 - Python version: 3.12.12 - Huggingface Hub version: 0.35.3 - Datasets version: 4.1.1 - Numpy version: 2.2.6 - PyTorch version: 2.7.0+cu128 - Is PyTorch built with CUDA support?: True - Cuda version: 12.8 - GPU model: NVIDIA RTX PRO 6000 Blackwell Workstation Edition - Using GPU in script?: <fill in> ``` ### Information - [ ] One of the scripts in the examples/ folder of LeRobot - [ ] My own task or dataset (give details below) ### Reproduction # Environment I used the `uv` tools to auto-solve the environment. The `pyproject.toml` is shown as following. ``` [project] name = "openpi-pytorch-env2" version = "0.1.0" description = "Add your description here" requires-python = "==3.12.12" dependencies = [ # Pytorch 依赖项 "torch==2.7.0", "torchvision==0.22.0", "torchaudio==2.7.0", "pytorch_lightning", # lerobot-libero "libero @ git+https://github.com/huggingface/lerobot-libero.git#egg=libero", # lerobot "lerobot[all] @ git+https://github.com/huggingface/lerobot.git@v0.4.0", ] [tool.uv.sources] torch = { index = "pytorch-cu128" } torchvision = { index = "pytorch-cu128" } torchaudio = { index = "pytorch-cu128" } [[tool.uv.index]] name = "pytorch-cu128" url = "https://download.pytorch.org/whl/cu128" explicit = true ``` # BUG Report When I was running the `pi0` code ``` import os import torch from lerobot.policies.pi0.modeling_pi0 import PI0Policy from transformers import AutoTokenizer MODEL_PATH = os.path.expanduser("~/Models/pi0_base") policy = PI0Policy.from_pretrained(MODEL_PATH) ``` There are errors like: ``` An incorrect transformer version is used, please create an issue on https://github.com/huggingface/lerobot/issues ImportError: cannot import name 'check' from 'transformers.models.siglip' (/opt/miniforge3/envs/pi0_torch2/lib/python3.12/site-packages/transformers/models/siglip/__init__.py) During handling of the above exception, another exception occurred: File "/home/robot/pi0/openpi_pytorch2/test_simple.py", line 22, in <module> policy = PI0Policy.from_pretrained(MODEL_PATH) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ValueError: An incorrect transformer version is used, please create an issue on https://github.com/huggingface/lerobot/issues ``` The transformer lib is auto-solved by the `pyproject.toml` in `lerobot` lib. Can you solve the error? Thanks ### Expected behavior Loading the weights successfully.
https://github.com/huggingface/lerobot/issues/2305
open
[ "question", "policies", "dependencies" ]
2025-10-24T05:59:32Z
2025-11-14T16:01:49Z
null
sunshineharry
vllm-project/vllm
27,454
[Usage]: How to set the expert id on each EP by myself after setting EP in Deepseek (how to reorder experts?)
### Your current environment ```text vllm 0.8.5 ``` ### How would you like to use vllm I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27454
open
[ "usage" ]
2025-10-24T03:15:16Z
2025-10-24T07:27:50Z
2
HameWu
vllm-project/vllm
27,448
[Usage]: how to pass multi-turn multimodal messages to vLLM?
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27448
open
[ "usage" ]
2025-10-24T02:41:45Z
2025-10-24T03:33:13Z
1
cqray1990
huggingface/lerobot
2,304
How to load a local model?
For example, i'm trying to fine-tune pi0, so I downloaded pi0_base locallly and save it in [position A,like lerobot/models/pi0_base] ,which has 5 files in total,including model.safetensors. Then how to load it in code? I used to just set model.path=[position A] But followed tuorial, it uses pretrained_path_or_name as key words. Howover, my code raised error here: ```python print(f"Loading model from: {pretrained_name_or_path}") try: from transformers.utils import cached_file # Try safetensors first resolved_file = cached_file( pretrained_name_or_path, "model.safetensors", cache_dir=kwargs.get("cache_dir"), force_download=kwargs.get("force_download", False), resume_download=kwargs.get("resume_download"), proxies=kwargs.get("proxies"), use_auth_token=kwargs.get("use_auth_token"), revision=kwargs.get("revision"), # local_files_only=kwargs.get("local_files_only", False), local_files_only=True # I set this for experiment but failed too ) from safetensors.torch import load_file original_state_dict = load_file(resolved_file) print("✓ Loaded state dict from model.safetensors") except Exception as e: print(f"Could not load state dict from remote files: {e}") print("Returning model without loading pretrained weights") return model ``` Its outputs: Loading model from: /home/user/working_folder/lerobot/local/model/pi0_base (I use this absolute path) Could not load state dict from remote files: /home/user/working_folder/lerobot/local/model/pi0_base does not appear to have a file named model.safetensors. Checkout 'https://huggingface.co//home/user/working_folder/lerobot/local/model/pi0_base/tree/main' for available files. It seems that the program see my pretrain_path_or_name as a repo_id :/ How can I introduce local pretrained path? * Ok I know that my file is incorrect. It's my bad not code's
https://github.com/huggingface/lerobot/issues/2304
closed
[]
2025-10-24T01:59:26Z
2025-10-24T02:33:25Z
null
milong26
vllm-project/vllm
27,441
[Bug]: vllm/v1/core/sched/scheduler.py: Unintended reordering of requests during scheduling
### Your current environment <details> This error is independent of the environment. </details> ### 🐛 Describe the bug ### Description The function `schedule()` in [vllm/v1/core/sched/scheduler.py](https://github.com/vllm-project/vllm/blob/main/vllm/v1/core/sched/scheduler.py) is responsible for scheduling inference requests. In certain cases — such as when a request is waiting for KV blocks from a remote prefill worker or when the token budget is exhausted — the request must be reinserted into the waiting queue `self.waiting`. Currently, the implementation pops such requests, prepends them to skipped_waiting_requests, and then prepends skipped_waiting_requests back to self.waiting. However, this behavior can shuffle the request order, potentially impacting the tail latency of request serving. ### How to Fix Replace all calls to `skipped_wating_requests.prepend_request(request)` with `skipped_wating_requests.add_request(request)` ### Result <img width="1445" height="801" alt="Image" src="https://github.com/user-attachments/assets/4e81a662-c527-4b15-a5d1-8e78150961e8" /> The figure compares the request-serving timelines of the original (left) and fixed (right) versions. * X-axis: Time * Y-axis: Request ID (submission order) * Green: Duration while the request is in `self.waiting` * Black: Time between GPU memory allocation and completion of the request’s prefill computation * Red: Time between the end of prefill computation and GPU memory release (while waiting for the remote decoder to read KV blocks) The scheduling policy used is FCFS. In the original version, requests are shuffled under resource pressure. After applying the fix, the request serving order remains consistent, as expected. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27441
open
[ "bug" ]
2025-10-23T22:35:50Z
2025-11-22T04:20:35Z
1
dongha-yoon
huggingface/lerobot
2,303
Question: Does the follower arm have an API for scripting movement?
Hi, apologies if this has been answered before or if it's not the right place to ask. I've been using the SO-101 arms for imitation learning, but recently I've wanted to try and test out the follower arm for embodied reasoning models such as Gemini ER 1.5. To do this, I figure I would need to have some way to map outputs from the ER model (coordinates or general, high-level movements) to movements for the SO-101. Does the SO-101 have an API for this type of low-level movement control, e.g. if I just wanted to move it along a pre-scripted path in space using coordinates or motor motion? What would the code for this type of low-level movement look like? Thank you so much for any and all help!
https://github.com/huggingface/lerobot/issues/2303
open
[ "question", "robots", "python" ]
2025-10-23T20:40:56Z
2025-10-23T22:29:28Z
null
Buttmunky1
huggingface/lerobot
2,294
Question about the HuggingFaceVLA/smolvla_libero Model Configuration
Hello, Lerobot has officially ported [LIBERO](https://github.com/huggingface/lerobot/issues/1369#issuecomment-3323183721), and we can use the checkpoint at [HuggingFaceVLA/smolvla_libero](https://huggingface.co/HuggingFaceVLA/smolvla_libero) to evaluate the LIBERO benchmark. However, the model configuration of [HuggingFaceVLA/smolvla_libero](https://huggingface.co/HuggingFaceVLA/smolvla_libero) appears to differ from the [original model](https://huggingface.co/lerobot/smolvla_base). For example: [lerobot/smolvla_base](https://huggingface.co/lerobot/smolvla_base/blob/main/config.json) ```json { "vlm_model_name": "HuggingFaceTB/SmolVLM2-500M-Video-Instruct", "load_vlm_weights": true, "add_image_special_tokens": false, "attention_mode": "cross_attn", "prefix_length": 0, "pad_language_to": "max_length", "num_expert_layers": 0, "num_vlm_layers": 16, "self_attn_every_n_layers": 2, "expert_width_multiplier": 0.75 } ``` [HuggingFaceVLA/smolvla_libero](https://huggingface.co/HuggingFaceVLA/smolvla_libero/blob/main/config.json) ```json { "vlm_model_name": "HuggingFaceTB/SmolVLM2-500M-Instruct", "load_vlm_weights": true, "add_image_special_tokens": false, "attention_mode": "cross_attn", "prefix_length": 0, "pad_language_to": "longest", "num_expert_layers": -1, "num_vlm_layers": 0, <- it becomes 32 when model is initialized "self_attn_every_n_layers": 2, "expert_width_multiplier": 0.5, } ``` In particular, `num_vlm_layers` is set to 32 across all layers, which is not consistent with the [paper](https://arxiv.org/pdf/2506.01844) where they use half of them (16 layers). Could you provide the original model checkpoint and the training recipe so we can reproduce the LIBERO benchmark performance?
https://github.com/huggingface/lerobot/issues/2294
open
[ "question", "policies" ]
2025-10-23T13:37:48Z
2025-10-30T07:49:17Z
null
Hesh0629
vllm-project/vllm
27,413
[Usage]: how to request a qwen2.5-VL-7B classify model served by vllm using openai SDK?
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm I launch a server with the following command to serving a Qwen2.5-VL-7B model finetued for seqence classification. (this model replaced the lm_head with a 2 classes score_head) The launch command is : ``` vllm serve --model=//video_classification/qwenvl_7b_video_cls/v5-20251011-121851/2340_vllm_format --served_model_name Qwen2.5-7B-shenhe --task=classify --port=8080 --tensor-parallel-size=2 ``` I don't know how to request the server with the openAI sdk. I use the code snnipet showed below which works well with pure text, but it got 400 bad request when I put the video url into the prompt this works well: ``` # SPDX-License-Identifier: Apache-2.0 # SPDX-FileCopyrightText: Copyright contributors to the vLLM project """Example Python client for classification API using vLLM API server NOTE: start a supported classification model server with `vllm serve`, e.g. vllm serve jason9693/Qwen2.5-1.5B-apeach """ import argparse import pprint import requests def post_http_request(payload: dict, api_url: str) -> requests.Response: headers = {"User-Agent": "Test Client"} response = requests.post(api_url, headers=headers, json=payload) return response def parse_args(): parse = argparse.ArgumentParser() parse.add_argument("--host", type=str, default="localhost") parse.add_argument("--port", type=int, default=8000) parse.add_argument("--model", type=str, default="jason9693/Qwen2.5-1.5B-apeach") return parse.parse_args() def main(args): host = args.host port = args.port model_name = args.model api_url = f"http://{host}:{port}/classify" prompts = [ "Hello, my name is", "The president of the United States is", "The capital of France is", "The future of AI is", ] payload = { "model": model_name, "input": prompts, } classify_response = post_http_request(payload=payload, api_url=api_url) pprint.pprint(classify_response.json()) if __name__ == "__main__": args = parse_args() main(args) ``` but if I replace the prompts with multimodal data, the server doesn't work. ``` video_url = "https://js-ad.a.yximgs.com/bs2/ad_nieuwland-material/t2i2v/videos/3525031242883943515-140276939618048_24597237897733_v0_1759927515165406_3.mp4" prompts = [ {"role": "user", "content": [ {"type": "text", "text": "你是一个专业的视频质量分析师,请你仔细判断下方提供的视频是否存在质量问题\n质量问题包括但不限于:\n1.画面质量差,画面模糊,亮度闪烁\n2.画面中文字存在模糊问题\n3.视频画面不符合真实物理逻辑,例如凭空产生的人物肢体、头像、手指手臂数量不对,腿部不自然等问题\n4.画面运动不符合物理规律,例如凭空产生的物体,画面卡顿、晃动、抖动、跳动等\n\n如果视频存在问题请返回0,如果视频不存在问题请返回1。\n## 视频内容如下\n"}, {"type": "video", "video": f"{video_url}"}, ] } ] ``` ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27413
open
[ "good first issue", "usage" ]
2025-10-23T12:32:25Z
2025-10-25T00:18:54Z
12
muziyongshixin
huggingface/transformers.js
1,447
How to use half precision ONNX models?
### Question Hi, I just exported a detection model with fp16 using optimum. `--dtype fp16 ` This is my pipeline: ```javascript const model = await AutoModel.from_pretrained( "./onnx_llama", { dtype: "fp16", device: "cpu" } const processor = await AutoProcessor.from_pretrained("./onnx_llama"); const { pixel_values, reshaped_input_sizes } = await processor(image); const buffer = await fs.readFile("image3.jpg"); const blob = new Blob([buffer]); const image = await RawImage.fromBlob(blob); const { pixel_values, reshaped_input_sizes } = await processor(image); const { output0 } = await model({ pixel_values: tensor }); ``` Using this results in: An error occurred during model execution: "Error: Unexpected input data type. Actual: (tensor(float)) , expected: (tensor(float16))". Which makes sense, however when i try to convert to fp16 "manually" ```javascript const fp16data = Float16Array.from(pixel_values.data); //float32ArrayToUint16Array(pixel_values.data); const tensor = new Tensor("float16", fp16data, pixel_values.dims); const { output0 } = await model({ pixel_values:tensor }); ``` I get: `Tensor.data must be a typed array (4) for float16 tensors, but got typed array (0).` What's going on here? I tried to converting the `pixel_data.data` to a UInt16Array manually but that has no effect as it gets converted to a Float16Array in the tensor constructor anyway. Help is much appreciated! Thanks
https://github.com/huggingface/transformers.js/issues/1447
open
[ "question" ]
2025-10-23T09:18:26Z
2025-10-23T09:18:26Z
null
richarddd
huggingface/transformers
41,810
How do you use t5gemma decoder with a different encoder?
I am trying to combine the t5gemma decoder with a pretrained deberta encoder that I have trained from scratch using `EncoderDecoderModel`. Here is the code: ``` model_1 = "WikiQuality/pre_filtered.am" model_2 = "google/t5gemma-2b-2b-ul2" encoder = AutoModel.from_pretrained(model_1) decoder = AutoModel.from_pretrained(model_2, dtype=torch.bfloat16) model = EncoderDecoderModel(encoder=encoder, decoder=decoder) ``` The above code raises the error: ``` AttributeError: 'T5GemmaConfig' object has no attribute 'hidden_size' ``` From this I understand that `hidden_size` is accesible from `decoder.config.decoder.hidden_size` and not `decoder.config.hidden_size`, which is where EncoderDecoderModel is looking. So I change my code to load the encoder-decoder model to this: ``` model = EncoderDecoderModel(encoder=encoder, decoder=decoder.decoder) ``` This gives me the following error: ``` ValueError: Unrecognized model identifier: t5_gemma_module. Should contain one of aimv2, aimv2_vision_model, albert, align, altclip, apertus, arcee, aria, aria_text, audio-spectrogram-transformer, autoformer, aya_vision, bamba, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, bitnet, blenderbot, blenderbot-small, blip, blip-2, blip_2_qformer, bloom, blt, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_text_model, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, cohere2, cohere2_vision, colpali, colqwen2, conditional_detr, convbert, convnext, convnextv2, cpmant, csm, ctrl, cvt, d_fine, dab-detr, dac, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deepseek_v2, deepseek_v3, deepseek_vl, deepseek_vl_hybrid, deformable_detr, deit, depth_anything, depth_pro, deta, detr, dia, diffllama, dinat, dinov2, dinov2_with_registers, dinov3_convnext, dinov3_vit, distilbert, doge, donut-swin, dots1, dpr, dpt, edgetam, edgetam_video, edgetam_vision_model, efficientformer, efficientloftr, efficientnet, electra, emu3, encodec, encoder-decoder, eomt, ernie, ernie4_5, ernie4_5_moe, ernie_m, esm, evolla, exaone4, falcon, falcon_h1, falcon_mamba, fastspeech2_conformer, fastspeech2_conformer_with_hifigan, flaubert, flava, flex_olmo, florence2, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, gemma3, gemma3_text, gemma3n, gemma3n_audio, gemma3n_text, gemma3n_vision, git, glm, glm4, glm4_moe, glm4v, glm4v_moe, glm4v_moe_text, glm4v_text, glpn, got_ocr2, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gpt_oss, gptj, gptsan-japanese, granite, granite_speech, granitemoe, granitemoehybrid, granitemoeshared, granitevision, graphormer, grounding-dino, groupvit, helium, hgnet_v2, hiera, hubert, hunyuan_v1_dense, hunyuan_v1_moe, ibert, idefics, idefics2, idefics3, idefics3_vision, ijepa, imagegpt, informer, instructblip, instructblipvideo, internvl, internvl_vision, jamba, janus, jetmoe, jukebox, kosmos-2, kosmos-2.5, kyutai_speech_to_text, layoutlm, layoutlmv2, layoutlmv3, led, levit, lfm2, lfm2_vl, lightglue, lilt, llama, llama4, llama4_text, llava, llava_next, llava_next_video, llava_onevision, longcat_flash, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, metaclip_2, mgp-str, mimi, minimax, ministral, mistral, mistral3, mixtral, mlcd, mllama, mm-grounding-dino, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, modernbert, modernbert-decoder, moonshine, moshi, 
mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, olmo2, olmo3, olmoe, omdet-turbo, oneformer, open-llama, openai-gpt, opt, ovis2, owlv2, owlvit, paligemma, parakeet, parakeet_ctc, parakeet_encoder, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, perception_encoder, perception_lm, persimmon, phi, phi3, phi4_multimodal, phimoe, pix2struct, pixtral, plbart, poolformer, pop2piano, prompt_depth_anything, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_5_omni, qwen2_5_vl, qwen2_5_vl_text, qwen2_audio, qwen2_audio_encoder, qwen2_moe, qwen2_vl, qwen2_vl_text, qwen3, qwen3_moe, qwen3_next, qwen3_omni_moe, qwen3_vl, qwen3_vl_moe, qwen3_vl_moe_text, qwen3_vl_text, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rt_detr_v2, rwkv, sam, sam2, sam2_hiera_det_model, sam2_video, sam2_vision_model, sam_hq, sam_hq_vision_model, sam_vision_model, seamless_m4t, seamless_m4t_v2, seed_oss, segformer, seggpt, sew, sew-d, shieldgemma2, siglip, siglip2, siglip2_vision_model, siglip_vision_model, smollm3, smolvlm, smolvlm_vision, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superglue, superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, t5gemma, table-transformer, tapas, textnet, tim
https://github.com/huggingface/transformers/issues/41810
closed
[]
2025-10-23T08:48:19Z
2025-12-01T08:02:53Z
1
kushaltatariya
huggingface/accelerate
3,818
Duplicate W&B initialization in offline mode
### System Info ```Shell - `Accelerate` version: 1.10.1 ``` ### Information - [x] The official example scripts - [x] My own modified scripts ### Tasks - [x] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`) - [ ] My own task or dataset (give details below) ### Reproduction When using Accelerate with `wandb` in **offline mode**, two separate W&B runs are created for a single training process. This happens because both the `start` and the `store_init_configuration` method of `WandBTracker` call `wandb.init()`, which leads to redundant initialization. https://github.com/huggingface/accelerate/blob/a12beee389f6bd37cfae0aba233db03f375f7f80/src/accelerate/tracking.py#L318-L325 https://github.com/huggingface/accelerate/blob/a12beee389f6bd37cfae0aba233db03f375f7f80/src/accelerate/tracking.py#L343-L350 Is there any plan to refine the duplication? ### Expected behavior initialize wandb run only 1 time
https://github.com/huggingface/accelerate/issues/3818
closed
[ "good first issue" ]
2025-10-23T02:19:38Z
2025-12-16T13:10:48Z
3
ShuyUSTC
vllm-project/vllm
27,347
[Usage]: vllm: error: unrecognized arguments: --all2all-backend deepep_low_latency
### Your current environment ```text Collecting environment information... ============================== System Info ============================== OS : Ubuntu 24.04.2 LTS (x86_64) GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 Clang version : Could not collect CMake version : version 3.31.6 Libc version : glibc-2.39 ============================== PyTorch Info ============================== PyTorch version : 2.8.0+cu128 Is debug build : False CUDA used to build PyTorch : 12.8 ROCM used to build PyTorch : N/A ============================== Python Environment ============================== Python version : 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime) Python platform : Linux-5.15.0-89-generic-x86_64-with-glibc2.39 ============================== CUDA / GPU Info ============================== Is CUDA available : True CUDA runtime version : 12.8.93 CUDA_MODULE_LOADING set to : LAZY GPU models and configuration : GPU 0: NVIDIA H200 GPU 1: NVIDIA H200 GPU 2: NVIDIA H200 GPU 3: NVIDIA H200 GPU 4: NVIDIA H200 GPU 5: NVIDIA H200 GPU 6: NVIDIA H200 GPU 7: NVIDIA H200 Nvidia driver version : 570.133.20 cuDNN version : Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0 /usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0 /usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0 /usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0 /usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0 HIP runtime version : N/A MIOpen runtime version : N/A Is XNNPACK available : True ============================== CPU Info ============================== Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 57 bits virtual Byte Order: Little Endian CPU(s): 192 On-line CPU(s) list: 0 Off-line CPU(s) list: 1-191 Vendor ID: GenuineIntel Model name: INTEL(R) XEON(R) PLATINUM 8558 CPU family: 6 Model: 207 Thread(s) per core: 2 Core(s) per socket: 48 Socket(s): 2 Stepping: 2 CPU(s) scaling MHz: 76% CPU max MHz: 4000.0000 CPU min MHz: 800.0000 BogoMIPS: 4200.00 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities Virtualization: VT-x L1d cache: 4.5 MiB (96 instances) L1i cache: 3 MiB (96 instances) L2 cache: 192 
MiB (96 instances) L3 cache: 520 MiB (2 instances) NUMA node(s): 2 NUMA node0 CPU(s): 0-47,96-143 NUMA node1 CPU(s): 48-95,144-191 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mit
https://github.com/vllm-project/vllm/issues/27347
closed
[ "usage" ]
2025-10-22T14:36:18Z
2025-10-22T15:07:13Z
1
Valerianding
vllm-project/vllm
27,343
[Usage]: Can't get result from /pooling api when using Qwen2.5-Math-PRM-7B online
### Your current environment ``` The output of `python collect_env.py` Collecting environment information... [140/1781] ============================== System Info ============================== OS : Ubuntu 22.04.5 LTS (x86_64) GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0 Clang version : Could not collect CMake version : version 3.22.1 Libc version : glibc-2.35 ============================== PyTorch Info ============================== PyTorch version : 2.8.0+cu128 Is debug build : False CUDA used to build PyTorch : 12.8 ROCM used to build PyTorch : N/A ============================== Python Environment ============================== Python version : 3.12.12 | packaged by Anaconda, Inc. | (main, Oct 21 2025, 20:16:04) [GCC 11.2.0] (64-bit runti me) Python platform : Linux-5.15.0-153-generic-x86_64-with-glibc2.35 ============================== CUDA / GPU Info ============================== Is CUDA available : True CUDA runtime version : 12.4.99 CUDA_MODULE_LOADING set to : LAZY GPU models and configuration : GPU 0: NVIDIA A100 80GB PCIe GPU 1: NVIDIA A100 80GB PCIe GPU 2: NVIDIA A800 80GB PCIe GPU 3: NVIDIA A800 80GB PCIe GPU 4: NVIDIA A100 80GB PCIe GPU 5: NVIDIA A100 80GB PCIe GPU 6: NVIDIA A800 80GB PCIe GPU 7: NVIDIA A800 80GB PCIe Nvidia driver version : 550.54.15 cuDNN version : Could not collect HIP runtime version : N/A MIOpen runtime version : N/A Is XNNPACK available : True ============================== CPU Info ============================== Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virt
https://github.com/vllm-project/vllm/issues/27343
closed
[ "usage" ]
2025-10-22T13:36:51Z
2025-10-23T03:39:13Z
3
zgc6668
huggingface/transformers.js
1,446
Zhare-AI/sd-1-5-webgpu on HuggingFace.co lists itself as Transformers.js supported?
### Question [Zhare-AI/sd-1-5-webgpu](https://huggingface.co/Zhare-AI/sd-1-5-webgpu) is a `text-to-image` model and is marked as Transformers.js compatible, and even shows demo code using Transformers.js on its `huggingface.co` page. Their example code fails with an error saying `text-to-image` is not supported in Transformers.js. The problem is that `text-to-image` is not supported in 3.7.6 and does not appear to be supported even in the v4 branch. I asked them on their `huggingface.co` discussions what version of Transformers.js their model is compatible with, but no reply yet. Apparently someone else asked them the same thing 18 days ago and never got a reply. I am very interested in adding a Transformers.js demo for `text-to-image` to my Blazor WASM library [SpawnDev.BlazorJS.TransformersJS](https://github.com/LostBeard/SpawnDev.BlazorJS.TransformersJS), but I am not sure what I am missing.
https://github.com/huggingface/transformers.js/issues/1446
closed
[ "question" ]
2025-10-22T12:20:16Z
2025-10-24T14:33:17Z
null
LostBeard
vllm-project/vllm
27,336
[Feature]: Make prompt_token_ids optional in streaming responses (disabled by default)
### 🚀 The feature, motivation and pitch Starting with v0.10.2, the first server-sent event (SSE) in streaming responses now includes the full list of `prompt_token_ids`. While this can be useful for debugging or detailed inspection, it introduces several practical issues in production environments: 1. Large payload size: For long prompts, this significantly increases the size of the first streaming event. This can increase latency, cause network throttling, and reduce streaming responsiveness. 2. Parser and infrastructure limitations: Some clients and intermediate parsers have message size limits. The larger first event may cause them to fail or disconnect, requiring changes across multiple components in existing systems that previously handled smaller initial events. 3. Breaking change in behavior: Previously, streaming responses did not include prompt token IDs, so this change affects compatibility with existing clients expecting smaller events. ### Suggested Fix Make the inclusion of prompt_token_ids optional per request and disabled by default (same as `return_token_ids`), restoring the previous behavior. ### Alternatives Alternatively, provide an API flag or configuration option to exclude `prompt_token_ids` globally for the entire server, so that no streaming response include this field. ### Additional context For example, the first streaming response for a prompt of ~130k tokens can now exceed 600KB, while some parsers and scanners have default buffer sizes of 64KB (which was previously sufficient). ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
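For illustration, a minimal client-side sketch of what the proposed per-request opt-out could look like, assuming vLLM exposed it as an OpenAI-compatible extra-body flag analogous to the existing `return_token_ids`; the flag name, model name, and endpoint below are placeholders, not an existing API.

```python
# Hedged sketch: per-request opt-out of token-ID echo in streaming responses.
# The extra-body flag name is hypothetical (following the proposal above); the
# model name and base_url are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="my-model",  # placeholder model name
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
    extra_body={"return_token_ids": False},  # hypothetical: keep the first SSE small
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```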
https://github.com/vllm-project/vllm/issues/27336
closed
[ "feature request" ]
2025-10-22T11:42:41Z
2025-10-27T11:06:45Z
1
Gruner-atero
huggingface/transformers
41,775
Hugging Face website and models not reachable
### System Info ``` $ pip show transformers Name: transformers Version: 4.57.1 Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow Home-page: https://github.com/huggingface/transformers Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors) Author-email: transformers@huggingface.co ``` ``` $ python --version Python 3.12.3 ``` ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. `python -c 'from transformers import pipeline; pipeline = pipeline(task="text-generation", model="Qwen/Qwen2.5-1.5B")'` I am getting connection issues: ``` OSError: We couldn't connect to 'https://huggingface.co' to load the files, and couldn't find them in the cached files. Check your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'. ``` It rather funny that it recommends checking https://huggingface.co/docs/transformers/installation#offline-mode when https://huggingface.co is not reachable :-) Maybe this information, e.g. about mirrors, could be hosted somewhere else? ### Expected behavior The examples should work as documented.
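As a hedged sketch of the two documented workarounds when the Hub is unreachable: offline mode (only valid if the model is already in the local cache) or pointing the Hub client at a mirror via `HF_ENDPOINT` (the mirror URL below is a placeholder).

```python
# Hedged sketch: workarounds when https://huggingface.co is unreachable.
# Assumes either (a) the model is already in the local cache, or (b) a mirror is reachable.
import os

# (a) Offline mode: only use files already present in the local cache.
os.environ["HF_HUB_OFFLINE"] = "1"

# (b) Or point the hub client at a mirror (placeholder URL, set before importing transformers).
# os.environ["HF_ENDPOINT"] = "https://my-hf-mirror.example.com"

from transformers import pipeline  # import after setting the env vars

pipe = pipeline(task="text-generation", model="Qwen/Qwen2.5-1.5B")
print(pipe("Hello")[0]["generated_text"])
```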
https://github.com/huggingface/transformers/issues/41775
closed
[ "bug" ]
2025-10-22T07:40:32Z
2025-11-21T08:10:00Z
8
christian-rauch
vllm-project/vllm
27,319
[Usage]: Quantized FusedMoE crashes in the graph compile stage
### Your current environment ```text ============================== System Info ============================== OS : Ubuntu 24.04.2 LTS (x86_64) GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 Clang version : 19.0.0git (https://github.com/RadeonOpenCompute/llvm-project roc-6.4.3 25224 d366fa84f3fdcbd4b10847ebd5db572ae12a34fb) CMake version : version 3.31.6 Libc version : glibc-2.39 ============================== PyTorch Info ============================== PyTorch version : 2.8.0+rocm6.4 Is debug build : False CUDA used to build PyTorch : N/A ROCM used to build PyTorch : 6.4.43482-0f2d60242 ============================== Python Environment ============================== Python version : 3.12.11 | packaged by conda-forge | (main, Jun 4 2025, 14:45:31) [GCC 13.3.0] (64-bit runtime) Python platform : Linux-6.8.0-79-generic-x86_64-with-glibc2.39 ============================== CUDA / GPU Info ============================== Is CUDA available : True CUDA runtime version : Could not collect CUDA_MODULE_LOADING set to : LAZY GPU models and configuration : AMD Radeon PRO W7900 Dual Slot (gfx1100) Nvidia driver version : Could not collect cuDNN version : Could not collect HIP runtime version : 6.4.43482 MIOpen runtime version : 3.4.0 Is XNNPACK available : True ============================== CPU Info ============================== Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 52 bits physical, 57 bits virtual Byte Order: Little Endian CPU(s): 256 On-line CPU(s) list: 0-255 Vendor ID: AuthenticAMD BIOS Vendor ID: Advanced Micro Devices, Inc. Model name: AMD EPYC 9554 64-Core Processor BIOS Model name: AMD EPYC 9554 64-Core Processor Unknown CPU @ 3.1GHz BIOS CPU family: 107 CPU family: 25 Model: 17 Thread(s) per core: 2 Core(s) per socket: 64 Socket(s): 2 Stepping: 1 Frequency boost: enabled CPU(s) scaling MHz: 51% CPU max MHz: 3100.0000 CPU min MHz: 1500.0000 BogoMIPS: 6199.71 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc amd_ibpb_ret arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d debug_swap Virtualization: AMD-V L1d cache: 4 MiB (128 instances) L1i cache: 4 MiB (128 instances) L2 cache: 128 MiB (128 instances) L3 cache: 512 MiB (16 instances) NUMA node(s): 2 NUMA node0 CPU(s): 0-63,128-191 NUMA node1 CPU(s): 64-127,192-255 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability 
Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Mitigation; Safe RET Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1:
https://github.com/vllm-project/vllm/issues/27319
closed
[ "rocm", "usage" ]
2025-10-22T06:29:32Z
2025-10-24T02:19:55Z
1
Rus-P
vllm-project/vllm
27,298
[Doc]: Update metrics documentation to remove V0 references and add v1 changes.
## Problem The metrics documentation in `docs/design/metrics.md` still contains references to V0 metrics implementation, but V0 metrics have been removed after @njhill 's PR https://github.com/vllm-project/vllm/pull/27215 was merged. To avoid confusion, I think we should remove this and update it with the new set of v1 metrics. Was curious if we want to keep this v0 reference and add the v1 details on top of this. ### Suggest a potential alternative/fix 1. Remove all V0 references from the metrics documentation. 2. Update the introduction to focus on V1 metrics only. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27298
closed
[ "documentation" ]
2025-10-21T22:08:48Z
2025-10-22T13:29:17Z
1
atalhens
vllm-project/vllm
27,268
[Usage]: failed to infer device type on GCP COS despite nvidia container toolkit installed
### Your current environment I failed to run this script on GCP COS. ### How would you like to use vllm I was trying to use VLLM on a Google Cloud (GCP) Container-Optimized OS (COS) instance via Docker. I followed GCP's [documentation](https://cloud.google.com/container-optimized-os/docs/how-to/run-gpus) to install the nvidia driver, including mapping nvidia driver-related dirs to the Docker container. All tests worked fine. However, when trying to start a VLLM server via Docker, I got the error that `libcuda.so.1` cannot be found and VLLM failed to infer device info. I tried to change the target dirs in the mapping to like `/usr/local/lib`, `/usr/local/cuda/lib`, etc. But no luck. I also tried adding the flags `--runtime nvidia --gpus all` per [this instruction](https://docs.vllm.ai/en/v0.8.4/deployment/docker.html) but got the error that `Error response from daemon: unknown or invalid runtime name: nvidia.` If someone can shed the light of where vllm official Docker image looks for CUDA stuff, it will be greatly appreciated. Thanks in advance. The complete command and error: ``` $ docker run -v ~/.cache/huggingface:/root/.cache/huggingface --env "HUGGING_FACE_HUB_TOKEN=<secret>" -p 8010:8000 --ipc=host vllm/vllm-openai:latest --model mistralai/Mistral-7B-v0.1 INFO 10-21 08:13:18 [__init__.py:220] No platform detected, vLLM is running on UnspecifiedPlatform WARNING 10-21 08:13:23 [_custom_ops.py:20] Failed to import from vllm._C with ImportError('libcuda.so.1: cannot open shared object file: No such file or directory') Traceback (most recent call last): File "<frozen runpy>", line 198, in _run_module_as_main File "<frozen runpy>", line 88, in _run_code File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1949, in <module> parser = make_arg_parser(parser) ^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/cli_args.py", line 263, in make_arg_parser parser = AsyncEngineArgs.add_cli_args(parser) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 1714, in add_cli_args parser = EngineArgs.add_cli_args(parser) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 919, in add_cli_args vllm_kwargs = get_kwargs(VllmConfig) ^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 281, in get_kwargs return copy.deepcopy(_compute_kwargs(cls)) ^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 182, in _compute_kwargs default = field.default_factory() ^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/dist-packages/pydantic/_internal/_dataclasses.py", line 123, in __init__ s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s) File "/usr/local/lib/python3.12/dist-packages/vllm/config/device.py", line 58, in __post_init__ raise RuntimeError( RuntimeError: Failed to infer device type, please set the environment variable `VLLM_LOGGING_LEVEL=DEBUG` to turn on verbose logging to help debug the issue. ``` ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
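A small diagnostic sketch that can be run inside the container to see whether `libcuda.so.1` is actually visible and which directories the dynamic loader will search; the candidate driver paths are assumptions (typical COS/NVIDIA mount locations), not a vLLM API.

```python
# Hedged diagnostic sketch: check whether libcuda.so.1 is visible inside the container.
# The candidate paths below are assumptions (common GCP COS / NVIDIA locations).
import ctypes.util
import os
from pathlib import Path

print("LD_LIBRARY_PATH:", os.environ.get("LD_LIBRARY_PATH", "<unset>"))
print("find_library('cuda'):", ctypes.util.find_library("cuda"))

candidates = [
    "/usr/local/nvidia/lib64",   # where COS driver dirs are often mounted into containers
    "/usr/local/cuda/lib64",
    "/usr/lib/x86_64-linux-gnu",
]
for d in candidates:
    hits = list(Path(d).glob("libcuda.so*")) if Path(d).is_dir() else []
    print(f"{d}: {', '.join(str(h) for h in hits) or 'no libcuda found'}")

# If none of these show libcuda.so.1, the host driver directory is not mounted where
# the loader can see it; mounting it and extending LD_LIBRARY_PATH (or using the
# NVIDIA container runtime) is the usual fix.
```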
https://github.com/vllm-project/vllm/issues/27268
open
[ "usage" ]
2025-10-21T15:24:21Z
2025-10-21T15:24:21Z
0
forrestbao
vllm-project/vllm
27,265
[Usage]: Cannot register custom model (Out-of-Tree Model Integration)
``` ### Your current environment ============================== Versions of relevant libraries ============================== [pip3] flake8==7.1.1 [pip3] flashinfer==0.1.6+cu124torch2.4 [pip3] flashinfer-python==0.2.5 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.4 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] nvidia-cusparselt-cu12==0.6.2 [pip3] nvidia-ml-py==12.560.30 [pip3] nvidia-modelopt==0.31.0 [pip3] nvidia-modelopt-core==0.31.0 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] pynvml==12.0.0 [pip3] pyzmq==26.2.0 [pip3] sentence-transformers==3.3.1 [pip3] torch==2.6.0 [pip3] torch_memory_saver==0.0.6 [pip3] torchao==0.9.0 [pip3] torchaudio==2.6.0 [pip3] torchdata==0.11.0 [pip3] torchprofile==0.0.4 [pip3] torchtext==0.18.0 [pip3] torchvision==0.21.0 [pip3] transformer_engine_torch==2.3.0 [pip3] transformers==4.51.1 [pip3] triton==3.2.0 [conda] flashinfer 0.1.6+cu124torch2.4 pypi_0 pypi [conda] flashinfer-python 0.2.5 pypi_0 pypi [conda] numpy 1.26.4 pypi_0 pypi [conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi [conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi [conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi [conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi [conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi [conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi [conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi [conda] nvidia-ml-py 12.560.30 pypi_0 pypi [conda] nvidia-modelopt 0.31.0 pypi_0 pypi [conda] nvidia-modelopt-core 0.31.0 pypi_0 pypi [conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi [conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi [conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi [conda] pynvml 12.0.0 pypi_0 pypi [conda] pyzmq 26.2.0 pypi_0 pypi [conda] sentence-transformers 3.3.1 pypi_0 pypi [conda] torch 2.6.0 pypi_0 pypi [conda] torch-memory-saver 0.0.6 pypi_0 pypi [conda] torchao 0.9.0 pypi_0 pypi [conda] torchaudio 2.6.0 pypi_0 pypi [conda] torchdata 0.11.0 pypi_0 pypi [conda] torchprofile 0.0.4 pypi_0 pypi [conda] torchtext 0.18.0 pypi_0 pypi [conda] torchvision 0.21.0 pypi_0 pypi [conda] transformer-engine-torch 2.3.0 pypi_0 pypi [conda] transformers 4.51.1 pypi_0 pypi [conda] triton 3.2.0 pypi_0 pypi ============================== vLLM Info ============================== ROCM Version : Could not collect vLLM Version : 0.8.5.post1 ``` # How would you like to use vllm Hi, I'm trying to integrate a custom multi-modal model (Qwen2_5_VLForConditionalGeneration_Vilavt) using the out-of-tree plugin system, following the official documentation and the vllm_add_dummy_model example. ### The Issue: The model loading behavior is inconsistent between single-GPU and multi-GPU (tensor parallel) modes: - Single-GPU (CUDA_VISIBLE_DEVICES=0): Everything works perfectly. The engine initializes, and I can run inference. - Multi-GPU (CUDA_VISIBLE_DEVICES=0,1,2,3): The engine fails to start. Although the logs from VllmWorker processes show that my custom model is successfully registered, the main EngineCore process throws a ValueError, complaining that the model cannot be found. 
I've successfully created a package `vllm_vilavt`, installed it with `pip install -e .` , and my `setup.py` correctly points to a register() function in the entry_points. My `setup.py`: ``` from setuptools import setup, find_packages setup( name="vllm_vilavt", version="0.1", packages=find_packages(), entry_points={ "vllm.general_plugins": ["register_vilavt_model = vllm_
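For reference, a minimal sketch of what the plugin's `register()` function typically looks like, following the `vllm_add_dummy_model` pattern. The module and class names are taken from this issue and assumed to exist in the `vllm_vilavt` package; the lazy `"module:Class"` string form defers the import so that each worker process can resolve the architecture when the plugin loads.

```python
# Minimal sketch of the plugin entry point (module/class names are assumptions from
# this issue). The lazy "module:Class" registration avoids importing the model class
# at plugin-load time, which tends to be more robust when multiple tensor-parallel
# worker processes each load the plugin.
from vllm import ModelRegistry


def register():
    if "Qwen2_5_VLForConditionalGeneration_Vilavt" not in ModelRegistry.get_supported_archs():
        ModelRegistry.register_model(
            "Qwen2_5_VLForConditionalGeneration_Vilavt",
            "vllm_vilavt.modeling_vilavt:Qwen2_5_VLForConditionalGeneration_Vilavt",
        )
```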
https://github.com/vllm-project/vllm/issues/27265
closed
[ "usage" ]
2025-10-21T14:17:17Z
2025-10-25T13:19:40Z
1
Hyperwjf
vllm-project/vllm
27,263
[Responses API] Support tool calling and ouput token streaming
Splitting off from #14721 > FYI a start has been made here https://github.com/vllm-project/vllm/pull/20504 > > That PR (which was merged to `main` on [7/9/2025](https://github.com/vllm-project/vllm/pull/20504#event-18495144925)) explicitly has an unchecked boxes for > > * [ ] Tool/functional calling support > * [ ] Output token streaming > > Any plans to implement those features? I think that is what is needed to support agentic coding tools like codex. See: > > * https://docs.vllm.ai/projects/recipes/en/latest/OpenAI/GPT-OSS.html#harmony-format-support _Originally posted by @bartlettroscoe in [#14721](https://github.com/vllm-project/vllm/issues/14721#issuecomment-3321963360)_
https://github.com/vllm-project/vllm/issues/27263
open
[]
2025-10-21T12:36:44Z
2025-12-07T01:06:46Z
4
markmc
vllm-project/vllm
27,252
[Usage]: Does the `@app.post("/generate")` API support qwen2_vl or not?
### Your current environment I want to know whether the `@app.post("/generate")` API supports qwen2_vl or not. ### How would you like to use vllm I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27252
open
[ "usage" ]
2025-10-21T07:30:11Z
2025-10-21T07:30:11Z
0
wwkww
huggingface/lerobot
2,269
how to configure pi0_base to train with single camera dataset
Hi, I'm trying to train pi0_base with "lerobot/aloha_sim_transfer_cube_human" dataset which has only one camera input "observation.images.top". However, pi0 seems to expect three camera inputs: "observation.images.base_0_rgb", "observation.images.left_wrist_0_rgb", "observation.images.right_wrist_0_rgb" "ValueError: All image features are missing from the batch. At least one expected. (batch: dict_keys(['action', 'next.reward', 'next.done', 'next.truncated', 'info', 'action_is_pad', 'task', 'index', 'task_index', 'observation.images.top', 'observation.state', 'observation.language.tokens', 'observation.language.attention_mask'])) (image_features: {'observation.images.base_0_rgb': PolicyFeature(type=<FeatureType.VISUAL: 'VISUAL'>, shape=(3, 224, 224)), 'observation.images.left_wrist_0_rgb': PolicyFeature(type=<FeatureType.VISUAL: 'VISUAL'>, shape=(3, 224, 224)), 'observation.images.right_wrist_0_rgb': PolicyFeature(type=<FeatureType.VISUAL: 'VISUAL'>, shape=(3, 224, 224))}) Exception in thread Thread-2 (_pin_memory_loop): Traceback (most recent call last): File "/root/.local/share/mamba/envs/lerobot/lib/python3.10/threading.py", line 1016, in _bootstrap_inner" Is there a command-line argument I can use to set the single camera input to train with the pi0_base model?
https://github.com/huggingface/lerobot/issues/2269
open
[ "question", "policies", "dataset" ]
2025-10-21T01:32:50Z
2025-10-21T17:36:17Z
null
dalishi
vllm-project/vllm
27,233
gguf run good
### Your current environment from vllm import LLM, SamplingParams gguf_path = "/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf" llm = LLM( gguf_path, tokenizer="Qwen/Qwen3-1.7B" ) params = SamplingParams( temperature=0.8, top_p=0.9, top_k=40, max_tokens=200, ) outputs = llm.generate(["Who is Napoleon Bonaparte?"], params) print(outputs[0].outputs[0].text) ### How would you like to use vllm I want to run inferevenv) m@m-HP-Z440-Workstation:~/Desktop/vllm/vllm/examples/offline_inference/basic$ (venv) m@m-HP-Z440-Workstation:~/Desktop/vllm/vllm/examples/offline_inference/basic$ python3 Python 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> from vllm import LLM, SamplingParams INFO 10-21 03:05:39 [__init__.py:216] Automatically detected platform cuda. >>> >>> gguf_path = "/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf" >>> >>> llm = LLM( ... gguf_path, ... tokenizer="Qwen/Qwen3-1.7B" ... ) INFO 10-21 03:05:41 [utils.py:233] non-default args: {'tokenizer': 'Qwen/Qwen3-1.7B', 'disable_log_stats': True, 'model': '/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf'} INFO 10-21 03:06:14 [model.py:547] Resolved architecture: Qwen3ForCausalLM `torch_dtype` is deprecated! Use `dtype` instead! ERROR 10-21 03:06:14 [config.py:278] Error retrieving safetensors: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf'. Use `repo_type` argument if needed., retrying 1 of 2 ERROR 10-21 03:06:16 [config.py:276] Error retrieving safetensors: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf'. Use `repo_type` argument if needed. INFO 10-21 03:06:16 [model.py:1730] Downcasting torch.float32 to torch.bfloat16. INFO 10-21 03:06:16 [model.py:1510] Using max model len 32768 INFO 10-21 03:06:16 [scheduler.py:205] Chunked prefill is enabled with max_num_batched_tokens=8192. (EngineCore_DP0 pid=67528) INFO 10-21 03:06:41 [core.py:644] Waiting for init message from front-end. 
(EngineCore_DP0 pid=67528) INFO 10-21 03:06:41 [core.py:77] Initializing a V1 LLM engine (v0.11.0) with config: model='/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf', speculative_config=None, tokenizer='Qwen/Qwen3-1.7B', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=gguf, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=gguf, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf, enable_prefix_caching=True, chunked_prefill_enabled=True, pooler_config=None, compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output","vllm.mamba_mixer2","vllm.mamba_mixer","vllm.short_conv","vllm.linear_attention","vllm.plamo2_mamba_mixer","vllm.gdn_attention","vllm.sparse_attn_indexer"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"cudagraph_mode":[2,1],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"use_inductor_graph_partition":false,"pass_config":{},"max_capture_size":512,"local_cache_dir":null} [Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0 [Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0 [Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0 [Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0 [Gloo] Rank 0 is connected to 0 peer ranks. Ex
https://github.com/vllm-project/vllm/issues/27233
open
[ "usage" ]
2025-10-21T00:11:26Z
2025-10-22T00:44:10Z
12
kmnnmk212-source
vllm-project/vllm
27,228
[Installation]: Compatibility with PyTorch 2.9.0?
### Your current environment ```text The output of `python collect_env.py` ``` ### How you are installing vllm Is there a version of vllm that is compatible with the latest PyTorch release 2.9.0? ``` pip install vllm==0.11.0 pip install torch==2.9.0 ``` ``` $ vllm bench latency --input-len 256 --output-len 256 --model Qwen3/Qwen3-8B --batch-size 1 terminate called after throwing an instance of 'std::bad_alloc' what(): std::bad_alloc Aborted (core dumped) ``` ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27228
closed
[ "installation" ]
2025-10-20T21:10:24Z
2025-10-21T22:40:15Z
3
andrewor14
vllm-project/vllm
27,208
[Feature]: Upgrade CUDA version to 12.9.1 in docker images
### 🚀 The feature, motivation and pitch The current builds display warning logs like these ``` Warning: please use at least NVCC 12.9 for the best DeepGEMM performance ``` Can we bump this version easily? ### Alternatives _No response_ ### Additional context _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27208
closed
[ "feature request" ]
2025-10-20T16:08:49Z
2025-10-21T21:20:19Z
1
jhuntbach-bc
huggingface/lerobot
2,259
Clarifications on fine-tuning on different envs and embodiments
Hi everyone, I’m currently working on fine-tuning SmolVLA and π₀ using **[RLBench](https://github.com/stepjam/RLBench)**. The robot setup is a Franka Emika Panda (7DoF + gripper), and I’ve already collected custom LeRobot datasets for a pick-and-place task ([available on my Hugging Face](https://huggingface.co/RonPlusSign)) with 500 demo episodes. I’ve successfully fine-tuned [OpenVLA](https://github.com/openvla/openvla) using its official repository, where the action space is defined as ΔEEF pose (Euler rotation) + gripper, and the state as ΔEEF pose (quaternion rotation) + gripper, using a single observation image (left shoulder), reaching around 22% success rate. However, when trying to fine-tune SmolVLA, despite the training running without issues (loss converges and wandb plots look fine), the evaluation yields 0% success. I suspect I’m misunderstanding how to correctly define the state and action spaces for SmolVLA in this context. Since RLBench is not one of the officially supported envs, I created an evaluation script (you can find it [here](https://github.com/RonPlusSign/RLBench/blob/master/test_smolvla.py)), similar to the examples provided in [Robot Learning: A Tutorial](https://github.com/fracapuano/robot-learning-tutorial/blob/main/snippets/ch5/02_using_smolvla.py) (thanks @fracapuano for the amazing work!). <img width="1207" height="393" alt="Image" src="https://github.com/user-attachments/assets/59175cf4-c458-49f3-96b0-e96bc414e333" /> For example, I started the finetuning using: ```sh python src/lerobot/scripts/lerobot_train.py \ --policy.path=HuggingFaceVLA/smolvla_libero \ --policy.repo_id=RonPlusSign/smolvla_PutRubbishInBin \ --dataset.repo_id=RonPlusSign/RLBench-LeRobot-v3-PutRubbishInBin \ --batch_size=32 \ --output_dir=outputs/train/smolvla_finetuned_rubbish \ --policy.device=cuda \ --wandb.enable=true \ --save_freq=10000 \ --steps=60000 ``` I also tested smaller finetunings (e.g. 5k, 10k, 20k steps). Here are some specific points I’d like to clarify: 1. What are the exact action and state spaces used in SmolVLA and π₀ pretraining? (ΔEEF pose, absolute EEF pose, joint positions, joint velocities, ... and angle representations e.g. quaternion or Euler). 2. Regarding camera inputs: does the naming or number of cameras affect the model performance? Should I stick to the _exact_ names provided in the `config.json` file, such as `observation.images.image` and `observation.images.image2` (front/wrist), similar to pretraining? Or is it fine to use different camera names and/or add extra views? Is there a way to override the existing input and output features or this means that the pretrain would be wasted? 3. The base model [lerobot/smolvla_base](https://huggingface.co/lerobot/smolvla_base) is pretrained on the SO100/SO101 robot, so I assume it might not transfer well to Franka Panda tasks — is that correct? 4. Would it make more sense to start from a model trained on Franka, e.g. [HuggingFaceVLA/smolvla_libero](https://huggingface.co/HuggingFaceVLA/smolvla_libero), or it's still a different type of embodiment (it seems with 6DoF+gripper, which is not my case)? 5. Are the datasets [HuggingFaceVLA/libero](https://huggingface.co/datasets/HuggingFaceVLA/libero) and/or [HuggingFaceVLA/smol-libero](https://huggingface.co/datasets/HuggingFaceVLA/smol-libero) the ones used for pretraining [HuggingFaceVLA/smolvla_libero](https://huggingface.co/HuggingFaceVLA/smolvla_libero)? 6. 
In [HuggingFaceVLA/smol-libero](https://huggingface.co/datasets/HuggingFaceVLA/smol-libero) the actions have dimension 7, which doesn’t clearly map to 7 joint angles + gripper. Are these absolute joint positions, EEF poses, or something else? Does LIBERO use a 6DoF or 7DoF Franka setup? If 6DoF, which joint is excluded? Any guidance on these points (or pointers to where this information is documented) would be very helpful — I’ve been trying to align my setup with the pretrained models but haven’t found clear references for these details. Thanks a lot for your time and for maintaining this project!
https://github.com/huggingface/lerobot/issues/2259
open
[ "question", "policies", "simulation" ]
2025-10-20T13:24:22Z
2025-12-23T10:37:31Z
null
RonPlusSign
vllm-project/vllm
27,184
[Doc]: Multi-Modal Benchmark is too simple
### 📚 The doc issue The latest doc about the Multi-Modal Benchmark only shows: - 1. download sharegpt4v_instruct_gpt4-vision_cap100k.json and COCO's 2017 Train images - 2. vllm serve and vllm bench serve But there are many more details to take care of: - 1. delete every entry that is not COCO's from sharegpt4v_instruct_gpt4-vision_cap100k.json - 2. place COCO's 2017 Train images under the root directory, e.g. /train2017/ - 3. run vllm serve --allowed-local-media-path /train2017/, because vllm uses the condition: ``` if allowed_local_media_path not in filepath.resolve().parents ``` and `filepath.resolve().parents` is ["/train2017", "/"], so the easiest way is to place the images in /train2017/ and set `--allowed-local-media-path /train2017/` ### Suggest a potential alternative/fix _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
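As an illustration of step 1, a hedged sketch that keeps only COCO entries in the ShareGPT4V JSON; it assumes each entry carries an "image" field whose relative path starts with "coco/", which may differ depending on the dataset release.

```python
# Hedged sketch of step 1: keep only COCO entries in the ShareGPT4V JSON.
# Assumes each entry has an "image" field whose relative path starts with "coco/";
# entries without an image, or from other sources, are dropped.
import json

src = "sharegpt4v_instruct_gpt4-vision_cap100k.json"
dst = "sharegpt4v_coco_only.json"

with open(src) as f:
    data = json.load(f)

coco_only = [e for e in data if str(e.get("image", "")).startswith("coco/")]
print(f"kept {len(coco_only)} of {len(data)} entries")

with open(dst, "w") as f:
    json.dump(coco_only, f)
```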
https://github.com/vllm-project/vllm/issues/27184
open
[ "documentation" ]
2025-10-20T06:24:18Z
2025-10-20T16:44:17Z
2
BigFaceBoy
vllm-project/vllm
27,182
[Feature]: INT8 Support in Blackwell Arch
### 🚀 The feature, motivation and pitch Hello, I want to use w8a8 (INT8) on Blackwell GPUs, but when I read the source code, it says INT8 is not supported on sm120. According to the NVIDIA PTX instructions, Blackwell-series GPUs still have INT8 tensor cores, so is there another way to use w8a8 INT8 on an RTX 5090 with vLLM now? <img width="1165" height="1109" alt="Image" src="https://github.com/user-attachments/assets/42546583-4124-4d3c-a1ad-ea3fb19d70cf" /> ### Alternatives _No response_ ### Additional context _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27182
open
[ "feature request" ]
2025-10-20T06:04:03Z
2025-10-20T06:04:03Z
0
nhanngoc94245
huggingface/optimum
2,376
Support qwen2_5_vl for ONNX export
### Feature request I would like to be able to convert [this model](https://huggingface.co/prithivMLmods/DeepCaption-VLA-V2.0-7B) which is based on Qwen 2.5 VL architecture using optimum. Right now, I get the error: ``` ValueError: Trying to export a qwen2_5_vl model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type qwen2_5_vl to be supported natively in the ONNX export. ``` I read the documentation but I have no idea how I'd go about setting the custom onnx config up. ### Motivation Qwen 2.5 VL is a SOTA architecture that is already being used in downstream models (see my example), so it is worth supporting. ### Your contribution I can do research but I don't have enough experience with this codebase and ML code to contribute a PR.
https://github.com/huggingface/optimum/issues/2376
open
[]
2025-10-19T22:08:28Z
2026-01-06T08:03:39Z
8
ayan4m1
huggingface/transformers
41,731
transformers CLI documentation issue
### System Info - `transformers` version: 5.0.0.dev0 - Platform: Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.39 - Python version: 3.12.9 - Huggingface_hub version: 1.0.0.rc6 - Safetensors version: 0.6.2 - Accelerate version: 1.10.1 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2.8.0+cu128 (CUDA) - Using distributed or parallel set-up in script?: no - Using GPU in script?: yes - GPU type: NVIDIA GeForce RTX 3050 Laptop GPU ### Who can help? @stevhliu ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] Update the documentation for the transformers-cli - [ ] Set the default --fixed flag to "pipe" in place of "infer" ### Reproduction echo -e "Plants create [MASK] through a process known as photosynthesis." | transformers run --task fill-mask --model google-bert/bert-base-uncased --device 0 (as shown in documentation) **output:-** <img width="1089" height="354" alt="Image" src="https://github.com/user-attachments/assets/0722d782-748b-4ecf-afa9-e4e6dbe67126" /> ### Expected behavior **output:** <img width="1087" height="287" alt="Image" src="https://github.com/user-attachments/assets/1169bfea-8473-48c4-bdd1-f623d16e2f28" /> **Fix/updated command:** echo -e "Plants create [MASK] through a process known as photosynthesis." | transformers run fill-mask --model google-bert/bert-base-uncased --device 0 --format pipe This indicates the current working format is:- transformers run <task_name> --model <model_name> --format <format_name> [options] **update** we could let the default --format flag be "pipe" instead of "infer" which is deprecated. so we could also write command as follows for most models :- transformers run <task_name> --model <model_name> **Action Needed:** (documentation change) All documentation for similar models should be updated for the transformer CLI inference I would like to confirm if my understanding is correct: should I go ahead and raise a PR to update the documentation and set the default as "pipe" for --format flag? I am relatively new to open source and would greatly appreciate any guidance or tips you could provide to ensure my contribution is appropriate and follows best practices.
https://github.com/huggingface/transformers/issues/41731
closed
[ "bug" ]
2025-10-19T09:31:46Z
2025-12-22T08:03:09Z
14
ArjunPimpale
huggingface/chat-ui
1,947
HuggingChat MoM (Mixture-of-Models) Integration Proposal 🤗
# **HuggingChat MoM (Mixture-of-Models) Integration Proposal 🤗** **Status:** Proposal **Date:** 2025-10-19 **Version:** 1.0 **Authors**: vLLM-SR Team --- ## Executive Summary This proposal outlines the integration of **vLLM Semantic Router** into HuggingChat as a new **MoM (Mixture-of-Models)** routing option. The integration will enable advanced intelligent routing capabilities including semantic caching, PII detection, and chain-of-thought (CoT) transparency, while maintaining full backward compatibility with the existing Omni (Arch router) implementation. --- ## 1. Motivation ### Current State - HuggingChat currently supports **Omni** routing via the Arch router (`src/lib/server/router/arch.ts`) - Arch router provides basic route selection using LLM-based decision-making - Limited visibility into routing decisions and no semantic caching capabilities ### Desired State - Support **MoM (Mixture-of-Models)** routing via vLLM Semantic Router - Enable advanced features: semantic caching, PII detection, intelligent routing - Provide transparent chain-of-thought (CoT) information for routing decisions - Maintain coexistence of both Omni and MoM routers for gradual rollout ### Business Value 1. **Performance**: Semantic caching reduces latency for repeated queries 2. **Security**: PII detection protects user privacy 3. **Transparency**: CoT information builds user trust 4. **Flexibility**: Users can choose between Omni and MoM routing strategies 5. **Dashboard Integration**: vLLM-SR dashboard provides monitoring and analytics ### About vLLM Semantic Router **vLLM Semantic Router** is an intelligent routing system that embodies the **Mixture-of-Models (MoM)** philosophy, with modelName (**MoM**): ```shell curl -X POST http://localhost:8801/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "MoM", "messages": [ {"role": "user", "content": "What is the derivative of x^2?"} ] }' ``` - **Intelligent Routing**: Routes requests to the optimal model based on semantic understanding of the query, not just keyword matching - **Semantic Caching**: Leverages semantic similarity to cache responses, dramatically reducing latency for similar queries (not just exact matches) - **Semantic Chain Architecture**: Evolving toward a composable semantic chain where all stages are orchestrated in an extensible pipeline, enabling future enhancements and custom stage integration in work-in-progress "SemanticChain". - **Three-Stage Pipeline** (Extensible & Composable): - **Stage 1 - Prompt Guard**: Security-first approach with jailbreak detection and PII protection - **Stage 2 - Router Memory**: Intelligent semantic caching for performance optimization - **Stage 3 - Smart Routing**: Multi-level intelligent routing combining three complementary strategies: - **Domain Understanding**: Semantic classification of queries into domains (math, coding, general, etc.) - **Similarity-Based Routing**: Semantic similarity matching to route similar queries to optimal models - **Keyword-Based Routing**: Keyword pattern matching for explicit intent detection - These three routing strategies work together to provide comprehensive query understanding and optimal model selection - Future stages can be added to the pipeline without disrupting existing functionality - **Mixture-of-Models Philosophy**: Recognizes that no single model is optimal for all tasks. 
By intelligently routing different types of queries to different specialized models, it achieves: - Better accuracy through task-specific model selection - Cost optimization by using smaller models for simple tasks - Performance improvement through semantic understanding - Transparency via chain-of-thought visibility - **Production-Ready**: Battle-tested with comprehensive error handling, monitoring, and dashboard support - **Open Source**: vLLM Community-driven development with active maintenance and feature additions --- ## 2. Goals ### Primary Goals - ✅ Integrate vLLM Semantic Router as a new MoM routing option - ✅ Extract and store chain-of-thought (CoT) metadata from vLLM-SR responses - ✅ Support both Omni and MoM routers coexisting in the same system - ✅ Expose CoT information to frontend for visualization ### Secondary Goals - ✅ Support A/B testing between Omni and MoM routers - ✅ Integrate with vLLM-SR dashboard for monitoring --- ## 3. Non-Goals - ❌ Replace Omni router entirely (maintain coexistence) - ❌ Modify vLLM Semantic Router codebase - ❌ Implement custom semantic caching in HuggingChat (use vLLM-SR's caching) - ❌ Create new dashboard (integrate with existing vLLM-SR dashboard) - ❌ Support non-OpenAI-compatible endpoints for MoM --- ## 4. Design Principles ### 1. **Backward Compatibility** - Existing Omni router functionality remains unchanged - No breaking changes to current APIs or configurations - Both routers can be configured independently ### 2. **Transparency** - CoT inf
https://github.com/huggingface/chat-ui/issues/1947
open
[ "enhancement" ]
2025-10-19T08:17:14Z
2025-10-20T11:12:30Z
3
Xunzhuo
huggingface/tokenizers
1,877
encode bytes directly
Is there a way to directly encode bytes with a bpe based HF tokenizer without having to decode the string first?
https://github.com/huggingface/tokenizers/issues/1877
open
[]
2025-10-19T03:30:39Z
2025-11-28T07:43:18Z
2
tsengalb99
vllm-project/vllm
27,154
[Installation]: How to reduce the vllm image
### Your current environment Hi, I looked at docker pull vllm/vllm-openai:latest; the image is around 12 GB. I'm exploring ways to reduce the vLLM image size, specifically for an NVIDIA L40S (I use Linux amd64). Any ideas? Does building vLLM from source help reduce the image? Here's what I've tried so far (but I'm not sure how to install flashinfer): ``` FROM nvidia/cuda:12.1.0-runtime-ubuntu22.04 # Install Python and pip RUN apt-get update && apt-get install -y python3 python3-pip && \ apt-get clean && rm -rf /var/lib/apt/lists/* # Install only vLLM and production dependencies RUN pip3 install --no-cache-dir vllm # Set CUDA arch for L40S (Ada, 8.9) ENV TORCH_CUDA_ARCH_LIST="8.9+PTX" # Expose API port EXPOSE 8000 ENTRYPOINT ["python3", "-m", "vllm.entrypoints.openai.api_server"] ``` More info: https://discuss.vllm.ai/t/current-vllm-docker-image-size-is-12-64gb-how-to-reduce-it/1204/4 https://docs.vllm.ai/en/latest/deployment/docker.html#building-vllm-s-docker-image-from-source pr: https://github.com/vllm-project/vllm/pull/22377 ### How you are installing vllm ```sh pip install -vvv vllm ``` ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27154
open
[ "installation" ]
2025-10-18T17:52:07Z
2025-10-20T17:45:39Z
4
geraldstanje
vllm-project/vllm
27,153
[Feature]: Allow vllm bench serve in non-streaming mode with /completions API
### 🚀 The feature, motivation and pitch vLLM’s bench serve currently supports recording benchmark results only in the streaming mode - recording metrics like TTFT, TPOT, ITL etc. For my use case benchmarking [llm-d ](https://github.com/llm-d/llm-d)which uses vLLM, I would like to enable vllm bench serve in non-streaming mode for the openai backend, recording only non-streaming latency metrics like E2E Latency. Overall, the changes required would be as follows: * Add a new Async Request Function - `async_request_openai_completions_non_streaming()` function in [`vllm/vllm/benchmarks/lib/endpoint_request_func.py`](https://github.com/vllm-project/vllm/blob/main/vllm/benchmarks/lib/endpoint_request_func.py) to support parsing of non-streaming vllm outputs. * Add a new benchmark argument: `benchmark_streaming`. If `benchmark_streaming` is set to False for the `openai` backend, then the above function `async_request_openai_completions_non_streaming()` is called instead of `async_request_openai_completions`. * Either modify [`vllm/benchmarks/serve.py`](https://github.com/vllm-project/vllm/blob/main/vllm/benchmarks/serve.py) or design a new benchmark script to calculate and save metrics, excluding streaming-only metrics like TTFT, TPOT and ITL. Happy to discuss and create PRs for the above implementation. Looking forward to thoughts and feedback. ### Alternatives Another option I'm considering is using [benchmark_throughput.py](https://github.com/vllm-project/vllm/blob/main/benchmarks/benchmark_throughput.py). However, it relies on the offline LLM library which does not serve my use-case of benchmarking the vllm server in non-streaming mode. ### Additional context _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
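To make the first bullet concrete, a rough sketch of the proposed `async_request_openai_completions_non_streaming()`. This is proposal code, not an existing vLLM API: the `api_url`, `payload`, and `output` object (with `latency`, `generated_text`, `success` fields mirroring what `vllm bench serve` already records) are assumed, and streaming-only metrics such as TTFT/TPOT/ITL are simply left unset.

```python
# Rough proposal sketch (not existing vLLM code): record only end-to-end latency
# for a non-streaming /v1/completions request.
import time
import aiohttp


async def async_request_openai_completions_non_streaming(api_url, payload, output):
    payload = dict(payload, stream=False)  # force non-streaming
    start = time.perf_counter()
    async with aiohttp.ClientSession() as session:
        async with session.post(api_url, json=payload) as resp:
            status = resp.status
            body = await resp.json()
    output.latency = time.perf_counter() - start  # E2E latency only
    output.generated_text = body["choices"][0].get("text", "")
    output.success = status == 200
    return output
```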
https://github.com/vllm-project/vllm/issues/27153
open
[ "feature request" ]
2025-10-18T17:47:44Z
2025-10-18T20:50:49Z
0
susiejojo
huggingface/candle
3,137
Strategic Discussion: Flicker's Hybrid Architecture for Lightweight Inference + Advanced Training
# Strategic Discussion: Flicker's Hybrid Architecture Evolution ## Overview This issue proposes a comprehensive strategic discussion about flicker's positioning and architecture evolution. The detailed proposal is documented in `STRATEGIC_DISCUSSION_PROPOSAL.md`. ## Context During analysis of flicker's capabilities vs PyTorch, a critical strategic question emerged: Should flicker be primarily a **lightweight inference engine** or evolve into a **comprehensive training framework**? ## Proposed Solution: Hybrid Architecture Instead of choosing one direction, we propose a dual-track approach: - **flicker-core**: Lightweight inference (current focus) - **flicker-train**: Advanced training features - **Feature Gates**: Granular control for specific capabilities ## Key Strategic Questions ### 1. Technical Feasibility - Is zero-copy gradient system feasible with Rust ownership? - How do we implement compile-time training validation? - What's the best approach for async-distributed training? ### 2. Market Positioning - Does hybrid approach make sense for flicker's goals? - How do we balance inference vs training development resources? - Will this attract both inference and training users? ### 3. Implementation Priority - Which advanced training features should we implement first? - How do we ensure seamless transition from inference to training? - What performance targets should we set vs PyTorch? ## Revolutionary Differentiators The proposal identifies 4 major areas where Rust could revolutionize ML: 1. **Zero-Copy Gradient Systems** - Gradients as views, not copies 2. **Compile-Time Training Validation** - Catch training errors at compile time 3. **Async-First Training Infrastructure** - True concurrency without GIL 4. **SIMD-Optimized Research Features** - Hand-optimized kernels impossible in Python ## Benefits ✅ Preserves current lightweight inference advantages ✅ Enables advanced training capabilities unique to Rust ✅ Creates natural upgrade path for users ✅ Positions flicker as both practical tool and research platform ## Next Steps 1. **Enable GitHub Discussions** to facilitate community input 2. **Review detailed proposal** in `STRATEGIC_DISCUSSION_PROPOSAL.md` 3. **Gather feedback** from community on strategic direction 4. **Validate technical feasibility** of proposed features 5. **Create implementation roadmap** based on consensus ## Discussion Document 📋 **Full Proposal**: See `STRATEGIC_DISCUSSION_PROPOSAL.md` for comprehensive analysis including: - Current state analysis - PyTorch comparison - Technical implementation details - Code examples of revolutionary features - Trade-offs and considerations - Community input questions ## Call for Input This represents a potential major evolution for flicker. Community input is essential to validate: - Strategic direction alignment with user needs - Technical feasibility of proposed features - Implementation priority and resource allocation - Market positioning effectiveness **Please review the detailed proposal and share your thoughts on flicker's strategic future.** --- *This issue will be converted to a GitHub Discussion once discussions are enabled on the repository.*
https://github.com/huggingface/candle/issues/3137
closed
[]
2025-10-18T17:27:24Z
2025-10-21T16:18:51Z
1
jagan-nuvai
huggingface/lerobot
2,245
release 0.4.0 and torch 2.8.0
Hello Lerobot Team! :) Quick question, do you have a time estimate for: - lerobot release 0.4.0 (ie next stable release using the new v30 data format) - bumping torch to 2.8 Thanks a lot in advance!
https://github.com/huggingface/lerobot/issues/2245
closed
[ "question", "dependencies" ]
2025-10-18T16:57:07Z
2025-10-19T18:34:47Z
null
antoinedandi
huggingface/lerobot
2,242
Is it no longer possible to fine-tune the previously used π0 model?
I previously trained a model using the following command for fine-tuning: `lerobot-train --dataset.repo_id=parkgyuhyeon/slice-clay --policy.path=lerobot/pi0 --output_dir=outputs/train/pi0_slice-clay --job_name=pi0_slice-clay --policy.device=cuda --wandb.enable=false --wandb.project=lerobot --log_freq=10 --steps=50000 --policy.repo_id=parkgyuhyeon/pi0_slice-clay --policy.push_to_hub=false` However, after the release of π0.5, I noticed that the new example command includes additional arguments like: ``` --policy.repo_id=your_repo_id \ --policy.compile_model=true \ --policy.gradient_checkpointing=true \ --policy.dtype=bfloat16 \ ``` It seems that some new options have been added. Does this mean the model I fine-tuned earlier using π0 can no longer be used?
https://github.com/huggingface/lerobot/issues/2242
closed
[ "question", "policies" ]
2025-10-18T08:42:35Z
2025-10-20T00:18:03Z
null
pparkgyuhyeon
huggingface/lerobot
2,239
Models trained using openpi pi0.5 on Lerobot's pi0.5
Hi, can I check if models trained using the [pytorch port of openpi's pi0.5](https://github.com/Physical-Intelligence/openpi?tab=readme-ov-file#pytorch-support) are compatible with lerobot's defination of pi0.5? Thanks!
https://github.com/huggingface/lerobot/issues/2239
open
[ "question", "policies" ]
2025-10-18T02:01:45Z
2025-10-18T10:54:06Z
null
brycegoh
huggingface/lerobot
2,228
Trossen WidowX AI model, depth cameras and tests
Hi, Would you be open to receiving pull requests to support more recent Trossen Robotics setups as well as depth cameras? I think for the robot part the pattern is quite well established. For depth cameras, we solved it by tweaking the dataset utils a bit. Our implementation is fairly well tested.
https://github.com/huggingface/lerobot/issues/2228
closed
[ "question", "robots" ]
2025-10-17T09:32:22Z
2025-10-31T19:15:25Z
null
lromor
vllm-project/vllm
27,090
[Usage]: Does vLLM support a data-parallel group spanning multiple nodes when starting an online service?
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm Does vLLM support a data-parallel group spanning multiple nodes when starting an online service? ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27090
open
[ "usage" ]
2025-10-17T09:15:04Z
2025-10-20T02:37:19Z
2
KrisLu999
vllm-project/vllm
27,086
[Bug]: After enabling P-D Disaggregation, the final output results are not entirely identical.
### Your current environment vllm VERSION: 0.10.1 ### 🐛 Describe the bug When I fixed the random seed and ensured all environment variables were consistent, I noticed that launching PD separation with the same configuration produced inconsistent final outputs. This phenomenon may require multiple attempts to fully manifest. I have a question: Is this behavior normal? (under temperature=0 conditions) vllm startup script (D),The startup process for P nodes is almost identical, except for the use of “kv_producer”. ``` VLLM_CFG=( --trust-remote-code --data-parallel-size 1 --tensor-parallel-size 8 --no-enable-prefix-caching --no-enable-chunked-prefill --kv-transfer-config '{"kv_connector":"NixlConnector","kv_role":"kv_consumer"}' ) ``` When requested, temperature=0 ``` curl -X POST -s http://${HOST_PORT}/v1/completions \ -H "Content-Type: application/json" \ -d '{ "model": "base_model", "prompt": "xxxx", # The prompt is identical for every request, and this prompt will also appear. "max_tokens": 1000, "temperature": 0, "stream": true }' printf "\n" ``` My question is: Does the PD also have a probability of producing non-identical outputs at every step when temperature=0? If this is a normal phenomenon, what causes it? If this is a bug, what might be causing it? Looking forward to your responses. Thank you. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
https://github.com/vllm-project/vllm/issues/27086
open
[ "bug" ]
2025-10-17T07:56:41Z
2025-10-20T09:16:21Z
4
freedom-cui
huggingface/lerobot
2,227
How to easily run inference with a trained model
Hello, and thank you for sharing such an inspiring project! I’m currently working with a 7-DoF robotic arm (6 joint axes + 1 gripper) and generating datasets through video recordings for training on smolVLA. Since there’s still some ongoing engineering work related to dataset generation, I’d like to start by understanding how the inference pipeline is implemented. I have successfully verified the training workflow using the [lerobot/svla_so100_pickplace](https://huggingface.co/datasets/lerobot/svla_so100_pickplace) dataset and produced a trained model. Now, I’m wondering if there is a way to quickly load the trained model and perform inference, similar to how OpenVLA provides a simple demo on Hugging Face — where the model can be loaded and tested with just a few lines of code. For OpenVLA example: ``` from transformers import AutoModelForVision2Seq, AutoProcessor from PIL import Image import torch # Load Processor & VLA processor = AutoProcessor.from_pretrained("openvla/openvla-7b", trust_remote_code=True) vla = AutoModelForVision2Seq.from_pretrained( "openvla/openvla-7b", attn_implementation="flash_attention_2", # [Optional] Requires `flash_attn` torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, trust_remote_code=True ).to("cuda:0") # Grab image input & format prompt image: Image.Image = get_from_camera(...) prompt = "In: What action should the robot take to {<INSTRUCTION>}?\nOut:" # Predict Action (7-DoF; un-normalize for BridgeData V2) inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16) action = vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False) # Execute... robot.act(action, ...) ``` I would be very grateful if you could share any related information or references.
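By analogy with the OpenVLA snippet above, a hedged sketch of loading a fine-tuned SmolVLA checkpoint and querying a single action; the import path, checkpoint location, observation keys, and tensor shapes are assumptions that depend on the lerobot version and on the features the policy was trained with, so they must be adapted to the actual dataset.

```python
# Hedged sketch (names and shapes are assumptions, adapt to your lerobot version
# and dataset): load a fine-tuned SmolVLA policy and query one action.
import torch
from lerobot.policies.smolvla.modeling_smolvla import SmolVLAPolicy  # older versions: lerobot.common.policies...

policy = SmolVLAPolicy.from_pretrained(
    "outputs/train/smolvla_pickplace/checkpoints/last/pretrained_model"  # placeholder checkpoint path
)
policy.to("cuda").eval()

# Observation keys must match the features the policy was trained on.
observation = {
    "observation.state": torch.zeros(1, 6, device="cuda"),                  # placeholder robot state
    "observation.images.top": torch.zeros(1, 3, 224, 224, device="cuda"),   # camera frame in [0, 1]
    "task": ["Pick up the cube and place it in the bin."],
}
with torch.no_grad():
    action = policy.select_action(observation)  # next action from the policy's action queue
print(action)
```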
https://github.com/huggingface/lerobot/issues/2227
open
[ "question" ]
2025-10-17T05:41:15Z
2025-12-16T02:57:00Z
null
Biz-Joe