| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/lerobot | 2,224 | Can I just modify the JSON of the pretrained policy to adapt it to my own robot? | I just want to know whether I can simply modify the config JSON (shape of state, size of image, etc.) to adapt the model for inference on my modified robot (which has a different number of state features and a different image resolution)? | https://github.com/huggingface/lerobot/issues/2224 | open | [
"question",
"policies"
] | 2025-10-17T01:33:32Z | 2025-10-20T16:40:26Z | null | shs822 |
huggingface/lerobot | 2,221 | Question about pre-trained weights usability and performance on Hugging Face models | Hello,
I would like to ask whether the weights provided on Hugging Face (for example, under the lerobot author page) can be directly downloaded and used for inference, or if they must be fine-tuned before achieving reasonable performance.
When I directly load and evaluate the models (e.g., lerobot/smolvla_base or lerobot/pi05_libero_base), the performance appears extremely poor, almost random. I’m wondering if this is expected behavior or if I might have made a mistake in my setup.
Here’s the list of models I found on Hugging Face:
lerobot/smolvla_base
lerobot/pi05_base
lerobot/diffusion_pusht
lerobot/pi0_base
lerobot/pi05_libero_base
lerobot/act_aloha_sim_transfer_cube_human
lerobot/vqbet_pusht
lerobot/diffusion_pusht_keypoints
lerobot/act_aloha_sim_insertion_human
lerobot/pi0_libero_base
lerobot/pi05_libero_finetuned
lerobot/pi05_libero_finetuned_quantiles
lerobot/pi0_libero_finetuned
Are the *_base models supposed to be general pre-trained checkpoints that require downstream fine-tuning (e.g., on LIBERO), while the *_finetuned ones are ready for evaluation?
Thank you in advance for your clarification! | https://github.com/huggingface/lerobot/issues/2221 | closed | [
"question"
] | 2025-10-16T14:14:39Z | 2025-10-31T16:26:45Z | null | MichaelWu99-lab |
vllm-project/vllm | 27,021 | [Usage]: Need guidance reproducing benchmark results from PR #25337 — results differ significantly from reported data | ## Background
Recently, we have been working on optimizing the position computation for multimodal models in vLLM.
During benchmarking, we noticed that our results were not as expected.
To investigate, we decided to reproduce the benchmark results from [PR #25337](https://github.com/vllm-project/vllm/pull/25337), comparing the performance before and after that PR was merged into the main branch.
- Before PR commit: cf56cf78b47e5f9b6a81ce0d50a94f9291922315
- After PR commit: 30d08911f7cf78287f8da003ddcc99f6ef196f9f
<img width="1380" height="712" alt="Image" src="https://github.com/user-attachments/assets/afca55db-c443-4c98-ba6b-f656b070af5f" />
However, our reproduced results differ **significantly** from the performance data reported in the PR.
We’d like to understand whether this discrepancy may be caused by hardware differences, model choice, or benchmark setup.
**Who can help guide me?**
## Model and Environment
- Model used: Qwen/Qwen3-VL-30B-A3B-Instruct-FP8 (the model Qwen3-VL-4B used in the PR could not be found on Hugging Face)
- GPU: NVIDIA A100 PCIe
- vLLM startup command:
```bash
vllm serve "Qwen/Qwen3-VL-30B-A3B-Instruct-FP8" \
--trust-remote-code \
--gpu-memory-utilization 0.9 \
--max-model-len 16384
```
## Benchmark Command
```bash
vllm bench serve \
--backend openai-chat \
--model "Qwen/Qwen3-VL-30B-A3B-Instruct-FP8" \
--base-url "http://localhost:8000" \
--endpoint "/v1/chat/completions" \
--dataset-name "hf" \
--dataset-path "lmarena-ai/VisionArena-Chat" \
--num-prompts 100 \
--request-rate 10 \
--save-result \
--result-dir benchmarks_results \
--result-filename test.json
```
## Our Benchmark Results
### Before PR #25337
```text
============ Serving Benchmark Result ============
Successful requests: 100
Request rate configured (RPS): 10.00
Benchmark duration (s): 16.91
Total input tokens: 5280
Total generated tokens: 11522
Request throughput (req/s): 5.91
Output token throughput (tok/s): 681.42
Peak output token throughput (tok/s): 2225.00
Peak concurrent requests: 97.00
Total Token throughput (tok/s): 993.68
---------------Time to First Token----------------
Mean TTFT (ms): 1176.13
Median TTFT (ms): 1185.79
P99 TTFT (ms): 2178.91
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 88.39
Median TPOT (ms): 78.68
P99 TPOT (ms): 392.01
---------------Inter-token Latency----------------
Mean ITL (ms): 77.30
Median ITL (ms): 42.31
P99 ITL (ms): 581.15
==================================================
```
### After PR #25337
```text
============ Serving Benchmark Result ============
Successful requests: 100
Request rate configured (RPS): 10.00
Benchmark duration (s): 16.89
Total input tokens: 5280
Total generated tokens: 11640
Request throughput (req/s): 5.92
Output token throughput (tok/s): 689.02
Peak output token throughput (tok/s): 2178.00
Peak concurrent requests: 97.00
Total Token throughput (tok/s): 1001.57
---------------Time to First Token----------------
Mean TTFT (ms): 1193.52
Median TTFT (ms): 1285.23
P99 TTFT (ms): 2111.41
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 88.84
Median TPOT (ms): 78.00
P99 TPOT (ms): 344.25
---------------Inter-token Latency----------------
Mean ITL (ms): 76.89
Median ITL (ms): 42.30
P99 ITL (ms): 597.42
==================================================
```
## Reference: Benchmark Results from PR #25337
### Main branch
```text
============ Serving Benchmark Result ============
Successful requests: 1000
Request rate configured (RPS): 10.00
Benchmark duration (s): 101.85
Total input tokens: 94327
Total generated tokens: 120882
Request throughput (req/s): 9.82
Output token throughput (tok/s): 1186.81
Peak output token throughput (tok/s): 2862.00
Peak concurrent requests: 133.00
Total Token throughput (tok/s): 2112.91
---------------Time to First Token----------------
Mean TTFT (ms): 229.53
Median TTFT (ms): 180.19
P99 TTFT (ms): 928.83
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): | https://github.com/vllm-project/vllm/issues/27021 | open | [
"usage"
] | 2025-10-16T12:31:03Z | 2025-10-17T05:46:32Z | 5 | deitxfge |
vllm-project/vllm | 27,017 | [Doc]: KV Cache Memory allocations | ### 📚 The doc issue
Hello,
When serving a model via vLLM for text (token) generation:
1. Before a new request gets scheduled, does vLLM check whether KV cache for a sequence length of `max_model_len` is available for that new request, or does it check whether KV cache for a sequence length of `input prompt + max_tokens` (if that is less than _max_model_length_) is available? In case the request does not specify _max_tokens_, does it default to 16?
2. In case the required KV cache memory is not available, does the server wait until it is available to schedule that new request?
3. When exactly is the KV cache allocated for a particular request? Do the KV cache blocks get allocated after computing the number of new blocks required for all current requests after each generation step of the model, as mentioned in this [blog post](https://www.aleksagordic.com/blog/vllm)? I.e., the KV cache is not fully allocated upfront based on the calculation in point [1], but allocated incrementally, since the request could finish before reaching the _max_tokens_ or _max_model_length_ limit?
4. I am trying to understand if the server concurrency can be more than the one specified in the server startup logs (based on the _max_model_len_) and get a clearer understanding of request scheduling.
example logs:
```
GPU KV cache size: {X} tokens
Maximum concurrency for {max_model_len} tokens per request: Y
```
5. The KV cache token and concurrency estimates vLLM gives in the startup logs for the **_Qwen-235B MoE_** model do not match the formula below for `tensor_parallel_size` of 8. They do match for `tensor_parallel_size` of 4, and in general for a different model such as **_Llama-70B_**. Is the formula below missing something specific to the Qwen-235B models at `tensor_parallel_size` of 8?
```
number of layers * number of KV heads * head dimension * precision/8 * 2 (for K & V) * seq_len bytes
OR
(number of layers * number of KV heads * head dimension * precision/8 * 2 (for K & V) * seq_len)/tensor_parallel_size bytes per GPU
i.e. for Qwen-235B MoE
(94 * 4 * 128 * 16/8 * 2 * seq_len)/8 bytes per GPU
```
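For a quick sanity check, the arithmetic can be scripted; here is a minimal sketch in Python, where the layer/head/dim numbers are the Qwen-235B values quoted above (taken from this post, not re-verified against the HF config) and the cache-size / max_model_len values are placeholders:
```python
# KV cache bytes per token, following the formula above (fp16/bf16 -> 2 bytes per element).
num_layers, num_kv_heads, head_dim = 94, 4, 128  # Qwen-235B numbers quoted above
bytes_per_elem = 2
tp = 8

kv_bytes_per_token = num_layers * num_kv_heads * head_dim * bytes_per_elem * 2  # x2 for K and V
kv_bytes_per_token_per_gpu = kv_bytes_per_token / tp
print(kv_bytes_per_token_per_gpu / 1024, "KiB per token per GPU")  # ~23.5 KiB here

# The startup log line "Maximum concurrency for {max_model_len} tokens per request: Y"
# is, to a first approximation, the cache size in tokens divided by max_model_len:
gpu_kv_cache_tokens = 1_000_000  # placeholder for the X from the logs
max_model_len = 32_768           # placeholder
print(gpu_kv_cache_tokens / max_model_len)
```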
Thanks!
### Suggest a potential alternative/fix
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/27017 | closed | [
"documentation"
] | 2025-10-16T11:43:43Z | 2025-11-04T11:08:02Z | 7 | sneha5gsm |
vllm-project/vllm | 27,011 | [Usage]: Runnig GLM4.5-Air with Speculative Decoding | ### Your current environment
```
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to run inference of [GLM-4.5-Air](https://huggingface.co/zai-org/GLM-4.5-Air-FP8) with speculative decoding. The [GLM 4.5](https://huggingface.co/zai-org/GLM-4.5) page mentions: `All models use MTP layers and specify --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4 to ensure competitive inference speed.`
They gave examples of how to use speculative decoding in sglang, but not in vLLM. I was wondering whether it is supported in vLLM.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/27011 | open | [
"usage"
] | 2025-10-16T10:17:54Z | 2025-10-16T10:23:01Z | 0 | aqx95 |
vllm-project/vllm | 27,006 | [Usage]: In vLLM version 0.8.5, when I send an HTTP image URL directly, the model cannot recognize the image content, but it works correctly when I use a base64-encoded image. I’d like to understand why this happens. | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/27006 | open | [
"usage"
] | 2025-10-16T08:09:29Z | 2025-10-16T10:33:49Z | 4 | Lislttt |
huggingface/lerobot | 2,218 | image pad value in pi0/pi05 | ### System Info
```Shell
the latest lerobot version
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
import torch.nn.functional as F


def resize_with_pad_torch(  # see openpi `resize_with_pad_torch` (exact copy)
    images: torch.Tensor,
    height: int,
    width: int,
    mode: str = "bilinear",
) -> torch.Tensor:
    """PyTorch version of resize_with_pad. Resizes an image to a target height and width without
    distortion by padding with black. If the image is float32, it must be in the range [-1, 1].

    Args:
        images: Tensor of shape [*b, h, w, c] or [*b, c, h, w]
        height: Target height
        width: Target width
        mode: Interpolation mode ('bilinear', 'nearest', etc.)

    Returns:
        Resized and padded tensor with same shape format as input
    """
    # Check if input is in channels-last format [*b, h, w, c] or channels-first [*b, c, h, w]
    if images.shape[-1] <= 4:  # Assume channels-last format
        channels_last = True
        if images.dim() == 3:
            images = images.unsqueeze(0)  # Add batch dimension
        images = images.permute(0, 3, 1, 2)  # [b, h, w, c] -> [b, c, h, w]
    else:
        channels_last = False
        if images.dim() == 3:
            images = images.unsqueeze(0)  # Add batch dimension

    batch_size, channels, cur_height, cur_width = images.shape

    # Calculate resize ratio
    ratio = max(cur_width / width, cur_height / height)
    resized_height = int(cur_height / ratio)
    resized_width = int(cur_width / ratio)

    # Resize
    resized_images = F.interpolate(
        images,
        size=(resized_height, resized_width),
        mode=mode,
        align_corners=False if mode == "bilinear" else None,
    )

    # Handle dtype-specific clipping
    if images.dtype == torch.uint8:
        resized_images = torch.round(resized_images).clamp(0, 255).to(torch.uint8)
    elif images.dtype == torch.float32:
        resized_images = resized_images.clamp(-1.0, 1.0)
    else:
        raise ValueError(f"Unsupported image dtype: {images.dtype}")

    # Calculate padding
    pad_h0, remainder_h = divmod(height - resized_height, 2)
    pad_h1 = pad_h0 + remainder_h
    pad_w0, remainder_w = divmod(width - resized_width, 2)
    pad_w1 = pad_w0 + remainder_w

    # Pad
    constant_value = 0 if images.dtype == torch.uint8 else -1.0
    padded_images = F.pad(
        resized_images,
        (pad_w0, pad_w1, pad_h0, pad_h1),  # left, right, top, bottom
        mode="constant",
        value=constant_value,
    )

    # Convert back to original format if needed
    if channels_last:
        padded_images = padded_images.permute(0, 2, 3, 1)  # [b, c, h, w] -> [b, h, w, c]

    return padded_images
```
### Expected behavior
Images from LeRobot are in the range [0, 1] with dtype float32, so `constant_value` in this code is -1.0 rather than 0. After the usual [0, 1] -> [-1, 1] rescaling, a padded pixel becomes -1 * 2 - 1 = -3, so values of -3 end up in the input of the SigLIP embedding. | https://github.com/huggingface/lerobot/issues/2218 | open | [
"bug",
"question",
"policies"
] | 2025-10-16T06:48:13Z | 2025-10-17T09:58:49Z | null | Tgzz666 |
huggingface/transformers | 41,640 | AttributeError: BartTokenizerFast has no attribute image_token. Did you mean: 'mask_token'? | ### System Info
Ubuntu
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
import requests
from PIL import Image
from transformers import AutoProcessor, Florence2ForConditionalGeneration
model = Florence2ForConditionalGeneration.from_pretrained(
"microsoft/Florence-2-large",
dtype=torch.bfloat16,
)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large")
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
task_prompt = "<OD>"
inputs = processor(text=task_prompt, images=image, return_tensors="pt").to(model.device, torch.bfloat16)
generated_ids = model.generate(
**inputs,
max_new_tokens=1024,
num_beams=3,
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
image_size = image.size
parsed_answer = processor.post_process_generation(generated_text, task=task_prompt, image_size=image_size)
print(parsed_answer)
```
### Expected behavior
```
raise AttributeError(f"{self.__class__.__name__} has no attribute {key}")
AttributeError: BartTokenizerFast has no attribute image_token. Did you mean: 'mask_token'?
``` | https://github.com/huggingface/transformers/issues/41640 | closed | [
"bug"
] | 2025-10-16T06:34:02Z | 2025-10-17T09:00:36Z | 5 | conceptofmind |
huggingface/transformers.js | 1,439 | Integration to a CLI application created using PKG | ### Question
I'm trying to bundle a Node.js CLI tool that uses `@xenova/transformers` into a single executable using [pkg](https://github.com/vercel/pkg).
The build works fine, but when I run the packaged executable, I get this error:
```
Error: Cannot find module '../bin/napi-v3/linux/x64/onnxruntime_binding.node'
Require stack:
- /snapshot/custom-cli/node_modules/onnxruntime-node/dist/binding.js
- /snapshot/custom-cli/node_modules/onnxruntime-node/dist/backend.js
- /snapshot/custom-cli/node_modules/onnxruntime-node/dist/index.js
- /snapshot/custom-cli/dist/custom-cli.cjs
```
**Build command:**
`webpack && pkg -t node18-linux -o custom-cli dist/custom-cli.cjs`
**pkg config:**
```
"pkg": {
"assets": [
"node_modules/onnxruntime-node/bin/napi-v3/**/onnxruntime_binding.node"
]
}
```
**Is it possible to give a custom absolute path for ONNX native bindings (something like this):**
```
import { env } from "@xenova/transformers";
env.backends.onnx.customBindingPath = "/custom-cli/onnxruntime_binding.node";
```
then the tool could:
- Extract prebuilt binaries (onnxruntime_binding.node) from a known location (or GitHub ZIP)
- Pass that custom path to @xenova/transformers / onnxruntime-node
- Load correctly even when packaged by pkg
| https://github.com/huggingface/transformers.js/issues/1439 | open | [
"question"
] | 2025-10-16T05:30:32Z | 2025-10-26T23:32:41Z | null | JosephJibi |
huggingface/lerobot | 2,216 | GPU memory required to finetune pi05 | I tried to fine-tune pi05 on an RTX A6000 (48GB) and got an insufficient-memory error. Does anyone know how much GPU memory is needed to fine-tune a pi05 policy?
Thanks, | https://github.com/huggingface/lerobot/issues/2216 | open | [
"question",
"policies",
"performance"
] | 2025-10-16T04:46:21Z | 2025-12-22T07:42:45Z | null | jcl2023 |
vllm-project/vllm | 26,981 | [Usage]: Does vllm support use TokensPrompt for Qwen3VL model | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
My truncation strategy differs slightly from the standard approach (I wish to preserve the system prompt and the final suffix, truncating only the middle portion). It seems that the current version of vLLM does not support this, so I attempted to pass pre-processed token IDs along with mm_data as input, for example: `TokensPrompt(prompt_token_ids=text[:self.max_model_length] + self.suffix_tokens, multi_modal_data=mm_data, mm_processor_kwargs=video_kwargs)`.
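For clarity, here is a minimal sketch of that keep-prefix/keep-suffix truncation on raw token IDs (the helper name and budget handling are illustrative, not part of the vLLM API):
```python
def truncate_middle(
    prefix_ids: list[int], middle_ids: list[int], suffix_ids: list[int], max_len: int
) -> list[int]:
    # Preserve the system prompt (prefix) and the final suffix; cut only the middle.
    budget = max_len - len(prefix_ids) - len(suffix_ids)
    if budget < 0:
        raise ValueError("prefix + suffix already exceed max_len")
    return prefix_ids + middle_ids[:budget] + suffix_ids
```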
However, I encountered an error. Could you please advise on the correct way to use this?
<img width="1555" height="351" alt="Image" src="https://github.com/user-attachments/assets/935cdcf5-59ff-480b-bbc5-a6426e48a12c" />
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/26981 | open | [
"usage"
] | 2025-10-16T03:22:09Z | 2025-10-27T03:33:53Z | 10 | afalf |
huggingface/lerobot | 2,214 | Potential Scale Imbalance in smolVLA Embedding Pipeline | Hi, I noticed a potential scale inconsistency in the embedding pipeline.
Specifically, state_emb is not normalized, while both img_emb and lang_emb are explicitly scaled by math.sqrt(emb_dim):
https://github.com/huggingface/lerobot/blob/a6ff3cfebb0304f2c378515dd30ea06fff8f473f/src/lerobot/policies/smolvla/modeling_smolvla.py#L591-L601
In practice, the numerical magnitude of img_emb tends to be much higher (often in the hundreds), while lang_emb and state_emb remain in the single-digit range. This discrepancy might cause the image features to dominate during multimodal fusion or attention.
Related code:
https://github.com/huggingface/lerobot/blob/a6ff3cfebb0304f2c378515dd30ea06fff8f473f/src/lerobot/policies/smolvla/modeling_smolvla.py#L561-L566
Suggestion:
Consider adding a LayerNorm after img_emb (or before the multimodal fusion stage) to align the scale across modalities. This could improve stability during training and quantization.
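As a rough illustration of the suggestion (the dimensions and magnitudes below are made up for the demo, not smolVLA's actual values), a LayerNorm brings the image tokens back to a per-token scale comparable to the other modalities:
```python
import math

import torch
import torch.nn as nn

emb_dim = 960  # illustrative embedding width
img_emb = torch.randn(1, 64, emb_dim) * math.sqrt(emb_dim)  # large-scale image tokens
lang_emb = torch.randn(1, 16, emb_dim)                      # small-scale language tokens

img_norm = nn.LayerNorm(emb_dim)
img_emb_normed = img_norm(img_emb)

print(img_emb.abs().mean().item())         # large (~25 here)
print(img_emb_normed.abs().mean().item())  # ~0.8, comparable to lang_emb
print(lang_emb.abs().mean().item())        # ~0.8
```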
—
Reported by Tank @ iMotion AI | https://github.com/huggingface/lerobot/issues/2214 | open | [
"question",
"policies"
] | 2025-10-16T02:11:24Z | 2025-10-17T11:29:36Z | null | kkTkk012 |
vllm-project/vllm | 26,964 | [Bug]: Issue with Deepseek Reasoning parser with Qwen3 2507 chat templates | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
# wget https://raw.githubusercontent.com/vllm-project/vllm/main/vllm/collect_env.py
# For security purposes, please feel free to check the contents of collect_env.py before running it.
python collect_env.py
--2025-10-15 17:33:01-- https://raw.githubusercontent.com/vllm-project/vllm/main/vllm/collect_env.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.108.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 28050 (27K) [text/plain]
Saving to: ‘collect_env.py.2’
collect_env.py.2 100%[===================================>] 27.39K --.-KB/s in 0s
2025-10-15 17:33:01 (65.0 MB/s) - ‘collect_env.py.2’ saved [28050/28050]
# # sh: 8: python: not found
```
</details>
### 🐛 Describe the bug
I'm running vLLM as a Docker container on an Unraid server. It is a backend for the Open WebUI chat interface. The issue I see is that the reasoning block in Open WebUI closes too early. According to this discussion on the Open WebUI GitHub, I think it is caused by the DeepSeek parser, which the model card recommends. See this link: https://github.com/open-webui/open-webui/pull/16687
Here is an example of the issue that I face:
<img width="1936" height="807" alt="Image" src="https://github.com/user-attachments/assets/eb2f6452-3df0-49f0-a1c5-5b99b56f578a" />
I think this is the place to raise this issue. Thanks so much!
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/26964 | open | [
"bug"
] | 2025-10-16T00:39:12Z | 2025-10-20T17:47:02Z | 1 | MikeNatC |
vllm-project/vllm | 26,949 | [Bug]: RuntimeError: CUDA driver error: invalid device ordinal when symmetric memory (symm_mem) is enabled in multi-GPU vLLM setup with 4H100 PCIe | ### My current environment
Environment:
Model: RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic
vLLM Version: latest main (installed via pip)
Hardware: 4× NVIDIA H100 PCIe (80GB)
Driver: 550.xx
CUDA: 12.2
PyTorch: 2.4.0
OS: Ubuntu 22.04
Launch Command:
```bash
python3 -m vllm.entrypoints.api_server \
  --model /ephemeral/huggingface/models--RedHatAI--Llama-4-Scout-17B-16E-Instruct-FP8-dynamic/snapshots/... \
  --tensor-parallel-size 4 \
  --gpu-memory-utilization 0.85 \
  --kv-cache-dtype fp8_e4m3 \
  --max-model-len 4000000 \
  --max-num-seqs 16 \
  --enable-prefix-caching \
  --kv-events-config '{"enable_kv_cache_events": true, "publisher": "zmq", "endpoint": "tcp://*:5557"}'
```
### Bug
```text
RuntimeError: CUDA driver error: invalid device ordinal
(EngineCore_DP0 pid=11546) ERROR [symm_mem.py:88] handle = torch_symm_mem.rendezvous(self.buffer, self.group.group_name)
(EngineCore_DP0 pid=11546) ERROR WorkerProc failed to start
RuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {'EngineCore_DP0': 1}
```
Behavior:
When symm_mem is enabled (default) → fails with invalid device ordinal
When symm_mem is disabled via --disable-symm-mem →
✅ vLLM engine starts
❌ No KV cache event logs (BlockStored, BlockRemoved, etc.)
❌ No prefix cache hit metrics
What I’ve Tried
Verified all 4 GPUs visible via nvidia-smi
Confirmed correct CUDA device indexing
Reduced tensor-parallel-size to 2 → same error
Checked for NCCL initialization issues — none
Manually set CUDA_VISIBLE_DEVICES=0,1,2,3
Rebuilt PyTorch + vLLM from source with USE_SYMMETRIC_MEMORY=1 — same result
Question:
Is there a known compatibility issue between symmetric memory (torch_symm_mem) and H100 PCIe devices in multi-GPU setups?
If so, is there a fallback mechanism to preserve KV event publishing (--kv-events-config) when symmetric memory is disabled?
Thanks for looking into it.
| https://github.com/vllm-project/vllm/issues/26949 | open | [
"bug"
] | 2025-10-15T22:08:34Z | 2025-12-25T03:42:49Z | 2 | vadapallij |
vllm-project/vllm | 26,940 | [Feature]: Support `inf` value for burstiness in benchmarks | ### 🚀 The feature, motivation and pitch
In the benchmarks, the burstiness value is used in a gamma distribution to sample the delays between consecutive requests.
```
theta = 1.0 / (current_request_rate * burstiness)
delay_ts.append(np.random.gamma(shape=burstiness, scale=theta))
```
[Theoretically ](https://en.wikipedia.org/wiki/Gamma_distribution)(and this is also what is observed in practice), the generated delays have as mean `1.0 / current_request_rate` and the spread is controlled by the burstiness. When the burstiness is high, we observe lower variance in the delay values, all values being closer to the mean `1.0 / current_request_rate`. When burstiness tends to infinity, we should observe a single generated delay, which is `1.0 / current_request_rate`. In practice, the `np.random.gamma` function generates `nan` as results, so we need to manually condition on `burstiness` value and append `1.0 / current_request_rate` to the list of delays when burstiness becomes infinite.
See the attached image for a mathematical proof:
<img width="1323" height="1672" alt="Image" src="https://github.com/user-attachments/assets/455cfd00-ea8f-44c8-874f-7fdac4faae6d" />
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/26940 | closed | [
"feature request"
] | 2025-10-15T19:39:03Z | 2025-11-03T18:33:19Z | 0 | sducouedic |
vllm-project/vllm | 26,914 | [Usage]: Why are no communication operators visible in the collected profiling data? | ### Your current environment
```text
The output of `python collect_env.py`
```
Using `llm.start_profile` and `llm.stop_profile`, I collected profiling data, but I cannot see any communication operators in `kernel_details`.
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/26914 | open | [
"usage"
] | 2025-10-15T13:38:14Z | 2025-10-15T13:38:14Z | 0 | sheep94lion |
vllm-project/vllm | 26,903 | [Usage]: vLLM for video input | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to run inference of qwen2.5-vl or qwen2.5-omni.
When I convert the video to base64 for API calls (e.g. OpenAI format), I found that vLLM seems to use all the video frames, judging by the number of prompt tokens.
Is there any parameter, similar to fps, to control the sampling rate?
Or do I need to subsample the video externally beforehand, save it as a video, and then convert it to base64?
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/26903 | open | [
"usage"
] | 2025-10-15T09:29:23Z | 2025-12-11T03:26:33Z | 6 | King-king424 |
huggingface/diffusers | 12,492 | module transformers has no attribute CLIPFeatureExtractor | ### System Info
latest main
### Who can help?
@SunMarc
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from diffusers import AnimateDiffPipeline
pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism")
```
error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/venv/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/jiqing/diffusers/src/diffusers/pipelines/pipeline_utils.py", line 1024, in from_pretrained
loaded_sub_model = load_sub_model(
^^^^^^^^^^^^^^^
File "/home/jiqing/diffusers/src/diffusers/pipelines/pipeline_loading_utils.py", line 752, in load_sub_model
class_obj, class_candidates = get_class_obj_and_candidates(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jiqing/diffusers/src/diffusers/pipelines/pipeline_loading_utils.py", line 419, in get_class_obj_and_candidates
class_obj = getattr(library, class_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jiqing/transformers/src/transformers/utils/import_utils.py", line 1920, in __getattr__
raise AttributeError(f"module {self.__name__} has no attribute {name}")
AttributeError: module transformers has no attribute CLIPFeatureExtractor
```
### Expected behavior
As transformers deprecated FeatureExtractor classes in favor of ImageProcessor classes for image preprocessing. How to handle models that already set FeatureExtractor in model hub like [emilianJR/epiCRealism](https://huggingface.co/emilianJR/epiCRealism/blob/main/feature_extractor/preprocessor_config.json#L11)? | https://github.com/huggingface/diffusers/issues/12492 | closed | [
"bug"
] | 2025-10-15T08:26:05Z | 2025-11-03T05:02:54Z | 3 | jiqing-feng |
vllm-project/vllm | 26,858 | [RFC]: Top-level CLI interface for KV cache offloading | ### Motivation.
CPU (and tier-2 storage) offloading is an important feature in many cases (multi-round QA, document analysis, agent workflow, and reinforcement learning). With the recent advancement in the offloading connector, we already have the vLLM native CPU offloading implemented via the connector API. Also, there are multiple community efforts to provide other offloading implementations (e.g., LMCache, Nixl storage, mooncake) via the same set of APIs.
However, there is no clear documentation about how to configure the CPU offloading from the user's perspective. Right now, in order to enable CPU offloading, the user needs to pass a JSON string to `--kv-transfer-config`, which may create a huge mental barrier for new users. Therefore, it would be better to have a simple & clear user interface for users to enable CPU offloading.
### Proposed Change.
This proposal contains two new command-line arguments:
- `--kv-offloading-size`: a numeric value to control a global offloading buffer size (in GB). When TP > 1, this number should be the total size summed across all the TP ranks. (An alternative is the buffer size for each TP rank.)
- `--kv-offloading-backend`: a string that specifies which offloading backend to use, such as "native", "lmcache", "mooncake", "3fs", or "nixl".
This will give enough clarity to most of the users who want to use the offloading feature, and should be extensible enough to new offloading backends and tier-2 storage.
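As a hypothetical sketch, the translation step could look roughly like this (all names and the shape of the returned dict are assumptions, not existing vLLM APIs):
```python
def offloading_cli_to_connector_config(size_gb: float, backend: str) -> dict:
    # Maps --kv-offloading-size / --kv-offloading-backend to the kind of dict
    # users currently have to hand-write as JSON for --kv-transfer-config.
    supported = {"native", "lmcache", "mooncake", "3fs", "nixl"}
    if backend not in supported:
        raise ValueError(f"unknown kv offloading backend: {backend!r}")
    return {
        "kv_connector_backend": backend,
        "kv_buffer_size_gb": size_gb,  # total across TP ranks, per this proposal
    }
```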
## Required changes
To implement this proposal, the following things are needed:
- Add logic to parse the new CLI argument and store it into vllm config.
- Add a new module to translate the `--kv-offloading-size` and `--kv-offloading-backend` to the corresponding KV connector config.
- Add the documentation to the vLLM user guide.
### Feedback Period.
1~2 weeks
### CC List.
@simon-mo @orozery @njhill
### Any Other Things.
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/26858 | closed | [
"RFC"
] | 2025-10-15T00:11:15Z | 2025-11-01T07:17:08Z | 8 | ApostaC |
huggingface/diffusers | 12,485 | How to enable Context Parallelism for training | Hi @a-r-r-o-w , I would like to ask you for tips on using Context Parallelism for distributed training.
**Is your feature request related to a problem? Please describe.**
Here is the minimal code for adapting Context Parallelism into diffusion model training
```python
# Diffusers Version: 0.36.0.dev0
from diffusers.models._modeling_parallel import ContextParallelConfig
# I have 8 GPUs in total
cp_config = ContextParallelConfig(ring_degree=1, ulysses_degree=8)
flux_transformer.enable_parallelism(config=cp_config)
loss = train(flux_transformer)
accelerator.backward(loss)
grad_norm = accelerator.clip_grad_norm_(flux_transformer.parameters(), args.max_grad_norm)
```
However, there is a bug:
```bash
[rank5]: Traceback (most recent call last):
[rank5]: File "/home/code/diffusers/flux/sft_flux.py", line 1494, in <module>
[rank5]: main_with_cleanup(args)
[rank5]: File "/home/code/diffusers/flux/sft_flux.py", line 1460, in main_with_cleanup
[rank5]: main(args)
[rank5]: File "/home/code/diffusers/flux/sft_flux.py", line 1216, in main
[rank5]: grad_norm = accelerator.clip_grad_norm_(flux_transformer.parameters(), args.max_grad_norm)
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/accelerate/accelerator.py", line 2863, in clip_grad_norm_
[rank5]: return torch.nn.utils.clip_grad_norm_(
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/nn/utils/clip_grad.py", line 36, in _no_grad_wrapper
[rank5]: return func(*args, **kwargs)
[rank5]: ^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/nn/utils/clip_grad.py", line 222, in clip_grad_norm_
[rank5]: _clip_grads_with_norm_(parameters, max_norm, total_norm, foreach)
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/nn/utils/clip_grad.py", line 36, in _no_grad_wrapper
[rank5]: return func(*args, **kwargs)
[rank5]: ^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/nn/utils/clip_grad.py", line 155, in _clip_grads_with_norm_
[rank5]: clip_coef = max_norm / (total_norm + 1e-6)
[rank5]: ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/_tensor.py", line 39, in wrapped
[rank5]: return f(*args, **kwargs)
[rank5]: ^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/_tensor.py", line 1101, in __rdiv__
[rank5]: return self.reciprocal() * other
[rank5]: ^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/_compile.py", line 53, in inner
[rank5]: return disable_fn(*args, **kwargs)
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 929, in _fn
[rank5]: return fn(*args, **kwargs)
[rank5]: ^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/distributed/tensor/_api.py", line 350, in __torch_dispatch__
[rank5]: return DTensor._op_dispatcher.dispatch(
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/distributed/tensor/_dispatch.py", line 166, in dispatch
[rank5]: self.redistribute_local_args(
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/distributed/tensor/_dispatch.py", line 303, in redistribute_local_args
[rank5]: resharded_local_tensor = redistribute_local_tensor(
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/distributed/tensor/_redistribute.py", line 208, in redistribute_local_tensor
[rank5]: new_local_tensor = partial_spec._reduce_value(
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/distributed/tensor/_ops/_math_ops.py", line 126, in _reduce_value
[rank5]: reduced_tensor = super()._reduce_value(tensor, mesh, mesh_dim)
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/distributed/tensor/placement_types.py", line 679, in _reduce_value
[rank5]: return funcol.all_reduce(
[rank5]: ^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/distributed/_functional_collectives.py", line 175, in all_reduce
[rank5]: group_name = _resolve_group_name(group, tag)
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/.local/lib/python3.11/site-packages/torch/distributed/_functional_collectives.py", line 783, in _resolve_group_name
[rank5]: return dmesh._dim_group_names[dim]
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^
[rank5]: AttributeError: 'DeviceMesh' obj | https://github.com/huggingface/diffusers/issues/12485 | closed | [] | 2025-10-14T21:48:35Z | 2025-10-15T20:33:30Z | null | liming-ai |
vllm-project/vllm | 26,840 | [Doc]: Update AWQ Guide | ### 📚 The doc issue
Situation: AutoAWQ functionality was adopted by llm-compressor but vllm [docs](https://docs.vllm.ai/en/latest/features/quantization/auto_awq.html) point to AutoAWQ which is deprecated
### Suggest a potential alternative/fix
1) Update the [AutoAWQ guide](https://github.com/vllm-project/vllm/blob/main/docs/features/quantization/auto_awq.md) to use the [llm-compressor](https://github.com/vllm-project/llm-compressor/tree/2a6a0a34c8a57b6090b5fbac9c0659edf982185c/examples/awq) apis/flow
2) Make sure to also update links in [quantization doc](https://github.com/vllm-project/vllm/blob/main/docs/features/quantization/README.md)
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/26840 | closed | [
"documentation"
] | 2025-10-14T20:02:21Z | 2025-11-03T15:39:12Z | 0 | HDCharles |
vllm-project/vllm | 26,838 | [Performance]: RTX 6000 PRO - FP8 in sglang is faster | ### Proposal to improve performance
Can we have a discussion about sglang FP8 performance vs. vLLM performance?
I'm able to get 133 tokens/sec with sglang on GLM-4.5-Air-FP8 vs. 78 tokens/sec in vLLM.
```PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True USE_TRITON_W8A8_FP8_KERNEL=1 SGL_ENABLE_JIT_DEEPGEMM=0 python -m sglang.launch_server --model /mnt/GLM-4.5-FP8/ --tp 4 --host 0.0.0.0 --port 5000 --mem-fraction-static 0.93 --context-length 128000 --enable-metrics --attention-backend flashinfer --tool-call-parser glm45 --reasoning-parser glm45 --served-model-name glm-4.5-air --chunked-prefill-size 8092 --enable-mixed-chunk --cuda-graph-max-bs 32 --kv-cache-dtype fp8_e5m2```
It is using Triton.
I'm not able to achieve the same speed with vLLM by any method (neither FlashInfer nor Triton, etc.); the maximum is always around 78 tokens/sec.
1) Any idea how to achieve the same 133 tokens/sec in vLLM using Triton and the same configuration as in sglang?
2) Is it by design that CUTLASS is not as fast as Triton here?
### Report of performance regression
_No response_
### Misc discussion on performance
_No response_
### Your current environment (if you think it is necessary)
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/26838 | open | [
"performance"
] | 2025-10-14T19:41:14Z | 2025-12-29T14:52:57Z | 10 | voipmonitor |
vllm-project/vllm | 26,817 | [Feature]: Add process_weights_after_loading to AttentionImpl | ### 🚀 The feature, motivation and pitch
Currently, in the `Attention` layer, we check if `process_weights_after_loading` exists and then call it conditionally, and after that we apply flashinfer-specific logic.
Instead, we should just add a `process_weights_after_loading` method to AttentionImpl (a no-op by default), call it from `Attention.process_weights_after_loading`, and override it in `FlashInferAttentionImpl`.
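A sketch of what that could look like (the method signature is an assumption; only the no-op default plus FlashInfer override structure is implied by this issue):
```python
class AttentionImpl:
    def process_weights_after_loading(self, layer) -> None:
        # Default: nothing to do after weight loading.
        pass


class FlashInferImpl(AttentionImpl):
    def process_weights_after_loading(self, layer) -> None:
        # The FlashInfer-specific logic currently applied conditionally in the
        # Attention layer would move into this override.
        ...
```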
### Alternatives
_No response_
### Additional context
https://github.com/vllm-project/vllm/pull/23016#discussion_r2414787224
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/26817 | closed | [
"help wanted",
"good first issue",
"feature request"
] | 2025-10-14T15:59:54Z | 2025-10-16T15:02:31Z | 2 | ProExpertProg |
vllm-project/vllm | 26,806 | [Usage]: MCP-USE with VLLM gpt-oss:20b via ChatOpenAI | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I am trying to create an agent using gpt-oss:20B with mcp-use.
Most of the time the model returns "Agent completed the task successfully.", and only sometimes the proper output that is required.
### code
`vllm serve openai/gpt-oss-20b --max-model-len 100000 --gpu-memory-utilization 0.9 --port 8000 --tool-call-parser openai --enable-auto-tool-choice`
```python
from mcp_use import MCPAgent, MCPClient
from langchain_openai import ChatOpenAI

client = MCPClient.from_dict(config)

llm = ChatOpenAI(
    model="openai/gpt-oss-20b",
    base_url="http://127.0.0.1:8000/v1",
    api_key="not-needed",
    temperature=0.8,
    max_tokens=2048,
)

agent = MCPAgent(llm=llm, client=client, max_steps=30)
```
I am also raising this on the mcp-use repo.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/26806 | open | [
"usage"
] | 2025-10-14T13:00:38Z | 2025-11-20T06:33:29Z | 2 | Tahirc1 |
vllm-project/vllm | 26,786 | [Usage]: cuda12.8 docker 0.11.0 Error occurs when launching the model, NCCL error: unhandled cuda error. | When I use only a single graphics card, the system can start up normally.
Below are Docker configuration files, logs, and environment information.
I encountered this issue when upgrading from version 10.1.1 to 10.2.
[The system generates an error when using dual graphics cards; version 10.1.1 functions correctly, but version 10.2 triggers an error upon execution.](https://github.com/vllm-project/vllm/issues/25813)
### Your current environment
```text
# vllm collect-env
INFO 10-14 19:07:58 [__init__.py:216] Automatically detected platform cuda.
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : version 4.1.0
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.11 (main, Jun 4 2025, 08:56:18) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.93
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version : 571.96
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 1
Stepping: 4
BogoMIPS: 4788.75
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch pti ssbd ibrs ibpb stibp fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves md_clear flush_l1d arch_capabilities
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 640 KiB (20 instances)
L1i cache: 640 KiB (20 instances)
L2 cache: 20 MiB (20 instances)
L3 cache: 27.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-39
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
==============================
Versions of relevant libraries
==============================
[pip3] flashinfer-python==0.3.1
[pip3] numpy== | https://github.com/vllm-project/vllm/issues/26786 | closed | [
"usage"
] | 2025-10-14T09:01:39Z | 2025-11-07T17:17:32Z | 3 | ooodwbooo |
vllm-project/vllm | 26,774 | [Usage]: how to use vllm on CUDA 12.9 | ### Your current environment
```text
Traceback (most recent call last):
File "/vllm-workspace/collect_env.py", line 825, in <module>
main()
File "/vllm-workspace/collect_env.py", line 804, in main
output = get_pretty_env_info()
^^^^^^^^^^^^^^^^^^^^^
File "/vllm-workspace/collect_env.py", line 799, in get_pretty_env_info
return pretty_str(get_env_info())
^^^^^^^^^^^^^^
File "/vllm-workspace/collect_env.py", line 619, in get_env_info
cuda_module_loading=get_cuda_module_loading_config(),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/vllm-workspace/collect_env.py", line 540, in get_cuda_module_loading_config
torch.cuda.init()
File "/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py", line 339, in init
_lazy_init()
File "/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py", line 372, in _lazy_init
torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
root@test2222-7dcd6b94b7-wl6w4:/vllm-workspace# python3 --version
Python 3.12.1
```
### How would you like to use vllm
My node's CUDA version is 12.9, while the pod image's CUDA version is 12.8. Will this cause the "No CUDA GPUs are available" error? Is 12.9 compatible with 12.8? Should we upgrade the vLLM version or downgrade the node's CUDA version to 12.8?
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/26774 | open | [
"usage"
] | 2025-10-14T07:30:56Z | 2025-10-14T07:40:08Z | 1 | Mrpingdan |
vllm-project/vllm | 26,772 | [Feature]: Option kv_event default config | ### 🚀 The feature, motivation and pitch
The current kv_event config defaults the publisher to null while the endpoint defaults to a ZMQ endpoint, so when the publisher is not set explicitly, vLLM cannot start and fails with: `EventPublisher.__init__() got an unexpected keyword argument 'endpoint'`.
Can we change the default publisher to zmq, so that users who enable enable_kv_cache_events can use it directly?
https://github.com/vllm-project/vllm/blob/d32c611f455766c9d67034b5e0f8e66f28f4a3ba/vllm/config/kv_events.py#L20-L24
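For illustration, the change could be as small as flipping the default in the linked dataclass (field names are assumed from the link; treat this as a sketch):
```python
from dataclasses import dataclass

@dataclass
class KVEventsConfig:
    enable_kv_cache_events: bool = False
    publisher: str = "zmq"  # proposed: default to "zmq" instead of null
    endpoint: str = "tcp://*:5557"
```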
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/26772 | closed | [
"feature request"
] | 2025-10-14T07:08:58Z | 2025-10-22T19:19:34Z | 5 | lengrongfu |
vllm-project/vllm | 26,762 | [Usage]: about curl http://ip:8000/metrics | ### Your current environment
When I run this command, I get the following results:
```text
# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 12286.0
python_gc_objects_collected_total{generation="1"} 1244.0
python_gc_objects_collected_total{generation="2"} 1326.0
# HELP python_gc_objects_uncollectable_total Uncollectable objects found during GC
# TYPE python_gc_objects_uncollectable_total counter
python_gc_objects_uncollectable_total{generation="0"} 0.0
python_gc_objects_uncollectable_total{generation="1"} 0.0
python_gc_objects_uncollectable_total{generation="2"} 0.0
# HELP python_gc_collections_total Number of times this generation was collected
# TYPE python_gc_collections_total counter
python_gc_collections_total{generation="0"} 1378.0
python_gc_collections_total{generation="1"} 124.0
python_gc_collections_total{generation="2"} 9.0
# HELP python_info Python platform information
# TYPE python_info gauge
python_info{implementation="CPython",major="3",minor="12",patchlevel="11",version="3.12.11"} 1.0
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.1701968896e+010
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 1.045848064e+09
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.76036994809e+09
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 148.44
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 69.0
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP http_requests_total Total number of requests by method, status and handler.
# TYPE http_requests_total counter
http_requests_total{handler="none",method="GET",status="4xx"} 1.0
# HELP http_requests_created Total number of requests by method, status and handler.
# TYPE http_requests_created gauge
http_requests_created{handler="none",method="GET",status="4xx"} 1.7604160309440813e+09
# HELP http_request_size_bytes Content length of incoming requests by handler. Only value of header is respected. Otherwise ignored. No percentile calculated.
# TYPE http_request_size_bytes summary
http_request_size_bytes_count{handler="none"} 1.0
http_request_size_bytes_sum{handler="none"} 0.0
# HELP http_request_size_bytes_created Content length of incoming requests by handler. Only value of header is respected. Otherwise ignored. No percentile calculated.
# TYPE http_request_size_bytes_created gauge
http_request_size_bytes_created{handler="none"} 1.7604160309442668e+09
# HELP http_response_size_bytes Content length of outgoing responses by handler. Only value of header is respected. Otherwise ignored. No percentile calculated.
# TYPE http_response_size_bytes summary
http_response_size_bytes_count{handler="none"} 1.0
http_response_size_bytes_sum{handler="none"} 22.0
# HELP http_response_size_bytes_created Content length of outgoing responses by handler. Only value of header is respected. Otherwise ignored. No percentile calculated.
# TYPE http_response_size_bytes_created gauge
http_response_size_bytes_created{handler="none"} 1.7604160309445088e+09
# HELP http_request_duration_highr_seconds Latency with many buckets but no API specific labels. Made for more accurate percentile calculations.
# TYPE http_request_duration_highr_seconds histogram
http_request_duration_highr_seconds_bucket{le="0.01"} 1.0
http_request_duration_highr_seconds_bucket{le="0.025"} 1.0
http_request_duration_highr_seconds_bucket{le="0.05"} 1.0
http_request_duration_highr_seconds_bucket{le="0.075"} 1.0
http_request_duration_highr_seconds_bucket{le="0.1"} 1.0
http_request_duration_highr_seconds_bucket{le="0.25"} 1.0
http_request_duration_highr_seconds_bucket{le="0.5"} 1.0
http_request_duration_highr_seconds_bucket{le="0.75"} 1.0
http_request_duration_highr_seconds_bucket{le="1.0"} 1.0
http_request_duration_highr_seconds_bucket{le="1.5"} 1.0
http_request_duration_highr_seconds_bucket{le="2.0"} 1.0
http_request_duration_highr_seconds_bucket{le="2.5"} 1.0
http_request_duration_highr_seconds_bucket{le="3.0"} 1.0
http_request_duration_highr_seconds_bucket{le="3.5"} 1.0
http_request_duration_highr_seconds_bucket{le="4.0"} 1.0
http_request_duration_highr_seconds_bucket{le="4.5"} 1.0
http_request_duration_highr_seconds_bucket{le="5.0"} 1.0
http_request_duration_highr_seconds_bucket{le="7.5"} 1.0
http_request_duration_highr_seconds_bucket{le="10.0"} 1.0
http_request_duration_highr_seconds_bucket{le="30.0"} 1.0
http_request_duration_highr_seconds_bucket{le="60.0"} 1.0
http_request_duration_highr_se
```
| https://github.com/vllm-project/vllm/issues/26762 | open | [
"usage"
] | 2025-10-14T05:13:30Z | 2025-10-14T05:13:30Z | 0 | Renoshen |
huggingface/lerobot | 2,194 | During training with PI0, the loss is very low. Is this normal, and is the training proceeding correctly? | I am currently training with PI05.
<img width="1039" height="355" alt="Image" src="https://github.com/user-attachments/assets/5ab3f3e0-82bc-403c-8124-416b330dab14" />
```text
INFO 2025-10-14 04:57:11 ot_train.py:299 step:10 smpl:320 ep:0 epch:0.00 loss:0.468 grdn:3.522 lr:1.6e-07 updt_s:4.906 data_s:4.874
INFO 2025-10-14 04:57:59 ot_train.py:299 step:20 smpl:640 ep:0 epch:0.00 loss:0.467 grdn:3.936 lr:4.1e-07 updt_s:4.807 data_s:0.008
INFO 2025-10-14 04:58:48 ot_train.py:299 step:30 smpl:960 ep:0 epch:0.01 loss:0.508 grdn:3.973 lr:6.6e-07 updt_s:4.815 data_s:0.009
INFO 2025-10-14 04:59:36 ot_train.py:299 step:40 smpl:1K ep:1 epch:0.01 loss:0.513 grdn:3.805 lr:9.1e-07 updt_s:4.841 data_s:0.009
```
The loss is very low right from the start of training. Is it training normally? | https://github.com/huggingface/lerobot/issues/2194 | closed | [
"question",
"policies"
] | 2025-10-14T05:04:31Z | 2025-10-14T08:19:29Z | null | pparkgyuhyeon |
huggingface/peft | 2,832 | Gradient checkpointing with multiple adapters | I'm not sure whether this can be considered a bug, since I might be using the library differently from how it's intended to be used.
**Context:**
I have a PeftModel that needs to run inference on 2 different inputs.
For each input I have a pretrained adapter that is frozen and a new adapter that is being fine-tuned.
My forward does:
```python
for name, x in inputs:
    mypeft_model.base_model.set_adapter([name + 'pretrain', name + 'ft'])
    # set_adapter forces requires_grad=True (cf. issue #2759), so re-freeze the pretrained adapter:
    custom_set_pretrain_grad_false_ft_true()
    feature = mypeft_model(x)
```
(https://github.com/huggingface/peft/issues/2759#issue-3363985341)
**Issue:**
1) If mypeft_model contains `cp.checkpoint(mymodule, x)`, backpropagation does not properly update the weights of the LoRA layers in the module, either because the recomputation did not 'see' the `set_adapter` call or because it did not 'see' the forced gradients.
2) A workaround I have found is to wrap the whole loop body in `cp.checkpoint`, but it is very heavy on memory, as I have to keep everything on the GPU until the end of the backbone (a ViT-G transformer with 40 blocks).
**Question:**
Is there any way to 'provide' the context to the backpropagation, even when using gradient checkpointing, when switching adapters in the forward?
I have not explored Hugging Face's transformers.enable_gradient_checkpointing() since I'm using a custom model and I'm unsure whether it fits my problem.
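One direction I'm considering is moving the adapter switch *inside* the checkpointed function, so the backward-time recomputation re-applies it. A minimal sketch (untested; it assumes non-reentrant checkpointing simply replays the wrapped function as-is):
```python
import torch.utils.checkpoint as cp

def checkpointed_block(block, name, x):
    def fn(inp):
        # Re-select adapters here so the recomputation during backward
        # sees the same adapter state as the original forward pass.
        mypeft_model.base_model.set_adapter([name + 'pretrain', name + 'ft'])
        custom_set_pretrain_grad_false_ft_true()  # same helper as above
        return block(inp)
    return cp.checkpoint(fn, x, use_reentrant=False)
```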
| https://github.com/huggingface/peft/issues/2832 | closed | [] | 2025-10-14T03:53:10Z | 2025-12-15T08:24:03Z | 3 | NguyenRichard |
huggingface/lerobot | 2,192 | how to test PI0's output | I use this code to test PI0's output:
```python
def main():
    # Create a directory to store the training checkpoint.
    output_directory = Path("outputs/example_aloha_static_coffee")
    output_directory.mkdir(parents=True, exist_ok=True)

    # Select your device
    device = torch.device("cuda")

    # Number of offline training steps (we'll only do offline training for this example.)
    # Adjust as you prefer. 5000 steps are needed to get something worth evaluating.
    training_steps = 500
    log_freq = 1

    # When starting from scratch (i.e. not from a pretrained policy), we need to specify 2 things
    # before creating the policy:
    #   - input/output shapes: to properly size the policy
    #   - dataset stats: for normalization and denormalization of inputs/outputs
    dataset_metadata = LeRobotDatasetMetadata("lerobot/aloha_static_coffee")
    print(dataset_metadata.features.keys())
    features = dataset_to_policy_features(dataset_metadata.features)
    output_features = {key: ft for key, ft in features.items() if ft.type is FeatureType.ACTION}
    input_features = {key: ft for key, ft in features.items() if key not in output_features}

    # Policies are initialized with a configuration class, in this case `PI0Config`. For this example,
    # we'll just use the defaults, so no arguments other than input/output features need to be passed.
    cfg = PI0Config(input_features=input_features, output_features=output_features)
    print(cfg)

    # We can now instantiate our policy with this config and the dataset stats.
    policy = PI0Policy(cfg)
    policy.train()
    policy.to(device)
    preprocessor, postprocessor = make_pre_post_processors(cfg, dataset_stats=dataset_metadata.stats)

    # We can then instantiate the dataset.
    dataset = LeRobotDataset("lerobot/aloha_static_coffee")

    # Take a single sample to experiment with.
    state = dataset[20]["observation.state"]
    image_cam_high = dataset[20]["observation.images.cam_high"]
    image_cam_left_wrist = dataset[20]["observation.images.cam_left_wrist"]
    image_cam_low = dataset[20]["observation.images.cam_low"]
    image_cam_right_wrist = dataset[20]["observation.images.cam_right_wrist"]
    effort = dataset[20]["observation.effort"]

    state = state.unsqueeze(0).to(device)
    image_cam_high = image_cam_high.unsqueeze(0).to(device)
    image_cam_left_wrist = image_cam_left_wrist.unsqueeze(0).to(device)
    image_cam_low = image_cam_low.unsqueeze(0).to(device)
    image_cam_right_wrist = image_cam_right_wrist.unsqueeze(0).to(device)
    effort = effort.unsqueeze(0).to(device)

    print("State size: ", state.size())
    print("Image size: ", image_cam_high.size())
    print("Effort size: ", effort.size())

    observation = {
        "observation.state": state,
        "observation.images.cam_high": image_cam_high,
        "observation.images.cam_left_wrist": image_cam_left_wrist,
        "observation.images.cam_low": image_cam_low,
        "observation.images.cam_right_wrist": image_cam_right_wrist,
        "observation.effort": effort,
    }

    # Predict an action
    with torch.inference_mode():
        action = policy.select_action(observation)

    numpy_action = action.squeeze(0).to("cpu").numpy()
    print("Action: ", numpy_action)
```
but got an error:
```
Traceback (most recent call last):
  File "/home/wjg/trainpi0.py", line 140, in <module>
    main()
  File "/home/wjg/trainpi0.py", line 129, in main
    action = policy.select_action(observation)
  File "/data/wjg_files/anaconda3/envs/lerobot/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/data/wjg_files/lerobot/src/lerobot/policies/pi0/modeling_pi0.py", line 1144, in select_action
    actions = self.predict_action_chunk(batch)[:, : self.config.n_action_steps]
  File "/data/wjg_files/anaconda3/envs/lerobot/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/data/wjg_files/lerobot/src/lerobot/policies/pi0/modeling_pi0.py", line 1157, in predict_action_chunk
    lang_tokens, lang_masks = batch[f"{OBS_LANGUAGE_TOKENS}"], batch[f"{OBS_LANGUAGE_ATTENTION_MASK}"]
KeyError: 'observation.language.tokens'
```
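My current guess, hedged: PI0 expects tokenized language inputs (`observation.language.tokens`), which the preprocessor built by `make_pre_post_processors` is supposed to add, and I never ran the observation through it. A minimal sketch (it assumes the preprocessor is callable on the observation dict and that the language instruction lives under a "task" key; both are assumptions on my part):
```python
# Add the language instruction, then run the observation through the
# preprocessor so the language-token keys get created.
observation["task"] = "make coffee"  # hypothetical task description

observation = preprocessor(observation)

with torch.inference_mode():
    action = policy.select_action(observation)
```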
How can I solve it? | https://github.com/huggingface/lerobot/issues/2192 | open | [
"question",
"policies"
] | 2025-10-14T03:36:43Z | 2025-10-17T09:56:46Z | null | Addog666 |
vllm-project/vllm | 26,749 | [Bug]: InternVL: passing image embeddings triggers TypeError: can only concatenate tuple (not "Tensor") to tuple in get_multimodal_embeddings, and v1 sanity check then expects a sequence of 2D tensors | ### Your current environment
### 🐛 Describe the bug
## Environment
- vLLM: 0.10.2 (also reproducible on 0.10.1)
- Python: 3.11.x
- Model: `InternVL3_5-1B` (HF, `trust_remote_code=True`)
## Minimal Repro (image **embeddings** input)
```python
from vllm import LLM
import torch
llm = LLM(model="InternVL3_5-1B", trust_remote_code=True)
prompt = "USER: <image>\nWhat is this image?\nASSISTANT:"
# 3D embeddings: [B, T, H] just to illustrate the bug (B=1 here)
# H equals the LM hidden_size for the given weight; using 1024 to reproduce.
image_embeds = torch.randn(1, 16, 1024)
out = llm.generate({
"prompt": prompt,
"multi_modal_data": {"image": image_embeds}, # or {"images": image_embeds}
})
print(out[0].outputs[0].text)
```
## Actual Behavior / Stack
On 0.10.2:
```
File ".../vllm/model_executor/models/internvl.py", line 1328, in get_multimodal_embeddings
multimodal_embeddings += vision_embeddings
TypeError: can only concatenate tuple (not "Tensor") to tuple
```
If we monkey-patch around the above concat, the engine soon asserts:
```
vllm/v1/worker/utils.py", line 155, in sanity_check_mm_encoder_outputs
AssertionError: Expected multimodal embeddings to be a sequence of 2D tensors,
but got tensors with shapes [torch.Size([1, 16, 1024])] instead.
This is most likely due to incorrect implementation of the model's `get_multimodal_embeddings` method.
```
So there are **two inconsistencies**:
1) `get_multimodal_embeddings` sometimes returns a **Tensor** (3D) but the code path later concatenates assuming a **tuple** of tensors.
2) v1 expects a **sequence of 2D tensors `[T, H]`**, but the current image-embeddings path can yield a **3D** `[B, T, H]` tensor (batch dimension not flattened), which fails the sanity check.
## Expected Behavior
- Passing embeddings should **not crash**, whether provided as:
- a single 2D tensor `[T, H]` (one image), or
- a 3D tensor `[B, T, H]` (batch of images), or
- a list/tuple of 2D tensors.
- `get_multimodal_embeddings` should normalize its outputs to a **sequence of 2D tensors** to satisfy `sanity_check_mm_encoder_outputs`.
## Why this matters
InternVL supports both pixel inputs and precomputed **embeddings**. The embedding path is useful in production pipelines (pre-encode vision on different hardware, caching, etc.). Currently in 0.10.1/0.10.2 this path is broken due to type/shape inconsistencies, blocking these use-cases.
## Proposed Fix (minimal)
Normalize to a sequence of 2D tensors before concatenation. For example, in `vllm/model_executor/models/internvl.py` inside `get_multimodal_embeddings(...)`:
```diff
@@
- vision_embeddings = self._process_image_input(image_input)
- if torch.is_tensor(vision_embeddings):
- vision_embeddings = (vision_embeddings,)
- multimodal_embeddings += vision_embeddings
+ vision_embeddings = self._process_image_input(image_input)
+
+ # Normalize to tuple[Tensor[T,H], ...]
+ def _to_2d_seq(x):
+ import torch
+ if torch.is_tensor(x):
+ if x.ndim == 3: # [B, T, H] -> B * [T,H]
+ return tuple(x.unbind(0))
+ elif x.ndim == 2: # [T, H]
+ return (x,)
+ raise TypeError(f"vision embeddings must be 2D/3D, got shape {tuple(x.shape)}")
+ elif isinstance(x, (list, tuple)):
+ out = []
+ for e in x:
+ out.extend(_to_2d_seq(e))
+ return tuple(out)
+ else:
+ raise TypeError(f"unexpected type for vision embeddings: {type(x)}")
+
+ vision_embeddings = _to_2d_seq(vision_embeddings)
+ multimodal_embeddings += vision_embeddings
```
Additionally, consider accepting both `"image"` and `"images"` as modality keys (a few code paths assume `"images"`), or clarify in docs which key is canonical.
## Workarounds we tried
- Wrapping the returned tensor into a tuple (avoids the first `TypeError`), but the v1 sanity check still fails because the output remains 3D.
- Providing embeddings as a list of 2D tensors `[T, H]` works (conversion sketched below), but many upstream encoders naturally produce `[B, T, H]`, so normalizing in the model executor is safer.
- Pixel input path works and can be used as a temporary fallback, but defeats the purpose of passing precomputed embeddings.
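For reference, a minimal sketch of that list-of-2D workaround (it reuses `llm` and `prompt` from the repro above, so those are assumed to be defined):
```python
import torch

# Precomputed image embeddings, [B, T, H] (dummy values, B=1 here).
image_embeds = torch.randn(1, 16, 1024)

# Flatten the batch dimension into per-image [T, H] tensors, which is
# the shape the v1 sanity check expects.
image_embeds_2d = list(image_embeds.unbind(0))

out = llm.generate({
    "prompt": prompt,
    "multi_modal_data": {"image": image_embeds_2d},
})
```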
## Version Matrix
- ✅ Pixel input: OK on 0.10.1 and 0.10.2
- ❌ Embedding input: crashes | https://github.com/vllm-project/vllm/issues/26749 | closed | [
"bug"
] | 2025-10-14T03:01:33Z | 2025-10-14T09:36:22Z | 1 | BlueBlueFF |
huggingface/transformers | 41,554 | model.from_pretrained( . . . ) not loading needed weights/parameters | I am performing quantization of a PatchTSTForPrediction model and attempting to load a saved quantized model for testing. The model is saved using `model.save_pretrained( . . . )`. Testing proceeds perfectly when performed immediately after QAT (Hugging Face's trainer handles loading at the end of training); however, when attempting to load a saved quantized (trained) model, the error below occurs. I perform all the pre-quantization preparation so that the model contains all the necessary parameters (untrained) and then try to load the saved checkpoint. How can I force `from_pretrained( . . . )` to load ALL required weights?
`Some weights of the model checkpoint at ./checkpoints/ . . . were not used when initializing PatchTSTForPrediction: ['head.projection.calib_counter', 'head.projection.num_module_called', 'head.projection.obsrv_clipval', 'head.projection.obsrv_clipvaln', 'head.projection.obsrv_w_clipval', 'head.projection.quantize_feature.clip_val', 'head.projection.quantize_feature.clip_valn', 'head.projection.quantize_weight.clip_val', 'model.encoder.layers.0.ff.0.calib_counter', 'model.encoder.layers.0.ff.0.num_module_called', 'model.encoder.layers.0.ff.0.obsrv_clipval', 'model.encoder.layers.0.ff.0.obsrv_clipvaln', 'model.encoder.layers.0.ff.0.obsrv_w_clipval', 'model.encoder.layers.0.ff.0.quantize_feature.clip_val', 'model.encoder.layers.0.ff.0.quantize_feature.clip_valn', 'model.encoder.layers.0.ff.0.quantize_weight.clip_val', 'model.encoder.layers.0.ff.3.calib_counter', 'model.encoder.layers.0.ff.3.num_module_called', 'model.encoder.layers.0.ff.3.obsrv_clipval', 'model.encoder.layers.0.ff.3.obsrv_clipvaln', 'model.encoder.layers.0.ff.3.obsrv_w_clipval', 'model.encoder.layers.0.ff.3.quantize_feature.clip_val', 'model.encoder.layers.0.ff.3.quantize_feature.clip_valn', 'model.encoder.layers.0.ff.3.quantize_weight.clip_val', 'model.encoder.layers.0.self_attn.QBmm52.num_module_called', 'model.encoder.layers.0.self_attn.QBmm52.quantize_m1.clip_val', 'model.encoder.layers.0.self_attn.QBmm52.quantize_m1.clip_valn', 'model.encoder.layers.0.self_attn.QBmm52.quantize_m2.clip_val', 'model.encoder.layers.0.self_attn.QBmm52.quantize_m2.clip_valn', 'model.encoder.layers.0.self_attn.QBmm62.num_module_called', 'model.encoder.layers.0.self_attn.QBmm62.quantize_m1.clip_val', 'model.encoder.layers.0.self_attn.QBmm62.quantize_m1.clip_valn', 'model.encoder.layers.0.self_attn.QBmm62.quantize_m2.clip_val', 'model.encoder.layers.0.self_attn.QBmm62.quantize_m2.clip_valn', 'model.encoder.layers.0.self_attn.k_proj.calib_counter', 'model.encoder.layers.0.self_attn.k_proj.num_module_called', 'model.encoder.layers.0.self_attn.k_proj.obsrv_clipval', 'model.encoder.layers.0.self_attn.k_proj.obsrv_clipvaln', 'model.encoder.layers.0.self_attn.k_proj.obsrv_w_clipval', 'model.encoder.layers.0.self_attn.k_proj.quantize_feature.clip_val', 'model.encoder.layers.0.self_attn.k_proj.quantize_feature.clip_valn', 'model.encoder.layers.0.self_attn.k_proj.quantize_weight.clip_val', 'model.encoder.layers.0.self_attn.out_proj.calib_counter', . . .]
This IS expected if you are initializing PatchTSTForPrediction from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
This IS NOT expected if you are initializing PatchTSTForPrediction from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).`
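In case it helps, the direction I'm experimenting with (a sketch; the checkpoint path is hypothetical, and the qmodel_prep step is only indicated in a comment since its exact call depends on my setup):
```python
from safetensors.torch import load_file
from transformers import PatchTSTConfig, PatchTSTForPrediction

# Rebuild the architecture, then re-run the same pre-quantization preparation
# (qmodel_prep etc.) exactly as before QAT, so the quantizer parameters
# (clip_val, calib_counter, ...) exist before loading.
config = PatchTSTConfig.from_pretrained("./checkpoints/run")  # hypothetical path
model = PatchTSTForPrediction(config)
# ... re-run qmodel_prep(model, ...) here, as in the training script ...

# Load the trained checkpoint directly, bypassing from_pretrained's filtering.
state_dict = load_file("./checkpoints/run/model.safetensors")
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing:", missing, "unexpected:", unexpected)
```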
NB: QAT is simulated. Additional parameters are added to the model after qmodel_prep is called and QAT proceeds as normal. I am using IBM's fms-model-optimizer. | https://github.com/huggingface/transformers/issues/41554 | closed | [] | 2025-10-13T23:20:20Z | 2025-11-24T08:03:05Z | 5 | lorsonblair |
huggingface/lerobot | 2,186 | how to load pi0? | I use this code to load PI0:
```python
from lerobot.policies.pi0.modeling_pi0 import PI0Policy
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pretrained_policy_path = "lerobot/pi0_libero_base"
policy = PI0Policy.from_pretrained(pretrained_policy_path).to(device)
```
but throws an error:
```bash
Traceback (most recent call last):
File "/home/wjg/pi0.py", line 16, in <module>
policy = PI0Policy.from_pretrained(pretrained_policy_path).to(device)
File "/data/wjg_files/lerobot/src/lerobot/policies/pi0/modeling_pi0.py", line 923, in from_pretrained
model = cls(config, **kwargs)
File "/data/wjg_files/lerobot/src/lerobot/policies/pi0/modeling_pi0.py", line 872, in __init__
self.model = PI0Pytorch(config)
File "/data/wjg_files/lerobot/src/lerobot/policies/pi0/modeling_pi0.py", line 513, in __init__
self.paligemma_with_expert = PaliGemmaWithExpertModel(
File "/data/wjg_files/lerobot/src/lerobot/policies/pi0/modeling_pi0.py", line 337, in __init__
vlm_config_hf = CONFIG_MAPPING["paligemma"]()
TypeError: 'NoneType' object is not subscriptable
```
How can I load PI0? | https://github.com/huggingface/lerobot/issues/2186 | closed | [
"question",
"policies",
"python"
] | 2025-10-13T12:24:32Z | 2025-10-17T09:53:02Z | null | Addog666 |
huggingface/accelerate | 3,812 | RuntimeError during load_state | ### System Info
This issue is related to [prior issue 3101](https://github.com/huggingface/accelerate/issues/3101), but it hasn’t been fully resolved yet. The current workaround is to avoid using `safetensors`.
@Narsil suggested using [`load_file/save_file`](https://github.com/huggingface/safetensors/issues/657#issuecomment-3396215002). However, I noticed that accelerate currently uses [save_file](https://github.com/huggingface/accelerate/blob/main/src/accelerate/utils/other.py#L373) for saving but [load_model](https://github.com/huggingface/accelerate/blob/main/src/accelerate/checkpointing.py#L238) for loading.
Is there any known workaround or recommended fix for this inconsistency?
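In the meantime, I'm working around it by loading the checkpoint manually, mirroring save_file with load_file (a sketch; the path is hypothetical):
```python
from safetensors.torch import load_file

# Mirror accelerate's save_file with safetensors' load_file, instead of
# going through load_model (which is where the error surfaces for me).
state_dict = load_file("output/checkpoint/model.safetensors")  # hypothetical path
model.load_state_dict(state_dict)  # `model` is the unwrapped module
```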
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [x] My own task or dataset (give details below)
### Reproduction
Please see the [prior issue 3101](https://github.com/huggingface/accelerate/issues/3101).
### Expected behavior
Please see the [prior issue 3101](https://github.com/huggingface/accelerate/issues/3101). | https://github.com/huggingface/accelerate/issues/3812 | closed | [] | 2025-10-13T11:25:17Z | 2025-11-21T15:07:49Z | 2 | Silverster98 |
huggingface/lerobot | 2,185 | Has the lerobot data format been modified after June this year? | Has the lerobot data format been modified after June this year? The original data can no longer be used. | https://github.com/huggingface/lerobot/issues/2185 | closed | [
"question",
"dataset"
] | 2025-10-13T10:07:41Z | 2025-10-14T08:05:04Z | null | Addog666 |
huggingface/transformers | 41,539 | All POETRY operations fail on latest version 4.57.0 | ### System Info
I import transformers (always latest) in my poetry project.
I use poetry 2.1.2
After this transformers release (4.57.0) I regenerated the poetry lock with command: `poetry lock`
Then when retrying to generate the lock again after other updates - it fails with message:
`Could not parse constrains version: <emtpy>`
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Doing a simple search in the poetry.lock file, I found that the latest transformers package requires `optax (<empty>)`,
which produces this failure because poetry does not know how to parse this kind of version constraint.
Note: I am sure this is the problem because the lock works fine when transformers is commented out; it also works fine with 4.56.2 from September, in which case `optax (<empty>)` does not appear in the lock.
### Expected behavior
A developer should be able to use the latest transformers package version with poetry. | https://github.com/huggingface/transformers/issues/41539 | closed | [
"bug"
] | 2025-10-13T08:40:49Z | 2025-10-13T14:18:02Z | 1 | bfuia |
vllm-project/vllm | 26,692 | [Usage]: How to release KVCache? | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version : Could not collect
CMake version : Could not collect
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.11 | packaged by conda-forge | (main, Jun 4 2025, 14:45:31) [GCC 13.3.0] (64-bit runtime)
Python platform : Linux-5.15.0-25-generic-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : Could not collect
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA L20
GPU 1: NVIDIA L20
GPU 2: NVIDIA L20
GPU 3: NVIDIA L20
Nvidia driver version : 550.127.05
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6530
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr avx512_fp16 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 320 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-cufile-cu12==1.13.1.3
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3 | https://github.com/vllm-project/vllm/issues/26692 | open | [
"usage"
] | 2025-10-13T08:28:20Z | 2025-10-13T08:28:20Z | 0 | shenxf1205 |
huggingface/lerobot | 2,184 | How to let an episode realize it has finished the task? | I have successfully trained my real-world lerobot to do several simple tasks from human demonstrations. Say, push an object from point A to point B. I noticed that after the robot arm has finished the task, it would return to its initial pose (same as the human demonstration) and stay idle for the remainder of the episode, until the episode time runs out.
Of course, if I manually move the cup back to point A from point B before the time finishes, it would attempt to finish the job again. But I just wanted to know if there's any way the episode can finish itself, or at least yield a signal, after the first successful attempt?
I'm using lerobot_record.py with a specified policy file path. The policy is ACT.
Thank you | https://github.com/huggingface/lerobot/issues/2184 | open | [] | 2025-10-13T06:27:36Z | 2025-12-22T07:56:00Z | null | genkv |
vllm-project/vllm | 26,660 | [Usage]: Is there any way to enable beam search in online inference? | ### Your current environment
Is there any way to enable beam search in the `vllm serve` command? Or is beam search only available in the offline inference code?
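For context, offline beam search works for me with something like the following (hedged; the API has moved between versions, and the output field names are from memory):
```python
from vllm import LLM
from vllm.sampling_params import BeamSearchParams

llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")  # placeholder model

# Offline beam search: width 4, up to 64 new tokens per prompt.
params = BeamSearchParams(beam_width=4, max_tokens=64)
outputs = llm.beam_search([{"prompt": "The capital of France is"}], params)

# Each output holds the ranked beams.
for seq in outputs[0].sequences:
    print(seq.text)
```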
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/26660 | closed | [
"usage"
] | 2025-10-12T13:55:07Z | 2025-10-17T17:12:45Z | 1 | tiesanguaixia |
huggingface/transformers | 41,533 | Add_special_tokens and resize_token_embeddings result in an error | ### System Info
I want to add a few special tokens to my Qwen2.5VL model as separators, and after executing the following code, I received the following error message. I don't know how to solve this problem.
``` bash
[rank1]: Traceback (most recent call last):
[rank1]: RuntimeError: shape '[-1, 151936]' is invalid for input of size 329273399
[rank0]: Traceback (most recent call last):
[rank0]: RuntimeError: shape '[-1, 151936]' is invalid for input of size 217038339
[rank3]: Traceback (most recent call last):
[rank3]: RuntimeError: shape '[-1, 151936]' is invalid for input of size 116936799
[rank2]: Traceback (most recent call last):
[rank2]: RuntimeError: shape '[-1, 151936]' is invalid for input of size 215673318
Traceback (most recent call last):
File "/home/hk-project-p0022189/tum_yvc3016/miniconda3/envs/qwen2_5-VL/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/init.py", line 355, in wrapper
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
qwenvl/train/train_livecc.py FAILED
Failures:
<NO_OTHER_FAILURES>
Root Cause (first observed failure):
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
```
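For what it's worth, the standard pattern I believe is needed here is to resize the embeddings after adding tokens, so the lm_head/vocab shapes match. A minimal sketch, not specific to Qwen2.5-VL (model name is a placeholder):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("some/model")   # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained("some/model")

num_added = tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<sep_a>", "<sep_b>"]}  # example separators
)
if num_added > 0:
    # Grow the input embeddings and the tied lm_head to the new vocab size;
    # skipping this is one common cause of shape errors like the one above.
    model.resize_token_embeddings(len(tokenizer))
```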
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
``` python
import os
import logging
import pathlib
import torch
import transformers
import json
from typing import Dict
import shutil
import sys
from pathlib import Path

project_root = Path(__file__).parent.parent.parent
sys.path.append(str(project_root))
import qwenvl.train.trainer
from trainer import replace_qwen2_vl_attention_class
from transformers import (
    Qwen2VLForConditionalGeneration,
)
from model_code.modeling_qwen2_5_vl import Qwen2_5_VLForConditionalGeneration
# from qwenvl.data.data_qwen import make_supervised_data_module
from qwenvl.data.lmm_dataset_for_batch import make_supervised_data_module
from qwenvl.train.argument import (
    ModelArguments,
    DataArguments,
    TrainingArguments,
)
from transformers import AutoTokenizer, AutoProcessor, Qwen2VLImageProcessor, Trainer

local_rank = None
os.environ["TOKENIZERS_PARALLELISM"] = "false"

def rank0_print(*args):
    if local_rank == 0:
        print(*args)

def add_special_tokens_safely(tokenizer, new_tokens):
    """
    Safely add new special tokens to the tokenizer while preserving the
    existing additional_special_tokens.
    Args:
        tokenizer: Hugging Face tokenizer
        model: the corresponding language model
        new_tokens: list of str, the new tokens to add
    Returns:
        bool: whether any new tokens were added
    """
    # All tokens currently in the vocabulary
    current_vocab = set(tokenizer.get_vocab().keys())
    # Keep only the tokens that actually need to be added
    tokens_to_add = [t for t in new_tokens if t not in current_vocab]
    if not tokens_to_add:
        rank0_print("🟢 All specified tokens already exist in the vocabulary; nothing to add.")
        return False
    # Existing additional_special_tokens (e.g. <image>, <ref>, ...)
    orig_special_tokens = tokenizer.special_tokens_map.get(
        "additional_special_tokens", []
    )
    # Merge: keep the existing ones and append the new ones
    updated_special_tokens = orig_special_tokens + [
        t for t in tokens_to_add if t not in orig_special_tokens
    ]
    rank0_print(f"📌 Adding new tokens: {tokens_to_add}")
    rank0_print(f"🔧 Total additional_special_tokens after update: {len(updated_special_tokens)}")
    # Use the add_special_tokens API (it deduplicates automatically)
    num_added = tokenizer.add_special_tokens(
        {"additional_special_tokens": updated_special_tokens}
    )
    if num_added > 0:
        rank0_print(f"✅ Successfully added {num_added} new tokens to the vocabulary")
    return num_added > 0

def safe_save_model_for_hf_trainer(trainer: transformers.Trainer, output_dir: str):
    """Collects the state dict and dump to disk."""
    if trainer.deepspeed:
        torch.cuda.synchronize()
        trainer.save_model(output_dir)
        return
    state_dict = trainer.model.state_dict()
    if trainer.args.should_save:
        cpu_state_dict = {key: value.cpu() for key, value in state_dict.items()}
        del state_dict
        trainer._save(output_dir, state_dict=cpu_state_dict)  # noqa

def set_model(model_args, model):
    if model_args.tune_mm_vision:
        for n, p in model.visual.named_parameters():
            p.requires_grad = True
    else:
        for n, p in model.visual.named_parameters():
            p.requires_grad = False
    if model_args.tune_mm_mlp:
        for n, p in model.visual.merger.named_parameters():
            p.requires_grad = True
    else:
        for n, p in model.visual.merger.named_parameters():
            p.requires_grad = False
    if model_args.tune_mm_llm:
        for n, p in model.model.named_parameters():
            p.requires_grad = True
        model.lm_head.requires_grad = True
    else:
        for n, p in model.model.named_parameters():
p.requir | https://github.com/huggingface/transformers/issues/41533 | closed | [
"bug"
] | 2025-10-12T13:50:40Z | 2025-10-13T14:09:29Z | 3 | jialiangZ |
huggingface/lerobot | 2,181 | How to change SmolVLA action_chunk_size? | I want to change `action_chunk_size` from 50 to 10. I ran the command like this:
```bash
python lerobot/scripts/train.py \
    --policy.path=lerobot/smolvla_base \
    --dataset.repo_id=Datasets/grasp_put \
    --batch_size=16 \
    --steps=40000 \
    --output_dir=outputs/train/vla_chunk10 \
    --job_name=smolvla_training \
    --policy.device=cuda \
    --policy.push_to_hub=false \
    --policy.action_chunk_size=10
```
but it doesn't work:
```
train.py: error: unrecognized arguments: --action_chunk_size=10
```
And yet the terminal help appears to list this parameter:
```
usage: train.py [-h] [--policy.action_chunk_size str]
```
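A hedged guess I'm about to try (it assumes the SmolVLA config actually names these fields `chunk_size` and `n_action_steps`, which I'm not certain of):
```bash
python lerobot/scripts/train.py \
    --policy.path=lerobot/smolvla_base \
    --dataset.repo_id=Datasets/grasp_put \
    --policy.chunk_size=10 \
    --policy.n_action_steps=10 \
    --policy.push_to_hub=false
```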
How should I resolve this problem? | https://github.com/huggingface/lerobot/issues/2181 | closed | [
"question",
"policies",
"python"
] | 2025-10-12T13:29:35Z | 2025-10-17T11:25:55Z | null | CCCY-0304 |
huggingface/transformers | 41,532 | where is examples/rag from original paper? | ### System Info
https://arxiv.org/pdf/2005.11401 mentions https://github.com/huggingface/transformers/blob/main/examples/rag but it is not there. Please add a redirect if possible.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Go to https://github.com/huggingface/transformers/blob/main/examples/rag
### Expected behavior
Some example instead of a 404 page.
"bug"
] | 2025-10-12T13:17:53Z | 2025-10-17T09:34:15Z | null | IgorKasianenko |
vllm-project/vllm | 26,653 | [Usage]: Qwen3VL image coordinates issue | ### Your current environment
Hi, I found that with the same image and the same prompt, qwen3vl served with vLLM always returns wrong coordinates.
This is the vLLM response:
Response: "{\"click_type\": \"left_click\", \"coordinate\": [815, 961]}"
<img width="1093" height="549" alt="Image" src="https://github.com/user-attachments/assets/f55cb990-03a1-4ac7-912b-e2796c8b854a" />
As you can see when visualized, the x offset returned by vLLM is far off.
For comparison, this is the official Qwen3 output, from the same A3B model.
Was the input cropped or something?
My server side just uses:
```
vllm serve checkpoints/Qwen3-VL-30B-A3B-Instruct \
--dtype auto --max-model-len 4096 \
--api-key token-abc123 \
--gpu_memory_utilization 0.9 \
--trust-remote-code \
--port 8000 \
--served-model-name 'qwen3-vl' \
--max-model-len 8k \
--limit-mm-per-prompt '{"video": 3}' \
--enable-auto-tool-choice \
--tool-call-parser hermes
```
**Note**: when visualizing, I have already mapped the coordinates to image space; here I just compare the raw output, which is still heavily biased on the x-axis.
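For reference, the mapping I apply (a sketch; it assumes Qwen3-VL, like Qwen2.5-VL, emits coordinates in the smart-resized image space, and that `smart_resize` is importable as shown, which may not hold on every version):
```python
from PIL import Image
from qwen_vl_utils import smart_resize  # assumed import path; may live under qwen_vl_utils.vision_process

img = Image.open("screenshot.png")  # placeholder image path
w, h = img.size

# Dimensions the processor would resize to (factor and argument order from memory).
resized_h, resized_w = smart_resize(h, w, factor=28)

# Map a model-space point back into original image space.
x_model, y_model = 815, 961
x_orig = x_model * w / resized_w
y_orig = y_model * h / resized_h
```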
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/26653 | closed | [
"usage"
] | 2025-10-12T07:02:29Z | 2025-10-13T03:56:53Z | 2 | lucasjinreal |
huggingface/accelerate | 3,811 | ValueError: Could not find the transformer layer class QwenImageTransformerBlock in the model. | Hi, I am trying to fine-tune qwen-image-edit using accelerate in FSDP mode. I want to wrap the ``QwenImageTransformerBlock`` in the transformer and ``Qwen2_5_VLVisionBlock``/``Qwen2_5_VLDecoderLayer`` in the text_encoder. I set the environment params:
```
def set_fsdp_env():
os.environ["ACCELERATE_USE_FSDP"] = 'true'
os.environ["FSDP_AUTO_WRAP_POLICY"] = 'TRANSFORMER_BASED_WRAP'
os.environ["FSDP_BACKWARD_PREFETCH"] = 'BACKWARD_PRE'
os.environ["FSDP_TRANSFORMER_CLS_TO_WRAP"] = 'QwenImageTransformerBlock,Qwen2_5_VLVisionBlock,Qwen2_5_VLDecoderLayer'
os.environ["FSDP_CPU_RAM_EFFICIENT_LOADING"] = 'false'
```
and prepare the two models
```
transformer = accelerator.prepare(transformer)
text_encoder = accelerator.prepare(text_encoder)
```
Finally, I encountered the error raised from ``text_encoder = accelerator.prepare(text_encoder)``
```
ValueError: Could not find the transformer layer class QwenImageTransformerBlock in the model.
```
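One workaround I'm about to try is swapping the wrap-class list on the FSDP plugin between the two `prepare` calls (a sketch; I'm not certain `transformer_cls_names_to_wrap` is the exact attribute name on the plugin):
```python
plugin = accelerator.state.fsdp_plugin

# Wrap classes that exist in the diffusion transformer only.
plugin.transformer_cls_names_to_wrap = ["QwenImageTransformerBlock"]
transformer = accelerator.prepare(transformer)

# Switch to the classes that exist in the text encoder before preparing it.
plugin.transformer_cls_names_to_wrap = [
    "Qwen2_5_VLVisionBlock",
    "Qwen2_5_VLDecoderLayer",
]
text_encoder = accelerator.prepare(text_encoder)
```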
How can I resolve this problem? Thanks!
| https://github.com/huggingface/accelerate/issues/3811 | closed | [] | 2025-10-11T10:13:14Z | 2025-11-22T15:06:54Z | 2 | garychan22 |
huggingface/lerobot | 2,172 | Add support for remote GPUs (with async inference!) | Hello,
I'm a student in a non-first-world country, and unfortunately I don't own a PC with an NVIDIA GPU; a decent setup costs about $1200. On the other hand, it costs only $0.12-0.24/hr to rent RTX 4090 instances, so it's pretty cheap to simply rent a computer whenever I need to collect data or train.
But to my knowledge LeRobot, unlike e.g. most LLM or vision trainers, runs only locally. I haven't tried it, but given Async Inference it should be very feasible to stream to a local browser from a remote instance, in particular for data collection.
I may be able to PR this one, it should be straightforward.
Cheers. | https://github.com/huggingface/lerobot/issues/2172 | open | [
"enhancement",
"question"
] | 2025-10-11T08:49:32Z | 2025-12-19T06:35:21Z | null | MRiabov |
huggingface/transformers | 41,518 | Add Structured Prompt Templates Registry for LLM / VLM / Diffusion Tasks | ### Feature request
Introduce transformers.prompt_templates — a YAML-based registry and accessor API:
```
from transformers import PromptTemplates
PromptTemplates.get("summarization") # "Summarize the following text:"
PromptTemplates.list_tasks() # ["summarization","vqa","ocr",...]
```
- Templates stored as yaml/json under src/transformers/prompt_templates/templates/.
- Accessor + validation in registry.py.
- Optional CLI command transformers-cli list-prompts.
- Pipelines can import a template by task name instead of hard-coding.
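A minimal sketch of what `registry.py` could look like (my own illustration of the proposal; the file layout, class internals, and PyYAML dependency are assumptions):
```python
# src/transformers/prompt_templates/registry.py (sketch)
from pathlib import Path
import yaml  # assumes PyYAML is available

_TEMPLATE_DIR = Path(__file__).parent / "templates"

class PromptTemplates:
    _cache: dict[str, str] = {}

    @classmethod
    def _load(cls) -> None:
        if cls._cache:
            return
        for path in _TEMPLATE_DIR.glob("*.yaml"):
            # Each file maps task names to template strings.
            cls._cache.update(yaml.safe_load(path.read_text()))

    @classmethod
    def get(cls, task: str) -> str:
        cls._load()
        if task not in cls._cache:
            raise KeyError(f"No prompt template registered for task '{task}'")
        return cls._cache[task]

    @classmethod
    def list_tasks(cls) -> list[str]:
        cls._load()
        return sorted(cls._cache)
```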
### Motivation
Every pipeline and model today embeds its own prompt strings (e.g., summarization, OCR, VQA).
This duplication makes results inconsistent and hard to benchmark.
A central registry of task-specific prompt templates would unify defaults and enable easy community additions.
### Your contribution
I’ll implement the registry module, add unit tests and docs, and migrate 1–2 pipelines (summarization / captioning) to use it.
Contributor: [@Aki-07](https://github.com/Aki-07) | https://github.com/huggingface/transformers/issues/41518 | open | [
"Feature request"
] | 2025-10-11T08:10:20Z | 2025-10-13T15:06:20Z | 2 | Aki-07 |
vllm-project/vllm | 26,616 | [Usage]: How to enable MTP when using Qwen3-Next in local inference (not vllm serve) | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.2 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : Could not collect
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.11 | packaged by Anaconda, Inc. | (main, Jun 5 2025, 13:09:17) [GCC 11.2.0] (64-bit runtime)
Python platform : Linux-4.18.0-2.6.8.kwai.x86_64-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 11.8.89
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
Nvidia driver version : 550.54.14
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7V13 64-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
Stepping: 1
BogoMIPS: 4890.88
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat umip vaes vpclmulqdq rdpid
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 48 MiB (96 instances)
L3 cache: 384 MiB (12 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23
NUMA node1 CPU(s): 24-47
NUMA node2 CPU(s): 48-71
NUMA node3 CPU(s): 72-95
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, STIBP: disabled
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cudnn-frontend==1.14.1
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-cufile-cu12==1.13.1.3
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3] nvidia-cusparse-cu12==12.5.8.93
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-ml-py==13.580.82
[pip3] nvidia-nccl-cu12==2.27.3
[pip3] nvidia-nvjitlink-cu12==12.8.93
[pip3] nvidia-nvtx-cu12==12.8.90
[pip3] pyzmq==27.1.0
[pip3] torch==2.8.0
[pip3] torchaudio==2.8.0
[pip3] torchvision==0.23.0
[pip3] transformers==4.57.0
[pip3] triton==3.4.0
[conda] nu | https://github.com/vllm-project/vllm/issues/26616 | open | [
"usage"
] | 2025-10-11T03:58:14Z | 2025-10-16T08:45:35Z | 1 | Kimagure7 |
vllm-project/vllm | 26,614 | [Usage]: attn_metadata.seq_lens is not equal to attn_metadata.num_actual_tokens | ### Your current environment
```
Collecting environment information...
uv is set
==============================
System Info
==============================
OS : Ubuntu 20.04.6 LTS (x86_64)
GCC version : (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version : Could not collect
CMake version : version 3.16.3
Libc version : glibc-2.31
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.11 (main, Jul 23 2025, 00:34:44) [Clang 20.1.4 ] (64-bit runtime)
Python platform : Linux-5.4.0-216-generic-x86_64-with-glibc2.31
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : Could not collect
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA H20
GPU 1: NVIDIA H20
GPU 2: NVIDIA H20
GPU 3: NVIDIA H20
GPU 4: NVIDIA H20
GPU 5: NVIDIA H20
GPU 6: NVIDIA H20
GPU 7: NVIDIA H20
Nvidia driver version : 555.42.06
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 224
On-line CPU(s) list: 0-223
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Platinum 8480+
Stepping: 8
Frequency boost: enabled
CPU MHz: 900.000
CPU max MHz: 2001.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Virtualization: VT-x
L1d cache: 5.3 MiB
L1i cache: 3.5 MiB
L2 cache: 224 MiB
L3 cache: 210 MiB
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear pconfig flush_l1d arch_capabilities
==============================
Versions of relevant libraries
==============================
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia- | https://github.com/vllm-project/vllm/issues/26614 | open | [
"usage"
] | 2025-10-11T03:35:38Z | 2025-10-11T03:36:31Z | 0 | betacatZ |
vllm-project/vllm | 26,612 | [Usage]: qwen3vl 30 A3B fails with an error when starting the vllm server | A_A800-SXM4-80GB.json']
A_A800-SXM4-80GB.json']
(Worker pid=1939690) INFO 10-11 10:42:13 [monitor.py:34] torch.compile takes 85.33 s in total
(Worker pid=1939690) INFO 10-11 10:42:14 [gpu_worker.py:298] Available KV cache memory: 13.69 GiB
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] EngineCore failed to start.
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] Traceback (most recent call last):
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 699, in run_engine_core
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] engine_core = EngineCoreProc(*args, **kwargs)
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 498, in __init__
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] super().__init__(vllm_config, executor_class, log_stats,
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 92, in __init__
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] self._initialize_kv_caches(vllm_config)
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 199, in _initialize_kv_caches
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] kv_cache_configs = get_kv_cache_configs(vllm_config, kv_cache_specs,
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/core/kv_cache_utils.py", line 1243, in get_kv_cache_configs
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] check_enough_kv_cache_memory(vllm_config, kv_cache_spec_one_worker,
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/core/kv_cache_utils.py", line 716, in check_enough_kv_cache_memory
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] raise ValueError(
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] ValueError: To serve at least one request with the models's max seq len (262144), (24.00 GiB KV cache is needed, which is larger than the available KV cache memory (13.69 GiB). Based on the available memory, the estimated maximum model length is 149520. Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine.
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:17 [multiproc_executor.py:154] Worker proc VllmWorker-0 died unexpectedly, shutting down executor.
(EngineCore_DP0 pid=1937911) Process EngineCore_DP0:
(EngineCore_DP0 pid=1937911) Traceback (most recent call last):
(EngineCore_DP0 pid=1937911) File "/home/ma-user/work/anaconda3_flash_attn/envs/qwen3_vl/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
(EngineCore_DP0 pid=1937911) self.run()
(EngineCore_DP0 pid=1937911) File "/home/ma-user/work/anaconda3_flash_attn/envs/qwen3_vl/lib/python3.12/multiprocessing/process.py", line 108, in run
(EngineCore_DP0 pid=1937911) self._target(*self._args, **self._kwargs)
(EngineCore_DP0 pid=1937911) File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 712, in run_engine_core
(EngineCore_DP0 pid=1937911) raise e
(EngineCore_DP0 pid=1937911) File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 699, in run_engine_core
(EngineCore_DP0 pid=1937911) engine_core = EngineCoreProc(*args, **kwargs)
(EngineCore_DP0 pid=1937911) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=1937911) File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 498, in __init__
(EngineCore_DP0 pid=1937911) super().__init__(vllm_config, executor_class, log_stats,
(EngineCore_DP0 pid=1937911) File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 92, in __init__
(EngineCore_DP0 pid=1937911) self._initialize_kv_caches(vllm_config)
(EngineCore_DP0 pid=1937911) File "/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 199, in _initialize_kv_caches
(EngineCore_DP0 pid=1937911) kv_cache_configs = get_kv_cache_configs(vllm_config, kv_cache_specs,
(EngineCore_DP0 pid=1937911) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Engin | https://github.com/vllm-project/vllm/issues/26612 | closed | [
"usage"
] | 2025-10-11T02:45:20Z | 2025-10-16T23:00:39Z | 1 | renkexuan369 |
huggingface/lerobot | 2,171 | Data diffusion and data format conversion | 1. Can datasets collected in LeRobot format be shared and distributed?
2. Can data formats between different Lerobot versions be converted? I noticed that the data format collected in version 0.2.0 is different from the latest data format.
Thank you! | https://github.com/huggingface/lerobot/issues/2171 | open | [
"question",
"dataset"
] | 2025-10-11T02:16:55Z | 2025-10-17T02:02:36Z | null | FALCONYU |
vllm-project/vllm | 26,607 | [Bug]: Since version 0.9.2 comes with nccl built-in, using PCIE causes sys errors. How to disable nccl in vllm for versions after 0.9.2? | ### Your current environment
<img width="833" height="138" alt="Image" src="https://github.com/user-attachments/assets/a42c415b-8c5b-4698-aa6f-879edc44d512" />
### 🐛 Describe the bug
sh 06_startVllmAPI.sh
INFO 09-30 10:30:16 [__init__.py:216] Automatically detected platform cuda.
(APIServer pid=1599676) INFO 09-30 10:30:17 [api_server.py:1896] vLLM API server version 0.10.2
(APIServer pid=1599676) INFO 09-30 10:30:17 [utils.py:328] non-default args: {'port': 6006, 'model': './autodl-tmp/modelscope/models/GeoGPT/Qwen2.5-72B-GeoGPT', 'tokenizer': './autodl-tmp/modelscope/models/GeoGPT/Qwen2.5-72B-GeoGPT', 'trust_remote_code': True, 'dtype': 'bfloat16', 'served_model_name': ['Qwen2.5-72B-GeoGPT'], 'tensor_parallel_size': 8, 'gpu_memory_utilization': 0.5}
(APIServer pid=1599676) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
(APIServer pid=1599676) INFO 09-30 10:30:24 [__init__.py:742] Resolved architecture: Qwen2ForCausalLM
(APIServer pid=1599676) `torch_dtype` is deprecated! Use `dtype` instead!
(APIServer pid=1599676) INFO 09-30 10:30:24 [__init__.py:1815] Using max model len 131072
(APIServer pid=1599676) INFO 09-30 10:30:24 [scheduler.py:222] Chunked prefill is enabled with max_num_batched_tokens=2048.
INFO 09-30 10:30:29 [__init__.py:216] Automatically detected platform cuda.
(EngineCore_DP0 pid=1600151) INFO 09-30 10:30:31 [core.py:654] Waiting for init message from front-end.
(EngineCore_DP0 pid=1600151) INFO 09-30 10:30:31 [core.py:76] Initializing a V1 LLM engine (v0.10.2) with config: model='./autodl-tmp/modelscope/models/GeoGPT/Qwen2.5-72B-GeoGPT', speculative_config=None, tokenizer='./autodl-tmp/modelscope/models/GeoGPT/Qwen2.5-72B-GeoGPT', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=auto, tensor_parallel_size=8, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=Qwen2.5-72B-GeoGPT, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output","vllm.mamba_mixer2","vllm.mamba_mixer","vllm.short_conv","vllm.linear_attention","vllm.plamo2_mamba_mixer","vllm.gdn_attention"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"cudagraph_mode":1,"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"pass_config":{},"max_capture_size":512,"local_cache_dir":null}
(EngineCore_DP0 pid=1600151) WARNING 09-30 10:30:31 [multiproc_worker_utils.py:273] Reducing Torch parallelism from 64 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
(EngineCore_DP0 pid=1600151) INFO 09-30 10:30:31 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3, 4, 5, 6, 7], buffer_handle=(8, 16777216, 10, 'psm_7e0498ff'), local_subscribe_addr='ipc:///tmp/33a7ec3b-72b3-4984-9ed3-6fc1fb572c4a', remote_subscribe_addr=None, remote_addr_ipv6=False)
INFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.
INFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.
INFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.
INFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.
INFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.
INFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.
INFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.
INFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.
INFO 09-30 10:30:40 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_1413bf45'), local_subscribe_addr='ipc:///tmp/a | https://github.com/vllm-project/vllm/issues/26607 | open | [
"bug"
] | 2025-10-11T01:48:50Z | 2025-10-17T01:09:03Z | 0 | tina0852 |
huggingface/hf-hub | 131 | InvalidCertificate and how to fix it | I am trying to install a DuckDB extension written in Rust (https://github.com/martin-conur/quackformers) that uses the library.
During the install, I am getting a
```
HfHub(RequestError(Transport(Transport { kind: ConnectionFailed, message: Some("tls connection init failed"), url: Some(Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("huggingface.co")), port: None, path: "/sentence-transformers/all-MiniLM-L6-v2/resolve/main/tokenizer.json", query: None, fragment: None }), source: Some(Custom { kind: InvalidData, error: InvalidCertificate(UnknownIssuer) }) })))
```
The file can be accessed from my environment via curl.
The file can be accessed from DuckDB using their `httpfs` extension which is written in C/C++.
I am working in an environment with a very strict enterprise proxy, and this is most likely what's causing the issue (I have zero issues when running the same commands at home).
1. can the behavior of HfHub with respect to proxy be modified using env variables?
2. can the behavior of HfHub with respect to TLS certificates be modified using env variables?
3. where can I find the default value(s) for the proxy settings and the location of certs used by the library
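Things I plan to try in the meantime (fully hedged; I don't know yet whether hf-hub's HTTP stack honors any of these):
```bash
# Standard proxy variables many HTTP clients respect (may be ignored here)
export HTTPS_PROXY=http://proxy.corp.example:8080   # hypothetical proxy URL
export HTTP_PROXY=http://proxy.corp.example:8080

# Point at the corporate CA bundle, in case native/system certs are read
export SSL_CERT_FILE=/etc/ssl/certs/corp-ca.pem     # hypothetical path
```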
References:
- bug report for quackformer = https://github.com/martin-conur/quackformers/issues/7
| https://github.com/huggingface/hf-hub/issues/131 | open | [] | 2025-10-10T14:42:12Z | 2025-10-10T18:18:28Z | null | sahuguet |
vllm-project/vllm | 26,585 | [Usage]: use vllm embedding to extract last token hidden states? | ### Your current environment
```text
/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
import pynvml # type: ignore[import]
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version : 14.0.0-1ubuntu1.1
CMake version : version 3.21.0
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.11 (main, Jun 4 2025, 08:56:18) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-5.10.134-16.3.al8.x86_64-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : Could not collect
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration : GPU 0: NVIDIA H20-3e
Nvidia driver version : 570.133.20
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) PLATINUM 8575C
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm uintr md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 640 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] flashinfer-pytho | https://github.com/vllm-project/vllm/issues/26585 | closed | [
"usage"
] | 2025-10-10T13:01:42Z | 2025-12-15T06:54:05Z | 2 | rxqy |
vllm-project/vllm | 26,582 | [Bug]: which triton-kernels version for MXFP4 Triton backend? | ### Your current environment
vllm v0.11.0 installed via `uv pip install vllm --torch-backend=auto`
triton + triton-kernels at different commits installed from source
### 🐛 Describe the bug
**Which triton + triton-kernels version does one have to install to run GPT-OSS with the MXFP4 Triton backend?**
No matter which version I try, I always get an error `Failed to import Triton kernels. Please make sure your triton version is compatible.`
Clearly, the latest triton-kernels will not work since the code in `vllm.model_executor.layers.fused_moe.gpt_oss_triton_kernels_moe` tries to import from `triton_kernels.routing`, but `triton_kernels.routing` has been deprecated (cf. https://github.com/triton-lang/triton/commit/30ede52aa2aecfd2ab3d6672ed21bbf4eb6438b3).
But also with older versions I get errors like `ImportError: cannot import name 'triton_key' from 'triton.compiler.compiler` or `Error: No module named 'triton.language.target_info`.
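For reference, this is roughly how I have been probing compatibility (a minimal sketch; the import path is the one used by vLLM's MXFP4 code):

```python
# minimal probe of the import vLLM's MXFP4 Triton backend performs
try:
    import triton_kernels.routing  # removed upstream in triton-lang/triton@30ede52
    print("triton_kernels.routing is importable")
except ImportError as err:
    print(f"incompatible triton-kernels build: {err}")
```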
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/26582 | closed | [
"bug"
] | 2025-10-10T11:51:59Z | 2025-12-12T20:30:06Z | 8 | matkle |
huggingface/lerobot | 2,162 | [Question] How to suppress verbose Svt[info] logs from video encoding during save_episode()? | Hi, thank you for this fantastic library!
I am currently using lerobot (version 0.3.3) to record and save robotics data. When I use the `dataset.save_episode()` method, I get a large number of verbose log messages prefixed with Svt[info]:
```shell
Svt[info]: ------------------------------------------- | 0/1 [00:00<?, ?ba/s]
Svt[info]: SVT [version]: SVT-AV1 Encoder Lib v3.0.0
Svt[info]: SVT [build] : GCC 14.2.1 20250110 (Red Hat 14.2.1-7) 64 bit
Svt[info]: LIB Build date: Jul 3 2025 03:14:07
Svt[info]: -------------------------------------------
Svt[info]: Level of Parallelism: 5
Svt[info]: Number of PPCS 140
Svt[info]: [asm level on system : up to avx2]
Svt[info]: [asm level selected : up to avx2]
Svt[info]: -------------------------------------------
Svt[info]: SVT [config]: main profile tier (auto) level (auto)
Svt[info]: SVT [config]: width / height / fps numerator / fps denominator : 256 / 256 / 30 / 1
Svt[info]: SVT [config]: bit-depth / color format : 8 / YUV420
Svt[info]: SVT [config]: preset / tune / pred struct : 8 / PSNR / random access
Svt[info]: SVT [config]: gop size / mini-gop size / key-frame type : 2 / 32 / key frame
Svt[info]: SVT [config]: BRC mode / rate factor : CRF / 30
Svt[info]: SVT [config]: AQ mode / variance boost : 2 / 0
Svt[info]: SVT [config]: sharpness / luminance-based QP bias : 0 / 0
Svt[info]: Svt[info]: -------------------------------------------
Map: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 712/712 [00:00<00:00, 4740.68 examples/s]
Creating parquet from Arrow format: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 738.56ba/s]
```
While these logs are informative, they clutter the console output, especially when saving a large number of episodes in a loop. I would like to find a way to suppress them.
I tried redirecting stdout and stderr:
```python
import os
from contextlib import redirect_stdout, redirect_stderr
with open(os.devnull, 'w') as f_null:
    with redirect_stderr(f_null), redirect_stdout(f_null):
        dataset.save_episode()
```
But it doesn't work.
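My guess is that the encoder is a C library writing straight to file descriptor 2, so `redirect_stderr` (which only swaps Python's `sys.stderr`) never sees it. Would an OS-level redirect like this sketch be the recommended approach?

```python
import os
from contextlib import contextmanager

@contextmanager
def suppress_native_stderr():
    """Silence writes made directly to file descriptor 2 (C-level stderr)."""
    devnull = os.open(os.devnull, os.O_WRONLY)
    saved = os.dup(2)
    os.dup2(devnull, 2)
    try:
        yield
    finally:
        os.dup2(saved, 2)  # restore the original stderr
        os.close(saved)
        os.close(devnull)

with suppress_native_stderr():
    dataset.save_episode()  # `dataset` as in the snippet above
```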
Any guidance on how to achieve a quieter output would be appreciated. | https://github.com/huggingface/lerobot/issues/2162 | closed | [
"question",
"dataset"
] | 2025-10-10T08:56:52Z | 2025-10-13T05:43:01Z | null | zxytql |
huggingface/transformers | 41,494 | Incorrect tokenizer created for gemma gguf files | ### System Info
- `transformers` version: 4.57.0
- Platform: Linux-5.15.0-144-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.3
- Accelerate version: 0.34.2
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.3.1+cu121 (NA)
- Tensorflow version (GPU?): 2.17.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: NA
### Who can help?
@yijun-lee
@Isotr0py
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer
t1 = AutoTokenizer.from_pretrained("unsloth/gemma-3-4b-it-GGUF", gguf_file="gemma-3-4b-it-Q8_0.gguf")
x1 = t1.tokenize("<bos>What is eunoia?")
print(f"{x1=}")
t2 = AutoTokenizer.from_pretrained("google/gemma-3-4b-it")
x2 = t2.tokenize("<bos>What is eunoia?")
print(f"{x2=}")
```
### Expected behavior
The printouts of x1 and x2 should be the same. However,
```
x1=['<bos>', 'Wh', 'at', '▁is', '▁eu', 'no', 'ia', '?']
x2=['<bos>', 'What', '▁is', '▁e', 'uno', 'ia', '?']
```
Looking more into it, the tokenizer created for HF model (t2) is BPE while the tokenizer created for the GGUF model (t1) is Unigram. | https://github.com/huggingface/transformers/issues/41494 | closed | [
"bug"
] | 2025-10-09T23:27:25Z | 2025-11-29T08:02:57Z | 4 | amychen85 |
vllm-project/vllm | 26,530 | [Bug]: Fix CVE-2023-48022 in docker image | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
Not required for this.
</details>
### 🐛 Describe the bug
The vllm/vllm-openai:v0.10.2 image seems to be affected by [CVE-2023-48022](https://avd.aquasec.com/nvd/2023/cve-2023-48022/), a **Critical** CVE in `ray` (see scan results below). Is there any plan to address this?
```
grype vllm/vllm-openai:v0.10.2 --scope all-layers
```
```
NAME INSTALLED FIXED IN TYPE VULNERABILITY SEVERITY EPSS RISK
ray 2.49.1 python GHSA-6wgj-66m2-xxp2 Critical 91.9% (99th) 86.4
libgssapi-krb5-2 1.19.2-2ubuntu0.4 1.19.2-2ubuntu0.5 deb CVE-2024-3596 Medium 24.6% (95th) 12.3
libk5crypto3 1.19.2-2ubuntu0.4 1.19.2-2ubuntu0.5 deb CVE-2024-3596 Medium 24.6% (95th) 12.3
libkrb5-3 1.19.2-2ubuntu0.4 1.19.2-2ubuntu0.5 deb CVE-2024-3596 Medium 24.6% (95th) 12.3
libkrb5support0 1.19.2-2ubuntu0.4 1.19.2-2ubuntu0.5 deb CVE-2024-3596 Medium 24.6% (95th) 12.3
python3-pip 22.0.2+dfsg-1ubuntu0.6 22.0.2+dfsg-1ubuntu0.7 deb CVE-2023-32681 Medium 6.3% (90th) 3.1
libaom3 3.3.0-1ubuntu0.1 deb CVE-2019-2126 Low 8.1% (91st) 2.4
libcaca0 0.99.beta19-2.2ubuntu4 deb CVE-2022-0856 Low 4.9% (89th) 1.5
python3-httplib2 0.20.2-2 deb CVE-2021-21240 Low 4.5% (88th) 1.4
login 1:4.8.1-2ubuntu2.2 deb CVE-2024-56433 Low 3.6% (87th) 1.1
passwd 1:4.8.1-2ubuntu2.2 deb CVE-2024-56433 Low 3.6% (87th) 1.1
...
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/26530 | closed | [
"bug"
] | 2025-10-09T20:16:02Z | 2025-10-10T21:14:49Z | 3 | geodavic |
huggingface/lerobot | 2,156 | How to reproduce lerobot/pi0_libero_finetuned? | Thanks for the great work!
I evaluated lerobot/pi0_libero_finetuned on libero goal datasets.
When using n_action_steps=50, the success rate is ~ 75%
When using n_action_steps=10, the success rate is ~ 90%
I tried to reproduce the training results, so I mainly referred to [train_config.json](https://huggingface.co/lerobot/pi0_libero_finetuned/blob/main/train_config.json) in the `lerobot/pi0_libero_finetuned` repo, which has this key-value pair in the config dict:
```
"pretrained_path": "pepijn223/pi0_libero_finetuned_extra"
```
So I also referred to the [train_config.json](https://huggingface.co/pepijn223/pi0_libero_finetuned_extra/blob/main/train_config.json) in the `pepijn223/pi0_libero_finetuned_extra` repo, which also has the key-value pair:
```
"pretrained_path": "lerobot/pi0_libero_finetuned"
```
This again points back to the checkpoint that depends on it.
And my questions are, how are these checkpoints actually trained, and can anyone provide a train_config.json in the latest lerobot version that can reproduce lerobot/pi0_libero_finetuned?
Please also share some successful training configs if possible! | https://github.com/huggingface/lerobot/issues/2156 | open | [
"question",
"policies",
"simulation"
] | 2025-10-09T18:11:47Z | 2025-10-22T09:27:03Z | null | PuzhenYuan |
huggingface/lerobot | 2,153 | Why can’t I find something like train_expert_only in the latest version of pi0? Do the current versions of pi0 and pi0.5 only support full-parameter training? | Why can’t I find something like “train_expert_only” in the latest version of pi0?
Do the current versions of pi0 and pi0.5 only support full-parameter training? | https://github.com/huggingface/lerobot/issues/2153 | closed | [
"enhancement",
"question",
"policies",
"good first issue"
] | 2025-10-09T13:08:10Z | 2025-12-31T14:54:29Z | null | ZHHhang |
huggingface/datasets | 7,802 | [Docs] Missing documentation for `Dataset.from_dict` | Documentation link: https://huggingface.co/docs/datasets/en/package_reference/main_classes
Link to method (docstring present): https://github.com/huggingface/datasets/blob/6f2502c5a026caa89839713f6f7c8b958e5e83eb/src/datasets/arrow_dataset.py#L1029
The docstring is present for the function, but seems missing from the official documentation for the `Dataset` class on HuggingFace.
The method in question:
```python
@classmethod
def from_dict(
cls,
mapping: dict,
features: Optional[Features] = None,
info: Optional[DatasetInfo] = None,
split: Optional[NamedSplit] = None,
) -> "Dataset":
"""
Convert `dict` to a `pyarrow.Table` to create a [`Dataset`].
Important: a dataset created with from_dict() lives in memory
and therefore doesn't have an associated cache directory.
This may change in the future, but in the meantime if you
want to reduce memory usage you should write it back on disk
and reload using e.g. save_to_disk / load_from_disk.
Args:
mapping (`Mapping`):
Mapping of strings to Arrays or Python lists.
features ([`Features`], *optional*):
Dataset features.
info (`DatasetInfo`, *optional*):
Dataset information, like description, citation, etc.
split (`NamedSplit`, *optional*):
Name of the dataset split.
Returns:
[`Dataset`]
"""
``` | https://github.com/huggingface/datasets/issues/7802 | open | [] | 2025-10-09T02:54:41Z | 2025-10-19T16:09:33Z | 2 | aaronshenhao |
huggingface/transformers | 41,431 | gradient scaling occurs even though total gradient remains < max_grad_norm in trainer.py | Even though gradients remain < max_grad_norm throughout training, the gradient still goes through a scaling process. For instance, I set max_grad_norm = 1, and grad_norm consistently remains <= 0.33. Because the trainer runs the grad-clip path whenever max_grad_norm is set (> 0 and not None), this operation always gets executed within torch's clip function: `clip_coef = max_norm / (total_norm + 1e-6)`. Is there a way to prevent this? Thanks.
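For reference, my reading of `torch.nn.utils.clip_grad_norm_` is that the coefficient gets clamped, so the multiply should be a numerical no-op whenever the norm is under the threshold (a simplified sketch from memory; details may differ by torch version):

```python
import torch

def clip_grad_norm_sketch(grads, max_norm):
    # simplified reading of torch.nn.utils.clip_grad_norm_
    total_norm = torch.norm(torch.stack([g.norm(2) for g in grads]), 2)
    clip_coef = max_norm / (total_norm + 1e-6)
    clip_coef = torch.clamp(clip_coef, max=1.0)  # 1.0 whenever total_norm < max_norm
    for g in grads:
        g.mul_(clip_coef)  # multiplying by 1.0 changes nothing numerically
    return total_norm
```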
| https://github.com/huggingface/transformers/issues/41431 | closed | [] | 2025-10-07T22:13:08Z | 2025-11-15T08:02:51Z | 7 | lorsonblair |
huggingface/candle | 3,120 | AutoModel / PreTrainedModel equivalent magic ? | Hello all, first, thanks a lot for this wonderful crate.
I was wondering if it's on the roadmap, or if there is an existing solution, to get the same magic as in Python with `AutoModel.from_pretrained("the_model_name_string")`.
As I'm prototyping and often changing models, which currently requires changing the architecture code every time, having this "auto load" would save time.
Alternatives : https://github.com/lucasjinreal/Crane or https://docs.rs/kalosm/latest/kalosm/
Thanks in advance,
Have a nice day. | https://github.com/huggingface/candle/issues/3120 | open | [] | 2025-10-07T21:27:31Z | 2025-10-09T13:02:35Z | 2 | ierezell |
huggingface/lerobot | 2,134 | what is the transformers version for latest lerobot pi0? | ### System Info
```Shell
- lerobot version: 0.3.4
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31
- Python version: 3.10.18
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 1.26.4
- PyTorch version: 2.7.1+cu126
- Is PyTorch built with CUDA support?: True
- Cuda version: 12.6
- GPU model: NVIDIA A800-SXM4-80GB
- Using GPU in script?:
lerobot-eval --policy.path="lerobot/pi0_libero_finetuned" --env.type=libero --env.task=libero_goal --eval.batch_size=1 --eval.n_episodes=2 --seed=1000
```
### Information
- [x] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
Clone latest LeRobot repository and install dependencies and run lerobot_eval.py
```
lerobot-eval --policy.path="lerobot/pi0_libero_finetuned" --env.type=libero --env.task=libero_goal --eval.batch_size=1 --eval.n_episodes=2 --seed=1000
```
```
Traceback (most recent call last):
File "/cephfs/yuanpuzhen/conda_data/envs/libero/bin/lerobot-eval", line 7, in <module>
sys.exit(main())
File "/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/scripts/lerobot_eval.py", line 750, in main
eval_main()
File "/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/configs/parser.py", line 225, in wrapper_inner
response = fn(cfg, *args, **kwargs)
File "/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/scripts/lerobot_eval.py", line 495, in eval_main
policy = make_policy(
File "/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/policies/factory.py", line 386, in make_policy
policy = policy_cls.from_pretrained(**kwargs)
File "/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/policies/pi0/modeling_pi0.py", line 923, in from_pretrained
model = cls(config, **kwargs)
File "/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/policies/pi0/modeling_pi0.py", line 872, in __init__
self.model = PI0Pytorch(config)
File "/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/policies/pi0/modeling_pi0.py", line 545, in __init__
raise ValueError(msg) from None
ValueError: An incorrect transformer version is used, please create an issue on https://github.com/huggingface/lerobot/issues
Exception ignored in: <function MjRenderContext.__del__ at 0x7fb47e108ee0>
Traceback (most recent call last):
File "/cephfs/yuanpuzhen/conda_data/envs/libero/lib/python3.10/site-packages/robosuite/utils/binding_utils.py", line 199, in __del__
self.gl_ctx.free()
File "/cephfs/yuanpuzhen/conda_data/envs/libero/lib/python3.10/site-packages/robosuite/renderers/context/egl_context.py", line 150, in free
EGL.eglDestroyContext(EGL_DISPLAY, self._context)
File "/cephfs/yuanpuzhen/conda_data/envs/libero/lib/python3.10/site-packages/OpenGL/error.py", line 230, in glCheckError
raise self._errorClass(
OpenGL.raw.EGL._errors.EGLError: EGLError(
err = EGL_NOT_INITIALIZED,
baseOperation = eglDestroyContext,
cArguments = (
<OpenGL._opaque.EGLDisplay_pointer object at 0x7fb47c6805c0>,
<OpenGL._opaque.EGLContext_pointer object at 0x7fb47c6804c0>,
),
result = 0
)
```
### Expected behavior
Expect to evaluate the given checkpoint, output eval videos and eval_info.json
Can you provide stable transformers and numpy versions for the latest lerobot?
And what version of transformers can satisfy the check in PI0Pytorch?
```
try:
    from transformers.models.siglip import check

    if not check.check_whether_transformers_replace_is_installed_correctly():
        raise ValueError(msg)
except ImportError:
    raise ValueError(msg) from None
``` | https://github.com/huggingface/lerobot/issues/2134 | closed | [] | 2025-10-07T12:06:52Z | 2025-11-14T20:04:50Z | null | PuzhenYuan |
huggingface/diffusers | 12,441 | Support Wan2.2-Animate | [Wan2.2-Animate-14B](https://humanaigc.github.io/wan-animate) is a unified model for character animation and replacement, with holistic movement and expression replication.
https://github.com/user-attachments/assets/351227d0-4edc-4f6c-9bf9-053e53f218e4
We would like to open this to the community: if anyone is interested in integrating this model with Diffusers, just take these points into consideration:
1. Don't integrate the preprocessing, we can help with that using a modular custom block.
2. This issue is for more advanced users who know the diffusers library very well.
Just let me know if you're interested, and if you have any doubts, feel free to ask. If you open a PR we can help, but we are currently busy with other priorities, so we ask you to be patient. | https://github.com/huggingface/diffusers/issues/12441 | closed | [
"help wanted",
"contributions-welcome"
] | 2025-10-06T18:08:21Z | 2025-11-13T02:52:32Z | 0 | asomoza |
huggingface/lerobot | 2,124 | Question regarding downsampling and resizing dataset | Hi,
Thank you for providing this wonderful library! I was curious how one can take an existing dataset (collected or downloaded) and modify the fps (downsample), resize images, or delete specific episodes (for v3) prior to policy training. I am finding this tricky to do, particularly when the dataset is not loaded in code but provided as a parameter to lerobot-train. I've spent time digging around the codebase but didn't see a way that doesn't involve loading the dataset in a script first and adjusting it there (for resizing; I'm not sure about downsampling fps). Does the codebase provide utility functions for this? Thanks! | https://github.com/huggingface/lerobot/issues/2124 | open | [
"question",
"dataset",
"good first issue"
] | 2025-10-06T16:07:47Z | 2025-10-07T20:25:20Z | null | karthikm-0 |
huggingface/transformers | 41,363 | RT-Detr docs should reflect fixed 640x640 input size | The authors of RT-Detr mention that the model was trained on 640x640 images and was meant to be used for inference on 640x640 images. Also, the current implementation has certain quirks that make training/inferring on images of different sizes problematic. For example, the pixel masks used for batching images of varying sizes are discarded.
https://github.com/huggingface/transformers/blob/0452f28544f3626273d25f07f83c0e5f7da2d47a/src/transformers/models/rt_detr/modeling_rt_detr.py#L1645
The above is not clear in the current docs. I'll open a PR that adds a few lines to the docs to notify users about these issues. | https://github.com/huggingface/transformers/issues/41363 | closed | [
"Documentation"
] | 2025-10-06T11:04:37Z | 2025-11-06T13:24:01Z | 4 | konstantinos-p |
huggingface/tokenizers | 1,873 | Why is my Python implementation faster than the Rust implementation? | I am comparing the Python and Rust implementations of the tokenizer as follows
```python
import json
import time
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
N = 500
# [... define `texts` (a list of strings) before this point, then save them as data.json ...]
with open('./data.json', 'w', encoding='utf-8') as f:
    json.dump(texts[:N], f, ensure_ascii=False)

start = time.time()
for text in texts[:N]:
    tokenizer(text)
end = time.time()
loop_time = end - start
print("Python in a loop: ", loop_time, f"for {N} examples.")
# Python in a loop: 4.231077432632446 for 500 examples.
start = time.time()
results = tokenizer(texts[:N])
end = time.time()
batch_time = end-start
print("Python as a batch: ",batch_time, f"for {N} examples.")
# Python as a batch: 0.86988 for 500 examples.
```
and the Rust implementation:
```rust
use tokenizers::tokenizer::{Encoding, Result as TokenizerResult, Tokenizer};
use std::time::Instant;
use std::fs::File;
use std::io::BufReader;
use rayon::prelude::*;

fn main() -> TokenizerResult<()> {
    // needs the `http` feature enabled
    let tokenizer = Tokenizer::from_pretrained("bert-base-cased", None)?;

    let file = File::open("./data.json")?;
    let reader = BufReader::new(file);
    let items: Vec<String> = serde_json::from_reader(reader)?;
    let texts: Vec<&str> = items.iter().map(|s| s.as_str()).collect();

    let start = Instant::now();
    for name in texts.iter() {
        let _encoding = tokenizer.encode(*name, false)?;
    }
    let duration = start.elapsed();
    println!("(1) Execution in loop: {:.6} seconds", duration.as_secs_f64());
    // (1) Execution in loop: 29.867990 seconds

    let start = Instant::now();
    let _encoded: Vec<_> = texts.par_iter().map(|name| tokenizer.encode(*name, false)).collect();
    let duration = start.elapsed();
    println!("(2) Execution with par_iter: {:.6} seconds", duration.as_secs_f64());
    // (2) Execution with par_iter: 3.968467 seconds

    let start = Instant::now();
    let _encoded: TokenizerResult<Vec<Encoding>> = tokenizer.encode_batch(items.clone(), false);
    let duration = start.elapsed();
    println!("(3) Execution with encode_batch: {:.6} seconds", duration.as_secs_f64());
    // (3) Execution with encode_batch: 3.968467 seconds

    let start = Instant::now();
    let _encoded: TokenizerResult<Vec<Encoding>> = tokenizer.encode_batch_char_offsets(items.clone(), false);
    let duration = start.elapsed();
    println!("(4) Execution with encode_batch_char_offsets: {:.6} seconds", duration.as_secs_f64());
    // (4) Execution with encode_batch_char_offsets: 6.839765 seconds

    let start = Instant::now();
    let _encoded: TokenizerResult<Vec<Encoding>> = tokenizer.encode_batch_fast(items.clone(), false);
    let duration = start.elapsed();
    println!("(5) Execution with encode_batch_fast: {:.6} seconds", duration.as_secs_f64());
    // (5) Execution with encode_batch_fast: 5.758732 seconds

    Ok(())
}
```
You see that Rust is 10 times slower in a loop and 3 times slower even when parallelization is used.
What is the trick here? How can I make my Rust code as fast as (or hopefully faster than) the Python code? | https://github.com/huggingface/tokenizers/issues/1873 | closed | [] | 2025-10-05T08:02:47Z | 2025-10-08T17:41:28Z | 4 | sambaPython24 |
huggingface/transformers | 41,336 | is there a bug in group_videos_by_shape for qwenvl video preprocessing? | ### System Info
In `src/transformers/video_utils.py`, `group_videos_by_shape` does
`grouped_videos = {shape: torch.stack(videos, dim=0) for shape, videos in grouped_videos.items()}`, where each video is of shape BTCHW, so stacking creates a new leading dimension.
However, the Qwen-VL video preprocessing does `batch_size, grid_t, channel = patches.shape[:3]`, which does not account for the additional dimension created in `group_videos_by_shape`.
I think we should use `torch.cat`, not `torch.stack`?
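A quick shape check illustrates the difference (a sketch with dummy BTCHW tensors):

```python
import torch

videos = [torch.randn(2, 8, 3, 224, 224) for _ in range(4)]  # each already BTCHW
print(torch.stack(videos, dim=0).shape)  # torch.Size([4, 2, 8, 3, 224, 224]) -> extra leading dim
print(torch.cat(videos, dim=0).shape)    # torch.Size([8, 8, 3, 224, 224])    -> batch dims merged
```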
@yonigozlan @molbap
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
running video preprocessing with list of video inputs, each with different shape
### Expected behavior
run without error | https://github.com/huggingface/transformers/issues/41336 | closed | [
"bug"
] | 2025-10-03T22:26:26Z | 2025-10-03T22:44:43Z | 1 | dichencd |
huggingface/lerobot | 2,111 | frame deletion | Great work on this project! I have a quick question - does LeRobotDataset support frame deletion? For example, in the DROID_lerobot dataset, the first few frames have an action value of 0 and I need to remove them.
I'd appreciate any insights you can provide. Thank you for your time and help! | https://github.com/huggingface/lerobot/issues/2111 | closed | [
"question",
"dataset"
] | 2025-10-03T13:05:12Z | 2025-10-10T12:17:53Z | null | Yysrc |
huggingface/lerobot | 2,108 | HIL-SERL Transform order for (tanh → rescale) is reversed | In `TanhMultivariateNormalDiag`:
```
transforms = [TanhTransform(cache_size=1)]
if low is not None and high is not None:
    transforms.insert(0, RescaleFromTanh(low, high))  # puts Rescale *before* tanh
```
This applies RescaleFromTanh before Tanh, which is backwards. Should we change it to tanh first, then rescale?
Fix:
```
transforms = [TanhTransform(cache_size=1)]
if low is not None and high is not None:
    transforms.append(RescaleFromTanh(low, high))  # tanh → rescale
```
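A quick sanity check of the ordering semantics with stock torch transforms (a sketch; `AffineTransform` stands in for `RescaleFromTanh`):

```python
import torch
from torch.distributions import Normal, TransformedDistribution
from torch.distributions.transforms import AffineTransform, TanhTransform

low, high = -0.5, 2.0
base = Normal(torch.zeros(3), torch.ones(3))
# transforms are applied in list order: tanh squashes to (-1, 1) first,
# then the affine maps (-1, 1) onto (low, high)
dist = TransformedDistribution(
    base,
    [TanhTransform(cache_size=1),
     AffineTransform(loc=(high + low) / 2, scale=(high - low) / 2)],
)
sample = dist.sample()
assert ((sample > low) & (sample < high)).all()
```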
Also, when I tried to assign values for `low` and `high`, I got this error:
```
torch/distributions/transforms.py", line 303, in domain
domain = self.parts[0].domain
AttributeError: 'RescaleFromTanh' object has no attribute 'domain'
```
This might be fixed by adding the following to the `__init__` of `class RescaleFromTanh(Transform)`:
```
# Required attributes for PyTorch Transform
self.domain = constraints.interval(-1.0, 1.0)
self.codomain = constraints.interval(low, high)
self.bijective = True
``` | https://github.com/huggingface/lerobot/issues/2108 | open | [
"question",
"policies"
] | 2025-10-02T21:44:22Z | 2025-10-07T20:36:31Z | null | priest-yang |
huggingface/lerobot | 2,107 | Low Success Rate When Training SmolVLA-0.24B on LIBERO | Hi folks, I'm trying to replicate the 0.24B SmolVLA model on the LIBERO dataset. Intuitively, I just changed the base model `vlm_model_name: str = "HuggingFaceTB/SmolVLM2-256M-Video-Instruct"`. Here is the command I used to train.
`lerobot-train --policy.type=smolvla --policy.load_vlm_weights=true --dataset.repo_id=HuggingFaceVLA/libero --env.type=libero --env.task=libero_10 --output_dir=./outputs/ --steps=100000 --batch_size=64 --eval.batch_size=1 --eval.n_episodes=1 --eval_freq=1000 --wandb.enable=true`
I trained on a single RTX4090. However, I found that the success rate on the eval set is quite low. The success rate was only 7.5%. Is there anything I did wrong? Attaching the training plots below.
<img width="1116" height="629" alt="Image" src="https://github.com/user-attachments/assets/9bbdcadb-e113-4d9f-b315-4f37b57bde37" />
<img width="1116" height="310" alt="Image" src="https://github.com/user-attachments/assets/23951a72-a374-4eda-9368-363367e4c746" /> | https://github.com/huggingface/lerobot/issues/2107 | open | [
"question",
"policies",
"simulation"
] | 2025-10-02T19:11:55Z | 2025-12-20T09:30:58Z | null | zimgong |
huggingface/optimum-onnx | 66 | How to export a stateless whisper model via optimum-cli? | I observe that when exporting a Whisper model via Python API, the resulting model is stateless, i.e. the decoder is split into two models.
```python
import os
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
ORTModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny", export=True).save_pretrained("./whisper/python")
print(os.listdir("./whisper/python"))
# ['encoder_model.onnx', 'decoder_with_past_model.onnx', 'decoder_model.onnx', 'config.json', 'generation_config.json']
```
When I export this model via CLI, the decoder model is exported as stateful even if I provide the `--no-post-process` argument.
```bash
optimum-cli export onnx --task automatic-speech-recognition -m openai/whisper-tiny --no-post-process ./whisper/cli
ls ./whisper/cli
# added_tokens.json decoder_model.onnx generation_config.json normalizer.json special_tokens_map.json tokenizer.json
# config.json encoder_model.onnx merges.txt preprocessor_config.json tokenizer_config.json vocab.json
```
My environment:
```
certifi==2025.8.3
charset-normalizer==3.4.3
coloredlogs==15.0.1
filelock==3.19.1
flatbuffers==25.9.23
fsspec==2025.9.0
hf-xet==1.1.10
huggingface-hub==0.35.3
humanfriendly==10.0
idna==3.10
Jinja2==3.1.6
MarkupSafe==3.0.3
ml_dtypes==0.5.3
mpmath==1.3.0
networkx==3.4.2
numpy==2.2.6
nvidia-cublas-cu12==12.8.4.1
nvidia-cuda-cupti-cu12==12.8.90
nvidia-cuda-nvrtc-cu12==12.8.93
nvidia-cuda-runtime-cu12==12.8.90
nvidia-cudnn-cu12==9.10.2.21
nvidia-cufft-cu12==11.3.3.83
nvidia-cufile-cu12==1.13.1.3
nvidia-curand-cu12==10.3.9.90
nvidia-cusolver-cu12==11.7.3.90
nvidia-cusparse-cu12==12.5.8.93
nvidia-cusparselt-cu12==0.7.1
nvidia-nccl-cu12==2.27.3
nvidia-nvjitlink-cu12==12.8.93
nvidia-nvtx-cu12==12.8.90
onnx==1.19.0
onnxruntime==1.23.0
optimum @ git+https://github.com/huggingface/optimum@a813c95ac088c401547fe15e7a68ac5c6f00f9a7
optimum-onnx @ git+https://github.com/huggingface/optimum-onnx.git@671b84f78a244594dd21cb1a8a1f7abb8961ea60
packaging==25.0
protobuf==6.32.1
PyYAML==6.0.3
regex==2025.9.18
requests==2.32.5
safetensors==0.6.2
sympy==1.14.0
tokenizers==0.21.4
torch==2.8.0
tqdm==4.67.1
transformers==4.55.4
triton==3.4.0
typing_extensions==4.15.0
urllib3==2.5.0
```
How can I export this model as stateless via optimum-cli? And conversely, how can I export it as stateful via the Python API?
Thanks! | https://github.com/huggingface/optimum-onnx/issues/66 | closed | [
"question"
] | 2025-10-02T09:50:03Z | 2025-10-13T05:33:25Z | null | nikita-savelyevv |
huggingface/lerobot | 2,104 | Select the VLM backbone for SmolVLA | Hi, may I ask about `vlm_model_name`: is there any model more powerful than HuggingFaceTB/SmolVLM2-500M-Video-Instruct that can be used to train SmolVLA for the LeRobot SO101? | https://github.com/huggingface/lerobot/issues/2104 | open | [
"question",
"policies",
"good first issue"
] | 2025-10-02T07:35:29Z | 2025-10-11T16:53:59Z | null | Llkhhb |
huggingface/diffusers | 12,415 | SVG 2 kernels | Can we support the new sparse kernels from SVG2 (NeurIPS 2025)?
https://svg-project.github.io/v2/ | https://github.com/huggingface/diffusers/issues/12415 | open | [] | 2025-10-01T10:52:50Z | 2025-10-01T10:52:50Z | 0 | bhack |
huggingface/lerobot | 2,096 | How can I change the task name of already recorded episodes? | I recorded the dataset using:
--dataset.single_task="slice the clay until it becomes 4 pieces"
Now I want to update those recorded episodes to a different task name. How can I do that? | https://github.com/huggingface/lerobot/issues/2096 | open | [
"question",
"dataset",
"good first issue"
] | 2025-10-01T02:15:49Z | 2025-10-30T03:48:47Z | null | pparkgyuhyeon |
huggingface/transformers | 41,235 | Request: demo code for StatefulDataLoader to checkpoint and restore the training data state, not only the model state | I'd like to request demo code for StatefulDataLoader. I want to use a data checkpoint to recover the training run's data state, not only the model state. How do I use StatefulDataLoader (or other code) to achieve this?
To be clear: I want to recover the data state, not only the model state.
How can I use accelerate + the transformers Trainer to train a model so that, when training is interrupted, it can recover from both a data checkpoint and a model checkpoint? Thanks.
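Something along these lines is what I'm after (a sketch using torchdata's StatefulDataLoader; `train_dataset` and `train_step` are placeholders, and I'm not sure this is the recommended pattern with the Trainer):

```python
from torchdata.stateful_dataloader import StatefulDataLoader

loader = StatefulDataLoader(train_dataset, batch_size=8, num_workers=2)
for step, batch in enumerate(loader):
    train_step(batch)
    if step == 100:
        dl_state = loader.state_dict()  # save next to the model checkpoint
        break

# on resume, after restoring the model/optimizer state:
loader = StatefulDataLoader(train_dataset, batch_size=8, num_workers=2)
loader.load_state_dict(dl_state)        # iteration continues after step 100
```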
I hope this makes the request clear. | https://github.com/huggingface/transformers/issues/41235 | closed | [
"bug"
] | 2025-09-30T17:07:07Z | 2025-11-08T08:04:40Z | null | ldh127 |
huggingface/accelerate | 3,802 | Request: demo code for StatefulDataLoader to checkpoint and restore the training data state, not only the model state | I'd like to request demo code for StatefulDataLoader. I want to use a data checkpoint to recover the training run's data state, not only the model state. How do I use StatefulDataLoader (or other code) to achieve this?
To be clear: I want to recover the data state, not only the model state.
How can I use accelerate + the transformers Trainer to train a model so that, when training is interrupted, it can recover from both a data checkpoint and a model checkpoint? Thanks.
I hope this makes the request clear.
| https://github.com/huggingface/accelerate/issues/3802 | closed | [] | 2025-09-30T15:58:32Z | 2025-11-09T15:06:58Z | null | ldh127 |
huggingface/transformers | 41,211 | Add DEIMv2 | ### Model description
It would be nice to integrate DEIMv2, a new state-of-the-art model for real-time object detection based on DINOv3. The weights are released under Apache 2.0.
Related thread: https://github.com/Intellindust-AI-Lab/DEIMv2/issues/20
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
Code: https://github.com/Intellindust-AI-Lab/DEIMv2
Weights (on Google Drive for now): https://github.com/Intellindust-AI-Lab/DEIMv2?tab=readme-ov-file#1-model-zoo
Ideally, the [AutoBackbone API](https://huggingface.co/docs/transformers/main_classes/backbones) can be leveraged to avoid having to re-implement the entire DINOv3 backbone in `modular_deimv2.py` and `modeling_deimv2.py`. See an example of how this is leveraged for DETR [here](https://github.com/huggingface/transformers/blob/59035fd0e1876f9e526488b61fe43ff8829059f6/src/transformers/models/detr/modeling_detr.py#L280). | https://github.com/huggingface/transformers/issues/41211 | open | [
"New model"
] | 2025-09-30T09:43:07Z | 2025-10-04T18:44:06Z | 4 | NielsRogge |
huggingface/transformers | 41,208 | Integrate mamba SSM kernels from the hub | ### Feature request
Currently, mamba kernels are imported via the upstream `mamba_ssm` source package, e.g. for [GraniteMoeHybrid](https://github.com/huggingface/transformers/blob/main/src/transformers/models/granitemoehybrid/modeling_granitemoehybrid.py#L44-L46).
Can we migrate this to use the kernels-hub (`kernels-community/mamba-ssm`) variation instead?
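A rough sketch of the swap (this assumes the `kernels` package's `get_kernel` loader; whether the hub build mirrors `mamba_ssm`'s op names is unverified):

```python
from kernels import get_kernel

# hub build replacing the PyPI `mamba_ssm` dependency
mamba_ssm = get_kernel("kernels-community/mamba-ssm")
print(dir(mamba_ssm))  # inspect which ops the hub kernel actually exposes
```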
### Motivation
Removes the external dependency. Kernel hub is also integrated at several other places throughout the library.
### Your contribution
I can submit a PR for migrating from the PyPi `mamba_ssm` package to the `kernels` package for mamba ops. | https://github.com/huggingface/transformers/issues/41208 | closed | [
"Feature request"
] | 2025-09-30T07:50:52Z | 2025-12-18T10:17:06Z | 15 | romitjain |
huggingface/tokenizers | 1,870 | How can I convert a trained tokenizer into `transformers` format | Hi guys,
I have trained a tokenizer which works pretty well, and it is stored in a single `.json` file. Is there any method / API to convert it into a `transformers` tokenizer format?
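For context, I was imagining something as simple as this (a sketch I haven't verified):

```python
from transformers import PreTrainedTokenizerFast

tok = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")
tok.save_pretrained("./my-tokenizer")  # writes the transformers-style files
```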
If there's no such implementation I am happy to contribute. | https://github.com/huggingface/tokenizers/issues/1870 | closed | [] | 2025-09-30T06:09:52Z | 2025-09-30T13:53:53Z | 1 | dibbla |
huggingface/lighteval | 999 | How to print all pass@k scores when generating 16 samples? | Hi,
I want to print the results for all pass@k metrics (e.g., k = 1, 2, 4, 8, 16) when generating 16 samples.
```python
math_500_pass_k_at_16 = LightevalTaskConfig(
    name="math_500_pass_k_at_16",
    suite=["custom"],
    prompt_function=math_500_prompt_fn,
    hf_repo="HuggingFaceH4/MATH-500",
    hf_subset="default",
    hf_avail_splits=["test"],
    evaluation_splits=["test"],
    few_shots_split=None,
    few_shots_select=None,
    generation_size=32768,
    metrics=[
        Metrics.pass_at_k_math(sample_params={"k": 1, "n": 16}),
        Metrics.pass_at_k_math(sample_params={"k": 2, "n": 16}),
        Metrics.pass_at_k_math(sample_params={"k": 4, "n": 16}),
        Metrics.pass_at_k_math(sample_params={"k": 8, "n": 16}),
        Metrics.pass_at_k_math(sample_params={"k": 16, "n": 16}),
    ],
    version=2,
)
```
But I can't see all the results I want. Does anyone know how to resolve this?
| https://github.com/huggingface/lighteval/issues/999 | open | [] | 2025-09-29T21:49:44Z | 2025-10-14T08:04:17Z | null | passing2961 |
huggingface/lerobot | 2,083 | How to train this RL model with my trained data | I want this model to load the trained model that I have already generated. So, I modified the output_dir and set resume to true, but then the problem shown in the figure occurred. How can I solve it?
`{ "output_dir": "outputs/train/2025-09-28/17-28-55_default",
"job_name": "default", "resume": true,
"seed": 1000, "num_workers": 4,
"batch_size": 256,
"steps": 100000,`
and the original config is:
```json
{ "output_dir": null,
  "job_name": "default", "resume": false,
  "seed": 1000, "num_workers": 4,
  "batch_size": 256,
  "steps": 100000,
  ...
```
<img width="1515" height="717" alt="Image" src="https://github.com/user-attachments/assets/5f46acd3-9a72-41a5-8506-742f5c479c53" />
| https://github.com/huggingface/lerobot/issues/2083 | open | [] | 2025-09-29T07:22:08Z | 2025-10-07T20:32:04Z | null | 993984583 |
huggingface/lerobot | 2,082 | How to train this RL model with my model data | I want this model to load the trained model that I have already generated. So, I modified the output_dir and set resume to true, but then the problem shown in the figure occurred. How can I solve it?
```json
{
  "output_dir": "outputs/train/2025-09-28/17-28-55_default",
  "job_name": "default",
  "resume": true,
  "seed": 1000,
  "num_workers": 4,
  "batch_size": 256,
  "steps": 100000,
  ...
```
<img width="1515" height="717" alt="Image" src="https://github.com/user-attachments/assets/df121807-b309-4a5c-bee1-850b0fab2ae0" />
| https://github.com/huggingface/lerobot/issues/2082 | closed | [] | 2025-09-29T07:18:52Z | 2025-10-07T20:33:11Z | null | 993984583 |
huggingface/sentence-transformers | 3,532 | What is the proper way to use prompts? Do we have to format/render them ourselves? | Hi. First time using the Sentence Transformers library and I had a question regarding using prompts. Specifically, it seems like the [`SentenceTransformer.encode_document`](https://sbert.net/docs/package_reference/sentence_transformer/SentenceTransformer.html#sentence_transformers.SentenceTransformer.encode_document) method is a convenient wrapper for the [`SentenceTransformer.encode`](https://sbert.net/docs/package_reference/sentence_transformer/SentenceTransformer.html#sentence_transformers.SentenceTransformer.encode) method in the sense that the prompt `"document"` and the task `"document"` are selected automatically.
However, I'm noticing that the prompt is simply prepended to the provided text rather than being formatted. The prompt for `"document"` is `title: {title | "none"} | text: {content}`, and the `encode` method simply prepends it: https://github.com/UKPLab/sentence-transformers/blob/7341bf155b4349b88690b78c84beb5aa658c439f/sentence_transformers/SentenceTransformer.py#L1040
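In other words, my reading of the current behavior is roughly this (a simplified sketch; the model id and the exact contents of `model.prompts` are my assumptions):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("google/embeddinggemma-300m")  # placeholder model id
content = "some passage"
prompt = model.prompts["document"]           # e.g. 'title: none | text: '
embedding = model.encode(prompt + content)   # encode() effectively does this prepend
```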
Meaning that the resulting input to the embedding model would look like `title: none | text: {OUR_TEXT}`. But what if we wanted to include a `title` value? It seems like we'd have to pre-process the input ourselves. But then what is the point of using `encode_document`? | https://github.com/huggingface/sentence-transformers/issues/3532 | closed | [] | 2025-09-28T06:32:51Z | 2025-09-30T10:59:24Z | null | seanswyi |
huggingface/transformers | 41,186 | Qwen2.5-VL restore tensor multi-image form |
Hello, I have recently been experimenting with qwen2.5-vl (https://github.com/huggingface/transformers/blob/v4.52-release/src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py). I noticed that multiple images are pre-merged here,
```
image_embeds = self.get_image_features(pixel_values, image_grid_thw)
```
but I want to process each image individually, such as performing pooling on each image. I found that when I attempt operations like
```
image_embeds.view(n_img, image_embeds.shape[0]//n_img, -1)
```
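I suspect the split has to use `image_grid_thw`, since each image can contribute a different number of tokens; something like this sketch is the direction I've been attempting (the `spatial_merge_size` attribute path is my guess):

```python
import torch

# image_grid_thw has shape (num_images, 3) holding (t, h, w) per image; each
# image contributes t * h * w / spatial_merge_size**2 rows of image_embeds
merge = self.config.vision_config.spatial_merge_size
tokens_per_image = (image_grid_thw.prod(dim=-1) // merge**2).tolist()
per_image_embeds = torch.split(image_embeds, tokens_per_image, dim=0)
pooled = [e.mean(dim=0) for e in per_image_embeds]  # e.g. mean-pool each image
```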
I cannot correctly restore the multi-image format. Could you please advise on how to handle this?
| https://github.com/huggingface/transformers/issues/41186 | closed | [] | 2025-09-28T03:36:24Z | 2025-11-05T08:02:55Z | 2 | NiFangBaAGe |
huggingface/peft | 2,802 | Guide on training that requires both LoRA and base model forward calls ? | Hi, I'm working on some training variants that require hidden states from the base model and the hidden states produced with LoRA. I'm currently initializing two separate model objects:
```
from peft import get_peft_model
m1=AutoModelForCausalLM.from_pretrained(model_path)
m2=AutoModelForCausalLM.from_pretrained(model_path)
lora_config = LoraConfig(....)
m2 = get_peft_model(m2, lora_config)
```
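Something like this is what I'm hoping exists (a sketch; I haven't verified that PEFT's `disable_adapter()` context manager covers this case, and `batch` / `my_objective` are placeholders):

```python
# single model, two forward passes
with m2.disable_adapter():                     # adapters bypassed -> base model outputs
    base_hidden = m2(**batch, output_hidden_states=True).hidden_states

lora_hidden = m2(**batch, output_hidden_states=True).hidden_states  # adapters active
loss = my_objective(base_hidden, lora_hidden)
```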
Is there already an API to call a non-LoRA forward pass on the `m2` object? I believe it would be more memory efficient. | https://github.com/huggingface/peft/issues/2802 | closed | [] | 2025-09-27T23:12:23Z | 2025-10-15T10:26:15Z | 3 | thangld201 |
huggingface/lerobot | 2,072 | How to run lerobot with RTX 5090? If not possible, please add support | ### System Info
```Shell
- lerobot version: 0.3.4
- Platform: Linux-6.14.0-32-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface Hub version: 0.35.1
- Datasets version: 4.1.1
- Numpy version: 2.2.6
- PyTorch version: 2.8.0+cu128
- Is PyTorch built with CUDA support?: True
- Cuda version: 12.8
- GPU model: NVIDIA GeForce RTX 5090
- Using GPU in script?: Yes
```
### Information
- [x] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to run the train script as shown in the examples
```
python -m lerobot.scripts.lerobot_train --policy.path=cijerezg/smolvla-test --dataset.repo_id=cijerezg/pick-up-train-v1 --batch_size=48 --steps=20000 --output_dir=outputs/train/my_smolvla_pickup_v9 --job_name=my_smolvla_training --policy.device=cuda --wandb.enable=true --policy.repo_id=pickup_policy_v5 --save_freq=1000
```
### Expected behavior
I expect it to run, but instead I get the following error:
```
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/scripts/lerobot_train.py", line 363, in <module>
main()
File "/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/scripts/lerobot_train.py", line 359, in main
train()
File "/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/configs/parser.py", line 225, in wrapper_inner
response = fn(cfg, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/scripts/lerobot_train.py", line 263, in train
batch = next(dl_iter)
^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/datasets/utils.py", line 917, in cycle
yield next(iterator)
^^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 734, in __next__
data = self._next_data()
^^^^^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 1516, in _next_data
return self._process_data(data, worker_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 1551, in _process_data
data.reraise()
File "/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torch/_utils.py", line 769, in reraise
raise exception
NotImplementedError: Caught NotImplementedError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torch/utils/data/_utils/worker.py", line 349, in _worker_loop
data = fetcher.fetch(index) # type: ignore[possibly-undefined]
^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
~~~~~~~~~~~~^^^^^
File "/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/datasets/lerobot_dataset.py", line 874, in __getitem__
video_frames = self._query_videos(query_timestamps, ep_idx)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/datasets/lerobot_dataset.py", line 846, in _query_videos
frames = decode_video_frames(video_path, shifted_query_ts, self.tolerance_s, self.video_backend)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/datasets/video_utils.py", line 69, in decode_video_frames
return decode_video_frames_torchcodec(video_path, timestamps, tolerance_s)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/datasets/video_utils.py", line 248, in decode_video_frames_torchcodec
decoder = decoder_cache.get_decoder(str(video_path))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/datasets/video_utils.py", line 193, in get_decoder
decoder = VideoDecoder(file_handle, seek_mode="approximate")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torchcodec/decoders/_video_decoder.py", line 89, in __init__
self._decoder = create_decoder(source=source, seek_mode=seek_mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/Research/ | https://github.com/huggingface/lerobot/issues/2072 | closed | [] | 2025-09-27T19:52:42Z | 2025-11-08T07:53:00Z | null | cijerezg |
huggingface/text-generation-inference | 3,333 | How to use prefix caching | Hi
I can't find a way to turn on prefix caching.
When I run any model, I always get:
Using prefix caching = False
Thanks a lot | https://github.com/huggingface/text-generation-inference/issues/3333 | open | [] | 2025-09-27T14:14:37Z | 2025-09-29T11:52:48Z | null | Noha-Magdy |
huggingface/smol-course | 259 | [QUESTION] Is this a bug in SmolLM3's chat template? |
Hi
I am reading this
https://huggingface.co/learn/smol-course/unit1/2#chat-templates-with-tools
I feel like there is a bug in `HuggingFaceTB/SmolLM3-3B`'s chat template.
From the example:
```
# Conversation with tool usage
messages = [
{"role": "system", "content": "You are a helpful assistant with access to tools."},
{"role": "user", "content": "What's the weather like in Paris?"},
{
"role": "assistant",
"content": "I'll check the weather in Paris for you.",
"tool_calls": [
{
"id": "call_1",
"type": "function",
"function": {
"name": "get_weather",
"arguments": '{"location": "Paris, France", "unit": "celsius"}'
}
}
]
},
{
"role": "tool",
"tool_call_id": "call_1",
"content": '{"temperature": 22, "condition": "sunny", "humidity": 60}'
},
{
"role": "assistant",
"content": "The weather in Paris is currently sunny with a temperature of 22°C and 60% humidity. It's a beautiful day!"
}
]
# Apply chat template with tools
formatted_with_tools = tokenizer.apply_chat_template(
messages,
tools=tools,
tokenize=False,
add_generation_prompt=False
)
print("Chat template with tools:")
print(formatted_with_tools)
```
I got this result:
```
Chat template with tools:
<|im_start|>system
## Metadata
Knowledge Cutoff Date: June 2025
Today Date: 27 September 2025
Reasoning Mode: /think
## Custom Instructions
You are a helpful assistant with access to tools.
### Tools
You may call one or more functions to assist with the user query.
You are provided with function signatures within <tools></tools> XML tags:
<tools>
{'type': 'function', 'function': {'name': 'get_weather', 'description': 'Get the current weather for a location', 'parameters': {'type': 'object', 'properties': {'location': {'type': 'string', 'description': 'The city and state, e.g. San Francisco, CA'}, 'unit': {'type': 'string', 'enum': ['celsius', 'fahrenheit'], 'description': 'The temperature unit'}}, 'required': ['location']}}}
{'type': 'function', 'function': {'name': 'calculate', 'description': 'Perform mathematical calculations', 'parameters': {'type': 'object', 'properties': {'expression': {'type': 'string', 'description': 'Mathematical expression to evaluate'}}, 'required': ['expression']}}}
</tools>
For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>
<|im_end|>
<|im_start|>user
What's the weather like in Paris?<|im_end|>
<|im_start|>assistant
I'll check the weather in Paris for you.<|im_end|>
<|im_start|>user
{"temperature": 22, "condition": "sunny", "humidity": 60}<|im_end|>
<|im_start|>assistant
The weather in Paris is currently sunny with a temperature of 22°C and 60% humidity. It's a beautiful day!<|im_end|>
```
Which is kind of weird. The first thing is that there is no tool call in the message below:
```
<|im_start|>assistant
I'll check the weather in Paris for you.<|im_end|>
```
I expect it to have `<tool_call> ... </tool_call>` in it.
The second thing is that the `tool` role got replaced with the `user` role.
Shouldn't the role be specified explicitly?
Can someone help me with this, please? | https://github.com/huggingface/smol-course/issues/259 | closed | [
"question"
] | 2025-09-27T10:19:37Z | 2025-11-24T18:40:09Z | null | Nevermetyou65 |
huggingface/accelerate | 3,797 | Question: ReduceLROnPlateau wrapped by AcceleratedScheduler in DDP may multiply LR by num_processes? | Hi,
I’m using ReduceLROnPlateau wrapped by AcceleratedScheduler in a multi-GPU / DDP setup (num_processes=8).
My main process calls:
```
lr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
optimizer, mode="min", factor=self.hyper_params['lr_decay_factor'], patience=self.hyper_params['lr_reduce_patient']
)
model, optimizer, train_loader, val_loader, lr_scheduler, = accelerator.prepare(
model_bundle.model, optimizer, data_loaders.train_loader, data_loaders.val_loader, lr_scheduler
)
for epoch in range(self.hyper_params['epochs']):
    # train...
    val_loss = self.eval()
    lr_scheduler.step(val_loss)
```
I noticed that AcceleratedScheduler.step() does:
```
num_processes = AcceleratorState().num_processes
for _ in range(num_processes):
    # Special case when using OneCycle and `drop_last` was not used
    if hasattr(self.scheduler, "total_steps"):
        if self.scheduler._step_count <= self.scheduler.total_steps:
            self.scheduler.step(*args, **kwargs)
    else:
        self.scheduler.step(*args, **kwargs)
```
Will this cause the LR to be reduced num_processes times for a single validation step?
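If so, would the right workaround be to keep the plateau scheduler out of `prepare()` and step it once on a metric reduced across processes? A rough sketch of what I mean (unverified; `train_one_epoch` and `val_loss_tensor` are placeholders):

```python
model, optimizer, train_loader, val_loader = accelerator.prepare(
    model, optimizer, train_loader, val_loader
)
plateau = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min")

for epoch in range(num_epochs):
    train_one_epoch()
    val_loss = accelerator.gather(val_loss_tensor).mean()  # same value on every rank
    plateau.step(val_loss)                                 # stepped exactly once per epoch
```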
Thanks! | https://github.com/huggingface/accelerate/issues/3797 | closed | [] | 2025-09-26T10:02:20Z | 2025-11-03T15:08:09Z | 1 | nicelulu |
huggingface/lerobot | 2,050 | I wonder how to use RL on so101 within sim environment? | https://github.com/huggingface/lerobot/issues/2050 | closed | [
"question",
"simulation",
"good first issue"
] | 2025-09-26T06:52:38Z | 2025-10-08T18:04:44Z | null | Temmp1e | |
huggingface/lerobot | 2,045 | I would appreciate it if you could explain how to train the slicing clay model | I am planning to conduct a clay-cutting task using pi0. Since this type of task is not typically included among pi0’s foundation model tasks, I would like to inquire how many episodes (and the approximate duration of each) would generally be required for such a custom task.
The task I have in mind involves cutting clay in this manner, and I am uncertain whether it can be made to work effectively. I would greatly appreciate any realistic advice or guidance you could provide on this matter.
<img width="1333" height="1065" alt="Image" src="https://github.com/user-attachments/assets/cd474850-c09a-4ae0-9668-a2ce8c2b3b6e" /> | https://github.com/huggingface/lerobot/issues/2045 | open | [] | 2025-09-26T00:51:59Z | 2025-09-26T00:51:59Z | null | pparkgyuhyeon |
huggingface/lerobot | 2,042 | Question: How to train to get Task Recovery behavior? | We would need the robot to be able to detect a failure (like dropping an object) and attempt to correct it to continue with the task.
How would the training data would look like for this?
Thanks | https://github.com/huggingface/lerobot/issues/2042 | open | [] | 2025-09-25T15:52:55Z | 2025-09-25T15:52:55Z | null | raul-machine-learning |
huggingface/accelerate | 3,794 | Error when evaluating with multi-gpu | I met a problem when evaluating Llada-8B with multi-gpu ( **Nvidia V100** ) using accelerate+lm_eval. Error occurs when **num_processes>1**.
but there is no problem with single GPU, all the other cfgs are the same.
How can i solve this problem?
I use this command to evaluate
accelerate launch --config_file config1.yaml eval_llada.py --tasks ${task} --num_fewshot ${num_fewshot} \
--confirm_run_unsafe_code --model llada_dist \
--model_args model_path='/raid/data/zhouy/model_data/LLaDA-8B-Instruct',
gen_length=${length},steps=${length},block_length=${block_length},show_speed=True
This is my config1.yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_GPU
downcast_bf16: 'no'
enable_cpu_affinity: false
machine_rank: 0
main_process_ip: null
main_process_port: 5678
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
Here is the Error logs:
[rank1]: Traceback (most recent call last):
[rank1]: File "/home/zhouy/dllm/Fast-dLLM-main/llada/eval_llada.py", line 364, in <module>
[rank1]: cli_evaluate()
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/lm_eval/__main__.py", line 389, in cli_evaluate
[rank1]: results = evaluator.simple_evaluate(
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/lm_eval/utils.py", line 422, in _wrapper
[rank1]: return fn(*args, **kwargs)
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/lm_eval/evaluator.py", line 308, in simple_evaluate
[rank1]: results = evaluate(
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/lm_eval/utils.py", line 422, in _wrapper
[rank1]: return fn(*args, **kwargs)
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/lm_eval/evaluator.py", line 528, in evaluate
[rank1]: resps = getattr(lm, reqtype)(cloned_reqs)
[rank1]: File "/home/zhouy/dllm/Fast-dLLM-main/llada/eval_llada.py", line 312, in generate_until
[rank1]: generated_answer, nfe = generate_with_dual_cache(self.model, input_ids, steps=self.steps, gen_length=self.gen_length, block_length=self.block_length,
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[rank1]: return func(*args, **kwargs)
[rank1]: File "/home/zhouy/dllm/Fast-dLLM-main/llada/generate.py", line 208, in generate_with_dual_cache
[rank1]: output = model(x, use_cache=True)
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1643, in forward
[rank1]: else self._run_ddp_forward(*inputs, **kwargs)
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1459, in _run_ddp_forward
[rank1]: return self.module(*inputs, **kwargs) # type: ignore[index]
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/accelerate/utils/operations.py", line 818, in forward
[rank1]: return model_forward(*args, **kwargs)
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/accelerate/utils/operations.py", line 806, in __call__
[rank1]: return convert_to_fp32(self.model_forward(*args, **kwargs))
[rank1]: File "/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast
[rank1]: return func(*args, **kwargs)
[rank1]: File "/home/zhouy/dllm/Fast-dLLM-main/llada/model/modeling_llada.py", line 1582, in forward
[rank1]: outputs = self.model.forward(
[rank1]: File "/home/zhouy/dllm/Fast-dLLM-main/llada/model/modeling_llada.py", line 1479, in forward
[rank1]: x, cache = block(x, attention_bias=attention_bias, layer_past=layer_past, use_ca
```
 | https://github.com/huggingface/accelerate/issues/3794 | closed | [] | 2025-09-25T14:42:29Z | 2025-11-03T15:08:12Z | 1 | adfad1 |
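Since the traceback above is cut off before the actual error, one way to narrow it down is to separate the accelerate/DDP/fp16 plumbing from lm_eval. A minimal sanity check, assuming the same `config1.yaml` (the script name and tensor shapes are made up):

```python
# check_ddp.py -- run with: accelerate launch --config_file config1.yaml check_ddp.py
# Verifies that a DDP-wrapped fp16 forward pass works with num_processes=2
# before involving lm_eval and the LLaDA generation code.
import torch
from accelerate import Accelerator

accelerator = Accelerator()  # picks up mixed_precision: fp16 from the config
model = accelerator.prepare(torch.nn.Linear(16, 16))

x = torch.randn(4, 16, device=accelerator.device)
with torch.no_grad():
    y = model(x)
accelerator.print(f"num_processes={accelerator.num_processes}, output={tuple(y.shape)}")
```

If this passes on both ranks, the problem is more likely in the model or generation code than in the launch configuration.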
huggingface/text-embeddings-inference | 728 | Compile error in multiple environments for CPU backend | ### System Info
TEI source code:
- Latest main branch(0c1009bfc49b759fe75eed4fd377b4fbad534ad5);
- Latest release `v1.8.2`;
- Release `v1.8.1`
Tested platform:
- Win: AMD 7950X+Windows 10 x64 Version 10.0.19045.6332;
- WSL2: AMD 7950X+Debian 13 on WSL2 (Linux DESKTOP 5.15.167.4-microsoft-standard-WSL2 #1 SMP Tue Nov 5 00:21:55 UTC 2024 x86_64 GNU/Linux) @ Windows 10 x64 Version 10.0.19045.6332;
- Linux: Intel 6133*2+Ubuntu 20.04;
(GPUs are not mentioned since TEI is built for the CPU backend)
Tested rustup envs:
Freshly installed rustup with the default profile: cargo 1.85.1 (d73d2caf9 2024-12-31)
- Win: Freshly installed rustup & freshly installed MSVC v143 - VS 2022 C++ build tools + Windows 11 SDK (10.0.22621.0) + cmake
- WSL: Freshly installed rustup & gcc (Debian 14.2.0-19) 14.2.0
- Linux: Freshly installed rustup & gcc (GCC) 10.5.0
### Information
- [ ] Docker
- [x] The CLI directly
### Tasks
- [x] An officially supported command
- [ ] My own modifications
### Reproduction
As the docs recommend, I tested on the 3 different envs listed above:
1. `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh`
2. `cargo install --path router -F mkl --verbose` (added `--verbose` for logging)
The build fails with compile errors about **25 undefined references / external symbols** (`'vsTanh', 'vsSub', 'vsSqrt', 'vsSin', 'vsMul', 'vsLn', 'vsFmin', 'vsExp', 'vsDiv', 'vsCos', 'vsAdd', 'vdTanh', 'vdSub', 'vdSqrt', 'vdSin', 'vdMul', 'vdLn', 'vdFmin', 'vdExp', 'vdDiv', 'vdCos', 'vdAdd', 'sgemm_', 'hgemm_', 'dgemm_'`)
### Expected behavior
I expected the compile to finish, but:
- Compiling v1.8.2/v1.8.1/main (similar errors) on Win+MSVC+AMD CPU:
```
...
Running `C:\Users\nkh04\.rustup\toolchains\1.85.1-x86_64-pc-windows-msvc\bin\rustc.exe --crate-name text_embeddings_router --edition=2021 router\src\main.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --diagnostic-width=115 --crate-type bin --emit=dep-info,link -C opt-level=3 -C panic=abort -C lto=fat -C codegen-units=1 --cfg "feature=\"candle\"" --cfg "feature=\"default\"" --cfg "feature=\"dynamic-linking\"" --cfg "feature=\"http\"" --cfg "feature=\"mkl\"" --check-cfg cfg(docsrs,test) --check-cfg "cfg(feature, values(\"accelerate\", \"candle\", \"candle-cuda\", \"candle-cuda-turing\", \"candle-cuda-volta\", \"default\", \"dynamic-linking\", \"google\", \"grpc\", \"http\", \"metal\", \"mkl\", \"ort\", \"python\", \"static-linking\"))" -C metadata=e1406d246b8c925f --out-dir F:\text-embeddings-inference-1.8.2\target\release\deps -C strip=symbols -L dependency=F:\text-embeddings-inference-1.8.2\target\release\deps --extern anyhow=F:\text-embeddings-inference-1.8.2\target\release\deps\libanyhow-5751be73768123a3.rlib --extern axum=F:\text-embeddings-inference-1.8.2\target\release\deps\libaxum-8bc59cf51b8d1ae2.rlib --extern axum_tracing_opentelemetry=F:\text-embeddings-inference-1.8.2\target\release\deps\libaxum_tracing_opentelemetry-6919ca207315f42e.rlib --extern base64=F:\text-embeddings-inference-1.8.2\target\release\deps\libbase64-20907aaabfa37a5c.rlib --extern clap=F:\text-embeddings-inference-1.8.2\target\release\deps\libclap-ded1b8a7f6da29a7.rlib --extern futures=F:\text-embeddings-inference-1.8.2\target\release\deps\libfutures-55e1ce906ca8ce43.rlib --extern hf_hub=F:\text-embeddings-inference-1.8.2\target\release\deps\libhf_hub-46162d037bf61d01.rlib --extern http=F:\text-embeddings-inference-1.8.2\target\release\deps\libhttp-721bb5a8d4ad5af4.rlib --extern init_tracing_opentelemetry=F:\text-embeddings-inference-1.8.2\target\release\deps\libinit_tracing_opentelemetry-1130e5d6b02b3c83.rlib --extern intel_mkl_src=F:\text-embeddings-inference-1.8.2\target\release\deps\libintel_mkl_src-7de47f7e38d141d5.rlib --extern metrics=F:\text-embeddings-inference-1.8.2\target\release\deps\libmetrics-f38f63f59a9e401d.rlib --extern metrics_exporter_prometheus=F:\text-embeddings-inference-1.8.2\target\release\deps\libmetrics_exporter_prometheus-3e83484daaaf9a40.rlib --extern mimalloc=F:\text-embeddings-inference-1.8.2\target\release\deps\libmimalloc-55786f97dafb497c.rlib --extern num_cpus=F:\text-embeddings-inference-1.8.2\target\release\deps\libnum_cpus-26f3f7fb7d16b825.rlib --extern opentelemetry=F:\text-embeddings-inference-1.8.2\target\release\deps\libopentelemetry-43ce590757d45ebb.rlib --extern opentelemetry_otlp=F:\text-embeddings-inference-1.8.2\target\release\deps\libopentelemetry_otlp-7adf99fb9a924955.rlib --extern opentelemetry_sdk=F:\text-embeddings-inference-1.8.2\target\release\deps\libopentelemetry_sdk-48d11cd15d38a406.rlib --extern reqwest=F:\text-embeddings-inference-1.8.2\target\release\deps\libreqwest-cdbb64c7917c22c9.rlib --extern serde=F:\text-embeddings-inference-1.8.2\target\release\deps\libserde-e13a1b310cb83bc5.rlib --extern serde_json=F:\text-embeddings-inference-1.8.2\target\release\deps\libserde_json-c2074a4721fb3f74.rlib --extern simsimd=F:\text-embeddings-inference-1.8.2\target\release\deps\libsimsimd-5bf7050b419eab84.rlib --extern text_embeddings_bac | https://github.com/huggingface/text-embeddings-inference/issues/728 | open | [
"documentation",
"question"
] | 2025-09-25T11:52:16Z | 2025-11-18T14:49:01Z | null | nkh0472 |
huggingface/transformers | 41,141 | Need a concise example of Tensor Parallelism (TP) training using Trainer/SFTTrainer. | ### Feature request
I have checked the code and there are a few places that talk about TP. I saw that the model's `from_pretrained` method accepts `tp_plan` and `device_mesh`. I also checked that `TrainingArguments` can take a `parallelism_config` which defines the TP/CP plan along with FSDP. However, I am not able to stitch things together to make TP-only training work. Please help.
Ref:
- https://github.com/huggingface/transformers/blob/main/examples/3D_parallel.py
### Motivation
I need to enable TP-only training, but no tutorial or example is available.
### Your contribution
Given a proper understanding and proper guidance, I can come up with a clean example and documentation for it.
"Documentation",
"Feature request",
"Tensor Parallel"
] | 2025-09-25T03:01:02Z | 2026-01-04T14:05:36Z | 10 | meet-minimalist |
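For the loading half of the question above, `tp_plan="auto"` in `from_pretrained` is a real entry point for tensor-parallel sharding; the `Trainer`/`SFTTrainer` wiring on top of it is exactly what the issue asks to have documented, so treat everything beyond loading as open. A minimal loading sketch (the model id is a placeholder):

```python
# tp_load.py -- launch with: torchrun --nproc-per-node 4 tp_load.py
# Loads the checkpoint sharded tensor-parallel across all launched ranks;
# training on top of this is the part the issue asks to document.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B"  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    tp_plan="auto",  # shard supported layers across the torchrun world
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
print(type(model))
```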
huggingface/lerobot | 2,034 | dataset v2.1 and groot n1.5 | For now, GR00T does not support dataset v3.0 for fine-tuning? In that case, should we continue to use v2.1? And if we already collected data with v3, how can we convert it back to v2.1? | https://github.com/huggingface/lerobot/issues/2034 | open | [
"question",
"policies",
"dataset"
] | 2025-09-24T21:12:26Z | 2025-12-24T00:05:45Z | null | zujian-y |
huggingface/tokenizers | 1,868 | How to set the cache_dir in the Rust implementation? | Hey, thank you for your great work with these tokenizers.
When I use the tokenizers through the Python API via transformers, I can set a specific cache_dir like this:
```
from transformers import AutoTokenizer
self.tokenizer = AutoTokenizer.from_pretrained(self.tokenizer_name, cache_dir=self.cache_dir)
```
How can I do that in Rust? How can I print the default cache dir (in Rust)? | https://github.com/huggingface/tokenizers/issues/1868 | open | [] | 2025-09-24T18:50:38Z | 2025-10-06T04:25:46Z | null | sambaPython24 |
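Not a Rust answer, but a useful data point for the question above: the Python hub client resolves its default cache under `~/.cache/huggingface`, overridable via the `HF_HOME` environment variable, and (as an assumption to verify) the Rust `hf-hub` crate that `tokenizers` uses for downloads honors the same variable, so exporting `HF_HOME` before running the Rust binary is one way to redirect the cache. A quick check of the Python-side default:

```python
# Prints the cache location the Python hub client resolves to; the Rust
# hf-hub crate is assumed to default to the same tree (verify against your
# hf-hub version).
import os
from huggingface_hub.constants import HF_HUB_CACHE

print("HF_HOME =", os.environ.get("HF_HOME"))
print("resolved hub cache =", HF_HUB_CACHE)
```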
huggingface/diffusers | 12,386 | Implement missing features on ModularPipeline | As I'm looking to take advantage of the new `ModularPipeline`, the ask is to implement some currently missing features.
My use case is to convert an existing loaded model using a standard pipeline into a modular pipeline. That functionality was provided via #11915 and is now working.
The first minor obstacle is that the modular pipeline does not have defined params for execution.
In a standard pipeline I can inspect the `__call__` signature to see which params are allowed.
I currently work around this using
`possible = [input_param.name for input_param in model.blocks.inputs]`
Please advise if this is acceptable.
The second is that modular pipelines don't seem to implement normal callbacks at all (e.g. `callback_on_step_end_tensor_inputs`)? At a minimum we need some kind of callback functionality to capture interim latents on each step.
The third is more cosmetic: the modular pipeline does implement `set_progress_bar_config`, but it's not doing anything, as it's not implemented on the actual block (tested with `StableDiffusionXLModularPipeline`).
cc @yiyixuxu @DN6 @sayakpaul | https://github.com/huggingface/diffusers/issues/12386 | open | [
"roadmap"
] | 2025-09-24T15:49:23Z | 2025-09-29T05:46:29Z | 0 | vladmandic |
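For the first point above, a sketch unifying the two introspection paths (only the `blocks.inputs` expression comes from the issue itself; the rest is an assumption about pipeline shape, not a definitive diffusers API):

```python
import inspect

def allowed_params(pipeline):
    """Best-effort list of accepted call params for standard or modular pipelines."""
    blocks = getattr(pipeline, "blocks", None)
    if blocks is not None:
        # ModularPipeline path: no __call__ signature to inspect, so enumerate
        # the block inputs instead (the workaround quoted in the issue).
        return [input_param.name for input_param in blocks.inputs]
    # Standard pipeline path: the allowed params come from __call__'s signature.
    return list(inspect.signature(pipeline.__call__).parameters)
```

Usage would be `allowed_params(pipe)` for either pipeline flavor, keeping the caller agnostic about which kind it received.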