Dataset schema (column: type, observed range):

repo: string (147 distinct values)
number: int64 (1 to 172k)
title: string (length 2 to 476)
body: string (length 0 to 5k)
url: string (length 39 to 70)
state: string (2 distinct values)
labels: list (length 0 to 9)
created_at: timestamp[ns, tz=UTC] (2017-01-18 18:50:08 to 2026-01-06 07:33:18)
updated_at: timestamp[ns, tz=UTC] (2017-01-18 19:20:07 to 2026-01-06 08:03:39)
comments: int64 (0 to 58)
user: string (length 2 to 28)
huggingface/trl
4,397
Remove or move Multi Adapter RL
I don't think it makes sense to have this as a whole section in the doc. Either remove it, or update it and move it to the PEFT integration guide.
https://github.com/huggingface/trl/issues/4397
closed
[ "📚 documentation", "⚡ PEFT" ]
2025-10-30T15:12:58Z
2025-11-04T23:57:56Z
0
qgallouedec
huggingface/transformers
41,948
Does Qwen2VLImageProcessor treat two consecutive images as one group/feature?
When looking at the Qwen3-VL model's image processor (which reuses Qwen2-VL's), I found the following lines of code hard to understand. `L296-300` checks the number of input images (`patches.shape[0]`) and repeats the last one to make the count divisible by `temporal_patch_size`. This makes the model process two consecutive i...
https://github.com/huggingface/transformers/issues/41948
closed
[]
2025-10-30T09:23:50Z
2025-10-31T01:01:09Z
3
priancho
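The padding the issue above asks about can be illustrated standalone. A minimal numpy sketch of the mechanism (not the actual HF source; shapes are made up): when the frame count is not divisible by `temporal_patch_size`, the last frame is tiled until it is, so a lone image effectively becomes a two-frame group.

```python
import numpy as np

temporal_patch_size = 2
patches = np.random.rand(3, 224, 224, 3)  # 3 "frames": one short of a multiple of 2

remainder = patches.shape[0] % temporal_patch_size
if remainder != 0:
    # tile the last frame until the frame count divides evenly
    pad = np.tile(patches[-1:], (temporal_patch_size - remainder, 1, 1, 1))
    patches = np.concatenate([patches, pad], axis=0)

print(patches.shape)  # (4, 224, 224, 3)
```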
huggingface/transformers
41,947
Why is Smolvlm-256M-Instruct slower than Internvl-v2-1B?
As the title says, Smolvlm has a smaller model size (roughly 1/4 the matrix multiplications) and a smaller input embedding. Yet both torch.CudaEvent and timer.perf_counter with torch.sync report a slower inference time. Could this be related to a wrong implementation of Smolvlm in transformers? inference performance comparis...
https://github.com/huggingface/transformers/issues/41947
closed
[]
2025-10-30T08:10:28Z
2025-10-31T11:47:44Z
4
HuangChiEn
huggingface/trl
4,386
Reference supported trainers in Liger Kernel integration guide
Currently, we only have an example with SFT, and it's hard to know which trainers support Liger. We should list the trainers that support Liger.
https://github.com/huggingface/trl/issues/4386
closed
[ "📚 documentation", "🏋 SFT" ]
2025-10-30T04:08:04Z
2025-11-03T18:16:04Z
0
qgallouedec
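For reference, the one documented pattern today is small; a minimal sketch, assuming a recent TRL where `use_liger_kernel` is inherited from transformers' `TrainingArguments` (which configs expose it is exactly what the issue asks to be listed):

```python
from trl import SFTConfig

# Liger is toggled via a flag inherited from transformers' TrainingArguments
args = SFTConfig(output_dir="sft-liger", use_liger_kernel=True)
```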
huggingface/trl
4,385
Use a common `trl-lib` namespace for the models/datasets/spaces
In the doc, we have examples using different namespaces, like `kashif/stack-llama-2`, `edbeeching/gpt-neo-125M-imdb`, etc. We should unify all these examples under a common `trl-lib` namespace.
https://github.com/huggingface/trl/issues/4385
open
[ "📚 documentation", "✨ enhancement" ]
2025-10-30T04:04:10Z
2025-10-30T04:04:38Z
0
qgallouedec
huggingface/trl
4,384
Write the subsection "Multi-Node Training"
This section must be written, with a simple code example and a link to the `accelerate` documentation.
https://github.com/huggingface/trl/issues/4384
open
[ "📚 documentation", "⚡accelerate" ]
2025-10-30T03:57:53Z
2025-12-08T16:23:23Z
2
qgallouedec
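A minimal sketch of what such a subsection could show, assuming a generic `train.py` and placeholder node addresses; the launch flags are standard `accelerate launch` options:

```python
# train.py: the script body stays single-process; `accelerate launch` wires up
# the nodes. Hypothetical launch, run once per node with its own --machine_rank:
#   accelerate launch --num_machines 2 --machine_rank 0 \
#       --main_process_ip 10.0.0.1 --main_process_port 29500 train.py
from accelerate import Accelerator

accelerator = Accelerator()
accelerator.print(f"rank {accelerator.process_index} of {accelerator.num_processes}")
```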
huggingface/trl
4,383
Add PEFT subsection to "Reducing Memory Usage"
PEFT is a major technique for reducing training memory usage. We should have a small section pointing to the PEFT integration guide.
https://github.com/huggingface/trl/issues/4383
closed
[ "📚 documentation", "✨ enhancement", "⚡ PEFT" ]
2025-10-30T03:55:55Z
2025-11-07T00:03:01Z
0
qgallouedec
huggingface/trl
4,382
Populate "Speeding Up Training"
Currently, this section only mentions vLLM. We should have a small guide for other methods, like flash attention. Ideally, to avoid repetition, each method should get a very light example plus a link to the place in the doc where it's discussed more extensively, e.g. vLLM pointing to the vLLM integration guide.
https://github.com/huggingface/trl/issues/4382
closed
[ "📚 documentation", "⚡accelerate" ]
2025-10-30T03:54:34Z
2025-12-01T09:47:23Z
0
qgallouedec
huggingface/trl
4,380
Fully transition from `flash-attn` to `kernels`
The new recommended way to use flash attention is via `kernels`. We should update our tests and documentation to use `kernels` instead of "flash_attention_2". E.g. https://github.com/huggingface/trl/blob/1eb561c3e9133892a2e907d84123b46e40cbc5a0/docs/source/reducing_memory_usage.md#L149 ```diff - training_args = DPOCon...
https://github.com/huggingface/trl/issues/4380
closed
[ "📚 documentation", "✨ enhancement" ]
2025-10-30T03:46:07Z
2025-11-13T04:07:35Z
0
qgallouedec
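A hedged sketch of the migration the issue describes, assuming the `kernels` package is installed and that "kernels-community/flash-attn" is the hub kernel to pull (the model name is a placeholder):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B",  # placeholder model
    attn_implementation="kernels-community/flash-attn",  # was: "flash_attention_2"
)
```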
huggingface/trl
4,379
Remove or populate "Training customization"
Currently, this part of the documentation shows some possible customizations that apply to all trainers: https://huggingface.co/docs/trl/main/en/customization However, it only features a few examples. This section would only make sense if it gets populated with more customizations; otherwise it should be removed. This thread can be used to...
https://github.com/huggingface/trl/issues/4379
closed
[ "📚 documentation" ]
2025-10-30T03:41:02Z
2025-12-01T09:39:09Z
0
qgallouedec
huggingface/trl
4,378
Extend basic usage example to all supported CLIs
Currently https://huggingface.co/docs/trl/main/en/clis?command_line=Reward#basic-usage shows basic example usage only for SFT, DPO, and Reward. We should have it for all supported CLIs (i.e., GRPO, RLOO, KTO).
https://github.com/huggingface/trl/issues/4378
closed
[ "📚 documentation", "🏋 KTO", "🏋 RLOO", "📱 cli", "🏋 GRPO" ]
2025-10-30T03:35:36Z
2025-11-14T01:13:17Z
0
qgallouedec
vllm-project/vllm
27,783
[Usage]: Model performance different from api
### Your current environment ```text vllm==0.10.0 ``` ### How would you like to use vllm I'm running the Qwen3-8B model with vllm. I also ran the same experiment using the Qwen3-8B API, but the results are quite different: the accuracy of the API model on my task is much higher than that of the vllm model. I use the same temperat...
https://github.com/vllm-project/vllm/issues/27783
open
[ "usage" ]
2025-10-30T03:30:02Z
2025-10-30T03:30:02Z
0
fny21
vllm-project/vllm
27,782
[Usage]: The same configuration reports insufficient GPU memory on v0.11.0 but not on v0.8.5
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm The server is a 4090 machine with 4 cards. Docker runs vllm openai v0.8.5; deployment command: "command: --model /models/Qwen3/Qwen3-30B-A3B --enable-reasoning --reasoning-parser deepseek_r1 --tensor_parallel_s...
https://github.com/vllm-project/vllm/issues/27782
open
[ "usage" ]
2025-10-30T03:24:54Z
2025-11-06T06:53:15Z
2
lan-qh
huggingface/trl
4,376
Rewrite `peft_integration.md`
This section of the documentation is largely outdated and relies only on PPO. Ideally, we should have clear documentation that shows how to use peft with at least SFT, DPO, and GRPO, via the `peft_config` argument. We could have additional subsections about QLoRA and prompt tuning.
https://github.com/huggingface/trl/issues/4376
closed
[]
2025-10-30T03:23:24Z
2025-11-24T10:39:27Z
0
qgallouedec
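The pattern the rewrite should center on is compact; a minimal sketch with placeholder model and dataset names (the same `peft_config` argument applies to the DPO and GRPO trainers):

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",                    # placeholder base model
    train_dataset=dataset,
    args=SFTConfig(output_dir="qwen-sft-lora"),
    peft_config=LoraConfig(r=16, lora_alpha=32),  # LoRA instead of full fine-tuning
)
trainer.train()
```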
vllm-project/vllm
27,778
[Usage]: Is DP + PP a possible way to use vLLM?
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm Hi there, I wonder if we can adopt DP + PP in vLLM to form a heterogeneous inference pipeline. For example, if I have two V100 32G GPUs and one A100 80G GPU, can I utilize them in pipeline parallelism w...
https://github.com/vllm-project/vllm/issues/27778
open
[ "usage" ]
2025-10-30T02:05:06Z
2025-10-30T02:05:06Z
0
oldcpple
vllm-project/vllm
27,746
[Bug]: `strict` value in function definitions causes request error when using Mistral tokenizer
### Your current environment Tested with latest vllm source build from main ### 🐛 Describe the bug Start vLLM with a model that uses the mistral tokenizer: ``` vllm serve mistralai/Mistral-Small-24B-Instruct-2501 \ --enable-auto-tool-choice \ --tool-call-parser mistral \ --tokenizer-mode mistral ``` Send a ...
https://github.com/vllm-project/vllm/issues/27746
open
[ "bug" ]
2025-10-29T14:33:13Z
2025-10-30T19:14:50Z
4
bbrowning
huggingface/trl
4,368
GKD: multimodal inputs?
Does the Generalized Knowledge Distillation trainer (GKDTrainer) support multimodal inputs (VLMs)? If yes, what's the expected dataset format? There is no example of this in the documentation. Thanks!
https://github.com/huggingface/trl/issues/4368
closed
[ "📚 documentation", "❓ question", "🏋 GKD" ]
2025-10-29T14:08:44Z
2025-11-07T19:26:23Z
2
e-zorzi
huggingface/lerobot
2,338
policy gr00t not found when do async inference with gr00t
### System Info ```Shell lerobot version: 3f8c5d98 (HEAD -> main, origin/main, origin/HEAD) fix(video_key typo): fixing video_key typo in update_video_info (#2323) ``` ### Information - [ ] One of the scripts in the examples/ folder of LeRobot - [ ] My own task or dataset (give details below) ### Reproduction I h...
https://github.com/huggingface/lerobot/issues/2338
closed
[ "bug", "question", "policies" ]
2025-10-29T05:36:20Z
2025-11-21T15:34:21Z
null
jcl2023
huggingface/lerobot
2,337
Can I continue reinforcement learning in HIL-SERL using a pi0
Can I continue reinforcement learning in HIL-SERL using a pi0 model from LERobot that has been fine-tuned via imitation learning?
https://github.com/huggingface/lerobot/issues/2337
open
[ "question", "policies" ]
2025-10-29T04:30:26Z
2025-11-11T03:13:23Z
null
pparkgyuhyeon
huggingface/peft
2,878
peft " target_modules='all-linear' " have different behavior between x86 and aarch ?
### System Info i have tested on 2 arch (x86, arm) then find this bug. both arch have peft==0.17.1 ### Who can help? @benjaminbossan @githubnemo ### Reproduction Reproduction script : bug_reprod.py ```python from transformers import AutoModelForImageTextToText model = AutoModelForImageTextToText.from_pretrained("...
https://github.com/huggingface/peft/issues/2878
closed
[]
2025-10-29T03:43:02Z
2025-12-07T15:03:33Z
4
HuangChiEn
huggingface/peft
2,877
peft config 'all-linear' includes lm_head; is there any way to remove it?
I'm not sure whether this is a bug or whether my modification affects peft. > Some issues indicate that 'all-linear' should not include the lm_head ```python if 'internvl' in self.variant.lower(): if '3_5' in self.variant: self.model = AutoModelForImageTextToText.from_pretrained(self.variant, trust_remote_code=True) ...
https://github.com/huggingface/peft/issues/2877
closed
[]
2025-10-29T02:19:21Z
2025-10-29T03:43:20Z
1
HuangChiEn
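One way to sidestep the 'all-linear' heuristic entirely, if it picks up layers you don't want: list the target modules explicitly. A sketch with a placeholder model; the module names assume a Llama-style decoder.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")  # placeholder
config = LoraConfig(
    r=8,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # lm_head not listed
)
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()  # verify lm_head carries no adapter
```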
huggingface/lerobot
2,335
How to Visualize All Episodes of a LeRobot Dataset Locally?
Hi everyone, I have a question about LeRobot datasets. I'd like to inspect my data locally, but using the command _lerobot-dataset-viz --repo-id=${HF_USER}/record-test --episode-index=0_ only allows me to view one episode at a time, which is quite cumbersome. Is there a way to visualize all episodes of a dataset local...
https://github.com/huggingface/lerobot/issues/2335
open
[ "question", "dataset" ]
2025-10-29T02:01:01Z
2025-12-29T12:18:57Z
null
Vacuame
vllm-project/vllm
27,692
It runs on an RTX 5060 Ti 16 GB
### Your current environment https://github.com/bokkob556644-coder/suc-vllm-rtx-5060-ti-16-gb/blob/main/suc_vllm.txt ### How would you like to use vllm [I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm. ](https://github.com/bokkob556644-coder/suc-vllm-rtx-5060...
https://github.com/vllm-project/vllm/issues/27692
open
[ "usage" ]
2025-10-28T21:43:00Z
2025-10-28T21:43:16Z
1
bokkob556644-coder
huggingface/transformers
41,919
LFM2 image_processing_lfm2_vl_fast.py Mean Std swapped?
### System Info In LFM2-VL's image_processing_lfm2_vl_fast.py, at line 212 and following, the MEAN and STD from ImageNet are used for preprocessing. However, it seems like they are swapped: image_mean = IMAGENET_STANDARD_STD, image_std = IMAGENET_STANDARD_MEAN. Or is this correct? ### Who can help? @Cyrilvallez ### Inf...
https://github.com/huggingface/transformers/issues/41919
closed
[ "bug" ]
2025-10-28T16:17:44Z
2025-10-31T15:02:40Z
4
florianvoss-commit
vllm-project/vllm
27,667
[Usage]: DeepseekOCR on CPU missing implementation for fused_topk
### Your current environment Try to test if it is possible to run DeepseekOCR on CPU using current git main branch. Fails because there is no implementation of `fused_topk` for CPU. ``` INFO 10-28 15:41:18 [v1/worker/cpu_model_runner.py:77] Warming up model for the compilation... ERROR: Traceback (most recent cal...
https://github.com/vllm-project/vllm/issues/27667
open
[ "usage" ]
2025-10-28T16:14:40Z
2025-10-28T16:14:40Z
0
brainlag
vllm-project/vllm
27,661
[RFC]: Consolidated tool call parser implementations by type (JSON, Python, XML, Harmony)
### Motivation. When someone wants to add a new tool call parser today, they typically choose an existing tool call parser that looks close to what is needed, copy it into a new file, and adjust things here and there as needed for their specific model. Sometimes tests get added, and sometimes not. Sometimes the change...
https://github.com/vllm-project/vllm/issues/27661
open
[ "RFC" ]
2025-10-28T14:54:10Z
2025-10-30T16:14:09Z
2
bbrowning
huggingface/lerobot
2,329
Changing the smolvla base model (the VLM part) to another model
Can I change the smolvla base model (the VLM part) to another model? What should I do? Thanks
https://github.com/huggingface/lerobot/issues/2329
closed
[ "question", "policies" ]
2025-10-28T12:28:44Z
2025-10-31T15:09:12Z
null
smartparrot
vllm-project/vllm
27,649
[Usage]: Qwen3-32B on RTX PRO 6000 (55s First Token Delay and 15t/s)
Why does the Qwen3-32B model take 55 seconds before producing the first token, and why is the generation speed only 15t/s? My vLLM configuration: Device: GB202GL [RTX PRO 6000 Blackwell Server Edition] Nvidia Driver Version:580.95.05 CUDA Version:13.0 Docker configuration: ```sh PORT=8085 MODEL_PATH=Qwen/Qwen3-32...
https://github.com/vllm-project/vllm/issues/27649
open
[ "usage" ]
2025-10-28T10:49:43Z
2025-11-07T02:30:26Z
4
yizhitangtongxue
vllm-project/vllm
27,646
[Usage]: How to use vllm bench serve to bench remotely deployed vllm models (can't bench when EP is enabled)
### Your current environment I deployed dpskv3 in a remote server using: ``` export VLLM_USE_V1=1 export VLLM_ALL2ALL_BACKEND=deepep_low_latency vllm serve /models/hf/models--deepseek-ai--DeepSeek-V3 --tensor-parallel-size 1 --data-parallel-size 8 --enable-expert-parallel --no-enforce-eager --load-format dummy ``` An...
https://github.com/vllm-project/vllm/issues/27646
open
[ "usage" ]
2025-10-28T09:56:37Z
2025-10-28T15:23:06Z
3
Valerianding
huggingface/transformers
41,910
Breaking change about AWQ Fused modules due to Attention Refactor
### System Info transformers==5.0.0dev autoawq==0.2.9 autoawq_kernels==0.0.9 torch==2.6.0+cu124 ### Who can help? Due to PR #35235, the `past_key_values` is no longer a returned value of attention modules. However, when using AWQ models with Fused modules [AWQ Fused modules docs](https://huggingface.co/docs/transfo...
https://github.com/huggingface/transformers/issues/41910
closed
[ "bug" ]
2025-10-28T08:29:03Z
2025-11-20T13:41:34Z
3
fanqiNO1
vllm-project/vllm
27,636
[Usage]: How can vllm preserve the special tokens in qwen3-vl?
### Your current environment The grounding format of my fine-tuned qwen3-vl model is <|object_ref_start|>图片<|object_ref_end|><|box_start|>(x1,y1),(x2,y2)<|box_end|>. When running inference with vllm serve, the output format is 图片(460,66),(683,252). Does this simply drop the special tokens, and is there a way to preserve them? ### How would you like to use vllm I want to run inference of a [specific model](put link here). I don't ...
https://github.com/vllm-project/vllm/issues/27636
open
[ "usage" ]
2025-10-28T06:52:16Z
2025-10-28T06:52:16Z
0
qfs666
huggingface/diffusers
12,553
Reason to move from OpenCV to ffmpeg
I see that `diffusers.utils.export_to_video()` encourages ffmpeg usage instead of OpenCV. Can you share the reason? I'm looking for a way to add video decoding to my project so I'm collecting arguments.
https://github.com/huggingface/diffusers/issues/12553
open
[]
2025-10-28T06:49:48Z
2025-11-07T13:27:03Z
10
Wovchena
vllm-project/vllm
27,634
[Usage]: how to use --quantization option of `vllm serve`?
### Your current environment ```text ============================== System Info ============================== OS : Ubuntu 22.04.5 LTS (x86_64) GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0 Clang version : Could not collect CMake version ...
https://github.com/vllm-project/vllm/issues/27634
open
[ "usage" ]
2025-10-28T06:24:38Z
2025-10-28T15:57:47Z
3
Septemberlemon
huggingface/candle
3,151
Tensor conversion to_vec1() failing on 0.9.2-alpha.1 - Metal
Dependencies ```toml candle-core = { git = "https://github.com/huggingface/candle", rev = "df618f8", features = ["metal"] } candle-nn = { git = "https://github.com/huggingface/candle", rev = "df618f8", features = ["metal"] } candle-transformers = { git = "https://github.com/huggingface/candle", rev = "df618f8", featur...
https://github.com/huggingface/candle/issues/3151
closed
[]
2025-10-27T21:36:17Z
2025-11-06T22:44:14Z
2
si-harps
vllm-project/vllm
27,604
[Bug]: Is Flashinfer Attn backend supposed to work with FP8 KV cache on Hopper?
### Your current environment <details> <summary>The output of <code>python collect_env.py</code></summary> ```text Collecting environment information... ============================== System Info ============================== OS : Amazon Linux 2023.7.20250428 (x86_64) GCC version ...
https://github.com/vllm-project/vllm/issues/27604
open
[ "bug", "nvidia" ]
2025-10-27T20:22:37Z
2025-11-06T02:37:17Z
10
jmkuebler
huggingface/smolagents
1,834
Discussion: how to edit the messages sent to the underlying LLM
Hi! I'm working on a feature to allow a user to add callbacks to modify the content before it is sent to the LLM, inside the agent loop. I noticed this strange behavior where the first user message must start with "New Task:", otherwise I get this cryptic and misleading error message. ""Error:\nError while parsing ...
https://github.com/huggingface/smolagents/issues/1834
closed
[]
2025-10-27T17:28:38Z
2025-10-27T19:02:39Z
null
njbrake
huggingface/peft
2,873
Can I use Lora fine-tuning twice?
I'm planning to work with a two-stage LoRA fine-tuning pipeline (Stage 1: SFT with code completion outputs; Stage 2: SFT with full-code outputs; RL follows). My question is: when I continue training the same LoRA adapter in Stage 2, will I risk overwriting or degrading the knowledge learned during Stage 1? In other wo...
https://github.com/huggingface/peft/issues/2873
closed
[]
2025-10-27T12:51:45Z
2025-12-05T15:05:00Z
8
tohokulgq
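Mechanically, stage 2 is just reloading the stage-1 adapter in trainable mode rather than creating a fresh `LoraConfig`; whether stage-1 knowledge survives depends on data mix and learning rate, not on the API. A sketch with placeholder model and adapter paths:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")   # placeholder
model = PeftModel.from_pretrained(base, "path/to/stage1-adapter",  # placeholder
                                  is_trainable=True)
# ...run stage-2 SFT on `model`. The same LoRA weights keep updating, so
# stage-1 behaviour can drift unless stage-1 data is mixed in (or the adapter
# is merged first and a fresh adapter trained on top).
```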
vllm-project/vllm
27,572
[Bug]: chat/completions stream intermittently returns null as finish_reason
### Your current environment ``` My env: vllm 0.10.0 ``` ### 🐛 Describe the bug ``` + curl -kLsS https://127.0.0.1:7888/v1/chat/completions -H 'Content-Type: application/json' --data '{ "model": "ibm/granite-3-8b-instruct", "stream": true, "messages": [ { "role...
https://github.com/vllm-project/vllm/issues/27572
open
[ "bug" ]
2025-10-27T12:14:03Z
2025-11-24T20:27:24Z
13
shuynh2017
huggingface/chat-ui
1,957
Fail to use proxy
How can I make this web app go through a local proxy? I tried a few methods; none of them work.
https://github.com/huggingface/chat-ui/issues/1957
open
[ "support" ]
2025-10-27T06:31:51Z
2025-10-30T03:31:24Z
2
geek0011
huggingface/diffusers
12,547
Fine tuning Dreambooth Flux Kontext I2I Error: the following arguments are required: --instance_prompt
### Describe the bug Hello HF team, @sayakpaul @bghira I'm encountering a persistent issue when trying to fine-tune the black-forest-labs/FLUX.1-Kontext-dev model using the train_dreambooth_lora_flux_kontext.py script. I am following the [official README instructions](https://github.com/huggingface/diffusers/blob/ma...
https://github.com/huggingface/diffusers/issues/12547
closed
[ "bug" ]
2025-10-27T00:21:34Z
2025-10-28T02:31:42Z
7
MichaelMelgarejoFlorez
huggingface/transformers
41,876
LlamaAttention num_heads
### System Info In older versions of transformers, LlamaAttention initialized the attribute num_heads: class LlamaAttention(nn.Module): def __init__(self, config): self.num_heads = config.num_attention_heads self.head_dim = config.hidden_size // config.num_attention_heads However, in recent versions, th...
https://github.com/huggingface/transformers/issues/41876
closed
[ "bug" ]
2025-10-27T00:07:31Z
2025-10-31T00:13:31Z
2
shanhx2000
huggingface/transformers
41,874
Distributed training of SigLIP
https://github.com/huggingface/transformers/blob/v4.57.1/src/transformers/models/siglip/modeling_siglip.py#L983 defines how the SigLIP loss is computed. In SigLIP, different TPUs exchange data with each other. I want to know how to train a model in this way.
https://github.com/huggingface/transformers/issues/41874
closed
[]
2025-10-26T14:43:51Z
2025-12-04T08:02:55Z
1
zyk1559676097-dot
huggingface/transformers
41,861
transformers.Adafactor is almost 2x slower on Windows than on Linux (even WSL is slow). What can be the reason?
I am training the Qwen Image model with the Kohya Musubi tuner: https://github.com/kohya-ss/musubi-tuner The exact same setup on the same machine is almost 2x faster on Linux: 9.5 seconds/it vs 5.8 seconds/it. On Windows it can't utilize the GPU's power; it draws about 250 W out of 575 W. What can be the culprit? transformers==4...
https://github.com/huggingface/transformers/issues/41861
closed
[ "bug" ]
2025-10-25T15:49:47Z
2025-12-03T08:02:55Z
null
FurkanGozukara
huggingface/transformers
41,859
Human Verification not working?
### System Info Hello! I need your help because I can't verify my identity via email: I receive a link and open it, but get a blank page and nothing else. I've tried several times. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An o...
https://github.com/huggingface/transformers/issues/41859
closed
[ "bug" ]
2025-10-25T10:48:52Z
2025-10-26T12:29:10Z
4
thefued
huggingface/lerobot
2,311
Question: How can I train online only, without a dataset?
How can I train online only, without needing a dataset? Can I do it without a Hugging Face repo id, only locally? I tried the following without success: ``` cat > "train_cfg.json" <<'JSON' { "job_name": "hilserl_fetch_pick_v4_cpu", "seed": 0, "env": { ...
https://github.com/huggingface/lerobot/issues/2311
open
[ "question", "dataset" ]
2025-10-25T05:07:48Z
2025-10-27T08:50:11Z
null
talregev
vllm-project/vllm
27,505
[Bug]: Value error, Found conflicts between 'rope_type=default' (modern field) and 'type=mrope'
### Your current environment <details> <summary>The output of <code>python collect_env.py</code></summary> ```text Your output of `python collect_env.py` here ``` </details> ### 🐛 Describe the bug vllm 0.11.0 transformers 5.0.0.dev0 torch ...
https://github.com/vllm-project/vllm/issues/27505
open
[ "bug" ]
2025-10-25T04:39:53Z
2025-10-26T07:33:27Z
1
asirgogogo
vllm-project/vllm
27,504
[Usage]: `add_vision_id` ignored for Qwen 2.5-VL-32B-Instruct
### Your current environment ```text ============================== System Info ============================== OS : Ubuntu 24.04.3 LTS (x86_64) GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 Clang version : Could not collect CMake version ...
https://github.com/vllm-project/vllm/issues/27504
open
[ "usage" ]
2025-10-25T03:42:44Z
2025-10-26T07:32:49Z
1
justachetan
huggingface/lighteval
1,028
How to evaluate MMLU-Pro
Hi, Thank you for the wonderful work! I just want to ask how to perform the evaluation on MMLU-Pro, as I don't see any related code besides the README.
https://github.com/huggingface/lighteval/issues/1028
open
[]
2025-10-24T20:03:10Z
2025-11-04T10:40:46Z
null
qhz991029
huggingface/tokenizers
1,879
rust tokenizer
Hello. Is there a Rust tokenizer, please? ChatGPT told me there used to be one. Best regards!
https://github.com/huggingface/tokenizers/issues/1879
open
[]
2025-10-24T17:03:04Z
2025-10-24T22:03:31Z
2
gogo2464
vllm-project/vllm
27,482
[Bug]: `return_token_ids` missing tokens when using tool calls
### Your current environment Testing with latest vLLM builds from main, as of Fri Oct 24th 2025 (when this bug was opened). ### 🐛 Describe the bug The `return_token_ids` parameter that is supposed to return all generated token ids back to the client is missing quite a few tokens for Chat Completion streaming reque...
https://github.com/vllm-project/vllm/issues/27482
closed
[ "bug" ]
2025-10-24T16:10:31Z
2025-12-04T19:09:41Z
2
bbrowning
vllm-project/vllm
27,479
[Bug]: Low GPU utilization with Embedding Model
### Your current environment <details> <summary>The output of <code>python collect_env.py</code></summary> ```text Your output of `python collect_env.py` here ``` </details> ### 🐛 Describe the bug Initializing LLM(model="Qwen/Qwen3-Embedding-0.6B", task="embed") on a single B200 (180 GB) immediately reserves ~80...
https://github.com/vllm-project/vllm/issues/27479
open
[ "bug" ]
2025-10-24T15:18:05Z
2025-10-24T15:25:38Z
1
JhaceLam
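Context for the report above: vLLM reserves a large fraction of GPU memory up front by design, controlled by `gpu_memory_utilization` (default 0.9), so the immediate ~80 GB reservation is likely that pool rather than the model itself. A sketch of capping it, reusing the call from the report:

```python
from vllm import LLM

# same call as in the report, but capping the pre-allocated memory pool
llm = LLM(
    model="Qwen/Qwen3-Embedding-0.6B",
    task="embed",
    gpu_memory_utilization=0.30,  # default is 0.90
)
```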
vllm-project/vllm
27,477
[Bug]: First prompt token missing when requested with "echo"
### Your current environment vllm installed from main: `vllm 0.11.1rc3.dev23+g61089465a.precompiled` ### 🐛 Describe the bug Is it expected behavior that echo isn't returning the first token of the prompt? I am trying to collect exact prompt_token_ids which went into the model served wi...
https://github.com/vllm-project/vllm/issues/27477
closed
[ "bug" ]
2025-10-24T14:43:50Z
2025-10-24T15:04:01Z
2
eldarkurtic
huggingface/text-generation-inference
3,336
Get inference endpoint model settings via client
### Feature request Enable commands via clients such as `OpenAI` that would get model settings from an inference endpoint. Does this exist and I just can't find it? ### Motivation There is currently no clear way to get inference model settings directly from an endpoint. Individual base models have their original s...
https://github.com/huggingface/text-generation-inference/issues/3336
closed
[]
2025-10-24T13:07:15Z
2025-10-30T14:10:46Z
1
lingdoc
huggingface/datasets
7,829
Memory leak / Large memory usage with num_workers = 0 and numerous dataset within DatasetDict
### Describe the bug Hi team, first off, I love the datasets library! 🥰 I'm encountering a potential memory leak / increasing memory usage when training a model on a very large DatasetDict. Setup: I have a DatasetDict containing 362 distinct datasets, which sum up to ~2.8 billion rows. Training Task: I'm performin...
https://github.com/huggingface/datasets/issues/7829
open
[]
2025-10-24T09:51:38Z
2025-11-06T13:31:26Z
4
raphaelsty
huggingface/transformers
41,842
Incorrect usage of `num_items_in_batch`?
It seems that `num_items_in_batch` is computed for all items in the batch [here](https://github.com/huggingface/transformers/blob/9c20660138830ca362533551ca978c27b48283a1/src/transformers/trainer.py#L2430). However, when loss is computed in the `training_step`, it is computed for each input in the batch one by one. Do...
https://github.com/huggingface/transformers/issues/41842
closed
[]
2025-10-24T07:36:00Z
2025-12-01T08:02:48Z
2
gohar94
vllm-project/vllm
27,463
[Usage]: How to request DeepSeek-OCR with http request
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm I want to call DeepSeek-OCR over HTTP; is there an example for it? ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bott...
https://github.com/vllm-project/vllm/issues/27463
closed
[ "usage" ]
2025-10-24T07:07:29Z
2025-10-29T17:26:49Z
8
YosanHo
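Since vLLM exposes an OpenAI-compatible server, a plain HTTP POST is enough; a hedged sketch, assuming the server runs on localhost:8000 and serves the model under the name below (the image URL and prompt are placeholders):

```python
import requests

payload = {
    "model": "deepseek-ai/DeepSeek-OCR",  # assumes this is the served model name
    "messages": [{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/page.png"}},
            {"type": "text", "text": "Run OCR on this image."},
        ],
    }],
}
resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload)
print(resp.json()["choices"][0]["message"]["content"])
```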
huggingface/lerobot
2,306
how to use groot without flash attention
My system is Ubuntu 20.04 with glibc 2.3.1, which is not supported by flash attention. Can I modify the config of groot to use it with normal attention instead?
https://github.com/huggingface/lerobot/issues/2306
open
[ "question", "policies", "dependencies" ]
2025-10-24T06:35:18Z
2025-11-04T01:28:38Z
null
shs822
huggingface/lerobot
2,305
Dependency error with the `Transformers` library
### System Info ```Shell - lerobot version: 0.4.0 - Platform: Linux-6.14.0-29-generic-x86_64-with-glibc2.39 - Python version: 3.12.12 - Huggingface Hub version: 0.35.3 - Datasets version: 4.1.1 - Numpy version: 2.2.6 - PyTorch version: 2.7.0+cu128 - Is PyTorch built with CUDA support?: True - Cuda version: 12.8 - GPU ...
https://github.com/huggingface/lerobot/issues/2305
open
[ "question", "policies", "dependencies" ]
2025-10-24T05:59:32Z
2025-11-14T16:01:49Z
null
sunshineharry
vllm-project/vllm
27,454
[Usage]: How to set the expert id on each EP rank myself after enabling EP in Deepseek (how to reorder experts?)
### Your current environment ```text vllm 0.8.5 ``` ### How would you like to use vllm I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot liv...
https://github.com/vllm-project/vllm/issues/27454
open
[ "usage" ]
2025-10-24T03:15:16Z
2025-10-24T07:27:50Z
2
HameWu
vllm-project/vllm
27,448
[Usage]: how to pass multi-turn multimodal messages to vLLM?
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues...
https://github.com/vllm-project/vllm/issues/27448
open
[ "usage" ]
2025-10-24T02:41:45Z
2025-10-24T03:33:13Z
1
cqray1990
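Against vLLM's OpenAI-compatible server, multi-turn multimodal chat is just the standard messages format with image parts in the user turns. A sketch; the server URL, model name, and image are placeholders:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
messages = [
    {"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},
        {"type": "text", "text": "What is in this image?"},
    ]},
    {"role": "assistant", "content": "A cat sitting on a windowsill."},
    {"role": "user", "content": "What colour is the cat?"},
]
out = client.chat.completions.create(
    model="Qwen/Qwen2-VL-7B-Instruct",  # placeholder served model
    messages=messages,
)
print(out.choices[0].message.content)
```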
huggingface/lerobot
2,304
How to load local model?
For example, I'm trying to fine-tune pi0, so I downloaded pi0_base locally and saved it in [position A, like lerobot/models/pi0_base], which has 5 files in total, including model.safetensors. How do I load it in code? I used to just set model.path=[position A], but following the tutorial, it uses pretrained_path_or_name as ...
https://github.com/huggingface/lerobot/issues/2304
closed
[]
2025-10-24T01:59:26Z
2025-10-24T02:33:25Z
null
milong26
vllm-project/vllm
27,441
[Bug]: vllm/v1/core/sched/scheduler.py: Unintended reordering of requests during scheduling
### Your current environment <details> This error is independent of the environment. </details> ### 🐛 Describe the bug ### Description The function `schedule()` in [vllm/v1/core/sched/scheduler.py](https://github.com/vllm-project/vllm/blob/main/vllm/v1/core/sched/scheduler.py) is responsible for scheduling inferen...
https://github.com/vllm-project/vllm/issues/27441
open
[ "bug" ]
2025-10-23T22:35:50Z
2025-11-22T04:20:35Z
1
dongha-yoon
huggingface/lerobot
2,303
Question: Does the follower arm have an api for scripting movement?
Hi, apologies if this has been answered before or if it's not the right place to ask. I've been using the SO-101 arms for imitation learning, but recently I've wanted to try and test out the follower arm for embodied reasoning models such as Gemini ER 1.5. To do this, I figure I would need to have some way to map outpu...
https://github.com/huggingface/lerobot/issues/2303
open
[ "question", "robots", "python" ]
2025-10-23T20:40:56Z
2025-10-23T22:29:28Z
null
Buttmunky1
huggingface/lerobot
2,294
Question about the HuggingFaceVLA/smolvla_libero Model Configuration
Hello, Lerobot has officially ported [LIBERO](https://github.com/huggingface/lerobot/issues/1369#issuecomment-3323183721), and we can use the checkpoint at [HuggingFaceVLA/smolvla_libero](https://huggingface.co/HuggingFaceVLA/smolvla_libero) to evaluate the LIBERO benchmark. However, the model configuration of [Huggi...
https://github.com/huggingface/lerobot/issues/2294
open
[ "question", "policies" ]
2025-10-23T13:37:48Z
2025-10-30T07:49:17Z
null
Hesh0629
vllm-project/vllm
27,413
[Usage]: how to query a qwen2.5-VL-7B classification model served by vllm using the openai SDK?
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm I launch a server with the following command to serve a Qwen2.5-VL-7B model fine-tuned for sequence classification (this model replaced the lm_head with a 2-class score_head). The launch command is: ...
https://github.com/vllm-project/vllm/issues/27413
open
[ "good first issue", "usage" ]
2025-10-23T12:32:25Z
2025-10-25T00:18:54Z
12
muziyongshixin
huggingface/transformers.js
1,447
How to use half precision ONNX models?
### Question Hi, I just exported a detection model with fp16 using optimum. `--dtype fp16 ` This is my pipeline: ```javascript const model = await AutoModel.from_pretrained( "./onnx_llama", { dtype: "fp16", device: "cpu" } const processor = await AutoProcessor.from_pretrained("./onnx_llama"); const { pixel_val...
https://github.com/huggingface/transformers.js/issues/1447
open
[ "question" ]
2025-10-23T09:18:26Z
2025-10-23T09:18:26Z
null
richarddd
huggingface/transformers
41,810
How do you use t5gemma decoder with a different encoder?
I am trying to combine the t5gemma decoder with a pretrained deberta encoder that I have trained from scratch using `EncoderDecoderModel`. Here is the code: ``` model_1 = "WikiQuality/pre_filtered.am" model_2 = "google/t5gemma-2b-2b-ul2" encoder = AutoModel.from_pretrained(model_1) decoder = AutoModel.from_pretrain...
https://github.com/huggingface/transformers/issues/41810
closed
[]
2025-10-23T08:48:19Z
2025-12-01T08:02:53Z
1
kushaltatariya
huggingface/accelerate
3,818
Duplicate W&B initialization in offline mode
### System Info ```Shell - `Accelerate` version: 1.10.1 ``` ### Information - [x] The official example scripts - [x] My own modified scripts ### Tasks - [x] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (s...
https://github.com/huggingface/accelerate/issues/3818
closed
[ "good first issue" ]
2025-10-23T02:19:38Z
2025-12-16T13:10:48Z
3
ShuyUSTC
vllm-project/vllm
27,347
[Usage]: vllm: error: unrecognized arguments: --all2all-backend deepep_low_latency
### Your current environment ```text Collecting environment information... ============================== System Info ============================== OS : Ubuntu 24.04.2 LTS (x86_64) GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 Clang version : Cou...
https://github.com/vllm-project/vllm/issues/27347
closed
[ "usage" ]
2025-10-22T14:36:18Z
2025-10-22T15:07:13Z
1
Valerianding
vllm-project/vllm
27,343
[Usage]: Can't get result from /pooling api when using Qwen2.5-Math-PRM-7B online
### Your current environment ``` The output of `python collect_env.py` Collecting environment information... [140/1781] ============================== ...
https://github.com/vllm-project/vllm/issues/27343
closed
[ "usage" ]
2025-10-22T13:36:51Z
2025-10-23T03:39:13Z
3
zgc6668
huggingface/transformers.js
1,446
Zhare-AI/sd-1-5-webgpu on HuggingFace.co lists itself as Transformer.js supported?
### Question [Zhare-AI/sd-1-5-webgpu](https://huggingface.co/Zhare-AI/sd-1-5-webgpu) is a `text-to-image` model and is marked as Transformers.js compatible, and even shows demo code using Transformers.js on its `huggingface.co` page. Their example code fails with an error saying `text-to-image` is not supported in Tra...
https://github.com/huggingface/transformers.js/issues/1446
closed
[ "question" ]
2025-10-22T12:20:16Z
2025-10-24T14:33:17Z
null
LostBeard
vllm-project/vllm
27,336
[Feature]: Make prompt_token_ids optional in streaming response (disable by default)
### 🚀 The feature, motivation and pitch Starting with v0.10.2, the first server-sent event (SSE) in streaming responses now includes the full list of `prompt_token_ids`. While this can be useful for debugging or detailed inspection, it introduces several practical issues in production environments: 1. Large payloa...
https://github.com/vllm-project/vllm/issues/27336
closed
[ "feature request" ]
2025-10-22T11:42:41Z
2025-10-27T11:06:45Z
1
Gruner-atero
huggingface/transformers
41,775
Hugging Face website and models not reachable
### System Info ``` $ pip show transformers Name: transformers Version: 4.57.1 Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow Home-page: https://github.com/huggingface/transformers Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/hugg...
https://github.com/huggingface/transformers/issues/41775
closed
[ "bug" ]
2025-10-22T07:40:32Z
2025-11-21T08:10:00Z
8
christian-rauch
vllm-project/vllm
27,319
[Usage]: Quantized FusedMoE crashed in graph compiled stage
### Your current environment ```text ============================== System Info ============================== OS : Ubuntu 24.04.2 LTS (x86_64) GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 Clang version : 19.0.0git (https://github.com/RadeonOpenC...
https://github.com/vllm-project/vllm/issues/27319
closed
[ "rocm", "usage" ]
2025-10-22T06:29:32Z
2025-10-24T02:19:55Z
1
Rus-P
vllm-project/vllm
27,298
[Doc]: Update metrics documentation to remove V0 references and add v1 changes.
## Problem The metrics documentation in `docs/design/metrics.md` still contains references to V0 metrics implementation, but V0 metrics have been removed after @njhill 's PR https://github.com/vllm-project/vllm/pull/27215 was merged. To avoid confusion, I think we should remove this and update it with the new set of v...
https://github.com/vllm-project/vllm/issues/27298
closed
[ "documentation" ]
2025-10-21T22:08:48Z
2025-10-22T13:29:17Z
1
atalhens
vllm-project/vllm
27,268
[Usage]: failed to infer device type on GCP COS despite nvidia container toolkit installed
### Your current environment I failed to run this script on GCP COS. ### How would you like to use vllm I was trying to use VLLM on a Google Cloud (GCP) Container-Optimized OS (COS) instance via Docker. I followed GCP's [documentation](https://cloud.google.com/container-optimized-os/docs/how-to/run-gpus) to insta...
https://github.com/vllm-project/vllm/issues/27268
open
[ "usage" ]
2025-10-21T15:24:21Z
2025-10-21T15:24:21Z
0
forrestbao
vllm-project/vllm
27,265
[Usage]: Cannot register custom model (Out-of-Tree Model Integration)
``` ### Your current environment ============================== Versions of relevant libraries ============================== [pip3] flake8==7.1.1 [pip3] flashinfer==0.1.6+cu124torch2.4 [pip3] flashinfer-python==0.2.5 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.4 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-...
https://github.com/vllm-project/vllm/issues/27265
closed
[ "usage" ]
2025-10-21T14:17:17Z
2025-10-25T13:19:40Z
1
Hyperwjf
vllm-project/vllm
27,263
[Responses API] Support tool calling and output token streaming
Splitting off from #14721 > FYI a start has been made here https://github.com/vllm-project/vllm/pull/20504 > > That PR (which was merged to `main` on [7/9/2025](https://github.com/vllm-project/vllm/pull/20504#event-18495144925)) explicitly has unchecked boxes for > > * [ ] Tool/functional calling support > * [ ] ...
https://github.com/vllm-project/vllm/issues/27263
open
[]
2025-10-21T12:36:44Z
2025-12-07T01:06:46Z
4
markmc
vllm-project/vllm
27,252
[Usage]: Does the `@app.post("/generate")` API support qwen2_vl or not?
### Your current environment I want to know whether the `@app.post("/generate")` API supports qwen2_vl or not. ### How would you like to use vllm I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm. ### Before submitting a new issue... - [x] Make sure you already searched...
https://github.com/vllm-project/vllm/issues/27252
open
[ "usage" ]
2025-10-21T07:30:11Z
2025-10-21T07:30:11Z
0
wwkww
huggingface/lerobot
2,269
how to configure pi0_base to train with single camera dataset
Hi, I'm trying to train pi0_base with "lerobot/aloha_sim_transfer_cube_human" dataset which has only one camera input "observation.images.top". However, pi0 seems to expect three camera inputs: "observation.images.base_0_rgb", "observation.images.left_wrist_0_rgb", "observation.images.right_wrist_0_rgb" "ValueError:...
https://github.com/huggingface/lerobot/issues/2269
open
[ "question", "policies", "dataset" ]
2025-10-21T01:32:50Z
2025-10-21T17:36:17Z
null
dalishi
vllm-project/vllm
27,233
gguf run good
### Your current environment from vllm import LLM, SamplingParams gguf_path = "/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf" llm = LLM( gguf_path, tokenizer="Qwen/Qwen3-1.7B" ) params = SamplingParams( temperature=0.8, top_p=0.9, top_k=40, m...
https://github.com/vllm-project/vllm/issues/27233
open
[ "usage" ]
2025-10-21T00:11:26Z
2025-10-22T00:44:10Z
12
kmnnmk212-source
vllm-project/vllm
27,228
[Installation]: Compatibility with PyTorch 2.9.0?
### Your current environment ```text The output of `python collect_env.py` ``` ### How you are installing vllm Is there a version of vllm that is compatible with the latest PyTorch release 2.9.0? ``` pip install vllm==0.11.0 pip install torch==2.9.0 ``` ``` $ vllm bench latency --input-len 256 --output-len 256 --...
https://github.com/vllm-project/vllm/issues/27228
closed
[ "installation" ]
2025-10-20T21:10:24Z
2025-10-21T22:40:15Z
3
andrewor14
vllm-project/vllm
27,208
[Feature]: Upgrade CUDA version to 12.9.1 in docker images
### 🚀 The feature, motivation and pitch The current builds display warning logs like these ``` Warning: please use at least NVCC 12.9 for the best DeepGEMM performance ``` Can we bump this version easily? ### Alternatives _No response_ ### Additional context _No response_ ### Before submitting a new issue... -...
https://github.com/vllm-project/vllm/issues/27208
closed
[ "feature request" ]
2025-10-20T16:08:49Z
2025-10-21T21:20:19Z
1
jhuntbach-bc
huggingface/lerobot
2,259
Clarifications on fine-tuning on different envs and embodiments
Hi everyone, I’m currently working on fine-tuning SmolVLA and π₀ using **[RLBench](https://github.com/stepjam/RLBench)**. The robot setup is a Franka Emika Panda (7DoF + gripper), and I’ve already collected custom LeRobot datasets for a pick-and-place task ([available on my Hugging Face](https://huggingface.co/RonPlusS...
https://github.com/huggingface/lerobot/issues/2259
open
[ "question", "policies", "simulation" ]
2025-10-20T13:24:22Z
2025-12-23T10:37:31Z
null
RonPlusSign
vllm-project/vllm
27,184
[Doc]: Multi-Modal Benchmark is too simple
### 📚 The doc issue The latest doc about the Multi-Modal Benchmark shows: 1. download sharegpt4v_instruct_gpt4-vision_cap100k.json and COCO's 2017 Train images; 2. vllm serve and vllm bench serve. But there are many more details to take care of: 1. delete all entries that are not COCO's in sharegpt4v_instruct_gpt4-vision_cap100k...
https://github.com/vllm-project/vllm/issues/27184
open
[ "documentation" ]
2025-10-20T06:24:18Z
2025-10-20T16:44:17Z
2
BigFaceBoy
vllm-project/vllm
27,182
[Feature]: INT8 Support in Blackwell Arch
### 🚀 The feature, motivation and pitch Hello, I want to use W8A8 (INT8) on Blackwell GPUs. When I read the source code, it says INT8 is not supported on sm120. According to the NVIDIA PTX instructions, Blackwell-series GPUs still have INT8 tensor cores. Is there another way to use W8A8 INT8 on an RTX 5090 with vllm now ...
https://github.com/vllm-project/vllm/issues/27182
open
[ "feature request" ]
2025-10-20T06:04:03Z
2025-10-20T06:04:03Z
0
nhanngoc94245
huggingface/optimum
2,376
Support qwen2_5_vl for ONNX export
### Feature request I would like to be able to convert [this model](https://huggingface.co/prithivMLmods/DeepCaption-VLA-V2.0-7B) which is based on Qwen 2.5 VL architecture using optimum. Right now, I get the error: ``` ValueError: Trying to export a qwen2_5_vl model, that is a custom or unsupported architecture, but...
https://github.com/huggingface/optimum/issues/2376
open
[]
2025-10-19T22:08:28Z
2026-01-06T08:03:39Z
8
ayan4m1
huggingface/transformers
41,731
transformers CLI documentation issue
### System Info - `transformers` version: 5.0.0.dev0 - Platform: Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.39 - Python version: 3.12.9 - Huggingface_hub version: 1.0.0.rc6 - Safetensors version: 0.6.2 - Accelerate version: 1.10.1 - Accelerate config: not found - DeepSpeed version: not installed - Py...
https://github.com/huggingface/transformers/issues/41731
closed
[ "bug" ]
2025-10-19T09:31:46Z
2025-12-22T08:03:09Z
14
ArjunPimpale
huggingface/chat-ui
1,947
HuggingChat MoM (Mixture-of-Models) Integration Proposal 🤗
# **HuggingChat MoM (Mixture-of-Models) Integration Proposal 🤗** **Status:** Proposal **Date:** 2025-10-19 **Version:** 1.0 **Authors**: vLLM-SR Team --- ## Executive Summary This proposal outlines the integration of **vLLM Semantic Router** into HuggingChat as a new **MoM (Mixture-of-Models)** routing option....
https://github.com/huggingface/chat-ui/issues/1947
open
[ "enhancement" ]
2025-10-19T08:17:14Z
2025-10-20T11:12:30Z
3
Xunzhuo
huggingface/tokenizers
1,877
encode bytes directly
Is there a way to directly encode bytes with a BPE-based HF tokenizer, without having to decode the bytes to a string first?
https://github.com/huggingface/tokenizers/issues/1877
open
[]
2025-10-19T03:30:39Z
2025-11-28T07:43:18Z
2
tsengalb99
vllm-project/vllm
27,154
[Installation]: How to reduce the vllm image
### Your current environment Hi, I looked at docker pull vllm/vllm-openai:latest; the image is around 12 GB. I'm exploring ways to reduce the vLLM image size, specifically for NVIDIA L40s (I use linux amd64). Any ideas? Does building vllm from source help reduce the image? Here's what I've tried so far (but not s...
https://github.com/vllm-project/vllm/issues/27154
open
[ "installation" ]
2025-10-18T17:52:07Z
2025-10-20T17:45:39Z
4
geraldstanje
vllm-project/vllm
27,153
[Feature]: Allow vllm bench serve in non-streaming mode with /completions API
### 🚀 The feature, motivation and pitch vLLM's bench serve currently supports recording benchmark results only in streaming mode, recording metrics like TTFT, TPOT, ITL, etc. For my use case, benchmarking [llm-d](https://github.com/llm-d/llm-d) which uses vLLM, I would like to enable vllm bench serve in non-stream...
https://github.com/vllm-project/vllm/issues/27153
open
[ "feature request" ]
2025-10-18T17:47:44Z
2025-10-18T20:50:49Z
0
susiejojo
huggingface/candle
3,137
Strategic Discussion: Flicker's Hybrid Architecture for Lightweight Inference + Advanced Training
# Strategic Discussion: Flicker's Hybrid Architecture Evolution ## Overview This issue proposes a comprehensive strategic discussion about flicker's positioning and architecture evolution. The detailed proposal is documented in `STRATEGIC_DISCUSSION_PROPOSAL.md`. ## Context During analysis of flicker's capabilities v...
https://github.com/huggingface/candle/issues/3137
closed
[]
2025-10-18T17:27:24Z
2025-10-21T16:18:51Z
1
jagan-nuvai
huggingface/lerobot
2,245
release 0.4.0 and torch 2.8.0
Hello Lerobot Team! :) Quick question, do you have a time estimate for: - lerobot release 0.4.0 (ie next stable release using the new v30 data format) - bumping torch to 2.8 Thanks a lot in advance!
https://github.com/huggingface/lerobot/issues/2245
closed
[ "question", "dependencies" ]
2025-10-18T16:57:07Z
2025-10-19T18:34:47Z
null
antoinedandi
huggingface/lerobot
2,242
Is it no longer possible to fine-tune the previously used π0 model?
I previously trained a model using the following command for fine-tuning: `lerobot-train --dataset.repo_id=parkgyuhyeon/slice-clay --policy.path=lerobot/pi0 --output_dir=outputs/train/pi0_slice-clay --job_name=pi0_slice-clay --policy.device=cuda --wandb.enable=false --wandb.project=lerobot --log_freq=10 --steps=50000 ...
https://github.com/huggingface/lerobot/issues/2242
closed
[ "question", "policies" ]
2025-10-18T08:42:35Z
2025-10-20T00:18:03Z
null
pparkgyuhyeon
huggingface/lerobot
2,239
Models trained using openpi pi0.5 on Lerobot's pi0.5
Hi, can I check if models trained using the [pytorch port of openpi's pi0.5](https://github.com/Physical-Intelligence/openpi?tab=readme-ov-file#pytorch-support) are compatible with lerobot's definition of pi0.5? Thanks!
https://github.com/huggingface/lerobot/issues/2239
open
[ "question", "policies" ]
2025-10-18T02:01:45Z
2025-10-18T10:54:06Z
null
brycegoh
huggingface/lerobot
2,228
Trossen WidowX AI model, depth cameras and tests
Hi, would you be open to receiving pull requests to support more recent Trossen Robotics setups as well as depth cameras? I think for the robot part the pattern is quite well established. For depth cameras we solved it by tweaking the dataset utils a bit. Our implementation is fairly tested.
https://github.com/huggingface/lerobot/issues/2228
closed
[ "question", "robots" ]
2025-10-17T09:32:22Z
2025-10-31T19:15:25Z
null
lromor
vllm-project/vllm
27,090
[Usage]: Does vLLM support a data-parallel group spanning multiple nodes when starting an online service?
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm Does vLLM support a data-parallel group spanning multiple nodes when starting an online service? ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and ask...
https://github.com/vllm-project/vllm/issues/27090
open
[ "usage" ]
2025-10-17T09:15:04Z
2025-10-20T02:37:19Z
2
KrisLu999
vllm-project/vllm
27,086
[Bug]: After enabling P-D Disaggregation, the final output results are not entirely identical.
### Your current environment vllm VERSION: 0.10.1 ### 🐛 Describe the bug When I fixed the random seed and ensured all environment variables were consistent, I noticed that launching PD separation with the same configuration produced inconsistent final outputs. This phenomenon may require multiple attempts to fully ...
https://github.com/vllm-project/vllm/issues/27086
open
[ "bug" ]
2025-10-17T07:56:41Z
2025-10-20T09:16:21Z
4
freedom-cui
huggingface/lerobot
2,227
How to easily run inference with a trained model
Hello, and thank you for sharing such an inspiring project! I’m currently working with a 7-DoF robotic arm (6 joint axes + 1 gripper) and generating datasets through video recordings for training on smolVLA. Since there’s still some ongoing engineering work related to dataset generation, I’d like to start by understan...
https://github.com/huggingface/lerobot/issues/2227
open
[ "question" ]
2025-10-17T05:41:15Z
2025-12-16T02:57:00Z
null
Biz-Joe