| repo stringclasses 147 | number int64 1–172k | title stringlengths 2–476 | body stringlengths 0–5k | url stringlengths 39–70 | state stringclasses 2 | labels listlengths 0–9 | created_at timestamp[ns, tz=UTC] 2017-01-18 18:50:08 – 2026-01-06 07:33:18 | updated_at timestamp[ns, tz=UTC] 2017-01-18 19:20:07 – 2026-01-06 08:03:39 | comments int64 0–58 ⌀ | user stringlengths 2–28 |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/trl | 4,376 | Rewrite `peft_integration.md` | This section of the documentation is largely outdated and relies only on PPO.
Ideally, we should have clear documentation that shows how to use PEFT with at least SFT, DPO and GRPO, via the `peft_config` argument. We could have additional subsections about QLoRA and prompt-tuning. | https://github.com/huggingface/trl/issues/4376 | closed | [] | 2025-10-30T03:23:24Z | 2025-11-24T10:39:27Z | 0 | qgallouedec |
vllm-project/vllm | 27,778 | [Usage]: Is DP + PP a possible way to use vLLM? | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
Hi there, I wonder if we can adopt DP + PP in vLLM to form a heterogeneous inference pipeline. For example, if I have two V100 32G GPUs and one A100 80G GPU, can I utilize them in pipeline parallelism w... | https://github.com/vllm-project/vllm/issues/27778 | open | [
"usage"
] | 2025-10-30T02:05:06Z | 2025-10-30T02:05:06Z | 0 | oldcpple |
pytorch/pytorch | 166,580 | torch/utils/cpp_extension.py:531] There are no /usr/bin/g++-14 version bounds defined for CUDA version 13.0 | ### 🐛 Describe the bug
Hi,
I'm getting this error message with whatever torch version I try >= 2.7:
```
W1029 20:55:47.576000 79341 torch/utils/cpp_extension.py:531] There are no /usr/bin/g++-14 version bounds defined for CUDA version 13.0
building 'flash_attn_3._C' extension
```
What does that mean exactly?
I n... | https://github.com/pytorch/pytorch/issues/166580 | closed | [
"module: cpp-extensions",
"triaged",
"actionable"
] | 2025-10-29T22:24:15Z | 2025-12-29T10:58:14Z | 1 | christopher5106 |
pytorch/pytorch | 166,563 | [RFC] Modifying Getting started page for Experimental Wheel Variant Support | ### Release highlight for proposed Feature
Related to Wheel Next Initiative: https://github.com/pytorch/pytorch/issues/159714
This proposal is for changes to the PyTorch "Getting Started" page to better promote variant enabled wheels and increase their visibility. This is a strategic move to ensure users are more a... | https://github.com/pytorch/pytorch/issues/166563 | open | [
"module: docs",
"triaged",
"release-feature-request"
] | 2025-10-29T20:11:37Z | 2025-10-31T15:22:23Z | 3 | atalman |
pytorch/pytorch | 166,555 | [dynamo, docs] Suggest torch.compiler.set_stance("force_eager") to determine if eager code causes issues | We should include in the programming model docs for users to try running their code on eager to see if eager-errors are causing graph breaks.
`torch.compiler.set_stance("force_eager")` is the preferred way to do this since users don't have to change their `torch.compile` decorators or `module.compile` calls.
See http... | https://github.com/pytorch/pytorch/issues/166555 | open | [
"module: docs",
"triaged",
"oncall: pt2",
"module: dynamo",
"compile-docs",
"module: compile ux"
] | 2025-10-29T19:15:49Z | 2025-12-03T00:48:27Z | 0 | williamwen42 |
pytorch/vision | 9,253 | Patch versions of the wheels available in the CPU-only PyPI registry | In the CPU-only PyPI registry, https://download.pytorch.org/whl/torchvision/, I can see some dev/patch versions of the wheels:
```
torchvision-0.24.0+0429d73-cp311-cp311-win_arm64.whl
torchvision-0.24.0+0429d73-cp312-cp312-win_arm64.whl
torchvision-0.24.0+0429d73-cp313-cp313-win_arm64.whl
torchvision-0.24.0+7a9db90-cp... | https://github.com/pytorch/vision/issues/9253 | open | [] | 2025-10-29T16:42:44Z | 2026-01-04T11:06:45Z | 3 | aandrestrumid |
vllm-project/vllm | 27,746 | [Bug]: `strict` value in function definitions causes request error when using Mistral tokenizer | ### Your current environment
Tested with latest vllm source build from main
### 🐛 Describe the bug
Start vLLM with a model that uses the mistral tokenizer:
```
vllm serve mistralai/Mistral-Small-24B-Instruct-2501 \
--enable-auto-tool-choice \
--tool-call-parser mistral \
--tokenizer-mode mistral
```
Send a ... | https://github.com/vllm-project/vllm/issues/27746 | open | [
"bug"
] | 2025-10-29T14:33:13Z | 2025-10-30T19:14:50Z | 4 | bbrowning |
huggingface/trl | 4,368 | GKD: multimodal inputs? | Does the Generalized Knowledge Distillation trainer (GKDTrainer) support multimodal inputs (VLMs)?
If yes, what's the expected dataset format? There is no example of this in the documentation.
Thanks! | https://github.com/huggingface/trl/issues/4368 | closed | [
"📚 documentation",
"❓ question",
"🏋 GKD"
] | 2025-10-29T14:08:44Z | 2025-11-07T19:26:23Z | 2 | e-zorzi |
pytorch/pytorch | 166,519 | Long queue for ROCM runners, also B200 and XPU queueing is observed | ## Current Status
mitigated
## Error looks like
Jobs requiring following runners will be queueing:
<img width="731" height="424" alt="Image" src="https://github.com/user-attachments/assets/c83de025-fb94-4b45-a125-c65c3baa1cb7" />
Please see:
https://hud.pytorch.org/metrics
## Incident timeline (all times pacific)
S... | https://github.com/pytorch/pytorch/issues/166519 | closed | [
"module: rocm",
"module: ci",
"triaged"
] | 2025-10-29T12:20:19Z | 2025-11-03T17:55:53Z | 4 | atalman |
pytorch/pytorch | 166,516 | Performance issue of torch._higher_order_ops.scan | ### 🐛 Describe the bug
I have a Monte Carlo code on CPU, and I want to get one sample each from many discrete distributions pi = Ai * Bi, where A and B are N x n, with n ~ 20 and N ~ 10^6. So I generate N random numbers from 0 ~ 1, and count the cumsum of pi below the random numbers. Ideally I want to loop over the... | https://github.com/pytorch/pytorch/issues/166516 | open | [
"module: autograd",
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher"
] | 2025-10-29T11:57:03Z | 2025-11-04T21:32:28Z | 2 | SUSYUSTC |
huggingface/lerobot | 2,338 | policy gr00t not found when do async inference with gr00t | ### System Info
```Shell
lerobot version:
3f8c5d98 (HEAD -> main, origin/main, origin/HEAD) fix(video_key typo): fixing video_key typo in update_video_info (#2323)
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
I h... | https://github.com/huggingface/lerobot/issues/2338 | closed | [
"bug",
"question",
"policies"
] | 2025-10-29T05:36:20Z | 2025-11-21T15:34:21Z | null | jcl2023 |
huggingface/lerobot | 2,337 | Can I continue reinforcement learning in HIL-SERL using a pi0 | Can I continue reinforcement learning in HIL-SERL using a pi0 model from LeRobot that has been fine-tuned via imitation learning? | https://github.com/huggingface/lerobot/issues/2337 | open | [
"question",
"policies"
] | 2025-10-29T04:30:26Z | 2025-11-11T03:13:23Z | null | pparkgyuhyeon |
huggingface/peft | 2,878 | peft "target_modules='all-linear'" behaves differently between x86 and aarch? | ### System Info
I have tested on two architectures (x86, arm) and found this bug.
Both architectures have peft==0.17.1.
### Who can help?
@benjaminbossan @githubnemo
### Reproduction
Reproduction script : bug_reprod.py
```python
from transformers import AutoModelForImageTextToText
model = AutoModelForImageTextToText.from_pretrained("... | https://github.com/huggingface/peft/issues/2878 | closed | [] | 2025-10-29T03:43:02Z | 2025-12-07T15:03:33Z | 4 | HuangChiEn |
huggingface/peft | 2,877 | peft config 'all-linear' includes lm_head, is there any way to remove it? | I'm not sure whether this is a bug or whether my modification affects peft.
> since some issues reveal that 'all-linear' will not include the lm_head
```python
if 'internvl' in self.variant.lower():
if '3_5' in self.variant:
self.model = AutoModelForImageTextToText.from_pretrained(self.variant, trust_remote_code=True)
... | https://github.com/huggingface/peft/issues/2877 | closed | [] | 2025-10-29T02:19:21Z | 2025-10-29T03:43:20Z | 1 | HuangChiEn |
huggingface/lerobot | 2,335 | How to Visualize All Episodes of a LeRobot Dataset Locally? | Hi everyone, I have a question about LeRobot datasets. I'd like to inspect my data locally, but using the command
_lerobot-dataset-viz --repo-id=${HF_USER}/record-test --episode-index=0_
only allows me to view one episode at a time, which is quite cumbersome.
Is there a way to visualize all episodes of a dataset local... | https://github.com/huggingface/lerobot/issues/2335 | open | [
"question",
"dataset"
] | 2025-10-29T02:01:01Z | 2025-12-29T12:18:57Z | null | Vacuame |
vllm-project/vllm | 27,692 | It runs on RTX 5060 Ti 16 GB | ### Your current environment
https://github.com/bokkob556644-coder/suc-vllm-rtx-5060-ti-16-gb/blob/main/suc_vllm.txt
### How would you like to use vllm
[I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
](https://github.com/bokkob556644-coder/suc-vllm-rtx-5060... | https://github.com/vllm-project/vllm/issues/27692 | open | [
"usage"
] | 2025-10-28T21:43:00Z | 2025-10-28T21:43:16Z | 1 | bokkob556644-coder |
huggingface/transformers | 41,919 | LFM2 image_processing_lfm2_vl_fast.py Mean Std swapped? | ### System Info
In LFM2-VL's image_processing_lfm2_vl_fast.py, starting at line 212, the ImageNet MEAN and STD are used for preprocessing.
However, it seems like they are swapped:
image_mean = IMAGENET_STANDARD_STD
image_std = IMAGENET_STANDARD_MEAN
or is this correct ?
### Who can help?
@Cyrilvallez
### Inf... | https://github.com/huggingface/transformers/issues/41919 | closed | [
"bug"
] | 2025-10-28T16:17:44Z | 2025-10-31T15:02:40Z | 4 | florianvoss-commit |
vllm-project/vllm | 27,667 | [Usage]: DeepseekOCR on CPU missing implementation for fused_topk | ### Your current environment
Trying to test whether it is possible to run DeepseekOCR on CPU using the current git main branch.
It fails because there is no implementation of `fused_topk` for CPU.
```
INFO 10-28 15:41:18 [v1/worker/cpu_model_runner.py:77] Warming up model for the compilation...
ERROR: Traceback (most recent cal... | https://github.com/vllm-project/vllm/issues/27667 | open | [
"usage"
] | 2025-10-28T16:14:40Z | 2025-10-28T16:14:40Z | 0 | brainlag |
vllm-project/vllm | 27,661 | [RFC]: Consolidated tool call parser implementations by type (JSON, Python, XML, Harmony) | ### Motivation.
When someone wants to add a new tool call parser today, they typically choose an existing tool call parser that looks close to what is needed, copy it into a new file, and adjust things here and there as needed for their specific model. Sometimes tests get added, and sometimes not. Sometimes the change... | https://github.com/vllm-project/vllm/issues/27661 | open | [
"RFC"
] | 2025-10-28T14:54:10Z | 2025-10-30T16:14:09Z | 2 | bbrowning |
pytorch/torchtitan | 1,950 | Breaks the tests/integration_tests/run_tests.py UT | ### Bug description
https://github.com/pytorch/torchtitan/pull/1922: this patch breaks the existing tests/integration_tests/run_tests.py.
Error :
[rank0]:[rank0]: Traceback (most recent call last):
[rank0]:[rank0]: File "/home/dvasanth/miniforge3/envs/env_pt_2_10_ww42/lib/python3.10/runpy.py", line 196, in _run_modul... | https://github.com/pytorch/torchtitan/issues/1950 | closed | [
"question"
] | 2025-10-28T13:14:37Z | 2025-10-29T08:55:34Z | null | dayanandav |
huggingface/lerobot | 2,329 | smolvla base model (the VLM part) to other model | Can I change the SmolVLA base model (the VLM part) to another model?
What should I do?
Thanks | https://github.com/huggingface/lerobot/issues/2329 | closed | [
"question",
"policies"
] | 2025-10-28T12:28:44Z | 2025-10-31T15:09:12Z | null | smartparrot |
pytorch/tutorials | 3,625 | Will you release the TorchRL C++ API in the future, similar to the PyTorch C++ API? | Will you release the TorchRL C++ API in the future, similar to the PyTorch C++ API? We look forward to using the TorchRL C++ API in the future. | https://github.com/pytorch/tutorials/issues/3625 | open | [
"question",
"Reinforcement Learning"
] | 2025-10-28T11:27:52Z | 2025-10-28T15:36:30Z | null | hyl20012 |
vllm-project/vllm | 27,649 | [Usage]: Qwen3-32B on RTX PRO 6000 (55s First Token Delay and 15t/s) | Why does the Qwen3-32B model take 55 seconds before producing the first token, and why is the generation speed only 15t/s?
My vLLM configuration:
Device: GB202GL [RTX PRO 6000 Blackwell Server Edition]
Nvidia Driver Version:580.95.05
CUDA Version:13.0
Docker configuration:
```sh
PORT=8085
MODEL_PATH=Qwen/Qwen3-32... | https://github.com/vllm-project/vllm/issues/27649 | open | [
"usage"
] | 2025-10-28T10:49:43Z | 2025-11-07T02:30:26Z | 4 | yizhitangtongxue |
vllm-project/vllm | 27,646 | [Usage]: How to use vllm bench serve to benchmark remotely deployed vllm models (can't bench when EP is enabled) | ### Your current environment
I deployed dpskv3 on a remote server using:
```
export VLLM_USE_V1=1
export VLLM_ALL2ALL_BACKEND=deepep_low_latency
vllm serve /models/hf/models--deepseek-ai--DeepSeek-V3 --tensor-parallel-size 1 --data-parallel-size 8 --enable-expert-parallel --no-enforce-eager --load-format dummy
```
An... | https://github.com/vllm-project/vllm/issues/27646 | open | [
"usage"
] | 2025-10-28T09:56:37Z | 2025-10-28T15:23:06Z | 3 | Valerianding |
huggingface/transformers | 41,910 | Breaking change about AWQ Fused modules due to Attention Refactor | ### System Info
transformers==5.0.0dev
autoawq==0.2.9
autoawq_kernels==0.0.9
torch==2.6.0+cu124
### Who can help?
Due to PR #35235, the `past_key_values` is no longer a returned value of attention modules.
However, when using AWQ models with Fused modules [AWQ Fused modules docs](https://huggingface.co/docs/transfo... | https://github.com/huggingface/transformers/issues/41910 | closed | [
"bug"
] | 2025-10-28T08:29:03Z | 2025-11-20T13:41:34Z | 3 | fanqiNO1 |
vllm-project/vllm | 27,636 | [Usage]: How to preserve qwen3-vl special tokens in vLLM | ### Your current environment
The grounding format of my fine-tuned qwen3-vl model is: <|object_ref_start|>图片<|object_ref_end|><|box_start|>(x1,y1),(x2,y2)<|box_end|>
The format produced by `vllm serve` inference is: 图片(460,66),(683,252). Does this simply drop the special tokens, and is there a way to preserve them?
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't ... | https://github.com/vllm-project/vllm/issues/27636 | open | [
"usage"
] | 2025-10-28T06:52:16Z | 2025-10-28T06:52:16Z | 0 | qfs666 |
huggingface/diffusers | 12,553 | Reason to move from OpenCV to ffmpeg | I see that `diffusers.utils.export_to_video()` encourages ffmpeg usage instead of OpenCV. Can you share the reason? I'm looking for a way to add video decoding to my project so I'm collecting arguments. | https://github.com/huggingface/diffusers/issues/12553 | open | [] | 2025-10-28T06:49:48Z | 2025-11-07T13:27:03Z | 10 | Wovchena |
vllm-project/vllm | 27,634 | [Usage]: how to use --quantization option of `vllm serve`? | ### Your current environment
```text
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version : Could not collect
CMake version ... | https://github.com/vllm-project/vllm/issues/27634 | open | [
"usage"
] | 2025-10-28T06:24:38Z | 2025-10-28T15:57:47Z | 3 | Septemberlemon |
pytorch/pytorch | 166,363 | All Docker build failed due to Ubuntu archive outage | ## Current Status
Closed
## Error looks like
Docker build Error:
```
#9 82.65 W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/jammy-updates/InRelease Could not connect to archive.ubuntu.com:80 (185.125.190.82), connection timed out Could not connect to archive.ubuntu.com:80 (185.125.190.81), connection time... | https://github.com/pytorch/pytorch/issues/166363 | closed | [] | 2025-10-28T02:42:58Z | 2025-10-28T13:57:51Z | 0 | atalman |
huggingface/candle | 3,151 | Tensor conversion to_vec1() failing on 0.9.2-alpha.1 - Metal | Dependencies
```toml
candle-core = { git = "https://github.com/huggingface/candle", rev = "df618f8", features = ["metal"] }
candle-nn = { git = "https://github.com/huggingface/candle", rev = "df618f8", features = ["metal"] }
candle-transformers = { git = "https://github.com/huggingface/candle", rev = "df618f8", featur... | https://github.com/huggingface/candle/issues/3151 | closed | [] | 2025-10-27T21:36:17Z | 2025-11-06T22:44:14Z | 2 | si-harps |
vllm-project/vllm | 27,604 | [Bug]: Is Flashinfer Attn backend supposed to work with FP8 KV cache on Hopper? | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Collecting environment information...
==============================
System Info
==============================
OS : Amazon Linux 2023.7.20250428 (x86_64)
GCC version ... | https://github.com/vllm-project/vllm/issues/27604 | open | [
"bug",
"nvidia"
] | 2025-10-27T20:22:37Z | 2025-11-06T02:37:17Z | 10 | jmkuebler |
huggingface/smolagents | 1,834 | Discussion: how to edit the messages sent to the underlying LLM | Hi! I'm working on a feature to allow a user to add callbacks to modify the content before it is sent to the LLM, inside the agent loop.
I noticed this strange behavior where the first user message must start with "New Task:", otherwise I get this cryptic and misleading error message.
""Error:\nError while parsing ... | https://github.com/huggingface/smolagents/issues/1834 | closed | [] | 2025-10-27T17:28:38Z | 2025-10-27T19:02:39Z | null | njbrake |
pytorch/vision | 9,251 | roi_align onnx export fails while seemingly supported in torchvision code | ### 🐛 Describe the bug
ONNX export of a model using roi_align fails:
Code:
```
import torch
from torch import nn
from torchvision.ops import roi_align
class TestModel(nn.Module):
def forward(self, x, b):
return roi_align(x, b, output_size=(7, 7), spatial_scale=1/16.0)
x = torch.zeros((1, 128, 40, 40))... | https://github.com/pytorch/vision/issues/9251 | open | [] | 2025-10-27T16:21:07Z | 2025-10-28T11:58:28Z | 2 | timstokman |
pytorch/pytorch | 166,303 | Pytorch Operators on older pytorch version | ### 📚 The doc issue
Hi team,
I've seen that PyTorch has recently been transitioning to `pip install` (https://github.com/pytorch/pytorch/issues/152276).
For projects with custom operators, like Kaolin, we want to support a reasonable PyTorch version matrix. What are we supposed to do?
The documentation for custo... | https://github.com/pytorch/pytorch/issues/166303 | open | [
"needs reproduction",
"module: docs",
"triaged"
] | 2025-10-27T14:04:02Z | 2025-10-27T16:55:38Z | 2 | Caenorst |
huggingface/peft | 2,873 | Can I use Lora fine-tuning twice? | I’m planning to work with a two-stage LoRA fine-tuning pipeline (Stage 1: SFT with code completion outputs; Stage 2: SFT with full-code outputs; RL follows). My question is:
When I continue training the same LoRA adapter in Stage 2, will I risk overwriting or degrading the knowledge learned during Stage 1? In other wo... | https://github.com/huggingface/peft/issues/2873 | closed | [] | 2025-10-27T12:51:45Z | 2025-12-05T15:05:00Z | 8 | tohokulgq |
vllm-project/vllm | 27,572 | [Bug]: chat/completions stream intermittently returns null as finish_reason | ### Your current environment
```
My env:
vllm 0.10.0
```
### 🐛 Describe the bug
```
+ curl -kLsS https://127.0.0.1:7888/v1/chat/completions -H 'Content-Type: application/json' --data '{
"model": "ibm/granite-3-8b-instruct",
"stream": true,
"messages": [
{
"role... | https://github.com/vllm-project/vllm/issues/27572 | open | [
"bug"
] | 2025-10-27T12:14:03Z | 2025-11-24T20:27:24Z | 13 | shuynh2017 |
pytorch/torchtitan | 1,936 | Is it possible to train a Vision-Language Model with different parallelism plans for the vision and language parts of the model? | Can we train a Vision-Language Model using torchtitan?
And can we set different parallelism plans for different parts of the model: fsdp2+dp for the vision part, and fsdp2+dp+sp+ep+pp for the LLM part? If it is possible, how do we do it?
Thanks very much. | https://github.com/pytorch/torchtitan/issues/1936 | open | [] | 2025-10-27T06:47:47Z | 2025-10-27T14:16:04Z | 2 | airlsyn |
huggingface/chat-ui | 1,957 | Failed to use proxy | How can I make this web app go through a local proxy?
I tried a few methods, none of which work.
| https://github.com/huggingface/chat-ui/issues/1957 | open | [
"support"
] | 2025-10-27T06:31:51Z | 2025-10-30T03:31:24Z | 2 | geek0011 |
pytorch/pytorch | 166,282 | Why does my PR still show "Missing CLA Authorization" even though I have already signed the CLA document? | ### 🚀 The feature, motivation and pitch
Why does my PR still show "Missing CLA Authorization" even though I have already signed the CLA document?
### Alternatives
_No response_
### Additional context
_No response_ | https://github.com/pytorch/pytorch/issues/166282 | closed | [] | 2025-10-27T01:19:21Z | 2025-10-27T16:45:23Z | 1 | wenlinchong17-web |
huggingface/diffusers | 12,547 | Fine tuning Dreambooth Flux Kontext I2I Error: the following arguments are required: --instance_prompt | ### Describe the bug
Hello HF team, @sayakpaul @bghira
I'm encountering a persistent issue when trying to fine-tune the black-forest-labs/FLUX.1-Kontext-dev model using the train_dreambooth_lora_flux_kontext.py script.
I am following the [official README instructions](https://github.com/huggingface/diffusers/blob/ma... | https://github.com/huggingface/diffusers/issues/12547 | closed | [
"bug"
] | 2025-10-27T00:21:34Z | 2025-10-28T02:31:42Z | 7 | MichaelMelgarejoFlorez |
huggingface/transformers | 41,876 | LlamaAttention num_heads | ### System Info
In older versions of transformers, LlamaAttention initialized the attribute num_heads:
class LlamaAttention(nn.Module):
def __init__(self, config):
self.num_heads = config.num_attention_heads
self.head_dim = config.hidden_size // config.num_attention_heads
However, in the recent versions, th... | https://github.com/huggingface/transformers/issues/41876 | closed | [
"bug"
] | 2025-10-27T00:07:31Z | 2025-10-31T00:13:31Z | 2 | shanhx2000 |
huggingface/transformers | 41,874 | Distributed training of SigLIP | https://github.com/huggingface/transformers/blob/v4.57.1/src/transformers/models/siglip/modeling_siglip.py#L983 defines how the SigLIP loss is computed. In SigLIP, different TPUs exchange data with each other. I want to know how to train a model in this way. | https://github.com/huggingface/transformers/issues/41874 | closed | [] | 2025-10-26T14:43:51Z | 2025-12-04T08:02:55Z | 1 | zyk1559676097-dot |
huggingface/transformers | 41,861 | transformers.Adafactor is almost 2x slower on Windows than Linux - even WSL is slow, what can be the reason? | I am training the Qwen Image model with the Kohya Musubi tuner: https://github.com/kohya-ss/musubi-tuner
The exact same setup on the same machine is almost 2x faster on Linux:
9.5 seconds/it vs 5.8 seconds/it
On Windows it can't utilize the GPU's power; it draws about 250 W out of 575 W.
What could be the culprit?
transformers==4... | https://github.com/huggingface/transformers/issues/41861 | closed | [
"bug"
] | 2025-10-25T15:49:47Z | 2025-12-03T08:02:55Z | null | FurkanGozukara |
pytorch/pytorch | 166,238 | [Dynamo][BUG] Regression about `collections.defaultdict` creation | ### 🐛 Describe the bug
See CI error log: https://github.com/pytorch/pytorch/actions/runs/18803810990/job/53655896530#step:27:2137
### Error logs
```pytb
----------------------------- Captured stdout call -----------------------------
inline_call [("Unsupported function call
Explanation: Dynamo does not know how t... | https://github.com/pytorch/pytorch/issues/166238 | closed | [
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-must-fix",
"dynamo-variable-tracker"
] | 2025-10-25T15:26:06Z | 2025-11-05T06:09:41Z | 4 | XuehaiPan |
huggingface/transformers | 41,859 | Human Verification not working? | ### System Info
Hello! I need your help because I can't verify my identity via email: I receive a link and open it, but get a blank page and nothing else.
I've tried several times.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An o... | https://github.com/huggingface/transformers/issues/41859 | closed | [
"bug"
] | 2025-10-25T10:48:52Z | 2025-10-26T12:29:10Z | 4 | thefued |
pytorch/pytorch | 166,233 | license: Is it possible to stop using Conda in the Dockerfile? Due to Conda’s licensing issues, many companies have already received legal warning letters. | ### 🚀 The feature, motivation and pitch
Starting this year, many companies have received legal letters from Conda’s lawyers, explicitly stating that using Conda requires a paid license. Although I have checked Conda’s official website, it does not clearly specify this. I also noticed that the current PyTorch Dockerfi... | https://github.com/pytorch/pytorch/issues/166233 | open | [
"module: binaries",
"triaged",
"module: docker",
"better-engineering"
] | 2025-10-25T08:50:40Z | 2025-10-28T03:42:37Z | 2 | WangxuP |
huggingface/lerobot | 2,311 | Question: How can I train only online, without a dataset? | How can I train only online, without needing a dataset? Can I do it without a Hugging Face repo id, only locally?
I tried the following without success:
```
cat > "train_cfg.json" <<'JSON'
{
"job_name": "hilserl_fetch_pick_v4_cpu",
"seed": 0,
"env": {
... | https://github.com/huggingface/lerobot/issues/2311 | open | [
"question",
"dataset"
] | 2025-10-25T05:07:48Z | 2025-10-27T08:50:11Z | null | talregev |
vllm-project/vllm | 27,505 | [Bug]: Value error, Found conflicts between 'rope_type=default' (modern field) and 'type=mrope' | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
vllm 0.11.0
transformers 5.0.0.dev0
torch ... | https://github.com/vllm-project/vllm/issues/27505 | open | [
"bug"
] | 2025-10-25T04:39:53Z | 2025-10-26T07:33:27Z | 1 | asirgogogo |
vllm-project/vllm | 27,504 | [Usage]: `add_vision_id` ignored for Qwen 2.5-VL-32B-Instruct | ### Your current environment
```text
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version ... | https://github.com/vllm-project/vllm/issues/27504 | open | [
"usage"
] | 2025-10-25T03:42:44Z | 2025-10-26T07:32:49Z | 1 | justachetan |
pytorch/pytorch | 166,219 | Why are there so many warnings when building the C++ libtorch project? How to resolve it? | ### 🐛 Describe the bug
When I compile the C++ libtorch project, there are many warnings. How can I resolve them? My configuration: Win11, MSVC, libtorch 2.8.0. My C++ code is as follows:
```cpp
#include <torch/torch.h>
#include <iostream>
int main() {
torch::Tensor tensor_zeros = torch::zeros({3, 3});
std::c... | https://github.com/pytorch/pytorch/issues/166219 | open | [
"module: windows",
"module: cpp-extensions",
"triaged"
] | 2025-10-25T03:09:34Z | 2025-10-25T15:39:20Z | null | hyl20012 |
huggingface/lighteval | 1,028 | How to evaluate MMLU-Pro | Hi,
Thank you for the wonderful work!
I just want to ask how to perform the evaluation on MMLU-Pro, as I don't see any related code besides the README. | https://github.com/huggingface/lighteval/issues/1028 | open | [] | 2025-10-24T20:03:10Z | 2025-11-04T10:40:46Z | null | qhz991029 |
pytorch/pytorch | 166,180 | AOTI _register_aoti_cleanup line 47 | ### 🐛 Describe the bug
Hi,
Trying to run [this code](https://huggingface.co/spaces/zerogpu-aoti/wan2-2-fp8da-aoti-faster/tree/main) on Modal, I got this error message, which I absolutely don't know how to interpret.
### Error logs
```
File "<ta-01K8BA92H6RT7D4R3V6CBA2Q9T>:/usr/local/lib/python3.12/site-packages/torch/utils... | https://github.com/pytorch/pytorch/issues/166180 | closed | [
"oncall: pt2"
] | 2025-10-24T18:58:27Z | 2025-10-28T09:20:17Z | 2 | christopher5106 |
huggingface/tokenizers | 1,879 | rust tokenizer | Hello.
Is there a Rust tokenizer, please? ChatGPT told me there used to be.
Best regards! | https://github.com/huggingface/tokenizers/issues/1879 | open | [] | 2025-10-24T17:03:04Z | 2025-10-24T22:03:31Z | 2 | gogo2464 |
pytorch/ao | 3,243 | TorchAO Missing 3.13T (free-threading) Wheels | Latest `0.14.1` cuda builds does produce wheels for `3.13t` which is the `nogil` build of Python.
On Ubuntu 24.04 x86_64
```py
# pip install torchao==0.14.1 --index-url https://download.pytorch.org/whl/cu130 -U
Looking in indexes: https://download.pytorch.org/whl/cu130
ERROR: Could not find a version that satisfies ... | https://github.com/pytorch/ao/issues/3243 | open | [] | 2025-10-24T16:53:03Z | 2025-10-30T19:30:57Z | 1 | Qubitium |
vllm-project/vllm | 27,482 | [Bug]: `return_token_ids` missing tokens when using tool calls | ### Your current environment
Testing with latest vLLM builds from main, as of Fri Oct 24th 2025 (when this bug was opened).
### 🐛 Describe the bug
The `return_token_ids` parameter that is supposed to return all generated token ids back to the client is missing quite a few tokens for Chat Completion streaming reque... | https://github.com/vllm-project/vllm/issues/27482 | closed | [
"bug"
] | 2025-10-24T16:10:31Z | 2025-12-04T19:09:41Z | 2 | bbrowning |
vllm-project/vllm | 27,479 | [Bug]: Low GPU utilization with Embedding Model | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
Initializing LLM(model="Qwen/Qwen3-Embedding-0.6B", task="embed") on a single B200 (180 GB) immediately reserves ~80... | https://github.com/vllm-project/vllm/issues/27479 | open | [
"bug"
] | 2025-10-24T15:18:05Z | 2025-10-24T15:25:38Z | 1 | JhaceLam |
vllm-project/vllm | 27,477 | [Bug]: First prompt token missing when requested with "echo" | ### Your current environment
vllm installed from main:
`vllm 0.11.1rc3.dev23+g61089465a.precompiled`
### 🐛 Describe the bug
Is it expected behavior that echo isn't returning the first token of the prompt?
I am trying to collect exact prompt_token_ids which went into the model served wi... | https://github.com/vllm-project/vllm/issues/27477 | closed | [
"bug"
] | 2025-10-24T14:43:50Z | 2025-10-24T15:04:01Z | 2 | eldarkurtic |
huggingface/text-generation-inference | 3,336 | Get inference endpoint model settings via client | ### Feature request
Enable commands via clients such as `OpenAI` that would get model settings from an inference endpoint.
Does this exist and I just can't find it?
### Motivation
There is currently no clear way to get inference model settings directly from an endpoint. Individual base models have their original s... | https://github.com/huggingface/text-generation-inference/issues/3336 | closed | [] | 2025-10-24T13:07:15Z | 2025-10-30T14:10:46Z | 1 | lingdoc |
huggingface/datasets | 7,829 | Memory leak / Large memory usage with num_workers = 0 and numerous dataset within DatasetDict | ### Describe the bug
Hi team, first off, I love the datasets library! 🥰
I'm encountering a potential memory leak / increasing memory usage when training a model on a very large DatasetDict.
Setup: I have a DatasetDict containing 362 distinct datasets, which sum up to ~2.8 billion rows.
Training Task: I'm performin... | https://github.com/huggingface/datasets/issues/7829 | open | [] | 2025-10-24T09:51:38Z | 2025-11-06T13:31:26Z | 4 | raphaelsty |
huggingface/transformers | 41,842 | Incorrect usage of `num_items_in_batch`? | It seems that `num_items_in_batch` is computed for all items in the batch [here](https://github.com/huggingface/transformers/blob/9c20660138830ca362533551ca978c27b48283a1/src/transformers/trainer.py#L2430).
However, when loss is computed in the `training_step`, it is computed for each input in the batch one by one. Do... | https://github.com/huggingface/transformers/issues/41842 | closed | [] | 2025-10-24T07:36:00Z | 2025-12-01T08:02:48Z | 2 | gohar94 |
vllm-project/vllm | 27,463 | [Usage]: How to request DeepSeek-OCR with http request | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to request DeepSeek-OCR over HTTP; is there any example for it?
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bott... | https://github.com/vllm-project/vllm/issues/27463 | closed | [
"usage"
] | 2025-10-24T07:07:29Z | 2025-10-29T17:26:49Z | 8 | YosanHo |
huggingface/lerobot | 2,306 | how to use groot without flash attention | my system is ubuntu 20.04 with glibc 2.3.1 which is not supported flash attention, If I can modify the config of groot to use it with normal attention? | https://github.com/huggingface/lerobot/issues/2306 | open | [
"question",
"policies",
"dependencies"
] | 2025-10-24T06:35:18Z | 2025-11-04T01:28:38Z | null | shs822 |
huggingface/lerobot | 2,305 | Error dependence about the `Transformer` library | ### System Info
```Shell
- lerobot version: 0.4.0
- Platform: Linux-6.14.0-29-generic-x86_64-with-glibc2.39
- Python version: 3.12.12
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 2.2.6
- PyTorch version: 2.7.0+cu128
- Is PyTorch built with CUDA support?: True
- Cuda version: 12.8
- GPU ... | https://github.com/huggingface/lerobot/issues/2305 | open | [
"question",
"policies",
"dependencies"
] | 2025-10-24T05:59:32Z | 2025-11-14T16:01:49Z | null | sunshineharry |
vllm-project/vllm | 27,454 | [Usage]: How to set the expert id on each EP by myself after setting EP in Deepseek (how to reorder experts?) | ### Your current environment
```text
vllm 0.8.5
```
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot liv... | https://github.com/vllm-project/vllm/issues/27454 | open | [
"usage"
] | 2025-10-24T03:15:16Z | 2025-10-24T07:27:50Z | 2 | HameWu |
vllm-project/vllm | 27,448 | [Usage]: how to pass multi-turn multimodal messages to vLLM? | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues... | https://github.com/vllm-project/vllm/issues/27448 | open | [
"usage"
] | 2025-10-24T02:41:45Z | 2025-10-24T03:33:13Z | 1 | cqray1990 |
huggingface/lerobot | 2,304 | How to load local model? | For example, i'm trying to fine-tune pi0, so I downloaded pi0_base locallly and save it in [position A,like lerobot/models/pi0_base] ,which has 5 files in total,including model.safetensors.
Then how to load it in code? I used to just set model.path=[position A] But followed tuorial, it uses pretrained_path_or_name as ... | https://github.com/huggingface/lerobot/issues/2304 | closed | [] | 2025-10-24T01:59:26Z | 2025-10-24T02:33:25Z | null | milong26 |
vllm-project/vllm | 27,441 | [Bug]: vllm/v1/core/sched/scheduler.py: Unintended reordering of requests during scheduling | ### Your current environment
<details>
This error is independent of the environment.
</details>
### 🐛 Describe the bug
### Description
The function `schedule()` in [vllm/v1/core/sched/scheduler.py](https://github.com/vllm-project/vllm/blob/main/vllm/v1/core/sched/scheduler.py) is responsible for scheduling inferen... | https://github.com/vllm-project/vllm/issues/27441 | open | [
"bug"
] | 2025-10-23T22:35:50Z | 2025-11-22T04:20:35Z | 1 | dongha-yoon |
pytorch/ao | 3,232 | nvfp4: why do we need to call weight.contiguous for Qwen3 during lm-eval? | TODO @andrewor14 add repro | https://github.com/pytorch/ao/issues/3232 | open | [] | 2025-10-23T21:20:54Z | 2025-10-28T22:36:03Z | 1 | vkuzo |
huggingface/lerobot | 2,303 | Question: Does the follower arm have an api for scripting movement? | Hi, apologies if this has been answered before or if it's not the right place to ask. I've been using the SO-101 arms for imitation learning, but recently I've wanted to try and test out the follower arm for embodied reasoning models such as Gemini ER 1.5. To do this, I figure I would need to have some way to map outpu... | https://github.com/huggingface/lerobot/issues/2303 | open | [
"question",
"robots",
"python"
] | 2025-10-23T20:40:56Z | 2025-10-23T22:29:28Z | null | Buttmunky1 |
huggingface/lerobot | 2,294 | Question about the HuggingFaceVLA/smolvla_libero Model Configuration | Hello,
Lerobot has officially ported [LIBERO](https://github.com/huggingface/lerobot/issues/1369#issuecomment-3323183721), and we can use the checkpoint at [HuggingFaceVLA/smolvla_libero](https://huggingface.co/HuggingFaceVLA/smolvla_libero) to evaluate the LIBERO benchmark.
However, the model configuration of [Huggi... | https://github.com/huggingface/lerobot/issues/2294 | open | [
"question",
"policies"
] | 2025-10-23T13:37:48Z | 2025-10-30T07:49:17Z | null | Hesh0629 |
vllm-project/vllm | 27,413 | [Usage]: how to request a qwen2.5-VL-7B classify model served by vllm using openai SDK? | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I launch a server with the following command to serve a Qwen2.5-VL-7B model fine-tuned for sequence classification. (This model replaces the lm_head with a 2-class score_head.)
The launch command is:
... | https://github.com/vllm-project/vllm/issues/27413 | open | [
"good first issue",
"usage"
] | 2025-10-23T12:32:25Z | 2025-10-25T00:18:54Z | 12 | muziyongshixin |
huggingface/transformers.js | 1,447 | How to use half precision ONNX models? | ### Question
Hi,
I just exported a detection model with fp16 using optimum.
`--dtype fp16 `
This is my pipeline:
```javascript
const model = await AutoModel.from_pretrained(
  "./onnx_llama",
  { dtype: "fp16", device: "cpu" }
);
const processor = await AutoProcessor.from_pretrained("./onnx_llama");
const { pixel_val... | https://github.com/huggingface/transformers.js/issues/1447 | open | [
"question"
] | 2025-10-23T09:18:26Z | 2025-10-23T09:18:26Z | null | richarddd |
huggingface/transformers | 41,810 | How do you use t5gemma decoder with a different encoder? | I am trying to combine the t5gemma decoder with a pretrained deberta encoder that I have trained from scratch using `EncoderDecoderModel`.
Here is the code:
```
model_1 = "WikiQuality/pre_filtered.am"
model_2 = "google/t5gemma-2b-2b-ul2"
encoder = AutoModel.from_pretrained(model_1)
decoder = AutoModel.from_pretrain... | https://github.com/huggingface/transformers/issues/41810 | closed | [] | 2025-10-23T08:48:19Z | 2025-12-01T08:02:53Z | 1 | kushaltatariya |
pytorch/pytorch | 166,116 | [CCA] CUDACachingAllocator always release physical memory handle when the expandable segment unmaps. | This may not be a bug. I'm just confused about the CUDACachingAllocator behavior.
When expandable segments are enabled, CCA uses the CUDA virtual memory API ([cuMemCreate](https://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__VA.html#group__CUDA__VA_1g899d69a862bba36449789c64b430dc7c)/[cuMemRelease](https://docs.nvidi... | https://github.com/pytorch/pytorch/issues/166116 | open | [
"triaged",
"module: CUDACachingAllocator"
] | 2025-10-23T07:30:24Z | 2025-10-29T02:57:00Z | 3 | PHLens |
huggingface/accelerate | 3,818 | Duplicate W&B initialization in offline mode | ### System Info
```Shell
- `Accelerate` version: 1.10.1
```
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (s... | https://github.com/huggingface/accelerate/issues/3818 | closed | [
"good first issue"
] | 2025-10-23T02:19:38Z | 2025-12-16T13:10:48Z | 3 | ShuyUSTC |
pytorch/pytorch | 166,106 | [Feature][BUG] need support for DispatchKey.AutocastXPU | ### 🚀 The feature, motivation and pitch
Detailed information is in this [issue](https://github.com/intel/intel-xpu-backend-for-triton/issues/5366#issuecomment-3433362148).
I get an error when I use torch.compile+autocast+triton:
```
File "D:\miniconda3\envs\compile\Lib\site-packages\torch\_ops.py", line 493, in dispatch
... | https://github.com/pytorch/pytorch/issues/166106 | open | [
"triaged",
"module: xpu"
] | 2025-10-23T01:58:34Z | 2025-10-23T14:47:11Z | 1 | xiaohoua |
pytorch/vision | 9,249 | Non-local versions of torch are only available for linux(/mac) aarch64 | When checking https://download.pytorch.org/whl/torchvision/ for e.g. 0.24.0 on Python 3.12, the following list of wheels is available for non-local (no `+`) versions:
```
torchvision-0.24.0-cp312-cp312-macosx_11_0_arm64.whl
torchvision-0.24.0-cp312-cp312-manylinux_2_28_aarch64.whl
torchvision-0.24.0-cp312-cp312-manyli... | https://github.com/pytorch/vision/issues/9249 | closed | [] | 2025-10-22T17:04:55Z | 2025-12-15T19:09:29Z | 3 | konstin |
vllm-project/vllm | 27,347 | [Usage]: vllm: error: unrecognized arguments: --all2all-backend deepep_low_latency | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.2 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Cou... | https://github.com/vllm-project/vllm/issues/27347 | closed | [
"usage"
] | 2025-10-22T14:36:18Z | 2025-10-22T15:07:13Z | 1 | Valerianding |
vllm-project/vllm | 27,343 | [Usage]: Can't get result from /pooling api when using Qwen2.5-Math-PRM-7B online | ### Your current environment
```
The output of `python collect_env.py`
Collecting environment information... [140/1781]
============================== ... | https://github.com/vllm-project/vllm/issues/27343 | closed | [
"usage"
] | 2025-10-22T13:36:51Z | 2025-10-23T03:39:13Z | 3 | zgc6668 |
pytorch/ao | 3,226 | question of blockwise quant fp8 training | Hi, the [blockwise_fp8_training](https://github.com/pytorch/ao/tree/7e68d5ee6fe6749a667edd2510d5fd2b599a27e2/torchao/prototype/blockwise_fp8_training) has been there for a while. Is there any reason we dont merge it into [float8](https://github.com/pytorch/ao/tree/main/torchao/float8) folder?
And current moe training ... | https://github.com/pytorch/ao/issues/3226 | open | [
"float8",
"moe"
] | 2025-10-22T13:18:40Z | 2025-10-24T04:00:47Z | 3 | rakkit |
huggingface/transformers.js | 1,446 | Zhare-AI/sd-1-5-webgpu on HuggingFace.co lists itself as Transformer.js supported? | ### Question
[Zhare-AI/sd-1-5-webgpu](https://huggingface.co/Zhare-AI/sd-1-5-webgpu) is a `text-to-image` model and is marked as Transformers.js compatible, and even shows demo code using Transformers.js on its `huggingface.co` page. Their example code fails with an error saying `text-to-image` is not supported in Tra... | https://github.com/huggingface/transformers.js/issues/1446 | closed | [
"question"
] | 2025-10-22T12:20:16Z | 2025-10-24T14:33:17Z | null | LostBeard |
vllm-project/vllm | 27,336 | [Feature]: Make promt_token_ids optional in streaming response (disable by default) | ### 🚀 The feature, motivation and pitch
Starting with v0.10.2, the first server-sent event (SSE) in streaming responses now includes the full list of `prompt_token_ids`.
While this can be useful for debugging or detailed inspection, it introduces several practical issues in production environments:
1. Large payloa... | https://github.com/vllm-project/vllm/issues/27336 | closed | [
"feature request"
] | 2025-10-22T11:42:41Z | 2025-10-27T11:06:45Z | 1 | Gruner-atero |
huggingface/transformers | 41,775 | Hugging Face website and models not reachable | ### System Info
```
$ pip show transformers
Name: transformers
Version: 4.57.1
Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
Home-page: https://github.com/huggingface/transformers
Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/hugg... | https://github.com/huggingface/transformers/issues/41775 | closed | [
"bug"
] | 2025-10-22T07:40:32Z | 2025-11-21T08:10:00Z | 8 | christian-rauch |
vllm-project/vllm | 27,319 | [Usage]: Quantized FusedMoE crashed in graph compiled stage | ### Your current environment
```text
==============================
System Info
==============================
OS : Ubuntu 24.04.2 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : 19.0.0git (https://github.com/RadeonOpenC... | https://github.com/vllm-project/vllm/issues/27319 | closed | [
"rocm",
"usage"
] | 2025-10-22T06:29:32Z | 2025-10-24T02:19:55Z | 1 | Rus-P |
vllm-project/vllm | 27,298 | [Doc]: Update metrics documentation to remove V0 references and add v1 changes. | ## Problem
The metrics documentation in `docs/design/metrics.md` still contains references to V0 metrics implementation, but V0 metrics have been removed after @njhill 's PR https://github.com/vllm-project/vllm/pull/27215 was merged. To avoid confusion, I think we should remove this and update it with the new set of v... | https://github.com/vllm-project/vllm/issues/27298 | closed | [
"documentation"
] | 2025-10-21T22:08:48Z | 2025-10-22T13:29:17Z | 1 | atalhens |
pytorch/pytorch | 166,020 | [doc] Clarify that torch.mean doesn't support integer dtypes like torch.long | ### 📚 The doc issue
[doc] Clarify that torch.mean doesn't support integer dtypes like torch.long
**Page:** `torch.mean` documentation
**Problem:** The documentation for `torch.mean` doesn't explicitly mention that integer dtypes (like `torch.long`) are not supported and will raise a runtime error.
**Current behavi... | https://github.com/pytorch/pytorch/issues/166020 | closed | [
"triaged"
] | 2025-10-21T19:27:50Z | 2025-10-21T22:13:29Z | 1 | har5hdeep5harma |
pytorch/pytorch | 166,014 | Make Inductor Fallback Nodes Less Reliant on Invariants from Functionalization / AOT Autograd | ### 🐛 Describe the bug
Inductor has generic support for invoking operators as they would have been [in eager execution](https://github.com/pytorch/pytorch/blob/3dfd0c75847aad61a24e63d91bb330083db11857/torch/_inductor/graph.py#L1626-L1630). This path is hardened and works well both for custom ops and for bisecting a ... | https://github.com/pytorch/pytorch/issues/166014 | open | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 2025-10-21T18:59:09Z | 2025-10-21T18:59:31Z | 0 | eellison |
vllm-project/vllm | 27,268 | [Usage]: failed to infer device type on GCP COS despite nvidia container toolkit installed | ### Your current environment
I failed to run this script on GCP COS.
### How would you like to use vllm
I was trying to use VLLM on a Google Cloud (GCP) Container-Optimized OS (COS) instance via Docker.
I followed GCP's [documentation](https://cloud.google.com/container-optimized-os/docs/how-to/run-gpus) to insta... | https://github.com/vllm-project/vllm/issues/27268 | open | [
"usage"
] | 2025-10-21T15:24:21Z | 2025-10-21T15:24:21Z | 0 | forrestbao |
vllm-project/vllm | 27,265 | [Usage]: Cannot register custom model (Out-of-Tree Model Integration) | ```
### Your current environment
==============================
Versions of relevant libraries
==============================
[pip3] flake8==7.1.1
[pip3] flashinfer==0.1.6+cu124torch2.4
[pip3] flashinfer-python==0.2.5
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-... | https://github.com/vllm-project/vllm/issues/27265 | closed | [
"usage"
] | 2025-10-21T14:17:17Z | 2025-10-25T13:19:40Z | 1 | Hyperwjf |
vllm-project/vllm | 27,263 | [Responses API] Support tool calling and ouput token streaming | Splitting off from #14721
> FYI a start has been made here https://github.com/vllm-project/vllm/pull/20504
>
> That PR (which was merged to `main` on [7/9/2025](https://github.com/vllm-project/vllm/pull/20504#event-18495144925)) explicitly has an unchecked boxes for
>
> * [ ] Tool/functional calling support
> * [ ] ... | https://github.com/vllm-project/vllm/issues/27263 | open | [] | 2025-10-21T12:36:44Z | 2025-12-07T01:06:46Z | 4 | markmc |
pytorch/pytorch | 165,985 | Can I provide a Chinese version of the readme file to submit | ### 📚 The doc issue
Can I provide a Chinese version of the readme file to submit?
### Suggest a potential alternative/fix
_No response_ | https://github.com/pytorch/pytorch/issues/165985 | closed | [] | 2025-10-21T11:32:28Z | 2025-10-27T23:04:37Z | 1 | wenlinchong17-web |
vllm-project/vllm | 27,252 | [Usage]: ”@app.post("/generate")“ API is support qwen2_vl or not? | ### Your current environment
I want to know whether the ”@app.post("/generate")“ API supports qwen2_vl or not.
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched... | https://github.com/vllm-project/vllm/issues/27252 | open | [
"usage"
] | 2025-10-21T07:30:11Z | 2025-10-21T07:30:11Z | 0 | wwkww |
huggingface/lerobot | 2,269 | how to configure pi0_base to train with single camera dataset |
Hi,
I'm trying to train pi0_base with "lerobot/aloha_sim_transfer_cube_human" dataset which has only one camera input "observation.images.top". However, pi0 seems to expect three camera inputs:
"observation.images.base_0_rgb",
"observation.images.left_wrist_0_rgb",
"observation.images.right_wrist_0_rgb"
"ValueError:... | https://github.com/huggingface/lerobot/issues/2269 | open | [
"question",
"policies",
"dataset"
] | 2025-10-21T01:32:50Z | 2025-10-21T17:36:17Z | null | dalishi |
vllm-project/vllm | 27,233 | gguf run good | ### Your current environment
from vllm import LLM, SamplingParams
gguf_path = "/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf"
llm = LLM(
    gguf_path,
    tokenizer="Qwen/Qwen3-1.7B"
)
params = SamplingParams(
    temperature=0.8,
    top_p=0.9,
    top_k=40,
m... | https://github.com/vllm-project/vllm/issues/27233 | open | [
"usage"
] | 2025-10-21T00:11:26Z | 2025-10-22T00:44:10Z | 12 | kmnnmk212-source |
pytorch/xla | 9,684 | RFC: Evolving PyTorch/XLA for a more native experience on TPU | ### Motivation
For many years, `torch_xla` has been the primary way for the community to run PyTorch programs on Cloud TPUs. It has successfully enabled the training of massive models by bringing the power of the XLA compiler to the PyTorch ecosystem.
The current implementation, while powerful, presents a developer e... | https://github.com/pytorch/xla/issues/9684 | open | [
"RFC"
] | 2025-10-20T22:12:20Z | 2025-12-19T04:58:36Z | 18 | qcc4cp |
vllm-project/vllm | 27,228 | [Installation]: Compatibility with PyTorch 2.9.0? | ### Your current environment
```text
The output of `python collect_env.py`
```
### How you are installing vllm
Is there a version of vllm that is compatible with the latest PyTorch release 2.9.0?
```
pip install vllm==0.11.0
pip install torch==2.9.0
```
```
$ vllm bench latency --input-len 256 --output-len 256 --... | https://github.com/vllm-project/vllm/issues/27228 | closed | [
"installation"
] | 2025-10-20T21:10:24Z | 2025-10-21T22:40:15Z | 3 | andrewor14 |
pytorch/pytorch | 165,933 | [Distributed] fully_shard: support no_shard (ddp) strategy? | ### 🚀 The feature, motivation and pitch
It looks like the `fully_shard` API is recommended these days over `torch.distributed.FSDP`. The latter allows a `ShardingStrategy` argument to control the degree of sharding (i.e. zero1/2/3) - this is useful in some cases where we don't want to shard the params, only grads, or... | https://github.com/pytorch/pytorch/issues/165933 | open | [
"oncall: distributed"
] | 2025-10-20T20:48:14Z | 2025-10-22T14:44:13Z | 0 | rohan-varma |
vllm-project/vllm | 27,208 | [Feature]: Upgrade CUDA version to 12.9.1 in docker images | ### 🚀 The feature, motivation and pitch
The current builds display warning logs like these
```
Warning: please use at least NVCC 12.9 for the best DeepGEMM performance
```
Can we bump this version easily?
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
-... | https://github.com/vllm-project/vllm/issues/27208 | closed | [
"feature request"
] | 2025-10-20T16:08:49Z | 2025-10-21T21:20:19Z | 1 | jhuntbach-bc |
pytorch/pytorch | 165,909 | AWS was down, GHA infrastructure affected / recovering | > NOTE: Remember to label this issue with "`ci: sev`"
> If you want autorevert to be disabled, keep the ci: disable-autorevert label
<!-- Add the `merge blocking` label to this PR to prevent PRs from being merged while this issue is open -->
## Current Status
Mitigated, queues are recovering.
AWS experienced... | https://github.com/pytorch/pytorch/issues/165909 | closed | [
"ci: sev",
"ci: sev-mitigated"
] | 2025-10-20T15:28:48Z | 2025-10-21T16:41:19Z | 0 | seemethere |
pytorch/pytorch | 165,907 | Feedback on profiler key_averages documentation | ### 📚 The doc issue
It would be great to have more documentation on how to use key_averages beyond the Table method. Right now there is no documentation for the EventList and FunctionEventAvg data types.
### Suggest a potential alternative/fix
Adding pages for EventList and FunctionEventAvg classes would be a good ... | https://github.com/pytorch/pytorch/issues/165907 | closed | [
"module: docs",
"actionable",
"oncall: profiler"
] | 2025-10-20T14:56:48Z | 2025-11-14T02:03:22Z | 0 | alexracape |