Dataset schema (reconstructed from the viewer's column summary):

  repo        string (147 distinct values)
  number      int64 (1 to 172k)
  title       string (length 2 to 476)
  body        string (length 0 to 5k)
  url         string (length 39 to 70)
  state       string (2 distinct values)
  labels      list (length 0 to 9)
  created_at  timestamp[ns, tz=UTC] (2017-01-18 18:50:08 to 2026-01-06 07:33:18)
  updated_at  timestamp[ns, tz=UTC] (2017-01-18 19:20:07 to 2026-01-06 08:03:39)
  comments    int64 (0 to 58)
  user        string (length 2 to 28)
pytorch/torchtitan #1506
Correct MoE auxiliary-loss-free load balancing?
A very small question: why is the second `expert_bias_delta` assignment used here? https://github.com/pytorch/torchtitan/blob/cf30b2902718790cbe91900414c3201b6d7680b0/torchtitan/experiments/llama4/optimizer.py#L39-L43 This looks different than Algorithm 1 of https://arxiv.org/pdf/2408.15664, which would instead just ...
https://github.com/pytorch/torchtitan/issues/1506
closed | labels: [] | created: 2025-07-31T20:24:47Z | updated: 2025-08-01T15:34:42Z | comments: 2 | by: garrett361
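The issue above asks about torchtitan's variant of the auxiliary-loss-free load-balancing update. As a point of comparison, here is a minimal plain-Python sketch of the bias update described in Algorithm 1 of the paper the issue cites (arXiv:2408.15664): overloaded experts get their routing bias nudged down, underloaded ones up. The function and parameter names are mine, not torchtitan's.

```python
def update_expert_bias(bias, loads, bias_update_speed=0.001):
    """One sign-based bias update per step, per Algorithm 1 of arXiv:2408.15664.

    bias:  current per-expert routing bias (list of floats)
    loads: fraction of tokens routed to each expert in the last batch
    bias_update_speed: the small step size u from the paper
    """
    mean_load = sum(loads) / len(loads)
    updated = []
    for b, load in zip(bias, loads):
        # sign(mean_load - load): +1 for underloaded experts, -1 for overloaded.
        sign = (mean_load > load) - (mean_load < load)
        updated.append(b + bias_update_speed * sign)
    return updated
```

The bias only shifts expert *selection*; it is not meant to scale the gating weights themselves, which is the distinction the issue is probing.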
huggingface/diffusers #12038
Dataset structure for train_text_to_image_lora.py
Hello. I am trying to use **train_text_to_image_lora.py** script following the instructions https://github.com/huggingface/diffusers/tree/main/examples/text_to_image I get errors on data structure and don't know what is the issue on my side. I have a folder **data** where I have folder **image** and **csv** file. C:/...
https://github.com/huggingface/diffusers/issues/12038
open | labels: [] | created: 2025-07-31T16:10:38Z | updated: 2025-08-01T16:44:48Z | comments: 1 | by: HripsimeS
huggingface/lerobot #1632
Are there plans to support distributed training?
[train.py](https://github.com/huggingface/lerobot/blob/main/src/lerobot/scripts/train.py) currently only supports single-GPU training. Is there a plan to support distributed training in the future?
https://github.com/huggingface/lerobot/issues/1632
closed | labels: [ "question", "policies" ] | created: 2025-07-31T03:31:46Z | updated: 2025-10-17T12:10:40Z | comments: null | by: Hukongtao
huggingface/candle #3039
Request support for Qwen2.5-vl or Fast-VLM
I'm trying to call some image-to-text visual models using candle, if anyone knows how to use Qwen2.5-vl or Fast-VLM, can you share it? Appreciate
https://github.com/huggingface/candle/issues/3039
open | labels: [] | created: 2025-07-31T02:41:33Z | updated: 2025-08-04T12:21:35Z | comments: 1 | by: 826327700
huggingface/transformers #39801
ValueError: This model does not support cache_implementation='static'. Please check the following issue: https://github.com/huggingface/transformers/issues/28981
### System Info _prepare_cache_for_generation raise ValueError( ValueError: This model does not support cache_implementation='static'. Please check the following issue: https://github.com/huggingface/transformers/issues/28981 I got this error and i have no clue of how to solve it. I tried different implementation...
https://github.com/huggingface/transformers/issues/39801
closed | labels: [ "bug" ] | created: 2025-07-30T20:59:45Z | updated: 2025-09-07T08:02:42Z | comments: 2 | by: jpitalopez
huggingface/lerobot #1631
🥚 Filtering Eggs on Moving Table: Dirt/Breakage Detection Feasibility
Hi 👋 Thanks a lot for your work on lerobot! I am exploring the use of lerobot to filter eggs based on dirt or breakage while they move past the robot on a conveyor table. The goal is to detect anomalies in real time and eventually eject faulty eggs. Some specific questions I have: * Do you have any advice or feedb...
https://github.com/huggingface/lerobot/issues/1631
open | labels: [ "question", "policies" ] | created: 2025-07-30T18:35:12Z | updated: 2025-08-12T09:07:41Z | comments: null | by: KannarFr
pytorch/ao #2631
What is the intention of "NF4WeightOnlyConfig" ?
Hi guys, I confuse about how this class is structured in project. 1. Why "NF4WeightOnlyConfig" does not work the same way like others config? Such as: ```python from torchao.dtypes._nf4tensor_api import NF4WeightOnlyConfig from torchao import quantize_ config = NF4WeightOnlyConfig() quantize_(model,config) ``` I a...
https://github.com/pytorch/ao/issues/2631
closed | labels: [] | created: 2025-07-30T13:57:44Z | updated: 2025-07-31T16:05:50Z | comments: null | by: hieubnt235
huggingface/optimum #2330
Patch Release to support `transformers~=4.53`
### System Info ```shell optimum[onnxruntime-gpu]==1.26.1 torch==2.7.1 vllm==0.10.0 docker run --rm -it --platform linux/amd64 ghcr.io/astral-sh/uv:debian bash ``` ### Who can help? @JingyaHuang @echarlaix ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An offici...
https://github.com/huggingface/optimum/issues/2330
closed | labels: [ "bug" ] | created: 2025-07-30T02:40:41Z | updated: 2025-07-31T02:54:31Z | comments: 1 | by: yxtay
pytorch/xla #9519
Behaviour of xm.all_gather() in SPMD mode
## ❓ Questions and Help I would like to confirm whether my MLIR compiler's handling of `xm.all_gather()` when running Torch-XLA in SPMD mode is correct. Say I have the following: - A tensor `t` with shape [8192, 784] - A 2D named mesh `(batch, model)` of 8 devices in a [2, 4] configuration: ``` Device Mesh: 0 1 2 3 4 ...
https://github.com/pytorch/xla/issues/9519
open | labels: [ "question", "distributed" ] | created: 2025-07-29T19:05:08Z | updated: 2025-07-30T17:40:04Z | comments: null | by: hshahTT
huggingface/lerobot #1622
Why is LeRobot’s policy ignoring additional camera streams despite custom `input_features`?
I'm training a SO101 arm policy with 3 video streams (`front`, `above`, `gripper`) and a state vector. The dataset can be found at this [link](https://huggingface.co/datasets/aaron-ser/SO101-Dataset/tree/main). I created a custom JSON config (the `train_config.json` below) that explicitly lists the three visual strea...
https://github.com/huggingface/lerobot/issues/1622
open | labels: [ "question", "policies" ] | created: 2025-07-29T14:07:14Z | updated: 2025-09-23T14:01:54Z | comments: null | by: Aaron-Serpilin
huggingface/trl #3797
How to view the training parameters after training is completed
How to view the training parameters after training is completed?I am using GRPOTrainer for training, but after training multiple times, I have forgotten the parameters I set. How can I view the saved training parameters?
https://github.com/huggingface/trl/issues/3797
open | labels: [ "❓ question", "🏋 GRPO" ] | created: 2025-07-29T09:42:52Z | updated: 2025-07-29T13:07:50Z | comments: null | by: Tuziking
huggingface/optimum #2329
Support for exporting paligemma to onnx
### Feature request I’ve tried to export google/paligemma-3b-mix-224 to onnx using optimum. But it outputs: "ValueError: Trying to export a paligemma model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as custom_onnx_configs. Please refer to https://huggingface.co/docs/opti...
https://github.com/huggingface/optimum/issues/2329
closed | labels: [ "Stale" ] | created: 2025-07-29T08:58:41Z | updated: 2025-09-06T02:04:25Z | comments: 2 | by: DashaMed555
pytorch/torchtitan #1482
Is there documentation on what exactly are 'dp_shard_mod_ep' and 'dp_shard_in_ep'] ?
Wondering where I can find detail on 'dp_shard_mod_ep', 'dp_shard_in_ep'] ? https://github.com/pytorch/torchtitan/blob/5bab356c29dfababd8f16ab7d8e3d50cba6326e5/torchtitan/distributed/parallel_dims.py#L70
https://github.com/pytorch/torchtitan/issues/1482
open | labels: [ "documentation" ] | created: 2025-07-29T06:48:43Z | updated: 2025-08-21T03:24:48Z | comments: null | by: githubsgi
pytorch/helion #392
ImportError: cannot import name 'triton_key' from 'torch._inductor.runtime.triton_compat'
Does Helion require nightly PyTorch? (I'm using 2.7.1)
https://github.com/pytorch/helion/issues/392
closed | labels: [ "question" ] | created: 2025-07-29T04:34:20Z | updated: 2025-08-25T21:20:54Z | comments: null | by: HanGuo97
huggingface/transformers #39744
_supports_static_cache disappear
### System Info transformers main branch ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reprodu...
https://github.com/huggingface/transformers/issues/39744
closed | labels: [ "bug" ] | created: 2025-07-29T02:36:04Z | updated: 2025-07-29T08:17:00Z | comments: 4 | by: jiqing-feng
pytorch/torchtitan #1478
Is FSDP+TP+EP supported for Llama4 ?
Wondering if FSDP+TP+EP is supported for pre-training LLama4 ?
https://github.com/pytorch/torchtitan/issues/1478
closed | labels: [ "question" ] | created: 2025-07-28T22:55:43Z | updated: 2025-08-21T02:36:59Z | comments: null | by: githubsgi
pytorch/pytorch #159295
Invalid onnx model is exported for model where data is assigned using a mask and index
### 🐛 Describe the bug Exporting a model to onnx which assigns data with a mask and index produces a model which does not work. Exporting the model: ```python import torch import torch.nn as nn class TestModel(nn.Module): def __init__(self): super().__init__() def forward(self, R): B = R.s...
https://github.com/pytorch/pytorch/issues/159295
closed | labels: [ "module: onnx", "triaged" ] | created: 2025-07-28T20:54:13Z | updated: 2025-09-03T20:13:32Z | comments: null | by: cgaudreau-ubisoft
pytorch/TensorRT #3722
❓ [Question] Exporting models using FlashAttention package
I'd love to export a PyTorch model to TensorRT. In this model I use flash-attn package to speed-up attention. Is this supported?
https://github.com/pytorch/TensorRT/issues/3722
open | labels: [ "question" ] | created: 2025-07-28T11:54:07Z | updated: 2025-07-28T17:42:23Z | comments: null | by: s1ddok
pytorch/pytorch #159249
[ONNX] How to export RMS Norm
### 🚀 The feature, motivation and pitch I'm converting a Pytorch model to ONNX format, but I got this error: ``` torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::rms_norm' to ONNX opset version 20 is not supported ``` ### Alternatives I have read the ONNX documentation. They said that this ...
https://github.com/pytorch/pytorch/issues/159249
closed | labels: [ "module: onnx", "triaged" ] | created: 2025-07-28T09:34:51Z | updated: 2025-07-30T14:22:46Z | comments: null | by: HuynhNguyenPhuc
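The unsupported-operator error above arises because `aten::rms_norm` has no lowering at that opset. A common workaround, sketched here under my own class name and defaults (not a pytorch/pytorch recommendation), is to write RMSNorm out of primitive ops (`pow`, `mean`, `rsqrt`) that every recent opset can represent, so `torch.onnx.export` decomposes it cleanly.

```python
import torch
import torch.nn as nn

class DecomposedRMSNorm(nn.Module):
    """RMSNorm built from primitive ops so it exports to ONNX without aten::rms_norm."""

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize by the root mean square over the last dimension, then scale.
        variance = x.pow(2).mean(dim=-1, keepdim=True)
        return self.weight * x * torch.rsqrt(variance + self.eps)
```

Swapping such a module in for the built-in one before export trades a fused kernel for portability; numerics match the standard RMSNorm definition up to floating-point error.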
huggingface/lerobot #1607
how to control a so-101 with trained ACT model?
https://huggingface.co/initie/test_pick_result This is my pre-trained model for grabbing the switch on the desk by ACT model. How to run this policy model on the Anaconda? Already by way of example, python -m lerobot.record --robot.type=so101_follower --robot.port=COM3 --robot.id=ammd_follower_arm --robot.camer...
https://github.com/huggingface/lerobot/issues/1607
open | labels: [ "question", "policies" ] | created: 2025-07-28T05:23:24Z | updated: 2025-10-15T03:28:50Z | comments: null | by: initia1013
huggingface/lerobot #1602
How to perform multi-GPU training for SMoVLA?
I noticed that the paper used 4 GPUs for pretraining, but the current training code doesn’t seem to support it. Could you provide the corresponding code?
https://github.com/huggingface/lerobot/issues/1602
closed | labels: [] | created: 2025-07-27T09:46:04Z | updated: 2025-07-28T08:40:01Z | comments: null | by: QZepHyr
huggingface/hmtl #72
How to create a website
https://github.com/huggingface/hmtl/issues/72
open | labels: [] | created: 2025-07-27T09:30:22Z | updated: 2025-07-27T09:30:22Z | comments: null | by: Chi23-ike
huggingface/text-generation-inference #3304
using trtllm-build instead of optimum-nvidia for engine building or optimum-nvidia wrong version ?
Hello, I'm experiencing significant issues when trying to use Text Generation Inference (TGI) with TensorRT-LLM as the backend. **Problem 1: Version Compatibility** I cannot use the latest version of TGI due to a known bug (see: https://github.com/huggingface/text-generation-inference/issues/3296). I'm therefore us...
https://github.com/huggingface/text-generation-inference/issues/3304
open | labels: [] | created: 2025-07-27T06:24:29Z | updated: 2025-10-06T09:56:29Z | comments: 4 | by: psykokwak-com
huggingface/transformers #39705
[i18n-<bn>] Translating docs to <Bengali>
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the Bengali-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingfac...
https://github.com/huggingface/transformers/issues/39705
open | labels: [ "WIP" ] | created: 2025-07-27T06:18:20Z | updated: 2025-07-27T11:58:32Z | comments: 1 | by: ankitdutta428
huggingface/transformers #39699
No flag to support Conditional Parameter Loading for gemma-3n-E2B models in transformer
### System Info Hi, While a lot has been mentioned about gemma-3n-E2B and gemma-3n-E4B about the COnditional parameter loading and reduced memory loading There is no configuration currently visible in transformers for supporting that. Is it possible to get the related configuration/code/documentation to make it work t...
https://github.com/huggingface/transformers/issues/39699
closed | labels: [ "bug" ] | created: 2025-07-26T18:08:00Z | updated: 2025-09-03T08:02:58Z | comments: 2 | by: aakashgaur01
huggingface/tokenizers #1835
Can you provide binary releases?
It seems that binaries are not available in recent versions. tokenizers module is essential for the latest models, and it would be preferable if it could be easily installed. Setting up a Rust compilation environment can be cumbersome, and it's almost impossible to do so offline. Could we possibly distribute somethi...
https://github.com/huggingface/tokenizers/issues/1835
closed | labels: [] | created: 2025-07-26T16:07:12Z | updated: 2025-09-08T13:49:52Z | comments: 4 | by: goldenmomonga
huggingface/lerobot #1599
Evaluation results of VLA models on MetaWorld Benchmark
Thank you for this excellent work! I noticed that the paper mentions evaluation results of VLA models on MetaWorld. However, in the original papers for Octo and π₀, results are only reported on the LIBERO benchmark, and I haven’t found their MetaWorld evaluations in other related studies. I’d like to know how Octo and ...
https://github.com/huggingface/lerobot/issues/1599
open | labels: [ "enhancement", "question", "policies", "simulation" ] | created: 2025-07-26T11:18:54Z | updated: 2025-08-12T09:17:44Z | comments: null | by: Zooy138
huggingface/transformers #39686
CRITICAL ISSUE REPORT! GEMMA 3 1B CANNOT RUN!
How to reproduce: Run this: ``` import torch from transformers import AutoModelForCausalLM, AutoTokenizer # Load the base model in FP16 base_model = AutoModelForCausalLM.from_pretrained( "unsloth/gemma-3-1b-pt", low_cpu_mem_usage=True, return_dict=True, torch_dtype=torch.float16, device_map="mps"...
https://github.com/huggingface/transformers/issues/39686
closed | labels: [] | created: 2025-07-26T00:22:27Z | updated: 2025-07-28T12:07:50Z | comments: 5 | by: yukiarimo
huggingface/lerobot #1592
Time spent on imitation learning training (ACT)
I use colab to make a policy with ACT model. The note said, "Training with the ACT policy for 100,000 steps typically takes about 1.5 hours on an NVIDIA A100 GPU,", and I used A100 model in colab too. However the expected time is 13 hours, which seems to be much longer than the standard value of 1.5 hours. Is it corre...
https://github.com/huggingface/lerobot/issues/1592
closed | labels: [ "question", "policies" ] | created: 2025-07-25T06:36:35Z | updated: 2025-10-08T08:32:32Z | comments: null | by: initia1013
huggingface/datasets #7699
Broken link in documentation for "Create a video dataset"
The link to "the [WebDataset documentation](https://webdataset.github.io/webdataset)." is broken. https://huggingface.co/docs/datasets/main/en/video_dataset#webdataset <img width="2048" height="264" alt="Image" src="https://github.com/user-attachments/assets/975dd10c-aad8-42fc-9fbc-de0e2747a326" />
https://github.com/huggingface/datasets/issues/7699
open | labels: [] | created: 2025-07-24T19:46:28Z | updated: 2025-07-25T15:27:47Z | comments: 1 | by: cleong110
huggingface/transformers #39637
[BUG] Run 111B+ Teacher distributed inference and 8B Student distributed training on multi-node H200 GPUs using the Transformers Trainer without encountering OOM errors?
Hello, first off, apologies if this information is already available elsewhere. I've searched through the documentation and existing issues but haven't found a clear answer to my question. I have access to 2 to 4 nodes (16 to 32 GPUs in total), each equipped with 8x140GB H200 GPUs. My objective is to perform large-sca...
https://github.com/huggingface/transformers/issues/39637
closed | labels: [] | created: 2025-07-24T15:05:38Z | updated: 2025-09-01T08:03:18Z | comments: 3 | by: seona21
huggingface/lerobot #1586
Real-world deploy on ALOHA Robot
How could I deploy the policies on the ALOHA robot? And how could I deploy in the real world?
https://github.com/huggingface/lerobot/issues/1586
open | labels: [ "question", "robots" ] | created: 2025-07-24T12:52:06Z | updated: 2025-08-21T16:18:26Z | comments: null | by: LogSSim
huggingface/diffusers #11984
A compatibility issue when using custom Stable Diffusion with pre-trained ControlNets
I have successfully fine-tuned a Stable Diffusion v1.5 model using the Dreambooth script, and the results are excellent. However, I've encountered a compatibility issue when using this custom model with pre-trained ControlNets. Since the Dreambooth process modifies the U-Net weights, the original ControlNet is no longe...
https://github.com/huggingface/diffusers/issues/11984
closed | labels: [] | created: 2025-07-24T09:16:55Z | updated: 2025-07-24T15:15:20Z | comments: 6 | by: ScienceLi1125
huggingface/lighteval #868
How to calculate perplexity from an OpenAI compatible API
Hello, I'm new to LightEval. I want to use LightEval to evaluate an LLM model that is served via an API. The API is OpenAI compatible. It also returns logprobs for each token. Is there a built-in function to evaluate the perplexity score? I'm asking because I see that it’s not implemented. https://github.com/huggingf...
https://github.com/huggingface/lighteval/issues/868
open | labels: [] | created: 2025-07-24T07:27:05Z | updated: 2025-07-24T07:27:05Z | comments: null | by: mrtpk
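Whether or not lighteval grows a built-in for this, the computation the issue above asks for is just the standard definition once the API has returned per-token log-probabilities: perplexity is the exponential of the negative mean logprob. A minimal stdlib-only sketch (function name is mine, not a lighteval API):

```python
import math

def perplexity(token_logprobs):
    """Perplexity from per-token natural-log probabilities,
    e.g. the `logprobs` field of an OpenAI-compatible completion response:
    PPL = exp(-(1/N) * sum(logprob_i)).
    """
    if not token_logprobs:
        raise ValueError("need at least one token logprob")
    return math.exp(-sum(token_logprobs) / len(token_logprobs))
```

For example, a sequence whose tokens each had probability 0.5 yields a perplexity of exactly 2.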
huggingface/lerobot #1580
Environment_State in act and SmolVLA policy
Hi, Thanks for the awesome work! I have been noticing a variable called observation.environment_state in the act policy. What is exactly the feature environment_state. Thanks!
https://github.com/huggingface/lerobot/issues/1580
closed | labels: [ "question", "policies" ] | created: 2025-07-24T03:32:31Z | updated: 2025-10-08T13:09:33Z | comments: null | by: kasiv008
pytorch/tutorials #3488
💡 [REQUEST] - tutorial on torchrl LLM API
### 🚀 Describe the improvement or the new tutorial I’d like to write a tutorial about TorchRL LLM post-training API including data formatting for RL, multi-turn conversation handling, tool usage etc @svekars what’s the policy on open-source models usage? Can I load and use a small model (0.5B) freely? ### Existing ...
https://github.com/pytorch/tutorials/issues/3488
open | labels: [ "tutorial-proposal" ] | created: 2025-07-23T21:23:25Z | updated: 2025-07-23T22:26:43Z | comments: 3 | by: vmoens
huggingface/transformers.js #1379
Why Do I Get Different Outputs in Python and JavaScript for the Same ONNX Model?
Hi , I'm running inference on the same ONNX model (t5-small-new) using both Python and JavaScript (via ONNX Runtime). However, I'm noticing that the outputs are different between the two environments, even though the inputs and model are the same. The output of the Python code is correct while JS is not accurate. Pyt...
https://github.com/huggingface/transformers.js/issues/1379
closed | labels: [ "question" ] | created: 2025-07-23T20:13:57Z | updated: 2025-08-29T23:43:21Z | comments: null | by: mahdin75
pytorch/executorch #12756
How to get ExecuTorch version in C++?
I using ExecuTorch in my C++ application and I want to get ExecuTorch version at compile time or runtime. But I haven't found some `#define` or `const std::string` like `EXECUTORCH_VERSION` or function like `get_version()`. For example, PyTorch has [`TORCH_VERSION`](https://github.com/pytorch/pytorch/blob/fe8f556006...
https://github.com/pytorch/executorch/issues/12756
open | labels: [ "module: runtime", "module: user experience" ] | created: 2025-07-23T19:17:40Z | updated: 2025-09-16T21:45:11Z | comments: null | by: eltimen
huggingface/transformers #39618
SageAttention for attention implementation?
### Feature request I've noticed it's been a while now, but transformers still only has flash attention as the fastest attention backend for calls like these: <img width="1307" height="780" alt="Image" src="https://github.com/user-attachments/assets/3f3d62f6-a166-4ca6-97a0-49263fd93299" /> Are there any plans to ad...
https://github.com/huggingface/transformers/issues/39618
open | labels: [ "Feature request" ] | created: 2025-07-23T19:10:47Z | updated: 2025-07-25T12:30:37Z | comments: 4 | by: Many0therFunctions
huggingface/diffusers #11977
how to load a finetuned model especially during validation phase
<img width="1034" height="743" alt="Image" src="https://github.com/user-attachments/assets/c4e9318f-10aa-4b91-9d60-e28a3be38f8a" /> As the above, I have finetuned the model and want to validate it, but the given demo which is train_dreambooth_sd3.py still uses "pipeline = StableDiffusion3Pipeline.from_pretrained( ...
https://github.com/huggingface/diffusers/issues/11977
open | labels: [] | created: 2025-07-23T11:54:16Z | updated: 2025-07-24T09:19:11Z | comments: null | by: micklexqg
pytorch/executorch #12749
How to run a executorch model directly from memory instead of saving it as a disk file
### 📚 The doc issue Hi, I wanted to know if there is any ExecuTorch runtime API that can accept a *.pte model available in the memory (in some sort of a buffer format) and use it to do load and infer? So far, I could only find a few which require the model to be passed as a disk file. ### Suggest a potential alt...
https://github.com/pytorch/executorch/issues/12749
open | labels: [ "module: extension" ] | created: 2025-07-23T11:09:04Z | updated: 2025-09-02T06:22:00Z | comments: null | by: vikasbalaga
huggingface/lerobot #1579
Is there a video backend supporting nondestructive encoding?
I saved images during recording through not deletng folder `images`. When I try to compare the first frame.png in `images` folder and dataset=make_dataset(config)'s first image, I found the saved png file is nondestructive. But the image I got by lerobot is not. How I find: in `def save_episode` ``` # img_dir...
https://github.com/huggingface/lerobot/issues/1579
open | labels: [ "question", "dataset" ] | created: 2025-07-23T08:38:39Z | updated: 2025-08-12T09:22:26Z | comments: null | by: milong26
huggingface/candle #3032
`matmul` (and others) Precision issues between Candle & PyTorch
We noticed there's some precision discrepancy in matrix multiplication and the linear layer between between Candle and PyTorch. This matters a lot when reproducing LLMs originated from PyTorch into Candle. We used the `hf_hub::api::Api` to get the safetensors from the hub and for testing the precision issues for each m...
https://github.com/huggingface/candle/issues/3032
closed | labels: [] | created: 2025-07-23T04:07:08Z | updated: 2025-09-27T21:25:51Z | comments: 4 | by: andrew-shc
huggingface/lerobot #1578
Lerobot metaworld dataset only provides 49 tasks
https://huggingface.co/datasets/lerobot/metaworld_mt50 There are only 49 tasks and "Push the puck to a goal" task repeates twice
https://github.com/huggingface/lerobot/issues/1578
open | labels: [ "question", "simulation" ] | created: 2025-07-23T04:03:17Z | updated: 2025-08-12T09:23:12Z | comments: null | by: chenkang455
huggingface/lerobot #1577
test failed after training SVLA
I collected 76 sets of data and used the same calibration file as during collection. However, after training for 24k steps, the model obtained was unable to complete the grasping task during inference. Can anyone help me deal with the problem? [dataset](https://huggingface.co/datasets/Xiaoyan97/orange_block_pickplace)
https://github.com/huggingface/lerobot/issues/1577
open | labels: [ "question", "policies" ] | created: 2025-07-23T03:59:26Z | updated: 2025-08-12T09:23:26Z | comments: null | by: Liu-Xiaoyan97
huggingface/lerobot #1576
Multiple Dataset training
How to train multiple lerobot dataset? is there any function I can use it
https://github.com/huggingface/lerobot/issues/1576
open | labels: [ "question", "dataset" ] | created: 2025-07-23T03:46:03Z | updated: 2025-10-10T09:30:06Z | comments: null | by: JustinKai0527
huggingface/transformers #39596
Does transformers support python3.13 -- disable-gil or python3.14 free threading?
Does transformers support python3.13 -- disable-gil or python3.14 free threading? I got an error when trying to install transformers on these two python versions.
https://github.com/huggingface/transformers/issues/39596
closed | labels: [] | created: 2025-07-23T02:34:03Z | updated: 2025-08-30T08:02:54Z | comments: 2 | by: SoulH-qqq
huggingface/transformers.js #1374
nanoVLM support
### Question I would like to know if there is any plan to support models built with nanoVLM [https://github.com/huggingface/nanoVLM], thanks.
https://github.com/huggingface/transformers.js/issues/1374
open | labels: [ "question" ] | created: 2025-07-22T11:43:57Z | updated: 2025-07-23T09:02:15Z | comments: null | by: sbrzz
huggingface/diffusers #11971
What is the minimum memory requirement for model training?
Hello, I would like to try training an SDXL model using my own dataset. What is the minimum memory size required for the model?
https://github.com/huggingface/diffusers/issues/11971
closed | labels: [] | created: 2025-07-22T07:52:28Z | updated: 2025-07-22T08:26:27Z | comments: null | by: WWWPPPGGG
pytorch/torchtitan #1439
Duplicate definition of vocab_size?
Hi @wwwjn @H-Huang @tianyu-l thanks for the amazing work on deepseek v3 Have a minor question: why is there a definition of vocab size here https://github.com/pytorch/torchtitan/blob/4e73af3e2c5f99ad3cb5a21612e615a64b0b75e7/torchtitan/models/deepseek_v3/__init__.py#L50-L51C9 which then gets overridden by the tokeniz...
https://github.com/pytorch/torchtitan/issues/1439
closed | labels: [] | created: 2025-07-21T22:59:11Z | updated: 2025-07-23T04:09:56Z | comments: 1 | by: vwxyzjn
huggingface/transformers #39565
Model forward execution in full eager mode?
I know there is a flag `attn_implementation` which could trigger specialized attention kernel implementation. Besides this, does everything run in native PyTorch eager mode? Does `transformers` have any other custom op or kernel? ```python model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B", device_m...
https://github.com/huggingface/transformers/issues/39565
closed | labels: [] | created: 2025-07-21T21:49:05Z | updated: 2025-08-21T08:34:59Z | comments: 3 | by: 22quinn
huggingface/lerobot #1564
How are Episode Stats used?
I'm looking to create a subset of an episode (ie sec 2-4) in a 30 second episode, and wanted to know how episode_stats are used later on for training / inference? Are they used to normalize model inputs or are they used somewhere else as well? ie. in modeling_act.py ``` self.normalize_inputs = Normalize( ...
https://github.com/huggingface/lerobot/issues/1564
closed | labels: [ "question", "policies", "processor" ] | created: 2025-07-21T19:06:21Z | updated: 2025-08-12T09:27:29Z | comments: null | by: andlyu
huggingface/lerobot #1561
will you release the libero ft&eval setting?
hello your smolVLA is a wonderful work ,i notice that you finetuned it on the **libero** and evalaute at the same time.but i couldn't achieve the same or similar success rate**(just 76% ,much lower than your '96%')** **have you use the async inference in libero?** I think it must be the different hyperparameters w...
https://github.com/huggingface/lerobot/issues/1561
closed | labels: [ "enhancement", "question", "policies" ] | created: 2025-07-21T13:57:13Z | updated: 2025-09-23T09:25:04Z | comments: null | by: JuilieZ
huggingface/transformers #39554
Why `is_causal` is not used in `flash_attention_forward` ?
I want to perform bidirectional attention in the Qwen3 model to train an embedding model, so I passed `is_causal=False` in the model `forward` (I manually added `is_causal` arguments in all `forward` method such as `Qwen3Model` and `Qwen3Attention` in`modeling_qwen3.py`): ```python class Qwen3Attention(nn.Module): ...
https://github.com/huggingface/transformers/issues/39554
closed | labels: [ "Flash Attention" ] | created: 2025-07-21T12:08:00Z | updated: 2025-11-11T12:32:41Z | comments: 9 | by: lucaswychan
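The behaviour the issue above is probing is easy to demonstrate outside transformers internals: with PyTorch's public SDPA API, bidirectional versus causal attention is controlled directly by `is_causal`. The issue reports that the transformers flash-attention code path derives its own causal flag, so a manually threaded `is_causal=False` may never reach the kernel. The snippet below uses only standard PyTorch calls; shapes are illustrative.

```python
import torch
import torch.nn.functional as F

# (batch, heads, seq_len, head_dim) layout expected by SDPA.
q = torch.randn(1, 2, 5, 8)
k = torch.randn(1, 2, 5, 8)
v = torch.randn(1, 2, 5, 8)

# Bidirectional attention: every query attends to every key.
bidirectional = F.scaled_dot_product_attention(q, k, v, is_causal=False)

# Causal attention: query i attends only to keys <= i.
causal = F.scaled_dot_product_attention(q, k, v, is_causal=True)

# The two agree only at the last query position, which sees all keys
# under both masks; elsewhere the causal mask changes the output.
```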
huggingface/peft #2660
Custom models LoRA
Is there any way to fine-tune models that are not in the support list or custom models? Currently, many public models have their LLM parts from Qwen. Can LLaMA-Factory use the Qwen template and only fine-tune the LLM part? Thank you
https://github.com/huggingface/peft/issues/2660
closed | labels: [] | created: 2025-07-21T11:52:30Z | updated: 2025-07-24T12:53:34Z | comments: 6 | by: stillbetter
huggingface/lerobot #1559
Is the current model framework suitable for using automatic mixed precision?
I saw that `.to(torch.float32)` and `.to(torch.bfloat16)` were used in many places in the Pi0 model code. Then I implemented parallel training of Pi0 based on accelerate, and found that if I want to use AMP, the code will report an error of dtype mismatch. I want to know whether the existing code is suitable for automa...
https://github.com/huggingface/lerobot/issues/1559
open | labels: [ "question", "policies" ] | created: 2025-07-21T10:45:26Z | updated: 2025-08-12T09:27:59Z | comments: null | by: xliu0105
huggingface/transformers #39549
Is there plan to integrate ColQwen2.5 into Transformers?
### Model description Is ColQwen2ForRetrieval integrated into the transformers library, and are there plans to add [ColQwen2.5](https://github.com/illuin-tech/colpali/blob/main/colpali_engine/models/qwen2_5/colqwen2_5/modeling_colqwen2_5.py) in the future? ### Open source status - [x] The model implementation is ava...
https://github.com/huggingface/transformers/issues/39549
closed | labels: [ "New model" ] | created: 2025-07-21T10:08:47Z | updated: 2025-11-03T23:31:08Z | comments: 0 | by: rebel-thkim
huggingface/diffusers #11966
How about forcing the first and last block on device when groupoffloading is used?
**Is your feature request related to a problem? Please describe.** When group offloading is enabled, the offload and onload cannot be streamed between steps and this is really a big time comsuming problem. **Describe the solution you'd like.** Is it possible to add an option that could make the first and last block fo...
https://github.com/huggingface/diffusers/issues/11966
open | labels: [ "contributions-welcome", "group-offloading" ] | created: 2025-07-21T08:38:30Z | updated: 2025-12-02T15:30:23Z | comments: 13 | by: seed93
huggingface/tokenizers #1829
The parameter in initial_alphabet of the "class BpeTrainer(Trainer)" does not allow more than one character to initialized
Hi everyone, I am working on Tamil and Sinhala languages which are morphologically rich languages, in these languages a character is actually a combination of multiple unicode codepoints (similar to emojis) so it would be greatly beneficial to initialize the BPE alphabet with graphemes instead of the characters. Is the...
https://github.com/huggingface/tokenizers/issues/1829
open | labels: [] | created: 2025-07-21T08:30:21Z | updated: 2025-07-21T08:30:21Z | comments: 0 | by: vmenan
huggingface/lerobot #1554
How to use local datasets to train and evaluate
Due to network issues, I want to use only local datasets during training and evaluation and prevent huggingface from uploading data or retrieve datasets on the hub.Is there any good solution?
https://github.com/huggingface/lerobot/issues/1554
closed | labels: [ "question", "dataset" ] | created: 2025-07-21T07:54:07Z | updated: 2025-10-08T12:58:32Z | comments: null | by: zym123321
pytorch/tutorials #3481
[BUG] - Broken links of PyTorch Libraries(torchao, torchrec etc) on the right side of the tutorial index page
### Add Link https://docs.pytorch.org/tutorials/index.html ### Describe the bug Those links to the "PyTorch Libraries" section on the side bar are broken, they should pointed to `https://docs.pytorch.org/ao` instead of `https://docs.ppytorch.org/ao`, same for other libraries. I searched the codebase and seems thes...
https://github.com/pytorch/tutorials/issues/3481
closed | labels: [ "bug", "website" ] | created: 2025-07-21T06:56:26Z | updated: 2025-07-22T15:46:47Z | comments: 2 | by: sniper35
huggingface/optimum #2324
AutoConfig.from_dict Missing in transformers==4.51.3 — Incompatibility with optimum==1.26.1
### System Info ```shell I am running into a critical compatibility issue between optimum and recent versions of transformers. ❗ Error Summary When using: transformers==4.51.3 optimum==1.26.1 onnx==1.17.0 onnxruntime==1.20.0 The following runtime error is thrown when attempting to load an ONNX model using ORTModelFo...
https://github.com/huggingface/optimum/issues/2324
open | labels: [ "bug" ] | created: 2025-07-21T06:04:58Z | updated: 2025-08-01T07:10:20Z | comments: 5 | by: rratnakar09
huggingface/diffusers #11964
KeyError when loading LoRA for Flux model: missing lora_unet_final_layer_adaLN_modulation_1 weights
I'm trying to run Overlay-Kontext-Dev-LoRA locally by loading the LoRA weights using the pipe.load_lora_weights() function. However, I encountered the following error during execution: > KeyError: 'lora_unet_final_layer_adaLN_modulation_1.lora_down.weight' ``` import torch from diffusers import DiffusionPipeline fro...
https://github.com/huggingface/diffusers/issues/11964
open | labels: [] | created: 2025-07-21T05:16:34Z | updated: 2025-07-21T09:14:00Z | comments: 1 | by: NEWbie0709
huggingface/transformers #39545
Is the new Intel–Weizmann speculative decoding algorithm integrated into Transformers?
Hi, I recently read about a new speculative decoding algorithm developed by Intel Labs and the Weizmann Institute, which reportedly improves inference speed by up to 2.8×, even when using draft and target models with different vocabularies or architectures. References: - [Intel Newsroom](https://newsroom.intel.com/a...
https://github.com/huggingface/transformers/issues/39545
closed | labels: [] | created: 2025-07-21T02:47:48Z | updated: 2025-07-22T12:15:54Z | comments: 4 | by: NEWbie0709
huggingface/lerobot #1552
Support smolvla training on Intel GPU
Current script is only supporting `cuda`, `mps` and `cpu`. With PyTorch 2.7 with Intel GPU support, once PyTorch is installed, Intel GPU can be utilized in the training script.
https://github.com/huggingface/lerobot/issues/1552
open | labels: [ "enhancement", "question", "policies" ] | created: 2025-07-21T01:47:38Z | updated: 2025-10-09T07:40:10Z | comments: null | by: xiangyang-95
huggingface/transformers #39542
ValueError: You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time
### System Info - `transformers` version: 4.53.2 - Platform: **Ubuntu 22.04** Linux 5.15.0-139-generic - **Python 3.10.18** + ipykernel 6.29.5 - Pytorch 2.7.1+cu118 ### Who can help? @ArthurZucker @SunMarc ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An offi...
https://github.com/huggingface/transformers/issues/39542
closed
[ "Usage", "Good First Issue", "trainer", "bug" ]
2025-07-21T01:06:27Z
2025-08-22T05:53:51Z
10
xjackzenvey
huggingface/transformers
39,551
InformerForPrediction [I would like to seek your opinions, everyone, How can I set the dynamic real features for prediction]
Here is the description cited from the docs of InformerForPrediction: > future_time_features (torch.FloatTensor of shape (batch_size, prediction_length, num_features)) — Required time features for the prediction window, which the model internally will add to future_values. These could be things like “month of year”, “...
https://github.com/huggingface/transformers/issues/39551
closed
[]
2025-07-20T11:38:50Z
2025-08-28T08:03:20Z
null
2004learner
pytorch/torchtitan
1,422
[Gemma3] Support?
Hi Authors, Is there a plan for the Gemma3 series? Best, Peter
https://github.com/pytorch/torchtitan/issues/1422
open
[]
2025-07-20T03:22:02Z
2025-08-21T03:25:09Z
1
YHPeter
huggingface/diffusers
11,961
New Adapter/Pipeline Request: IT-Blender for Creative Conceptual Blending
## Model/Pipeline/Scheduler description ### Name of the model/pipeline/scheduler "Image-and-Text Concept Blender" (IT-Blender), a diffusion adapter that blends visual concepts from a real reference image with textual concepts from a prompt in a disentangled manner. The goal is to enhance human creativity in design tas...
https://github.com/huggingface/diffusers/issues/11961
open
[]
2025-07-20T03:07:38Z
2025-07-20T03:08:06Z
0
WonwoongCho
huggingface/transformers
39,522
T5Gemma failing on provided example
### System Info - `transformers` version: 4.53.2 - Platform: Linux-6.14.0-23-generic-x86_64-with-glibc2.41 - Python version: 3.13.3 - Huggingface_hub version: 0.33.4 - Safetensors version: 0.5.3 - Accelerate version: 1.8.1 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: NO - mixed_prec...
https://github.com/huggingface/transformers/issues/39522
closed
[ "bug" ]
2025-07-19T11:07:26Z
2025-08-27T07:51:08Z
7
jadermcs
pytorch/executorch
12,659
Fix bug in export recipe logic where quantization output is not being forwarded and reexport if quantized.
### 🚀 The feature, motivation and pitch I've found a couple of issues with the original export recipe logic; it has incomplete functionality: 1. The output of the quantize stage is not getting propagated to the next stages 2. When the quantize stage is run, we should re-export the model before we lower to edge. ### Alternatives _No...
https://github.com/pytorch/executorch/issues/12659
closed
[ "module: exir", "triaged" ]
2025-07-19T03:22:30Z
2025-07-23T21:48:14Z
null
abhinaykukkadapu
huggingface/lerobot
1,540
Controlling robot with text using SmolVLA
Is it possible to control the robot with text inputs? I thought that's what a VLA model was... I cannot find any instructions on how to do this anywhere... I found this https://huggingface.co/masato-ka/smolvla_block_instruction , but control_robot was split into multiple files recently - none of which seem to work....
https://github.com/huggingface/lerobot/issues/1540
open
[ "question", "policies" ]
2025-07-18T23:09:11Z
2025-08-12T09:35:59Z
null
drain-pipe
pytorch/tutorials
3,473
💡trace images are too small to see anything
### 🚀 Describe the improvement or the new tutorial The trace images in https://docs.pytorch.org/tutorials/intermediate/pinmem_nonblock.html are not quite readable because they are massively scaled down. Is it possible to make them clickable/zoom-able? <img width="901" height="1120" alt="Image" src="https://github.co...
https://github.com/pytorch/tutorials/issues/3473
open
[ "website" ]
2025-07-18T22:18:37Z
2025-07-18T22:34:43Z
0
stas00
huggingface/diffusers
11,956
Frequency-Decoupled Guidance (FDG) for diffusion models
FDG is a new method for applying CFG in the frequency domain. It improves generation quality at low CFG scales while inherently avoiding the harmful effects of high CFG values. It could be a nice addition to the guiders part of diffusers. The implementation details for FDG are available on page 19 of the paper. https:...
https://github.com/huggingface/diffusers/issues/11956
closed
[ "help wanted", "Good second issue", "contributions-welcome", "advanced", "consider-for-modular-diffusers" ]
2025-07-18T19:12:50Z
2025-08-07T05:51:03Z
5
Msadat97
pytorch/torchtitan
1,415
[Feature request] Use omegaconf or hydra for the config system
Is there a plan to use Omegaconf or Hydra for the configuration system? The current .toml-based configuration system is simple but verbose: it does not support configuration inheritance or composition, which prevents config reuse. If this is needed, I am interested in contributing an alternative configuration soluti...
https://github.com/pytorch/torchtitan/issues/1415
open
[]
2025-07-18T18:28:34Z
2025-07-19T00:49:55Z
3
yzhao30
huggingface/datasets
7,689
BadRequestError for loading dataset?
### Describe the bug Up until a couple days ago I was having no issues loading `Helsinki-NLP/europarl` and `Helsinki-NLP/un_pc`, but now suddenly I get the following error: ``` huggingface_hub.errors.BadRequestError: (Request ID: ...) Bad request: * Invalid input: expected array, received string * at paths * Invalid...
https://github.com/huggingface/datasets/issues/7689
closed
[]
2025-07-18T09:30:04Z
2025-07-18T11:59:51Z
17
WPoelman
huggingface/diffusers
11,951
Kontext model loading quantization problem
Hello, can Kontext currently be loaded quantized? Because I only have a 4090 with 24G of video memory, the current fp16 loading method will cause OOM. Like Flux, can it be loaded with torchao or GGUF, so that this model can run on a 4090?
https://github.com/huggingface/diffusers/issues/11951
closed
[]
2025-07-18T03:20:48Z
2025-07-18T05:39:28Z
2
babyta
pytorch/executorch
12,627
How to build executorch for Cortex-A cpu
### 🚀 The feature, motivation and pitch I want to run executorch on Cortex-A CPU devices; how can I do this? Thank you very much ### Alternatives _No response_ ### Additional context _No response_ ### RFC (Optional) _No response_
https://github.com/pytorch/executorch/issues/12627
closed
[ "need-user-input", "triaged" ]
2025-07-18T01:34:51Z
2025-07-21T12:28:34Z
null
barbecacov
huggingface/transformers
39,484
Transformers still tries to use apex.amp which is no longer a thing in apex.
### System Info ``` root@12bb27e08b1b:/# pip show transformers Name: transformers Version: 4.52.3 ``` trainer.py contains this: ``` if is_apex_available(): from apex import amp ``` Apex (built from source, as they recommend) no longer comes with amp. How to reproduce? 1. install transformers 2. install ap...
https://github.com/huggingface/transformers/issues/39484
closed
[ "bug" ]
2025-07-17T16:43:14Z
2025-08-25T08:03:03Z
4
yselivonchyk
huggingface/datasets
7,688
No module named "distributed"
### Describe the bug hello, when I run the command "from datasets.distributed import split_dataset_by_node", I always get the error "No module named 'datasets.distributed'" in different versions like 4.0.0, 2.21.0 and so on. How can I solve this? ### Steps to reproduce the bug 1. pip install datasets 2. from datasets.di...
https://github.com/huggingface/datasets/issues/7688
open
[]
2025-07-17T09:32:35Z
2025-07-25T15:14:19Z
3
yingtongxiong
huggingface/alignment-handbook
220
A little question: why is the number of examples much less than the total amount of my training dataset?
I am using this repo to SFT a model, and I notice that: I print the total amount of my training dataset, which is 7473 `Number of raw training samples: 7473` But during training, I find the log: [INFO|trainer.py:2314] 2025-07-17 17:03:23,908 >> ***** Running training ***** [INFO|trainer.py:2315] 2025-07-17 17:03:23...
https://github.com/huggingface/alignment-handbook/issues/220
closed
[]
2025-07-17T09:12:08Z
2025-07-23T23:30:33Z
3
Red-Scarff
pytorch/ao
2,566
FP8 PerRow quantization (CUDA capability>=9.0)
I found a description as below: -------------------------------------------------------------------------------------------------------------- A8W8 Float8 Dynamic Quantization with Rowwise Scaling # for torch 2.5+ from torchao.quantization import quantize_, PerRow, Float8DynamicActivationFloat8WeightConfig quantize_(mo...
https://github.com/pytorch/ao/issues/2566
open
[]
2025-07-17T04:04:24Z
2025-07-17T18:26:55Z
2
zzlin-0629
pytorch/TensorRT
3,691
❓ [Question] How to understand the value of this project
## ❓ Question I am sorry, I have not used this tool before. But since there is a `tensorrt` package released in the Nvidia TensorRT lib, and this project depends on the Nvidia TensorRT lib, what is the value of this project? Is it safer to use this tool to convert PyTorch checkpoints to a TensorRT engine file directly, the...
https://github.com/pytorch/TensorRT/issues/3691
closed
[ "question" ]
2025-07-17T03:47:19Z
2025-08-19T07:22:22Z
null
JohnHerry
huggingface/diffusers
11,945
Floating point exception with nightly PyTorch and CUDA
### Describe the bug When running any code snippet using diffusers it fails with floating point exception, and doesn't print any traceback. For example this one would cause the issue (the example of Stable Diffusion 3.5 medium): ``` import torch from diffusers import StableDiffusion3Pipeline pipe = StableDiffusion3...
https://github.com/huggingface/diffusers/issues/11945
open
[ "bug" ]
2025-07-17T03:16:02Z
2025-08-02T13:48:05Z
1
MxtAppz
huggingface/course
1,009
How Transformers solve tasks - ASR section refers to task using Whisper but task actually uses Wav2Vec2
The [Automatic speech recognition](https://huggingface.co/learn/llm-course/chapter1/5?fw=pt#automatic-speech-recognition) segment of Section 1 "Transformer Models" > "How 🤗 Transformers solve tasks" refers to > Check out our complete [automatic speech recognition guide](https://huggingface.co/docs/transformers/tasks...
https://github.com/huggingface/course/issues/1009
open
[]
2025-07-16T23:25:55Z
2025-07-16T23:25:55Z
null
renet10
huggingface/diffusers
11,930
how to run convert_cosmos_to_diffusers.py correctly?
### Describe the bug hi. I have tried to convert the cosmos-transfer1's base model to diffusers using "convert_cosmos_to_diffusers.py" code with options --transformer_type Cosmos-1.0-Diffusion-7B-Video2World --vae_type CV8x8x8-1.0 --transformer_ckpt_path ../fsdp_edge_v1/iter_000016000_ema_model_only.pt --output_path ....
https://github.com/huggingface/diffusers/issues/11930
open
[ "bug" ]
2025-07-15T16:20:09Z
2025-07-15T16:24:47Z
null
dedoogong
huggingface/transformers
39,426
object detection: matching outputs.last_hidden_state with results
### Feature request it seems to me that would be possible with a little modification in the function post_process_object_detection with ``` for score, label, box, index in zip(scores, labels, boxes, indexes): results.append( { "scores": score[score > threshold], ...
https://github.com/huggingface/transformers/issues/39426
open
[ "Feature request" ]
2025-07-15T13:34:08Z
2025-07-22T11:08:23Z
5
fenaux
huggingface/peft
2,647
How can I merge the original model weights with LoRA weights?
I'm currently fine-tuning Qwen2.5_VL. Specifically, I used PEFT for LoRA fine-tuning on the linear layers of the LLM part. Meanwhile, I performed regular fine-tuning on other components like visual.merger and embed_tokens (with param.requires_grad set to True). After generating the files, as follows: <img width="946" h...
https://github.com/huggingface/peft/issues/2647
closed
[]
2025-07-15T11:40:33Z
2025-08-23T15:03:44Z
4
guoguo1314
huggingface/transformers
39,421
Speculative Decoding (do_sample=False) gets different outputs
> @transcend-0 hey! The issue was solved in [#30068](https://github.com/huggingface/transformers/pull/30068). You can install transformers from `main` with the following line for the correct generation with assisted decoding: `!pip install --upgrade git+https://github.com/huggingface/transformers....
https://github.com/huggingface/transformers/issues/39421
closed
[]
2025-07-15T11:36:31Z
2025-07-19T03:11:04Z
13
nighty8
pytorch/TensorRT
3,683
❓ [Question] HELP: dynamic shape of offset and input is not supported in aten_ops_embedding_bag converter
## offset and input with dynamic shape is not supported It fails when using TensorRT to compile an embedding bag module with dynamic shape in AOT mode. What confuses me is whether the aten_ops_embedding_bag converter supports dynamic shapes for the offset and indices parameters. The official test demo only covers ...
https://github.com/pytorch/TensorRT/issues/3683
closed
[ "question" ]
2025-07-15T09:01:39Z
2025-09-09T20:44:07Z
null
theflyfish
huggingface/lerobot
1,508
so101_dualarm_triplecam config to evaluate ACT policy?
I recently fine-tuned an ACT policy where my data was from 3 cameras (1 overhead + 2 wrist) and two so101's. Then I tried to evaluate it but noticed there is currently a config file missing to support this. Does or will this support exist soon?
https://github.com/huggingface/lerobot/issues/1508
open
[ "question", "robots" ]
2025-07-15T03:44:32Z
2025-08-12T09:30:41Z
null
sebastiandavidlee
huggingface/transformers
39,410
FP8 training support for Model Parallel / Tensor Parallel (MP/TP)
### Feature request I receive the message "ValueError: The model you are trying to fine-tune is quantized with QuantizationMethod.FP8 but that quantization method do not support training. Please open an issue on GitHub: https://github.com/huggingface/transformers to request the support for training support for Quantizatio...
https://github.com/huggingface/transformers/issues/39410
open
[ "Feature request" ]
2025-07-15T02:13:05Z
2025-07-15T13:30:27Z
2
edgeinfinity1
huggingface/transformers
39,409
TypeError: couldn't find storage object Float8_e4m3fnStorage - which version is needed for this?
Tested so many versions but can't find a version that won't give this error ``` !pip install bitsandbytes==0.45.0 --upgrade !pip install insightface --upgrade !pip install huggingface_hub==0.25.1 hf_transfer diffusers==0.31.0 transformers==4.36.0 !pip uninstall xformers triton --yes !pip install torch==2.2.0+cu121 to...
https://github.com/huggingface/transformers/issues/39409
closed
[ "bug" ]
2025-07-15T01:51:08Z
2025-08-02T12:06:59Z
1
FurkanGozukara
huggingface/datasets
7,682
Fail to cast Audio feature for numpy arrays in datasets 4.0.0
### Describe the bug Casting features with Audio for numpy arrays - done here with `ds.map(gen_sine, features=features)` fails in version 4.0.0 but not in version 3.6.0 ### Steps to reproduce the bug The following `uv script` should be able to reproduce the bug in version 4.0.0 and pass in version 3.6.0 on a macOS ...
https://github.com/huggingface/datasets/issues/7682
closed
[]
2025-07-14T18:41:02Z
2025-07-15T12:10:39Z
2
luatil-cloud
huggingface/lerobot
1,507
[PI0] Evaluation result on the metaworld
Has anyone tried training pi0 on the Metaworld benchmark? My evaluation results are relatively low (~30%).
https://github.com/huggingface/lerobot/issues/1507
closed
[ "bug", "question", "policies", "simulation" ]
2025-07-14T14:56:38Z
2025-10-08T08:47:31Z
null
chenkang455
huggingface/transformers
39,401
Qwen3 tokenizer wrong offset_mapping
### System Info transformers 4.53.2, Ubuntu 22.04.4, python 3.11.13 ### Who can help? @ArthurZucker and @itazap There must be a problem with the `offset_mapping` of Qwen3 `tokenizer`. The starting point in the text for each token, except the first and the last, is one position behind. I compared it with the BERT's `...
https://github.com/huggingface/transformers/issues/39401
closed
[ "bug" ]
2025-07-14T14:21:08Z
2025-07-16T09:59:35Z
4
contribcode
huggingface/lerobot
1,506
episode: None
When I run "python -m lerobot.scripts.train --dataset.root=./lerobot_datasets/my_robot_dataset/ --output_dir=./lerobot_datasets/outputs/ --policy.type=pi0 --dataset.repo_id=lerobot/tape --policy.push_to_hub=false", I got ‘’ 'dataset': {'episodes': None, 'image_transforms': {'enable': False... } ‘’. Is thi...
https://github.com/huggingface/lerobot/issues/1506
open
[ "question", "policies" ]
2025-07-14T13:29:07Z
2025-08-12T09:31:16Z
null
LogSSim
huggingface/finetrainers
420
How to fine-tune Wan 2.1 with Context Parallelism?
I am trying to fine-tune the Wan 2.1 model and would like to leverage the Context Parallelism (CP) feature to manage memory and scale the training. I saw in the main README that `CP support` is listed as a key feature. I have looked through the `examples/training` directory and the documentation, but I couldn't find a...
https://github.com/huggingface/finetrainers/issues/420
open
[]
2025-07-14T06:55:39Z
2025-07-15T05:09:45Z
null
vviper25
huggingface/lerobot
1,503
LeRobot So100 and Groot N1.5 Model Multi-Robot Deployment Feasibility Inquiry
Hello, I am conducting various tests using LeRobot's So100 (robot arm) with Groot N1.5 for training. I have some questions to ask. **Main Question** Is it possible to simultaneously apply a model trained with Groot N1.5 base on one robot to multiple robots of the same model? **Question Background (Actual Experience)*...
https://github.com/huggingface/lerobot/issues/1503
open
[ "enhancement", "question", "policies", "dataset" ]
2025-07-14T05:55:44Z
2025-08-12T09:31:35Z
null
devedgar
pytorch/helion
303
RuntimeError: Tile(0) is not tracked with proxy for
Hi, I noticed the following when a tile is used in a function: Code: ```python import ast import torch import helion import helion.language as hl from helion.language import _decorators from helion._compiler.inductor_lowering import CodegenState @_decorators.api() def func( tensor: torch.Tensor, tile: tuple[i...
https://github.com/pytorch/helion/issues/303
closed
[ "question" ]
2025-07-13T07:45:20Z
2025-08-25T21:25:22Z
null
HanGuo97