Dataset schema (columns and value statistics):
repo: stringclasses, 147 values
number: int64, 1 to 172k
title: stringlengths, 2 to 476
body: stringlengths, 0 to 5k
url: stringlengths, 39 to 70
state: stringclasses, 2 values
labels: listlengths, 0 to 9
created_at: timestamp[ns, tz=UTC], 2017-01-18 18:50:08 to 2026-01-06 07:33:18
updated_at: timestamp[ns, tz=UTC], 2017-01-18 19:20:07 to 2026-01-06 08:03:39
comments: int64, 0 to 58
user: stringlengths, 2 to 28
huggingface/alignment-handbook
215
Use alignment-handbook on Apple Silicon
Hi, is it possible to install and use this tool on Apple Silicon? I am aware that certain dependencies, such as Flash Attention, do not work on Apple Silicon. Has anyone tried and successfully installed this tool without those dependencies?
https://github.com/huggingface/alignment-handbook/issues/215
closed
[]
2025-04-11T01:28:02Z
2025-04-27T01:09:55Z
0
minhquoc0712
huggingface/lerobot
968
How can I do robot simulation without a physical robot, and how should I learn?
Without a physical robot, how can I work with a simulated robot? How should I learn robot simulation? Are there any good recommendations?
https://github.com/huggingface/lerobot/issues/968
closed
[ "question", "simulation" ]
2025-04-10T18:10:47Z
2025-10-08T12:54:19Z
null
harryhu0301
huggingface/diffusers
11,285
Value errors when converting to/from diffusers format and original Stable Diffusion
### Describe the bug There's a hardcoded 77-token assumption somewhere, when it should be using the dimensions of what is actually in the model. I have a diffusers-layout SD1.5 model with LongCLIP. https://huggingface.co/opendiffusionai/xllsd-alpha0 I can pull it locally, then convert to single file format, with python ...
https://github.com/huggingface/diffusers/issues/11285
open
[ "bug" ]
2025-04-10T17:16:42Z
2025-05-12T15:03:03Z
2
ppbrown
huggingface/diffusers
11,272
What is the difference between `from diffusion import ***` and `from diffusers import ***`?
I have installed diffusers and it runs OK; however, the code fails with "no module named diffusion" when it reaches `from diffusion import ***`. What is the difference between `from diffusion import ***` and `from diffusers import ***`? Do I need to install both, and what is the difference between diffusion and diffusers?
https://github.com/huggingface/diffusers/issues/11272
closed
[]
2025-04-10T05:11:56Z
2025-04-30T02:11:51Z
null
micklexqg
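A stdlib-only way to check which of the two module names actually exists in an environment. `module_installed` is a hypothetical helper, and the expectation about `pip install diffusers` is an assumption about that package's layout:

```python
import importlib.util

# Hedged sketch: the Hugging Face library installs a module named "diffusers";
# there is no companion module named "diffusion", so `from diffusion import *`
# raises ModuleNotFoundError even when diffusers is installed.
def module_installed(name: str) -> bool:
    """Return True if a top-level module with this name is importable."""
    return importlib.util.find_spec(name) is not None

# In an environment with only `pip install diffusers`, one would expect
# module_installed("diffusers") to be True and module_installed("diffusion")
# to be False.
```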
huggingface/inference-benchmarker
11
How to set the OPENAI_API_KEY?
There is no api_key param for inference-benchmarker. How to set the OPENAI_API_KEY? Thanks~ code there: https://github.com/huggingface/inference-benchmarker/blob/d91a0162bdfe318fe95b9a9bbb53b1bdc39194a9/src/requests.rs#L145C1-L153C36 ```bash root@P8757303A244:/opt/inference-benchmarker# inference-benchmarker -h Usage...
https://github.com/huggingface/inference-benchmarker/issues/11
closed
[]
2025-04-10T04:36:11Z
2025-04-25T13:13:18Z
null
handsome-chips
huggingface/transformers
37,408
How to solve the error when converting a Qwen ONNX model to a TensorRT model?
### **1. The transformers' Qwen ONNX model has been exported successfully.** ### **2. Convert ONNX_model to tensorRT_model failed by trtexec.** **error info** ``` [04/10/2025-11:04:52] [E] Error[3]: IExecutionContext::setInputShape: Error Code 3: API Usage Error (Parameter check failed, condition: engineDims.d[i] ==...
https://github.com/huggingface/transformers/issues/37408
closed
[]
2025-04-10T04:08:47Z
2025-06-28T08:03:06Z
null
dearwind153
pytorch/pytorch
150,967
[MPS] `where`: silent incorrectness when cond is not contiguous
### 🐛 Describe the bug ```python device = "mps" diff = torch.tensor([[True, True], [True, True]], dtype=torch.bool) diff = diff.T target = torch.tensor([[0, 0], [0, 1]]) rcpu = torch.where(diff, target, 0) diffmps = diff.to(device) targetmps = target.to(device) rmps = torch.where(diffmps, targetmps, 0) print(rc...
https://github.com/pytorch/pytorch/issues/150967
closed
[ "triaged", "module: correctness (silent)", "module: mps" ]
2025-04-09T23:13:38Z
2025-04-13T20:44:52Z
null
qqaatw
pytorch/torchtitan
1,081
Torch.compile and TP during multiresolution Training
Is it correct to assume that we should only enable torch.compile in single-resolution training, or when we have the same sequence lengths, to avoid recompiles and slowdowns?
https://github.com/pytorch/torchtitan/issues/1081
open
[ "question", "module: torch.compile" ]
2025-04-09T18:08:41Z
2025-04-10T15:05:57Z
null
nighting0le01
huggingface/lerobot
964
RuntimeError: Could not load libtorchcodec during lerobot/scripts/train.py script
### System Info ```Shell - `lerobot` version: 0.1.0 - Platform: Linux-6.8.0-57-generic-x86_64-with-glibc2.35 - Python version: 3.10.13 - Huggingface_hub version: 0.29.3 - Dataset version: 3.4.1 - Numpy version: 1.26.4 - PyTorch version (GPU?): 2.5.1+cu124 (True) - Cuda version: 12040 Additionally: ffmpeg version :...
https://github.com/huggingface/lerobot/issues/964
closed
[ "question" ]
2025-04-09T14:25:38Z
2025-04-15T13:32:24Z
null
shrutichakraborty
huggingface/transformers
37,390
how to reduce original model's tokenizer vocabulary
### Feature request I am working on model distillation. I am currently using the nllb-distilled-600M model, but the parameters of this model are still too large, and the vocabulary supports more than 100 languages. My use case is single-language translation, such as English to Hebrew. Therefore, I need to reduce the...
https://github.com/huggingface/transformers/issues/37390
open
[ "Feature request" ]
2025-04-09T10:45:56Z
2025-04-09T10:53:07Z
null
masterwang22327
huggingface/datasets
7,506
HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access Fineweb-10BT on 4A100 GPUs using SLURM
### Describe the bug I am trying to run some finetunings on 4 A100 GPUs using SLURM using axolotl training framework which in turn uses Huggingface's Trainer and Accelerate on [Fineweb-10BT](https://huggingface.co/datasets/HuggingFaceFW/fineweb), but I end up running into 429 Client Error: Too Many Requests for URL er...
https://github.com/huggingface/datasets/issues/7506
open
[]
2025-04-09T06:32:04Z
2025-06-29T06:04:59Z
2
calvintanama
huggingface/lerobot
960
pi0-finetune-performance
I have been fine-tuning the provided pi0-base model on my dataset using LeRobot. After training for 100,000 steps, I found that the model performs well on tasks that appeared in my dataset, but its performance on unseen tasks is very poor. It seems to lack the generalization ability of a VLA model. Is this phenomenon n...
https://github.com/huggingface/lerobot/issues/960
closed
[ "question", "policies" ]
2025-04-09T01:21:12Z
2025-10-08T08:43:22Z
null
yanghb1
pytorch/pytorch
150,891
[ONNX] How to export Llama4
### 🐛 Describe the bug I am trying to do an onnx export for the Llama 4 Scout model but it fails saying: `RuntimeError: Only tuples, lists and Variables are supported as JIT inputs/outputs. Dictionaries and strings are also accepted, but their usage is not recommended. Here, received an input of unsupported type: Dyn...
https://github.com/pytorch/pytorch/issues/150891
closed
[ "module: onnx", "triaged" ]
2025-04-09T00:11:49Z
2025-11-13T10:13:08Z
null
srijanie03
huggingface/lerobot
956
pi0 multi-GPU training
If I have multiple 4090s, how do I modify the training to use them for pi0? With only one 4090 it just errors. ![Image](https://github.com/user-attachments/assets/5f1900f2-6d0a-4e05-be99-81587f0bb22d)
https://github.com/huggingface/lerobot/issues/956
closed
[ "question" ]
2025-04-08T13:06:27Z
2025-11-20T03:07:56Z
null
ximiluuuu
huggingface/transformers
37,364
How to find a specific function's doc in the transformers documentation?
### Feature request Better UX for docs ### Motivation The search and UI layout make it very hard to find a function's doc, especially when there are so many function docs on one webpage and you just cannot find what you want with in-page search. ### Your contribution No, not right now.
https://github.com/huggingface/transformers/issues/37364
open
[ "Feature request" ]
2025-04-08T10:48:04Z
2025-09-15T19:16:35Z
null
habaohaba
huggingface/open-r1
586
what is next for this project?
https://github.com/huggingface/open-r1/issues/586
open
[]
2025-04-07T21:29:54Z
2025-04-07T21:29:54Z
null
Mnaik2
pytorch/xla
8,948
Torch-XLA not compatible with static python
## ❓ Questions and Help I am trying to use Torch-XLA v2.3.0 but it fails with: ``` line 7, in <module> import _XLAC ImportError: libpython3.10.so.1.0: cannot open shared object file: No such file or directory ``` I noticed this message [here](https://github.com/pytorch/xla/blob/9e23ca853331aa229dcdba2473d20ca5af2d...
https://github.com/pytorch/xla/issues/8948
open
[ "question" ]
2025-04-07T18:25:43Z
2025-04-23T14:32:47Z
null
drewjenks01
huggingface/lerobot
949
Optional deps when using LeRobot as an optional package
Hi, we are working on enabling LeRobot dataset generation in [IsaacLab](https://github.com/isaac-sim/IsaacLab), such that developers could create data with IsaacLab data generation workflow and use it in their robot learning models. The asks are, 1. Is there any scheduled release, such that downstream devs could ha...
https://github.com/huggingface/lerobot/issues/949
closed
[ "question", "dataset", "simulation", "stale" ]
2025-04-07T16:55:48Z
2025-10-21T02:29:27Z
null
xyao-nv
huggingface/datasets
7,502
`load_dataset` of size 40GB creates a cache of >720GB
Hi there, I am trying to load a dataset from the Hugging Face Hub and split it into train and validation splits. Somehow, when I try to do it with `load_dataset`, it exhausts my disk quota. So, I tried manually downloading the parquet files from the hub and loading them as follows: ```python ds = DatasetDict( ...
https://github.com/huggingface/datasets/issues/7502
closed
[]
2025-04-07T16:52:34Z
2025-04-15T15:22:12Z
2
pietrolesci
huggingface/trl
3,254
How to get completion_length?
I noticed that during GRPO training, `completion_length` is recorded. However, I found that it’s not simply obtained by `len(completion)`. How is this calculated—by tokens? Is it possible for me to access the `completion_length` for each sample?
https://github.com/huggingface/trl/issues/3254
open
[ "❓ question", "🏋 GRPO" ]
2025-04-07T15:02:04Z
2025-04-11T03:10:20Z
null
Tuziking
huggingface/diffusers
11,220
Unconditional image generation documentation page not working as expected
### Describe the bug When consulting the documentation for [unconditional image generation](https://huggingface.co/docs/diffusers/using-diffusers/unconditional_image_generation), the last embedded page seems to contain an error that blocks it from being shown (see image below). This is @stevhliu's model stored in [thi...
https://github.com/huggingface/diffusers/issues/11220
closed
[ "bug" ]
2025-04-07T10:32:45Z
2025-04-08T08:47:18Z
2
alvaro-mazcu
huggingface/transformers.js
1,275
How to use @xenova/transformers in a musl-based environment?
### Question Hi, I encountered the following error when using @xenova/transformers: ```bash Error: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /app/node_modules/onnxruntime-node/bin/napi-v3/linux/x64//libonnxruntime.so.1.14.0) ``` After investigating the issue, I found tha...
https://github.com/huggingface/transformers.js/issues/1275
closed
[ "question" ]
2025-04-07T06:34:51Z
2025-10-07T21:23:36Z
null
ezcolin2
huggingface/open-r1
583
num_iterations in GRPOConfig does NOT DO what it is supposed to DO
Hi @qgallouedec and @lewtun Thanks again for the amazing work! I got the chance to try the trl v0.16.0 release in open-r1. I was excited about num_iterations, which was supposed to make the training 6 times faster. Simply, one needs something like: `training_args = GRPOConfig(..., num_iterations=4)` But I did not...
https://github.com/huggingface/open-r1/issues/583
closed
[]
2025-04-06T15:57:43Z
2025-04-12T06:00:21Z
null
ahatamiz
pytorch/pytorch
150,741
how to install pytorch with cuda 12.2 and py3.12
### 🐛 Describe the bug I want to know how to install PyTorch with CUDA 12.2 ### Versions I used the following command, and many issues occurred: conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
https://github.com/pytorch/pytorch/issues/150741
closed
[]
2025-04-06T14:33:59Z
2025-04-07T14:34:48Z
null
goactiongo
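A hedged sketch of the usual answer to the record above: PyTorch does not publish a cu122-tagged wheel, but a cu121 build runs on a CUDA 12.2 driver because the CUDA runtime is bundled in the wheel and 12.x drivers are minor-version compatible. The index URL follows the usual pattern; confirm the current one at pytorch.org/get-started before running:

```shell
# Assumption: cu121 wheels exist for your Python version (3.12 is supported
# in recent releases). Build the command, inspect it, then run it.
CUDA_TAG=cu121
CMD="pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/${CUDA_TAG}"
echo "$CMD"   # inspect before executing
```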
huggingface/agents-course
412
[QUESTION] - Dummy Agent Library
_--- Do you see the issue? The answer was hallucinated by the model. We need to stop to actually execute the function! Let’s now stop on “Observation” so that we don’t hallucinate the actual function response. ---_ Can someone explain how the system is hallucinating in this example? I am kind of stuck on this.
https://github.com/huggingface/agents-course/issues/412
open
[ "question" ]
2025-04-06T09:44:14Z
2025-04-06T09:44:14Z
null
NewTonDBA
huggingface/lerobot
940
Possible mismatch in observations.state metadata in Libero datasets on Hugging Face
Hello, I believe there might be a mistake in the Libero datasets hosted on huggingface/datasets. Specifically, the issue is with the `observations.state` column. According to `meta/info.json`, the structure is described as: ``` "observation.state": { "dtype": "float32", "shape": [ 8 ], "names"...
https://github.com/huggingface/lerobot/issues/940
closed
[ "question", "dataset", "stale" ]
2025-04-06T04:18:55Z
2025-10-19T02:32:09Z
null
ozgraslan
pytorch/data
1,471
torchdata or torchdata-contrib?
My team has been implementing quite a few utilities. Some are close to core features; others are more advanced utilities. For example, their class names and features look like: ```python class RoundRobinNode(BaseNode[T]): """A node that cycles through multiple datasets in a round-robin way. ``` ```pytho...
https://github.com/meta-pytorch/data/issues/1471
open
[]
2025-04-06T01:11:33Z
2025-05-12T21:38:37Z
2
keunwoochoi
pytorch/torchtitan
1,058
Issue of using fully_shard (FSDP2) for Huggingface model: Cannot copy out of meta tensor; no data!
Dear community, Thanks for introducing FSDP2 to PyTorch. I am running into an issue using fully_shard with a Hugging Face model. Just want to know if you have any insights into this issue. The code is inherited from [#743](https://github.com/pytorch/torchtitan/issues/743) ``` import os import torch from torch.distrib...
https://github.com/pytorch/torchtitan/issues/1058
closed
[ "question", "module: checkpoint", "module: distributed_state_dict" ]
2025-04-05T01:48:49Z
2025-04-15T23:08:01Z
null
mingdianliu
pytorch/xla
8,940
User built torch-xla wheel fails on import
## ❓ Questions and Help After following the build instructions in CONTRIBUTING.md, and then running `python setup.py bdist_wheel` inside of `pytorch/xla`, a wheel is generated for `torch-xla` After installing that wheel in the environment of a different project this error appears upon import: ``` Traceback (most recen...
https://github.com/pytorch/xla/issues/8940
closed
[ "question", "build" ]
2025-04-04T20:14:21Z
2025-04-09T14:38:59Z
null
LPanosTT
huggingface/diffusers
11,208
MultiControlNetModel is not supported for SD3ControlNetInpaintingPipeline
### Describe the bug When using `StableDiffusion3ControlNetInpaintingPipeline` with `SD3MultiControlNetModel`, I receive an error: `NotImplementedError: MultiControlNetModel is not supported for SD3ControlNetInpaintingPipeline.` ### Reproduction Example reproduction code: ```python import os import torch from dif...
https://github.com/huggingface/diffusers/issues/11208
open
[ "bug", "help wanted", "Good Example PR", "contributions-welcome" ]
2025-04-04T12:39:10Z
2025-05-11T15:03:00Z
5
DanilaAniva
pytorch/torchtitan
1,055
Is the current configuration system over-engineered?
It seems that a training job in TorchTitan is currently defined using a combination of TOML and Python. When users launch a training job, they are expected to provide a TOML file that specifies the model.name: https://github.com/pytorch/torchtitan/blob/351e9fb40fe345dd8a7fb3403881328b7cc0b21b/torchtitan/models/llama/...
https://github.com/pytorch/torchtitan/issues/1055
open
[ "question" ]
2025-04-03T22:46:39Z
2025-04-04T01:21:27Z
null
wangkuiyi
pytorch/torchtitan
1,054
Clarify PP split point documentation.
### Bug description The current documentation is as follows. ``` self.parser.add_argument( "--parallelism.pipeline_parallel_split_points", type=string_list, nargs="+", default=[], help=""" Specify comma-separated names of modules to ...
https://github.com/pytorch/torchtitan/issues/1054
closed
[ "question" ]
2025-04-03T22:36:08Z
2025-08-21T03:09:16Z
null
githubsgi
huggingface/sentence-transformers
3,308
How to load locally saved transformer models into sentence transformer?
I’ve made some modifications to the NVEMBEDV2 model architecture and saved the updated version locally using `model.save_pretrained()`. However, when I try to wrap the saved model in a SentenceTransformer, I encounter a `KeyError: 'NVEmbedConfig'`. I checked the documentation, and while loading pretrained models seems...
https://github.com/huggingface/sentence-transformers/issues/3308
open
[]
2025-04-03T15:11:20Z
2025-04-08T15:48:26Z
null
samehkhattab
pytorch/serve
3,409
Why Use TorchScript Format Models?
When customizing handler.py, we can load any format of model in the initialize function without needing to package the model into a .mar file. Why do the tutorials recommend converting the model to TorchScript format and packaging it together with handler.py into a .mar file?
https://github.com/pytorch/serve/issues/3409
open
[]
2025-04-03T09:00:11Z
2025-04-03T09:00:11Z
0
CongSuxu
huggingface/datasets
7,497
How to convert videos to images?
### Feature request Does someone know how to get the images back from videos? ### Motivation I am trying to use openpi (https://github.com/Physical-Intelligence/openpi) to finetune my LeRobot dataset (V2.0 and V2.1). I find that although the dataset is v2.0, they are different. It seems like LeRobot V2.0 has two versi...
https://github.com/huggingface/datasets/issues/7497
open
[ "enhancement" ]
2025-04-03T07:08:39Z
2025-04-15T12:35:15Z
null
Loki-Lu
pytorch/torchtitan
1,044
How are TP, CP, and PP marked in PyTorch profiler traces?
How are TP, CP, and PP labelled in PyTorch profiler traces? FSDP appears to be clearly marked.
https://github.com/pytorch/torchtitan/issues/1044
open
[]
2025-04-02T22:27:30Z
2025-04-03T18:04:41Z
1
githubsgi
huggingface/blog
2,781
How to submit revised version of Arxiv paper (v2) to Daily Papers
I would like to submit a revised version (v2) of our arXiv paper to Daily Papers, but the original submission (v1) was uploaded too long ago, so it's not eligible through the regular submission form. However, this v2 version was recently accepted to CVPR 2025, and it is a completely different paper compared to v1, bot...
https://github.com/huggingface/blog/issues/2781
closed
[]
2025-04-02T09:20:30Z
2025-11-03T15:22:36Z
null
eveningglow
pytorch/pytorch
150,523
[Question] How to load extremely large model checkpoint for FSDP wrapped model?
Hello, we tried to train the DeepSeek v3 model with `FSDP+Expert Parallel` parallelism. It works well with randomly initialized weights. But if we want to do SFT or RLHF, we need to load the 670B model weights from https://huggingface.co/deepseek-ai/DeepSeek-V3-0324/tree/main So, does PyTorch have ways to load extremely...
https://github.com/pytorch/pytorch/issues/150523
closed
[ "oncall: distributed", "triaged", "module: fsdp" ]
2025-04-02T08:05:12Z
2025-05-08T16:28:40Z
null
zigzagcai
huggingface/lerobot
927
How to train a model for VLN?
### System Info ```Shell To control four legs dogs. ``` ### Information - [ ] One of the scripts in the examples/ folder of LeRobot - [ ] My own task or dataset (give details below) ### Reproduction rt ### Expected behavior tret
https://github.com/huggingface/lerobot/issues/927
closed
[ "question" ]
2025-04-01T13:26:20Z
2025-04-01T15:50:04Z
null
lucasjinreal
huggingface/agents-course
391
[QUESTION] UNIT-3 not yet published?
<img width="1440" alt="Image" src="https://github.com/user-attachments/assets/aa8ed881-f998-4c63-805f-8af936d630c5" />
https://github.com/huggingface/agents-course/issues/391
closed
[ "question" ]
2025-04-01T11:24:07Z
2025-04-30T04:50:26Z
null
ynareshkalyan21
huggingface/hub-docs
1,664
Page: "how to be registered as a provider"?
https://github.com/huggingface/hub-docs/issues/1664
closed
[]
2025-04-01T10:55:01Z
2025-04-03T13:03:26Z
null
hanouticelina
huggingface/lerobot
926
[Question] Deploy leRobot for a delta kinematic
Bonjour everyone, I'm currently working on the development of an **open source delta robot** via ROS. I'm wondering if any of you have a clue to help me integrate the LeRobot ACT algorithm with the custom kinematics of my delta. ATM the inverse kinematics is managed by a Marlin CNC firmware (on an Arduino Mega), so we communi...
https://github.com/huggingface/lerobot/issues/926
closed
[ "question" ]
2025-04-01T09:46:29Z
2025-04-28T10:57:31Z
null
man0n0n0
huggingface/optimum
2,220
optimum-cli diffusion policy model issue
### System Info ```shell Hi, Trying to export a diffusion policy model to onnx format. From the error message and printed list of model types, it looks like “diffusion” model cannot be exported to onnx. Is there a way to get around this? optimum-cli export onnx --model lerobot/diffusion_pusht --task reinforcement-lea...
https://github.com/huggingface/optimum/issues/2220
closed
[ "bug" ]
2025-04-01T04:59:53Z
2025-06-11T13:57:20Z
1
kraza8
pytorch/torchtitan
1,035
Profiling only a select group of ranks
Is it possible to profile only a select group of ranks? It becomes hard to handle the large number of files when there are many ranks. I understand that there could be imbalances when only a few ranks are profiled. I do not know if there is a way to profile but not dump the profile output file.
https://github.com/pytorch/torchtitan/issues/1035
open
[]
2025-03-31T20:02:30Z
2025-08-21T03:10:16Z
3
githubsgi
huggingface/lerobot
923
Cannot install Lerobot
I am getting an error when the installation is building the av wheel. It does not get past this part of the installation.
https://github.com/huggingface/lerobot/issues/923
closed
[ "documentation", "question", "dependencies" ]
2025-03-31T18:26:16Z
2025-07-03T01:32:17Z
null
Prasit7
pytorch/xla
8,906
Profiler and `use_spmd()` order.
## 📚 Documentation In #8057, [@zjjott mentioned](https://github.com/pytorch/xla/issues/8057#issuecomment-2408428441) that `xp.start_server(...)` should be used after `use_spmd()`. I didn't find it written anywhere in the documentation. So, is this actually true? If so, we should write this down somewhere. cc @miladm...
https://github.com/pytorch/xla/issues/8906
open
[ "distributed", "documentation" ]
2025-03-31T15:34:03Z
2025-03-31T21:29:39Z
6
ysiraichi
pytorch/torchtitan
1,034
Context parallel on Turing GPUs?
As the title suggests, is torchtitan CP supported on Turing GPUs? I got the error `RuntimeError: No available kernel. Aborting execution.` using the default `run_train.sh` script with CP changed to 2. I know Turing GPUs don't have flash attention support yet, but I read the torchtitan CP blog post [here](https://discu...
https://github.com/pytorch/torchtitan/issues/1034
open
[ "question", "module: context parallel" ]
2025-03-31T09:36:47Z
2025-08-21T03:11:02Z
null
dingqingy
huggingface/open-r1
564
How to evaluate pass@16 for aime 2024 benchmark?
https://github.com/huggingface/open-r1/issues/564
open
[]
2025-03-31T09:27:02Z
2025-03-31T09:27:02Z
null
Cppowboy
huggingface/diffusers
11,176
How to use attention_mask and encoder_attention_mask or apply prompts to specific areas in the image?
Hi, I'm aware of the attention_mask and encoder_attention_mask that exist in the forward function of the UNet2DConditionModel, yet there are no examples of how to use them. I would appreciate some help with that; thank you in advance @patrickvonplaten @Birch-san
https://github.com/huggingface/diffusers/issues/11176
open
[ "stale" ]
2025-03-30T16:56:40Z
2025-04-30T15:03:34Z
null
alexblattner
pytorch/tutorials
3,308
💡 [REQUEST] - Pruning tutorial: clarify how to achieve comparable performance to non-pruned?
### 🚀 Describe the improvement or the new tutorial In the pruning tutorial https://pytorch.org/tutorials/intermediate/pruning_tutorial.html, the method of pruning that is implemented appears to be completely random. "In this example, we will prune at random 30% of the connections..." But isn't the goal of pruning pr...
https://github.com/pytorch/tutorials/issues/3308
open
[]
2025-03-30T15:49:41Z
2025-03-30T15:49:41Z
null
drscotthawley
huggingface/lerobot
920
[Question] How to convert dataset locally
I've noticed that `convert_dataset_v20_to_v21.py` converts LeRobot datasets from v2.0 to v2.1 once they have already been pushed to the hub. But is there a script for doing this with a local dataset?
https://github.com/huggingface/lerobot/issues/920
closed
[ "question", "dataset", "stale" ]
2025-03-30T13:32:50Z
2025-10-13T02:30:26Z
null
Frozenkiddo
huggingface/lerobot
919
[Question] Why does "action" exist?
I am a beginner and I am very confused about this. What I understand is that during my entire operation I sampled at fixed time intervals, like sampling a signal. I only have the observations, so what does action mean? Many datasets in the project have a column titled `action`. Moreover...
https://github.com/huggingface/lerobot/issues/919
closed
[ "question" ]
2025-03-30T10:45:57Z
2025-03-31T07:50:19Z
null
ipc-robot
huggingface/trl
3,179
How to resume from the last checkpoint?
I want to continue training from the last checkpoint. How should I do it? I set resume_from_checkpoint=True in the GRPOConfig, but based on the output, it seems to start training from the first step. Do I also need to change the model to the checkpoint path?
https://github.com/huggingface/trl/issues/3179
closed
[ "❓ question", "🏋 GRPO" ]
2025-03-30T02:30:47Z
2025-03-30T04:35:58Z
null
Tuziking
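For the resume question above, a minimal stdlib analog of what transformers' `get_last_checkpoint` does may clarify the mechanics. The helper below is hypothetical; the underlying assumption is that Trainer-style training saves `checkpoint-<step>` directories under `output_dir`, and that resuming usually means passing the newest one (or `True`) to `trainer.train(resume_from_checkpoint=...)` rather than changing the model path:

```python
import os
import re

def last_checkpoint(output_dir: str):
    """Return the highest-step checkpoint-<N> subdirectory, or None."""
    pattern = re.compile(r"^checkpoint-(\d+)$")
    best_path, best_step = None, -1
    for name in os.listdir(output_dir):
        match = pattern.match(name)
        full = os.path.join(output_dir, name)
        if match and os.path.isdir(full) and int(match.group(1)) > best_step:
            best_path, best_step = full, int(match.group(1))
    return best_path
```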
huggingface/diffusers
11,168
Sage Attention for diffuser library
**Is your feature request related to a problem?** No. **Describe the solution you'd like.** Incorporate a way to add Sage Attention to the diffusers library: Flux pipeline, Wan pipeline, etc. **Describe alternatives you've considered.** None. **Additional conte...
https://github.com/huggingface/diffusers/issues/11168
open
[ "wip" ]
2025-03-28T20:39:30Z
2025-06-23T05:59:27Z
12
ukaprch
huggingface/agents-course
381
[QUESTION]LLM or Agent?
In the tutorial, a lot of the content misleads readers into a wrong concept of LLMs vs. Agents. ``` The Stop and Parse Approach One key method for implementing actions is the stop and parse approach. This method ensures that the agent’s output is structured and predictable: Generation in a Structured Format: The agent outputs...
https://github.com/huggingface/agents-course/issues/381
closed
[ "question" ]
2025-03-28T15:36:45Z
2025-04-30T04:50:54Z
null
joshhu
huggingface/lerobot
912
[Question]When will MultiLeRobotDataset available?
Hello, the MultiLeRobotDataset is very useful for training on large amounts of data; without it, training complex tasks would be difficult. However, I noticed that after the Simplify configs (#550) commit on January 31st, MultiLeRobotDataset has been marked as unavailable (raise NotImplementedError("The MultiLeRobotDat...
https://github.com/huggingface/lerobot/issues/912
closed
[ "question", "dataset", "stale" ]
2025-03-28T09:16:06Z
2025-10-22T02:30:53Z
null
Vacuame
huggingface/agents-course
380
[QUESTION] Question on using HuggingFace space
First, the **best way to get a response fast is to ask the community** in our Discord server: https://www.hf.co/join/discord However, if you prefer you can ask here, please **be specific**. I am on AI Agents course now. I have trouble in using HuggingFace space. I studied this course at company so I have to open a fi...
https://github.com/huggingface/agents-course/issues/380
closed
[ "question" ]
2025-03-28T08:28:23Z
2025-04-30T04:47:14Z
null
kjh0303
pytorch/xla
8,900
Reset Peak Memory Usage
## 🚀 Feature <!-- A clear and concise description of the feature proposal --> Provides a method to reset peak used memory size to current memory being used or 0. ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If...
https://github.com/pytorch/xla/issues/8900
open
[ "enhancement" ]
2025-03-28T05:29:44Z
2025-03-28T05:29:44Z
0
yaochengji
pytorch/torchtitan
1,027
Linear layer weights are in float32 ?
### Bug description I am seeing Linear layer weights in float32 (wq.weight.dtype is torch.float32) even after setting the following: mixed_precision_param = "bfloat16" mixed_precision_reduce = "bfloat16" Is that expected, or have I hit a bug? ### Versions 1. Yes. 2. See the description section. 3. It is e...
https://github.com/pytorch/torchtitan/issues/1027
closed
[ "question" ]
2025-03-28T01:35:22Z
2025-05-08T21:15:49Z
null
githubsgi
pytorch/torchtitan
1,026
Any plan to add Llama 1B and/or 3B models ?
Wondering if there is any plan to add the 1B and/or 3B models to the TorchTitan set of example models? It is probably fairly straightforward to do, if I am not missing anything: another TOML file and additions in a few places. The optimizer and lr_scheduler sections may require some trial and error.
https://github.com/pytorch/torchtitan/issues/1026
open
[]
2025-03-28T01:21:59Z
2025-04-01T18:29:00Z
4
githubsgi
pytorch/xla
8,899
The Stable Diffusion notebook is broken.
## 📚 Documentation The README points to a [Stable Diffusion notebook](https://github.com/pytorch/xla/blob/master/contrib/kaggle/pytorch-xla-2-0-on-kaggle.ipynb) to help a user get started. However, this notebook cannot be run successfully: 1. The `import torch_xla` step results in an error: ``` ModuleNotFoundError ...
https://github.com/pytorch/xla/issues/8899
open
[ "bug", "documentation" ]
2025-03-27T22:38:23Z
2025-11-13T00:44:20Z
0
zhanyong-wan
pytorch/vision
9,008
Torchvision bounding boxes do not match the images, because the bboxes are from the pre-cropped, pre-resized version.
### 🐛 Describe the bug CelebA bounding boxes were calculated on the so-called "in-the-wild" images, prior to cropping and resizing. But torchvision.datasets returns the version that is cropped to 178x218. So, for example, on the ninth image, the bbox is outside the image size. CODE TO REPRO ``` from torchvision imp...
https://github.com/pytorch/vision/issues/9008
open
[]
2025-03-27T17:34:15Z
2026-01-05T16:15:54Z
6
yaoshiang
huggingface/Math-Verify
47
Question: How to configure `verify` for strict multi-part answer checking?
Hi Math-Verify Team, I'm currently using `math-verify` for evaluating LLM outputs, specifically for questions that might require multiple answers (e.g., "Find all X..."). I've observed that the `verify` function in `grader.py`, which seems to use logic similar to `any(product(gold, target))`, can return `True` even i...
https://github.com/huggingface/Math-Verify/issues/47
closed
[]
2025-03-27T16:54:52Z
2025-07-01T19:31:51Z
null
TweedBeetle
huggingface/transformers.js
1,259
3.2.4 has wrong env check in transformers.web.js
### Question ## Background I have developed a Chrome extension following the [example](https://github.com/huggingface/transformers.js/tree/main/examples/extension). The example used the package @xenova/transformers. ## Motivation It seems that multithreading works now. [Issue](https://github.com/huggin...
https://github.com/huggingface/transformers.js/issues/1259
closed
[ "question" ]
2025-03-27T07:35:23Z
2025-07-02T04:45:26Z
null
sanixa
huggingface/datasets
7,480
HF_DATASETS_CACHE ignored?
### Describe the bug I'm struggling to get things to respect HF_DATASETS_CACHE. Rationale: I'm on a system that uses NFS for homedir, so downloading to NFS is expensive, slow, and wastes valuable quota compared to local disk. Instead, it seems to rely mostly on HF_HUB_CACHE. Current version: 3.2.1dev. In the process...
https://github.com/huggingface/datasets/issues/7480
open
[]
2025-03-26T17:19:34Z
2025-10-23T15:59:18Z
8
stephenroller
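For the cache record above, a hedged sketch of the usual workaround: recent datasets versions route downloads through huggingface_hub, which honors HF_HUB_CACHE / HF_HOME rather than HF_DATASETS_CACHE alone, so pointing all three at local disk keeps downloads off NFS. The `/tmp/hf_cache` path is a placeholder:

```shell
# Assumption: setting these before any Python import is what makes them
# take effect; put them in the job script or shell profile.
export HF_HOME=/tmp/hf_cache
export HF_HUB_CACHE=/tmp/hf_cache/hub
export HF_DATASETS_CACHE=/tmp/hf_cache/datasets
mkdir -p "$HF_HUB_CACHE" "$HF_DATASETS_CACHE"
```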
huggingface/transformers.js
1,258
Tokenizer encode and decode get different token ids and text, missing word_ids
### Question ```js import { AutoTokenizer } from '@huggingface/transformers'; const tokenizer = await AutoTokenizer.from_pretrained('deepseek-ai/DeepSeek-R1') console.log(tokenizer.encode(" e.g., ♩")) console.log(tokenizer.decode([105])) console.log(tokenizer.encode("♩")) ``` ``` [ 312, 3588, 1042, 30717, 105 ] � [...
https://github.com/huggingface/transformers.js/issues/1258
closed
[ "question" ]
2025-03-26T10:44:12Z
2025-03-31T20:18:45Z
null
liho00
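The `�` in the record above is the classic symptom of decoding a single byte-level BPE token whose bytes form only part of a UTF-8 sequence. A minimal pure-Python sketch of the same effect, independent of transformers.js (the note character is taken from the report):

```python
# "♩" (U+2669) takes three bytes in UTF-8; decoding only the first
# byte yields U+FFFD, the replacement character — the same symptom
# as decoding one byte-level BPE token in isolation.
raw = "♩".encode("utf-8")                      # b'\xe2\x99\xa9'
partial = raw[:1].decode("utf-8", errors="replace")
print(partial)  # '�'
```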
huggingface/lerobot
905
Supporting selection of obs and action keys in dataset
Hi all, thanks a lot for the framework. Currently, it seems the LeRobotDataset format requires users to have a fixed state/environment state/images or actions defined in their dataset. However, this means that for multiple similar applications, the user has to record different datasets with different state or action d...
https://github.com/huggingface/lerobot/issues/905
closed
[ "question", "dataset", "stale" ]
2025-03-26T08:12:10Z
2025-10-10T02:27:27Z
null
Mayankm96
pytorch/xla
8,884
BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending. Error
Hello, I'm trying to train my Transformer Encoder-Decoder model on Google Colab `v2-8` TPUs. My code is like this: ```python import torch.distributed as dist import torch_xla.core.xla_model as xm import torch_xla.distributed.parallel_loader as pl import torch_xla.distributed.xla_multiprocessing as xmp import torch_xla.di...
https://github.com/pytorch/xla/issues/8884
open
[ "bug", "needs reproduction" ]
2025-03-25T23:52:17Z
2025-03-26T21:38:26Z
2
oayk23
pytorch/xla
8,883
[RFC] Use shard_as to improve sharding and avoid OOM
# 🚀 Use shard_as to improve sharding and avoid OOM ## Summary 2D sharding propagation is harder than 1D sharding propagation due to incompatible sharding. This problem is worse in a `scan` / XLA `While` op, and the <code>[shard_as][shard_as]</code> GSPMD feature seems to help. ## Motivation This proposal is prima...
https://github.com/pytorch/xla/issues/8883
closed
[ "enhancement", "distributed" ]
2025-03-25T22:14:04Z
2025-03-29T03:26:36Z
0
tengyifei
huggingface/chat-ui
1,772
USE_LOCAL_WEBSEARCH No results found for this search query
## Bug description With `USE_LOCAL_WEBSEARCH=true`, Web Search always reports _No results found for this search query_. ## Steps to reproduce - enable search - enter and submit question ## Screenshots <img width="488" alt="Image" src="https://github.com/user-attachments/assets/b948b629-ff67-4edb-9f7c-25ca9d3d1325"...
https://github.com/huggingface/chat-ui/issues/1772
open
[ "bug", "help wanted", "websearch" ]
2025-03-25T21:28:11Z
2025-10-22T21:13:54Z
6
brechtm
huggingface/chat-ui
1,771
Client disconnects before response is received
## Bug description If an answer takes several minutes to complete, the chat-ui client simply disconnects. This disconnection happens at 1 minute, but I'm unsure. ## Steps to reproduce Ask your LLM a riddle but change it a little, so it becomes confused and wonders for a while. man and a goat are one one side of a ...
https://github.com/huggingface/chat-ui/issues/1771
open
[ "bug" ]
2025-03-25T19:14:54Z
2025-06-14T13:46:28Z
3
drewwells
huggingface/datasets
7,477
What is the canonical way to compress a Dataset?
Given that Arrow is the preferred backend for a Dataset, what is a user supposed to do if they want concurrent reads, concurrent writes AND on-disk compression for a larger dataset? Parquet would be the obvious answer except that there is no native support for writing sharded, parquet datasets concurrently [[1](https:...
https://github.com/huggingface/datasets/issues/7477
open
[]
2025-03-25T16:47:51Z
2025-04-03T09:13:11Z
null
eric-czech
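Pending a canonical answer to the record above, one generic pattern — stdlib only, not the `datasets` API; shard names and row format here are made up for illustration — is to write compressed shards concurrently, one writer per shard, which gives concurrent writes plus on-disk compression:

```python
import gzip
import json
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def write_shard(path, rows):
    # One writer per shard: concurrent writes need no coordination,
    # and gzip provides the on-disk compression.
    with gzip.open(path, "wt") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")

tmp = tempfile.mkdtemp()
rows = [{"i": i} for i in range(100)]
shards = [rows[i::4] for i in range(4)]                          # 4 round-robin shards
paths = [os.path.join(tmp, f"shard-{i:05d}.jsonl.gz") for i in range(4)]

with ThreadPoolExecutor(max_workers=4) as ex:
    list(ex.map(write_shard, paths, shards))                     # concurrent shard writers
```

Concurrent reads then come for free, since each shard is an independent file.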
huggingface/lerobot
901
Any tutorial on how to run experiments in the SimXArm environment?
https://github.com/huggingface/lerobot/issues/901
closed
[]
2025-03-25T13:29:59Z
2025-03-25T16:42:11Z
null
chenkang455
huggingface/chat-ui
1,765
`truncate` parameter ignored for OpenAI chat_completions endpoint
## Bug description The `truncate` parameter in the ChatUI configuration is not being applied when using the OpenAI chat_completions endpoint. ## Root Cause The issue arises because the chat_completions endpoint does not utilize the buildPrompt function where the `truncate` parameter is handled. The logic for truncat...
https://github.com/huggingface/chat-ui/issues/1765
open
[ "bug" ]
2025-03-25T10:13:40Z
2025-03-25T10:20:33Z
0
calycekr
huggingface/finetrainers
350
how to train wan using 8 GPUs
I notice that there are only 4-GPU scripts; even when I modify the script for 8-GPU training, I get some errors.
https://github.com/huggingface/finetrainers/issues/350
open
[]
2025-03-25T05:02:18Z
2025-05-06T14:54:50Z
null
tanshuai0219
pytorch/xla
8,876
Missing torch-xla-gpu-plugin
A user reported the following issue: we have been trying to use `torch-xla` nightly builds to get around some of the slowness issues seen in torch-xla 2.5. We found `torch-xla` nightly builds for GPU under `gs://pytorch-xla-releases/wheels/cuda/12.6`, however these don’t contain `torch-xla-gpu-plugin` (this was prese...
https://github.com/pytorch/xla/issues/8876
open
[ "xla:gpu" ]
2025-03-24T18:03:36Z
2025-04-02T14:25:22Z
11
tengyifei
pytorch/xla
8,874
Contribution suggestion?
## ❓ Questions and Help I want to have a deeper understanding of pytorch/xla by contributing to it. I notice that the majority of the [issues with "good first issue" tag](https://github.com/pytorch/xla/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22good%20first%20issue%22) are of one kind (i.e. Op info test) and are ...
https://github.com/pytorch/xla/issues/8874
closed
[ "question" ]
2025-03-24T17:03:26Z
2025-03-24T18:24:49Z
null
iwknow
huggingface/diffusers
11,147
[LTX0.9.5] make LTX0.9.5 works with text-to-video
see more context here https://github.com/huggingface/diffusers/issues/11143#issuecomment-2747390564
https://github.com/huggingface/diffusers/issues/11147
closed
[ "help wanted" ]
2025-03-24T09:56:47Z
2025-04-04T14:43:16Z
9
yiyixuxu
huggingface/search-and-learn
47
How to run this project on CPU?
Hello, I'm going to run the project's code on CPU. The graphics card I have now is a 4060 Ti, but even with the lightest options (minimum batch size, using the 1.5B model, etc.), I couldn't run the project due to memory capacity issues. So I want to move this project to CPU and see the results, even if it takes some time. H...
https://github.com/huggingface/search-and-learn/issues/47
open
[]
2025-03-24T01:13:44Z
2025-03-24T01:13:44Z
null
pss0204
pytorch/pytorch
149,826
How to handle dynamic output size with torch.onnx.export (through dynamo) for Resize
### 🐛 Describe the bug I would like to export with torch.onnx.export (through dynamo) some code that contains a resize operation. The output width and height is dynamic. An example model is as follows: ``` import torch class Model(torch.nn.Module): def __init__(self): super().__init__() def forwar...
https://github.com/pytorch/pytorch/issues/149826
closed
[ "module: onnx", "triaged", "oncall: pt2" ]
2025-03-23T09:27:37Z
2025-04-24T15:17:47Z
null
FabianSchuetze
pytorch/pytorch
149,771
How to remove the “internal api” notice?
### 📚 The doc issue What is the option that will remove this notice? > This page describes an internal API which is not intended to be used outside of the PyTorch codebase and can be modified or removed without notice. We would like to remove it for https://pytorch.org/docs/stable/onnx_dynamo.html and a few onnx p...
https://github.com/pytorch/pytorch/issues/149771
closed
[ "module: docs", "triaged" ]
2025-03-21T22:46:30Z
2025-03-27T22:02:25Z
null
justinchuby
huggingface/datasets
7,473
Webdataset data format problem
### Describe the bug Please see https://huggingface.co/datasets/ejschwartz/idioms/discussions/1 Error code: FileFormatMismatchBetweenSplitsError All three splits, train, test, and validation, use webdataset. But only the train split has more than one file. How can I force the other two splits to also be interpreted ...
https://github.com/huggingface/datasets/issues/7473
closed
[]
2025-03-21T17:23:52Z
2025-03-21T19:19:58Z
1
edmcman
huggingface/datasets
7,470
Is it possible to shard a single-sharded IterableDataset?
I thought https://github.com/huggingface/datasets/pull/7252 might be applicable but looking at it maybe not. Say we have a process, e.g. a database query, that can return data in a slightly different order each time. So, the initial query needs to be run by a single thread (not to mention running multiple times incurs mo...
https://github.com/huggingface/datasets/issues/7470
closed
[]
2025-03-21T04:33:37Z
2025-11-22T07:55:43Z
6
jonathanasdf
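As a sketch of what "sharding a single stream" can mean operationally for the record above (generic Python, not the `datasets` API — `datasets.distributed.split_dataset_by_node` falls back to a similar per-example round-robin when a stream has fewer shards than workers): each worker keeps every `world_size`-th element of the iterator:

```python
from itertools import islice

def shard_stream(stream, rank, world_size):
    # Worker `rank` keeps every `world_size`-th element starting at
    # offset `rank`; each worker still consumes the full stream.
    return islice(stream, rank, None, world_size)

shards = [list(shard_stream(iter(range(10)), r, 3)) for r in range(3)]
print(shards)  # [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
```

Note this only partitions correctly if every worker sees the stream in the same order — which is exactly the constraint raised in the question.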
huggingface/lerobot
884
[Question] Support of PointCloud
Hi, I'm currently developing a plugin for lerobot and would like to know if there are any plans to support PointCloud data. Additionally, I'd like to ask if there is a recommended storage format for handling PointCloud data within the project. Looking forward to your response. Thanks
https://github.com/huggingface/lerobot/issues/884
closed
[ "question", "dataset", "stale" ]
2025-03-21T04:29:15Z
2025-10-07T02:26:39Z
null
yilin404
huggingface/inference-benchmarker
4
Can I use a local model's tokenizer and a local dataset?
Hello, may I specify the paths of the locally downloaded model and dataset through the ./inference-benchmarker command, instead of accessing Hugging Face via the network?
https://github.com/huggingface/inference-benchmarker/issues/4
open
[ "question" ]
2025-03-21T01:55:03Z
2025-03-27T18:44:04Z
null
handsome-chips
pytorch/torchx
1,021
Suggested way to get timestamp of the job submission?
## Description Hi team, I am looking for a way to get the exact timestamp when the command `torchx run` is being run. Is there a formal way that is scheduler / component agnostic? The timestamp should be accessible from the training app. ## Motivation/Background The actual use case is to calculate the overhead betwe...
https://github.com/meta-pytorch/torchx/issues/1021
open
[]
2025-03-20T21:58:47Z
2025-03-20T21:59:41Z
0
HanFa
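One scheduler-agnostic workaround for the question above — not a torchx API; the env-var name is made up, and passing it assumes a component that accepts an env mapping (e.g. `dist.ddp`'s `env` argument) — is to capture the timestamp at launch and read it from the training app:

```python
import os
import time

# "At submission": a thin wrapper around `torchx run` records the time
# and injects it into the app's environment via the component's env
# mapping (hypothetical variable name MY_SUBMIT_TS).
os.environ["MY_SUBMIT_TS"] = str(time.time())

# Inside the training app, scheduler-agnostically:
startup_delay = time.time() - float(os.environ["MY_SUBMIT_TS"])
print(f"scheduling overhead so far: {startup_delay:.3f}s")
```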
huggingface/video-dataset-scripts
20
How to convert a parquet file to the training dataset format for finetrainers
How can a parquet file be converted to the training dataset format for finetrainers?
https://github.com/huggingface/video-dataset-scripts/issues/20
closed
[]
2025-03-20T16:22:39Z
2025-04-10T17:46:06Z
null
kanghua309
pytorch/pytorch
149,586
UserWarning: Dynamo does not know how to trace the builtin `None.pybind11_object.__new__.`
### 🐛 Describe the bug I'm filing an issue since this is a Python built-in (granted the error message implies that it is not since it references PyBind11, but I'm opening an issue anyway since it is caused by using returning/using `None` in a compiled function). ### Versions 2.7.0a0+gitebd087e cc @chauhang @pengui...
https://github.com/pytorch/pytorch/issues/149586
open
[ "triaged", "oncall: pt2", "module: dynamo", "module: higher order operators", "module: compiled autograd", "module: pt2-dispatcher", "module: flex attention" ]
2025-03-20T00:32:49Z
2025-03-21T19:28:30Z
null
cora-codes
pytorch/xla
8,862
Replace `xm.mark_step` with `torch_xla.sync()` in examples and tests
`torch_xla.sync()` is easier to spell than `xm.mark_step()`. We should at least replace `mark_step` in all public examples.
https://github.com/pytorch/xla/issues/8862
closed
[ "enhancement", "usability", "documentation" ]
2025-03-19T22:25:09Z
2025-05-16T17:56:25Z
1
tengyifei
pytorch/xla
8,861
Document the difference between `device=` vs `.to(device)`
## 📚 Documentation There's a subtle difference between `torch.foo(device=xla)` vs `torch.foo().to(xla)` and we should document this in a FAQ section or similar. The first one runs the `foo` on the TPU. The second one runs the `foo` on the CPU and then moves the buffer to the TPU.
https://github.com/pytorch/xla/issues/8861
closed
[ "enhancement", "good first issue", "documentation" ]
2025-03-19T22:23:19Z
2025-06-12T06:07:46Z
2
tengyifei
pytorch/xla
8,859
Improve `torch_xla.compile` documentation
## 📚 Documentation The best doc I could find that mentions this is https://pytorch.org/xla/release/r2.5/eager_mode.html. However, `torch_xla.compile` is usable separate from PyTorch/XLA eager mode and we should make this more front-and-center compared to mark_step.
https://github.com/pytorch/xla/issues/8859
closed
[ "enhancement", "good first issue", "documentation" ]
2025-03-19T22:15:04Z
2025-05-30T04:11:41Z
0
tengyifei
pytorch/xla
8,858
Document the difference between tracing time and execution time
## 📚 Documentation If we write a loop like ``` start = time.time() for step in range(num_steps): run_model() xm.mark_step() end = time.time() ``` Then `end - start` will only measure the tracing time. We'll need to do `torch_xla.sync(wait=True)` to block on device execution to measure the execution time. We sh...
https://github.com/pytorch/xla/issues/8858
closed
[ "enhancement", "good first issue", "documentation" ]
2025-03-19T22:13:49Z
2025-05-30T04:10:37Z
4
tengyifei
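The tracing-vs-execution pitfall in the record above is the generic async-dispatch pitfall; a pure-Python analogy (no TPU or torch_xla needed — the thread pool stands in for the device, and `fut.result()` plays the role of `torch_xla.sync(wait=True)`):

```python
import time
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=1)      # stands in for the XLA device

start = time.time()
fut = pool.submit(time.sleep, 0.5)            # "tracing": enqueue only, returns fast
dispatch_time = time.time() - start           # measures dispatch, not the work

fut.result()                                  # block, like torch_xla.sync(wait=True)
total_time = time.time() - start              # now includes the 0.5s of "execution"
print(f"dispatch: {dispatch_time:.3f}s, total: {total_time:.3f}s")
pool.shutdown()
```

Timing without the blocking call reports only `dispatch_time`, which is the mistake the doc request warns about.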
pytorch/torchtitan
987
Is EP (Expert Parallelism) coming ?
Currently TorchTitan supports TP, CP, FSDP, and PP parallelisms. Is there a plan to support Expert Parallelism (EP)? Along the same lines, I see some DeepSeek files in the repo. Is there a plan to support DeepSeek training on TorchTitan?
https://github.com/pytorch/torchtitan/issues/987
closed
[ "question" ]
2025-03-19T21:41:14Z
2025-03-24T17:21:20Z
null
githubsgi
pytorch/torchtitan
986
Is a PP+FSDP+TP config + toml available for pre-training 405B model ?
Would appreciate if someone can share a toml file to do PP+FSDP+TP for 405B model.
https://github.com/pytorch/torchtitan/issues/986
closed
[]
2025-03-19T21:35:43Z
2025-08-21T03:11:32Z
3
githubsgi
pytorch/vision
8,986
Speed up JPEG decoding by allowing resize during decode
### 🚀 The feature Torchvision's `read_image` currently decodes JPEG images at full resolution. However, both `libjpeg` and `libjpeg-turbo` support decoding at lower resolutions (1/2, 1/4, 1/8 of the original size). Introducing a `size_hint` parameter would allow users to specify an approximate target size, with `tor...
https://github.com/pytorch/vision/issues/8986
open
[]
2025-03-19T19:08:46Z
2025-04-29T07:32:47Z
3
gyf304
huggingface/trl
3,114
What is the reason for using only one GPU when integrating with vLLM?
At [line](https://github.com/huggingface/trl/blob/main/trl/trainer/grpo_trainer.py#L507) of the code, when using vLLM, a single GPU device is specified. However, in practice it is quite common to run one vLLM instance across multiple GPUs. 1. What is the reason that the code is designed to only select a single ...
https://github.com/huggingface/trl/issues/3114
closed
[ "❓ question", "🏋 GRPO" ]
2025-03-19T16:20:03Z
2025-04-05T17:01:33Z
null
spencergotowork
huggingface/smollm
67
How to fine-tune SmolVLM on OCR
Is there any guide to fine-tune SmolVLM on OCR, as in https://huggingface.co/ds4sd/SmolDocling-256M-preview
https://github.com/huggingface/smollm/issues/67
open
[ "Image" ]
2025-03-19T14:17:33Z
2025-07-29T13:09:05Z
null
abdelkareemkobo
huggingface/peft
2,436
Fine-tuning with Multiple LoRAs
Thanks for your valuable work! I would like to know if it's possible to jointly train two LoRAs while only loading one base model. The overall output depends on the respective outputs of LoRA1 and LoRA2. For example, logits1 is obtained from the base model with LoRA1, and logits2 is obtained from the base model with L...
https://github.com/huggingface/peft/issues/2436
closed
[]
2025-03-19T13:49:28Z
2025-07-19T05:45:12Z
7
xymou
huggingface/setfit
590
How do I disable requests to huggingface.co:443 after training?
I'm currently evaluating setfit in a proof of concept situation. Unfortunately, I'm working behind a company firewall, where I do not have access to the world wide web, only to company-internal URLs. That's a bit annoying in terms of downloading models, but I can work around that. More importantly, it seems there are ...
https://github.com/huggingface/setfit/issues/590
open
[]
2025-03-19T08:42:12Z
2025-03-19T18:44:12Z
null
AdrianSchneble
huggingface/diffusers
11,114
channel inconsistency in cogvideo Lora training example
### Describe the bug While using the training script in (https://github.com/huggingface/diffusers/blob/main/examples/cogvideo/train_cogvideox_image_to_video_lora.py), I made a dataset as described in the README and ran training, but a bug occurred during the forward pass. It is because the model in-channel is 16 but m...
https://github.com/huggingface/diffusers/issues/11114
open
[ "bug", "stale" ]
2025-03-19T07:55:00Z
2025-04-18T15:02:52Z
2
MrTom34