Columns (name, type, observed range):

repo        string                  147 distinct values
number      int64                   1 to 172k
title       string                  length 2 to 476
body        string                  length 0 to 5k
url         string                  length 39 to 70
state       string                  2 distinct values
labels      list                    length 0 to 9
created_at  timestamp[ns, tz=UTC]   2017-01-18 18:50:08 to 2026-01-06 07:33:18
updated_at  timestamp[ns, tz=UTC]   2017-01-18 19:20:07 to 2026-01-06 08:03:39
comments    int64                   0 to 58
user        string                  length 2 to 28
huggingface/lerobot
1,080
Update `control_sim_robot.py` to use the new configs
Adding this issue to track one of the TODOs of MR #550. As of now, [this script](https://github.com/huggingface/lerobot/blob/8cfab3882480bdde38e42d93a9752de5ed42cae2/lerobot/scripts/control_sim_robot.py) is outdated; it does not use the new configuration classes.
https://github.com/huggingface/lerobot/issues/1080
closed
[ "question" ]
2025-05-07T11:37:47Z
2025-06-19T14:04:11Z
null
jccalvojackson
huggingface/Math-Verify
53
How to turn off error print?
When using multiprocessing, a lot of error messages are printed.
https://github.com/huggingface/Math-Verify/issues/53
closed
[]
2025-05-07T08:19:36Z
2025-07-02T16:07:02Z
null
wenxueru
pytorch/executorch
10,745
How to use tokenizer.json in ExecuTorch Android demo (without tokenizer.model)?
### 📚 The doc issue I'm trying to deploy a language or vision-language model on Android using the ExecuTorch Android demo app. The model I'm working with only provides tokenizer.json, but the current Android implementation appears to expect a tokenizer.model file instead. Is tokenizer.model mandatory for the ExecuTo...
https://github.com/pytorch/executorch/issues/10745
closed
[ "triaged", "module: android" ]
2025-05-07T03:22:03Z
2025-05-07T21:33:46Z
null
jordanqi
huggingface/peft
2,533
Integrate TLoRA (Tri-Matrix LoRA)
### Feature request We would like to propose integrating a novel parameter-efficient fine-tuning method called **TLoRA (Tri-Matrix LoRA)** into the `peft` library. We believe TLoRA offers significant advantages in terms of parameter efficiency, making it a valuable addition to the PEFT ecosystem. Our method is detail...
https://github.com/huggingface/peft/issues/2533
closed
[]
2025-05-06T21:22:50Z
2025-06-15T15:03:57Z
2
itanvir
huggingface/candle
2,945
Operating steps from scratch for beginners?
From A to Z.
https://github.com/huggingface/candle/issues/2945
open
[]
2025-05-06T15:34:02Z
2025-05-06T15:34:02Z
0
Qarqor5555555
pytorch/torchtitan
1,169
how to inference with pretrained model?
Hi, after pretraining/SFT with torchtitan, how do I run inference with the checkpoint? Does the repo provide inference code? Thank you.
https://github.com/pytorch/torchtitan/issues/1169
closed
[]
2025-05-06T10:28:50Z
2025-08-21T03:18:05Z
null
dragen1860
pytorch/torchtitan
1,168
How to use fsdp2 cpu_offload?
I am currently using `cpuOffloadPolicy` in the following way: ```py transformer_cls_to_wrap = list() for layer_class in transformer_cls_names_to_wrap: transformer_cls = get_module_class_from_name(model_to_wrap, layer_class) if transformer_cls is not None: transforme...
https://github.com/pytorch/torchtitan/issues/1168
closed
[ "module: fsdp" ]
2025-05-06T07:44:48Z
2025-05-12T03:29:30Z
null
KimmiShi
huggingface/lerobot
1,072
How to merge collected data into one?
For stability I collect data 10 episodes at a time, forming repo_id/first, repo_id_second... I want to merge them together into repo_id/one_task for training, but it's hard to fix the meta files. I'm not sure if this approach helps with training, or if I should determine the number of episodes needed for training in a...
https://github.com/huggingface/lerobot/issues/1072
closed
[ "question", "dataset" ]
2025-05-06T02:27:24Z
2025-05-07T02:29:27Z
null
milong26
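The merging question above can be sketched in plain Python. This assumes each source folder keeps per-episode metadata in `meta/episodes.jsonl` with an `episode_index` field per line; that layout is an assumption for illustration, not necessarily the exact LeRobot format, and the merged data/video files would still need to be copied alongside.

```python
import json
from pathlib import Path

def merge_episodes(src_roots, dst_root):
    """Merge several single-task dataset folders into one, renumbering
    episode_index so the merged meta/episodes.jsonl stays consistent.
    Hypothetical layout: each root has meta/episodes.jsonl, one JSON
    object per line with an `episode_index` field."""
    dst = Path(dst_root)
    (dst / "meta").mkdir(parents=True, exist_ok=True)
    merged, next_index = [], 0
    for root in src_roots:
        lines = (Path(root) / "meta" / "episodes.jsonl").read_text().splitlines()
        for line in lines:
            ep = json.loads(line)
            ep["episode_index"] = next_index  # renumber globally
            merged.append(ep)
            next_index += 1
    (dst / "meta" / "episodes.jsonl").write_text(
        "\n".join(json.dumps(ep) for ep in merged))
    return next_index  # total episode count in the merged dataset
```

The same renumbering pass would have to be applied to any per-episode file names and index fields elsewhere in the metadata.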
pytorch/xla
9,095
Support Dynamic Grid in Pallas Kernel
## 🚀 Feature <!-- A clear and concise description of the feature proposal --> Support dynamic grid feature of pallas kernel through PyTorch/XLA wrapper. Below is an example of dynamic grid in jax. ``` import functools import time import jax from jax._src.pallas.pallas_call import _trace_kernel_to_jaxpr import jax.n...
https://github.com/pytorch/xla/issues/9095
open
[ "enhancement", "pallas" ]
2025-05-05T22:28:33Z
2025-05-06T12:24:45Z
0
yaochengji
huggingface/diffusers
11,499
[Performance] Issue on *SanaLinearAttnProcessor2_0 family. 1.06X speedup can be reached with a simple change.
### Sys env: OS Ubuntu 22.04 PyTorch 2.4.0+cu121 sana == 0.0.1 Diffusers == 0.34.0.dev0 ### Reproduce: Try the demo test code: ``` import torch from diffusers import SanaPAGPipeline pipe = SanaPAGPipeline.from_pretrained( # "Efficient-Large-Model/Sana_1600M_512px_diffusers", "Efficient-Large-Model/SANA1.5_1.6...
https://github.com/huggingface/diffusers/issues/11499
closed
[]
2025-05-05T21:26:51Z
2025-08-08T23:44:59Z
11
David-Dingle
huggingface/candle
2,944
finetuning yolo 8 candle model
What is the correct way to finetune a YOLOv8 model to be used here? Finetuning a model using candle is not straightforward. candle\candle-examples\examples\yolo-v8\main.rs // model architecture points at ultralytics: https://github.com/ultralytics/ultralytics/issues/189 But my model trained using ultralytics and co...
https://github.com/huggingface/candle/issues/2944
open
[]
2025-05-05T15:21:48Z
2025-05-05T18:46:52Z
0
flutter-painter
pytorch/rl
2,939
PPO with composite distribution crashes before giving the warning on how to fix it.
This block https://github.com/pytorch/rl/blob/795e362cb82b3539faa30db771e5b2f1d50f8c8a/torchrl/objectives/ppo.py#L601-L602 causes ```AttributeError: 'Tensor' object has no attribute 'batch_size'``` before the warning on how to fix it is shown. https://github.com/pytorch/rl/blob/795e362cb82b3539faa30db771e5b2f1d50f8...
https://github.com/pytorch/rl/issues/2939
closed
[]
2025-05-04T23:31:53Z
2025-05-20T10:09:02Z
null
siegelaaron94
huggingface/diffusers
11,489
Error when I'm trying to train a Flux lora with train_dreambooth_lora_flux_advanced
### Describe the bug Hi! I'm trying to train my LoRA model with the [train_dreambooth_lora_flux_advanced](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py) script. When I try to train my model with the prior preservation tag, I get an error. ...
https://github.com/huggingface/diffusers/issues/11489
open
[ "bug", "training" ]
2025-05-04T21:19:23Z
2025-07-06T19:38:40Z
4
Mnwa
huggingface/diffusers
11,488
Sincerely Request The Support for Flux PAG Pipeline
When can the PAG pipeline for Flux be supported?
https://github.com/huggingface/diffusers/issues/11488
open
[ "help wanted", "Good second issue" ]
2025-05-04T11:12:05Z
2025-05-16T04:53:52Z
2
PlutoQyl
huggingface/text-generation-inference
3,208
Can I use TGI in a Supercomputer?
I want to generate somewhere around 1 trillion tokens and I was thinking of using TGI on a European supercomputer. Is there a way to achieve this without relying on Docker, by downloading the model natively and then loading it on the compute node and serving it? @Wauplin
https://github.com/huggingface/text-generation-inference/issues/3208
open
[]
2025-05-03T15:13:24Z
2025-05-15T08:55:08Z
4
sleepingcat4
pytorch/xla
9,082
Educate users on mat mul precision
Matmul precision will be exposed idiomatically to PyTorch in #9081.
https://github.com/pytorch/xla/issues/9082
closed
[ "documentation" ]
2025-05-02T20:03:54Z
2025-05-21T20:34:32Z
0
yaoshiang
huggingface/transformers.js
1,305
Trying to convert dinov2 model
### Question I tried to convert [this model](https://huggingface.co/nguyenkhoa/dinov2_Liveness_detection_v2.2.3) using the following command: `python -m scripts.convert --model_id nguyenkhoa/dinov2_Liveness_detection_v2.2.3 --quantize --task image-classification` but got the following error: ``ValueError: Trying t...
https://github.com/huggingface/transformers.js/issues/1305
closed
[ "question" ]
2025-05-01T19:56:28Z
2025-05-05T22:18:48Z
null
jdp8
pytorch/executorch
10,593
Advice on how to run the training example in Android
Hello Team, We have followed https://pytorch.org/executorch/main/using-executorch-android.html#building-from-source to build the "aar" file. We can run the inference example on Android. We are wondering how to run the training example on Android. Are there some flags / some config we need to add to the building pro...
https://github.com/pytorch/executorch/issues/10593
open
[ "module: android", "module: training" ]
2025-04-30T19:51:03Z
2025-07-15T22:59:28Z
null
YuanTingHsieh
huggingface/datasets
7,545
Networked Pull Through Cache
### Feature request Introduce a HF_DATASET_CACHE_NETWORK_LOCATION configuration (e.g. an environment variable) together with a companion network cache service. Enable a three-tier cache lookup for datasets: 1. Local on-disk cache 2. Configurable network cache proxy 3. Official Hugging Face Hub ### Motivation - Dis...
https://github.com/huggingface/datasets/issues/7545
open
[ "enhancement" ]
2025-04-30T15:16:33Z
2025-04-30T15:16:33Z
0
wrmedford
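The three-tier lookup proposed in the feature request above can be sketched as a small class. The tier interfaces (dict-like stores) and class name here are hypothetical, not part of the `datasets` API; they only illustrate the lookup and backfill order.

```python
class TieredCache:
    """Sketch of a pull-through cache: local disk, then a configurable
    network cache proxy, then the official Hub as source of truth."""

    def __init__(self, local, network, hub):
        # each tier is any dict-like store; `hub` is the authoritative one
        self.local, self.network, self.hub = local, network, hub

    def get(self, key):
        if key in self.local:            # tier 1: local on-disk cache
            return self.local[key]
        if key in self.network:          # tier 2: network cache proxy
            value = self.network[key]
            self.local[key] = value      # backfill tier 1 on the way down
            return value
        value = self.hub[key]            # tier 3: official Hugging Face Hub
        self.network[key] = value        # pull-through: populate the proxy
        self.local[key] = value
        return value
```

Once a dataset has been pulled through once, later lookups on the same machine hit tier 1 and never touch the network, which is the disconnected-environment property the request is after.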
huggingface/transformers
37,895
How to backpropagate the gradients of the embeddings output by the image processor to the input image tensor?
### Feature request I'm using the processor of Qwen2.5-VL, and the image processor within it should be Qwen2ImageProcessor. The input image I provide is a PyTorch tensor with gradients, and the processor outputs the feature embeddings of the image. How can I ensure that the gradient flow is not interrupted during this...
https://github.com/huggingface/transformers/issues/37895
open
[ "Feature request" ]
2025-04-30T15:06:40Z
2025-05-01T13:36:24Z
null
weiminbai
pytorch/xla
9,063
Add explanation of Clang usage after Hermetic CUDA.
## 📚 Documentation Follow up from: #8665 and #9053 After #8665 is merged, we should add an explanation on the default usage of Clang due to the adoption of Hermetic CUDA. This is somewhat related to #9061.
https://github.com/pytorch/xla/issues/9063
open
[ "documentation" ]
2025-04-30T12:17:26Z
2025-04-30T12:18:12Z
0
ysiraichi
huggingface/diffusers
11,466
Finetuning of flux or scratch training
I am new to this field and wanted to know if there is any code available for training Flux from scratch, or even finetuning the existing model. All I see is the DreamBooth or LoRA finetuning.
https://github.com/huggingface/diffusers/issues/11466
open
[]
2025-04-30T07:45:49Z
2025-05-30T16:32:33Z
2
preethamp0197
pytorch/executorch
10,571
where is pytorch_tokenizers.tools.llama2c.convert?
### 🐛 Describe the bug I cannot find pytorch_tokenizers.tools.llama2c.convert with the command "python -m pytorch_tokenizers.tools.llama2c.convert -t ../tokenizer.model -o ../tokenizer.bin" according to the docs. The env I use is built by "pip install executorch" ### Versions Collecting environment information... PyTor...
https://github.com/pytorch/executorch/issues/10571
closed
[ "module: llm" ]
2025-04-30T03:15:59Z
2025-05-08T06:20:26Z
null
hayyaw
pytorch/xla
9,056
Fix the contribution instructions for creating PRs
## 📚 Documentation https://github.com/pytorch/xla/blob/master/CONTRIBUTING.md suggests cloning the original PyTorch/XLA repo directly. However, doing so makes it impossible to create PRs later unless the user has write permission to the repo. Instead, it should ask users to fork the repo first, and then work aga...
https://github.com/pytorch/xla/issues/9056
closed
[ "documentation" ]
2025-04-29T18:23:43Z
2025-05-07T13:37:33Z
0
zhanyong-wan
huggingface/hf-hub
104
What is this software licensed under?
Would this also be Apache 2 like in https://github.com/huggingface/huggingface_hub/? Thanks!
https://github.com/huggingface/hf-hub/issues/104
closed
[]
2025-04-29T16:27:10Z
2025-06-16T09:09:43Z
null
nathankw
pytorch/vision
9,042
Make the C++ backend of the torchvision wheel usable for C++ development
### 🚀 The feature Currently, the torchvision wheel packages the C++ DSO as `_C.so` for python bindings. We'd like the python wheel to have the C++ backend be standalone, so it can be extracted/used by C++ applications, like is done today for the PyTorch wheels. This means: - export DSO as `libtorchvision.so` inste...
https://github.com/pytorch/vision/issues/9042
open
[]
2025-04-29T15:04:25Z
2025-05-19T23:58:53Z
5
agirault
huggingface/optimum
2,248
Export cli export RT-Detr
```python Traceback (most recent call last): File "/usr/local/bin/optimum-cli", line 8, in <module> sys.exit(main()) ^^^^^^ File "/usr/local/lib/python3.11/dist-packages/optimum/commands/optimum_cli.py", line 208, in main service.run() File "/usr/local/lib/python3.11/dist-packages/optimum/com...
https://github.com/huggingface/optimum/issues/2248
closed
[]
2025-04-29T08:23:17Z
2025-05-05T08:03:21Z
1
TheMattBin
huggingface/open-muse
144
how to set the minimum learning rate for cosine lr_scheduler?
@dataclass class TrainingArguments(transformers.TrainingArguments): gradient_checkpointing_kwargs={'use_reentrant':False} lr_scheduler_kwargs={ "eta_min":1e-6, "num_cycles":1, } It did not work. How to set the minimum learning rate in transformers-4.51.3?
https://github.com/huggingface/open-muse/issues/144
closed
[]
2025-04-29T02:18:59Z
2025-04-29T02:20:42Z
null
xubuvd
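On the minimum-learning-rate question above: the stock transformers cosine schedule decays to zero and does not take an `eta_min`, so the kwargs in the snippet are silently ignored (newer transformers versions add a `cosine_with_min_lr` scheduler type whose `lr_scheduler_kwargs` accept a `min_lr`; if that is unavailable, the floor can be applied manually). A minimal sketch of a cosine decay with a floor, in plain Python:

```python
import math

def cosine_lr(step, total_steps, base_lr, eta_min=1e-6, num_cycles=0.5):
    """Cosine decay from base_lr down to a floor of eta_min.
    With the default half cycle, lr goes base_lr -> eta_min over
    total_steps; this mimics the usual schedule shape but adds the
    floor the stock scheduler lacks."""
    progress = min(step / max(1, total_steps), 1.0)
    cosine = 0.5 * (1.0 + math.cos(math.pi * 2.0 * num_cycles * progress))
    return eta_min + (base_lr - eta_min) * cosine
```

A function like this can be wrapped in `torch.optim.lr_scheduler.LambdaLR` (returning `cosine_lr(step, ...) / base_lr` as the multiplier) and passed to the trainer as a custom scheduler.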
pytorch/torchchat
1,536
Improve Tokenizer New Type Onboarding
### 🚀 The feature, motivation and pitch --- As a sequel to https://github.com/pytorch/torchchat/issues/1518 where we added an enum for tokenizer types to simplify `TokenizerArgs __post_init__`, we need to further improve it to simplify new tokenizer type onboarding: ### Tasks --- - Move TokenizerType to a centralized...
https://github.com/pytorch/torchchat/issues/1536
open
[ "good first issue", "actionable", "triaged" ]
2025-04-28T18:31:33Z
2025-05-13T17:54:18Z
3
zhenyan-zhang-meta
huggingface/lerobot
1,045
Inefficient Config Structure without Hydra
Hi, I notice that the repo used Hydra before, which can modify some config param or create new config yaml files. However, this was deprecated. I wonder how to efficiently modify a new config file for policy without writing these params in the command line each time?
https://github.com/huggingface/lerobot/issues/1045
closed
[ "question", "configuration", "stale" ]
2025-04-28T11:48:08Z
2025-11-18T02:30:46Z
null
jiangranlv
pytorch/torchtitan
1,150
[Feature] Support validation
For some workloads, it is really important to perform validation on a different dataset every n iterations. This seems reasonably straight forward to add to the training loop and training specs, while being kept as optional. Is there any plan to support this functionality in the near future?
https://github.com/pytorch/torchtitan/issues/1150
closed
[]
2025-04-28T11:01:47Z
2025-08-21T03:17:19Z
4
CarlosGomes98
huggingface/diffusers
11,432
`.from_pretrained` `torch_dtype="auto"` argument not working as expected
### Describe the bug Hey dear diffusers team, thanks a lot for all your hard work! I would like to make use of the `torch_dtype="auto"` keyword argument when loading a model/pipeline as specified [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained.t...
https://github.com/huggingface/diffusers/issues/11432
closed
[ "bug" ]
2025-04-28T04:31:26Z
2025-05-13T01:42:37Z
3
johannaSommer
huggingface/lerobot
1,041
image transform of pi0 is inconsistent with openpi
Thank you for the pi0 work in lerobot. However, I found that the image transform is quite different from openpi. image transform of lerobot pi0: ![Image](https://github.com/user-attachments/assets/6ff30d08-bc84-4005-8cb9-adc917f9817e) image transform of openpi: ![Image](https://github.com/user-attachments/assets/75845f92-d5...
https://github.com/huggingface/lerobot/issues/1041
closed
[ "question", "policies", "stale" ]
2025-04-28T03:08:10Z
2025-11-20T02:30:12Z
null
wushandinghua
pytorch/torchtitan
1,147
[Question] FSDP+TP CUDA_DEVICE_MAX_CONNECTIONS
In Megatron repo https://github.com/NVIDIA/Megatron-LM/blob/4429e8ebe21fb011529d7401c370841ce530785a/megatron/training/arguments.py#L779 It’s recommended that FSDP should use larger values of `CUDA_DEVICE_MAX_CONNECTIONS` but Megatron TP requires it to be 1. Is it also the case for torch implementation of TP using DTe...
https://github.com/pytorch/torchtitan/issues/1147
open
[ "documentation", "question", "module: fsdp" ]
2025-04-27T20:48:50Z
2025-04-29T21:54:07Z
null
ChenchaoZhao
huggingface/diffusers
11,423
Lora Hotswap no clear documentation
Hello everyone. Here is the scenario I have. I have, say, 10 LoRAs that I would like to load and use depending on the request. Option one: using `load_lora_weights` - reads from the disk and moves to device: expensive operation. Option two: load all LoRAs and set the weights of non-used LoRAs with the `set_adapters` method to 0....
https://github.com/huggingface/diffusers/issues/11423
open
[ "stale" ]
2025-04-26T13:44:08Z
2025-05-26T15:03:03Z
2
vahe-toffee
huggingface/diffusers
11,419
How to know that "Textual inversion" file I have loaded and not turn it on?
Reviewing the documentation, I understand loading a textual inversion with: # Add embeddings pipeline.load_textual_inversion("sd-concepts-library/cat-toy") # Remove all token embeddings pipeline.unload_textual_inversion() # Remove just one token pipeline.unload_textual_inversion("<moe-bius>") But how do you know which are c...
https://github.com/huggingface/diffusers/issues/11419
closed
[ "stale" ]
2025-04-25T17:18:07Z
2025-05-27T18:09:45Z
null
Eduardishion
huggingface/diffusers
11,418
How to add flux1-fill-dev-fp8.safetensors
### Describe the bug Hi! How to use flux1-fill-dev-fp8.safetensors in diffusers? Now I have code: ``` def init_pipeline(device: str): logger.info(f"Loading FLUX Inpaint Pipeline (Fill‑dev) on {device}") pipe = FluxFillPipeline.from_pretrained( "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=t...
https://github.com/huggingface/diffusers/issues/11418
closed
[ "bug" ]
2025-04-25T14:58:08Z
2025-04-28T19:06:17Z
null
SlimRG
huggingface/optimum
2,242
[onnx] What are the functions of the generated files by optimum-cli?
### System Info ```shell I try to use **optimum-cli** to export the ONNX file for llama, but I don't get a single ONNX file as expected; instead I get a lot of files, so I don't know what they are used for: (MindSpore) [ma-user llama149]$ls onnx_model/ config.json generation_config.json model.onnx model.onnx_data special_token...
https://github.com/huggingface/optimum/issues/2242
closed
[]
2025-04-25T13:12:35Z
2025-04-28T09:18:06Z
1
vfdff
huggingface/diffusers
11,417
attributeerror: 'distributeddataparallel' object has no attribute 'dtype'. did you mean: 'type'?
### Describe the bug attributeerror: 'distributeddataparallel' object has no attribute 'dtype'. did you mean: 'type'? ### Reproduction export MODEL_NAME="black-forest-labs/FLUX.1-dev" export OUTPUT_DIR="trained-flux-dev-dreambooth-lora" accelerate launch train_dreambooth_lora_flux.py \ --pretrained_model_name_or_...
https://github.com/huggingface/diffusers/issues/11417
open
[ "bug", "stale" ]
2025-04-25T03:30:52Z
2025-05-25T15:02:30Z
1
asjqmasjqm
huggingface/datasets
7,536
[Errno 13] Permission denied: on `.incomplete` file
### Describe the bug When downloading a dataset, we frequently hit the below Permission Denied error. This looks to happen (at least) across datasets in HF, S3, and GCS. It looks like the `temp_file` being passed [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L412) can somet...
https://github.com/huggingface/datasets/issues/7536
closed
[]
2025-04-24T20:52:45Z
2025-05-06T13:05:01Z
4
ryan-clancy
pytorch/pytorch
152,100
What is the difference between normal_tensor.storage().use_count() and viewed_tensor's?
In the test2() below, why is b.storage().use_count() still 2 even when I deleted the source tensor a? ``` import torch def test1(): print("=============== test 1 ===============") a = torch.empty(size=(17, 32, 128, 16), dtype=torch.float16) b = a.view(-1) # b.storage().use_count() is 2 def test2(): ...
https://github.com/pytorch/pytorch/issues/152100
closed
[]
2025-04-24T12:54:21Z
2025-04-25T07:39:39Z
null
CLiqing
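On the `use_count` question above: a view holds its own reference to the shared storage, so deleting the source tensor does not release the storage. This is not PyTorch internals, just the general mechanism, which a plain-Python analogy (hypothetical `Tensor`/`Storage` classes) can illustrate:

```python
import sys

class Storage:
    """Stand-in for a tensor's underlying data buffer."""

class Tensor:
    def __init__(self, storage):
        self.storage = storage

    def view(self):
        # a view shares the same storage object rather than copying it
        return Tensor(self.storage)

a = Tensor(Storage())
b = a.view()
# the storage is referenced by a, by b, and by getrefcount's own argument
before = sys.getrefcount(a.storage)
del a
# the view still holds the storage, so only a's reference went away
after = sys.getrefcount(b.storage)
assert after == before - 1
```

In the same way, `b` in the issue's `test2()` keeps the underlying storage alive after `del a`, so the storage's use count reflects the view's reference rather than dropping with the source tensor.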
pytorch/audio
3,901
2.7.0 release tag
### 🚀 The feature Although there is a 2.7.0 release on PyPI, there is no release of the source code on GitHub. Can we get a 2.7.0 release tagged? ### Motivation, pitch Package managers like Spack build from source code, not from pre-compiled wheels. This is especially important for libraries like torchaudio which g...
https://github.com/pytorch/audio/issues/3901
closed
[]
2025-04-24T09:54:48Z
2025-04-24T15:25:16Z
2
adamjstewart
pytorch/torchtitan
1,141
Error when using an AMD server (MI250)
Hi, when I use torchtitan on an AMD server (MI250), it reports the following errors: ![Image](https://github.com/user-attachments/assets/54046f8f-f183-4006-99b5-1730cae0bf1b). Does torchtitan support AMD servers like the MI250? Thanks.
https://github.com/pytorch/torchtitan/issues/1141
closed
[]
2025-04-24T07:48:10Z
2025-04-25T08:46:06Z
5
StillKeepTry
huggingface/diffusers
11,396
How to convert the hidream lora trained by diffusers to a format that comfyui can load?
### Describe the bug The HiDream LoRA trained by diffusers can't be loaded in ComfyUI; how can I convert it? ### Reproduction No ### Logs ```shell ``` ### System Info No ### Who can help? _No response_
https://github.com/huggingface/diffusers/issues/11396
closed
[ "bug", "stale" ]
2025-04-23T13:13:34Z
2025-06-23T09:49:19Z
null
yinguoweiOvO
huggingface/candle
2,916
how to save and load the model
I just use varmap.save to save the varmap, but when I use varmap.load I get an empty varmap. Is there any way to save the trained model?
https://github.com/huggingface/candle/issues/2916
closed
[]
2025-04-23T11:10:04Z
2025-04-24T02:25:37Z
null
liguheng
huggingface/tokenizers
1,768
How to debug tokenizers with python?
Hi, I have a technical question. After installing transformers via pip, I successfully installed tokenizers==0.21.1 and transformers==4.49.0. When running the code: `tokenizer = AutoTokenizer.from_pretrained("../Qwen2") # (tokenizer configs in this folder)` `tokenizer.encode(data)` I want to trace the program flow to ...
https://github.com/huggingface/tokenizers/issues/1768
open
[]
2025-04-23T09:37:20Z
2025-04-30T14:11:11Z
null
JinJieGan
pytorch/torchtitan
1,133
How to correctly use FSDP2 do mixed precision training?
Hi, I am currently doing this way: ```py model = AutoModel.from_pretrained(...) # make sure model is in fp32, so we have a fp32 mater weight in optimizer model.to(torch.float32) mp_policy = MixedPrecisionPolicy( param_dtype=torch.bfloat16, reduce_dtype=torch.float32, ) fsdp_kwargs = { "reshard_after_f...
https://github.com/pytorch/torchtitan/issues/1133
closed
[]
2025-04-23T06:55:40Z
2025-04-27T10:03:20Z
null
KimmiShi
pytorch/torchtitan
1,132
FSDP2 reduce_scatter_reduce_op for context parallelism
Hi, FSDP2 reduce_scatter by default seems to take the average over the entire shard world, which consists of dp_shard and cp. Averaging gradients over dp_shard makes sense, but I wonder if sum is the better reduce op for CP? Logically, it seems to me gradient should be agnostic to the choice of CP. Thanks!
https://github.com/pytorch/torchtitan/issues/1132
closed
[ "question" ]
2025-04-23T01:44:19Z
2025-04-24T16:39:05Z
null
dingqingy
pytorch/xla
9,026
Where to find TPU-dependent compile-pipeline/optimizations in XLA?
## ❓ Questions and Help I'm diving into the XLA source code to understand the compilation pipeline for the TPU backend and any TPU-dependent optimizations. However, I couldn't find details about the TPU compilation pipeline in xla/service dir, while CPU and GPU pipelines seem more visible. I see some cost-model-based ...
https://github.com/pytorch/xla/issues/9026
closed
[]
2025-04-23T01:42:17Z
2025-04-23T12:07:58Z
0
Bolzano983
huggingface/diffusers
11,390
Better image interpolation in training scripts follow up
With https://github.com/huggingface/diffusers/pull/11206 we did a small quality improvement for the SDXL Dreambooth LoRA script by making `LANCZOS` the default interpolation mode for the image resizing. This issue is to ask for help from the community to bring this change to the other training scripts, specially for t...
https://github.com/huggingface/diffusers/issues/11390
closed
[ "good first issue", "contributions-welcome" ]
2025-04-23T00:04:10Z
2025-05-05T16:35:18Z
20
asomoza
huggingface/lerobot
1,019
How to resume dataset creation after interruption instead of starting from scratch?
Recently our dataset creation + upload got interrupted due to an error not related to LeRobot. However, I have not been able to launch the dataset creation using the information already processed. My cache folder shows the data, meta, and videos folders, and I was able to determine using the episodes.jsonl file in meta...
https://github.com/huggingface/lerobot/issues/1019
closed
[]
2025-04-22T21:30:12Z
2025-04-22T21:45:00Z
null
Anas-7
huggingface/peft
2,508
How to save a custom module into adapter_model.safetensors when integrating a new PEFT method
I just don't know where to save and load the module, or how to mark which modules need to be saved. For example, we want a MoE of LoRA, where the multiple LoRAs and a router are the trainable parts and need to be saved.
https://github.com/huggingface/peft/issues/2508
closed
[]
2025-04-22T15:46:39Z
2025-04-30T11:01:58Z
null
AaronZLT
huggingface/lerobot
1,015
How to efficiently collect and standardize datasets from multiple Gymnasium environments?
Hello, I am studying how to collect datasets from various Gymnasium environments for reinforcement learning and imitation learning experiments. Currently, I can collect some data from real environments, but how to collect data from Gymnasium?
https://github.com/huggingface/lerobot/issues/1015
closed
[ "question", "dataset", "good first issue" ]
2025-04-22T08:50:34Z
2025-10-17T11:16:09Z
null
ybu-lxd
huggingface/lerobot
1,013
When creating dataset, how to save_episode with existing video?
For video with compatible frames, height and width that is recorded/rendered elsewhere, how can I add it to an episode directly without redundant decode-encode round-trip?
https://github.com/huggingface/lerobot/issues/1013
closed
[ "enhancement", "dataset", "stale" ]
2025-04-22T04:05:10Z
2025-12-25T02:35:25Z
null
jjyyxx
huggingface/lerobot
1,012
why chunk_size not used in PI0?
https://github.com/huggingface/lerobot/blob/b43ece89340e7d250574ae7f5aaed5e8389114bd/lerobot/common/policies/pi0/modeling_pi0.py#L658 Is it more meaningful and reasonable here to change `n_action_steps` to `chunk_size`, since `chunk_size` means prediction action horizon and `n_action_steps` means action steps actually...
https://github.com/huggingface/lerobot/issues/1012
closed
[ "question", "policies", "stale" ]
2025-04-22T03:43:38Z
2025-11-04T02:30:18Z
null
feixyz10
huggingface/huggingface_hub
3,020
How to run apps in local mode? local_files_only is failing
The app is running perfectly fine when internet is available. All models downloaded into `os.environ['HF_HOME'] = os.path.abspath(os.path.realpath(os.path.join(os.path.dirname(__file__), './hf_download')))` When I set it like below ``` # Set local_files_only based on offline mode local_files_only = args.offline if local_...
https://github.com/huggingface/huggingface_hub/issues/3020
closed
[ "bug" ]
2025-04-21T23:46:06Z
2025-04-22T09:24:57Z
null
FurkanGozukara
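For the offline question above, a minimal sketch of forcing the Hub libraries into offline mode via environment variables. `HF_HOME`, `HF_HUB_OFFLINE`, and `TRANSFORMERS_OFFLINE` are real variables honored by huggingface_hub/transformers; the key point is that they must be set before those libraries are imported, since they are read at import time. The helper name and cache path are illustrative.

```python
import os

def configure_offline(cache_dir):
    """Point the HF cache at a local folder and force offline mode.
    Must run before importing huggingface_hub / transformers."""
    os.environ["HF_HOME"] = os.path.abspath(cache_dir)
    os.environ["HF_HUB_OFFLINE"] = "1"        # skip all Hub network calls
    os.environ["TRANSFORMERS_OFFLINE"] = "1"  # same for transformers
    return os.environ["HF_HOME"]

configure_offline("./hf_download")
```

With these set, `from_pretrained(..., local_files_only=True)` resolves purely against the local cache, provided every needed file was downloaded beforehand.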
pytorch/torchtitan
1,126
fully_shard() for huggingface model: pytorch caches too much GPU memory
Dear Community, I'm working on fine-tuning the Qwen2-VL model using `fully_shard()` and wrote a script for it. However, I noticed that GPU memory usage stays high (around 50GB to 60GB) even as I scale up the number of GPUs. Besides, it will run into OOM when I try to fine tune 72B model with 128 GPUs. I'm wondering i...
https://github.com/pytorch/torchtitan/issues/1126
open
[ "question", "module: fsdp" ]
2025-04-21T21:37:43Z
2025-05-13T05:09:52Z
null
mingdianliu
pytorch/pytorch
151,829
profile for torch.add(x, x) where x is a zero-sized tensor looks bogus
```py from torch.profiler import profile, record_function, ProfilerActivity import torch x = torch.randn(0) with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof: with record_function("model_inference"): x + x print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10)...
https://github.com/pytorch/pytorch/issues/151829
closed
[ "oncall: profiler" ]
2025-04-21T20:53:57Z
2025-06-07T23:58:54Z
null
zou3519
huggingface/finetrainers
378
How to finetune CogVideoX1.5-5B T2V LoRA?
Hello. I am still unfamiliar with the finetuning process. I want to finetune CogVideoX1.5-5B T2V with LoRA. I have a single RTX 4090. I tried to re-run the bash script "finetrainers\examples\training\sft\cogvideox\crush_smol_lora\train.sh" with my own dataset and ended up with the error message `train.sh: line 130: accelerate: comm...
https://github.com/huggingface/finetrainers/issues/378
open
[]
2025-04-21T17:17:08Z
2025-04-24T06:24:06Z
null
MaulanaYusufIkhsanRobbani
huggingface/trl
3,333
How can I set the dataset to not shuffle? It seems there is no such option.
I'm using GRPOTrainer for training, and based on the logs I've printed, it seems that the dataset is being shuffled. However, the order of samples in the dataset is very important to me, and I don't want it to be shuffled. What should I do? I've checked the documentation but couldn't find any parameter to control this.
https://github.com/huggingface/trl/issues/3333
closed
[ "❓ question", "🏋 GRPO" ]
2025-04-21T11:11:53Z
2025-04-21T21:34:33Z
null
Tuziking
pytorch/ao
2,086
How to automatically install the latest TorchAO nightly wheel
When I try to install TorchAO the same way I install the nightly torch wheel (pip3 install torchao --index-url https://download.pytorch.org/whl/nightly/cpu), I end up getting version 0.10.0 of TorchAO, instead of the expected https://download.pytorch.org/whl/nightly/cpu/torchao-0.11.0.dev20250418+cpu-py3-none-any.whl f...
https://github.com/pytorch/ao/issues/2086
open
[ "triaged", "distribution" ]
2025-04-21T06:48:43Z
2025-04-29T22:28:47Z
null
MingxuZh
huggingface/trl
3,331
how to run multi-adapter PPO training in TRL==0.16.1 ?
In `TRL==0.11.0`, we can use multi-adapter to train PPO model like: - $\pi_\text{sft}$ sft model as base model - $\pi_\text{sft} + \text{LoRA}_\text{rm}$ as reward model - $\pi_\text{sft} + \text{LoRA}_\text{policy}$ as policy model - $\pi_\text{sft} + \text{LoRA}_\text{critic}$ as value model in v0.16.0 how to run...
https://github.com/huggingface/trl/issues/3331
closed
[ "❓ question", "🏋 PPO", "🏋 SFT" ]
2025-04-21T06:26:32Z
2025-06-17T08:59:11Z
null
dhcode-cpp
huggingface/huggingface_hub
3,019
How to solve "Spaces stuck in Building" problems
### Describe the bug Public spaces may get stuck in Building after restarting; error log as follows: build error Unexpected job error ERROR: failed to push spaces-registry.huggingface.tech/spaces/:cpu--: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-: 401 Una...
https://github.com/huggingface/huggingface_hub/issues/3019
closed
[ "bug" ]
2025-04-21T03:11:11Z
2025-04-22T07:50:01Z
null
ghost
huggingface/datasets
7,530
How to solve "Spaces stuck in Building" problems
### Describe the bug Public spaces may get stuck in Building after restarting; error log as follows: build error Unexpected job error ERROR: failed to push spaces-registry.huggingface.tech/spaces/*:cpu-*-*: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-*: 401...
https://github.com/huggingface/datasets/issues/7530
closed
[]
2025-04-21T03:08:38Z
2025-11-11T00:57:14Z
null
ghost
huggingface/lerobot
1,005
[pi0] n_action_step vs chunk_size
In modeling_pi0.py, the config variable `chunk_size` is never used. Instead, the action queue is set to be the size of `n_action_step`, and the training loss is also calculated on the actions of size `n_action_step`. But I thought what should happen is that the model would predict actions of length `chunk size` (and ...
https://github.com/huggingface/lerobot/issues/1005
closed
[ "question", "policies", "stale" ]
2025-04-20T04:00:23Z
2025-11-07T02:30:27Z
null
IrvingF7
pytorch/pytorch
151,746
[AotInductor][Export][Triton] how to export custom triton kernels when use torch.export.export
### 🐛 Describe the bug Our framework is based on torch and includes some custom triton kernels. In the inference phase, we try to use a different GPU type (such as training on H100, inference on L40), so we should load the exported model and call aoti_compile_and_package to generate an AOT model based on the inference GPU, but an error w...
https://github.com/pytorch/pytorch/issues/151746
open
[ "oncall: pt2", "export-triaged", "oncall: export", "module: aotinductor", "module: user triton" ]
2025-04-19T13:26:03Z
2025-04-25T23:11:04Z
null
zzq96
pytorch/executorch
10,314
This document (https://pytorch.org/executorch/stable/demo-apps-android.html#running-the-app) is out of date. Where is examples/demo-apps/android/ExecuTorchDemo?
https://pytorch.org/executorch/stable/demo-apps-android.html#running-the-app ![Image](https://github.com/user-attachments/assets/73116d1b-fb01-4263-9adc-ae1aeb8e7a06) ![Image](https://github.com/user-attachments/assets/7076302d-364d-4b71-b990-6cec92fe52a0) cc @mergennachin @iseeyuan @lucylq @helunwencser @tarun292 @...
https://github.com/pytorch/executorch/issues/10314
closed
[ "module: examples" ]
2025-04-19T09:36:52Z
2025-12-23T20:39:22Z
null
Kennems
huggingface/lerobot
1,000
How to implement a new policy?
How can I integrate a new policy (e.g., OpenVLA) into LeRobot, and specifically, which files do I need to modify?
https://github.com/huggingface/lerobot/issues/1000
closed
[ "enhancement", "policies" ]
2025-04-19T08:53:48Z
2025-07-29T14:30:18Z
null
Elycyx
huggingface/prettier-plugin-vertical-align
2
How to use
https://github.com/huggingface/prettier-plugin-vertical-align#installation Add plugins: ["@huggingface/prettier-plugin-vertical-align"] to your .prettierrc file. Are you sure it should go in the .prettierrc file?
https://github.com/huggingface/prettier-plugin-vertical-align/issues/2
closed
[]
2025-04-19T04:15:29Z
2025-04-24T02:53:42Z
null
twotwoba
pytorch/xla
9,002
Update debugger documentation to demonstrate lldb
It's possible lldb is faster than gdb. The feature request is to explore whether that is true, and if so, write docs on how to use lldb on the command line and lldb in VSCode. This is an enhancement of #8997
https://github.com/pytorch/xla/issues/9002
open
[ "documentation" ]
2025-04-18T16:28:50Z
2025-04-21T12:33:58Z
0
yaoshiang
huggingface/lerobot
997
how to convert pi0 fast
I just ran into pi0 conversion; how do I convert pi0 fast? ![Image](https://github.com/user-attachments/assets/ca6b8c52-4000-478e-88a0-501f0ce3c205)
https://github.com/huggingface/lerobot/issues/997
closed
[ "question" ]
2025-04-18T14:27:29Z
2025-10-14T14:06:30Z
null
ximiluuuu
huggingface/diffusers
11,359
[Feature request] LTX-Video v0.9.6 15x faster inference than non-distilled model.
**Is your feature request related to a problem? Please describe.** No problem. This request is low priority, as and when time allows. **Describe the solution you'd like.** Please support the new release of LTX-Video 0.9.6 **Describe alternatives you've considered.** The original repo has support, but it is easier to use ...
https://github.com/huggingface/diffusers/issues/11359
closed
[]
2025-04-18T08:05:27Z
2025-05-09T16:03:34Z
6
nitinmukesh
pytorch/xla
8,997
Add guide to debugging
For now, it can cover just PyTorch pending #8996
https://github.com/pytorch/xla/issues/8997
closed
[ "documentation" ]
2025-04-17T18:30:31Z
2025-04-20T08:01:29Z
0
yaoshiang
huggingface/transformers.js
1,291
@xenova/transformers vs. @huggingface/transformers npm package
### Question It's pretty confusing to have both of these on npm. Which are we supposed to use? Can you please deprecate the one that we aren't supposed to use? (`npm deprecate`)
https://github.com/huggingface/transformers.js/issues/1291
open
[ "question" ]
2025-04-17T16:10:36Z
2025-10-24T10:19:03Z
null
nzakas
huggingface/accelerate
3,510
Accelerate Config Error - How to debug this?
### System Info ```Shell pip list absl-py 2.2.2 accelerate 1.6.0 annotated-types 0.7.0 bitsandbytes 0.45.5 diffusers 0.33.0.dev0 /data/roy/diffusers ftfy 6.3.1 huggingface-hub 0.30.2 numpy 2.2.4 nvidia-c...
https://github.com/huggingface/accelerate/issues/3510
closed
[]
2025-04-17T11:12:50Z
2025-05-19T08:46:12Z
null
KihongK
pytorch/TensorRT
3,478
❓ [Question] Is SAM2 supported when compiling with the Dynamo backend on JetPack 6.1 or 6.2?
## ❓ Question Will SAM2 be compatible with the Dynamo backend on JetPack 6.1/6.2? Are there any workarounds for the TensorRT version mismatch? ## What you have already tried Here are my attempts and the issues encountered; my device is a Jetson AGX Orin, and I only compile the ImageEncoder (Hiera & FPN which remove position_e...
https://github.com/pytorch/TensorRT/issues/3478
open
[ "question" ]
2025-04-17T08:32:07Z
2025-06-28T07:09:31Z
null
AyanamiReiFan
huggingface/diffusers
11,351
Why does the Wan i2v video processor always use the float32 datatype?
### Describe the bug I found image = self.video_processor.preprocess(image, height=height, width=width).to(device, dtype=torch.float32) https://github.com/huggingface/diffusers/blob/29d2afbfe2e09a4ee7cc51455e51ce8b8c0e252d/src/diffusers/pipelines/wan/pipeline_wan_i2v.py#L633 in pipeline_wan_i2v.py. Why is the datatype ...
https://github.com/huggingface/diffusers/issues/11351
closed
[ "bug" ]
2025-04-17T07:00:42Z
2025-05-07T03:48:24Z
2
DamonsJ
pytorch/xla
8,993
Is there a way to attach metadata to a layer in a way that is included in the StableHLO export?
## ❓ Questions and Help I am looking at a use case where metadata about a trained model's layers needs to be attached to the StableHLO export. I am using `exported_program_to_stablehlo` One option I had considered is exporting the data completely separately from `exported_program_to_stablehlo` (say, by writing some r...
https://github.com/pytorch/xla/issues/8993
open
[ "question", "stablehlo" ]
2025-04-17T06:04:47Z
2025-04-25T00:44:25Z
null
j2kun
huggingface/transformers
37,570
How to stream output audio from Qwen2.5-omni-7b
None of the Qwen2.5-omni-7b examples show how to stream output audio. By passing a streamer, I am able to get streaming text, but how can I get the streaming audio output?
https://github.com/huggingface/transformers/issues/37570
closed
[]
2025-04-17T04:16:35Z
2025-07-30T08:03:44Z
null
qinxuye
pytorch/tutorials
3,332
Tutorial mention of batch samples as features?
Hello, kindly confirm whether it is correct to say that batch_size=64 will give 64 features and 64 labels. Aren't there 28 by 28 features and 64 samples? <img width="903" alt="Image" src="https://github.com/user-attachments/assets/7fe5d741-58c9-404a-a181-145e2bbfc086" />
https://github.com/pytorch/tutorials/issues/3332
open
[]
2025-04-17T02:35:14Z
2025-04-17T02:35:58Z
0
monaja
pytorch/xla
8,986
When trying to run this code connected to a TPU in Google Colab, I got this error: AssertionError: 4 results for replica 0
## ❓ Questions and Help When trying to run this code in google colab: ```import os import torch_xla import torch_xla.core.xla_model as xm import torch_xla.distributed.xla_multiprocessing as xmp import torch_xla.runtime as xr import torchvision import multiprocessing as mp os.environ['TPU_NUM_DEVICES'] = '8' os.envir...
https://github.com/pytorch/xla/issues/8986
closed
[ "question", "xla:tpu" ]
2025-04-16T11:56:22Z
2025-04-18T12:11:07Z
null
Neckto0
huggingface/diffusers
11,339
How to multi-GPU WAN inference
Hi, I didn't find a multi-GPU inference example in the documentation. Can you give me an example, such as for Wan2.1-I2V-14B-720P-Diffusers? I would appreciate some help on that, thank you in advance
https://github.com/huggingface/diffusers/issues/11339
closed
[ "stale" ]
2025-04-16T10:22:41Z
2025-07-05T21:18:01Z
null
HeathHose
huggingface/trl
3,295
I have 2 GPUs, but training defaults to gpu:0. How to specify gpu:1 for training?
### Reproduction ```python from trl import ... ``` outputs: ``` Traceback (most recent call last): File "example.py", line 42, in <module> ... ``` ### System Info I have 2 GPUs, but training defaults to gpu:0. How to specify gpu:1 for training? ### Checklist - [x] I have checked that my issue isn't already filed (see ...
https://github.com/huggingface/trl/issues/3295
closed
[ "❓ question", "📱 cli" ]
2025-04-15T08:29:26Z
2025-04-24T19:46:37Z
null
Aristomd
huggingface/lerobot
981
How can I simulate robots without physical robots? How should I learn robot simulation? Do you have any good recommendations?
How can I simulate robots without physical robots? How should I learn robot simulation? Do you have any good recommendations? I am a beginner.
https://github.com/huggingface/lerobot/issues/981
closed
[ "question", "simulation" ]
2025-04-15T04:04:33Z
2025-10-17T11:19:34Z
null
harryhu0301
huggingface/diffusers
11,321
flux controlnet training README has a bug
### Describe the bug ![Image](https://github.com/user-attachments/assets/bc20df10-80b0-46fa-b013-799a3b1865b4) What are the controlnet config parameters? The text says num_single_layers = 10, but the code sets num_single_layers=0? ### Reproduction check readme file ### Logs ```shell ``` ### System Info diffusers ==0....
https://github.com/huggingface/diffusers/issues/11321
closed
[ "bug", "stale" ]
2025-04-15T01:30:58Z
2025-10-11T09:58:52Z
14
Johnson-yue
huggingface/agents-course
428
[QUESTION] Current schedule is non-sensical
First, the **best way to get a response fast is to ask the community** in our Discord server: https://www.hf.co/join/discord However, if you prefer you can ask here, please **be specific**. The course page states: > There’s a deadline for the certification process: all the assignments must be finished before May 1st...
https://github.com/huggingface/agents-course/issues/428
closed
[ "question" ]
2025-04-14T18:13:31Z
2025-04-28T06:51:58Z
null
mindcrime
pytorch/audio
3,899
Segmentation fault (core dumped) in torchaudio.io.AudioEffector
### 🐛 Describe the bug Occasionally, a core dump error may occur with a specific audio file as input, which a Python exception cannot capture. This error is rare, but when it does occur, the entire Python process will be killed. It only happens with some ”special audio”. Unfortunately, I did not find out what the sp...
https://github.com/pytorch/audio/issues/3899
open
[]
2025-04-14T13:20:04Z
2025-04-14T13:20:56Z
0
LiChenda
huggingface/lerobot
975
[Question] How to modify model & dataset to accept two input images in observation.image?
Hi, thank you for the great repo! I’ve been going through the first three examples, and now I’d like to explore training a diffusion policy with some customized input. Specifically: My goal: I want each observation.image to contain two images as input (they have the same shape as the original single image). I want t...
https://github.com/huggingface/lerobot/issues/975
closed
[ "dataset", "stale" ]
2025-04-14T08:35:47Z
2025-11-04T02:30:23Z
null
Keith-Luo
huggingface/candle
2,893
How to build a multi-node inference/training in candle?
Hi team, I'd like to have an example of multi-node inference/training in candle; where can I find it? Thanks :) -- Klaus
https://github.com/huggingface/candle/issues/2893
open
[]
2025-04-14T08:03:20Z
2025-04-14T08:03:20Z
null
k82cn
huggingface/chat-ui
1,795
Offline Custom Tools
Would it be possible to define/use tools that the LLMs can use in an offline state? "Tools must use Hugging Face Gradio Spaces as we detect the input and output types automatically from the [Gradio API](https://www.gradio.app/guides/sharing-your-app#api-page)." Is there any reason that the tools can't be hosted loca...
https://github.com/huggingface/chat-ui/issues/1795
open
[ "enhancement" ]
2025-04-14T02:41:19Z
2025-04-14T02:41:19Z
0
cr-intezra
huggingface/chat-ui
1,794
Docker Image and Local Install missing file/image/etc upload
I've used the chat-ui-db:latest image as well as cloning the repo, setting up mongo and npm install/run dev, and the UI I get does not have the icons or the ability to upload an image or file. It only has the web search button. This would be for release 0.9.4. Is there something in .env.local that I am missing to enable t...
https://github.com/huggingface/chat-ui/issues/1794
open
[]
2025-04-13T19:30:29Z
2025-04-13T19:30:29Z
0
cr-intezra
pytorch/audio
3,898
Forcing otherwise unsupported frequencies to be accepted
I'm trying to work with frequencies below 20 Hz, preferably at 18.98 Hz, but the documentation says it only supports above 4000, 8000, and 9000. Even so, is there a way to force torch to work with my desired frequency? Please
https://github.com/pytorch/audio/issues/3898
open
[]
2025-04-13T15:31:41Z
2025-04-13T15:31:41Z
0
andrewessel
pytorch/xla
8,968
Alternative to torch.select_mask
## ❓ Questions and Help Most of the time we can adapt routines to avoid graph recompilations; however, there are instances where this is a bit tricky. When computing a masked mean, we are currently using sum and valids as follows: ``` replaced = input_tensor*is_valid sum_valid = replaced.sum() n_valid = is_valid.sum...
https://github.com/pytorch/xla/issues/8968
closed
[ "question" ]
2025-04-13T14:38:55Z
2025-05-01T20:31:05Z
null
ttdd11
huggingface/optimum
2,228
Unable to convert an audio-to-audio model.
### Feature request ``` bash optimum-cli export onnx --model microsoft/speecht5_vc speecht5_vc_onnx/ ``` Output: ``` log The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling...
https://github.com/huggingface/optimum/issues/2228
closed
[ "Stale" ]
2025-04-13T00:50:26Z
2025-05-18T02:17:06Z
1
divinerapier
huggingface/lerobot
971
Can different robotic arms share the same dataset and model?
English: I currently have datasets and models for the Koch, SO100, and ALOHA robotic arms. Is it possible for these three arms to share the same dataset and model? If so, how should this be implemented? If not—given the significant hardware differences—what is the practical value of data sharing in this context? @Caden...
https://github.com/huggingface/lerobot/issues/971
closed
[ "question", "dataset", "stale" ]
2025-04-12T05:03:27Z
2025-10-17T12:06:45Z
null
ZhangWuWei
pytorch/TensorRT
3,469
❓ [Question] How do you export a triton kernel with a model to a serialized engine that can be run in C++?
## ❓ Question <!-- Your question --> How do you export a triton kernel with a model to a serialized engine that can be run in C++? ## What you have already tried Read through python examples. <!-- A clear and concise description of what you have already done. --> ## Environment > Build information about Torch-Tensor...
https://github.com/pytorch/TensorRT/issues/3469
open
[ "question" ]
2025-04-11T16:53:33Z
2025-12-12T01:58:55Z
null
cmgreen210
huggingface/autotrain-advanced
881
Accelerators: Error fetching data. How to troubleshoot
Getting this error message when trying to train my model using Autotrain: Accelerators: Error fetching data Error fetching training status My data file is a CSV and correctly formatted. What are possible ways to troubleshoot this problem? I'm new to fine-tuning, so I would love any assistance
https://github.com/huggingface/autotrain-advanced/issues/881
closed
[ "stale" ]
2025-04-11T16:04:12Z
2025-06-02T15:02:09Z
null
innerspacestudio
pytorch/torchtitan
1,093
Why is Shard(1) used in the ColwiseParallel for the LM head?
I found that ColwiseParallel here for the output linear layer has input_layout Shard(1). In that way, the input will be sharded across different devices in the sequence dimension, and the linear layer's output dimension (e.g., vocab dimension) is also distributed? Is that something desired? Because on my understandi...
https://github.com/pytorch/torchtitan/issues/1093
closed
[]
2025-04-11T11:20:02Z
2025-04-11T11:46:04Z
0
wimh966
pytorch/torchtitan
1,092
Step Time Increase Leading to NCCL Timeout with FSDP2
**Description** I am encountering an issue when using fsdp2 where step time significantly increases after a certain number of steps, leading to NCCL timeouts. Initially, each step takes around 2 seconds, as shown in the earlier logs. However, after reaching step 1800, most processes experience a noticeable increase in ...
https://github.com/pytorch/torchtitan/issues/1092
closed
[ "question" ]
2025-04-11T10:50:55Z
2025-04-14T05:24:10Z
null
xhwang22
pytorch/torchtitan
1,091
FSDP2 root level parameter management
Hi, I am curious about the design decision of managing both token embeddings and the final output layer at the root fsdp level instead of treating them as different layers like other transformer blocks? This coupled management seems to unshard the final output layer too early and reshard the token embedding too late ...
https://github.com/pytorch/torchtitan/issues/1091
closed
[ "question", "module: fsdp" ]
2025-04-11T01:54:57Z
2025-07-29T02:40:22Z
null
dingqingy