repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/lerobot | 1,080 | Update `control_sim_robot.py` to use the new configs | Adding this issue to track one of the TODOs of this MR #550
As of now, [this script](https://github.com/huggingface/lerobot/blob/8cfab3882480bdde38e42d93a9752de5ed42cae2/lerobot/scripts/control_sim_robot.py) is outdated; it does not use the new configuration classes. | https://github.com/huggingface/lerobot/issues/1080 | closed | [
"question"
] | 2025-05-07T11:37:47Z | 2025-06-19T14:04:11Z | null | jccalvojackson |
huggingface/Math-Verify | 53 | How to turn off error print? | When using multiprocessing, a lot of error messages are printed. | https://github.com/huggingface/Math-Verify/issues/53 | closed | [] | 2025-05-07T08:19:36Z | 2025-07-02T16:07:02Z | null | wenxueru |
pytorch/executorch | 10,745 | How to use tokenizer.json in ExecuTorch Android demo (without tokenizer.model)? | ### 📚 The doc issue
I'm trying to deploy a language or vision-language model on Android using the ExecuTorch Android demo app.
The model I'm working with only provides tokenizer.json, but the current Android implementation appears to expect a tokenizer.model file instead.
Is tokenizer.model mandatory for the ExecuTorch demo app?
If I only have a tokenizer.json file (from HuggingFace), is there any recommended way to convert or load it in the app?
### Suggest a potential alternative/fix
_No response_
cc @kirklandsign @cbilgin | https://github.com/pytorch/executorch/issues/10745 | closed | [
"triaged",
"module: android"
] | 2025-05-07T03:22:03Z | 2025-05-07T21:33:46Z | null | jordanqi |
huggingface/peft | 2,533 | Integrate TLoRA (Tri-Matrix LoRA) | ### Feature request
We would like to propose integrating a novel parameter-efficient fine-tuning method called **TLoRA (Tri-Matrix LoRA)** into the `peft` library. We believe TLoRA offers significant advantages in terms of parameter efficiency, making it a valuable addition to the PEFT ecosystem.
Our method is detailed in the paper: **https://arxiv.org/abs/2504.18735**
**What is TLoRA?**
TLoRA is a variation of LoRA that introduces a tri-matrix decomposition for the weight update matrix $\Delta W$. Instead of the standard $W + A B$, TLoRA uses $W + \alpha A B C $, where:
* $W$ is the original pre-trained weight matrix.
* $A$ is a fixed, non-trainable matrix (e.g., initialized randomly or using Kaiming/Xavier).
* $B$ is the _only_ trainable matrix.
* $C$ is another fixed, non-trainable matrix (similar initialization as A).
* $\alpha$ is a trainable scaling parameter.
The $\Delta W$ update is computed as the product of three matrices: a fixed input projection matrix $A$, a small trainable bottleneck matrix $B$, and a fixed output projection matrix $C$. Only matrix $B$ is updated during fine-tuning.
**TLoRA Implementation:**
The core idea can be represented in a layer similar to this (based on our implementation):
```python
import math

import torch
import torch.nn as nn


class TLoRALayer(nn.Module):
    def __init__(self, weight, bias, rank=32):
        super(TLoRALayer, self).__init__()
        row, column = weight.shape

        # Restore the original Linear layer
        if bias is None:
            self.linear = nn.Linear(column, row, bias=False)
            self.linear.load_state_dict({"weight": weight})
        else:
            self.linear = nn.Linear(column, row)
            self.linear.load_state_dict({"weight": weight, "bias": bias})

        # Create TLoRA weights with initialization
        self.random_A = nn.Parameter(
            torch.zeros(column, rank), requires_grad=False
        )  # First matrix, non-trainable
        nn.init.kaiming_normal_(self.random_A, a=math.sqrt(5))
        self.lora_B = nn.Parameter(torch.zeros(rank, rank))  # Second matrix (trainable)
        self.random_C = nn.Parameter(
            torch.zeros(rank, row), requires_grad=False
        )  # Third matrix, non-trainable
        nn.init.kaiming_normal_(self.random_C, a=math.sqrt(5))
        self.lora_scaling = nn.Parameter(torch.ones(1))
        self.dropout = nn.Dropout(0.5)

    def forward(self, input):
        # Standard linear transformation
        x = self.linear(input)
        # Low-rank adaptation with tri-matrix TLoRA,
        # using the scaling to control the LoRA output
        y = self.lora_scaling * (input @ self.random_A @ self.lora_B @ self.random_C)
        y = self.dropout(y)
        return x + y
```
Full Repo: https://github.com/itanvir/tlora
### Motivation
1. **Extreme Parameter Efficiency:** The core trainable component in TLoRA is the matrix $B$ with dimensions `rank x rank`. Compared to standard LoRA's trainable matrices $A$ (`input_dim x rank`) and $B$ (`rank x output_dim`), TLoRA's trainable parameters are significantly fewer. This makes TLoRA potentially one of the most parameter-efficient methods in PEFT for a given rank.
2. **Competitive Performance:** The fixed matrices $A$ and $C$ can be seen as defining fixed subspaces. By training only the matrix $B$ connecting these subspaces, TLoRA might capture more focused and effective updates compared to training the full $A$ and $B$ matrices in standard LoRA. Our paper provides empirical evidence supporting its effectiveness.
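The parameter-count argument is easy to make concrete. Below is a back-of-the-envelope comparison for a single adapted layer, following the matrix shapes described above (the helper names are ours, for illustration only):

```python
def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    # Standard LoRA trains A (d_in x rank) and B (rank x d_out).
    return d_in * rank + rank * d_out

def tlora_trainable_params(rank: int) -> int:
    # TLoRA trains only B (rank x rank) plus the scalar alpha;
    # A and C are fixed, so they add no trainable parameters.
    return rank * rank + 1

# Example: a 4096x4096 projection adapted at rank 32.
print(lora_trainable_params(4096, 4096, 32))  # 262144
print(tlora_trainable_params(32))             # 1025
```

At equal rank the trainable footprint no longer scales with the layer width, which is the source of the efficiency claim.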
### Your contribution
I can give input on the design. It should be straightforward. | https://github.com/huggingface/peft/issues/2533 | closed | [] | 2025-05-06T21:22:50Z | 2025-06-15T15:03:57Z | 2 | itanvir |
huggingface/candle | 2,945 | Operating steps from scratch for beginners? | From A to Z. | https://github.com/huggingface/candle/issues/2945 | open | [] | 2025-05-06T15:34:02Z | 2025-05-06T15:34:02Z | 0 | Qarqor5555555 |
pytorch/torchtitan | 1,169 | how to inference with pretrained model? | Hi, after pretraining/SFT with torchtitan, how do I run inference with the checkpoint? Does the repo provide inference code? Thank you. | https://github.com/pytorch/torchtitan/issues/1169 | closed | [] | 2025-05-06T10:28:50Z | 2025-08-21T03:18:05Z | null | dragen1860 |
pytorch/torchtitan | 1,168 | How to use fsdp2 cpu_offload? | I am currently using `CPUOffloadPolicy` in the following way:
```py
transformer_cls_to_wrap = list()
for layer_class in transformer_cls_names_to_wrap:
    transformer_cls = get_module_class_from_name(model_to_wrap, layer_class)
    if transformer_cls is not None:
        transformer_cls_to_wrap.append(transformer_cls)
if len(transformer_cls_to_wrap) == 0:
    raise NotImplementedError("len(transformer_cls_to_wrap) == 0, please check the wrapping rules!")

mp_policy = MixedPrecisionPolicy(
    param_dtype=torch.bfloat16,
    reduce_dtype=torch.float32,
)
fsdp_kwargs = {
    "reshard_after_forward": True,
    "mp_policy": mp_policy,
    "offload_policy": CPUOffloadPolicy() if self.args.adam_offload else OffloadPolicy(),
}
for cls_to_wrap in transformer_cls_to_wrap:
    for module in model_to_wrap.modules():
        if isinstance(module, cls_to_wrap):
            fully_shard(module, **fsdp_kwargs)
for name, module in model_to_wrap.named_modules():
    if 'lm_head' in name:
        fully_shard(module, **fsdp_kwargs)
fully_shard(model_to_wrap, **fsdp_kwargs)

# cast model into fp32 to create optimizer with fp32 states
# https://github.com/pytorch/torchtitan/issues/1133#issuecomment-2824429682
model_to_wrap = model_to_wrap.to(torch.float32)
if is_meta_initialized(model_to_wrap):
    model_to_wrap.to_empty(device='cuda')
return model_to_wrap
```
The model is created from a Hugging Face pretrained model, but I get the following error when calling `clip_grad_norm_`:
```
grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=self.grad_clip)
[rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/utils/clip_grad.py", line 30, in _no_grad_wrapper
[rank4]: return func(*args, **kwargs)
[rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/utils/clip_grad.py", line 105, in clip_grad_norm_
[rank4]: clip_coef = max_norm / (total_norm + 1e-6)
[rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/_tensor.py", line 39, in wrapped
[rank4]: return f(*args, **kwargs)
[rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/_tensor.py", line 1032, in __rdiv__
[rank4]: return self.reciprocal() * other
[rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/_compile.py", line 32, in inner
[rank4]: return disable_fn(*args, **kwargs)
[rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
[rank4]: return fn(*args, **kwargs)
[rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/tensor/_api.py", line 340, in __torch_dispatch__
[rank4]: return DTensor._op_dispatcher.dispatch(
[rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/tensor/_dispatch.py", line 181, in dispatch
[rank4]: self.redistribute_local_args(
[rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/tensor/_dispatch.py", line 317, in redistribute_local_args
[rank4]: resharded_local_tensor = redistribute_local_tensor(
[rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/tensor/_redistribute.py", line 195, in redistribute_local_tensor
[rank4]: new_local_tensor = partial_spec._reduce_value(
[rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/tensor/_ops/_math_ops.py", line 126, in _reduce_value
[rank4]: reduced_tensor = super()._reduce_value(tensor, mesh, mesh_dim)
[rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/tensor/placement_types.py", line 599, in _reduce_value
[rank4]: return funcol.all_reduce(
[rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/distributed/_functional_collectives.py", line 175, in all_reduce
[rank4]: tensor = torch.ops._c10d_functional.all_reduce(self, reduceOp.lower(), group_name)
[rank4]: File "/root/miniconda3/lib/python3.10/site-packages/torch/_ops.py", line 1116, in __call__
[rank4]: return self._op(*args, **(kwargs or {}))
[rank4]: RuntimeError: No backend type associated with device type cpu
```
Is there anything wrong with how I initialize the model's device? | https://github.com/pytorch/torchtitan/issues/1168 | closed | [
"module: fsdp"
] | 2025-05-06T07:44:48Z | 2025-05-12T03:29:30Z | null | KimmiShi |
huggingface/lerobot | 1,072 | How to merge collected data into one? | For stability, I collect data 10 episodes at a time, which results in repos like:
repo_id/first, repo_id_second...
I want to merge them together into repo_id/one_task for training, but it's hard to fix the meta files.
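The bookkeeping such a merge needs is mostly renumbering. A minimal sketch of the idea — the file layout and meta format here are hypothetical placeholders, not LeRobot's actual schema:

```python
import json
from pathlib import Path

def merge_episode_meta(source_dirs, out_dir):
    """Concatenate per-source episode lists, renumbering episode_index
    so the merged dataset counts 0..N-1 (hypothetical meta layout)."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    merged = []
    for src in source_dirs:
        episodes = json.loads(Path(src, "meta.json").read_text())
        for ep in episodes:
            # Reassign a globally unique index in merge order.
            merged.append(dict(ep, episode_index=len(merged)))
    Path(out_dir, "meta.json").write_text(json.dumps(merged))
    return merged
```

A real dataset also stores per-episode data files and aggregate statistics, which would need the same renumbering treatment.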
I'm not sure if this approach helps with training, or if I should determine the number of episodes needed for training in advance when collecting data. | https://github.com/huggingface/lerobot/issues/1072 | closed | [
"question",
"dataset"
] | 2025-05-06T02:27:24Z | 2025-05-07T02:29:27Z | null | milong26 |
pytorch/xla | 9,095 | Support Dynamic Grid in Pallas Kernel | ## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
Support dynamic grid feature of pallas kernel through PyTorch/XLA wrapper. Below is an example of dynamic grid in jax.
```
import functools
import time
import jax
from jax._src.pallas.pallas_call import _trace_kernel_to_jaxpr
import jax.numpy as jnp
from jax.experimental import pallas as pl
from jax import export
import numpy as np
def matmul_kernel(x_ref, y_ref, o_ref):
    block_m, block_l = x_ref.shape
    block_l2, block_n = y_ref.shape
    assert block_l2 == block_l
    assert o_ref.shape == (block_m, block_n)

    @pl.when(pl.program_id(axis=2) == 0)
    def _():
        o_ref[...] = jnp.zeros_like(o_ref)

    o_ref[...] += jnp.dot(x_ref[...], y_ref[...])


@functools.partial(jax.jit, static_argnames=['block_shape'])
def matmul(
    x: jax.Array,
    y: jax.Array,
    m: int,
    n: int,
    l: int,
    *,
    block_shape=(128, 128, 128)
):
    block_m, block_n, block_l = block_shape
    grid = (m, n, l)
    fused_matmul = pl.pallas_call(
        functools.partial(matmul_kernel),
        out_shape=jax.ShapeDtypeStruct((x.shape[0], y.shape[1]), jnp.float32),
        in_specs=[
            pl.BlockSpec((block_m, block_l), lambda i, j, k: (i, k)),
            pl.BlockSpec((block_l, block_n), lambda i, j, k: (k, j)),
        ],
        out_specs=pl.BlockSpec((block_m, block_n), lambda i, j, k: (i, j)),
        grid=grid,
        debug=False,
        # interpret=jtu.test_device_matches(["cpu"]),
    )
    return fused_matmul(x, y)


x_shape = (8192, 8192)
y_shape = (8192, 8192)
n = l = 64
for m in range(4, 65, 4):
    key = jax.random.key(m)
    key1, key2 = jax.random.split(key, 2)
    x = jax.random.normal(key1, x_shape, dtype=np.float32).block_until_ready()
    y = jax.random.normal(key2, y_shape, dtype=np.float32).block_until_ready()

    start_time = time.time()
    res = matmul(x, y, m, n, l).block_until_ready()
    end_time = time.time()
    print("[1st run] m: ", m, " time: ", f"{(end_time - start_time) * 1000:.3f}ms", flush=True)

    native = (x @ y)[:m * 128]
    assert jax.numpy.allclose(native, res[:m * 128])

    key = jax.random.key(m + 1000)
    key1, key2 = jax.random.split(key, 2)
    x = jax.random.normal(key1, x_shape, dtype=np.float32).block_until_ready()
    y = jax.random.normal(key2, y_shape, dtype=np.float32).block_until_ready()

    start_time = time.time()
    res = matmul(x, y, m, n, l).block_until_ready()
    end_time = time.time()
    print("[2nd run] m: ", m, " time: ", f"{(end_time - start_time) * 1000:.3f}ms", flush=True)
```
| https://github.com/pytorch/xla/issues/9095 | open | [
"enhancement",
"pallas"
] | 2025-05-05T22:28:33Z | 2025-05-06T12:24:45Z | 0 | yaochengji |
huggingface/diffusers | 11,499 | [Performance] Issue on *SanaLinearAttnProcessor2_0 family. 1.06X speedup can be reached with a simple change. | ### Sys env:
OS Ubuntu 22.04
PyTorch 2.4.0+cu121
sana == 0.0.1
Diffusers == 0.34.0.dev0
### Reproduce:
Try the demo test code:
```
import torch
from diffusers import SanaPAGPipeline

pipe = SanaPAGPipeline.from_pretrained(
    # "Efficient-Large-Model/Sana_1600M_512px_diffusers",
    "Efficient-Large-Model/SANA1.5_1.6B_1024px_diffusers",
    torch_dtype=torch.bfloat16,
    pag_applied_layers="transformer_blocks.8",
)
pipe.to("cuda")
pipe.text_encoder.to(torch.bfloat16)
pipe.vae.to(torch.bfloat16)

prompt = 'a cyberpunk cat with a neon sign that says "Sana"'
image = pipe(
    prompt=prompt,
    guidance_scale=5.0,
    pag_scale=2.0,
    num_inference_steps=20,
    generator=torch.Generator(device="cuda").manual_seed(42),
)[0]
image[0].save('sana.png')
```
Inference data will go through [SanaLinearAttnProcessor2_0](https://github.com/huggingface/diffusers/blob/58431f102cf39c3c8a569f32d71b2ea8caa461e1/src/diffusers/models/attention_processor.py#L6007)
### Issue Description:
Lines 6042 and 6043 first transpose a contiguous tensor and then perform type casting. Type casting triggers a data copy from the old-dtype tensor into a new one. But if you check the new tensor's contiguity, you will see:
```
hidden_states = hidden_states.flatten(1, 2).transpose(1, 2)
hidden_states = hidden_states.to(original_dtype)
print("Contiguity after type casting: ", hidden_states.is_contiguous()) # False
hidden_states = attn.to_out[0](hidden_states)
hidden_states = attn.to_out[1](hidden_states)
```
The problem is that the type-casting copy only changes the dtype while keeping the input tensor's strides. The badly-strided tensor is then immediately consumed by the next two functions, so the inefficiency propagates.
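Why the cast copy keeps the bad strides can be illustrated with a pure-Python stride model (no torch needed): `Tensor.to(dtype)` defaults to `memory_format=torch.preserve_format`, so the new tensor inherits the permuted strides of the transposed view instead of a row-major layout.

```python
def contiguous_strides(shape):
    # Row-major (C-contiguous) strides, in elements.
    strides, step = [], 1
    for dim in reversed(shape):
        strides.insert(0, step)
        step *= dim
    return tuple(strides)

def is_contiguous(shape, strides):
    return strides == contiguous_strides(shape)

# A (B, N, C) activation: transpose(1, 2) permutes shape AND strides.
shape = (2, 4096, 1152)
strides = contiguous_strides(shape)
t_shape = (shape[0], shape[2], shape[1])
t_strides = (strides[0], strides[2], strides[1])

print(is_contiguous(shape, strides))      # True
print(is_contiguous(t_shape, t_strides))  # False: layout kept by .to(dtype)
```

The cast therefore pays for a full copy without producing the contiguous layout the following linear layers want.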
### How to Fix:
Let `hidden_states.to(original_dtype)` make the tensor contiguous and cast the dtype in a single copy.
One possible approach:
```
@torch.compile
def transpose_cast_kernel(input_tensor: torch.Tensor) -> torch.Tensor:
"""
torch-compiled kernel that transposes a 2D tensor and converts it to bfloat16
"""
converted = input_tensor.to(torch.bfloat16)
transposed = torch.transpose(converted, 1, 2).contiguous()
return transposed
```
Use the versatile operation to handle the creation of the new tensor.
```
hidden_states = hidden_states.flatten(1, 2).transpose(1, 2)
hidden_states = transpose_cast_kernel(hidden_states)
# hidden_states.is_contiguous() True
hidden_states = attn.to_out[0](hidden_states)
hidden_states = attn.to_out[1](hidden_states)
```
Or, your expert team could do even better.
### Measurement:
By adopting the previous change, **SanaLinearAttnProcessor2_0.__call__** enjoys a 1.06X speedup on an RTX 3090.
PAGCFGSanaLinearAttnProcessor2_0 and PAGIdentitySanaLinearAttnProcessor2_0 have similar logic and lose performance as well.
| https://github.com/huggingface/diffusers/issues/11499 | closed | [] | 2025-05-05T21:26:51Z | 2025-08-08T23:44:59Z | 11 | David-Dingle |
huggingface/candle | 2,944 | finetuning yolo 8 candle model | What is the correct way to finetune yolo8 model to be used here ? Finetuning model using candle is not straightforward.
candle\candle-examples\examples\yolo-v8\main.rs
// model model architecture points at ultralytics : https://github.com/ultralytics/ultralytics/issues/189
But my model trained using ultralytics and converted to safetensors yield tensor errors when used in candle ylo 8 example. Renaming the tensors to match the candle yolo model did not work.
I see DarkNet struct in the model.rs so I wonder if one should rather use [Darknet](https://github.com/hank-ai/darknet) instead (@LaurentMazare ) ?
| https://github.com/huggingface/candle/issues/2944 | open | [] | 2025-05-05T15:21:48Z | 2025-05-05T18:46:52Z | 0 | flutter-painter |
pytorch/rl | 2,939 | PPO with composite distribution crashes before giving the warning on how to fix it. | This block
https://github.com/pytorch/rl/blob/795e362cb82b3539faa30db771e5b2f1d50f8c8a/torchrl/objectives/ppo.py#L601-L602
causes
```AttributeError: 'Tensor' object has no attribute 'batch_size'```
before the warning on how to fix it is shown.
https://github.com/pytorch/rl/blob/795e362cb82b3539faa30db771e5b2f1d50f8c8a/torchrl/objectives/ppo.py#L603-L614
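A minimal sketch of the intended ordering — the names and the message below are illustrative, not torchrl's actual code:

```python
def read_batch_size(log_prob):
    # Emit the actionable diagnostic BEFORE touching attributes that only
    # exist on tensordict-like log-probs; a plain Tensor would otherwise
    # raise an opaque AttributeError first.
    if not hasattr(log_prob, "batch_size"):
        raise TypeError(
            "Composite distribution detected, but the log-prob is a plain "
            "tensor. See the warning in ppo.py for how to configure "
            "log-prob aggregation."
        )
    return log_prob.batch_size

class TensorDictLike:
    batch_size = (4,)

print(read_batch_size(TensorDictLike()))  # (4,)
```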
The order of the two blocks needs to be swapped. | https://github.com/pytorch/rl/issues/2939 | closed | [] | 2025-05-04T23:31:53Z | 2025-05-20T10:09:02Z | null | siegelaaron94 |
huggingface/diffusers | 11,489 | Error when I'm trying to train a Flux lora with train_dreambooth_lora_flux_advanced | ### Describe the bug
Hi! I'm trying to train my LoRA model with the [train_dreambooth_lora_flux_advanced](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py) script.
When I train my model with the prior-preservation flag, I get an error.
How can I fix it?
### Reproduction
```bash
accelerate launch train_dreambooth_lora_flux_advanced.py \
--pretrained_model_name_or_path="black-forest-labs/FLUX.1-dev" \
--dataset_name="./ds5" \
--instance_prompt="1boy, 1girl" \
--validation_prompt="1boy, 1girl" \
--class_prompt="1boy, 1girl" \
--num_class_images=200 \
--with_prior_preservation \
--class_data_dir="./cdi" \
--output_dir="crtr-SDXL-LoRA" \
--caption_column="text" \
--mixed_precision="bf16" \
--prior_generation_precision="bf16" \
--resolution=1024 \
--train_batch_size=8 \
--repeats=1 \
--gradient_accumulation_steps=8 \
--gradient_checkpointing \
--learning_rate=1.0 \
--optimizer="prodigy"\
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--rank=64 \
--num_train_epochs=200 \
--validation_epochs=100 \
--center_crop \
--adam_beta2=0.99 \
--adam_weight_decay=0.01 \
--allow_tf32
```
### Logs
```shell
Traceback (most recent call last):
File "/workspace/train_dreambooth_lora_flux_advanced.py", line 2423, in <module>
main(args)
File "/workspace/train_dreambooth_lora_flux_advanced.py", line 2213, in main
(weighting.float() * (model_pred_prior.float() - target_prior.float()) ** 2).reshape(
~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
RuntimeError: The size of tensor a (16) must match the size of tensor b (8) at non-singleton dimension 0
```
### System Info
Diffusers 0.33
CUDA 12.9
Torch 2.7
Docker image
nvcr.io/nvidia/pytorch:25.04-py3
### Who can help?
@sayakpaul | https://github.com/huggingface/diffusers/issues/11489 | open | [
"bug",
"training"
] | 2025-05-04T21:19:23Z | 2025-07-06T19:38:40Z | 4 | Mnwa |
huggingface/diffusers | 11,488 | Sincerely Request The Support for Flux PAG Pipeline | When can the PAG pipeline for Flux be supported? | https://github.com/huggingface/diffusers/issues/11488 | open | [
"help wanted",
"Good second issue"
] | 2025-05-04T11:12:05Z | 2025-05-16T04:53:52Z | 2 | PlutoQyl |
huggingface/text-generation-inference | 3,208 | Can I use TGI in a Supercomputer? | I want to generate somewhere around 1 trillion tokens and I was thinking of using TGI on a European Supercomputer. is there a way to achieve this without relying on docker and downloading the model natively and then load it on the compute node and serve it? @Wauplin | https://github.com/huggingface/text-generation-inference/issues/3208 | open | [] | 2025-05-03T15:13:24Z | 2025-05-15T08:55:08Z | 4 | sleepingcat4 |
pytorch/xla | 9,082 | Educate users on mat mul precision | mat mul precision will be exposed idiomatically to Pytorch in #9081. | https://github.com/pytorch/xla/issues/9082 | closed | [
"documentation"
] | 2025-05-02T20:03:54Z | 2025-05-21T20:34:32Z | 0 | yaoshiang |
huggingface/transformers.js | 1,305 | Trying to convert dinov2 model | ### Question
I tried to convert [this model](https://huggingface.co/nguyenkhoa/dinov2_Liveness_detection_v2.2.3) using the following command:
`python -m scripts.convert --model_id nguyenkhoa/dinov2_Liveness_detection_v2.2.3 --quantize --task image-classification`
but got the following error:
``ValueError: Trying to export a dinov2 model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type dinov2 to be supported natively in the ONNX export.``
I looked a bit into the `custom_onnx_configs` flag and found [this conversion example](https://github.com/huggingface/transformers.js/issues/906#issuecomment-2315290257). My question is: what should I pass to `custom_onnx_configs` for the conversion to work? I could pass `gpt2` as used in the example, but I'm wondering what the correct `custom_onnx_configs` input for dinov2 models is.
Thank you! | https://github.com/huggingface/transformers.js/issues/1305 | closed | [
"question"
] | 2025-05-01T19:56:28Z | 2025-05-05T22:18:48Z | null | jdp8 |
pytorch/executorch | 10,593 | Advice on how to run the training example in Android | Hello Team,
We have followed https://pytorch.org/executorch/main/using-executorch-android.html#building-from-source to build the "aar" file.
We can run the inference example on Android.
We are wondering how to run the training example on Android.
Are there some flags / some config we need to add to the building procedure (https://github.com/pytorch/executorch/blob/main/scripts/build_android_library.sh)?
Thanks!
cc @kirklandsign @cbilgin @JacobSzwejbka | https://github.com/pytorch/executorch/issues/10593 | open | [
"module: android",
"module: training"
] | 2025-04-30T19:51:03Z | 2025-07-15T22:59:28Z | null | YuanTingHsieh |
huggingface/datasets | 7,545 | Networked Pull Through Cache | ### Feature request
Introduce a HF_DATASET_CACHE_NETWORK_LOCATION configuration (e.g. an environment variable) together with a companion network cache service.
Enable a three-tier cache lookup for datasets:
1. Local on-disk cache
2. Configurable network cache proxy
3. Official Hugging Face Hub
### Motivation
- Distributed training & ephemeral jobs: In high-performance or containerized clusters, relying solely on a local disk cache either becomes a streaming bottleneck or incurs a heavy cold-start penalty as each job must re-download datasets.
- Traffic & cost reduction: A pull-through network cache lets multiple consumers share a common cache layer, reducing duplicate downloads from the Hub and lowering egress costs.
- Better streaming adoption: By offloading repeat dataset pulls to a locally managed cache proxy, streaming workloads can achieve higher throughput and more predictable latency.
- Proven pattern: Similar proxy-cache solutions (e.g. Harbor’s Proxy Cache for Docker images) have demonstrated reliability and performance at scale: https://goharbor.io/docs/2.1.0/administration/configure-proxy-cache/
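The three-tier lookup above could be sketched as a small resolver — the function names, the fetcher interface, and the populate-local-on-hit policy are illustrative assumptions, not a proposed API:

```python
from pathlib import Path

def resolve_dataset(name, local_dir, fetch_network_cache, fetch_hub):
    """Return (data, tier). Fetchers take a name and return bytes or None."""
    local = Path(local_dir) / name
    if local.exists():                                  # tier 1: local disk
        return local.read_bytes(), "local"
    for fetch, tier in ((fetch_network_cache, "network-cache"),
                        (fetch_hub, "hub")):            # tiers 2 and 3
        data = fetch(name)
        if data is not None:
            local.parent.mkdir(parents=True, exist_ok=True)
            local.write_bytes(data)                     # warm the local cache
            return data, tier
    raise FileNotFoundError(name)
```

A real implementation would also need integrity checks and cache-invalidation semantics, which is where most of the design work for the companion service would live.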
### Your contribution
I’m happy to draft the initial PR for adding HF_DATASET_CACHE_NETWORK_LOCATION support in datasets and sketch out a minimal cache-service prototype.
I have limited bandwidth so I would be looking for collaborators if anyone else is interested. | https://github.com/huggingface/datasets/issues/7545 | open | [
"enhancement"
] | 2025-04-30T15:16:33Z | 2025-04-30T15:16:33Z | 0 | wrmedford |
huggingface/transformers | 37,895 | How to backpropagate the gradients of the embeddings output by the image processor to the input image tensor? | ### Feature request
I'm using the processor of Qwen2.5-VL, and the image processor within it should be Qwen2ImageProcessor. The input image I provide is a PyTorch tensor with gradients, and the processor outputs the feature embeddings of the image. How can I ensure that the gradient flow is not interrupted during this process?
### Motivation
I want to backpropagate the gradients of the embeddings output by the Qwen2 image processor to the input image tensor
### Your contribution
I can cooperate to fix this issue. | https://github.com/huggingface/transformers/issues/37895 | open | [
"Feature request"
] | 2025-04-30T15:06:40Z | 2025-05-01T13:36:24Z | null | weiminbai |
pytorch/xla | 9,063 | Add explanation of Clang usage after Hermetic CUDA. | ## 📚 Documentation
Follow up from: #8665 and #9053
After #8665 is merged, we should add an explanation on the default usage of Clang due to the adoption of Hermetic CUDA. This is somewhat related to #9061. | https://github.com/pytorch/xla/issues/9063 | open | [
"documentation"
] | 2025-04-30T12:17:26Z | 2025-04-30T12:18:12Z | 0 | ysiraichi |
huggingface/diffusers | 11,466 | Finetuning of flux or scratch training | I am new to this field and wanted to know whether any code is available for training Flux from scratch, or even for fine-tuning the existing model. All I see is DreamBooth or LoRA fine-tuning. | https://github.com/huggingface/diffusers/issues/11466 | open | [] | 2025-04-30T07:45:49Z | 2025-05-30T16:32:33Z | 2 | preethamp0197 |
pytorch/executorch | 10,571 | where is pytorch_tokenizers.tools.llama2c.convert? | ### 🐛 Describe the bug
I cannot find pytorch_tokenizers.tools.llama2c.convert when running the command "python -m pytorch_tokenizers.tools.llama2c.convert -t ../tokenizer.model -o ../tokenizer.bin" from the docs. The env
I use is built with "pip install executorch".
### Versions
Collecting environment information...
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-58-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 565.57.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900KF
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5800.0000
CPU min MHz: 800.0000
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] executorch==0.6.0
[pip3] numpy==2.2.5
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnxruntime==1.21.0
[pip3] optree==0.15.0
[pip3] torch==2.7.0
[pip3] torchao==0.10.0
[pip3] torchaudio==2.7.0
[pip3] torchvision==0.22.0
[pip3] triton==3.3.0
[conda] executorch 0.6.0 pypi_0 pypi
[conda] numpy 2.2.5 pypi_0 pypi
[conda] nvidi | https://github.com/pytorch/executorch/issues/10571 | closed | [
"module: llm"
] | 2025-04-30T03:15:59Z | 2025-05-08T06:20:26Z | null | hayyaw |
pytorch/xla | 9,056 | Fix the contribution instructions for creating PRs | ## 📚 Documentation
https://github.com/pytorch/xla/blob/master/CONTRIBUTING.md suggests cloning the original PyTorch/XLA repo directly. However, doing so makes it impossible to create PRs later unless the user has write permission to the repo. Instead, it should ask users to fork the repo first and then work against their fork. This allows creating PRs without having write access to the original repo.
While at this, we can also clarify the steps for creating PRs. | https://github.com/pytorch/xla/issues/9056 | closed | [
"documentation"
] | 2025-04-29T18:23:43Z | 2025-05-07T13:37:33Z | 0 | zhanyong-wan |
huggingface/hf-hub | 104 | What is this software licensed under? | Would this also be Apache 2 like in https://github.com/huggingface/huggingface_hub/?
Thanks! | https://github.com/huggingface/hf-hub/issues/104 | closed | [] | 2025-04-29T16:27:10Z | 2025-06-16T09:09:43Z | null | nathankw |
pytorch/vision | 9,042 | Make the C++ backend of the torchvision wheel usable for C++ development | ### 🚀 The feature
Currently, the torchvision wheel packages the C++ DSO as `_C.so` for python bindings.
We'd like the python wheel to have the C++ backend be standalone, so it can be extracted/used by C++ applications, like is done today for the PyTorch wheels.
This means:
- export DSO as `libtorchvision.so` instead of `_C.so`
- do not hardlink `libtorchvision.so` against `libtorch_python.so`.
- _maybe `_C.so` is kept for symbols that require `libtorch_python.so` ?_
- export cpp headers
- export CMake configs
### Motivation, pitch
C++ developers can currently use the distributed PyTorch wheels to develop C++ native applications against libtorch, as libraries, headers, and cmake configs are available in the wheels.
C++ developers who also need to use torchvision cannot leverage the standard `vision` wheel the same way, even though all C++ symbols are available in `_C.so`. Instead, they must build libtorchvision C++ from source, which is more cumbersome and requires extra dev packages to be installed, especially for CUDA support.
### Additional context
<details>
<summary> see ld links for torchvision 0.22.0+cu128 (wheel) </summary>
```sh
libc.so.6
libc10.so
libc10_cuda.so
libcudart.so.12
libdl.so.2
libgcc_s.so.1
libm.so.6
libpthread.so.0
librt.so.1
libstdc++.so.6
libtorch.so
libtorch_cpu.so
libtorch_cuda.so
libtorch_python.so # requires python
linux-vdso.so.1
```
</details>
<details>
<summary> see ld links for c++ source build of torchvision </summary>
> no link against `libtorch_python.so`
```sh
libc.so.6
libc10.so
libc10_cuda.so
libcudart.so.12
libdl.so.2
libgcc_s.so.1
libm.so.6
libpthread.so.0
librt.so.1
libstdc++.so.6
libtorch.so
libtorch_cpu.so
libtorch_cuda.so
linux-vdso.so.1
```
</details>
<details>
<summary> example of a cpp torchvision installation with files needed for C++ development </summary>
> The install tree below can be imported for building with CMake with:
```
cmake ... -D TorchVision_ROOT="$torch_vision_install_dir" # Or add to CMAKE_PREFIX_PATH
```
```cmake
find_package(TorchVision)
```
```tree
├── include
│ └── torchvision
│ ├── io
│ │ └── image
│ │ ├── cpu
│ │ │ ├── common_jpeg.cpp
│ │ │ ├── common_jpeg.h
│ │ │ ├── common_png.h
│ │ │ ├── decode_gif.cpp
│ │ │ ├── decode_gif.h
│ │ │ ├── decode_image.cpp
│ │ │ ├── decode_image.h
│ │ │ ├── decode_jpeg.cpp
│ │ │ ├── decode_jpeg.h
│ │ │ ├── decode_png.cpp
│ │ │ ├── decode_png.h
│ │ │ ├── encode_jpeg.cpp
│ │ │ ├── encode_jpeg.h
│ │ │ ├── encode_png.cpp
│ │ │ ├── encode_png.h
│ │ │ ├── exif.h
│ │ │ ├── giflib
│ │ │ │ ├── dgif_lib.c
│ │ │ │ ├── gif_hash.c
│ │ │ │ ├── gif_hash.h
│ │ │ │ ├── gif_lib.h
│ │ │ │ ├── gif_lib_private.h
│ │ │ │ ├── gifalloc.c
│ │ │ │ └── openbsd-reallocarray.c
│ │ │ ├── read_write_file.cpp
│ │ │ └── read_write_file.h
│ │ ├── cuda
│ │ │ ├── decode_jpeg_cuda.cpp
│ │ │ ├── encode_decode_jpegs_cuda.h
│ │ │ ├── encode_jpegs_cuda.cpp
│ │ │ └── encode_jpegs_cuda.h
│ │ ├── image.cpp
│ │ ├── image.h
│ │ └── image_read_mode.h
│ ├── macros.h
│ ├── ops
│ │ ├── autocast
│ │ │ ├── deform_conv2d_kernel.cpp
│ │ │ ├── nms_kernel.cpp
│ │ │ ├── ps_roi_align_kernel.cpp
│ │ │ ├── ps_roi_pool_kernel.cpp
│ │ │ ├── roi_align_kernel.cpp
│ │ │ └── roi_pool_kernel.cpp
│ │ ├── autograd
│ │ │ ├── deform_conv2d_kernel.cpp
│ │ │ ├── ps_roi_align_kernel.cpp
│ │ │ ├── ps_roi_pool_kernel.cpp
│ │ │ ├── roi_align_kernel.cpp
│ │ │ └── roi_pool_kernel.cpp
│ │ ├── cpu
│ │ │ ├── deform_conv2d_kernel.cpp
│ │ │ ├── nms_kernel.cpp
│ │ │ ├── ps_roi_align_kernel.cpp
│ │ │ ├── ps_roi_pool_kernel.cpp
│ │ │ ├── roi_align_common.h
│ │ │ ├── roi_align_kernel.cpp
│ │ │ └── roi_pool_kernel.cpp
│ │ ├── cuda
│ │ │ ├── cuda_helpers.h
│ │ │ ├── deform_conv2d_kernel.cu
│ │ │ ├── nms_kernel.cu
│ │ │ ├── ps_roi_align_kernel.cu
│ │ │ ├── ps_roi_pool_kernel.cu
│ │ │ ├── roi_align_kernel.cu
│ │ │ └── roi_pool_kernel.cu
│ │ ├── deform_conv2d.cpp
│ │ ├── deform_conv2d.h
│ │ ├── nms.cpp
│ │ ├── nms.h
│ │ ├── ops.h
│ │ ├── ps_roi_align.cpp
│ │ ├── ps_roi_align.h
│ │ ├── ps_roi_pool.cpp
│ │ ├── ps_roi_pool.h
│ │ ├── roi_align.cpp
│ │ ├── roi_align.h
│ │ ├── roi_pool.cpp
│ │ └ | https://github.com/pytorch/vision/issues/9042 | open | [] | 2025-04-29T15:04:25Z | 2025-05-19T23:58:53Z | 5 | agirault |
huggingface/optimum | 2,248 | Export cli export RT-Detr | ```python
Traceback (most recent call last):
File "/usr/local/bin/optimum-cli", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.11/dist-packages/optimum/commands/optimum_cli.py", line 208, in main
service.run()
File "/usr/local/lib/python3.11/dist-packages/optimum/commands/export/onnx.py", line 265, in run
main_export(
File "/usr/local/lib/python3.11/dist-packages/optimum/exporters/onnx/__main__.py", line 375, in main_export
onnx_export_from_model(
File "/usr/local/lib/python3.11/dist-packages/optimum/exporters/onnx/convert.py", line 1033, in onnx_export_from_model
raise ValueError(
ValueError: Trying to export a rt-detr model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type rt-detr to be supported natively in the ONNX export.
```
When I try to export my fine-tuned RT-DETR model, it always fails with the above error.
Even the command line `optimum-cli export onnx -m PekingU/rtdetr_r18vd --task object-detection test_onnx` shows the same error, so it should not be an issue specific to my fine-tuned model.
I would like to know how to export a fine-tuned model; any hints would be helpful. Thanks!
| https://github.com/huggingface/optimum/issues/2248 | closed | [] | 2025-04-29T08:23:17Z | 2025-05-05T08:03:21Z | 1 | TheMattBin |
huggingface/open-muse | 144 | how to set the minimum learning rate for cosine lr_scheduler? | @dataclass
class TrainingArguments(transformers.TrainingArguments):
    gradient_checkpointing_kwargs = {"use_reentrant": False}
    lr_scheduler_kwargs = {
        "eta_min": 1e-6,
        "num_cycles": 1,
    }
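For what it's worth, the shape of the schedule being asked for can be sketched in plain Python. `cosine_lr` below is a hand-rolled helper, not a transformers API; if I recall correctly, recent transformers versions also expose a `cosine_with_min_lr` value for `lr_scheduler_type` whose `lr_scheduler_kwargs` take a `min_lr`, which is worth verifying against the installed version:

```python
import math

def cosine_lr(step: int, total_steps: int, base_lr: float, min_lr: float) -> float:
    """Half-cosine decay: base_lr at step 0, min_lr at total_steps and beyond."""
    progress = min(step / max(1, total_steps), 1.0)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))  # goes 1.0 -> 0.0
    return min_lr + (base_lr - min_lr) * cosine

schedule = [cosine_lr(s, 100, base_lr=1e-4, min_lr=1e-6) for s in range(101)]
print(schedule[0], schedule[50], schedule[100])  # 1e-4, midpoint, 1e-6
```

A function like this can be handed to a `LambdaLR`-style scheduler if the built-in option turns out not to apply.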
It did not work. How do I set the minimum learning rate in transformers 4.51.3? | https://github.com/huggingface/open-muse/issues/144 | closed | [] | 2025-04-29T02:18:59Z | 2025-04-29T02:20:42Z | null | xubuvd |
pytorch/torchchat | 1,536 | Improve Tokenizer New Type Onboarding | ### 🚀 The feature, motivation and pitch
---
As a sequel to https://github.com/pytorch/torchchat/issues/1518 where we added an enum for tokenizer types to simplify `TokenizerArgs __post_init__`, we need to further improve it to simplify new tokenizer type onboarding:
### Tasks
---
- Move TokenizerType to a centralized place
- We now have two of them: https://github.com/pytorch/torchchat/blob/0299a37a342348803763e37e9f4823c5bcb12c92/dist_run.py#L67-L69 https://github.com/pytorch/torchchat/blob/0299a37a342348803763e37e9f4823c5bcb12c92/torchchat/cli/builder.py#L241-L245
- Check all getters of tokenizer types
- It may be able to be simplified as inline https://github.com/pytorch/torchchat/blob/0299a37a342348803763e37e9f4823c5bcb12c92/torchchat/generate.py#L368
- Add documentation for future tokenizer onboarding.
- We may need to point people to update the model validation logic: https://github.com/pytorch/torchchat/blob/0299a37a342348803763e37e9f4823c5bcb12c92/torchchat/cli/builder.py#L290-L322
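For the first task, a single shared enum might look like the sketch below (member and helper names are my assumptions based on the tokenizer families torchchat handles, not its actual API):

```python
from enum import Enum, auto

class TokenizerType(Enum):
    """One centralized definition, importable by dist_run.py and builder.py."""
    NONE = auto()
    TIKTOKEN = auto()
    SENTENCEPIECE = auto()
    HF_TOKENIZER = auto()

    def is_tiktoken(self) -> bool:
        return self is TokenizerType.TIKTOKEN

    def is_sentencepiece(self) -> bool:
        return self is TokenizerType.SENTENCEPIECE

    def is_hf_tokenizer(self) -> bool:
        return self is TokenizerType.HF_TOKENIZER

# Callers would import this one enum instead of redefining their own copies.
print(TokenizerType.TIKTOKEN.is_tiktoken())  # True
```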
---
To test, run a model with each tokenizer type:
- python torchchat.py generate llama2
- python torchchat.py generate llama3
- python torchchat.py generate granite-code
cc @Jack-Khuu @byjlw | https://github.com/pytorch/torchchat/issues/1536 | open | [
"good first issue",
"actionable",
"triaged"
] | 2025-04-28T18:31:33Z | 2025-05-13T17:54:18Z | 3 | zhenyan-zhang-meta |
huggingface/lerobot | 1,045 | Inefficient Config Structure without Hydra | Hi, I notice that the repo used Hydra before, which can modify some config param or create new config yaml files. However, this was deprecated. I wonder how to efficiently modify a new config file for policy without writing these params in the command line each time? | https://github.com/huggingface/lerobot/issues/1045 | closed | [
"question",
"configuration",
"stale"
] | 2025-04-28T11:48:08Z | 2025-11-18T02:30:46Z | null | jiangranlv |
pytorch/torchtitan | 1,150 | [Feature] Support validation | For some workloads, it is really important to perform validation on a different dataset every n iterations.
This seems reasonably straightforward to add to the training loop and training specs, while being kept optional.
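As a generic illustration of how small the optional hook could be (plain Python, not torchtitan code; all names here are made up):

```python
def train(num_steps, validate_every, validate):
    """Toy training loop that calls `validate` every `validate_every` steps."""
    for step in range(1, num_steps + 1):
        # ... forward / backward / optimizer step would go here ...
        if validate_every and step % validate_every == 0:
            validate(step)

validated_at = []
train(num_steps=10, validate_every=3, validate=validated_at.append)
print(validated_at)  # [3, 6, 9]
```

Keeping `validate_every=None` as the default would preserve today's behavior.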
Is there any plan to support this functionality in the near future? | https://github.com/pytorch/torchtitan/issues/1150 | closed | [] | 2025-04-28T11:01:47Z | 2025-08-21T03:17:19Z | 4 | CarlosGomes98 |
huggingface/diffusers | 11,432 | `.from_pretrained` `torch_dtype="auto"` argument not working a expected | ### Describe the bug
Hey dear diffusers team,
thanks a lot for all your hard work!
I would like to make use of the `torch_dtype="auto"` keyword argument when loading a model/pipeline as specified [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained.torch_dtype), but the usage does not work as expected (see example below). Can you help me out with some guidance on how to use it correctly or let me know whether there is something wrong with the handling of this argument?
Thank you!
### Reproduction
```python
from diffusers import StableDiffusionPipeline
model = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype="auto")
```
### Logs
```shell
Passed `torch_dtype` torch.float32 is not a `torch.dtype`. Defaulting to `torch.float32`.
```
### System Info
- 🤗 Diffusers version: 0.33.1
- Platform: Linux-5.15.0-136-generic-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.17
- PyTorch version (GPU?): 2.7.0+cu126 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.30.2
- Transformers version: 4.51.3
- Accelerate version: 1.6.0
- PEFT version: 0.15.2
- Bitsandbytes version: 0.45.5
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA H100 PCIe, 81559 MiB
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11432 | closed | [
"bug"
] | 2025-04-28T04:31:26Z | 2025-05-13T01:42:37Z | 3 | johannaSommer |
huggingface/lerobot | 1,041 | image transform of pi0 is inconsistent with openpi | Thank you for pi0 work in lerobot.However, i found that image transform was quite different from openpi.
image transform of lerobot pi0:

image transform of openpi:

Are there some special considerations? By the way, resize_with_pad is also different. | https://github.com/huggingface/lerobot/issues/1041 | closed | [
"question",
"policies",
"stale"
] | 2025-04-28T03:08:10Z | 2025-11-20T02:30:12Z | null | wushandinghua |
pytorch/torchtitan | 1,147 | [Question] FSDP+TP CUDA_DEVICE_MAX_CONNECTIONS | In Megatron repo https://github.com/NVIDIA/Megatron-LM/blob/4429e8ebe21fb011529d7401c370841ce530785a/megatron/training/arguments.py#L779
It’s recommended that FSDP should use larger values of `CUDA_DEVICE_MAX_CONNECTIONS` but Megatron TP requires it to be 1. Is it also the case for torch implementation of TP using DTensor?
How should I configure the environment variable when using torch implementation of FSDP(2) and/or TP/CP/SP? | https://github.com/pytorch/torchtitan/issues/1147 | open | [
"documentation",
"question",
"module: fsdp"
] | 2025-04-27T20:48:50Z | 2025-04-29T21:54:07Z | null | ChenchaoZhao |
huggingface/diffusers | 11,423 | Lora Hotswap no clear documentation | Hello everyone.
Here is the scenario I have.
I have say 10 LoRAs that I would like to load and use depending on the request.
Option one:
using `load_lora_weights` - reads from the disk and moves to device: expensive operation
Option two:
load all LoRAs and set the weights of unused LoRAs to 0.0 with the `set_adapters` method. Not practical, since the forward pass stays expensive while all LoRAs remain loaded.
Option three:
Find an elegant way of loading LoRAs to CPU and then moving them to GPU as needed. While I was trying to do that, I saw the new hotswapping parameter of the `load_lora_weights` method. This is what the documentation says:
hotswap — (bool, optional) Defaults to False. Whether to substitute an existing (LoRA) adapter with the newly loaded adapter in-place. This means that, instead of loading an additional adapter, this will take the existing adapter weights and replace them with the weights of the new adapter. This can be faster and more memory efficient. However, the main advantage of hotswapping is that when the model is compiled with torch.compile, loading the new adapter does not require recompilation of the model. When using hotswapping, the passed adapter_name should be the name of an already loaded adapter. **If the new adapter and the old adapter have different ranks and/or LoRA alphas (i.e. scaling), you need to call an additional method before loading the adapter**
Could someone help me out here and name the mysterious method to be called?
And optionally, it would be great if someone could help me with my scenario.
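For what option three could look like, here is a plain-Python sketch of the bookkeeping (every name here is hypothetical; a real version would store `state_dict` tensors and implement `move_to_device` with `.to(device)`):

```python
class LoraRegistry:
    """Keep many adapters off-device; materialize one on demand."""

    def __init__(self, move_to_device):
        self._cpu_store = {}         # adapter name -> weights kept on CPU
        self._move = move_to_device  # pays the host->device copy lazily
        self.active = None

    def register(self, name, weights):
        self._cpu_store[name] = weights

    def activate(self, name):
        weights = self._move(self._cpu_store[name])  # copy only when needed
        self.active = name
        return weights

registry = LoraRegistry(move_to_device=dict)  # identity stand-in for .to("cuda")
registry.register("style_a", {"rank": 8})
registry.register("style_b", {"rank": 16})
print(registry.activate("style_b"))  # {'rank': 16}
```

On the documentation question: as far as I can tell, the method the docs allude to is `enable_lora_hotswap(target_rank=...)`, called before loading adapters of differing rank/alpha; worth confirming against the current diffusers docs.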
| https://github.com/huggingface/diffusers/issues/11423 | open | [
"stale"
] | 2025-04-26T13:44:08Z | 2025-05-26T15:03:03Z | 2 | vahe-toffee |
huggingface/diffusers | 11,419 | How to know that "Textual inversion" file I have loaded and not turn it on? | Reviewing the documentation I understand the load of IT with:
# Add Embeddings
Pipeline.load_textual_inversion("Sd-Concepts-Library/Cat-Toy"),
# Remave All Token Embeddings
Pipeline.unload_textual_inversion()
# Remove Just One Token
Pipeline.unload_textual_inversion ("<Moe-Bius>")
But how do you know which are charged to the pipeline? | https://github.com/huggingface/diffusers/issues/11419 | closed | [
"stale"
] | 2025-04-25T17:18:07Z | 2025-05-27T18:09:45Z | null | Eduardishion |
huggingface/diffusers | 11,418 | How to add flux1-fill-dev-fp8.safetensors | ### Describe the bug
Hi!
How to use flux1-fill-dev-fp8.safetensors in diffusers?
Now I have code:
```
def init_pipeline(device: str):
logger.info(f"Loading FLUX Inpaint Pipeline (Fill‑dev) on {device}")
pipe = FluxFillPipeline.from_pretrained(
"black-forest-labs/FLUX.1-Fill-dev",
torch_dtype=torch.bfloat16,
trust_remote_code=True
).to(device)
logger.info("Pipeline loaded successfully")
return pipe
```
Another try:
```
transformer = FluxTransformer2DModel.from_single_file(
"https://huggingface.co/YarvixPA/FLUX.1-Fill-dev-gguf/blob/main/flux1-fill-dev-Q4_0.gguf",
quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
torch_dtype=torch.bfloat16
)
pipe = FluxFillPipeline.from_pretrained(
"black-forest-labs/FLUX.1-Fill-dev",
transformer=transformer,
torch_dtype=torch.bfloat16,
trust_remote_code=True
).to(device)
pipe.enable_model_cpu_offload()
```
### Reproduction
https://huggingface.co/boricuapab/flux1-fill-dev-fp8/blob/main/README.md
https://huggingface.co/pengxian/diffusion_models/blob/main/flux1-fill-dev_fp8.safetensors
### Logs
```shell
```
### System Info
Windows 11
Python 11
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11418 | closed | [
"bug"
] | 2025-04-25T14:58:08Z | 2025-04-28T19:06:17Z | null | SlimRG |
huggingface/optimum | 2,242 | [onnx] What are the functions of the generated files by optimum-cli? | ### System Info
```shell
I tried to use **optimum-cli** to export an ONNX file for llama, but instead of a single ONNX file as expected I get a lot of files, and I don't know what they are used for:
(MindSpore) [ma-user llama149]$ls onnx_model/
config.json generation_config.json model.onnx model.onnx_data special_tokens_map.json tokenizer_config.json tokenizer.json
> refer to https://zhuanlan.zhihu.com/p/663971402
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
> (py39) [ma-user llama149]$optimum-cli export onnx --model models--daryl149--llama-2-7b-hf onnx_model --task text-generation
### Expected behavior
Get a single ONNX file only, similar to what **torch.onnx.export** produces. | https://github.com/huggingface/optimum/issues/2242 | closed | [] | 2025-04-25T13:12:35Z | 2025-04-28T09:18:06Z | 1 | vfdff |
huggingface/diffusers | 11,417 | attributeerror: 'distributeddataparallel' object has no attribute 'dtype'. did you mean: 'type'? | ### Describe the bug
attributeerror: 'distributeddataparallel' object has no attribute 'dtype'. did you mean: 'type'?
### Reproduction
export MODEL_NAME="black-forest-labs/FLUX.1-dev"
export OUTPUT_DIR="trained-flux-dev-dreambooth-lora"
accelerate launch train_dreambooth_lora_flux.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=$INSTANCE_DIR \
--output_dir=$OUTPUT_DIR \
--mixed_precision="bf16" \
--train_text_encoder\
--instance_prompt="a photo of sks dog" \
--resolution=512 \
--train_batch_size=1 \
--guidance_scale=1 \
--gradient_accumulation_steps=4 \
--optimizer="prodigy" \
--learning_rate=1. \
--report_to="wandb" \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--max_train_steps=500 \
--validation_prompt="A photo of sks dog in a bucket" \
--seed="0" \
--push_to_hub
### Logs
```shell
```
### System Info
- 🤗 Diffusers version: 0.33.0
- Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.12
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.30.2
- Transformers version: 4.44.1
- Accelerate version: 0.32.1
- PEFT version: 0.15.2
- Bitsandbytes version: not installed
- Safetensors version: 0.4.2
- xFormers version: 0.0.27.post2
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11417 | open | [
"bug",
"stale"
] | 2025-04-25T03:30:52Z | 2025-05-25T15:02:30Z | 1 | asjqmasjqm |
huggingface/datasets | 7,536 | [Errno 13] Permission denied: on `.incomplete` file | ### Describe the bug
When downloading a dataset, we frequently hit the below Permission Denied error. This looks to happen (at least) across datasets in HF, S3, and GCS.
It looks like the `temp_file` being passed [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L412) can sometimes be created with `000` permissions leading to the permission denied error (the user running the code is still the owner of the file). Deleting that particular file and re-running the code with 0 changes will usually succeed.
Is there some race condition happening between the [umask](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L416), which is process-global, and the [file creation](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L404)?
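That `umask` is process-global (shared by all threads) is easy to confirm with the stdlib alone (this only illustrates the suspected ingredient; it is not a reproduction of the datasets bug):

```python
import os
import stat
import tempfile

old_umask = os.umask(0o777)  # process-wide: every thread now masks all bits
try:
    fd, path = tempfile.mkstemp()  # mkstemp requests 0o600; umask strips it
    os.close(fd)
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))  # 0o0 -- the "000 permissions" described above
    os.unlink(path)
finally:
    os.umask(old_umask)  # always restore the previous mask
```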
```
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.venv/lib/python3.12/site-packages/datasets/load.py:2084: in load_dataset
builder_instance.download_and_prepare(
.venv/lib/python3.12/site-packages/datasets/builder.py:925: in download_and_prepare
self._download_and_prepare(
.venv/lib/python3.12/site-packages/datasets/builder.py:1649: in _download_and_prepare
super()._download_and_prepare(
.venv/lib/python3.12/site-packages/datasets/builder.py:979: in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
.venv/lib/python3.12/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py:120: in _split_generators
downloaded_files = dl_manager.download(files)
.venv/lib/python3.12/site-packages/datasets/download/download_manager.py:159: in download
downloaded_path_or_paths = map_nested(
.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py:514: in map_nested
_single_map_nested((function, obj, batched, batch_size, types, None, True, None))
.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py:382: in _single_map_nested
return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
.venv/lib/python3.12/site-packages/datasets/download/download_manager.py:206: in _download_batched
return thread_map(
.venv/lib/python3.12/site-packages/tqdm/contrib/concurrent.py:69: in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
.venv/lib/python3.12/site-packages/tqdm/contrib/concurrent.py:51: in _executor_map
return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))
.venv/lib/python3.12/site-packages/tqdm/std.py:1181: in __iter__
for obj in iterable:
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:619: in result_iterator
yield _result_or_cancel(fs.pop())
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:317: in _result_or_cancel
return fut.result(timeout)
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:449: in result
return self.__get_result()
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:401: in __get_result
raise self._exception
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/thread.py:59: in run
result = self.fn(*self.args, **self.kwargs)
.venv/lib/python3.12/site-packages/datasets/download/download_manager.py:229: in _download_single
out = cached_path(url_or_filename, download_config=download_config)
.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:206: in cached_path
output_path = get_from_cache(
.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:412: in get_from_cache
fsspec_get(url, temp_file, storage_options=storage_options, desc=download_desc, disable_tqdm=disable_tqdm)
.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:331: in fsspec_get
fs.get_file(path, temp_file.name, callback=callback)
.venv/lib/python3.12/site-packages/fsspec/asyn.py:118: in wrapper
return sync(self.loop, func, *args, **kwargs)
.venv/lib/python3.12/site-packages/fsspec/asyn.py:103: in sync
raise return_result
.venv/lib/python3.12/site-packages/fsspec/asyn.py:56: in _runner
result[0] = await coro
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <s3fs.core.S3FileSystem object at 0x7f27c18b2e70>
rpath = '<my-bucket>/<my-prefix>/img_1.jpg'
lpath = '/home/runner/_work/_temp/hf_cache/downloads/6c97983efa4e24e534557724655df8247a0bd04326cdfc4a95b638c11e78222d.incomplete'
callback = <datasets.utils.file_utils.TqdmCallback object at 0x7f27c00cdbe0>
version_id = None, kwargs = {}
_open_file = <function S3FileSystem._get_file.<locals>._open_file at 0x7f27628d1120>
body = <StreamingBody at 0x7f276344fa80 for ClientResponse at 0x7f27c015fce0>
content_length = 521923, failed_reads = 0, bytes_read = 0
async def _get_file(
self, rpath, lpath, callback=_DEFAULT_CALLBACK, version_id=None, **kwargs
):
| https://github.com/huggingface/datasets/issues/7536 | closed | [] | 2025-04-24T20:52:45Z | 2025-05-06T13:05:01Z | 4 | ryan-clancy |
pytorch/pytorch | 152,100 | What is the difference between normal_tensor.storage().use_count() and viewed_tensor's? | In the test2() below, why is b.storage().use_count() still 2 even when I deleted the source tensor a?
```
import torch
def test1():
print("=============== test 1 ===============")
a = torch.empty(size=(17, 32, 128, 16), dtype=torch.float16)
b = a.view(-1)
# b.storage().use_count() is 2
def test2():
print("=============== test 2 ===============")
a = torch.empty(size=(17, 32, 128, 16), dtype=torch.float16)
b = a.view(-1)
del a
# b.storage().use_count() is 2
def test3():
print("=============== test 3 ===============")
a = torch.empty(size=(17, 32, 128, 16), dtype=torch.float16)
b = a.view(-1)
del b
# a.storage().use_count() is 1
test1()
test2()
test3()
```
I thought use_count=2 was because a and b each referenced the storage once, and deleting either tensor would make the use_count be 1, but that's not the case. | https://github.com/pytorch/pytorch/issues/152100 | closed | [] | 2025-04-24T12:54:21Z | 2025-04-25T07:39:39Z | null | CLiqing |
pytorch/audio | 3,901 | 2.7.0 release tag | ### 🚀 The feature
Although there is a 2.7.0 release on PyPI, there is no release of the source code on GitHub. Can we get a 2.7.0 release tagged?
### Motivation, pitch
Package managers like Spack build from source code, not from pre-compiled wheels. This is especially important for libraries like torchaudio which get frequent bug fixes as PRs but don't always get those PRs merged due to lack of maintenance.
### Alternatives
_No response_
### Additional context
_No response_ | https://github.com/pytorch/audio/issues/3901 | closed | [] | 2025-04-24T09:54:48Z | 2025-04-24T15:25:16Z | 2 | adamjstewart |
pytorch/torchtitan | 1,141 | Meet Error when using AMD server (MI250) | Hi, when I using torchtitan on AMD server (Mi250), it reports the following errors:
Does torchtitan support AMD servers like the MI250?
Thanks. | https://github.com/pytorch/torchtitan/issues/1141 | closed | [] | 2025-04-24T07:48:10Z | 2025-04-25T08:46:06Z | 5 | StillKeepTry |
huggingface/diffusers | 11,396 | How to convert the hidream lora trained by diffusers to a format that comfyui can load? | ### Describe the bug
The hidream LoRA trained by diffusers can't be loaded in ComfyUI; how can I convert it?
### Reproduction
No
### Logs
```shell
```
### System Info
No
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11396 | closed | [
"bug",
"stale"
] | 2025-04-23T13:13:34Z | 2025-06-23T09:49:19Z | null | yinguoweiOvO |
huggingface/candle | 2,916 | how to save and load the model | I just use the varmap.save the varmap,but when I use the varmap.load then achieved a empty varmap. is there any way to save the trained model? | https://github.com/huggingface/candle/issues/2916 | closed | [] | 2025-04-23T11:10:04Z | 2025-04-24T02:25:37Z | null | liguheng |
huggingface/tokenizers | 1,768 | How to debug tokenizers with python? | Hi, I have a technical question. After installing transformers via pip, I successfully installed tokenizers==0.21.1 and transformers==4.49.0. When running the code:
`tokenizer = AutoTokenizer.from_pretrained("../Qwen2") # (tokenizer configs in this folder)`
`tokenizer.encode(data)`
I want to trace the program flow to understand:
- How tokenizers.encode_batch works internally
- The implementation details of BPE (Byte Pair Encoding)
However, I'm currently stuck because the code appears to be compiled into tokenizers.abi3.so, making the source code inaccessible. How can I debug or inspect these components? | https://github.com/huggingface/tokenizers/issues/1768 | open | [] | 2025-04-23T09:37:20Z | 2025-04-30T14:11:11Z | null | JinJieGan |
pytorch/torchtitan | 1,133 | How to correctly use FSDP2 do mixed precision training? | Hi, I am currently doing this way:
```py
model = AutoModel.from_pretrained(...)
# make sure model is in fp32, so we have a fp32 mater weight in optimizer
model.to(torch.float32)
mp_policy = MixedPrecisionPolicy(
param_dtype=torch.bfloat16,
reduce_dtype=torch.float32,
)
fsdp_kwargs = {
"reshard_after_forward": True,
"mp_policy": mp_policy,
}
for cls_to_wrap in transformer_cls_to_wrap:
for module in model.modules():
if isinstance(module, cls_to_wrap):
fully_shard(module, **fsdp_kwargs)
fully_shard(model, **fsdp_kwargs)
```
The first question is: is this correct? As far as I understand, the model params are in fp32, the optimizer states will also be fp32, and the fwd and bwd passes will use bf16.
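The reason the fp32 master matters can be shown without torch by truncating floats to bf16 by hand (`to_bf16` is a hand-rolled illustration of bf16's 8-bit mantissa, not an FSDP or torch API):

```python
import struct

def to_bf16(x: float) -> float:
    """Truncate a float to bfloat16 precision (keep fp32's top 16 bits)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

master = 1.0          # fp32 master weight, as held by the optimizer
bf16_only = 1.0       # weight kept in bf16 end to end
for _ in range(100):
    update = 1e-4     # below bf16 resolution near 1.0 (~2**-8)
    master += update                         # accumulates in fp32
    bf16_only = to_bf16(bf16_only + update)  # rounds straight back to 1.0
print(master, to_bf16(master), bf16_only)
```

With the master in fp32, the 100 tiny updates accumulate to about 1.01 and survive the cast to bf16 for the forward pass; kept purely in bf16, every single update is swallowed and the weight never moves.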
I am wondering if I can init FSDP with a bf16 model and then convert the FSDP module to fp32, since this way it takes less CPU memory when loading large LLMs, like the following demo:
```py
model = AutoModel.from_pretrained(...)
# just for demo, make sure model is in bf16
model.to(torch.bfloat16)
mp_policy = MixedPrecisionPolicy(
param_dtype=torch.bfloat16,
reduce_dtype=torch.float32,
)
fsdp_kwargs = {
"reshard_after_forward": True,
"mp_policy": mp_policy,
}
for cls_to_wrap in transformer_cls_to_wrap:
for module in model.modules():
if isinstance(module, cls_to_wrap):
fully_shard(module, **fsdp_kwargs)
fully_shard(model, **fsdp_kwargs)
model.to(torch.float32)
``` | https://github.com/pytorch/torchtitan/issues/1133 | closed | [] | 2025-04-23T06:55:40Z | 2025-04-27T10:03:20Z | null | KimmiShi |
pytorch/torchtitan | 1,132 | FSDP2 reduce_scatter_reduce_op for context parallelism | Hi,
FSDP2 reduce_scatter by default seems to take the average over the entire shard world, which consists of dp_shard and cp. Averaging gradients over dp_shard makes sense, but I wonder if sum is the better reduce op for CP?
Logically, it seems to me the gradient should be agnostic to the choice of CP.
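The intuition can be checked with toy arithmetic (a linear per-token loss normalized by the global token count; not torchtitan internals):

```python
# Loss = (1/N) * sum_i(w * x_i), so dLoss/dw = sum(x) / N.
xs = [1.0, 2.0, 3.0, 4.0]
N = len(xs)
full_grad = sum(xs) / N  # single device: 2.5

# CP=2: the *same* sample's tokens are split across two ranks,
# so each rank holds a partial sum of the very same gradient.
rank_grads = [sum(xs[:2]) / N, sum(xs[2:]) / N]
print(sum(rank_grads))       # 2.5  -> summing over CP recovers full_grad
print(sum(rank_grads) / 2)   # 1.25 -> averaging over CP shrinks it by cp
```

Whether averaging actually double-counts in practice depends on how each rank normalizes its local loss, which is presumably what needs clarifying upstream.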
Thanks! | https://github.com/pytorch/torchtitan/issues/1132 | closed | [
"question"
] | 2025-04-23T01:44:19Z | 2025-04-24T16:39:05Z | null | dingqingy |
pytorch/xla | 9,026 | Where to find TPU-dependent compile-pipeline/optimizations in XLA? | ## ❓ Questions and Help
I'm diving into the XLA source code to understand the compilation pipeline for the TPU backend and any TPU-dependent optimizations. However, I couldn't find details about the TPU compilation pipeline in the xla/service dir, while the CPU and GPU pipelines seem more visible. I see some cost-model-based fusion in the GPU backend, so I wonder where the equivalent optimizations are done for the TPU backend?
huggingface/diffusers | 11,390 | Better image interpolation in training scripts follow up | With https://github.com/huggingface/diffusers/pull/11206 we did a small quality improvement for the SDXL Dreambooth LoRA script by making `LANCZOS` the default interpolation mode for the image resizing.
This issue is to ask for help from the community to bring this change to the other training scripts, especially the popular ones.
Since this is a really easy to make contribution I'll ask that we leave this issue for beginners and people that want to start learning how to contribute to open source projects.
What I think are the most important ones:
- [x] [train_dreambooth_flux](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_flux.py)
- [x] [train_dreambooth_lora](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora.py)
- [x] [train_dreambooth_lora_lumina2](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_lumina2.py)
- [x] [train_dreambooth_lora_sdxl](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sdxl.py)
- [x] [train_controlnet_flux](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/train_controlnet_flux.py)
- [x] [train_controlnet_sdxl](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/train_controlnet_sdxl.py)
- [x] [train_text_to_image](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py)
- [x] [train_text_to_image_lora](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py)
- [x] [train_text_to_image_sdxl](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py)
- [x] [train_text_to_image_lora_sdxl](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora_sdxl.py)
- [x] [train_dreambooth_lora_flux_advanced](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py)
- [x] [train_dreambooth_lora_sd15_advanced](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sd15_advanced.py)
- [x] [train_dreambooth_lora_sdxl_advanced](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py)
If you have another preference, please feel free to ask me to add it.
If you want to contribute, just reply to this issue with the one you want to do and tag me in the PR. Please only take one, since I want to use this issue to help people learn the ropes of contributing and get started with open source. | https://github.com/huggingface/diffusers/issues/11390 | closed | [
"good first issue",
"contributions-welcome"
] | 2025-04-23T00:04:10Z | 2025-05-05T16:35:18Z | 20 | asomoza |
huggingface/lerobot | 1,019 | How to resume dataset creation after interruption instead of starting from scratch? | Recently our dataset creation + upload got interrupted due to an error not related to LeRobot. However, I have not been able to relaunch the dataset creation using the information already processed. My cache folder shows the data, meta, and videos folders, and I was able to determine from the episodes.jsonl file in the meta folder that 579 episodes had been processed.
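Counting the entries in `meta/episodes.jsonl` is one way to recover the resume index; here is a minimal stdlib sketch (an illustration, not the LeRobot API):

```python
from pathlib import Path

def next_episode_index(meta_dir):
    """Return the index to resume from by counting recorded episodes
    in meta/episodes.jsonl (one JSON object per line)."""
    episodes_file = Path(meta_dir) / "episodes.jsonl"
    if not episodes_file.exists():
        return 0
    with episodes_file.open() as f:
        return sum(1 for line in f if line.strip())
```

With 579 episodes recorded this returns 579, which, with 0-indexed episodes, is exactly the next index to record.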
When I try to resume from the 580th episode, the `LeRobotDataset.create()` call fails with `FileExistsError: [Errno 17] File exists:` because the cache already has it. How can I resume instead of having to start from scratch? | https://github.com/huggingface/lerobot/issues/1019 | closed | [] | 2025-04-22T21:30:12Z | 2025-04-22T21:45:00Z | null | Anas-7 |
huggingface/peft | 2,508 | How to save the custom module into adapter_model.safetensors when integrating a new PEFT method | I just don't know where to save and load the module, or whether something can mark which modules need to be saved.
For example, we want a MoE of LoRAs, where the multiple LoRAs and a router are the trainable parts and need to be saved. | https://github.com/huggingface/peft/issues/2508 | closed | [] | 2025-04-22T15:46:39Z | 2025-04-30T11:01:58Z | null | AaronZLT |
huggingface/lerobot | 1,015 | How to efficiently collect and standardize datasets from multiple Gymnasium environments? | Hello, I am studying how to collect datasets from various Gymnasium environments for reinforcement learning and imitation learning experiments. Currently, I can collect some data from real environments, but how can I collect data from Gymnasium? | https://github.com/huggingface/lerobot/issues/1015 | closed | [
"question",
"dataset",
"good first issue"
] | 2025-04-22T08:50:34Z | 2025-10-17T11:16:09Z | null | ybu-lxd |
huggingface/lerobot | 1,013 | When creating dataset, how to save_episode with existing video? | For a video with compatible frames, height, and width that was recorded/rendered elsewhere, how can I add it to an episode directly without a redundant decode-encode round-trip? | https://github.com/huggingface/lerobot/issues/1013 | closed | [
"enhancement",
"dataset",
"stale"
] | 2025-04-22T04:05:10Z | 2025-12-25T02:35:25Z | null | jjyyxx |
huggingface/lerobot | 1,012 | why is chunk_size not used in PI0? | https://github.com/huggingface/lerobot/blob/b43ece89340e7d250574ae7f5aaed5e8389114bd/lerobot/common/policies/pi0/modeling_pi0.py#L658
Is it more meaningful and reasonable here to change `n_action_steps` to `chunk_size`, since `chunk_size` means prediction action horizon and `n_action_steps` means action steps actually applied to control the robot? | https://github.com/huggingface/lerobot/issues/1012 | closed | [
"question",
"policies",
"stale"
] | 2025-04-22T03:43:38Z | 2025-11-04T02:30:18Z | null | feixyz10 |
huggingface/huggingface_hub | 3,020 | How to run apps in local mode? local_files_only is failing | The app runs perfectly fine when the internet is available.
All models are downloaded into
`os.environ['HF_HOME'] = os.path.abspath(os.path.realpath(os.path.join(os.path.dirname(__file__), './hf_download')))`
When I set it like below:
```
# Set local_files_only based on offline mode
local_files_only = args.offline
if local_files_only:
    print("Running in OFFLINE mode - using local models only")
    # Disable any online connections for HuggingFace when in offline mode
    os.environ['HF_HUB_OFFLINE'] = '1'
    os.environ['TRANSFORMERS_OFFLINE'] = '1'
    os.environ['DIFFUSERS_OFFLINE'] = '1'
# Load models with local_files_only parameter when in offline mode
text_encoder = LlamaModel.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder='text_encoder', torch_dtype=torch.float16, local_files_only=local_files_only).cpu()
text_encoder_2 = CLIPTextModel.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder='text_encoder_2', torch_dtype=torch.float16, local_files_only=local_files_only).cpu()
tokenizer = LlamaTokenizerFast.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder='tokenizer', local_files_only=local_files_only)
tokenizer_2 = CLIPTokenizer.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder='tokenizer_2', local_files_only=local_files_only)
vae = AutoencoderKLHunyuanVideo.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder='vae', torch_dtype=torch.float16, local_files_only=local_files_only).cpu()
feature_extractor = SiglipImageProcessor.from_pretrained("lllyasviel/flux_redux_bfl", subfolder='feature_extractor', local_files_only=local_files_only)
image_encoder = SiglipVisionModel.from_pretrained("lllyasviel/flux_redux_bfl", subfolder='image_encoder', torch_dtype=torch.float16, local_files_only=local_files_only).cpu()
transformer = HunyuanVideoTransformer3DModelPacked.from_pretrained('lllyasviel/FramePackI2V_HY', torch_dtype=torch.bfloat16, local_files_only=local_files_only).cpu()
```
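One thing worth checking in the snippet above: `HF_HUB_OFFLINE` and the other offline flags are typically read by `huggingface_hub` at import time, so setting them after the libraries have been imported may have no effect. This is an assumption about the failure mode, not something the traceback proves; a minimal sketch of the safer ordering:

```python
import os

# Set the offline flags BEFORE importing transformers/diffusers/huggingface_hub,
# since the libraries may read them once at import time.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["DIFFUSERS_OFFLINE"] = "1"

# Only now import the HF libraries:
# from transformers import LlamaModel  # etc.

def offline_flags():
    """Report which offline flags are active in the current process."""
    keys = ("HF_HUB_OFFLINE", "TRANSFORMERS_OFFLINE", "DIFFUSERS_OFFLINE")
    return {k: os.environ.get(k) for k in keys}
```

If the flags are instead exported in the shell before launching the app (`HF_HUB_OFFLINE=1 python app.py`), the import-order question goes away entirely.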
When I run it with the internet turned off, I get the error below:
`local_files_only` is set as `True`:
```
Running in OFFLINE mode - using local models only
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 262.52it/s]
Traceback (most recent call last):
File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\urllib3\connection.py", line 198, in _new_conn
sock = connection.create_connection(
File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\urllib3\util\connection.py", line 60, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "C:\Python310\lib\socket.py", line 955, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\urllib3\connectionpool.py", line 787, in urlopen
response = self._make_request(
File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\urllib3\connectionpool.py", line 488, in _make_request
raise new_e
File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\urllib3\connectionpool.py", line 464, in _make_request
self._validate_conn(conn)
File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\urllib3\connectionpool.py", line 1093, in _validate_conn
conn.connect()
File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\urllib3\connection.py", line 704, in connect
self.sock = sock = self._new_conn()
File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\urllib3\connection.py", line 205, in _new_conn
raise NameResolutionError(self.host, self, e) from e
urllib3.exceptions.NameResolutionError: <urllib3.connection.HTTPSConnection object at 0x000001A126F7ED70>: Failed to resolve 'huggingface.co' ([Errno 11001] getaddrinfo failed)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\requests\adapters.py", line 486, in send
resp = conn.urlopen(
File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\urllib3\connectionpool.py", line 841, in urlopen
retries = retries.increment(
File "Q:\FramePack_v1\FramePack\venv\lib\site-packages\urllib3\util\retry.py", line 519, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/lllyasviel/FramePackI2V_HY (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x000001A126F7ED70>: Failed to resolve 'huggingface.co' ([Errno 11001] getaddrinfo failed)"))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "Q:\FramePack_v1\FramePack\app.py", line 72, in <module>
transformer | https://github.com/huggingface/huggingface_hub/issues/3020 | closed | [
"bug"
] | 2025-04-21T23:46:06Z | 2025-04-22T09:24:57Z | null | FurkanGozukara |
pytorch/torchtitan | 1,126 | fully_shard() for huggingface model: pytorch caches too much GPU memory | Dear Community,
I'm working on fine-tuning the Qwen2-VL model using `fully_shard()` and wrote a script for it. However, I noticed that GPU memory usage stays high (around 50GB to 60GB) even as I scale up the number of GPUs. Besides, it runs into OOM when I try to fine-tune the 72B model with 128 GPUs.
I'm wondering if there might be any issues with my code or configuration. I'd really appreciate any insights or suggestions you might have. Thanks in advance!
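For orientation, here is a back-of-envelope per-rank estimate of the sharded parameter, gradient, and optimizer-state memory. Activations are deliberately excluded; they do not shrink as you add data-parallel ranks and often dominate for VLMs with image inputs, which is one plausible (unverified) explanation for the flat usage:

```python
def fsdp_param_memory_gb(n_params, world_size, bytes_per_param=2,
                         n_optimizer_states=2, keep_grads=True):
    """Back-of-envelope per-rank memory (GB) for FSDP-sharded parameters,
    gradients, and optimizer states (simplified: one dtype for everything).
    Activations are deliberately excluded."""
    copies = 1 + (1 if keep_grads else 0) + n_optimizer_states
    return n_params * bytes_per_param * copies / world_size / 1e9

# e.g. a 7B-parameter model in bf16 with two AdamW states across 8 ranks:
# fsdp_param_memory_gb(7e9, 8) -> 7.0 GB per rank for params/grads/optimizer
```

If the numbers this predicts are far below what `nvidia-smi` reports, the gap is likely activations or allocator caching (compare `torch.cuda.memory_reserved()` vs. `torch.cuda.memory_allocated()`), not the sharded weights.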
My code:
```
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from transformers import Qwen2VLForConditionalGeneration, Qwen2VLProcessor, AutoModelForVision2Seq, AutoConfig
from qwen_vl_utils import process_vision_info
from peft import LoraConfig, get_peft_model
from datasets import load_dataset
import numpy as np
from PIL import Image
import io
import logging
import os
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.distributed as dist
import torch.distributed.checkpoint as dcp
from torch.distributed.device_mesh import init_device_mesh
from transformers.models.qwen2_vl.modeling_qwen2_vl import Qwen2VLDecoderLayer, Qwen2VLVisionBlock
from torch.distributed._composable.fsdp import fully_shard
from torch.distributed import init_process_group, destroy_process_group
from torch.distributed.checkpoint import DefaultLoadPlanner, DefaultSavePlanner
from torch.distributed._composable.fsdp import (
CPUOffloadPolicy,
fully_shard,
MixedPrecisionPolicy,
)
# Set up logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)
# init dist
distributed_backend = "nccl" # gloo for cpu
dist.init_process_group(distributed_backend)
local_rank = int(os.environ["LOCAL_RANK"])
world_size = int(os.environ["WORLD_SIZE"])
device = torch.device(f"cuda:{local_rank}")
torch.cuda.set_device(device)
# model_name = "Qwen/Qwen2-VL-2B-Instruct"
# revision = "895c3a49bc3fa70a340399125c650a463535e71c"
model_name = "Qwen/Qwen2-VL-7B-Instruct"
revision = "a28a094eb66a9f2ac70eef346f040d8a79977472"
# model_name = "Qwen/Qwen2-VL-72B-Instruct"
# revision = "f9b556a74d58e6d9915f73227c21045c87342b42"
dataset_id = "HuggingFaceM4/ChartQA"
processor = Qwen2VLProcessor.from_pretrained(model_name,
revision=revision,
)
# Configuration
class Config:
dataset_id = "HuggingFaceM4/ChartQA"
output_dir = "/tmp_ckpt"
batch_size = 2
num_epochs = 3
learning_rate = 5e-5
max_seq_length = 512
lora_rank = 32
lora_alpha = 64
lora_dropout = 0.1
device = "cuda" if torch.cuda.is_available() else "cpu"
system_message = """You are a Vision Language Model specialized in interpreting visual data from chart images.
Your task is to analyze the provided chart image and respond to queries with concise answers, usually a single word, number, or short phrase.
The charts include a variety of types (e.g., line charts, bar charts) and contain colors, labels, and text.
Focus on delivering accurate, succinct answers based on the visual information. Avoid additional explanation unless absolutely necessary."""
def format_data(sample):
return [
{
"role": "system",
"content": [{"type": "text", "text": system_message}],
},
{
"role": "user",
"content": [
{
"type": "image",
"image": sample["image"],
},
{
"type": "text",
"text": sample["query"],
},
],
},
{
"role": "assistant",
"content": [{"type": "text", "text": sample["label"][0]}],
},
]
# Training function
def train_model(model, train_loader, optimizer, config):
model.train()
total_steps = len(train_loader) * config.num_epochs
step = 0
scaler = torch.amp.GradScaler("cuda", enabled=True)
for epoch in range(config.num_epochs):
total_loss = 0
for batch_idx, batch in enumerate(train_loader):
inputs, labels = batch
inputs = inputs.to(config.device)
labels = labels.to(config.device)
# Mixed precision training
loss = model(**inputs, labels=labels).loss
loss.backward() # no scaler
optimizer.step()
optimizer.zero_grad()
step += 1
logger.info(f"Epoch {epoch+1}/{config.num_epochs}, Step {step}/{total_steps}, Loss: {loss.item():.4f}")
del loss
# Create a data collator to encode text and image pairs
def collate_fn(examples):
# Get the texts and images, and apply the chat template
texts = [
processor.apply_chat_template(example, tokenize=False) for example in examples
] # Prepare texts for processing
image_inputs = [process | https://github.com/pytorch/torchtitan/issues/1126 | open | [
"question",
"module: fsdp"
] | 2025-04-21T21:37:43Z | 2025-05-13T05:09:52Z | null | mingdianliu |
pytorch/pytorch | 151,829 | profile for torch.add(x, x) where x is a zero-sized tensor looks bogus | ```py
from torch.profiler import profile, record_function, ProfilerActivity
import torch
x = torch.randn(0)
with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
with record_function("model_inference"):
x + x
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```
Gives:
```
In [7]: print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------
aten::matmul 0.46% 8.994us 62.32% 1.213ms 606.382us 2
aten::dot 61.72% 1.201ms 61.86% 1.204ms 601.884us 2
model_inference 6.61% 128.555us 8.13% 158.251us 158.251us 1
aten::to 1.04% 20.242us 5.30% 103.077us 3.221us 32
aten::_to_copy 2.19% 42.586us 4.26% 82.835us 2.589us 32
aten::ones 2.08% 40.453us 2.87% 55.895us 13.974us 4
aten::add 2.32% 45.200us 2.59% 50.328us 12.582us 4
aten::abs 1.27% 24.757us 2.20% 42.744us 21.372us 2
aten::__lshift__ 0.67% 12.990us 1.76% 34.283us 34.283us 1
aten::pow 1.40% 27.282us 1.58% 30.817us 10.272us 3
----------------------------- ------------ ------------ ------------ ------------ ------------ ------------
```
which seems really bizarre
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise | https://github.com/pytorch/pytorch/issues/151829 | closed | [
"oncall: profiler"
] | 2025-04-21T20:53:57Z | 2025-06-07T23:58:54Z | null | zou3519 |
huggingface/finetrainers | 378 | How to finetune CogVideoX1.5-5B T2V LoRA? | Hello. I'm still unfamiliar with the fine-tuning process. I want to fine-tune CogVideoX1.5-5B T2V with LoRA. I have a single RTX 4090. I tried to re-run the bash script "finetrainers\examples\training\sft\cogvideox\crush_smol_lora\train.sh" with my own dataset and ended up with the error message
`train.sh: line 130: accelerate: command not found
train.sh: line 131: $'(\r --parallel_backend accelerate\r --pp_degree 1 --dp_degree 1 --dp_shards 1 --cp_degree 1 --tp_degree 1\r\r)\r': command not found
: No such file or directory_path THUDM/CogVideoX1.5-5B
--dataset_config D:/TA_ucup/finetrainers/examples/training/sft/cogvideox/crush_smol_: No such file or directoryize 10
train.sh: line 134: $'(\r --dataloader_num_workers 0\r)\r': command not found
train.sh: line 135: $'(\r --flow_weighting_scheme logit_normal\r)\r': command not found
train.sh: line 136: $'(\r --training_type lora\r --seed 42\r --batch_size 1\r --train_steps 3000\r --rank 32\r --lora_alpha 32\r --target_modules (transformer_blocks|single_transformer_blocks).*(to_q|to_k|to_v|to_out.0)\r --gradient_accumulation_steps 1\r --gradient_checkpointing\r --checkpointing_steps 1000\r --checkpointing_limit 2\r --enable_slicing\r --enable_tiling\r)\r': command not found
train.sh: line 137: $'(\r --optimizer adamw\r --lr 5e-5\r --lr_scheduler constant_with_warmup\r --lr_warmup_steps 1000\r --lr_num_cycles 1\r --beta1 0.9\r --beta2 0.99\r --weight_decay 1e-4\r --epsilon 1e-8\r --max_grad_norm 1.0\r)\r': command not found
--validation_dataset_file D:/TA_ucup/finetrainers/examples/training/sft/cogvideox/cr: No such file or directoryon
: No such file or directoryogvideoxeox`
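The `$'\r'` fragments in the errors above usually mean the script has Windows (CRLF) line endings, which bash cannot parse; the `accelerate: command not found` on line 130 may then just be a side effect. A guess at the fix, demonstrated on a throwaway script:

```shell
# Reproduce the symptom with a CRLF script, then strip the carriage returns.
printf 'echo ok\r\n' > /tmp/crlf_demo.sh
sed -i 's/\r$//' /tmp/crlf_demo.sh   # or: dos2unix train.sh
bash /tmp/crlf_demo.sh               # prints: ok
```

After converting `train.sh` itself (or re-saving it with LF endings from the editor), re-check that `accelerate` is installed in the active environment as well.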
I already installed the library requirements and diffusers. Is there anything I'm missing? | https://github.com/huggingface/finetrainers/issues/378 | open | [] | 2025-04-21T17:17:08Z | 2025-04-24T06:24:06Z | null | MaulanaYusufIkhsanRobbani |
huggingface/trl | 3,333 | How can I set the dataset to not shuffle? It seems there is no such option. | I'm using GRPOTrainer for training, and based on the logs I've printed, it seems that the dataset is being shuffled. However, the order of samples in the dataset is very important to me, and I don't want it to be shuffled. What should I do? I've checked the documentation but couldn't find any parameter to control this. | https://github.com/huggingface/trl/issues/3333 | closed | [
"❓ question",
"🏋 GRPO"
] | 2025-04-21T11:11:53Z | 2025-04-21T21:34:33Z | null | Tuziking |
pytorch/ao | 2,086 | How to automatically install the latest TorchAO nightly wheel | When I try to install TorchAO the same way I install the nightly torch wheel (pip3 install torchao --index-url https://download.pytorch.org/whl/nightly/cpu), I end up getting version 0.10.0 of TorchAO, instead of the expected https://download.pytorch.org/whl/nightly/cpu/torchao-0.11.0.dev20250418+cpu-py3-none-any.whl for example.
I'd like to know how to automatically install the latest TorchAO nightly wheel. Also, why is the latest TorchAO nightly build only available for Python 3.9?
log:
(torch27) [xxx@xxx localdisk]$ pip3 install torchao --index-url https://download.pytorch.org/whl/nightly/cpu
Looking in indexes: https://download.pytorch.org/whl/nightly/cpu
Collecting torchao
Using cached https://download.pytorch.org/whl/nightly/cpu/torchao-0.10.0%2Bcpu-py3-none-any.whl.metadata (14 kB)
Using cached https://download.pytorch.org/whl/nightly/cpu/torchao-0.10.0%2Bcpu-py3-none-any.whl (710 kB)
Installing collected packages: torchao
Successfully installed torchao-0.10.0+cpu
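By default, pip skips pre-release and `.dev` versions, which would explain resolving to 0.10.0 instead of the 0.11.0.dev nightly. Opting in with `--pre` is the likely fix (a hedged suggestion, not verified against the nightly index):

```shell
# --pre allows pip to select pre-release and .dev versions from the index.
pip3 install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cpu
```

On the Python 3.9 question: the nightly wheel named above is tagged `py3-none-any`, i.e., pure Python, so it does not look restricted to 3.9; I may be missing a separate constraint, though.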
| https://github.com/pytorch/ao/issues/2086 | open | [
"triaged",
"distribution"
] | 2025-04-21T06:48:43Z | 2025-04-29T22:28:47Z | null | MingxuZh |
huggingface/trl | 3,331 | how to run multi-adapter PPO training in TRL==0.16.1 ? | In `TRL==0.11.0`, we can use multi-adapter to train PPO model like:
- $\pi_\text{sft}$ sft model as base model
- $\pi_\text{sft} + \text{LoRA}_\text{rm}$ as reward model
- $\pi_\text{sft} + \text{LoRA}_\text{policy}$ as policy model
- $\pi_\text{sft} + \text{LoRA}_\text{critic}$ as value model
How can I run multi-adapter PPO training in v0.16.1? | https://github.com/huggingface/trl/issues/3331 | closed | [
"❓ question",
"🏋 PPO",
"🏋 SFT"
] | 2025-04-21T06:26:32Z | 2025-06-17T08:59:11Z | null | dhcode-cpp |
huggingface/huggingface_hub | 3,019 | How to solve "Spaces stuck in Building" problems | ### Describe the bug
Public spaces may stuck in Building after restarting, error log as follows:
build error
Unexpected job error
ERROR: failed to push spaces-registry.huggingface.tech/spaces/:cpu--: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-: 401 Unauthorized
### Reproduction
_No response_
### Logs
```shell
```
### System info
```shell
This problem can still happen in Python Gradio Spaces without a requirements.txt
``` | https://github.com/huggingface/huggingface_hub/issues/3019 | closed | [
"bug"
] | 2025-04-21T03:11:11Z | 2025-04-22T07:50:01Z | null | ghost |
huggingface/datasets | 7,530 | How to solve "Spaces stuck in Building" problems | ### Describe the bug
Public spaces may stuck in Building after restarting, error log as follows:
build error
Unexpected job error
ERROR: failed to push spaces-registry.huggingface.tech/spaces/*:cpu-*-*: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-*: 401 Unauthorized
### Steps to reproduce the bug
Restart space / Factory rebuild cannot avoid it
### Expected behavior
Fix this problem
### Environment info
This can still happen with no requirements.txt,
in Python Gradio Spaces | https://github.com/huggingface/datasets/issues/7530 | closed | [] | 2025-04-21T03:08:38Z | 2025-11-11T00:57:14Z | null | ghost |
huggingface/lerobot | 1,005 | [pi0] n_action_step vs chunk_size | In modeling_pi0.py, the config variable `chunk_size` is never used. Instead, the action queue is set to be the size of `n_action_step`, and the training loss is also calculated on the actions of size `n_action_step`.
But I thought what should happen is that the model would predict actions of length `chunk size` (and the loss is calculated on this action length as well), and the actual execution only takes `n_action_step`. At the very least, the variable that defines the size of `action_queue` should not be the same as the variable that defines the size of the predicted action vector. They may take the same value, but should be different variables, so the user can use the config to adjust how often they want to do inference
This is also what happens in pi0fast's implementation, if I am not mistaken
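In receding-horizon terms, the distinction the question draws can be sketched as follows (an illustration, not LeRobot's actual implementation): predict `chunk_size` actions per inference call, execute only the first `n_action_steps`, then re-plan.

```python
def rollout(predict, total_steps, chunk_size, n_action_steps):
    """predict(t) returns `chunk_size` actions starting at step t;
    only the first `n_action_steps` of each chunk are executed
    before the model is queried again."""
    assert n_action_steps <= chunk_size
    executed = []
    while len(executed) < total_steps:
        chunk = predict(len(executed))
        executed.extend(chunk[:n_action_steps])
    return executed[:total_steps]
```

With `n_action_steps == chunk_size` the two coincide, which may be why a single variable currently suffices; decoupling them, as the question suggests, would let users re-plan more often than the full prediction horizon.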
Am I missing something here? Thanks in advance | https://github.com/huggingface/lerobot/issues/1005 | closed | [
"question",
"policies",
"stale"
] | 2025-04-20T04:00:23Z | 2025-11-07T02:30:27Z | null | IrvingF7 |
pytorch/pytorch | 151,746 | [AotInductor][Export][Triton] how to export custom triton kernels when use torch.export.export | ### 🐛 Describe the bug
our framework is based on torch, and includes some custom triton kernels.
in inference phase, we try use different gpu type(such as training on H100, inference on L40). so we should load exported model and call aoti_compile_and_package to generate aot model based on inference gpu, but error with below msg when call torch.load:
```
torch._export.serde.serialize.SerializeError: Unsupported target type for node Node(target='torch.ops.triton_kernel.add.default', inputs=[NamedArgument(name='x', arg=Argument(as_tensor=TensorArgument(name='linear')), kind=1), NamedArgument(name='y', arg=Argument(as_tensor=TensorArgument(name='mul')), kind=1)], outputs=[Argument(as_tensor=TensorArgument(name='add'))], metadata={'stack_trace': ' File "/usr/local/app/torch_learn/export/model_export.py", line 72, in forward\n output = triton_add(dense_output, bias)\n File "/usr/bin/python3.9/lib/python3.9/site-packages/torch/_library/custom_ops.py", line 671, in __call__\n return self._opoverload(*args, **kwargs)\n', 'nn_module_stack': 'L__self__,,__main__.SimpleModel', 'source_fn_stack': 'add_default,torch.ops.triton_kernel.add.default',
'torch_fn': 'add.default_1;OpOverload.add.default'}, is_hop_single_tensor_return=None): <class 'str'>
```
In my understanding, torch needs the source code of the Triton kernels when loading the exported model.
But our framework is big, and in some cases users may define their own custom Triton kernels.
It's difficult for us to obtain user source code and install this big framework on the inference GPU machine.
Any suggestions?
The simple model code is:
```python
import torch
import torch.nn as nn
import torch
import triton
import triton.language as tl
@triton.jit
def add_kernel(
    x_ptr, y_ptr, output_ptr,
    n_elements,
    BLOCK_SIZE: tl.constexpr,
):
    pid = tl.program_id(axis=0)
    block_start = pid * BLOCK_SIZE
    offsets = block_start + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    output = x + y
    tl.store(output_ptr + offsets, output, mask=mask)

@torch.library.triton_op("triton_kernel::add", mutates_args={})
def triton_add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    n_elements = x.numel()
    output = torch.empty_like(x)
    BLOCK_SIZE = 1024
    grid = (triton.cdiv(n_elements, BLOCK_SIZE),)
    torch.library.wrap_triton(add_kernel)[grid](
        x, y, output,
        n_elements,
        BLOCK_SIZE,
    )
    return output

class SimpleModel(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super(SimpleModel, self).__init__()
        self.dense = nn.Linear(input_dim, hidden_dim)

    def forward(self, x):
        dense_output = self.dense(x)
        bias = torch.ones_like(dense_output) * 0.5
        output = triton_add(dense_output, bias)
        return output

def main():
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    input_dim = 10
    hidden_dim = 20
    batch_size = 16
    model = SimpleModel(input_dim, hidden_dim).to(device)
    x = torch.randn(batch_size, input_dim, device=device)
    with torch.no_grad():
        output = model(x)
    exported_model = torch.export.export(
        model,
        (x,),
    )
    torch.export.save(exported_model, "exported_model.pt")

if __name__ == "__main__":
    main()
```
Run this code and an exported model is written to `./exported_model.pt`.
Then run the AOT export code:
```python
import torch
torch.set_default_device("cuda")
saved_exported_program = torch.export.load(f"exported_model.pt")
torch._inductor.aoti_compile_and_package(
    saved_exported_program,
    package_path=f"aot_model.pt2",
)
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
GCC version: (GCC) 10.3.1 20210422 (Red Hat 10.3.1-1)
Clang version: 9.0.1 (Red Hat 9.0.1-2.module_el8.2.0+309+0c7b6b03)
CMake version: version 3.19.0
Libc version: glibc-2.28
Python version: 3.9.16 (main, Dec 11 2024, 20:47:20) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] (64-bit runtime)
Python platform: Linux-5.4.119-1-tlinux4-0010.3-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A10
GPU 1: NVIDIA A10
GPU 2: NVIDIA A10
GPU 3: NVIDIA A10
Nvidia driver version: 470.141.03
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8.9.7
/usr/lib/libcudnn_adv_infer.so.8.9.7
/usr/lib/libcudnn_adv_train.so.8.9.7
/usr/lib/libcudnn_cnn_infer.so.8.9.7
/usr/lib/libcudnn_cnn_train.so.8.9.7
/usr/lib/libcudnn_ops_infer.so.8.9.7
/usr/lib/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Thread(s) per core: 2
Core(s) per socke | https://github.com/pytorch/pytorch/issues/151746 | open | [
"oncall: pt2",
"export-triaged",
"oncall: export",
"module: aotinductor",
"module: user triton"
] | 2025-04-19T13:26:03Z | 2025-04-25T23:11:04Z | null | zzq96 |
pytorch/executorch | 10,314 | This document(https://pytorch.org/executorch/stable/demo-apps-android.html#running-the-app) is out of date. Where is examples/demo-apps/android/ExecuTorchDemo? | https://pytorch.org/executorch/stable/demo-apps-android.html#running-the-app


cc @mergennachin @iseeyuan @lucylq @helunwencser @tarun292 @kimishpatel @jackzhxng | https://github.com/pytorch/executorch/issues/10314 | closed | [
"module: examples"
] | 2025-04-19T09:36:52Z | 2025-12-23T20:39:22Z | null | Kennems |
huggingface/lerobot | 1,000 | How to implement a new policy? | How can I integrate a new policy (e.g., OpenVLA) into LeRobot, and specifically, which files do I need to modify? | https://github.com/huggingface/lerobot/issues/1000 | closed | [
"enhancement",
"policies"
] | 2025-04-19T08:53:48Z | 2025-07-29T14:30:18Z | null | Elycyx |
huggingface/prettier-plugin-vertical-align | 2 | how to use | https://github.com/huggingface/prettier-plugin-vertical-align#installation
Add plugins: ["@huggingface/prettier-plugin-vertical-align"] to your .prettierrc file.
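If the `.prettierrc` is in its common JSON form, that instruction presumably corresponds to the following (a sketch based on the standard Prettier config schema, not verified against this plugin's README):

```json
{
  "plugins": ["@huggingface/prettier-plugin-vertical-align"]
}
```

The same `plugins` key also works in `.prettierrc.json`, `.prettierrc.yaml`, or the `prettier` field of `package.json`.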
Are you sure it should go in the `.prettierrc` file? | https://github.com/huggingface/prettier-plugin-vertical-align/issues/2 | closed | [] | 2025-04-19T04:15:29Z | 2025-04-24T02:53:42Z | null | twotwoba |
pytorch/xla | 9,002 | Update debugger documentation to demonstrate lldb | It's possible lldb is faster than gdb. Feature request is to explore if that is true, and if so, write docs on how to use lldb command line and lldb in VSCode.
This is an enhancement of #8997 | https://github.com/pytorch/xla/issues/9002 | open | [
"documentation"
] | 2025-04-18T16:28:50Z | 2025-04-21T12:33:58Z | 0 | yaoshiang |
huggingface/lerobot | 997 | how to convert pi0 fast | I just came across the pi0 conversion; how do I convert pi0-FAST?

| https://github.com/huggingface/lerobot/issues/997 | closed | [
"question"
] | 2025-04-18T14:27:29Z | 2025-10-14T14:06:30Z | null | ximiluuuu |
huggingface/diffusers | 11,359 | [Feature request] LTX-Video v0.9.6 15x faster inference than non-distilled model. | **Is your feature request related to a problem? Please describe.**
No problem. This request is Low priority. As and when time allows.
**Describe the solution you'd like.**
Please support the new release of LTX-Video 0.9.6
**Describe alternatives you've considered.**
The original repo has support, but it is easier to use with diffusers.
**Additional context.**
April 15th, 2025: new checkpoints v0.9.6:
- Release a new checkpoint [ltxv-2b-0.9.6-dev-04-25](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-2b-0.9.6-dev-04-25.safetensors) with improved quality
- Release a new distilled model [ltxv-2b-0.9.6-distilled-04-25](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-2b-0.9.6-distilled-04-25.safetensors)
  - 15x faster inference than the non-distilled model.
  - Does not require classifier-free guidance or spatio-temporal guidance.
  - Supports sampling with 8 (recommended), 4, 2, or 1 diffusion steps.
  - Improved prompt adherence, motion quality, and fine details.
- New default resolution and FPS: 1216 × 704 pixels at 30 FPS
  - Still real time on H100 with the distilled model.
  - Other resolutions and FPS are still supported.
- Support for stochastic inference (can improve visual quality when using the distilled model)
https://github.com/Lightricks/LTX-Video
Feedback on distilled model
https://www.reddit.com/r/StableDiffusion/comments/1k1xk1m/6_seconds_video_in_60_seconds_in_this_quality_is/
https://www.reddit.com/r/StableDiffusion/comments/1k1o4x8/the_new_ltxvideo_096_distilled_model_is_actually/ | https://github.com/huggingface/diffusers/issues/11359 | closed | [] | 2025-04-18T08:05:27Z | 2025-05-09T16:03:34Z | 6 | nitinmukesh |
pytorch/xla | 8,997 | Add guide to debugging | For now, it can cover just PyTorch pending #8996 | https://github.com/pytorch/xla/issues/8997 | closed | [
"documentation"
] | 2025-04-17T18:30:31Z | 2025-04-20T08:01:29Z | 0 | yaoshiang |
huggingface/transformers.js | 1,291 | @xenova/transformers vs. @huggingface/transformers npm package | ### Question
It's pretty confusing to have both of these on npm. Which are we supposed to use?
Can you please deprecate the one that we aren't supposed to use? (`npm deprecate`) | https://github.com/huggingface/transformers.js/issues/1291 | open | [
"question"
] | 2025-04-17T16:10:36Z | 2025-10-24T10:19:03Z | null | nzakas |
huggingface/accelerate | 3,510 | Accelerate Config Error - How to debug this? | ### System Info
```Shell
pip list
absl-py 2.2.2
accelerate 1.6.0
annotated-types 0.7.0
bitsandbytes 0.45.5
diffusers 0.33.0.dev0 /data/roy/diffusers
ftfy 6.3.1
huggingface-hub 0.30.2
numpy 2.2.4
nvidia-cublas-cu12 12.4.5.8
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-cusparselt-cu12 0.6.2
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.4.127
packaging 24.2
peft 0.15.2
pip 22.0.2
protobuf 5.29.4
safetensors 0.5.3
setuptools 59.6.0
tokenizers 0.21.1
torch 2.6.0
torchvision 0.21.0
transformers 4.51.3
triton 3.2.0
wandb 0.19.9
... etc
nvidia-smi
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.124.06 Driver Version: 570.124.06 CUDA Version: 12.8 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA H100 PCIe Off | 00000000:2E:00.0 Off | 0 |
| N/A 43C P0 84W / 350W | 16460MiB / 81559MiB | 100% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA H100 PCIe Off | 00000000:30:00.0 Off | 0 |
| N/A 45C P0 89W / 350W | 11456MiB / 81559MiB | 100% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 2 NVIDIA H100 PCIe Off | 00000000:3F:00.0 Off | 0 |
| N/A 40C P0 86W / 350W | 11384MiB / 81559MiB | 100% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 3 NVIDIA H100 PCIe Off | 00000000:41:00.0 Off | 0 |
| N/A 36C P0 47W / 350W | 1MiB / 81559MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 4 NVIDIA H100 PCIe Off | 00000000:B0:00.0 Off | 0 |
| N/A 46C P0 87W / 350W | 11384MiB / 81559MiB | 100% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 5 NVIDIA H100 PCIe Off | 00000000:B1:00.0 Off | 0 |
| N/A 39C P0 48W / 350W | 1MiB / 81559MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 6 NVIDIA H100 PCIe Off | 00000000:C1:00.0 Off | 0 |
| N/A 35C P0 52W / 350W | 1MiB / 81559MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 7 NVIDIA H100 PCIe Off | 00000000:C2:00.0 Off | 0 |
| N/A 35C P0 51W / 350W | 1MiB / 81559MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: | https://github.com/huggingface/accelerate/issues/3510 | closed | [] | 2025-04-17T11:12:50Z | 2025-05-19T08:46:12Z | null | KihongK |
pytorch/TensorRT | 3,478 | ❓ [Question] Is SAM2 supported when compiling with the Dynamo backend on JetPack 6.1 or 6.2? | ## ❓ Question
Will SAM2 be compatible with the Dynamo backend on JetPack 6.1/6.2?
Are there any workarounds for the TensorRT version mismatch?
## What you have already tried
Here are my attempts and the issues encountered. My device is a Jetson AGX Orin, and I only compile the ImageEncoder of SAM2 (Hiera & FPN, with position_encoding removed); the SAM2 code is from https://github.com/chohk88/sam2/tree/torch-trt:
**_JetPack 6.1 + PyTorch 2.5 (from https://developer.download.nvidia.cn) + Torch-TensorRT 2.5_**
Tried compiling SAM2 but encountered errors.
Observed that the PyTorch 2.5 documentation does not mention SAM2 support, likely indicating SAM2 is not yet adapted for this version.
**_JetPack 6.1 + PyTorch 2.6 (from https://pypi.jetson-ai-lab.dev/jp6/cu126) + Torch-TensorRT 2.6_**
Installed PyTorch 2.6 from [jp6/cu126](https://pypi.jetson-ai-lab.dev/jp6/cu126) and Torch-TensorRT 2.6.
Importing torch_tensorrt failed with ModuleNotFoundError: No module named 'tensorrt.plugin'.
Root cause: Torch-TensorRT 2.6 requires TensorRT 10.7, but JetPack 6.1 provides only TensorRT 10.3.
Found no straightforward way to upgrade TensorRT within JetPack 6.1 due to dependency conflicts.
_**Cross-Platform Attempt: Compile on x86 + Run on JetPack 6.1**_
Compiled SAM2 on x86 with Torch-TensorRT 2.6 and exported the model.
Tried running it on JetPack 6.1 with Torch-TensorRT 2.5.
Failed unsurprisingly due to serialization version incompatibility between 2.6 and 2.5.
| https://github.com/pytorch/TensorRT/issues/3478 | open | [
"question"
] | 2025-04-17T08:32:07Z | 2025-06-28T07:09:31Z | null | AyanamiReiFan |
huggingface/diffusers | 11,351 | Why Wan i2v video processor always float32 datatype? | ### Describe the bug
I found
image = self.video_processor.preprocess(image, height=height, width=width).to(device, dtype=torch.float32)
https://github.com/huggingface/diffusers/blob/29d2afbfe2e09a4ee7cc51455e51ce8b8c0e252d/src/diffusers/pipelines/wan/pipeline_wan_i2v.py#L633
in pipeline_wan_i2v.py
Why is the datatype always float32? Maybe it's a bug.
### Reproduction
just run
### Logs
```shell
```
### System Info
any platform
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11351 | closed | [
"bug"
] | 2025-04-17T07:00:42Z | 2025-05-07T03:48:24Z | 2 | DamonsJ |
pytorch/xla | 8,993 | Is there a way to attach metadata to a layer in a way that is included in the StableHLO export? | ## ❓ Questions and Help
I am looking at a use case where metadata about a trained model's layers needs to be attached to the StableHLO export. I am using `exported_program_to_stablehlo`
One option I had considered is exporting the data completely separately from `exported_program_to_stablehlo` (say, by writing some random JSON to disk), but then I don't know how to connect the written metadata back to the StableHLO export, because the layer names do not appear to be attached to the generated StableHLO ops.
Another option I tried was to attach the metadata directly to the torch nodes before calling `exported_program_to_stablehlo`, but I can't figure out how to do so in a way that results in the metadata being exported as MLIR attributes. It would suffice to export the attributes as, e.g., an op attribute with a given string name and value.
Could someone advise on whether this is possible, or suggest an alternative? (Or add a feature that would support this?) | https://github.com/pytorch/xla/issues/8993 | open | [
"question",
"stablehlo"
] | 2025-04-17T06:04:47Z | 2025-04-25T00:44:25Z | null | j2kun |
huggingface/transformers | 37,570 | How to stream output audio of Qwen2.5-omni-7b | None of the Qwen2.5-omni-7b examples show how to stream output audio. By passing a streamer, I am able to get streaming text, but how can I get the streaming audio output? | https://github.com/huggingface/transformers/issues/37570 | closed | [] | 2025-04-17T04:16:35Z | 2025-07-30T08:03:44Z | null | qinxuye |
pytorch/tutorials | 3,332 | Tutorial mention of batch samples as features? | Hello, kindly confirm whether it is correct to say that batch_size=64 will give 64 features and 64 labels. Aren't there 28-by-28 features and 64 samples?
<img width="903" alt="Image" src="https://github.com/user-attachments/assets/7fe5d741-58c9-404a-a181-145e2bbfc086" />
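For reference, the shapes can be checked directly; the sketch below uses random tensors in place of FashionMNIST (so no download is needed), since only the shapes matter here:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for FashionMNIST: 100 samples, each a 1x28x28 image with a label.
images = torch.rand(100, 1, 28, 28)
labels = torch.randint(0, 10, (100,))
loader = DataLoader(TensorDataset(images, labels), batch_size=64)

X, y = next(iter(loader))
print(X.shape)  # torch.Size([64, 1, 28, 28]) -> 64 samples, each with 28*28 feature values
print(y.shape)  # torch.Size([64]) -> 64 labels
```

So batch_size=64 yields 64 samples (and 64 labels) per batch, and each sample carries 28×28 feature values; the tutorial's "64 features" wording is likely glossing over that distinction.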
| https://github.com/pytorch/tutorials/issues/3332 | open | [] | 2025-04-17T02:35:14Z | 2025-04-17T02:35:58Z | 0 | monaja |
pytorch/xla | 8,986 | When trying to run this code with connection to tpu in google colab i had this error: AssertionError: 4 results for replica 0 | ## ❓ Questions and Help
When trying to run this code in google colab:
```python
import os
import torch_xla
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp
import torch_xla.runtime as xr
import torchvision
import multiprocessing as mp

os.environ['TPU_NUM_DEVICES'] = '8'
os.environ['XLA_USE_SPMD'] = '1'
os.environ['XLA_TENSOR_ALLOCATOR_MAXSIZE'] = '8G'

lock = mp.Manager().Lock()

def _mp_fn(i, lock, device):
    with lock:
        pass
    print(f"Process {i}: device = {device} (BEFORE RETURN)")
    return i, device

if __name__ == '__main__':
    nprocs = None
    device = xm.xla_device()
    print(f"Main process device: {device}")

    results = xmp.spawn(_mp_fn, args=(lock, device), start_method='fork', nprocs=nprocs)
    print("Results:")
    for key, value in results.items():
        print(f"  Key: {key}, Value: {value}")
    for i, device in results.items():
        print('process', i, device)
```
I get this error:
```
Main process device: xla:0
Process 0: device = xla:0 (BEFORE RETURN)
Process 0: device = xla:0 (BEFORE RETURN)Process 0: device = xla:0 (BEFORE RETURN)
Process 0: device = xla:0 (BEFORE RETURN)
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-1-60755fe4d950> in <cell line: 0>()
27 print(f"Main process device: {device}")
28
---> 29 results = xmp.spawn(_mp_fn, args=(lock, device), start_method='fork', nprocs=nprocs)  # pass device
30 print("Results:")
3 frames
/usr/local/lib/python3.11/dist-packages/torch_xla/distributed/xla_multiprocessing.py in spawn(fn, args, nprocs, join, daemon, start_method)
37 return None.
38 """
---> 39 return pjrt.spawn(fn, nprocs, start_method, args)
40
41
/usr/local/lib/python3.11/dist-packages/torch_xla/_internal/pjrt.py in spawn(fn, nprocs, start_method, args)
211 % nprocs)
212
--> 213 run_multiprocess(spawn_fn, start_method=start_method)
214
215
/usr/local/lib/python3.11/dist-packages/torch_xla/_internal/pjrt.py in run_multiprocess(fn, start_method, *args, **kwargs)
171 result.items() for result in process_results))
172
--> 173 return _merge_replica_results(replica_results)
174
175
/usr/local/lib/python3.11/dist-packages/torch_xla/_internal/pjrt.py in _merge_replica_results(replica_results)
37 ordinal for ordinal, _ in replica_results)
38 replica, num_results = replica_counts.most_common(1)[0]
---> 39 assert num_results == 1, f'{num_results} results for replica {replica}'
40
41 return dict(replica_results)
AssertionError: 4 results for replica 0
```
At first I tried many different versions, but it didn't help:
```
!pip install -U pip
!pip install cloud-tpu-client==0.10
!pip install torch~=2.1.0 'torch_xla[tpu]~=2.1.0' \
-f https://storage.googleapis.com/libtpu-releases/inde6x.html \
-f https://storage.googleapis.com/libtpu-wheels/index.html
```
What should I do, and how can I fix it? I have tried many different ways to connect the TPU, but I still couldn't connect properly and start training.
"question",
"xla:tpu"
] | 2025-04-16T11:56:22Z | 2025-04-18T12:11:07Z | null | Neckto0 |
huggingface/diffusers | 11,339 | How to multi-GPU WAN inference | Hi,I didn't find multi-gpu inferences example in the documentation. Can you give me an example, such as Wan2.1-I2V-14B-720P-Diffusers.
I would appreciate some help on that, thank you in advance | https://github.com/huggingface/diffusers/issues/11339 | closed | [
"stale"
] | 2025-04-16T10:22:41Z | 2025-07-05T21:18:01Z | null | HeathHose |
huggingface/trl | 3,295 | I have 2 GPUs, but gpu:0 is used by default. How do I specify gpu:1 for training? | ### Reproduction
```python
from trl import ...
```
outputs:
```
Traceback (most recent call last):
File "example.py", line 42, in <module>
...
```
### System Info
I have 2 GPUs, but training defaults to gpu:0. How do I specify gpu:1 for training?
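The usual way (not trl-specific) is to restrict which devices the process can see, e.g. `CUDA_VISIBLE_DEVICES=1 python train.py` from the shell (train.py standing in for whatever your script is called); GPU 1 then appears inside the process as cuda:0. The same can be done in Python, as long as it runs before torch initializes CUDA:

```python
import os

# Must be set before torch touches CUDA for it to take effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
print(os.environ["CUDA_VISIBLE_DEVICES"])  # 1 -> the process only sees GPU 1, exposed as cuda:0
```

With accelerate there is also `accelerate launch --gpu_ids 1 script.py`, though I have not verified that flag against every trl version.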
### Checklist
- [x] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue))
- [x] I have included my system information
- [x] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
- [x] Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
- [x] Any traceback provided is complete | https://github.com/huggingface/trl/issues/3295 | closed | [
"❓ question",
"📱 cli"
] | 2025-04-15T08:29:26Z | 2025-04-24T19:46:37Z | null | Aristomd |
huggingface/lerobot | 981 | How can I simulate robots without physical robots? How should I learn robot simulation? Do you have any good recommendations? | How can I simulate robots without physical robots? How should I learn robot simulation? Do you have any good recommendations? I am a beginner. | https://github.com/huggingface/lerobot/issues/981 | closed | [
"question",
"simulation"
] | 2025-04-15T04:04:33Z | 2025-10-17T11:19:34Z | null | harryhu0301 |
huggingface/diffusers | 11,321 | flux controlnet train ReadMe have a bug | ### Describe the bug

What are the controlnet config parameters? The text says num_single_layers = 10, but the code sets num_single_layers=0?
### Reproduction
check readme file
### Logs
```shell
```
### System Info
diffusers ==0.33.0
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11321 | closed | [
"bug",
"stale"
] | 2025-04-15T01:30:58Z | 2025-10-11T09:58:52Z | 14 | Johnson-yue |
huggingface/agents-course | 428 | [QUESTION] Current schedule is non-sensical |
> There’s a deadline for the certification process: all the assignments must be finished before May 1st 2025.
But the "when will the next units be published" graph doesn't have Unit 4 even being released until "The end of April". And as of today (April 14, 2025) we still have no idea what any of the "use case assignments" are. As it stands, it appears to be impossible to actually complete this course.
And no one from Hugging Face seems to be answering, or even acknowledging, any questions on this topic. It would be nice to get some clarity / updates.
| https://github.com/huggingface/agents-course/issues/428 | closed | [
"question"
] | 2025-04-14T18:13:31Z | 2025-04-28T06:51:58Z | null | mindcrime |
pytorch/audio | 3,899 | Segmentation fault (core dumped) in torchaudio.io.AudioEffector | ### 🐛 Describe the bug
Occasionally, a core dump error may occur with a specific audio file as input, which a Python exception cannot capture.
This error is rare, but when it does occur, the entire Python process is killed. It only happens with some "special" audio; unfortunately, I could not figure out what was special about it.
How to reproduce:
1. Download the numpy array that causes the core dump in my environment.
[a.npy.zip](https://github.com/user-attachments/files/19736212/a.npy.zip)
2. Run the following code:
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy
from torchaudio.io import AudioEffector, CodecConfig
import torch
module = AudioEffector(
format='ogg',
encoder='opus',
codec_config=CodecConfig(qscale=1),
pad_end=True,)
audio = numpy.load('./a.npy')
output = module.apply(torch.from_numpy(audio), 44100).numpy()
```
```
[W414 21:10:43.989426875 encode_process.cpp:179] Warning: "opus" encoder is selected. Enabling '-strict experimental'. If this is not desired, please provide "strict" encoder option with desired value. (function operator())
[1] 2613659 segmentation fault (core dumped) python debug.py
```
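Until the root cause is found, one containment strategy (a workaround sketch, not a fix) is to run the encode in a worker process so a native crash cannot take down the parent. `_encode_stub` below is a stand-in for the real `AudioEffector.apply` call; a real worker would load the array, run the effector, and hand back or save the result:

```python
import multiprocessing as mp
import os

def _encode_stub(crash):
    # Stand-in for the native torchaudio/FFmpeg encode call.
    if crash:
        os._exit(139)  # simulate a segfault-style death of the child

def encode_isolated(crash):
    ctx = mp.get_context("fork")  # fork context on Linux; use "spawn" elsewhere
    p = ctx.Process(target=_encode_stub, args=(crash,))
    p.start()
    p.join()
    return p.exitcode == 0  # the parent survives even when the child dies

print(encode_isolated(False), encode_isolated(True))  # True False
```

The cost is one process spawn per risky encode, but the parent process stays alive and can log which inputs crashed.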
My python and package versions:
```
numpy 2.0.2
torch 2.6.0
torch-complex 0.4.4
torchaudio 2.6.0
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.11.0-21-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 9950X 16-Core Processor
CPU family: 26
Model: 68
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 67%
CPU max MHz: 5752.0000
CPU min MHz: 600.0000
BogoMIPS: 8599.98
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx_vnni avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid bus_lock_detect movdiri movdir64b overflow_recov succor smca fsrm avx512_vp2intersect flush_l1d amd_lbr_pmc_freeze
Virtualization: AMD-V
L1d cache: 768 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected | https://github.com/pytorch/audio/issues/3899 | open | [] | 2025-04-14T13:20:04Z | 2025-04-14T13:20:56Z | 0 | LiChenda |
huggingface/lerobot | 975 | [Question] How to modify model & dataset to accept two input images in observation.image? | Hi, thank you for the great repo!
I’ve been going through the first three examples, and now I’d like to explore training a diffusion policy with some customized input. Specifically:
My goal:
I want each observation.image to contain two images as input (they have the same shape as the original single image).
I want the output of the model to remain the same as in the original diffusion policy.
My question:
Since I’m new to this repo, I’d like to ask for guidance on what needs to be modified to support this:
Model architecture: which parts of the model code should I look at or modify to handle a double-image input?
Dataset / Data loading: where should I modify the dataset to provide observation.image with two images instead of one?
Are there any other components I should be aware of (e.g., pre-processing, normalization, config changes, etc.)?
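For the input side specifically, one common pattern (general, not lerobot-specific) is channel-concatenation: stack the two views along the channel axis and widen the vision encoder's first conv from 3 to 6 input channels. A minimal shape sketch, where the 96×96 resolution is just an example:

```python
import torch

img_a = torch.rand(3, 96, 96)  # first camera view
img_b = torch.rand(3, 96, 96)  # second camera view (same shape)

obs = torch.cat([img_a, img_b], dim=0)         # (6, 96, 96)
conv1 = torch.nn.Conv2d(6, 32, kernel_size=3)  # first encoder layer widened to 6 in-channels
feat = conv1(obs.unsqueeze(0))                 # add batch dim -> (1, 32, 94, 94)
print(obs.shape, feat.shape)
```

The alternative is to encode each image separately and concatenate the embeddings; which of the two fits lerobot's diffusion policy config better is something the maintainers can confirm.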
Any advice or pointers to relevant parts of the code would be greatly appreciated!
Thanks in advance 🙏 | https://github.com/huggingface/lerobot/issues/975 | closed | [
"dataset",
"stale"
] | 2025-04-14T08:35:47Z | 2025-11-04T02:30:23Z | null | Keith-Luo |
huggingface/candle | 2,893 | How to build a multi-node inference/training in candle? | Hi team,
I'd like to have an example on mulit-node inference/training of candle, how can I find it?
Thanks :)
-- Klaus | https://github.com/huggingface/candle/issues/2893 | open | [] | 2025-04-14T08:03:20Z | 2025-04-14T08:03:20Z | null | k82cn |
huggingface/chat-ui | 1,795 | Offline Custom Tools | Would it be possible to define/use tools that the LLMs can use in an offline state?
"Tools must use Hugging Face Gradio Spaces as we detect the input and output types automatically from the [Gradio API](https://www.gradio.app/guides/sharing-your-app#api-page)."
Is there any reason that the tools can't be hosted locally with the same ability for the LLM to use? | https://github.com/huggingface/chat-ui/issues/1795 | open | [
"enhancement"
] | 2025-04-14T02:41:19Z | 2025-04-14T02:41:19Z | 0 | cr-intezra |
huggingface/chat-ui | 1,794 | Docker Image and Local Install missing file/image/etc upload | I've used the chat-ui-db:latest image, and also cloned the repo (setting up mongo and npm install / run dev), and the UI I get does not have the icons or the ability to upload an image or file. It only has the web search button.
This would be for release 0.9.4.
Is there something in .env.local that I am missing to enable this feature?
Otherwise the chat-ui works as intended, I am able to use different models but wanted to test the ability to use a vision model.
 | https://github.com/huggingface/chat-ui/issues/1794 | open | [] | 2025-04-13T19:30:29Z | 2025-04-13T19:30:29Z | 0 | cr-intezra |
pytorch/audio | 3,898 | forcing other not allowed frequencies to be accepted | I'm trying to work with frequencies below 20 Hz, preferably at 18.98 Hz, but the documentation says it only supports above 4000, 8000, and 9000. Even so, is there a way to force torch to work with my desired frequency? Please advise. | https://github.com/pytorch/audio/issues/3898 | open | [] | 2025-04-13T15:31:41Z | 2025-04-13T15:31:41Z | 0 | andrewessel |
pytorch/xla | 8,968 | Alternative to torch.select_mask | ## ❓ Questions and Help
Most of the time we can adapt routines to avoid graph recompilations; however, there are instances where this is a bit tricky.
When computing a masked mean, we are currently using sum and valids as follows:
```python
replaced = input_tensor*is_valid
sum_valid = replaced.sum()
n_valid = is_valid.sum(dtype=input_tensor.dtype)
return torch.nan_to_num(sum_valid / n_valid)
```
Where valid is 0 or 1 if the entry is in the dataset.
This effectively calculates the mean while ignoring zeros. It works well when the data is close to full in most examples. However, in some instances we insert sparse data with a reduced is_valid mask that is consistent across the dataset. The result is that the 0 entries in is_valid are reinforced, and when testing against the test set of sparse data, the model won't predict the other entries.
To avoid this - we have historically used torch.select_mask, which only selects the non-zero entries and the gradients only back prop. through those - basically we don't get the reinforced 0-0.
I'm wondering if there is an alternative or work around to torch.select_mask as this increases computation time by ~6X because of the frequent recompilations.
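For what it's worth (reading select_mask as `torch.masked_select`, which produces a dynamically shaped result and is exactly what triggers the recompilations), here is a static-shape sketch using `torch.where`; whether its gradient behavior matches `masked_select` for the sparse case above would need checking on real data:

```python
import torch

def masked_mean(x, is_valid):
    # Static shapes throughout (XLA-friendly); torch.where routes gradient
    # only through positions where the mask is True.
    mask = is_valid.bool()
    masked = torch.where(mask, x, torch.zeros_like(x))
    n_valid = mask.sum().to(x.dtype)
    return torch.nan_to_num(masked.sum() / n_valid)

x = torch.tensor([1.0, 2.0, 3.0, 4.0])
print(masked_mean(x, torch.tensor([1, 1, 0, 0])))  # tensor(1.5000)
```

An all-zero mask yields 0/0, which `nan_to_num` maps back to 0, matching the sum/valids formulation in the snippet above.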
Thank you again for this awesome tool and let me know if you have any questions.
| https://github.com/pytorch/xla/issues/8968 | closed | [
"question"
] | 2025-04-13T14:38:55Z | 2025-05-01T20:31:05Z | null | ttdd11 |
huggingface/optimum | 2,228 | Unable to convert an audio-to-audio model. | ### Feature request
``` bash
optimum-cli export onnx --model microsoft/speecht5_vc speecht5_vc_onnx/
```
Output:
``` log
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
Traceback (most recent call last):
File "/usr/local/bin/optimum-cli", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.12/dist-packages/optimum/commands/optimum_cli.py", line 208, in main
service.run()
File "/usr/local/lib/python3.12/dist-packages/optimum/commands/export/onnx.py", line 265, in run
main_export(
File "/usr/local/lib/python3.12/dist-packages/optimum/exporters/onnx/__main__.py", line 296, in main_export
raise ValueError(
ValueError: Asked to export a speecht5 model for the task audio-to-audio (auto-detected), but the Optimum ONNX exporter only supports the tasks text-to-audio for speecht5. Please use a supported task. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the task audio-to-audio to be supported in the ONNX export for speecht5.
```
### Motivation
My primary objective is to convert Hugging Face models to TensorRT, but according to the documentation I've reviewed, ONNX must be used as an intermediate step.
### Your contribution
I don't believe I have the technical capability to implement this feature. | https://github.com/huggingface/optimum/issues/2228 | closed | [
"Stale"
] | 2025-04-13T00:50:26Z | 2025-05-18T02:17:06Z | 1 | divinerapier |
huggingface/lerobot | 971 | Can different robotic arms share the same dataset and model? | English:
I currently have datasets and models for the Koch, SO100, and ALOHA robotic arms. Is it possible for these three arms to share the same dataset and model? If so, how should this be implemented? If not—given the significant hardware differences—what is the practical value of data sharing in this context?
@Cadene
Chinese (translated):
I have datasets and models for the Koch, SO100, and ALOHA arms here. Can the three arms share a dataset and model? If so, how would that be used? If not, given that hardware varies enormously, what is the point of data sharing?
| https://github.com/huggingface/lerobot/issues/971 | closed | [
"question",
"dataset",
"stale"
] | 2025-04-12T05:03:27Z | 2025-10-17T12:06:45Z | null | ZhangWuWei |
pytorch/TensorRT | 3,469 | ❓ [Question] How wo you export a triton kernel with model to a serialized engine that can be run in c++? | ## ❓ Question
<!-- Your question -->
How wo you export a triton kernel with model to a serialized engine that can be run in c++?
## What you have already tried
Read through python examples.
<!-- A clear and concise description of what you have already done. -->
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0):
- CPU Architecture:
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source):
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version:
- CUDA version:
- GPU models and configuration:
- Any other relevant information:
## Additional context
| https://github.com/pytorch/TensorRT/issues/3469 | open | [
"question"
] | 2025-04-11T16:53:33Z | 2025-12-12T01:58:55Z | null | cmgreen210 |
huggingface/autotrain-advanced | 881 | Accelerators: Error fetching data. how to troubleshoot |
Getting this error message when trying to train my model using Autotrain
Accelerators: Error fetching data
Error fetching training status
My data file is a csv & correctly formatted.
What are possible ways to troubleshoot this problem?
I'm new to fine-tuning so would love any assistance | https://github.com/huggingface/autotrain-advanced/issues/881 | closed | [
"stale"
] | 2025-04-11T16:04:12Z | 2025-06-02T15:02:09Z | null | innerspacestudio |
pytorch/torchtitan | 1,093 | why is shard(1) in the colwiseparallel for lm head? | I found that the ColwiseParallel here for the output linear layer has input_layouts=Shard(1). That way, the input will be sharded across different devices along the sequence dimension, and the linear layer's output dimension (e.g., the vocab dimension) is also distributed. Is that something desired? In my understanding, it should be ColwiseParallel(input_layouts=Replicate(), output_layouts=Shard(-1) if loss_parallel else Replicate(), use_local_output=not loss_parallel)
```python
parallelize_module(
    model,
    tp_mesh,
    {
        "tok_embeddings": RowwiseParallel(
            input_layouts=Replicate(),
            output_layouts=Shard(1),
        ),
        "norm": SequenceParallel(),
        "output": ColwiseParallel(
            input_layouts=Shard(1),
            output_layouts=Shard(-1) if loss_parallel else Replicate(),
            use_local_output=not loss_parallel,
        ),
    },
)
```
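For context on what Shard(1) describes here, as I understand the code: the preceding norm is SequenceParallel, so the activation arriving at the output layer is already split along the sequence dimension, and input_layouts=Shard(1) simply declares that incoming layout (tensor parallelism redistributes internally as needed before the matmul). A plain-tensor sketch of the layout, using torch.chunk in place of DTensor:

```python
import torch

x = torch.rand(2, 8, 16)             # (batch, seq, hidden): the full activation
shards = torch.chunk(x, 4, dim=1)    # Shard(1): each of 4 ranks holds a seq slice
print(len(shards), shards[0].shape)  # 4 torch.Size([2, 2, 16])
```

So Shard(1) on the lm head's input is a statement about how the tensor arrives, not about how the matmul itself is partitioned; the maintainers can confirm whether the redistribute happens as described.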
| https://github.com/pytorch/torchtitan/issues/1093 | closed | [] | 2025-04-11T11:20:02Z | 2025-04-11T11:46:04Z | 0 | wimh966 |
pytorch/torchtitan | 1,092 | Step Time Increase Leading to NCCL Timeout with FSDP2 | **Description**
I am encountering an issue when using fsdp2 where step time significantly increases after a certain number of steps, leading to NCCL timeouts. Initially, each step takes around 2 seconds, as shown in the earlier logs. However, after reaching step 1800, most processes experience a noticeable increase in step time except for one. This behavior causes errors such as:
```
[rank3]:[E410 14:46:34.629385703 ProcessGroupNCCL.cpp:684] [Rank 3] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank3]:[E410 14:46:34.629438241 ProcessGroupNCCL.cpp:698] [Rank 3] To avoid data inconsistency, we are taking the entire process down.
[rank3]:[E410 14:46:34.630696293 ProcessGroupNCCL.cpp:1896] [PG ID 0 PG GUID 0(default_pg) Rank 3] Process group watchdog thread terminated with exception: [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=319682, OpType=_ALLGATHER_BASE, NumelIn=65667328, NumelOut=525338624, Timeout(ms)=100000) ran for 138169 milliseconds before timing out.
```
The discrepancy in step time across processes seems to result in the NCCL operations timing out.
**Observations**
At earlier steps (e.g., step 10), the step time is approximately 2.5 seconds across all processes.
By later steps (e.g., step 1800), most processes experience longer step times except for one process, leading to the timeout error.
My training configuration (in TOML) is as follows:
```
[metrics]
log_freq = 1
enable_tensorboard = true
save_tb_folder = "tb"
[optimizer]
name = "AdamW"
lr = 1.5e-4
[training]
batch_size = 1
seq_len = 4096
warmup_steps = 2000
max_norm = 1.0
steps = 15000
data_parallel_replicate_degree = 1
data_parallel_shard_degree = -1
tensor_parallel_degree = 1
compile = false
[experimental]
context_parallel_degree = 1
pipeline_parallel_degree = 1
[checkpoint]
enable_checkpoint = true
folder = "checkpoint"
interval_type = "steps"
interval = 15000
model_weights_only = false
export_dtype = "float32"
async_mode = "disabled"
```
Are there any recommended solutions to solve this?
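One low-effort diagnostic (not a torchtitan-specific fix, since the underlying problem is the step-time imbalance itself; uneven data loading across ranks is a common culprit worth profiling) is to raise the collective timeout so slow ranks get a chance to catch up instead of killing the job. The timeout is set when the process group is initialized; a single-process sketch using gloo so it runs on CPU (real training would use nccl):

```python
import os
from datetime import timedelta
import torch.distributed as dist

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29511")

# Raise the collective timeout (the log above shows Timeout(ms)=100000, i.e. 100 s).
dist.init_process_group("gloo", rank=0, world_size=1,
                        timeout=timedelta(minutes=60))
initialized = dist.is_initialized()
dist.destroy_process_group()
print(initialized)  # True
```

This only buys time for debugging; if one rank really runs ahead of the others indefinitely, the imbalance still has to be found and fixed.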

 | https://github.com/pytorch/torchtitan/issues/1092 | closed | [
"question"
] | 2025-04-11T10:50:55Z | 2025-04-14T05:24:10Z | null | xhwang22 |
pytorch/torchtitan | 1,091 | FSDP2 root level parameter management | Hi,
I am curious about the design decision of managing both token embeddings and the final output layer at the root fsdp level instead of treating them as different layers like other transformer blocks?
This coupled management seems to unshard the final output layer too early and reshard the token embedding too late in forward for example.
Also for the optimization (see [here](https://github.com/pytorch/torchtitan/blob/main/torchtitan/models/llama3/parallelize_llama.py#L369)) that disables `reshard_after_forward` for the last transformer block layer, would it be more appropriate to perform this optimization on the final linear layer instead of the last transformer block?
Thanks! | https://github.com/pytorch/torchtitan/issues/1091 | closed | [
"question",
"module: fsdp"
] | 2025-04-11T01:54:57Z | 2025-07-29T02:40:22Z | null | dingqingy |