| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/trl | 3,109 | where is file https://github.com/huggingface/trl/blob/main/trl/scripts/sft.py | ### Reproduction
```python
from trl import ...
```
outputs:
```
Traceback (most recent call last):
File "example.py", line 42, in <module>
...
```
### System Info
https://github.com/huggingface/trl/blob/main/trl/scripts/sft.py
### Checklist
- [x] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue))
- [x] I have included my system information
- [x] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
- [x] Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
- [x] Any traceback provided is complete | https://github.com/huggingface/trl/issues/3109 | closed | [
"🐛 bug",
"🏋 SFT"
] | 2025-03-19T02:20:26Z | 2025-03-19T02:22:23Z | null | zh794390558 |
pytorch/xla | 8,853 | Have documentation to point to all our environment variables and their meaning | ## 📚 Documentation
Prepare documentation pointing to all our environment variables and their meaning. This work should be a forcing function to (1) make the yaml file up to date, (2) rename it to something like `env_variable_definitions.yaml`, and (3) start a workstream to trim down these env variables to avoid usability pain.
https://github.com/pytorch/xla/blob/master/configuration.yaml
@tengyifei @yaoshiang for viz and support | https://github.com/pytorch/xla/issues/8853 | open | [
"usability",
"documentation"
] | 2025-03-19T00:23:51Z | 2025-03-19T00:26:22Z | 1 | miladm |
pytorch/TensorRT | 3,446 | ValueError: Invalid input type <class 'bool'> encountered when compiling FLUX.1-dev model with Torch-TensorRT | ## ❓ Question
When trying to compile the FLUX.1-dev model using Torch-TensorRT following the official example/blog post, I'm encountering a `ValueError` during the `torch_tensorrt.dynamo.compile()` step. The error suggests there's an issue with input parsing where it's encountering a boolean value that it doesn't know how to handle.
## What you have already tried
I'm following the exact steps from the example provided in the documentation (https://pytorch.org/TensorRT/tutorials/_rendered_examples/dynamo/torch_export_flux_dev.html). I've:
1. Successfully loaded the FLUX.1-dev model
2. Defined the dynamic shapes properly
3. Created dummy inputs with the recommended dimensions
4. Successfully exported the model using `_export`
5. Attempted to compile with Torch-TensorRT using the same parameters shown in the example
The error occurs specifically at the compilation step:
```python
trt_gm = torch_tensorrt.dynamo.compile(
ep,
inputs=dummy_inputs,
enabled_precisions={torch.float32},
truncate_double=True,
min_block_size=1,
use_fp32_acc=True,
use_explicit_typing=True,
)
```
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 2.6.0
- CPU Architecture:
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source):
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version: 3.11.10
- CUDA version: cuda_12.4.r12.4/compiler.34097967_0
- GPU models and configuration: A100
- Any other relevant information:
## Additional context
The error message specifically points to an issue with boolean input types:
```
ValueError: Invalid input type <class 'bool'> encountered in the dynamo_compile input parsing. Allowed input types: {torch_tensorrt.Input, torch.Tensor, list, tuple, dict}
```
It looks like the `return_dict=False` parameter in my dummy inputs is causing the issue since it's a boolean value. The example shows that this should be supported, but the error suggests that booleans aren't handled correctly in the input parsing logic.
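For illustration, the parsing rule quoted in the error can be reproduced with a simplified stand-in checker (hypothetical code, not Torch-TensorRT's actual `prepare_inputs`; the real parser also accepts `torch_tensorrt.Input`, which is omitted here since only the type dispatch matters):

```python
import torch

# Simplified stand-in for the allowed-types rule from the error message;
# the real parser additionally accepts torch_tensorrt.Input.
ALLOWED_TYPES = (torch.Tensor, list, tuple, dict)

def check_input(inp):
    if not isinstance(inp, ALLOWED_TYPES):
        raise ValueError(
            f"Invalid input type {type(inp)} encountered in the dynamo_compile input parsing."
        )
    return inp

check_input(torch.randn(2, 3))   # tensors pass
try:
    check_input(False)           # a bare bool (e.g. return_dict=False) is rejected
except ValueError as e:
    print(e)
```

Since `bool` is not in the allowed set, any flag like `return_dict=False` that ends up in the flat input list trips this check.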
Full traceback:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/workspace/flux-dev-tensorrt.ipynb Cell 4 line 1
----> 1 trt_gm = torch_tensorrt.dynamo.compile(
      2     ep,
      3     inputs=dummy_inputs,
      4     enabled_precisions={torch.float32},
      5     truncate_double=True,
      6     min_block_size=1,
      7     use_fp32_acc=True,
      8     use_explicit_typing=True,
      9 )
File /usr/local/lib/python3.11/dist-packages/torch_tensorrt/dynamo/_compiler.py:606, in compile(exported_program, inputs, arg_inputs, kwarg_inputs, device, disable_tf32, assume_dynamic_shape_support, sparse_weights, enabled_precisions, engine_capability, debug, num_avg_timing_iters, workspace_size, dla_sram_size, dla_local_dram_size, dla_global_dram_size, truncate_double, require_full_compilation, min_block_size, torch_executed_ops, torch_executed_modules, pass_through_build_failures, max_aux_streams, version_compatible, optimization_level, use_python_runtime, use_fast_partitioner, enable_experimental_decompositions, dryrun, hardware_compatible, timing_cache_path, lazy_engine_init, cache_built_engines, reuse_cached_engines, engine_cache_dir, engine_cache_size, custom_engine_cache, use_explicit_typing, use_fp32_acc, refit_identical_engine_weights, strip_engine_weights, immutable_weights, enable_weight_streaming, **kwargs)
603 arg_inputs = [arg_inputs] # type: ignore
605 # Prepare torch_trt inputs
--> 606 trt_arg_inputs: Sequence[Input] = prepare_inputs(arg_inputs)
607 trt_kwarg | https://github.com/pytorch/TensorRT/issues/3446 | open | [
"question"
] | 2025-03-18T21:55:16Z | 2025-03-21T23:57:54Z | null | yachty66 |
huggingface/transformers.js | 1,245 | QuestionAnsweringOutput does not return start/end index | ### Question
Question/Answering pipeline does not seem to return start/end index.
Console output example:
```
{ answer: 'anywhere', score: 0.8719829671013909 }
```
Source code in pipeline.js:
```
class QuestionAnsweringPipeline ...
// TODO add start and end?
// NOTE: HF returns character index
toReturn.push({
    answer, score
});
```
| https://github.com/huggingface/transformers.js/issues/1245 | open | [
"question"
] | 2025-03-18T21:20:25Z | 2025-03-18T21:20:25Z | null | sleep9 |
huggingface/transformers.js | 1,243 | Transformer.js compatibility with Angular17 | ### Question
I want to add Transformers.js to an Angular 17 project. I am getting several errors. Can someone guide me on how to integrate Transformers.js with an Angular project? | https://github.com/huggingface/transformers.js/issues/1243 | open | [
"question"
] | 2025-03-18T16:15:30Z | 2025-03-24T21:27:11Z | null | AnuragPant01 |
huggingface/diffusers | 11,108 | Is there a way to generate a single image using multiple GPUs? | This is related to #2977 and #3392, but I would like to know how to generate a single image using multiple GPUs. If such a method does not exist, I would also like to know if Accelerate's [Memory-efficient pipeline parallelism](https://huggingface.co/docs/accelerate/usage_guides/distributed_inference#memory-efficient-pipeline-parallelism-experimental) can be applied to this. | https://github.com/huggingface/diffusers/issues/11108 | closed | [
"stale"
] | 2025-03-18T13:43:05Z | 2025-05-02T21:00:31Z | 12 | suzukimain |
huggingface/lerobot | 876 | Multiple GPU Training Support | Hi, lerobot team!
Thanks for the great work and organized content.
Are there plans to support PyTorch's Distributed Data Parallel (DDP) training in this framework? | https://github.com/huggingface/lerobot/issues/876 | closed | [
"enhancement",
"question",
"stale"
] | 2025-03-18T12:44:43Z | 2025-10-07T02:26:45Z | null | kingchou007 |
huggingface/open-r1 | 521 | How to use my own dataset in sft? | Could you please provide instructions or a demo on how to use my own dataset (with arbitrary column names) for SFT? | https://github.com/huggingface/open-r1/issues/521 | open | [] | 2025-03-18T11:38:19Z | 2025-03-18T14:21:36Z | null | dongdongzhaoUP |
huggingface/diffusers | 11,103 | Which repo should I use for LTX-Video 0.9.5 diffusers | I see the changes are merged.
I checked the repo and it is empty:
https://huggingface.co/Lightricks/LTX-Video-0.9.5/tree/main
I noticed that in the test pipeline it is
repo = "YiYiXu/ltx-95"
So can I safely assume that the above can be used?
@yiyixuxu | https://github.com/huggingface/diffusers/issues/11103 | closed | [] | 2025-03-18T10:50:41Z | 2025-03-18T11:00:34Z | 2 | nitinmukesh |
huggingface/trl | 3,103 | How are LoRA parameters used in vLLM generation? (_move_model_to_vllm in GRPO trainer) | From the following code, I do not see how the LoRA training parameters are moved to vLLM. How is it guaranteed that generation uses the latest parameters? Can someone help explain?
<img width="1123" alt="Image" src="https://github.com/user-attachments/assets/62cacf0a-0197-4210-b326-c4e24b9b6701" />
I also printed the model loaded by vLLM, and I didn't see LoRA-related parameters either.
<img width="1157" alt="Image" src="https://github.com/user-attachments/assets/8d085743-97b9-4d9e-9c4b-558153a6cb05" />
Moreover, no LoRARequest was seen in the generation calls either:
<img width="1117" alt="Image" src="https://github.com/user-attachments/assets/3193f66f-607d-4b0b-8903-f5f1b45d7adc" />
| https://github.com/huggingface/trl/issues/3103 | closed | [
"❓ question",
"⚡ PEFT"
] | 2025-03-18T09:24:48Z | 2025-03-24T18:32:19Z | null | cuiyuhao1996 |
pytorch/xla | 8,847 | How to compile torch-xla from source? | ## ❓ Questions and Help
I have reviewed the relevant materials on torch-xla but have not found a clear guide on how to compile torch-xla from source. The instructions mentioned on [this page](https://pytorch.org/xla/master/contribute/bazel.html) are somewhat disorganized. Could you provide a detailed compilation process? I need to build it from source to verify my modifications. Thanks
Currently I am using `python setup.py develop` to build from source, but I encounter the following error.
The command is `XLA_CUDA=1 python setup.py install`, and I am using torch-xla v2.5.1.

| https://github.com/pytorch/xla/issues/8847 | open | [
"question",
"build"
] | 2025-03-18T02:31:05Z | 2025-03-24T17:40:13Z | null | south-ocean |
pytorch/xla | 8,846 | Need a documentation page that always hosts the latest stable documentation | ## 📚 Documentation
PyTorch has https://pytorch.org/docs/stable/index.html that always contains the documentation for the latest stable branch.
The same URL variant doesn't work for PyTorch/XLA https://pytorch.org/xla/release/stable/index.html
| https://github.com/pytorch/xla/issues/8846 | open | [
"enhancement",
"documentation"
] | 2025-03-18T00:19:41Z | 2025-05-01T07:46:15Z | 3 | tengyifei |
pytorch/vision | 8,980 | nvjpeg missing from all linux GPU wheel build jobs | Linux CUDA: https://github.com/pytorch/vision/actions/runs/13901104094/job/38892841516?pr=8601
Linux aarch64 CUDA: https://github.com/pytorch/vision/actions/runs/13901104115/job/38892844332?pr=8601
Failing the smoke test part with:
```
+ echo 'pytorch/vision/test/smoke_test.py found'
+ conda run -p /__w/_temp/conda_environment_13901104115 python pytorch/vision/test/smoke_test.py
/__w/_temp/conda_environment_13901104115/lib/python3.9/site-packages/torchvision/io/image.py:14: UserWarning: Failed to load image Python extension: 'libnvjpeg.so.12: cannot open shared object file: No such file or directory'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
``` | https://github.com/pytorch/vision/issues/8980 | closed | [] | 2025-03-17T15:05:04Z | 2025-03-18T11:28:18Z | 1 | NicolasHug |
huggingface/datasets | 7,457 | Document the HF_DATASETS_CACHE env variable | ### Feature request
Hello,
I have a use case where my team is sharing models and dataset in shared directory to avoid duplication.
I noticed that the [cache documentation for datasets](https://huggingface.co/docs/datasets/main/en/cache) only mention the `HF_HOME` environment variable but never the `HF_DATASETS_CACHE`.
It would be nice to add `HF_DATASETS_CACHE` to the datasets documentation if it's an intended feature.
If it's not, I think a deprecation warning would be appreciated.
### Motivation
This variable is fully working and similar to what `HF_HUB_CACHE` does for models, so it's nice to know that this exists. This seems to be a quick change to implement.
### Your contribution
I could contribute since this is only affecting a small portion of the documentation | https://github.com/huggingface/datasets/issues/7457 | closed | [
"enhancement"
] | 2025-03-17T12:24:50Z | 2025-05-06T15:54:39Z | 4 | LSerranoPEReN |
pytorch/pytorch | 149,315 | How to Retain Computational Graph in torch.func.jvp() for Parameter Gradients? | ### 🚀 The feature, motivation and pitch
## Help Needed: Making `torch.func.jvp` Work with `torch.autograd.grad`
Hi all,
Thanks so much for all the functionalities of pytorch! I'm trying to make the following code valid (and efficient):
```python
output_values, output_grads = torch.func.jvp(model, input_value, input_grads)
torch.autograd.grad(output_values, tuple(model.parameters()), grad_outputs=output_grads)
```
One way to phrase it is that we have a function $f: \mathbb{R}^d \times \mathbb{R}^m \to \mathbb{R}^p$. Then, given $(x, t_x) \in \mathbb{R}^{d}\times \mathbb{R}^{d}$, the goal is to compute: $y = f(x,w)$, the tangent vector $t_y = D_1 f(x, w).t_x$ and the gradient $t_w = D_2 f(x, w)^T.t_y$, in order to materialize the mapping: $((x, t_x), w) \to ((y, t_y), t_w)$.
Currently, the code fails because `torch.func.jvp()` does not retain the computational graph of the forward pass, which makes sense for the dual vectors associated with the input. However, I know it is possible, for example, to efficiently decouple the computation of input gradients and weight gradients by selectively extracting parts of the computational graph.
I'd like to do something similar here. My goal is to develop a procedure that achieves this while requiring only a single forward pass (and freeing unnecessary memory).
Would you have any insights on how to implement this efficiently? I believe it's related to [this paper](https://arxiv.org/pdf/2402.14212), which provides a solution in JAX, but I think it should also be possible in PyTorch.
Any guidance or suggestions would be greatly appreciated—thanks in advance for your help!
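Until a single-pass solution exists, the mapping above can at least be computed with two forward passes. A minimal sketch with a toy linear map so the results can be checked by hand; this is not the single-forward-pass solution the issue asks for:

```python
import torch

w = torch.randn(3, 2, requires_grad=True)
x, t_x = torch.randn(5, 3), torch.randn(5, 3)

def f(x_, w_):
    return x_ @ w_  # toy stand-in for the model

# Pass 1: JVP with respect to the input only (w is a captured constant here)
y, t_y = torch.func.jvp(lambda x_: f(x_, w), (x,), (t_x,))

# Pass 2: a regular forward to build a graph for the parameter gradient
y2 = f(x, w)
(t_w,) = torch.autograd.grad(y2, (w,), grad_outputs=t_y)

# For this linear f: t_y = t_x @ w and t_w = x.T @ t_y
```

The cost of the second forward pass is exactly the redundancy the issue wants to eliminate.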
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @zou3519 @Chillee @samdow @kshitij12345 | https://github.com/pytorch/pytorch/issues/149315 | open | [
"module: autograd",
"triaged",
"module: functorch"
] | 2025-03-17T12:10:21Z | 2025-06-24T14:30:39Z | null | edouardoyallon |
huggingface/transformers | 36,762 | When what needs to be loaded is in the cache directory, there is no need to make a request to the remote | ### Feature request
When what needs to be loaded is in the cache directory, there is no need to make a request to the remote.
### Motivation
I noticed that when `AutoTokenizer` loads a file using `from_pretrained`, it first tries to load it from a cached directory when `pretrained_model_name_or_path` is a model_id (such as gpt2).
However, `commit_hash` is `None` by default; e.g., `AutoTokenizer` will call `get_tokenizer_config` to load the configuration file, where the code to get `commit_hash` is: `commit_hash = kwargs.get("_commit_hash", None)`.
Since it is None, the `cached_file` method doesn't know where the corresponding file is actually stored, so it uses the `hf_hub_download` method to request the corresponding `commit_hash` first.
Although this request is very simple and infrequent, **in offline environments (e.g., a company or school intranet that does not allow access to the extranet), it will report an error.**
I know I can copy files from the cache to my project directory, but the host is usually used by multiple people, which means it may have to be copied many times, which defeats the purpose of using a cached directory in the first place.
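As a workaround, recent `transformers`/`huggingface_hub` releases honor offline-mode environment variables that skip the remote request and resolve purely from the local cache. A sketch (the variables must be set before `transformers` is imported):

```python
import os

# Must be set before importing transformers / huggingface_hub
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

# from transformers import AutoTokenizer
# tok = AutoTokenizer.from_pretrained("gpt2")  # now resolves from the cache only
```

This still requires the files to already be present in the cache, which is the scenario described above.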
### Your contribution
**I suggest changing `commit_hash = kwargs.get(“_commit_hash”, None)` to `commit_hash = kwargs.get(“_commit_hash”, “main”)`**. | https://github.com/huggingface/transformers/issues/36762 | closed | [
"Feature request"
] | 2025-03-17T11:20:24Z | 2025-03-19T15:49:04Z | null | JinFish |
huggingface/diffusers | 11,086 | RuntimeError after using apply_group_offloading on diffusers: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same | Can anyone help me?
I used WanX's diffusers and used apply_group_offloading according to url: https://huggingface.co/docs/diffusers/main/en/optimization/memory.
The code is as follows:
```
image_encoder = CLIPVisionModel.from_pretrained(local_model_path, subfolder="image_encoder", torch_dtype=torch.float32)
vae = AutoencoderKLWan.from_pretrained(local_model_path, subfolder="vae", torch_dtype=torch.float32)
scheduler_b = UniPCMultistepScheduler(prediction_type="flow_prediction", use_flow_sigmas=True, flow_shift=5.0)
pipe = WanImageToVideoPipeline.from_pretrained(local_model_path, vae=vae, image_encoder=image_encoder, scheduler=scheduler_b, torch_dtype=torch.bfloat16)
pipe.transformer.enable_group_offload(onload_device=torch.device("cuda"), offload_device=torch.device("cpu"), offload_type="block_level", num_blocks_per_group=1, use_stream=True)
apply_group_offloading(pipe.text_encoder, onload_device=torch.device("cuda"), offload_type="block_level", num_blocks_per_group=1, use_stream=True)
apply_group_offloading(pipe.vae, onload_device=torch.device("cuda"), offload_type="block_level", num_blocks_per_group=1, use_stream=True)
apply_group_offloading(pipe.image_encoder, onload_device=torch.device("cuda"), offload_type="block_level", num_blocks_per_group=1, use_stream=True)
```
Then I printed the device information:
```
Before apply_offload:
text_encoder device: cpu
transformer device: cpu
vae device: cpu
image_encoder device: cpu
start to group_offload_block_1_stream
After apply_offload:
text_encoder device: cpu
transformer device: cpu
vae device: cpu
image_encoder device: cpu
```
Finally, an exception is thrown:
```
  return F.conv3d(
         ^^^^^^^^^
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
```
Does anyone know how to fix this? Thanks a lot. | https://github.com/huggingface/diffusers/issues/11086 | open | [
"stale"
] | 2025-03-17T11:03:48Z | 2025-04-16T15:03:36Z | 5 | tiga-dudu |
huggingface/trl | 3,093 | How to use a custom function as the reward model for PPO training | The new version of TRL's PPOTrainer requires an `nn.Module` as the reward model, but I need a custom function to compute the reward. I tried downgrading TRL to 0.11.4, but the old version does not seem to support PEFT models. I get the following error:
ValueError: model must be a PreTrainedModelWrapper, got <class 'peft.peft_model.PeftModelForCausalLM'> - supported architectures are: (<class 'trl.models.modeling_value_head.AutoModelForCausalLMWithValueHead'>, <class 'trl.models.modeling_value_head.AutoModelForSeq2SeqLMWithValueHead'>)
However, I see the is_peft_model parameter in PPOConfig, but there is no such parameter as peft_config in PPOTrainer
So I am stuck now. Could anyone help me?
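One possible direction for the new-style trainer is wrapping the custom function in a small `nn.Module` adapter. This is only a sketch of the idea (the real `PPOTrainer` expects a model that scores tokenized sequences, so the text-in/score-out interface below is a simplification, and `FunctionRewardModel`/`length_penalty` are made-up names):

```python
import torch
import torch.nn as nn

class FunctionRewardModel(nn.Module):
    """Adapter that makes an arbitrary scoring function look like a module."""
    def __init__(self, score_fn):
        super().__init__()
        self.score_fn = score_fn

    def forward(self, texts):
        # One scalar reward per completion
        return torch.tensor([self.score_fn(t) for t in texts], dtype=torch.float32)

def length_penalty(text):
    # Toy reward: shorter completions score higher
    return 1.0 / (1 + len(text.split()))

reward_model = FunctionRewardModel(length_penalty)
rewards = reward_model(["a b c", "a"])
```

Whether such an adapter satisfies the trainer's expected interface would need to be checked against the TRL version in use.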
| https://github.com/huggingface/trl/issues/3093 | open | [
"❓ question",
"🏋 PPO",
"⚡ PEFT"
] | 2025-03-16T09:02:25Z | 2025-03-20T10:33:02Z | null | JWQZ |
huggingface/ai-deadlines | 19 | How to know the rankings of a conference? | @NielsRogge, may I know where we can get the conference rankings? | https://github.com/huggingface/ai-deadlines/issues/19 | closed | [] | 2025-03-15T18:32:34Z | 2025-03-15T21:45:02Z | null | julurisaichandu |
huggingface/diffusers | 11,063 | prepare_attention_mask - incorrect padding? | ### Describe the bug
I'm experimenting with attention masking in Stable Diffusion (so that padding tokens aren't considered for cross attention), and I found that UNet2DConditionModel doesn't work when given an `attention_mask`.
https://github.com/huggingface/diffusers/blob/8ead643bb786fe6bc80c9a4bd1730372d410a9df/src/diffusers/models/attention_processor.py#L740
For the attn1 blocks (self-attention), the target sequence length is different from the current length (target 4096, but it's only 77 for a typical CLIP output). The padding routine pads by *adding* `target_length` zeros to the end of the last dimension, which results in a sequence length of 4096 + 77, rather than the desired 4096. I think it should be:
```diff
- attention_mask = F.pad(attention_mask, (0, target_length), value=0.0)
+ attention_mask = F.pad(attention_mask, (0, target_length - current_length), value=0.0)
```
`encoder_attention_mask` works fine - it's passed to the attn2 block and no padding ends up being necessary.
It seems that this would additionally fail if current_length were greater than target_length, since you can't pad by a negative amount, but I don't know that that's a practical concern.
(I know that particular masking isn't even semantically valid, but that's orthogonal to this issue!)
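The off-by-`current_length` effect is easy to demonstrate in isolation (numbers taken from the scenario above: a 77-token mask padded toward a 4096-token target):

```python
import torch
import torch.nn.functional as F

current_length, target_length = 77, 4096
attention_mask = torch.ones(1, current_length)

# Current behavior: appends target_length zeros, giving 77 + 4096 = 4173
buggy = F.pad(attention_mask, (0, target_length), value=0.0)

# Proposed fix: pad only by the difference, giving exactly 4096
fixed = F.pad(attention_mask, (0, target_length - current_length), value=0.0)

print(buggy.shape[-1], fixed.shape[-1])
```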
### Reproduction
```python
# given a Stable Diffusion pipeline
# given te_mask = tokenizer_output.attention_mask
pipeline.unet(latent_input, timestep, text_encoder_output, attention_mask=te_mask).sample
```
### System Info
- 🤗 Diffusers version: 0.33.0.dev0
- Platform: Linux-6.8.0-55-generic-x86_64-with-glibc2.39
- Running on Google Colab?: No
- Python version: 3.10.11
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.28.1
- Transformers version: 4.48.3
- Accelerate version: 1.3.0
- PEFT version: not installed
- Bitsandbytes version: 0.45.2
- Safetensors version: 0.5.2
- xFormers version: 0.0.29.post2
- Accelerator: NVIDIA GeForce RTX 3060, 12288 MiB
NVIDIA GeForce RTX 4060 Ti, 16380 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11063 | open | [
"bug",
"stale"
] | 2025-03-14T19:01:01Z | 2025-04-14T15:03:14Z | 2 | cheald |
huggingface/transformers.js | 1,237 | Using pipeline API in Mobile Devices | ### Question
How can I run the pipeline on mobile devices?
Like here:
pipeline('background-removal', 'briaai/RMBG-1.4', { device: "webgpu" })
Or does it depend on the model available?
I can't find documentation about the pipeline API options, like 'device' and other params... | https://github.com/huggingface/transformers.js/issues/1237 | open | [
"question"
] | 2025-03-14T17:55:27Z | 2025-05-11T19:58:39Z | null | LuSrodri |
huggingface/autotrain-advanced | 869 | How to fine-tune a custom model for Ollama? | Probably a stupid question, but I'm trying to upload a .csv dataset and fine-tune an 8B model in Autotrain. But when I add the model name taken from Ollama (e.g. deepseek-r1:8b or DeepSeek-R1-Distill-Llama-8B-NexaQuant) and try to train, I get an error.
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for LLMTrainingParams
token
Input should be a valid string [type=string_type, input_value=<starlette.templating._Te...bject at 0x7f7e9daa3a00>, input_type=_TemplateResponse]
For further information visit https://errors.pydantic.dev/2.10/v/string_type
I'm too stupid to know what's wrong or how to correct it, so any help gratefully received. I can fine-tune with existing models in the drop-down list OK, so the setup seems to be working. | https://github.com/huggingface/autotrain-advanced/issues/869 | closed | [
"stale"
] | 2025-03-14T14:46:23Z | 2025-05-03T15:01:33Z | null | nigelp |
huggingface/diffusers | 11,060 | `prepare_image` in Kandinsky pipelines doesn't support `torch.Tensor` | Hi, I want to report a bug in Kandinsky pipelines.
https://github.com/huggingface/diffusers/blob/2f0f281b0d808c05bc7a974e68d298a006dd120a/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_img2img.py#L413-L420
According to the above contents, elements in `image` can be either `PIL.Image.Image` or `torch.Tensor`.
https://github.com/huggingface/diffusers/blob/2f0f281b0d808c05bc7a974e68d298a006dd120a/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_img2img.py#L98-L104
However, the `prepare_image` function is only for `PIL.Image.Image`, and does not support `torch.Tensor`.
Can you resolve this problem by implementing an image resize function for `torch.Tensor`? | https://github.com/huggingface/diffusers/issues/11060 | closed | [
"good first issue",
"help wanted"
] | 2025-03-14T10:34:30Z | 2025-04-21T18:41:10Z | 1 | dk-hong |
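A tensor branch for the `prepare_image` function discussed above could be sketched with `F.interpolate`. This is an illustrative assumption, not the library's implementation; in particular, the final `* 2 - 1` mirrors the `[-1, 1]` scaling applied to PIL inputs, which should be double-checked against the actual pipeline code:

```python
import torch
import torch.nn.functional as F

def prepare_tensor_image(image: torch.Tensor, w: int, h: int) -> torch.Tensor:
    """Resize a CHW or BCHW tensor in [0, 1] to (h, w) and scale to [-1, 1]."""
    if image.ndim == 3:
        image = image.unsqueeze(0)  # CHW -> 1CHW
    image = F.interpolate(image, size=(h, w), mode="bilinear", align_corners=False)
    return image * 2 - 1

out = prepare_tensor_image(torch.rand(3, 512, 768), w=256, h=128)
# out has shape (1, 3, 128, 256) with values in [-1, 1]
```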
huggingface/Math-Verify | 39 | How to choose ExprExtractionConfig() and LatexExtractionConfig() | Hi. Thanks for your awesome tool.
I want to ask how I should set the configuration when the answer may be either LaTeX or a plain expression. I found that in the case below (without $$ $$), the output is false when the expected result is true.
```python
from math_verify import parse, verify
gold = parse("\\frac{\\sqrt{3}}{3}")
answer = parse("sqrt(3)/3")
# Order here is important!
verify(gold, answer)
``` | https://github.com/huggingface/Math-Verify/issues/39 | closed | [] | 2025-03-13T23:36:27Z | 2025-04-28T20:42:03Z | null | Zhuofeng-Li |
huggingface/diffusers | 11,055 | Training on unconditional image generation creates colorized images | ### Describe the bug
Hi, I'm trying to follow the tutorial from unconditional image generation on my own dataset, and I'm getting weirdly colored images. I originally thought it was due to RGB/BGR channel order, but I've switched it around and got the same result. Do you have any suggestions of how to fix it?
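Besides BGR/RGB channel order, a common cause of this symptom is a value-range mismatch when saving: diffusion samples usually live in [-1, 1] and must be rescaled before being treated as uint8 pixels. A hedged illustration of both hypotheses (not a confirmed diagnosis of this particular training run):

```python
import numpy as np

# Hypothesis 1: channel order. Reversing the last axis converts BGR <-> RGB.
bgr = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
rgb = bgr[..., ::-1]

# Hypothesis 2: value range. A sample in [-1, 1] must be rescaled to [0, 255]
# before saving, or colors come out distorted.
sample = np.random.uniform(-1, 1, (64, 64, 3)).astype(np.float32)
img = ((sample + 1) / 2 * 255).round().clip(0, 255).astype(np.uint8)
```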
### Reproduction
NA
### System Info
NA
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11055 | open | [
"bug",
"stale"
] | 2025-03-13T20:47:22Z | 2025-04-13T15:02:53Z | 1 | esizikova-fda |
huggingface/lerobot | 860 | Modify camera async_read/read API to return a dictionary instead of tuple for better compatibility? | Currently the Intel RealSense camera API supports returning either a single RGB image, or an RGB image and a depth image as a 2-tuple:
https://github.com/huggingface/lerobot/blob/3c0a209f9fac4d2a57617e686a7f2a2309144ba2/lerobot/common/robot_devices/cameras/intelrealsense.py#L440-L443
However, this is not very convenient to work with, since not all cameras return two values (the OpenCV one only does RGB?). For a potentially better API, would it be possible to have the `async_read`/`read` functions always return a dictionary with standard names and data types for the kinds of image data returned?
```
return dict(rgb=..., depth=...)
```
This way it is also easier for me to check whether the returned data has depth data or not. The current solution is a bit complicated, as I need to check if it's the IntelRealSenseCamera and whether its config has use_depth=True or not.
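A sketch of the proposed dict-returning API (hypothetical class and field names; depth is included only when the camera is configured for it, so callers can feature-test with `"depth" in frames` instead of checking the concrete camera class):

```python
import numpy as np

class Camera:
    """Hypothetical camera with the proposed dict-returning read API."""
    def __init__(self, use_depth: bool = False):
        self.use_depth = use_depth

    def read(self) -> dict:
        # Placeholder frames; a real camera would fill these from hardware
        frames = {"rgb": np.zeros((480, 640, 3), dtype=np.uint8)}
        if self.use_depth:
            frames["depth"] = np.zeros((480, 640), dtype=np.uint16)
        return frames

frames = Camera(use_depth=True).read()
has_depth = "depth" in frames  # feature test, no isinstance checks needed
```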
Thanks! | https://github.com/huggingface/lerobot/issues/860 | closed | [
"enhancement",
"question"
] | 2025-03-13T18:44:20Z | 2025-05-26T09:28:48Z | null | StoneT2000 |
huggingface/transformers.js | 1,230 | Using background-removal pipeline produces images with 50% opacity | ### Question
I have a issue using the background-removal pipeline. Some models returns the exacly same image, but 50% opacite (RGBA: [X, Y, Z, 127]). So other models, returns an error like this: Uncaught Error: Unsupported model type: null transformers:1:670067.
How can I procede? | https://github.com/huggingface/transformers.js/issues/1230 | closed | [
"question"
] | 2025-03-13T17:00:13Z | 2025-03-25T22:28:37Z | null | LuSrodri |
huggingface/lerobot | 858 | DATASET conversion from V1.6 to V2.0 ❌❌❌ |
Hi @aliberts @Cadene
Thanks for your amazing work. I have one doubt: I forked the lerobot repo and am training some policies. Now I want to convert from v1.6 to v2.0, but my episodes are in .pth format, not parquet format. I checked the existing issues and didn't find anything. Right now the conversion only accepts parquet format.
image
Can you please help me here
Thanks
### Information
- [x] One of the scripts in the examples/ folder of LeRobot
- [x] My own task or dataset (give details below)
### Reproduction
Tried convert_v1_to_v2.py.
But it expects only parquet, while mine is .pth.
### Expected behavior
 | https://github.com/huggingface/lerobot/issues/858 | closed | [
"question",
"dataset",
"stale"
] | 2025-03-13T15:22:51Z | 2025-10-07T02:26:46Z | null | Kacchan16 |
huggingface/optimum | 2,215 | not able to convert DeepSeek-R1 into Onnx using optimum-cli | ### System Info
```shell
v1.24.0
```
### Who can help?
@michaelbenayoun
I'm trying to convert DeepSeek-R1 into ONNX format, but I'm presented with
> ValueError: Loading deepseek-ai/DeepSeek-R1 requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error.
I'm trying to do this using optimum-cli
`optimum-cli export onnx --model deepseek-ai/DeepSeek-R1 --task causal-lm C:\DeepSeek-R1-Onnx`
Can I somehow enable this using the CLI, or do I have to manually download the model to my system and then run the ONNX export on the local path instead of the repo link?
If so, how can I enable `trust_remote_code=True` once I download the repo?
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
optimum-cli export onnx --model deepseek-ai/DeepSeek-R1 --task causal-lm C:\DeepSeek-R1-Onnx
Running this command doesn't provide an output
### Expected behavior
The conversion should start for DeepSeek-R1 to ONNX | https://github.com/huggingface/optimum/issues/2215 | open | [
"bug"
] | 2025-03-13T07:07:10Z | 2025-05-13T11:13:36Z | 1 | volcano619 |
huggingface/trl | 3,066 | How to switch on the multi-GPU for GRPOTrainer? | Issue:
OOM errors during GRPO training - Need multi-GPU support for combined VRAM
Problem Description:
I'm encountering Out-of-Memory (OOM) errors while using GRPOTrainer to train reasoning capabilities similar to DeepSeek R1.
My Question:
How do I enable multi-GPU support for GRPOTrainer so that it can use the combined VRAM across multiple GPUs (e.g., 40 GB × 8 cards = 320 GB total VRAM)?
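For context, data-parallel training with GRPOTrainer is typically driven by the accelerate launcher, while (in trl at the time of this issue) vLLM generation is pinned to one extra device via `vllm_device`. A launch sketch, assuming an 8-GPU node and a hypothetical script name:

```shell
# 7 processes for training; keep one GPU free for vLLM generation (vllm_device)
accelerate launch --num_processes 7 train_grpo.py
```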
Thank you! | https://github.com/huggingface/trl/issues/3066 | closed | [
"🏋 GRPO"
] | 2025-03-13T05:01:12Z | 2025-04-05T17:04:50Z | null | tjoymeed |
pytorch/pytorch | 149,096 | How to determine which part of torch.compile undergoes recompiling after caching | ### 🐛 Describe the bug
Thanks for the helpful blog: https://dev-discuss.pytorch.org/t/how-to-bring-compile-time-down-to-zero-our-plans-and-direction-may-14th-edition/2089
I am currently caching all 3 stages of the compiler but only seeing ~50% reduction in compile time.
How do I determine which part of the compilation is not being properly cached and is recompiled every time?
P.S. I am interested in finding which part of the process recompiles and any techniques to avoid recompilation not mentioned here: https://pytorch.org/docs/stable/torch.compiler_troubleshooting.html#dealing-with-recompilations
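One way to see which frames recompile and which guards fail is PT2's logging controls; as a configuration fragment (available in recent 2.x releases, script name hypothetical):

```shell
TORCH_LOGS="recompiles,guards" python train.py
```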
### Error logs
_No response_
### Versions
torch 2.5
CUDA 12.4
GPU = A10G
cc @chauhang @penguinwu | https://github.com/pytorch/pytorch/issues/149096 | open | [
"triaged",
"oncall: pt2"
] | 2025-03-13T02:33:58Z | 2025-03-13T06:40:24Z | null | janak2 |
huggingface/agents-course | 314 | [QUESTION] agent.run(stream=True): how to get the final result | ```python
agent = CodeAgent(
    tools=[],
    model=model,
    max_steps=10,
    verbosity_level=2
)
response = agent.run(
    """
    describe image
    """,
    images=image_urls,
    stream=True
)
```
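For reference, `stream=True` generally turns `run` into a generator of intermediate steps, so the final result is the last item it yields. A library-agnostic sketch of draining such a generator (the exact step objects smolagents yields should be checked against its docs):

```python
def final_result(step_generator):
    """Consume a streaming generator and keep only the last yielded item."""
    last = None
    for step in step_generator:
        last = step  # intermediate steps could also be printed/logged here
    return last
```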
How do I print the final result? | https://github.com/huggingface/agents-course/issues/314 | open | [
"question"
] | 2025-03-13T02:32:47Z | 2025-03-13T02:32:47Z | null | via007 |
pytorch/pytorch | 149,094 | How to skip backward specific steps in torch.compile | ### 🐛 Describe the bug
I couldn't find much documentation on how to skip backward-specific steps in torch.compile / AOT Autograd.
Some info would be helpful.
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu | https://github.com/pytorch/pytorch/issues/149094 | open | [
"triaged",
"oncall: pt2"
] | 2025-03-13T02:12:44Z | 2025-03-17T23:55:31Z | null | janak2 |
huggingface/diffusers | 11,046 | flux pipeline inference with controlnet, inpainting, plus ip-adapter | ### Describe the bug
Hi, I would like to use the Flux pipeline, but for now I have GPU-memory issues running the original Flux pipeline.
If I use the NF4 version, how can I set up the inference script with ControlNet, inpainting, and an IP-Adapter?
Should I use FluxControl (depth or canny) plus a mask and an IP-Adapter model, or FluxControl, FluxFill, and an IP-Adapter?
Thanks,
@hlky, @sayakpaul
### Reproduction
```python
import torch
from diffusers import FluxControlInpaintPipeline
from diffusers.models.transformers import FluxTransformer2DModel
from transformers import T5EncoderModel
from diffusers.utils import load_image, make_image_grid
from image_gen_aux import DepthPreprocessor  # https://github.com/huggingface/image_gen_aux
from PIL import Image
import numpy as np

access_token = ""
pipe = FluxControlInpaintPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Depth-dev",
    torch_dtype=torch.bfloat16, token=access_token)

# use following lines if you have GPU constraints
# ---------------------------------------------------------------
transformer = FluxTransformer2DModel.from_pretrained(
    "sayakpaul/FLUX.1-Depth-dev-nf4", subfolder="transformer", torch_dtype=torch.bfloat16
)
text_encoder_2 = T5EncoderModel.from_pretrained(
    "sayakpaul/FLUX.1-Depth-dev-nf4", subfolder="text_encoder_2", torch_dtype=torch.bfloat16
)
pipe.transformer = transformer
pipe.text_encoder_2 = text_encoder_2
pipe.enable_model_cpu_offload()
# ---------------------------------------------------------------
pipe.to("cuda")

prompt = "a blue robot sad expressions"
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")
head_mask = np.zeros_like(image)
head_mask[65:580, 300:642] = 255
mask_image = Image.fromarray(head_mask)

processor = DepthPreprocessor.from_pretrained("LiheYoung/depth-anything-large-hf")
control_image = processor(image)[0].convert("RGB")

output = pipe(
    prompt=prompt,
    image=image,
    control_image=control_image,
    mask_image=mask_image,
    num_inference_steps=30,
    strength=1,
    guidance_scale=10.0,
    generator=torch.Generator().manual_seed(42),
).images[0]
make_image_grid([image, control_image, mask_image, output.resize(image.size)], rows=1, cols=4).save("output.png")
```
How would I change depth to canny and add an IP-Adapter?
### Logs
```shell
```
### System Info
.
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11046 | open | [
"bug",
"stale"
] | 2025-03-12T20:14:01Z | 2025-04-12T15:02:52Z | 1 | john09282922 |
huggingface/lerobot | 854 | How to train diffusion policy in only state space, no images? | I have been having a lot of trouble trying to only train a model on purely a state space task so there are no images involved. I have already looked through every tutorial and most source code files and just can not get this working.
I have a script that creates a LeRobotDataset through human demonstrations. The script is simplified and only contains the relevant information. I simply record 10 demonstrations to create a LeRobotDataset from. There are no images the only observations is a (31, ) numpy float array.
```python
feature_dict = {
"next.reward": {
"dtype": "float",
"shape": (1,),
"names": None,
},
"action": {
"dtype": "float64",
"shape": (5, 1),
"names": None
},
"next.success": {
"dtype": "bool",
"shape": (1,),
"names": None,
},
# "timestamp": {
# "dtype": "float32",
# "shape": (1, ),
# "names": None,
# },
"observation.environment_state": {
"dtype": "float64",
"shape": (31, ),
"names": None
},
}
dataset_le_name = "second_save"
dataset_dir = os.path.join(os.path.dirname(__file__), "./files/", dataset_le_name)
le_dataset = LeRobotDataset.create(
repo_id=dataset_le_name,
fps=500,
root=dataset_dir,
features=feature_dict
)
env.reset()
for _ in range(10):
while True:
step_start = time.time()
obs, reward, terminated, _, _ = env.step(None)
action = teleoperate_command()
frame = {
"action": torch.from_numpy(action),
"next.reward": np.array([reward]),
"next.success": np.array([not terminated]),
#"timestamp": np.array([env.unwrapped.sim_object.data.time], dtype=np.float32).reshape(1,),
"observation.environment_state": obs,
"task": "flick switch"
}
le_dataset.add_frame(frame)
if terminated:
print("Task completed")
break
le_dataset.save_episode()
```
This script works fine and creates the dataset with no errors. But when I try to train a diffusion policy from scratch using the exact example script from https://github.com/huggingface/lerobot/blob/main/examples/3_train_policy.py:
```python
# Create a directory to store the training checkpoint.
output_directory = Path("outputs/train/example_pusht_diffusion")
output_directory.mkdir(parents=True, exist_ok=True)
# # Select your device
device = torch.device("cuda")
# Number of offline training steps (we'll only do offline training for this example.)
# Adjust as you prefer. 5000 steps are needed to get something worth evaluating.
training_steps = 5000
log_freq = 1
# When starting from scratch (i.e. not from a pretrained policy), we need to specify 2 things before
# creating the policy:
# - input/output shapes: to properly size the policy
# - dataset stats: for normalization and denormalization of input/outputs
dataset_le_name = "second_save"
dataset_dir = os.path.join(os.path.dirname(__file__), "./files/imitationDataset", dataset_le_name)
dataset_metadata = LeRobotDatasetMetadata(dataset_le_name, root=dataset_dir)
features = dataset_to_policy_features(dataset_metadata.features)
output_features = {key: ft for key, ft in features.items() if ft.type is FeatureType.ACTION}
input_features = {key: ft for key, ft in features.items() if key not in output_features}
print(input_features)
# Policies are initialized with a configuration class, in this case `DiffusionConfig`. For this example,
# we'll just use the defaults and so no arguments other than input/output features need to be passed.
cfg = DiffusionConfig(input_features=input_features, output_features=output_features)
# We can now instantiate our policy with this config and the dataset stats.
policy = DiffusionPolicy(cfg, dataset_stats=dataset_metadata.stats)
```
I keep getting the error
```
Traceback (most recent call last):
File "path/trainDiffusion.py", line 105, in <module>
main()
File "path/trainDiffusion.py", line 44, in main
policy = DiffusionPolicy(cfg, dataset_stats=dataset_metadata.stats)
File "path/lerobot/lerobot/common/policies/diffusion/modeling_diffusion.py", line 70, in __init__
config.validate_features()
File "pathlerobot/lerobot/common/policies/diffusion/configuration_diffusion.py", line 220, in validate_features
first_image_key, first_image_ft = next(iter(self.image_features.items()))
StopIteration
```
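For reference, the StopIteration above comes from calling next() on an iterator over an empty mapping: with no camera keys in the dataset, there are no image features for validate_features to inspect. A stand-alone illustration of the failure mode and a guard (pure Python, not the actual LeRobot code):

```python
def first_image_feature(image_features):
    """Return the first (key, feature) pair, or None when no cameras exist."""
    try:
        # next(iter(...)) on an empty dict raises StopIteration
        return next(iter(image_features.items()))
    except StopIteration:
        return None  # state-only dataset: nothing image-related to validate
```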
Looking at the source code, it seems the validate_features function always checks for image features, but I just want to train a diffusion policy with no images. How do I do this? | https://github.com/huggingface/lerobot/issues/854 | closed | [
"question",
"policies",
"stale"
] | 2025-03-12T16:01:19Z | 2025-10-26T02:30:57Z | null | Nicholas-Baldassini |
huggingface/diffusers | 11,045 | Crash when loading Flux Schnell 1 model with train_dreambooth_lora_flux | ### Describe the bug
When using the `Diffusers/example/dreambooth/train_dreambooth_lora_flux` script with the Flux Schnell 1 model, the process consistently crashes during the transformer shard loading at 33% (1/3), causing my entire Google JupyterLab kernel to crash.
**Question:** Is this related to using the Flux Schnell model instead of a Dev model? Is there a known incompatibility?
**Logs:**
```
03/12/2025 14:14:26 - INFO - __main__ - Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda
Mixed precision type: bf16
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
You are using a model of type t5 to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
{'use_karras_sigmas', 'shift_terminal', 'use_beta_sigmas', 'time_shift_type', 'invert_sigmas', 'use_exponential_sigmas'} was not found in config. Values will be initialized to default values.
Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards: 50%|████████ | 1/2 [00:13<00:13, 13.01s/it]
Loading checkpoint shards: 100%|████████████████| 2/2 [00:25<00:00, 12.53s/it]
Loading checkpoint shards: 100%|████████████████| 2/2 [00:25<00:00, 12.60s/it]
Instantiating AutoencoderKL model under default dtype torch.float32.
All model checkpoint weights were used when initializing AutoencoderKL.
All the weights of AutoencoderKL were initialized from the model checkpoint at /home/jupyter/flux_model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use AutoencoderKL for predictions without further training.
Instantiating FluxTransformer2DModel model under default dtype torch.float32.
{'out_channels', 'axes_dims_rope'} was not found in config. Values will be initialized to default values.
Loading checkpoint shards: 0%| | 0/3 [00:00<?, ?it/s]
Loading checkpoint shards: 33%|█████▎ | 1/3 [00:26<00:52, 26.10s/it]
```
### Reproduction
```shell
export MODEL_NAME="black-forest-labs/FLUX.1-schnell"
export INSTANCE_DIR="images"
export OUTPUT_DIR="output"

accelerate launch train_dreambooth_flux.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --mixed_precision="bf16" \
  --instance_prompt="a photo of sks dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --guidance_scale=1 \
  --gradient_accumulation_steps=4 \
  --optimizer="prodigy" \
  --learning_rate=1. \
  --report_to="wandb" \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=500 \
  --validation_prompt="A photo of sks dog in a bucket" \
  --validation_epochs=25 \
  --seed="0"
```
### Logs
```shell
```
### System Info
- 🤗 Diffusers version: 0.33.0.dev0
- Platform: Linux-5.10.0-33-cloud-amd64-x86_64-with-glibc2.31
- Running on Google Colab?: No
- Python version: 3.10.16
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.29.3
- Transformers version: 4.49.0
- Accelerate version: 1.4.0
- PEFT version: 0.14.0
- Bitsandbytes version: not installed
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA L4, 23034 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11045 | closed | [
"bug",
"stale"
] | 2025-03-12T15:08:11Z | 2025-05-07T15:18:15Z | 4 | rleygonie |
huggingface/diffusers | 11,043 | When will we be getting Quanto support for Wan 2.1? | The diffusers library for quantizers currently doesn't contain an entry for Quanto:
https://github.com/huggingface/diffusers/tree/main/src/diffusers/quantizers
Isn't this needed to perform requantization on a quantized Transformer for WAN 2.1?
Currently we can't do this due to missing Quanto quantizer after we've quantized and stored a Transformer:
```python
print('Quantize transformer')

class QuantizedWanTransformer3DModel(QuantizedDiffusersModel):
    base_class = WanTransformer3DModel

transformer = QuantizedWanTransformer3DModel.from_pretrained(
    "./wan quantro T2V 14B Diffusers/basemodel/wantransformer3dmodel_qint8"
).to(dtype=dtype)
```
 | https://github.com/huggingface/diffusers/issues/11043 | closed | [] | 2025-03-12T12:43:59Z | 2025-03-23T18:17:53Z | 2 | ukaprch |
huggingface/lerobot | 853 | How to customize adding other robot and manipulator? | Thanks for your great work! I now have a question about how to add a custom robot and manipulator.
I have a bimanual robot with 7-DOF manipulators, powered by servo motors. I want to add it to LeRobot so I can use this fantastic platform to collect data and train, especially with ACT and diffusion policy.
I have the URDF file, and the robot is already set up in ROS MoveIt and Isaac Sim, using RS-485 to drive the real hardware.
I checked the code, and it seems I should create a new YAML file in /configs/robot and some other files for my robot.
Is this simpler than directly collecting data and training with the ACT repository? Is there a tutorial on how a newcomer can add a custom robot?
Thanks a lot !
 | https://github.com/huggingface/lerobot/issues/853 | closed | [
"question",
"robots"
] | 2025-03-12T11:39:19Z | 2025-10-08T20:16:23Z | null | meijie-jesse |
huggingface/smollm | 65 | How to set video size when fine tuning | Hi,
I've tried a bunch of variants but I can't seem to figure out how to set the video size. Currently, I have:
```py
processor.video_size = { "longest_edge": 128 }
processor.do_image_splitting = False
def sample_indices_fn(metadata, num_frames=None, fps=None, **kwargs):
return np.arange(0, 20, dtype=int)
messages = [
{"role": "user", "content": [
{ "type": "video", "path": example["clip_chunked_path"] },
] },
{
"role": "assistant",
"content": [
{"type": "text", "text": json.dumps(last_player_inputs)},
]
}
]
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
sample_indices_fn=sample_indices_fn,
video_load_backend="torchvision",
images_kwargs={ "max_image_size": {"longest_edge": 128 } }
).to(model.device, dtype=model.dtype)
print("FRAMES", inputs["pixel_values"].shape)
```
Which gives me a pixel_values shape of `[1, 20, 3, 128, 128]` (which is what I want), but then training crashes:
```
(RayTrainWorker pid=308152, ip=172.31.24.115) /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:94: operator(): block: [443,0,0], thread: [29,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
(RayTrainWorker pid=308152, ip=172.31.24.115) /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:94: operator(): block: [443,0,0], thread: [30,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
(RayTrainWorker pid=308152, ip=172.31.24.115) /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:94: operator(): block: [443,0,0], thread: [31,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
2025-03-12 04:16:13,286 ERROR tune_controller.py:1331 -- Trial task failed for trial TorchTrainer_4b80b_00000
Traceback (most recent call last):
File "/home/ray/anaconda3/lib/python3.12/site-packages/ray/air/execution/_internal/event_manager.py", line 110, in resolve_future
result = ray.get(future)
^^^^^^^^^^^^^^^
File "/home/ray/anaconda3/lib/python3.12/site-packages/ray/_private/auto_init_hook.py", line 21, in auto_init_wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/ray/anaconda3/lib/python3.12/site-packages/ray/_private/client_mode_hook.py", line 103, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/ray/anaconda3/lib/python3.12/site-packages/ray/_private/worker.py", line 2772, in get
values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ray/anaconda3/lib/python3.12/site-packages/ray/_private/worker.py", line 919, in get_objects
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(RuntimeError): ray::_Inner.train() (pid=308044, ip=172.31.24.115, actor_id=164821b0515a3af42f0d03bc68000000, repr=TorchTrainer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ray/anaconda3/lib/python3.12/site-packages/ray/tune/trainable/trainable.py", line 331, in train
raise skipped from exception_cause(skipped)
File "/home/ray/anaconda3/lib/python3.12/site-packages/ray/train/_internal/utils.py", line 57, in check_for_failure
ray.get(object_ref)
^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ray.exceptions.RayTaskError(RuntimeError): ray::_RayTrainWorker__execute.get_next() (pid=308152, ip=172.31.24.115, actor_id=3794a93b2a61f6b6efb8496d68000000, repr=<ray.train._internal.worker_group.RayTrainWorker object at 0x79e43e8d7890>)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ray/anaconda3/lib/python3.12/site-packages/ray/train/_internal/worker_group.py", line 33, in __execute
raise skipped from exception_cause(skipped)
File "/home/ray/anaconda3/lib/python3.12/site-packages/ray/train/_internal/utils.py", line 176, in discard_return_wrapper
train_func(*args, **kwargs)
File "/tmp/ray/session_2025-03-04_07-50-04_397300_8643/runtime_resources/working_dir_files/_ray_pkg_77cdef2c25570eb4/agent/train_smol.py", line 214, in train_func
trainer.train()
File "/home/ray/anaconda3/lib/python3.12/site-packages/transformers/trainer.py", line 2243, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/home/ray/anaconda3/lib/python3.12/site-packages/transformers/trainer.py", line 2554, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_i | https://github.com/huggingface/smollm/issues/65 | open | [
"Video"
] | 2025-03-12T11:20:28Z | 2025-07-29T13:12:05Z | null | FredrikNoren |
huggingface/accelerate | 3,437 | Need help on how to disable enable_model_cpu_offload / enable_sequential_cpu_offload | So during my testing when used individually, I observed that
enable_sequential_cpu_offload require- 11 GB VRAM
enable_model_cpu_offload require - 8 GB VRAM
I am using Diffusers + nunchaku + sd_embed
Problem: sd_embed does not support enable_sequential_cpu_offload but support enable_model_cpu_offload
Requirement:
1. Form pipe
2. Use sd_embed to generate prompt_embeds using enable_model_cpu_offload
3. Disable enable_model_cpu_offload
4. Enable enable_sequential_cpu_offload and do inference
So I tried the code below. With it:
1. During prompt_embeds generation, VRAM usage is ~6 GB
2. During inference, VRAM usage is ~8 GB
I noticed enable_model_cpu_offload is not disabled after invoking optionally_disable_offloading and then enabling enable_sequential_cpu_offload; the VRAM requirement remains the same as with enable_model_cpu_offload.
Is this something that is doable or not supported? Any guidance is appreciated.
```python
import torch
from diffusers import FluxPipeline
import torch.nn as nn
from accelerate.hooks import CpuOffload, AlignDevicesHook, remove_hook_from_module
from nunchaku import NunchakuFluxTransformer2dModel, NunchakuT5EncoderModel
from sd_embed.embedding_funcs import get_weighted_text_embeddings_flux1
def optionally_disable_offloading(_pipeline):
is_model_cpu_offload = False
is_sequential_cpu_offload = False
if _pipeline is not None:
for _, component in _pipeline.components.items():
if isinstance(component, nn.Module) and hasattr(component, "_hf_hook"):
if not is_model_cpu_offload:
is_model_cpu_offload = isinstance(component._hf_hook, CpuOffload)
if not is_sequential_cpu_offload:
is_sequential_cpu_offload = isinstance(component._hf_hook, AlignDevicesHook)
remove_hook_from_module(component, recurse=True)
return (is_model_cpu_offload, is_sequential_cpu_offload)
transformer = NunchakuFluxTransformer2dModel.from_pretrained("mit-han-lab/svdq-int4-flux.1-schnell")
text_encoder_2 = NunchakuT5EncoderModel.from_pretrained("mit-han-lab/svdq-flux.1-t5")
pipeline = FluxPipeline.from_pretrained(
"black-forest-labs/FLUX.1-schnell",
text_encoder_2=text_encoder_2,
transformer=transformer,
torch_dtype=torch.bfloat16,
)
pipeline.enable_model_cpu_offload()
prompt = """\
A dreamy, soft-focus photograph capturing a romantic Jane Austen movie scene,
in the style of Agnes Cecile. Delicate watercolors, misty background,
Regency-era couple, tender embrace, period clothing, flowing dress, dappled sunlight,
ethereal glow, gentle expressions, intricate lace, muted pastels, serene countryside,
timeless romance, poetic atmosphere, wistful mood, look at camera.
"""
prompt_embeds, pooled_prompt_embeds = get_weighted_text_embeddings_flux1(
pipe = pipeline
, prompt = prompt
)
print(">>>>>>>", optionally_disable_offloading(pipeline))
pipeline.enable_sequential_cpu_offload()
image = pipeline(
prompt_embeds=prompt_embeds,
pooled_prompt_embeds=pooled_prompt_embeds,
num_inference_steps=4,
guidance_scale=3.5,
generator=torch.Generator(device="cpu").manual_seed(123456)
).images[0]
image.save("flux.1-schnell_sd-embed1.png")
prompt = """\
A dreamy, soft-focus photograph capturing a romantic Jane Austen movie scene,
in the style of Agnes Cecile. Delicate watercolors, misty background,
Regency-era couple, tender embrace, period clothing, flowing dress, dappled sunlight,
ethereal glow, gentle expressions, intricate lace, muted pastels, serene countryside,
timeless romance, poetic atmosphere, wistful mood, look at camera.
"""
print(">>>>>>>", optionally_disable_offloading(pipeline))
pipeline.enable_model_cpu_offload()
prompt_embeds, pooled_prompt_embeds = get_weighted_text_embeddings_flux1(
pipe = pipeline
, prompt = prompt
)
print(">>>>>>>", optionally_disable_offloading(pipeline))
pipeline.enable_sequential_cpu_offload()
image = pipeline(
prompt_embeds=prompt_embeds,
pooled_prompt_embeds=pooled_prompt_embeds,
num_inference_steps=4,
guidance_scale=3.5,
generator=torch.Generator(device="cpu").manual_seed(12345678)
).images[0]
image.save("flux.1-schnell_sd-embed2.png")
``` | https://github.com/huggingface/accelerate/issues/3437 | closed | [] | 2025-03-12T09:29:08Z | 2025-03-12T10:10:33Z | null | nitinmukesh |
huggingface/diffusers | 11,042 | ZeroDivisionError when performing forward pass with UNet3DConditionModel | ### Describe the bug
# ZeroDivisionError when performing forward pass with UNet3DConditionModel
I'm encountering a ZeroDivisionError when attempting to perform a forward pass with the UNet3DConditionModel. This seems to be related to the num_attention_heads parameter being None, which causes self.inner_dim to be 0.
Here's the code I'm using:
```python
from diffusers import UNet3DConditionModel
import torch
model = UNet3DConditionModel(
down_block_types=(
"CrossAttnDownBlock3D",
"CrossAttnDownBlock3D",
"CrossAttnDownBlock3D",
"DownBlock3D",
),
up_block_types=(
"UpBlock3D",
"CrossAttnUpBlock3D",
"CrossAttnUpBlock3D",
"CrossAttnUpBlock3D",
),
block_out_channels=(32, 64, 128, 128),
norm_num_groups=4,
)
data = torch.randn(1, 4, 32, 32, 32)
model(data, timestep=3, encoder_hidden_states=torch.zeros(1, 4, 32, 32, 32))
```
The error traceback indicates that the issue occurs in the attention processing:
```
ZeroDivisionError: integer division or modulo by zero
```
This seems to be because num_attention_heads is None, leading to self.inner_dim = 0 in the transformer configuration.
I noticed that in the UNet3DConditionModel implementation, there's a check that raises an error if num_attention_heads is provided:
```python
if num_attention_heads is not None:
raise NotImplementedError(
"At the moment it is not possible to define the number of attention heads via num_attention_heads because of a naming issue as described in https://github.com/huggingface/diffusers/issues/2011#issuecomment-1547958131 . Passing num_attention_heads will only be supported in diffusers v0.19."
)
```
Given this limitation, I'm unsure how to properly configure the model to avoid this error. Could you provide guidance on:
1. How to correctly perform a forward pass with demo hidden states
2. What parameters I should adjust to ensure the model is properly configured
3. If there's a workaround for this issue in the current version of diffusers
Thank you for your assistance!
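For what it's worth, the arithmetic behind the crash: the transformer's inner_dim is head_count times head_dim, and if the head count silently resolves to zero, any later `x % inner_dim` or `x // inner_dim` divides by zero. A stand-alone sketch of deriving a safe head count from the block width (names hypothetical, not the diffusers internals):

```python
def resolve_attention(channels, attention_head_dim, num_attention_heads=None):
    """Derive (heads, inner_dim), failing loudly instead of dividing by zero later."""
    heads = num_attention_heads or channels // attention_head_dim
    inner_dim = heads * attention_head_dim
    if inner_dim == 0:
        raise ValueError("attention inner_dim resolved to 0; check head configuration")
    return heads, inner_dim
```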
### Reproduction
```python
from diffusers import UNet3DConditionModel
import torch
model = UNet3DConditionModel(
down_block_types=(
"CrossAttnDownBlock3D",
"CrossAttnDownBlock3D",
"CrossAttnDownBlock3D",
"DownBlock3D",
),
up_block_types=(
"UpBlock3D",
"CrossAttnUpBlock3D",
"CrossAttnUpBlock3D",
"CrossAttnUpBlock3D",
),
block_out_channels=(32, 64, 128, 128),
norm_num_groups=4,
)
data = torch.randn(1, 4, 32, 32, 32)
model(data, timestep=3, encoder_hidden_states=torch.zeros(1, 4, 32, 32, 32))
```
### Logs
```shell
```
### System Info
Python 3.11.10
diffusers version 0.32.2
ubuntu 24.04
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11042 | closed | [
"bug"
] | 2025-03-12T09:26:01Z | 2025-03-13T02:00:12Z | 2 | txz32102 |
pytorch/executorch | 9,180 | Convert model.safetensors in order to be able to execute it with ExecuTorch: how to prepare the example input and dynamic shape information? | Hi!
I've fine-tuned the BERT model to use it for Named Entity Recognition.
Now I want to convert the resulting model.safetensors so I can execute it with ExecuTorch. Thanks to a kind explanation here: https://dev-discuss.pytorch.org/t/what-is-the-correct-future-proof-way-of-deploying-a-pytorch-python-model-in-c-for-inference/2775/11?u=raphael10-collab ,
I've learned that, to export the torch.nn.Module into an ExportedProgram, I first need to prepare the example input and dynamic shape information.
So... my question is: which dynamic shape information should I use, given that the model.safetensors I produced is just a fine-tuning of the BERT model?
Should I use the shapes from here: https://github.com/google-research/bert/blob/master/modeling.py#L389 (input_ids: int32 Tensor of shape [batch_size, seq_length] containing word ids)?
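As a sketch of the export call (pseudocode against torch.export's public API; the argument name must match how your wrapper module's forward is declared): for BERT the natural dynamic dimension is seq_length on the integer token tensors, with batch_size optionally dynamic as well:

```
import torch
from torch.export import Dim, export

seq_len = Dim("seq_length", min=2, max=512)
example_input = (torch.zeros(1, 16, dtype=torch.long),)   # input_ids: [batch, seq]
exported = export(wrapped_model, example_input,
                  dynamic_shapes={"input_ids": {1: seq_len}})
```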
This is the code I used to fine-tune the BERT model for the NER task:
`BERT-NER.py` :
# https://github.com/tozameerkhan/Fine-Tuning-BERT-for-Named-Entity-Recognition/blob/main/BERTfineTunningFinal.ipynb
# 1. Setup and Installation
import datasets
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from transformers import BertTokenizerFast
from transformers import DataCollatorForTokenClassification
from transformers import TrainingArguments, Trainer, EarlyStoppingCallback
from transformers import logging as hf_logging
from transformers import pipeline
import json
from pprint import pprint
from torchmetrics.text.bert import BERTScore
bertscore = BERTScore()
hf_logging.set_verbosity_info() #to display informational messages.
from transformers import AutoModelForTokenClassification
import warnings
warnings.filterwarnings('ignore')
import matplotlib.pyplot as plt
plt.style.use("fivethirtyeight")
# 2. Data Exploration (EDA)
# Load Dataset
conll2003 = datasets.load_dataset("conll2003", trust_remote_code=True)
conll2003
# Convert to DataFrame
train_df = pd.DataFrame(conll2003['train'])
validation_df = pd.DataFrame(conll2003['validation'])
test_df = pd.DataFrame(conll2003['test'])
# Data Overview
print(train_df.head())
print(f"Number of sentences in the training set: {len(train_df)}")
print(f"Number of sentences in the validation set: {len(validation_df)}")
print(f"Number of sentences in the test set: {len(test_df)}")
label_list = conll2003["train"].features["ner_tags"].feature.names
print(label_list)
# Distribution of Sentence Lengths
train_df['sentence_length'] = train_df['tokens'].apply(len)
plt.figure(figsize=(10, 6))
sns.histplot(train_df['sentence_length'], bins=30, kde=True)
plt.title('Distribution of Sentence Lengths in Training Set')
plt.xlabel('Sentence Length')
plt.ylabel('Frequency')
plt.show()
# Distribution of Named Entity Tags
ner_tags = conll2003['train'].features['ner_tags'].feature.names
tag_counts = [0] * len(ner_tags)
for tags in train_df['ner_tags']:
for tag in tags:
tag_counts[tag] += 1
plt.figure(figsize=(12, 6))
sns.barplot(x=ner_tags, y=tag_counts)
plt.title('Distribution of Named Entity Tags in Training Set')
plt.xlabel('Named Entity Tag')
plt.ylabel('Count')
plt.xticks(rotation=45)
plt.show()
# 3. Data Preparation
# Tokenization and Label Alignment
#load a pre-trained tokenizer.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
example_1 = conll2003['train'][0]
tokenized_input = tokenizer(example_1["tokens"], is_split_into_words=True)
tokens = tokenizer.convert_ids_to_tokens(tokenized_input["input_ids"])
word_ids = tokenized_input.word_ids()
print("word_ids :: ",word_ids)
''' As we can see, it returns a list with the same number of elements as our processed input ids,
mapping special tokens to None and all other tokens to their respective word.'''
print()#Function to tokenize and align labels with respect to the tokens.
def tokenize_and_align_labels(examples, label_all_tokens=True):
tokenized_inputs = tokenizer(examples['tokens'], truncation=True, is_split_into_words=True)
labels = []
for i, label in enumerate(examples['ner_tags']):
word_ids = tokenized_inputs.word_ids(batch_index=i)
previous_word_idx = None
label_ids = []
for word_idx in word_ids:
if word_idx is None:
label_ids.append(-100)
elif word_idx != previous_word_idx:
label_ids.append(label[word_idx])
else:
label_ids.append(label[word_idx] if label_all_tokens else -100)
previous_word_idx = word_idx
| https://github.com/pytorch/executorch/issues/9180 | open | [
"module: user experience"
] | 2025-03-12T09:17:50Z | 2025-12-18T21:55:01Z | null | raphael10-collab |
huggingface/lerobot | 851 | Hello, I would like to ask if I can use my ROS2 MoveIt2 robotic arm? | Can it support ROS training? I believe this would be beneficial for ecosystem development. | https://github.com/huggingface/lerobot/issues/851 | open | [
"question"
] | 2025-03-12T07:39:51Z | 2025-08-04T19:29:03Z | null | Gates-456 |
huggingface/open-r1 | 502 | How to use vllm with 2 GPUs? | Just as GRPO OOM #475 stated, the vLLM KV-cache initialization is so large that one A100 80GB cannot hold it, while I have 8x A100 in total.
However, only one GPU can be assigned to vLLM, per `vllm_device: auto` or `ib/python3.10/site-packages/trl/trainer/grpo_trainer.py`.
How should I solve the issue? Would anybody know?
| https://github.com/huggingface/open-r1/issues/502 | open | [] | 2025-03-12T03:36:18Z | 2025-06-03T11:55:47Z | null | greatxue |
huggingface/diffusers | 11,036 | Why perform the following operations on the latent condition? | In the code: https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan_i2v.py
lines 395-404:
```
latents_mean = (
    torch.tensor(self.vae.config.latents_mean)
    .view(1, self.vae.config.z_dim, 1, 1, 1)
    .to(latents.device, latents.dtype)
)
latents_std = 1.0 / torch.tensor(self.vae.config.latents_std).view(1, self.vae.config.z_dim, 1, 1, 1).to(
    latents.device, latents.dtype
)
latent_condition = (latent_condition - latents_mean) * latents_std
```
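Numerically, the quoted snippet is a per-channel standardization of the latents: subtract the channel mean, then multiply by the reciprocal of the channel std (which is why `latents_std` is stored as `1.0 / std`). A stdlib-only sketch of the same arithmetic on a single channel, with made-up numbers:

```python
# Mirrors (latent_condition - mean) * (1 / std) on one latent channel.
mean, std = 2.0, 4.0
inv_std = 1.0 / std  # the pipeline precomputes the reciprocal
latent_condition = [6.0, 2.0, -2.0]

normalized = [(x - mean) * inv_std for x in latent_condition]
print(normalized)  # [1.0, 0.0, -1.0]
```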
The official inference code of Wan2.1 does not perform similar operations:
https://github.com/Wan-Video/Wan2.1/blob/main/wan/image2video.py#L237 | https://github.com/huggingface/diffusers/issues/11036 | closed | [] | 2025-03-12T02:32:09Z | 2025-03-15T02:40:13Z | 2 | trouble-maker007 |
pytorch/vision | 8,962 | Missing Windows Wheel for torchvision==0.11.2+cu111 | Hello Torchvision team,
We are attempting to install specific versions with CUDA 11.1 using .whl files from [torch_stable.html](https://download.pytorch.org/whl/cu111/torch_stable.html).
However, we can't find the required wheel for torchvision==0.11.2+cu111 for Windows (win_amd64.whl).
Could you provide guidance on how to obtain this package or upload the Windows wheel for torchvision 0.11.2 with CUDA 11.1 support?
Thank you for your assistance. | https://github.com/pytorch/vision/issues/8962 | closed | [] | 2025-03-11T22:04:31Z | 2025-03-28T13:10:02Z | 2 | huang3527 |
huggingface/lerobot | 847 | Is there a way Merge | Convert | Edit datasets function or a way how we can train model using different datasets ? | Hey, everyone.
At the moment, we have this problem: we have recorded datasets with around 100 episodes each, but we would like to train our model with 1000 episodes. Unfortunately, we didn't find a way to load multiple datasets into a single policy training job; is that even possible? If not, is there a way to merge a couple of small datasets into a big one?
If none of that is possible, is there a way to convert to HDF5?
I was referencing https://github.com/huggingface/lerobot/issues/533, but there are no answers there either.
| https://github.com/huggingface/lerobot/issues/847 | closed | [
"question",
"policies",
"dataset"
] | 2025-03-11T17:25:08Z | 2025-10-17T12:09:32Z | null | runmaget |
huggingface/lerobot | 846 | How to convert my own dataset to LerobotDataset format? | Hi, I am new to Lerobot and have a dataset in my own format. I would like to convert it to the LerobotDataset format.
I referred to `lerobot/scripts/push_dataset_to_hub.py`, but it seems to be deprecated. Could you provide guidance or an updated method for converting custom datasets?
Thanks in advance! | https://github.com/huggingface/lerobot/issues/846 | closed | [
"question",
"dataset"
] | 2025-03-11T09:17:23Z | 2025-04-15T00:59:10Z | null | yilin404 |
pytorch/torchtitan | 951 | Nan's on step 1 of 405B model training | Does anyone have any tips on how to debug/prevent NaNs on step 1 during FSDP+TP training of the 405B model on 256 GPUs on the C4 dataset? | https://github.com/pytorch/torchtitan/issues/951 | closed | [] | 2025-03-11T07:00:12Z | 2025-03-28T01:47:28Z | 12 | githubsgi |
huggingface/open-r1 | 498 | How to Enable enforce_eager or Disable CUDA Graph in Evaluation | The evaluation code currently uses lighteval and vLLM for inference, and I would like to disable CUDA Graphs by enabling options like `enforce_eager`. However, I could not find a command-line argument for this in `$MODEL_ARGS`. Additionally, setting it as an environment variable (e.g., `VLLM_ENFORCE_EAGER`) does not seem to work.
Is there a way to achieve this? Any guidance would be appreciated. | https://github.com/huggingface/open-r1/issues/498 | closed | [] | 2025-03-11T00:25:49Z | 2025-03-11T04:54:02Z | null | superdocker |
huggingface/diffusers | 11,020 | Multi-gpus Context Parallel training support? | Nowadays, the number of parameters in video generation models keeps increasing, and videos keep getting longer. When training video models, it is difficult to fit a complete video sequence (~200k tokens) on a single GPU. Sequence-parallel training techniques can solve this problem, such as the [fastvideo](https://github.com/hao-ai-lab/FastVideo) training framework, but that framework's rough edges make it difficult to use. Could the diffusers framework support sequence-parallel training? | https://github.com/huggingface/diffusers/issues/11020 | open | [] | 2025-03-10T11:45:30Z | 2025-07-18T13:05:08Z | 2 | yinian-lw |
huggingface/blog | 2,728 | Open In "02_how_to_generate", code cell 1 has an outdated version of tensorflow | The notebook 02_how_to_generate.ipynb currently specifies tensorflow==2.1, which is no longer available.
If we run that cell, we get the error: Could not find a version that satisfies the requirement tensorflow==2.1 (from versions: 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.12.1, 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0, 2.13.1, 2.14.0rc0, 2.14.0rc1, 2.14.0, 2.14.1, 2.15.0rc0, 2.15.0rc1, 2.15.0, 2.15.0.post1, 2.15.1, 2.16.0rc0, 2.16.1, 2.16.2, 2.17.0rc0, 2.17.0rc1, 2.17.0, 2.17.1, 2.18.0rc0, 2.18.0rc1, 2.18.0rc2, 2.18.0, 2.19.0rc0) ERROR: No matching distribution found for tensorflow==2.1. | https://github.com/huggingface/blog/issues/2728 | open | [] | 2025-03-09T18:05:55Z | 2025-03-09T18:06:11Z | null | Umashankar86 |
huggingface/blog | 2,727 | Open In "02_how_to_generate", code cell 1 has an outdated version of tensorflow | The notebook 02_how_to_generate.ipynb currently specifies tensorflow==2.1, which is no longer available.
If we run that cell, we get the error: Could not find a version that satisfies the requirement tensorflow==2.1 (from versions: 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.12.1, 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0, 2.13.1, 2.14.0rc0, 2.14.0rc1, 2.14.0, 2.14.1, 2.15.0rc0, 2.15.0rc1, 2.15.0, 2.15.0.post1, 2.15.1, 2.16.0rc0, 2.16.1, 2.16.2, 2.17.0rc0, 2.17.0rc1, 2.17.0, 2.17.1, 2.18.0rc0, 2.18.0rc1, 2.18.0rc2, 2.18.0, 2.19.0rc0) ERROR: No matching distribution found for tensorflow==2.1. | https://github.com/huggingface/blog/issues/2727 | closed | [] | 2025-03-09T18:04:48Z | 2025-03-09T18:05:03Z | null | Umashankar86 |
huggingface/datasets | 7,442 | Flexible Loader | ### Feature request
Can we have a utility function that will use `load_from_disk` when given a local path and `load_dataset` when given a Hub dataset name?
It can be something as simple as this one:
```
import os
from datasets import load_dataset, load_from_disk

def load_hf_dataset(path_or_name):
    if os.path.exists(path_or_name):
        return load_from_disk(path_or_name)
    else:
        return load_dataset(path_or_name)
```
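As a sanity check, the dispatch logic can be exercised with stub loaders and a temporary directory. This is a hedged, stdlib-only sketch; the stubs stand in for the real `datasets.load_from_disk` / `datasets.load_dataset` calls:

```python
import os
import tempfile

# Stubs standing in for the real datasets APIs, just to observe which branch runs.
def load_from_disk(path):
    return f"disk:{path}"

def load_dataset(name):
    return f"hub:{name}"

def load_hf_dataset(path_or_name):
    if os.path.exists(path_or_name):
        return load_from_disk(path_or_name)
    else:
        return load_dataset(path_or_name)

with tempfile.TemporaryDirectory() as tmp:
    print(load_hf_dataset(tmp))  # dispatches to load_from_disk
print(load_hf_dataset("user/some-dataset"))  # dispatches to load_dataset
```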
### Motivation
This can be done inside the user codebase, too, but in my experience, it becomes repetitive code.
### Your contribution
I can open a pull request. | https://github.com/huggingface/datasets/issues/7442 | open | [
"enhancement"
] | 2025-03-09T16:55:03Z | 2025-03-27T23:58:17Z | 3 | dipta007 |
huggingface/chat-ui | 1,751 | Analyze uploaded PDF files through OpenAI API | When I upload a PDF file and leverage it, I will get the base64 data. But I didn't find the code to process it in endpoints/openai, while it can handle the image base64 data. Besides, I failed to transfer it back to text. How can I analyze the file through OpenAI API?
 | https://github.com/huggingface/chat-ui/issues/1751 | open | [
"support"
] | 2025-03-09T09:31:13Z | 2025-03-15T18:38:17Z | 2 | zu0feng |
huggingface/hf-hub | 99 | Where is the `0.4.2` commit? | I saw on [crates.io](https://crates.io/crates/hf-hub/versions) that the latest version of hf-hub is 0.4.2, but I can't find the 0.4.2 tag on GitHub. Could you tell me which commit ID corresponds to this version?
I sincerely suggest that you add a corresponding tag for each version release; it would avoid this kind of inefficient communication and make other contributors' work more efficient. 🙏
huggingface/transformers | 36,613 | In "02_how_to_generate", code cell 1 has an error message | ### System Info
In "02_how_to_generate", code cell 1 has an error message but the rest works fine: ERROR: Could not find a version that satisfies the requirement tensorflow==2.1 (from versions: 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.12.1, 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0, 2.13.1, 2.14.0rc0, 2.14.0rc1, 2.14.0, 2.14.1, 2.15.0rc0, 2.15.0rc1, 2.15.0, 2.15.0.post1, 2.15.1, 2.16.0rc0, 2.16.1, 2.16.2, 2.17.0rc0, 2.17.0rc1, 2.17.0, 2.17.1, 2.18.0rc0, 2.18.0rc1, 2.18.0rc2, 2.18.0, 2.19.0rc0) ERROR: No matching distribution found for tensorflow==2.1.
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Run code cell 1
### Expected behavior
No error message should appear when running code cell 1.
"bug"
] | 2025-03-08T07:46:39Z | 2025-04-16T08:03:04Z | null | kvutien |
pytorch/xla | 8,809 | MarkShardingFunction causes OOM when applied to model parameters | When tested in https://github.com/AI-Hypercomputer/torchprime/pull/144/files, sharding parameters with `MarkShardingFunction.apply` causes Mixtral to OOM: gradient HLO arrays end up living much longer than needed.
Shard both activations and model parameters with `MarkShardingFunction`: http://shortn/_vvNPYfxSe3
Shard activation with `MarkShardingFunction` and shard model parameters with `xs.mark_sharding`: http://shortn/_6OxaSdjJzQ
Another clue is that if I change `MarkShardingFunction` to be not in-place, then the OOM goes away:
```
class MarkShardingFunction(torch.autograd.Function):
    """
    Autograd function to mark_sharding on intermediate tensors and the gradient
    of the intermediate tensors during backward pass.

    Usage:
    new_tensor = MarkShardingFunction.apply(tensor, mesh, ('axis_1', 'axis_2'))

    This is required to guide GSPMD sharding propagation better during the
    backward pass as during complicated workloads the compiler can introduce extra
    collectives that can hurt performance.
    """

    @staticmethod
    def forward(
        ctx, torch_tensor: torch.Tensor, mesh: Mesh, partition_spec: tuple
    ) -> torch.Tensor:
        o = mark_sharding(torch_tensor.clone(), mesh, partition_spec)
        ctx.partition_spec = partition_spec
        ctx.mesh = mesh
        return o.global_tensor

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor) -> torch.Tensor:
        partition_spec = ctx.partition_spec
        mesh = ctx.mesh
        o = mark_sharding(grad_output.clone(), mesh, partition_spec)
        return o.global_tensor, None, None
``` | https://github.com/pytorch/xla/issues/8809 | closed | [
"performance"
] | 2025-03-08T06:14:48Z | 2025-03-17T04:03:08Z | 3 | tengyifei |
huggingface/diffusers | 11,008 | Support wan2.1 video model? | ### Did you like the remote VAE solution?
Yes.
### What can be improved about the current solution?
Wan2.1 video model support is appreciated!
### What other VAEs you would like to see if the pilot goes well?
Wan2.1 video model support is appreciated!
### Notify the members of the team
@hlky @sayakpaul | https://github.com/huggingface/diffusers/issues/11008 | open | [
"stale"
] | 2025-03-08T04:21:33Z | 2025-05-09T15:03:47Z | 6 | kexul |
huggingface/trl | 3,028 | Distill teacher models where the vocab size of teacher and student is different | I am trying to distill a Qwen2.5-7B-Instruct to Qwen2.5-5B-Instruct using a sample code
```python
from datasets import Dataset
from trl import GKDConfig, GKDTrainer
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
)

NUM_DUMMY_SAMPLES = 100

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
teacher_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

train_dataset = Dataset.from_dict(
    {
        "messages": [
            [
                {"role": "user", "content": "Hi, how are you?"},
                {"role": "assistant", "content": "I'm great thanks"},
            ]
        ]
        * NUM_DUMMY_SAMPLES
    }
)
eval_dataset = Dataset.from_dict(
    {
        "messages": [
            [
                {"role": "user", "content": "What colour is the sky?"},
                {"role": "assistant", "content": "The sky is blue"},
            ]
        ]
        * NUM_DUMMY_SAMPLES
    }
)

training_args = GKDConfig(output_dir="gkd-model", per_device_train_batch_size=1)
trainer = GKDTrainer(
    model=model,
    teacher_model=teacher_model,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
```
But this gives me an error because their vocab sizes are different (and possibly their tokenizers as well). Is there a workaround for this kind of situation? How are such cases handled? | https://github.com/huggingface/trl/issues/3028 | open | [
"🏋 GKD"
] | 2025-03-08T00:29:01Z | 2025-10-29T04:15:50Z | null | shaunakjoshi12 |
huggingface/diffusers | 11,005 | pipeline_wan_i2v.py: minor discrepancy between arg default and docstring | ### Describe the bug
https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan_i2v.py
Line 447 (arg default):
```output_type: Optional[str] = "np",```
Line 496 (docstring):
```output_type (`str`, *optional*, defaults to `"pil"`):```
### Reproduction
n/a
### Logs
```shell
```
### System Info
n/a
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11005 | closed | [
"bug",
"good first issue",
"help wanted",
"contributions-welcome"
] | 2025-03-07T16:37:48Z | 2025-04-24T18:49:38Z | 2 | rolux |
huggingface/finetrainers | 301 | How to train text-to-video generation model on different generation models using Disney dataset? | The current repository does not explicitly describe ho to change training methods between t2v or i2v.
| https://github.com/huggingface/finetrainers/issues/301 | closed | [] | 2025-03-07T16:02:42Z | 2025-03-07T16:08:06Z | null | kjosh925 |
huggingface/speech-to-speech | 159 | What is from df.enhance import enhance, init_df ? in vad_handler? | https://github.com/huggingface/speech-to-speech/issues/159 | open | [] | 2025-03-07T15:07:53Z | 2025-03-07T15:07:53Z | null | Manukrishna2K | |
huggingface/diffusers | 11,002 | Any chance class members like self._interrupt could be defined in __init__ across pipelines? | ### Describe the bug
I think there is no benefit to late initialization here, and it puts a burden on the library user that could easily be avoided. It also leads to some confusion, since the pattern is uncommon and code inspection flags it. Let me know if I'm missing something.
### Reproduction
```
class WanImageToVideoPipeline:
    def __init__(self):
        pass

    def __call__(self, *args, **kwargs):
        self._interrupt = False
        return 23

    @property
    def interrupt(self):
        return self._interrupt


pipe = WanImageToVideoPipeline()


def on_async_user_abort_call_me_any_time():
    # check if already interrupted but mid step
    print(pipe.interrupt)


on_async_user_abort_call_me_any_time()
```
### Logs
```shell
AttributeError: 'WanImageToVideoPipeline' object has no attribute '_interrupt'. Did you mean: 'interrupt'?
```
### System Info
Diffusers 0.33.0.dev0, Linux, Python 3.10
### Who can help?
@yiyixuxu @DN6 | https://github.com/huggingface/diffusers/issues/11002 | open | [
"bug",
"help wanted",
"contributions-welcome"
] | 2025-03-07T11:28:27Z | 2025-05-26T07:21:47Z | 9 | spezialspezial |
pytorch/ao | 1,850 | What the dtype of input in Float8Linear backward? | In Float8Linear's forward pass, the input is saved in high precision:
<img width="605" alt="Image" src="https://github.com/user-attachments/assets/b2f4fdff-79e6-4274-8e68-9bf7947f5003" />
Why not save the input in float8? I don't know if I understand this correctly. | https://github.com/pytorch/ao/issues/1850 | closed | [
"question"
] | 2025-03-07T07:33:01Z | 2025-03-10T16:28:11Z | null | yh8899 |
pytorch/pytorch | 148,747 | How can I use inductor aot_compile to support a MoE network? | ### 🚀 The feature, motivation and pitch
DeepSeek has sparked a wave of enthusiasm for MoE (Mixture of Experts) network architectures, and I am often asked how to accelerate inference for an MoE network. Naturally, I thought of using Inductor's aot_compile to compile the model into a dynamic library and then call it from C++ for acceleration.
Unfortunately, the expert-selection process in an MoE differs from that of a typical dense network. That part of the code is closer to plain Python control flow than to traceable PyTorch ops, and cannot be traced. Below is a simple demo I wrote. I would like to know whether the Inductor developers have any plans to support MoE networks.
```Python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Expert(nn.Module):
    def __init__(self, input_dim, output_dim):
        super(Expert, self).__init__()
        self.linear = nn.Linear(input_dim, output_dim)

    def forward(self, x):
        return self.linear(x)


class MoE(nn.Module):
    def __init__(self, input_dim, output_dim, num_experts=10, top_k=2):
        super(MoE, self).__init__()
        # Eight experts for gating
        self.other_experts = nn.ModuleList([Expert(input_dim, output_dim) for _ in range(num_experts - 2)])
        # Gate network to choose top_k experts
        self.gate = nn.Linear(input_dim, num_experts - 2)
        # Final output layer
        self.final_linear = nn.Linear((top_k) * output_dim, output_dim)

    def forward(self, x):
        # Compute gating scores
        gate_scores = self.gate(x)
        topk_scores, topk_indices = torch.topk(gate_scores, 2, dim=-1)
        # Collect outputs from selected experts based on gating
        selected_expert_outputs = torch.stack(
            [torch.stack([self.other_experts[i](x[idx]) for i in topk_indice], dim=0) for idx, topk_indice in enumerate(topk_indices)], dim=0
        )
        # Flatten and pass through final linear layer
        all_expert_outputs = selected_expert_outputs.view(x.size(0), -1)
        output = self.final_linear(all_expert_outputs)
        return output


if __name__ == "__main__":
    # Example usage
    input_dim = 128
    output_dim = 64
    moe = MoE(input_dim, output_dim)
    x = torch.randn(32, input_dim)  # Batch size of 32
    output = moe(x)
    print(output.shape)  # Expected output shape: [32, 64]

    export_model = torch.export.export(
        mod=moe,
        args=tuple([torch.randn(32, input_dim)]),
        dynamic_shapes={"x": {0: torch.export.Dim("batch", min=1, max=1024)}},
    )
```
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi | https://github.com/pytorch/pytorch/issues/148747 | closed | [
"oncall: pt2",
"export-triage-review",
"oncall: export",
"module: aotinductor"
] | 2025-03-07T07:04:07Z | 2025-05-24T02:21:21Z | null | sujuyu |
pytorch/pytorch | 148,713 | [torch.export] How to export with the model having *args and **kwargs as forward signature? | This is the original model code:
```python
from diffusers.models import AutoencoderKL
import torch
model_name = "black-forest-labs/FLUX.1-dev"
hf_safetensor = True
model_opts = {'torch_dtype': torch.float16}
model = AutoencoderKL.from_pretrained(model_name, subfolder="vae", use_safetensors=hf_safetensor, force_download=True, **model_opts).to("cpu")
model.forward = model.decode # This turns model forward signature to *args and **kwargs
inputs = torch.randn(1, 16, 128, 128, dtype=torch.float32, device="cpu")
B, H, W = torch.export.Dim("B"), torch.export.Dim("H"), torch.export.Dim("W")
dynamic_shapes = ({0: B, 2: H, 3: W},)

torch.export.export(
    model,
    (inputs,),
    dynamic_shapes=dynamic_shapes,
    strict=False
)
```
No matter what data structure I convert inputs or dynamic_shapes to, there is a mismatch.
A simple (if somewhat contrived) example looks like this:
```python
import torch
import torch.nn as nn
import torch.onnx


class AddModel(nn.Module):
    def __init__(self):
        super(AddModel, self).__init__()

    def forward(self, x):
        return torch.sigmoid(x)


class WrappedModel(nn.Module):
    def __init__(self, model):
        super(WrappedModel, self).__init__()
        self.model = model

    def forward(self, *arga, **kwargs):
        return self.model(*arga, **kwargs)


# Instantiate the model
model = WrappedModel(AddModel())
# Set the model to evaluation mode
model.eval()
# Create dynamic input tensors
x = torch.randn(2, 3)
# Define dynamic axes for ONNX export
dynamic_shapes = ({0: torch.export.Dim.AUTO, 1: torch.export.Dim.AUTO},)

torch.export.export(
    model,
    (x,),
    dynamic_shapes=dynamic_shapes,
    strict=False
)
```
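For what it's worth, part of the difficulty seems to be that `torch.export` pairs `dynamic_shapes` entries against named parameters in the forward signature, and a `*args/**kwargs` forward exposes none. A quick stdlib check makes this visible (a hedged sketch, not a statement about export internals; a wrapper whose `forward` names its inputs explicitly, e.g. `def forward(self, x)`, usually sidesteps the mismatch):

```python
import inspect

def variadic_forward(self, *args, **kwargs):  # like model.forward = model.decode wrapping
    pass

def explicit_forward(self, x):                # a wrapper with a named input
    pass

def named_params(fn):
    # Positional parameters that shapes could be matched against by name.
    return [
        name
        for name, p in inspect.signature(fn).parameters.items()
        if p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD) and name != "self"
    ]

print(named_params(variadic_forward))  # [] -- nothing to pair dynamic_shapes with
print(named_params(explicit_forward))  # ['x']
```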
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | https://github.com/pytorch/pytorch/issues/148713 | closed | [
"oncall: pt2",
"oncall: export"
] | 2025-03-06T23:01:17Z | 2025-03-07T01:47:05Z | null | titaiwangms |
huggingface/diffusers | 10,993 | f-divergence | Is there a plan to implement the f-divergence scheduler? I would like to contribute it to the library. | https://github.com/huggingface/diffusers/issues/10993 | open | [
"stale"
] | 2025-03-06T22:46:13Z | 2025-04-06T15:02:55Z | 5 | manmeet3591 |
huggingface/smolagents | 902 | How to populate custom variables in prompt template? | I'm trying to configure custom template variables in my system prompt.
**Current Implementation:**
1. I have a system prompt template with custom variables:
```python
CUSTOM_CODE_SYSTEM_PROMPT = """You are {{ bot_name }}, a customer support assistant...

{{ formatting_guidelines }}"""
```
2. Agent creation and configuration:
```python
from smolagents import CodeAgent, LiteLLMModel

def get_agent(platform: str = "whatsapp", variables: dict = None):
    manager_agent = CodeAgent(
        tools=[ClinicKnowledgeTool()],
        model=model,
        max_steps=3,
    )
    return manager_agent
```
3. Calling the agent:
```python
agent = get_agent(
    platform=platform,
    variables={
        "conversation_history": conversation_history,
        "formatting_guidelines": "test",
    },
)
agent.prompt_templates["system_prompt"] = CUSTOM_CODE_SYSTEM_PROMPT
```
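A minimal, self-contained sketch of populating the `{{ ... }}` placeholders before assigning the template. The `render` helper below is hypothetical and stdlib-only; smolagents' own prompt templates are Jinja2, so `jinja2.Template(CUSTOM_CODE_SYSTEM_PROMPT).render(**variables)` would be the closer real-world equivalent:

```python
import re

def render(template, variables):
    # Hypothetical stand-in for Jinja2 rendering: replace each {{ name }}
    # placeholder with the matching value from `variables`; leave unknown
    # placeholders untouched.
    def substitute(match):
        name = match.group(1)
        return str(variables.get(name, match.group(0)))
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)

template = "You are {{ bot_name }}, a customer support assistant...\n\n{{ formatting_guidelines }}"
rendered = render(template, {"bot_name": "ClinicBot", "formatting_guidelines": "Keep answers short."})
print(rendered)
```

With something like this, the system prompt could be re-rendered per request (e.g. with fresh `conversation_history`) and the result assigned to `agent.prompt_templates["system_prompt"]`.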
**Questions:**
1. What's the correct way to populate template variables like `{{ bot_name }}` and `{{ formatting_guidelines }}` in the system prompt?
2. How do I handle dynamic variables like `conversation_history` that change with each request?
**Environment:**
- smolagents v1.10.0
- Python 3.10+
- FastAPI integration | https://github.com/huggingface/smolagents/issues/902 | closed | [] | 2025-03-06T20:45:51Z | 2025-03-07T08:54:22Z | null | Luisotee |
huggingface/agents-course | 295 | [QUESTION] Ambiguity what chat templates are. | Issue:
Where ➡ https://huggingface.co/learn/agents-course/unit1/messages-and-special-tokens
> This is where chat templates come in. They act as the bridge between conversational messages (user and assistant turns) and the specific formatting requirements of your chosen LLM. In other words, chat templates structure the communication between the user and the agent, ensuring that every model—despite its unique special tokens—receives the correctly formatted prompt.
In my opinion, the first sentence about chat templates is correct. The second part seems wrong.
It says `...chat templates structure the communication between the user and the agent...`.
Correct Sentence:
`...chat templates structure the communication between the agents and the language model or LLM...`.
Reason:
The Chat templates are implemented inside the agents with respective `chat.completion` method to send the user's request, through agents, to the LLMs.
The user just types into the chatbox as similar to how we type messages. The text-flow is as below in it's simplest form is as below:
User's message >> Chat Templates wraps the message as per LLM's specs >> send to LLMs through agents.
So the `the user and the agent` part doesn't seem very right to me. I did give my best alternative, I could thought of. I okay with anything else you come up with. | https://github.com/huggingface/agents-course/issues/295 | open | [
"question"
] | 2025-03-06T17:12:41Z | 2025-03-06T17:12:41Z | null | MekongDelta-mind |
huggingface/open-r1 | 483 | How to calculate total optimization steps | I ran it on 8 GPUs and set num_generations to 8 with num_processes=7. Why is Total optimization steps = 196? Shouldn't it be Num examples / Total train batch size? It seems that multiplying by num_generations yields 196. Why do we need to multiply by num_generations?
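The numbers in the log below work out if each prompt is expanded into `num_generations` completions before batching (a hedged reading of the log, not a statement about trl internals):

```python
# Reconstructing "Total optimization steps = 196" from the logged quantities.
num_examples = 5498    # Num examples
num_generations = 8    # completions sampled per prompt
per_device_batch = 8   # Instantaneous batch size per device
num_processes = 7      # training GPUs (out of 8; one presumably reserved for vLLM)
grad_accum = 4         # Gradient Accumulation steps

total_batch = per_device_batch * num_processes * grad_accum
print(total_batch)  # 224, matching "Total train batch size"

steps = num_examples * num_generations // total_batch
print(steps)  # 196
```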
[INFO|trainer.py:2405] 2025-03-06 12:04:09,913 >> ***** Running training *****
[INFO|trainer.py:2406] 2025-03-06 12:04:09,913 >> Num examples = 5,498
[INFO|trainer.py:2407] 2025-03-06 12:04:09,914 >> Num Epochs = 1
[INFO|trainer.py:2408] 2025-03-06 12:04:09,914 >> Instantaneous batch size per device = 8
[INFO|trainer.py:2411] 2025-03-06 12:04:09,914 >> Total train batch size (w. parallel, distributed & accumulation) = 224
[INFO|trainer.py:2412] 2025-03-06 12:04:09,914 >> Gradient Accumulation steps = 4
[INFO|trainer.py:2413] 2025-03-06 12:04:09,914 >> Total optimization steps = 196
[INFO|trainer.py:2414] 2025-03-06 12:04:09,915 >> Number of trainable parameters = 7,615,616,512 | https://github.com/huggingface/open-r1/issues/483 | open | [] | 2025-03-06T09:47:19Z | 2025-03-13T08:45:23Z | null | HelloWorld506 |
huggingface/transformers.js | 1,221 | How to use Xenova/deplot using the transformers.js library. | ### Question
Currently I'm doing:
```
this.pipeline = await pipeline("image-text-to-text", "Xenova/deplot", {
  progress_callback: (progress) => {
    this.updateProgress({
      status: `Loading model: ${progress.status}`,
      progress: 0.1 + (progress.progress * 0.9)
    });
  },
  device: "cpu",
  dtype: dtype,
});
```
I get the following error:
```
Error: Unsupported pipeline: image-text-to-text. Must be one of [text-classification,token-classification,question-answering,fill-mask,summarization,translation,text2text-generation,text-generation,zero-shot-classification,audio-classification,zero-shot-audio-classification,automatic-speech-recognition,text-to-audio,image-to-text,image-classification,image-segmentation,zero-shot-image-classification,object-detection,zero-shot-object-detection,document-question-answering,image-to-image,depth-estimation,feature-extraction,image-feature-extraction]
``` | https://github.com/huggingface/transformers.js/issues/1221 | open | [
"question"
] | 2025-03-06T07:56:07Z | 2025-03-06T11:36:19Z | null | aadya940 |
huggingface/peft | 2,410 | running forward loop using get_peft_model disables requires_grad on output | Hi,
I would like to report a recent issue I have been facing, though I am not sure whether it is a bug or I am doing something wrong. The steps to reproduce it are easy. The issue happens when I try to convert the **Qwen2-VL-2B-Instruct** model into a PEFT model using the `get_peft_model` method. Simply load the model using the sample code in https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct and try to convert it to a PEFT model using a typical **8bit** LoraConfig with just sample `target_modules=["q_proj", "v_proj"]`. Then run a forward call on the model using a dummy input, such as `input_ids = torch.zeros((4, 1247)).to(device)`. When I inspect `requires_grad` on the `logits` attribute of the output, it is False, meaning that I cannot run backward from that output. This issue has been puzzling me for a while. I would appreciate it if you could help me with a solution or advise how to address it properly.
| https://github.com/huggingface/peft/issues/2410 | closed | [] | 2025-03-06T05:12:42Z | 2025-04-13T15:03:40Z | 4 | Hamidreza3252 |
pytorch/pytorch | 148,634 | README doesn't explain how to run tests in the "Test PyTorch" section | ### 📚 The doc issue
The README needs a "Test PyTorch" section after the [Install PyTorch](https://github.com/pytorch/pytorch#install-pytorch) section.
Testing is the next step after building PyTorch.
### Suggest a potential alternative/fix
_No response_ | https://github.com/pytorch/pytorch/issues/148634 | closed | [] | 2025-03-06T04:32:44Z | 2025-03-06T17:58:19Z | null | yurivict |
huggingface/lerobot | 826 | Should the pi0 pytorch model on Huggingface load model.safetensors or the other three satetensors? | https://huggingface.co/lerobot/pi0/tree/main
What is the difference between `model.safetensors` and the other three safetensors files (`model-00001-of-0000*.safetensors`)? The pi0 model's `from_pretrained()` method will load `model.safetensors` by default instead of `model-00001-of-0000*.safetensors`.
| https://github.com/huggingface/lerobot/issues/826 | closed | [
"question",
"stale"
] | 2025-03-06T03:12:05Z | 2025-10-08T08:42:49Z | null | chopinxxxx |
huggingface/agents-course | 290 | [QUESTION] First Agent code does not produce any output | I cloned and tried running the first agent app.py. I wanted to try the image generation tool. the application built and ran but when I tried typing something in the chat such as "generate an image of a cat", there is no response from the bot. it stays blank
| https://github.com/huggingface/agents-course/issues/290 | open | [
"question"
] | 2025-03-05T23:49:06Z | 2025-03-18T14:45:44Z | null | Sabk0926 |
pytorch/xla | 8,799 | Re-enable CPU test `test/test_python_ops.py -k TestPythonOps` for `uint8` dtype | To unblock bumping libtpu pin, we have to disable this test: https://github.com/pytorch/xla/pull/8788/files
This test fails with an LLVM memory allocation error on the CPU.
We should report this bug upstream and re-enable it after a fix is there.
Failed run: https://github.com/pytorch/xla/actions/runs/13668949609/job/38217578967?pr=8788
Error:
```
E0000 00:00:1741156332.106836 21429 execution_engine.cc:53] LLVM compilation error: Cannot allocate memory
./test/run_tests.sh: line 51: 21120 Segmentation fault (core dumped) python3 "$@"
```
| https://github.com/pytorch/xla/issues/8799 | closed | [
"bug",
"libtpu"
] | 2025-03-05T19:40:50Z | 2025-05-05T00:25:18Z | 0 | tengyifei |
huggingface/accelerate | 3,421 | How to sync distribute model paramaters when training with continual learning fashion? | When performing distributed continual learning tasks, it is common to expand model parameters as tasks increase. For example, I have defined an `expand_classifier()` method with random initialization to increase the parameters of the classifier.
How can I ensure that the newly added parameters are initialized the same on each GPU model?
If I do:
```
if self.accelerator.is_main_process:
    self.model.module.prompt.expand_classifier()
```
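One hedged approach is to make the random initialization deterministic on every rank: seed each process with the same value before expanding, so the new parameters come out identical everywhere. A stdlib-only toy of that idea is below (the `expand_classifier` function here is a hypothetical stand-in; real code would seed `torch` the same way, and broadcasting the new parameters from rank 0 with `torch.distributed.broadcast` is the more robust alternative):

```python
import random

def expand_classifier(seed, num_new_weights=4):
    # Toy stand-in for randomly initialized new classifier parameters.
    rng = random.Random(seed)
    return [rng.uniform(-0.1, 0.1) for _ in range(num_new_weights)]

# Every "rank" uses the same seed, so the new parameters match exactly.
rank0_weights = expand_classifier(seed=1234)
rank1_weights = expand_classifier(seed=1234)
print(rank0_weights == rank1_weights)  # True
```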
How can I sync the classifier across all the distributed model replicas? | https://github.com/huggingface/accelerate/issues/3421 | closed | [] | 2025-03-05T13:44:15Z | 2025-04-13T15:06:22Z | null | Iranb |
pytorch/xla | 8,792 | Generating stablehlo.composite and running it through PJRT | ## ❓ Questions and Help
Following the example from the [docs](https://pytorch.org/xla/release/r2.6/features/stablehlo.html#preserving-high-level-pytorch-operations-in-stablehlo-by-generating-stablehlo-composite), I tried to use `StableHLOCompositeBuilder` to generate a `stablehlo.composite` op with the difference that I want to actually run it through PJRT instead of exporting it.
Is there a way of doing this currently or are there any future plans regarding it?
This is my example code:
```python
import os
os.environ['XLA_STABLEHLO_COMPILE'] = '1'

import torch
import torch.nn.functional as F
from torch_xla import stablehlo
from torch_xla.experimental.mark_pattern_utils import StableHLOCompositeBuilder


class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.q_proj = torch.nn.Linear(128, 128, bias=False)
        self.k_proj = torch.nn.Linear(128, 128, bias=False)
        self.v_proj = torch.nn.Linear(128, 128, bias=False)
        self.b = StableHLOCompositeBuilder("test.sdpa", {"scale": 0.25, "other_attr": "val"})

    def forward(self, x):
        q = self.q_proj(x)
        k = self.k_proj(x)
        v = self.v_proj(x)
        q, k, v = self.b.mark_inputs(q, k, v)
        attn_out = F.scaled_dot_product_attention(q, k, v, scale=0.25)
        attn_out = self.b.mark_outputs(attn_out)
        attn_out = attn_out + x
        return attn_out


device = "xla"
input_args = torch.randn((10, 8, 128)).to(device)
model = M().to(device)
out = model(input_args)
print(out)
```
```
WARNING:root:Found CUDA without GPU_NUM_DEVICES. Defaulting to PJRT_DEVICE=CUDA with GPU_NUM_DEVICES=1
loc("select.69"): error: 'stablehlo.select' op using value defined outside the region
...
RuntimeError: torch_xla/csrc/runtime/stablehlo_helper.cc:109 : Check failed: status.ok()
*** Begin stack trace ***
tsl::CurrentStackTrace()
torch_xla::ConvertHloToStableHlo(xla::HloModuleProto const*, mlir::ModuleOp*)
torch_xla::runtime::PjRtComputationClient::Compile(std::vector<torch_xla::runtime::ComputationClient::CompileInstance, std::allocator<torch_xla::runtime::ComputationClient::CompileInstance> >)
torch_xla::XLAGraphExecutor::Compile(std::vector<c10::intrusive_ptr<torch_xla::XLATensor, c10::detail::intrusive_target_default_null_type<torch_xla::XLATensor> >, std::allocator<c10::intrusive_ptr<torch_xla::XLATensor, c10::detail::intrusive_target_default_null_type<torch_xla::XLATensor> > > >&, absl::lts_20230802::Span<std::string const>, torch::lazy::LazyGraphExecutor::SyncTensorCollection const&, torch::lazy::LazyGraphExecutor::PostOrderData*, std::vector<torch::lazy::Value, std::allocator<torch::lazy::Value> > const&)
torch_xla::XLAGraphExecutor::SyncTensorsGraphInternal(std::vector<c10::intrusive_ptr<torch_xla::XLATensor, c10::detail::intrusive_target_default_null_type<torch_xla::XLATensor> >, std::allocator<c10::intrusive_ptr<torch_xla::XLATensor, c10::detail::intrusive_target_default_null_type<torch_xla::XLATensor> > > >*, absl::lts_20230802::Span<std::string const>, torch::lazy::LazyGraphExecutor::SyncTensorsConfig const&, bool)
torch_xla::XLAGraphExecutor::SyncTensorsGraph(std::vector<c10::intrusive_ptr<torch_xla::XLATensor, c10::detail::intrusive_target_default_null_type<torch_xla::XLATensor> >, std::allocator<c10::intrusive_ptr<torch_xla::XLATensor, c10::detail::intrusive_target_default_null_type<torch_xla::XLATensor> > > >*, absl::lts_20230802::Span<std::string const>, bool, bool, bool)
...
*** End stack trace ***
MHLO -> StableHLO conversion failed.
StableHLO Module from MHLO -> StableHLO conversion is not leagal.Please open a github issue to PyTorch/XLA.
```
I used torch-xla 2.5.1 for the example above, but I get a similar error with 2.6.
```
torch 2.5.1
torch-xla 2.5.1
```
| https://github.com/pytorch/xla/issues/8792 | open | [
"bug",
"stablehlo"
] | 2025-03-05T10:45:12Z | 2025-03-06T12:49:08Z | 1 | sechkova |
huggingface/lerobot | 817 | SO-100 arm assembly instruction inconsistency | Step 22 of the assembly guide shows a picture of the wrist that is flipped compared to the drawing and the front-page photo. Are both right? If not, which one is correct?
[Latest instruction](https://github.com/huggingface/lerobot/blob/main/examples/10_use_so100.md#wrist-assembly):
<img width="723" alt="Image" src="https://github.com/user-attachments/assets/490e23aa-1085-4c89-9148-49304ac85ed5" />
[Assembly video](https://github.com/huggingface/lerobot/blob/main/examples/10_use_so100.md#additional-guidance):
<img width="812" alt="Image" src="https://github.com/user-attachments/assets/b12cc0a7-bff9-4b2a-b2a2-30333a205506" />
[Project home page](https://github.com/huggingface/lerobot/tree/main?tab=readme-ov-file#------------build-your-own-so-100-robot):
 | https://github.com/huggingface/lerobot/issues/817 | closed | [
"question",
"robots",
"stale"
] | 2025-03-05T05:23:57Z | 2025-11-30T02:37:07Z | null | liuhuanjim013 |
huggingface/open-r1 | 472 | How to set max_model_length, max_new_tokens and generation_size when evaluating? | Suppose the max_position_embedding of my model is 4096; how should I set max_model_length, max_new_tokens and generation_size to get a correct evaluation result? For example, set max_model_length=4096, max_new_tokens=1000, generation_size=1000? | https://github.com/huggingface/open-r1/issues/472 | open | [] | 2025-03-05T04:01:48Z | 2025-03-12T03:41:42Z | null | ItGirls |
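Whatever the exact flag names in a given eval harness, the underlying constraint is that prompt tokens plus newly generated tokens must fit inside `max_position_embeddings`. A small helper sketch of that arithmetic (the function name is illustrative):

```python
def generation_budget(max_position_embeddings, prompt_len, requested_new_tokens):
    """Largest number of new tokens that still fits the model's context window.

    With max_position_embeddings=4096 and a 3000-token prompt, asking for
    2000 new tokens would overflow; only 1096 fit.
    """
    if prompt_len >= max_position_embeddings:
        raise ValueError("prompt alone exceeds the context window")
    return min(requested_new_tokens, max_position_embeddings - prompt_len)
```

So with max_model_length=4096, setting generation_size (or max_new_tokens) to 1000 is safe only while prompts stay under 3096 tokens.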
pytorch/torchtitan | 930 | `CheckpointManager.save` with async mode is vulnerable to race conditions | ### Bug description
Based on [[Distributed w/ TorchTitan] Optimizing Checkpointing Efficiency with PyTorch DCP](https://discuss.pytorch.org/t/distributed-w-torchtitan-optimizing-checkpointing-efficiency-with-pytorch-dcp/211250)'s Figure 3, when using async checkpointing via `CheckpointManager` with `AsyncMode.ASYNC`, I would think `CheckpointManager.save` blocks until the model is at least in "staging":

However, running the reproducer below, we see that this is not actually happening with `save`: the model is not yet staged, and `load` fails.
Is this the expected behavior?
It seems suboptimal to me, I would think the predictable behavior (given Figure 3) is:
1. `save` with async mode: (1) blocks until the model is in "staging", then (2) "persistence" takes place asynchronously
2. Since the model is in "staging" after `save`, we can immediately mutate the model
3. Then if you call `load` before the "persistence" is finished, `load` will just have to wait (blocking) a bit longer
Does this make sense?
<details><summary>Reproducer</summary>
Please forgive the `TrainState` being overly verbose, I just needed it for this reproducer
```python
import tempfile
from collections.abc import Iterator
from dataclasses import dataclass, field
from io import BytesIO
from pathlib import Path
from typing import Any
import pytest
import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset
from torchtitan.components.checkpoint import AsyncMode, CheckpointManager
from torchtitan.components.ft import FTManager
from transformers import AutoModelForCausalLM
@dataclass
class TrainState:
step: int = 0
global_avg_losses: list[float] = field(default_factory=list)
global_max_losses: list[float] = field(default_factory=list)
log_steps: list[int] = field(default_factory=list)
def state_dict(self) -> dict[str, Any]:
global_avg_losses_bytes = BytesIO()
torch.save(self.global_avg_losses, global_avg_losses_bytes)
global_max_losses_bytes = BytesIO()
torch.save(self.global_max_losses, global_max_losses_bytes)
log_steps_bytes = BytesIO()
torch.save(self.log_steps, log_steps_bytes)
return {
"step": torch.tensor(self.step, dtype=torch.int32),
"global_avg_losses": global_avg_losses_bytes,
"global_max_losses": global_max_losses_bytes,
"log_steps": log_steps_bytes,
}
def load_state_dict(self, state_dict) -> None:
self.step = state_dict["step"].item()
state_dict["global_avg_losses"].seek(0)
self.global_avg_losses = torch.load(
state_dict["global_avg_losses"], weights_only=False
)
state_dict["global_max_losses"].seek(0)
self.global_max_losses = torch.load(
state_dict["global_max_losses"], weights_only=False
)
state_dict["log_steps"].seek(0)
self.log_steps = torch.load(state_dict["log_steps"], weights_only=False)
class MockDataset(Dataset):
def __len__(self):
return 10
def __getitem__(self, idx):
return torch.randn(128)
@dataclass
class MockCheckpointConfig:
enable_checkpoint: bool = True
folder: str = "checkpoint"
interval: int = 1
async_mode: str = AsyncMode.DISABLED
keep_latest_k: int = 0
model_weights_only: bool = False
export_dtype: str = "float32"
exclude_from_loading: list[str] = field(default_factory=list)
load_step: int = -1
@dataclass
class MockFTConfig:
replica_id: int = 0
enabled: bool = False
@dataclass
class MockJobSubConfig:
dump_folder: str = tempfile.gettempdir()
@dataclass
class MockJobConfig:
checkpoint: MockCheckpointConfig = field(default_factory=MockCheckpointConfig)
fault_tolerance: MockFTConfig = field(default_factory=MockFTConfig)
job: MockJobSubConfig = field(default_factory=MockJobSubConfig)
@pytest.fixture(scope="session", name="distributed_setup")
def fixture_distributed_setup() -> Iterator[None]:
if not torch.distributed.is_initialized():
torch.distributed.init_process_group(
backend="gloo",
# Use a different port as previous runs might have left it in TIME_WAIT state
init_method="tcp://localhost:10998",
world_size=1,
rank=0,
)
yield
if torch.distributed.is_initialized():
torch.distributed.destroy_process_group()
@pytest.fixture(scope="session", name="model")
def fixture_model(
distributed_setup, # noqa: ARG001
) -> Iterator[tuple[nn.Module, float]]:
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen2.5-1.5B-Instruct",
torch_dtype=torch.bfloat16,
device_map="cpu", # Use CPU for testing
)
# Return the original parameter value for verification
yield model, model.get_input_embeddings().weight[0, 0].item()
```
 | https://github.com/pytorch/torchtitan/issues/930 | closed | [
"question",
"module: checkpoint"
] | 2025-03-05T02:06:09Z | 2025-03-20T18:30:28Z | null | jamesbraza |
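The semantics the question argues for ("save" blocks only for staging, persistence runs in the background, "load" waits on any pending persistence) can be sketched in plain Python. This is an illustration of the desired contract, not DCP's actual implementation:

```python
import copy
import json
import threading

def async_save(state_dict, path):
    """Stage synchronously, persist asynchronously.

    The deep copy is the blocking "staging" step: once it returns, the caller
    may freely mutate the live model. Writing the copy to disk is the
    background "persistence" step; the returned thread tracks it.
    """
    staged = copy.deepcopy(state_dict)
    def _persist():
        with open(path, "w") as f:
            json.dump(staged, f)
    thread = threading.Thread(target=_persist)
    thread.start()
    return thread

def load(path, pending=None):
    """Block on any pending persistence before reading the checkpoint."""
    if pending is not None:
        pending.join()
    with open(path) as f:
        return json.load(f)
```

Under this contract, mutating the model right after `async_save` returns is safe, and a racing `load` simply waits a bit longer.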
huggingface/transformers | 36,546 | How to use transformers with MusicGen in float16 | ```
import transformers, torch, builtins, numpy
processor = transformers.AutoProcessor.from_pretrained('facebook/musicgen-stereo-melody-large', torch_dtype=torch.float16)
model = transformers.MusicgenMelodyForConditionalGeneration.from_pretrained('facebook/musicgen-stereo-melody-large', torch_dtype=torch.float16).to('cuda')
result = []
for _ in builtins.range(2):
inputs = processor(audio=result[-1] if result else None, sampling_rate=model.config.audio_encoder.sampling_rate, text='A grand and majestic symphony with soaring strings, powerful brass, and dynamic orchestration. Inspired by Beethoven and Tchaikovsky, featuring dramatic crescendos, delicate woodwind passages, and a triumphant finale. The mood is epic, emotional, and timeless', padding=True, return_tensors='pt').to('cuda')
audio_values = model.generate(**inputs, max_new_tokens=1000)
result += audio_values[0, 0].cpu().numpy(),
from IPython.display import Audio
Audio(numpy.concatenate(result), rate=model.config.audio_encoder.sampling_rate)
```
I always get
```
<ipython-input-12-348220656bb8> in <cell line: 0>()
7 for _ in builtins.range(2):
8 inputs = processor(audio=torch.from_numpy(result[-1]).to(dtype=torch.float32) if result else None, sampling_rate=model.config.audio_encoder.sampling_rate, text='A grand and majestic symphony with soaring strings, powerful brass, and dynamic orchestration. Inspired by Beethoven and Tchaikovsky, featuring dramatic crescendos, delicate woodwind passages, and a triumphant finale. The mood is epic, emotional, and timeless', padding=True, return_tensors='pt').to('cuda')
----> 9 audio_values = model.generate(**inputs, max_new_tokens=1000)
10 result += audio_values[0, 0].cpu().numpy(),
11
5 frames
/usr/local/lib/python3.11/dist-packages/torch/nn/modules/linear.py in forward(self, input)
123
124 def forward(self, input: Tensor) -> Tensor:
--> 125 return F.linear(input, self.weight, self.bias)
126
127 def extra_repr(self) -> str:
RuntimeError: mat1 and mat2 must have the same dtype, but got Float and Half
```
| https://github.com/huggingface/transformers/issues/36546 | closed | [] | 2025-03-05T00:40:24Z | 2025-03-06T09:49:18Z | null | ghost |
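The traceback shows float32 processor outputs meeting float16 weights. One common remedy (a sketch, not an official transformers recipe) is to cast only the floating-point tensors in the processed inputs to the model's dtype before calling `generate`, leaving integer ids and masks alone:

```python
def cast_floating_inputs(inputs, dtype):
    """Cast floating-point tensors in a BatchEncoding-like mapping to `dtype`.

    Integer tensors such as input_ids or attention masks must keep their
    dtype, so only tensors reporting is_floating_point() are converted.
    """
    return {
        name: tensor.to(dtype) if tensor.is_floating_point() else tensor
        for name, tensor in inputs.items()
    }
```

In the script above this would be `audio_values = model.generate(**cast_floating_inputs(inputs, torch.float16), max_new_tokens=1000)`.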
pytorch/torchx | 1,012 | Possible improvement: using shutdown() before close() in `serve.py` | ### Description:
While reviewing the get_routable_ip_to function in [torchx/apps/serve/serve.py](https://github.com/pytorch/torchx/blob/main/torchx/apps/serve/serve.py#L96), I noticed that the socket is directly closed using s.close(), without calling shutdown() beforehand.
```python3
def get_routable_ip_to(addr: str) -> str:
"""
get_routable_ip_to opens a dummy connection to the target HTTP URL and
returns the IP address used to connect to it.
"""
parsed = urlparse(addr)
try:
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect((parsed.hostname, parsed.port or 80))
return s.getsockname()[0]
finally:
s.close()
```
### Question
Would there be any potential downsides or benefits to adding a shutdown(socket.SHUT_RDWR) call before closing the socket in the get_routable_ip_to function?
Possible Benefits
- Ensures that all pending data is properly discarded before closing, particularly if the socket is still in a half-open state.
- Prevents potential issues with lingering resources and improves resource management.
- Aligns with best practices for socket cleanup.
### Reference
The Python socket documentation states:
"close() releases the resource associated with a connection but does not necessarily close the connection immediately. If you want to close the connection in a timely fashion, call shutdown() before close()." [link](https://docs.python.org/3/library/socket.html#socket.socket.close)
Looking forward to your thoughts!
Thanks!
| https://github.com/meta-pytorch/torchx/issues/1012 | open | [] | 2025-03-04T23:59:09Z | 2025-03-04T23:59:09Z | 0 | allrob23 |
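A variant of the function with the suggested `shutdown()` call; the extra try/except is there because a UDP socket can raise on shutdown on some platforms, and creating the socket before the `try` avoids a `NameError` in `finally` if `socket()` itself fails. A sketch, not a torchx patch:

```python
import socket
from urllib.parse import urlparse

def get_routable_ip_to(addr: str) -> str:
    """Return the local IP that would be used to reach `addr`."""
    parsed = urlparse(addr)
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # connect() on a UDP socket sends no packets; it only picks a route.
        s.connect((parsed.hostname, parsed.port or 80))
        return s.getsockname()[0]
    finally:
        try:
            s.shutdown(socket.SHUT_RDWR)
        except OSError:
            pass  # not connected, or platform rejects shutdown on UDP
        s.close()
```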
huggingface/lerobot | 813 | State Collection Timing Issue in Manipulator Teleoperation: Post-action vs Pre-action States | **Description:**
I've noticed in lerobot/lerobot/common/robot_devices/robots/manipulator.py that during teleoperation, the state being collected is the state after action execution. Is this intended behavior?
In my understanding, model inference should use the state before action execution, not after. This could potentially impact learning and inference accuracy, as the model would be using post-action states to predict actions rather than pre-action states.


| https://github.com/huggingface/lerobot/issues/813 | closed | [
"question",
"policies",
"stale"
] | 2025-03-04T14:19:52Z | 2025-10-07T02:26:55Z | null | www-Ye |
huggingface/agents-course | 284 | [QUESTION] Clarify Payment Required for completing Unit 2 notebooks | For the [components.ipynb]() notebook I ran the `IngestionPipeline` as follows:
```py
from llama_index.embeddings.huggingface_api import HuggingFaceInferenceAPIEmbedding
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.ingestion import IngestionPipeline
# create the pipeline with transformations
pipeline = IngestionPipeline(
transformations=[
SentenceSplitter(),
HuggingFaceInferenceAPIEmbedding(model_name="BAAI/bge-small-en-v1.5"),
]
)
# run the pipeline sync or async
nodes = await pipeline.arun(documents=documents[:10])
nodes
```
I got the following outcome, and it looks like this .ipynb can't be executed without a payment route:
```python
---------------------------------------------------------------------------
ClientResponseError Traceback (most recent call last)
[<ipython-input-15-067f632f4f21>](https://localhost:8080/#) in <cell line: 1>()
12
13 # run the pipeline sync or async
---> 14 nodes = await pipeline.arun(documents=documents[:10])
15 nodes
12 frames
[/usr/local/lib/python3.11/dist-packages/aiohttp/client_reqrep.py](https://localhost:8080/#) in raise_for_status(self)
1159 self.release()
1160
-> 1161 raise ClientResponseError(
1162 self.request_info,
1163 self.history,
ClientResponseError: 402, message='Payment Required', url='https://api-inference.huggingface.co/pipeline/feature-extraction/BAAI/bge-small-en-v1.5'
```
Are there any free and open alternatives?
| https://github.com/huggingface/agents-course/issues/284 | open | [
"question"
] | 2025-03-04T14:16:01Z | 2025-03-06T16:08:39Z | null | carlosug |
huggingface/agents-course | 281 | Any free and unpaid alternative for Inference Providers? | While executing the [notebook](https://colab.research.google.com/github/huggingface/agents-course/blob/main/notebooks/unit2/smolagents/multiagent_notebook.ipynb) on **Unit 2: multi-agent systems**, I got the following client error for [Inference Providers](https://huggingface.co/blog/inference-providers):
```python
> result = agent.run(task)
HTTPError: 402 Client Error: Payment Required for url: https://huggingface.co/api/inference-proxy/together/v1/chat/completions
The above exception was the direct cause of the following exception:
HfHubHTTPError Traceback (most recent call last)
[/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_http.py](https://localhost:8080/#) in hf_raise_for_status(response, endpoint_name)
475 # Convert `HTTPError` into a `HfHubHTTPError` to display request information
476 # as well (request id and/or server error message)
--> 477 raise _format(HfHubHTTPError, str(e), response) from e
478
479
HfHubHTTPError: 402 Client Error: Payment Required for url: https://huggingface.co/api/inference-proxy/together/v1/chat/completions (Request ID: Root=1-67c6f46c-005ae18a6bffc88c0d7a6668;04e6891c-45f6-4358-81fc-b5b794f25ddd)
You have exceeded your monthly included credits for Inference Providers. Subscribe to PRO to get 20x more monthly allowance.
```
Is there any free and unpaid alternative for Inference Providers? | https://github.com/huggingface/agents-course/issues/281 | open | [
"question"
] | 2025-03-04T12:51:26Z | 2025-03-31T07:23:49Z | null | carlosug |
pytorch/xla | 8,786 | How to show PJRT Call Stack | ## ❓ Questions and Help
I wonder how to print the PJRT call stack. Thanks | https://github.com/pytorch/xla/issues/8786 | open | [
"question",
"openxla"
] | 2025-03-04T09:32:43Z | 2025-03-07T20:23:32Z | null | yuanfz98 |
huggingface/lerobot | 808 | How to acquire the end-effector (EEF) pose? | Hi, thanks for your great work!
How can we acquire the EEF pose and control it instead of only the joint states?
Thanks for your attention and hope for your kind response! | https://github.com/huggingface/lerobot/issues/808 | closed | [
"question",
"policies",
"robots",
"stale"
] | 2025-03-04T09:30:35Z | 2025-10-16T02:28:50Z | null | oym1994 |
huggingface/lerobot | 806 | How to control a local robot with a remote model? | I have inference working on my local computer. I want to know how to put the model on a remote server and control a robot locally.
My robot: Koch1.1 | https://github.com/huggingface/lerobot/issues/806 | closed | [
"question",
"stale"
] | 2025-03-04T09:09:12Z | 2025-10-16T02:28:51Z | null | neverspillover |
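One simple way to split the setup is to put the policy behind a small HTTP endpoint on the server and have the laptop next to the robot post observations and read back actions each control step. A self-contained sketch using only the standard library; `fake_policy` is a stand-in for the real model's action selection:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import Request, urlopen

def fake_policy(observation):
    """Stand-in for the real model: one zero action per joint."""
    return [0.0 for _ in observation["state"]]

class PolicyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        observation = json.loads(self.rfile.read(length))
        body = json.dumps({"action": fake_policy(observation)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the control loop's stdout quiet

def start_server():
    """Run the policy endpoint in a background thread; port 0 = pick a free port."""
    server = ThreadingHTTPServer(("127.0.0.1", 0), PolicyHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def remote_select_action(url, observation):
    """Client side: post one observation, block until the action comes back."""
    req = Request(url, data=json.dumps(observation).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req, timeout=5.0) as resp:
        return json.loads(resp.read())["action"]
```

On a real setup the server would load the policy checkpoint once at startup, and camera frames would need a binary encoding rather than JSON.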
huggingface/optimum-intel | 1,186 | How to initialize a development env for this repo? | Hi! I would like to contribute to this repo but met some issues during env initialization. I ran `pip install -e .` to install the current repo into my local Python env.
However, an error came out when running `pytest tests/`:
`ImportError while importing test module '/home/shji/codes/optimum-intel/tests/ipex/test_modeling.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
../../miniforge3/envs/optimum-intel/lib/python3.11/importlib/__init__.py:126: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/ipex/test_modeling.py:42: in <module>
from optimum.intel import (
E ImportError: cannot import name 'IPEXModelForSeq2SeqLM' from 'optimum.intel' (/home/shji/codes/optimum-intel/optimum/intel/__init__.py`
It seems the installation is wrong or something has been missed, as the local module cannot be found.
Could you provide some suggestions? Documentation for setting up a dev env would be even better, thank you.
pytorch/xla | 8,784 | How to save weights | ## ❓ Questions and Help
Hello, I am using torchax to convert a model to StableHLO.
https://pytorch.org/xla/master/features/stablehlo.html#torch-export-to-stablehlo
Following that page:
weights, stablehlo = tx.export.exported_program_to_stablehlo(exported)
print(stablehlo.mlir_module())
The docs say you can store the weights and/or the stablehlo object however you like,
but I don't know how to store the weights; I found that `weights` is a list.
Could you help me? Thank you!
Another question: with torch_xla we can save data and functions directly; how can I save functions using torchax?
"question"
] | 2025-03-04T06:05:29Z | 2025-03-29T08:35:03Z | null | raninbowlalala |
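Since `weights` is reported to be a plain Python list, one low-tech way to persist it is ordinary serialization. A sketch with `pickle` (torchax may provide a dedicated helper, so treat this as a workaround):

```python
import pickle

def save_weights(weights, path):
    """Serialize the exported weights list to disk."""
    with open(path, "wb") as f:
        pickle.dump(weights, f)

def load_weights(path):
    """Read back the weights list saved by save_weights."""
    with open(path, "rb") as f:
        return pickle.load(f)
```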
pytorch/examples | 1,319 | CUDA memory usage does not decrease when increasing the number of CUDA cards (fsdp_tp_example.py) | Based on the source-code implementation, I ran several experiments to study the script running time and CUDA memory occupancy.
- exp1: nproc_per_node=4, nnodes=1 => cuda=2161~2411MB, runtime=63.04s
- exp2: nproc_per_node=8, nnodes=1 => cuda=2141~2395MB, runtime=70.52s
- exp3: nproc_per_node=4, nnodes=2 => cuda=2141~2145MB, runtime=233.03s
According to the results of the above three experiments, we find that as the number of GPUs increases, the CUDA memory usage does not decrease significantly, while the script running time increases.
Why?
I am looking for the reasons: according to the algorithm principles (FSDP and TP), as the number of GPUs increases, both the CUDA memory usage and the running time should decrease.
# My Environment
* Pytorch version: 3.11.7
* Operating System and version: Linux version 3.10.0-1160.114.2.el7.x86_64 (mockbuild@kbuilder.bsys.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) )
* Installed using source? [yes/no]: yes
* Are you planning to deploy it using docker container? [yes/no]: no
* Is it a CPU or GPU environment?: GPU
* Which example are you using: fsdp_tp_example.py
* Link to code or data to repro [if any]: https://github.com/pytorch/examples/tree/main/distributed/tensor_parallelism | https://github.com/pytorch/examples/issues/1319 | open | [] | 2025-03-04T04:04:35Z | 2025-03-04T04:59:47Z | 0 | YangHui90 |
huggingface/open-r1 | 457 | How to run reject sampling | I ran generate_reasoning and got the CoT data. How do I run rejection sampling after that? | https://github.com/huggingface/open-r1/issues/457 | open | [] | 2025-03-03T03:56:32Z | 2025-03-03T03:56:32Z | null | JavaZeroo |
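Rejection sampling over generated CoT data usually means: for each problem, keep only completions whose extracted final answer matches the gold answer. A minimal filter sketch (the answer extractor is supplied by the caller; this is not an open-r1 script):

```python
def reject_sample(completions, extract_answer, gold_answer, max_keep=1):
    """Keep up to `max_keep` completions whose extracted final answer
    matches the gold answer; discard the rest."""
    kept = []
    for text in completions:
        if extract_answer(text) == gold_answer:
            kept.append(text)
            if len(kept) >= max_keep:
                break
    return kept
```

Run over the generated dataset, this yields a filtered set suitable for a subsequent SFT pass.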
pytorch/serve | 3,396 | Why is TorchServe No Longer Actively Maintained? | Hello, I noticed that the TorchServe GitHub page has been marked as 'Limited Maintenance,' indicating that the project is no longer actively maintained. Could you share the reasons behind this decision? Is it related to the development direction of the PyTorch ecosystem? Additionally, are there any recommended alternative tools or solutions for deploying PyTorch models?
Thank you for your response! | https://github.com/pytorch/serve/issues/3396 | open | [] | 2025-03-03T02:16:01Z | 2025-04-09T09:29:25Z | 11 | ily666666 |
huggingface/lerobot | 797 | use_delta_joint_actions_aloha | if self.use_delta_joint_actions_aloha:
raise NotImplementedError(
"`use_delta_joint_actions_aloha` is used by pi0 for aloha real models. It is not ported yet in LeRobot."
)
When will an implementation for it be added? It is very important.
| https://github.com/huggingface/lerobot/issues/797 | closed | [
"question",
"policies"
] | 2025-03-02T18:14:13Z | 2025-04-03T16:39:39Z | null | AbdElrahmanMostafaRifaat1432 |
huggingface/open-r1 | 453 | How to log intermediate output results? | How can I log intermediate output results to track the 'aha moment'? Can I set this in the config, or do I need to modify the code? | https://github.com/huggingface/open-r1/issues/453 | closed | [] | 2025-03-01T17:08:48Z | 2025-03-09T13:53:59Z | null | 0205090923 |
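Absent a built-in option, one approach is to append every sampled completion (with its reward) to a JSONL file from wherever rewards are computed, then scan the file for the behaviour of interest. A sketch of such a logger; hooking it into the specific training loop is left to the reader:

```python
import json

class CompletionLogger:
    """Append each sampled completion to a JSONL file so intermediate
    rollouts can be inspected after (or during) training."""
    def __init__(self, path):
        self.path = path

    def log(self, step, prompt, completion, reward):
        record = {"step": step, "prompt": prompt,
                  "completion": completion, "reward": reward}
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")
```

Grepping the resulting file for phrases like "wait" or "let me reconsider" is one way to spot candidate 'aha moment' rollouts.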
huggingface/Math-Verify | 32 | How to adjust the priority of '\\ln' and '*' when parsing latex? | When I try to parse a string: "$$ \\dfrac{\\cos x}{2\\lnx * x^{\\ln x - 1}} $$", the result is "cos(x)/((2*log(x*x**(log(x, E) - 1), E)))", rather than "cos(x)/((2*x**(log(x, E) - 1)*log(x, E)))". It seems that there is something wrong when dealing with the priority of '\\ln' and '*'. So I wonder how to adjust the priority to fix this error. Thank you!
Error case:

Expected (which changes the order of '\\ln'):
 | https://github.com/huggingface/Math-Verify/issues/32 | closed | [] | 2025-03-01T09:22:31Z | 2025-07-01T20:17:49Z | null | yhhu99 |
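Until the grammar itself is adjusted, one workaround is to preprocess the LaTeX so each `\ln` gets an explicitly parenthesised single-token argument, which stops a following `*` factor from being swallowed into the logarithm. A regex sketch (a preprocessing step, not Math-Verify's API):

```python
import re

def tighten_ln(latex: str) -> str:
    """Rewrite `\\ln<token>` so the argument is explicitly parenthesised,
    preventing a following `*` factor from being absorbed into the log."""
    # \lnx or \ln x -> \ln(x): bind exactly one letter as the argument.
    latex = re.sub(r"\\ln\s*([a-zA-Z])", r"\\ln(\1)", latex)
    # \ln{...} -> \ln(...): make a braced argument explicit as well.
    latex = re.sub(r"\\ln\s*\{([^{}]*)\}", r"\\ln(\1)", latex)
    return latex
```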
pytorch/ao | 1,805 | What kind of layers are optimized by torchao on a RTX 4090? | I am trying to quantize a model and I am running this on a 4090. Since many of the available quantization benchmarks are done on higher gpus, I am trying to establish a baseline perfromance gain I can expect from quantization.
I tried the tutorial at [torchao_demo](https://github.com/ethanshenley/PyTorch-Conference-Recipes/blob/main/torchao_demo.ipynb) on a gpu and it worked great. My model has similar kind of transformer layers with q, k, v projections but I am not able to see the same kind of performance with a large chunk of `aten::_copy()` operations in profile log.
To debug, I wanted to benchmark on a single linear layer as the majority of modified layers seem to be of this type. But I am not able to see any performance gain in this experiment of mine. I would appreciate if I can get more context into the specific layers that gets optimized by `torchao`.
```
'''
https://github.com/ethanshenley/PyTorch-Conference-Recipes/blob/main/torchao_demo.ipynb
'''
import gc
import psutil
import torch
import torch.nn as nn
import time
from torchao.quantization import quantize_, int8_weight_only, float8_weight_only
device = "cuda:0"
def get_memory_usage():
return psutil.Process().memory_info().rss / 1024 / 1024 # in MB
def run_inference(model, inputs, num_runs=10):
start_time = time.time()
for i in range(num_runs):
with torch.no_grad():
outputs = model(inputs[i].squeeze())
torch.cuda.synchronize(device)
end_time = time.time()
return (end_time - start_time) / num_runs
# Load model and tokenizer
bsz = 16
n_runs = 100
for sz in range(1024, 20480, 1024):
print('====================================================')
print(f"Running with linear layer of size {sz}...")
model = nn.Linear(sz, sz).to(device)
inputs = torch.randn(n_runs, bsz, sz).to(device)
print("\nRunning baseline model...")
baseline_memory = get_memory_usage()
baseline_time = run_inference(model, inputs, n_runs)
print(f"Baseline - Time: {baseline_time:.4f}s, Memory: {baseline_memory:.2f}MB")
print("\nRunning int8 weight-only quantized model...")
model_int8 = nn.Linear(sz, sz).to(device)
quantize_(model_int8, int8_weight_only())
int8_memory = get_memory_usage()
int8_time = run_inference(model_int8, inputs, n_runs)
print(f"Int8 Weight-Only - Time: {int8_time:.4f}s, Memory: {int8_memory:.2f}MB")
print("\nRunning fp8 weight-only quantized model...")
model_fp8 = nn.Linear(sz, sz).to(device)
quantize_(model_fp8, float8_weight_only())
fp8_memory = get_memory_usage()
fp8_time = run_inference(model_fp8, inputs, n_runs)
print(f"fp8 Weight-Only - Time: {fp8_time:.4f}s, Memory: {fp8_memory:.2f}MB")
print("\nPerformance Improvements:")
print(f"Int8 weight-only speedup: {baseline_time / int8_time:.2f}x")
print(f"Int8 weight-only memory reduction: {baseline_memory / int8_memory:.2f}x")
print(f"fp8 weight-only speedup: {baseline_time / fp8_time:.2f}x")
print(f"fp8 weight-only memory reduction: {baseline_memory / fp8_memory:.2f}x")
del model, model_int8, model_fp8, inputs
gc.collect()
torch.cuda.empty_cache()
torch.cuda.synchronize(device)
```
| https://github.com/pytorch/ao/issues/1805 | open | [
"question",
"performance",
"triaged"
] | 2025-03-01T00:36:14Z | 2025-05-01T18:36:43Z | null | naiveen |
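Independent of which layers torchao rewrites, per-layer timings like the ones above are easier to trust with explicit warmup and an explicit callable per variant, which also makes it harder to accidentally time the wrong model object. A generic harness sketch:

```python
import time

def benchmark(fn, *args, warmup=3, runs=10, sync=None):
    """Average wall time of `fn(*args)` after warmup iterations.

    `sync` is an optional callable such as torch.cuda.synchronize; without
    it, asynchronous GPU kernel launches would be undercounted.
    """
    for _ in range(warmup):
        fn(*args)
    if sync:
        sync()
    start = time.perf_counter()
    for _ in range(runs):
        fn(*args)
    if sync:
        sync()
    return (time.perf_counter() - start) / runs
```

In the script above each variant would be timed as `benchmark(model_int8, batch, sync=lambda: torch.cuda.synchronize(device))`, one call per model object.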
pytorch/xla | 8,776 | Standardize `AllClose` calls from test_aten_xla_tensor tests | Standardize `AllClose` calls from `test/cpp/test_aten_xla_tensor_*.cpp` tests so they all follow the same conventions. | https://github.com/pytorch/xla/issues/8776 | open | [
"enhancement",
"documentation"
] | 2025-03-01T00:14:19Z | 2025-03-05T20:24:41Z | 0 | pgmoka |