| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/candle | 2,974 | Any good first issues a newcomer could tackle? | Hey! I've been using this crate for a while now and would love to start contributing back! I notice that your issues aren't labelled, who should I contact/do you have a list of issues that would be good for me? | https://github.com/huggingface/candle/issues/2974 | open | [] | 2025-05-29T04:19:18Z | 2025-05-30T18:25:37Z | 3 | Heidar-An |
pytorch/torchtitan | 1,237 | [Bug] Potential bugs in "_grouped_mm" in Llama4 MoE codes | ### Bug description ### Descriptions for Bugs. I encountered NaN loss values when running Llama 4 MoE experimental codes. The errors come from [here](https://github.com/pytorch/torchtitan/blob/ed2bbc07dda35ce26187bb0d743115381e884b35/torchtitan/experiments/llama4/model/moe.py#L85-L87). Afaik `offsets` are defined a... | https://github.com/pytorch/torchtitan/issues/1237 | closed | [] | 2025-05-29T00:07:09Z | 2025-07-08T04:54:37Z | 8 | raymin0223 |
pytorch/xla | 9,259 | need an incremental build script | ## 🚀 Feature After making a small change to the source code, we should be able to do an incremental build that only rebuilds the affected targets. We need to document how to do that. It may require writing a script that can be easily invoked. ## Motivation Currently we recommend developers to run https://github.com... | https://github.com/pytorch/xla/issues/9259 | closed | ["tech debt", "build"] | 2025-05-28T23:15:38Z | 2025-05-30T01:30:56Z | 4 | zhanyong-wan |
huggingface/xet-core | 358 | How can I have snapshot_download to have continue feature? Errors became very common | Whenever some error happens and i run same code, it starts from 0. It is XET enabled repo and hf xet installed. I really need to have resume feature. my entire code: ``` from huggingface_hub import snapshot_download import os import argparse def download_models(target_dir=None): """ Download models from Huggi... | https://github.com/huggingface/xet-core/issues/358 | closed | ["enhancement"] | 2025-05-28T22:30:19Z | 2025-11-20T17:08:35Z | null | FurkanGozukara |
pytorch/xla | 9,256 | Docs build issues errors / warnings on duplicate labels (anchors) | Docs build indicates that the docs have duplicate labels (aka anchors). These predate the recent changes to myst but now that we have standardized on the same tooling as upstream PT, we should now start fixing these. Here is an output. Note that you have to manually clean by deleting the build directory to force a full... | https://github.com/pytorch/xla/issues/9256 | closed | ["documentation"] | 2025-05-28T19:02:14Z | 2025-07-16T22:48:17Z | 1 | yaoshiang |
huggingface/transformers | 38,452 | Memory saving by upcasting logits for only non-ignored positions | ### Feature request In [`loss_utils.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/loss/loss_utils.py), logits are upcasted for float32 for some losses. This can waste memory for cases where certain labels are `ignore_index`. This is especially true for fine tuning cases where one chooses ... | https://github.com/huggingface/transformers/issues/38452 | open | ["Feature request"] | 2025-05-28T18:58:52Z | 2025-05-29T12:38:15Z | 1 | harshit2997 |
huggingface/speech-to-speech | 163 | how to use this with Livekit Agent? | how to use this with Livekit Agent? | https://github.com/huggingface/speech-to-speech/issues/163 | open | [] | 2025-05-28T18:27:11Z | 2025-05-28T18:27:11Z | null | Arslan-Mehmood1 |
huggingface/transformers | 38,448 | num_items_in_batch larger than the actual useful token when computing loss | def fixed_cross_entropy(source, target, num_items_in_batch: int = None, ignore_index: int = -100, **kwargs): I check the shape of the inputs and find follows: In [1]: logits.shape Out[1]: torch.Size([4, 896, 152064]) In [2]: labels.shape Out[2]: torch.Size([4, 896]) In [3]: num_items_in_batch Out[3]: 4390 Why is 439... | https://github.com/huggingface/transformers/issues/38448 | closed | [] | 2025-05-28T15:28:05Z | 2025-05-31T02:30:07Z | 4 | SHIFTTTTTTTT |
huggingface/transformers | 38,435 | [i18n-ro] Translating docs to Romanian | Hi! Let's bring the documentation to all the Romanian-speaking community 🌐 Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to t... | https://github.com/huggingface/transformers/issues/38435 | open | ["WIP"] | 2025-05-28T12:01:48Z | 2025-05-28T15:53:39Z | 2 | zero-point |
huggingface/transformers | 38,428 | [Question] The logic of data sampler in data parallel. | Hi, thanks for your attention. When reading the source code of transformers, I cannot understand the implementation of `_get_train_sampler` in `trainer.py`. Why the default data sampler is `RandomSampler` rather than `DistributedSampler`? How does the trainer handle the sampler for data parallel? reference code: http... | https://github.com/huggingface/transformers/issues/38428 | closed | [] | 2025-05-28T08:49:13Z | 2025-07-06T08:02:36Z | 3 | kxzxvbk |
pytorch/TensorRT | 3,536 | ❓ [Question] Do you have any plan to release v2.6.1 ? | ## ❓ Question Hello, Torch-TensorRT team, I'd like to ask if there are any plans to release a patch version, such as v2.6.1. The current release (v2.6.0) includes a `breakpoint()` call left in [the code](https://github.com/pytorch/TensorRT/blob/v2.6.0-rc3/py/torch_tensorrt/dynamo/conversion/custom_ops_converters.py#... | https://github.com/pytorch/TensorRT/issues/3536 | closed | ["question"] | 2025-05-28T08:37:18Z | 2025-06-03T04:50:48Z | null | junstar92 |
huggingface/transformers | 38,425 | Can not load TencentBAC/Conan-embedding-v2 | ### System Info Description When attempting to load the “Conan-embedding-v2” model directly via transformers.AutoModel.from_pretrained, I get a ValueError indicating that the repo’s config.json lacks a model_type key. This prevents the Transformers library from inferring which model class to instantiate. ### Who c... | https://github.com/huggingface/transformers/issues/38425 | closed | ["bug"] | 2025-05-28T08:21:23Z | 2025-05-28T14:58:03Z | 1 | shanekao-sks |
huggingface/accelerate | 3,596 | How to distribute the model into multiple GPUs using accelerate? | I have 4 GPUs. If I only use a single GPU to train the model, there will be an OutOfMemoryError raised. How can I distribute the model into all the 4 GPUs to avoid the OutOfMemoryError using accelerate? | https://github.com/huggingface/accelerate/issues/3596 | closed | [] | 2025-05-28T06:27:08Z | 2025-05-28T14:06:18Z | null | GeorgeCarpenter |
huggingface/candle | 2,971 | Enhance the usability of the tensor struct | Hello, I’m currently learning how to use Candle with the book Dive into Deep Learning, but implementing the code in Candle. I noticed that Candle is missing some practical utility functions, such as: * The Frobenius norm * dot product (vector or matrix dot product) * matrix-vector multiplication While these functio... | https://github.com/huggingface/candle/issues/2971 | closed | [] | 2025-05-28T03:41:44Z | 2025-05-29T07:41:02Z | 1 | ssfdust |
huggingface/transformers.js | 1,323 | Cannot get the SAM model running like in example | ### Question I've found that transformers.js supports SAM as written in 2.14.0 release notes. https://github.com/huggingface/transformers.js/releases/tag/2.14.0 I'm running the code on a M1 mac in a Brave browser. But after I've used and adapted the example script, I can actually see in my browser console that the m... | https://github.com/huggingface/transformers.js/issues/1323 | closed | ["question"] | 2025-05-27T20:01:49Z | 2025-11-29T12:32:29Z | null | BernhardBehrendt |
pytorch/tutorials | 3,367 | 💡 [REQUEST] - Proposal: Add Tutorial on Differentiable Decision Forests (DNDF-style) | ### 🚀 Describe the improvement or the new tutorial ### Proposal: Add a Tutorial/Documentation Example on Differentiable Decision Forests **Overview** This is a proposal to add a well-documented example or tutorial demonstrating a *Differentiable Decision Forest* model in PyTorch — inspired by the Deep Neural Decisi... | https://github.com/pytorch/tutorials/issues/3367 | open | ["tutorial-proposal"] | 2025-05-27T10:01:23Z | 2025-07-02T15:00:18Z | 6 | Tunahanyrd |
huggingface/chat-ui | 1,836 | Search feature tasks | We implemented a first version of the search chat feature in #1823, there's still some todos if people feel like tackling: - [ ] Right now we only return the N most relevant snippets, we would need to return all matching conversations and implement infinite loading & pagination. The building blocks already exist in ... | https://github.com/huggingface/chat-ui/issues/1836 | closed | ["enhancement", "help wanted", "front", "back"] | 2025-05-27T08:17:44Z | 2025-06-02T14:30:40Z | 7 | nsarrazin |
huggingface/transformers | 38,396 | Can I disable all CI works in my forked version of Transformers? | After I synced the `main` branch of Transformers in my forked version, github keeps running CI works and fails. Can I disable it? Thanks. | https://github.com/huggingface/transformers/issues/38396 | closed | [] | 2025-05-27T04:44:07Z | 2025-05-28T18:06:31Z | 2 | ChengLyu |
huggingface/doc-builder | 564 | How to ignore some line when applying style? | I have this in my code: ```python expected_output = textwrap.dedent("""\ ╭────────────────────── Step 42 ───────────────────────╮ │ ┏━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━┓ │ │ ┃ Prompt ┃ Completion ┃ Correctness ┃ Format ┃ │ │ ┡━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━┩ │ │ │ The sky is │ ... | https://github.com/huggingface/doc-builder/issues/564 | open | [] | 2025-05-26T21:58:08Z | 2025-05-26T21:59:13Z | null | qgallouedec |
huggingface/safetensors | 609 | Properties data | ### Feature request Please add properties for the content of safetensor files. (Which can be read without the requirement to load the whole thing ...) ### Motivation Rename all your safetensor files to a numeric value from 1.safetensors to n.safetensors, where n is the amount of such files you have. Now try to find... | https://github.com/huggingface/safetensors/issues/609 | closed | [] | 2025-05-26T20:06:13Z | 2025-06-16T12:13:08Z | 2 | schoenid |
huggingface/open-r1 | 660 | How to control the number of responses per query for each benchmark? | Hi, thank you for the great work! In the README, I noticed that you mention the use of different numbers of responses per query for estimating pass@1 across benchmarks. For example: Benchmark \| Number of responses per query -- \| -- AIME 2024 \| 64 MATH-500 \| 4 GPQA Diamond \| 8 LiveCodeBench \| 16 However, I'm unable to... | https://github.com/huggingface/open-r1/issues/660 | open | [] | 2025-05-26T14:38:15Z | 2025-05-27T15:32:50Z | null | Zoeyyao27 |
huggingface/transformers | 38,377 | Why are the model classes in unit tests imported directly from the transformer package instead of directly importing the model classes in the file? Is there any special consideration? | ### Feature request Take qwen3MoE unit test as an example: if is_torch_available(): import torch from transformers import ( Qwen3MoeForCausalLM, Qwen3MoeForQuestionAnswering, Qwen3MoeForSequenceClassification, Qwen3MoeForTokenClassification, Qwen3MoeModel, ) Why no... | https://github.com/huggingface/transformers/issues/38377 | open | ["Feature request"] | 2025-05-26T11:41:19Z | 2025-05-26T11:41:19Z | 0 | ENg-122 |
huggingface/transformers | 38,375 | Unable to run run_instance_segmentation_no_trainer with HF Accelerate | ### System Info I am trying to run the [examples/pytorch/instance-segmentation/run_instance_segmentation_no_trainer.py](https://github.com/huggingface/transformers/blob/d1b92369ca193da49f9f7ecd01b08ece45c2c9aa/examples/pytorch/instance-segmentation/run_instance_segmentation_no_trainer.py) with HF Accelerate. I was abl... | https://github.com/huggingface/transformers/issues/38375 | closed | ["bug"] | 2025-05-26T10:23:04Z | 2025-07-05T08:03:07Z | 3 | gohjiayi |
huggingface/huggingface_hub | 3,117 | how to download huggingface model files organize the http header and so on in other language | Hi, I want to use another language like java or scala to download hugging face model and config.json, but meet connect error, it does not make sense. So I want to know does huggingface have some more setting to download file? ```` package torch.tr import java.io.FileOutputStream import java.ne... | https://github.com/huggingface/huggingface_hub/issues/3117 | open | [] | 2025-05-26T10:00:25Z | 2025-06-15T14:55:48Z | null | mullerhai |
huggingface/agents-course | 510 | anyone can run unit 1 dumm agent notebook???? | <img width="1226" alt="Image" src="https://github.com/user-attachments/assets/1813be3d-0d73-478e-86fa-11304e796614" /> | https://github.com/huggingface/agents-course/issues/510 | closed | ["question"] | 2025-05-25T03:00:04Z | 2025-06-25T09:03:52Z | null | chaoshun2025 |
pytorch/torchtitan | 1,223 | How to pretrain from scratch a Qwen 2.5 7B-base model using Torchtitan? | HI team, Thank you for the excellent work! Could you please tell me where to find example scripts/templates for pretraining from scratch a Qwen 2.5 7B-base model using Torchtitan? Thanks again! | https://github.com/pytorch/torchtitan/issues/1223 | closed | [] | 2025-05-25T00:42:15Z | 2025-08-21T03:18:41Z | null | tjoymeed |
huggingface/transformers | 38,346 | Why is return_assistant_tokens_mask and continue_final_message incompatible? | I'm currently authoring a new chat template, and while debugging encountered the check for this, however when uncommenting the check, the resulting mask and template both seem to still be correct. So I'm curious as to why or whether this check is needed at all? I can see it was introduced in [the original PR](https://... | https://github.com/huggingface/transformers/issues/38346 | closed | [] | 2025-05-24T23:44:13Z | 2025-07-02T08:03:11Z | 2 | nyxkrage |
huggingface/candle | 2,967 | Logit Discrepancy Between Candle and PyTorch When Using XLM-RoBERTa Model | When running the same XLM-RoBERTa model (`s-nlp/xlmr_formality_classifier` - [HF](https://huggingface.co/s-nlp/xlmr_formality_classifier) ) in both Candle and PyTorch, I'm observing significant differences in the logits produced by the model's classification head for identical inputs. Is this expected behavior? See [th... | https://github.com/huggingface/candle/issues/2967 | closed | [] | 2025-05-24T17:24:33Z | 2025-05-26T10:45:24Z | 2 | jpe90 |
huggingface/diffusers | 11,607 | with a custom attention processor for Flux.dev, inference time changes when manually load and inject the transformer model into a flux pipeline versus let the flux pipeline constructor load the transformer internally. | With a custom attention processor for Flux.dev transformer, the inference time is different between the following two ways: 1. Manually load and inject the transformer into a flux.dev pipeline 2. Let the pipeline constructor load the transformer internally The inference time of the first way is about 15% slower than... | https://github.com/huggingface/diffusers/issues/11607 | closed | [] | 2025-05-24T06:42:11Z | 2025-05-26T01:27:00Z | 1 | LinchuanXuTheSEAAI |
huggingface/transformers | 38,326 | Allow `MllamaModel` to accept `pixel_values` and `inputs_embeds` | ### Feature request `MllamaModel` does not allow users to pass `pixel_values` and `inputs_embeds` simultaneously: https://github.com/huggingface/transformers/blob/54cd86708d2b63a1f696ee1c59384a2f04100f57/src/transformers/models/mllama/modeling_mllama.py#L1702-L1705 However, commenting out those lines and running the ... | https://github.com/huggingface/transformers/issues/38326 | closed | ["Feature request"] | 2025-05-23T15:26:28Z | 2025-05-27T16:33:57Z | 1 | dxoigmn |
pytorch/audio | 3,918 | `io.UnsupportedOperation: seek` when using `torchaudio.io.StreamWriter` with a File-like object | ### 🐛 Describe the bug In [the tutorial for `StreamWriter`](https://docs.pytorch.org/audio/stable/tutorials/streamwriter_basic_tutorial.html#file-like-objects), it is clearly stated that `StreamWriter` works with File-like object that implements `io.RawIOBase.write`. However, when I used `StreamWriter` with the [Goog... | https://github.com/pytorch/audio/issues/3918 | open | [] | 2025-05-23T15:24:45Z | 2025-05-23T15:40:48Z | 0 | digicosmos86 |
huggingface/transformers | 38,323 | `PYTHONOPTIMIZE=2` seems not to work with `transformers`-based library | ### System Info I am currently having the latest package install. torch 2.6.0+cu124 transformers 4.51.3 sentence-transformers 4.1.0 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folde... | https://github.com/huggingface/transformers/issues/38323 | closed | ["bug"] | 2025-05-23T14:24:34Z | 2025-05-26T14:29:17Z | 1 | IchiruTake |
huggingface/candle | 2,965 | Are there any support for complex number? | Are there any support for complex number? | https://github.com/huggingface/candle/issues/2965 | closed | [] | 2025-05-23T09:33:47Z | 2025-11-23T22:16:54Z | 1 | hndrbrm |
huggingface/accelerate | 3,586 | Where is PartialState._shared_state initialized? | Hi! When I step through the code line by line, before this line ([entering into `__init__` of `AcceleratorState`](https://github.com/huggingface/accelerate/blob/v0.34.2/src/accelerate/state.py#L856)), `PartialState._shared_state` returns ``` {} ``` But after entering into `__init__` of `AcceleratorState`, `PartialStat... | https://github.com/huggingface/accelerate/issues/3586 | closed | [] | 2025-05-23T08:17:44Z | 2025-06-30T15:08:15Z | null | SonicZun |
pytorch/ao | 2,249 | int4_weight_only get plain weight are padded | I try to quantize a model with int4_weight_only, and want to get the plained weight, but found the weight has been padded. To reproduce it, run the following script: ```python import torch from transformers import TorchAoConfig, AutoModelForCausalLM model_name = "JackFram/llama-68m" quantization_config = TorchAoConfi... | https://github.com/pytorch/ao/issues/2249 | open | ["question", "quantize_"] | 2025-05-23T07:17:20Z | 2025-06-24T20:14:53Z | null | jiqing-feng |
huggingface/transformers | 38,300 | Will Gemma 3n be added to transformers? | ### Model description Question: Are there plans from Google or Huggingface to implement Gemma 3n in other frameworks? I've seen the LiteRT weights and Android App Link on Huggingface, and was wandering if it would be possible to convert the model architecture in the *.task file to a transformer pytorch Module? Perso... | https://github.com/huggingface/transformers/issues/38300 | closed | ["New model"] | 2025-05-22T15:26:20Z | 2025-06-30T07:07:53Z | 4 | TheMrCodes |
huggingface/transformers | 38,281 | KeyError in Llama-4-Maverick-17B-128E-Instruct-FP8 Inference with Offloading | ### Issue Description Loading `meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8` succeeds with `transformers==4.51.0`, but inference fails with `KeyError: 'model.layers.37.feed_forward.experts.gate_up_proj'` during `model.generate`. This occurs on 4x NVIDIA RTX A6000 (~196GB VRAM, CUDA 12.4, Python 3.12.3, Ubuntu 24.0... | https://github.com/huggingface/transformers/issues/38281 | closed | [] | 2025-05-22T05:45:30Z | 2025-07-27T08:03:11Z | 4 | pchu2025 |
pytorch/xla | 9,236 | make README work for people using python 3.12/13 | ## 📚 Documentation The installation instructions in README fail if the user has python 3.12 or 3.13 as the default. (Currently pytorch-xla only works with python 3.8-3.11.) We should: - document the requirement for the python version. - add workaround instructions for people whose default python version is not 3.8-... | https://github.com/pytorch/xla/issues/9236 | open | ["documentation"] | 2025-05-22T00:33:29Z | 2025-05-22T16:09:41Z | 4 | zhanyong-wan |
huggingface/transformers | 38,268 | Group beam search with sampling? | ### Feature request In the current generation code, group beam search is necessarily greedy. From a theoretical point of view, it is not very clear why that should be the case, since the diversity penalty is applied on the logits anyway, yielding a full distribution from which sampling can still be performed. ### Mot... | https://github.com/huggingface/transformers/issues/38268 | open | ["Feature request"] | 2025-05-21T18:08:59Z | 2025-06-06T18:11:13Z | 4 | adrian-valente |
huggingface/candle | 2,961 | Shape Mismatch in MatMul During Forward Pass of ModernBertForSequenceClassification | ModernBertForSequenceClassification model (hidden size = 768, sequence length = 128) to categorize text into one of classes. During the initial training epoch, however, the forward pass fails with a “shape mismatch in matmul” error. Is there any way to solve this? #Error log Tokenized shape: [4, 128] Attention mask ... | https://github.com/huggingface/candle/issues/2961 | closed | [] | 2025-05-21T14:25:07Z | 2025-06-08T12:11:46Z | 2 | whitebox2 |
pytorch/pytorch | 154,027 | How to add custom attributes to torch tensor? | ### 🚀 The feature, motivation and pitch How can I add custom attributes like device_local or host_local to a PyTorch tensor without affecting TensorImpl or StorageImpl? I have a use case where I need to convert an external tensor into a PyTorch tensor while preserving such properties ### Alternatives _No response_ ... | https://github.com/pytorch/pytorch/issues/154027 | closed | [] | 2025-05-21T09:13:16Z | 2025-05-21T13:42:11Z | null | bailuan |
pytorch/vision | 9,079 | Build pytorch trunk from source and build vision from source makes `import torchvision;` fail | ### 🐛 Describe the bug If I build pytorch from trunk (2.8+1478d0185c29) and build vision from source, I can't run `import torchvision;`. ``` import torchvision ``` will report: `RuntimeError: operator torchvision::nms does not exist`. It will succeed if I replace the version of pytorch from trunk to branch `releas... | https://github.com/pytorch/vision/issues/9079 | open | [] | 2025-05-21T03:25:05Z | 2025-09-02T15:27:37Z | 3 | ChuanqiXu9 |
pytorch/pytorch | 154,009 | SourcelessBuilder.create does not know how to wrap <class '__main__.InFlexData'> | ### 🐛 Describe the bug I am trying to use torch compile on my functions and encounter this issue. I attached a minimum test program so anyone can reproduce the issue. ```python from dataclasses import dataclass import torch @dataclass(frozen=True) class BaseFlexData: dtype: torch.dtype \| None = None def ... | https://github.com/pytorch/pytorch/issues/154009 | closed | ["triaged", "oncall: pt2", "module: dynamo", "dynamo-dataclasses", "vllm-compile", "module: vllm"] | 2025-05-21T02:34:19Z | 2025-10-24T16:39:07Z | null | zyongye |
huggingface/transformers | 38,243 | <spam> | We are looking for an experienced Machine Learning Engineer for a BTC/USDT prediction project using CNN, LSTM, and Transformers. The goal is to forecast cryptocurrency price movements with a target accuracy of 90%+. More details here: [ ](https://gist.github.com/DandBman/c76a548b1972da50ffe6bbdd93fdd613) | https://github.com/huggingface/transformers/issues/38243 | closed | [] | 2025-05-20T22:14:11Z | 2025-05-21T13:14:41Z | 0 | DandBman |
huggingface/diffusers | 11,590 | Infinite (not literally) length video creation using LTX-Video? | First of all thanks to Aryan (0.9.7 integration) and DN6 (adding GGUF). Model is quite good and output is also promising. I need help in creating continuous video using the last frame. 1 trick is to generate the video, extract the last frame and do inference. Is there any easy way where I can do this in loop. My thou... | https://github.com/huggingface/diffusers/issues/11590 | closed | [] | 2025-05-20T13:37:36Z | 2025-05-20T19:51:20Z | 1 | nitinmukesh |
pytorch/ao | 2,228 | [Quant] Can quant not be decomposed on inductor? | torch.ops.torchao.dequantize_affine decomposed to convert_element_type and mul. Inductor will do constant_fold before pattern matching. On constant_fold, inductor replace fp8 weight and some previous operations with fp32 weight. Is this as expected? Now register_decomposition on [register_decomposition](https://github.c... | https://github.com/pytorch/ao/issues/2228 | closed | ["question", "triaged"] | 2025-05-20T09:25:54Z | 2025-06-25T08:22:25Z | null | shiyang-weng |
huggingface/agents-course | 501 | [BUG] Notebook on HF Hub is not updated | "Workflows in LlamaIndex" [course page](https://huggingface.co/learn/agents-course/unit2/llama-index/workflows#creating-workflows) is referring notebook on [HF Hub](https://huggingface.co/agents-course/notebooks/blob/main/unit2/llama-index/workflows.ipynb), which is not the updated version from [GitHub](https://github.... | https://github.com/huggingface/agents-course/issues/501 | closed | ["question"] | 2025-05-20T06:45:26Z | 2025-05-29T05:28:46Z | null | karenwky |
huggingface/open-r1 | 649 | how to evaluate use local models and datasets? | I change the readme eval command like following: **MODEL=./deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilization=0.8,generation_parameters={max_new_tokens:32768,temperature:0.6,top_p:0.95}" OUTPUT_DIR=./data/evals/ # AIME 2024 TASK=aime24... | https://github.com/huggingface/open-r1/issues/649 | open | [] | 2025-05-20T05:57:29Z | 2025-05-20T05:57:29Z | null | SiqingHe |
huggingface/lerobot | 1,130 | Drive mode reversed on calibration. | I had an issue where after calibrating drive_mode was reversed for one of my motors (0 vs. 1) as a result, moving the leader in one direction caused the follower to go the opposite direction. Saw some suggestions that moving it through the full range of motion resolved this but I wasn't able to get that to work. I cou... | https://github.com/huggingface/lerobot/issues/1130 | open | ["bug", "question", "robots"] | 2025-05-20T03:08:06Z | 2025-07-16T06:50:20Z | null | brainwavecoder9 |
pytorch/TensorRT | 3,525 | ❓ [Question] How to save the compiled while using torch.compile | For the example below, how do I save the compiled model? backend = "torch_tensorrt" tp_model = torch.compile( tp_model, backend=backend, options={ "truncate_long_and_double": True, "enabled_precisions": {torch.float32, torch.float16}, "use_python_runtime": True, "min_block_s... | https://github.com/pytorch/TensorRT/issues/3525 | open | ["question"] | 2025-05-20T03:06:53Z | 2025-05-20T15:15:27Z | null | klin2024 |
pytorch/torchchat | 1,543 | [IMPORTANT] torchchat sunset | **As of May 19th 2025, we are halting active development on torchchat.** The original intent of torchchat was to both demonstrate how to run LLM inference using PyTorch and improve the performance and functionality of the entire PyTorch ecosystem. Since torchchat’s launch, we’ve seen vLLM become the dominant player ... | https://github.com/pytorch/torchchat/issues/1543 | open | [] | 2025-05-20T02:41:03Z | 2025-05-20T11:06:54Z | 3 | Jack-Khuu |
huggingface/text-generation-inference | 3,233 | Docker image For llama cpp backend? | Hey, Is there any reason in particular why docker images for the llama-cpp backend do not get built along with new versions? It seems the backend has been ready for a while so just curious why images don't get built as part of the build pipeline. cc @mfuntowicz | https://github.com/huggingface/text-generation-inference/issues/3233 | open | [] | 2025-05-20T02:07:46Z | 2025-05-20T02:07:46Z | 0 | vrdn-23 |
pytorch/xla | 9,201 | Issue warning on set_mat_mul | On #9080 and #9103, there was a request to add a warning when user sets mat mul. I added it to the PR, but, the ci/ci now skips running documentation. This issue and PR will cherry pick the code changes to isolate them from docs, allowing code cicd to run on this PR, and docs build cicd to run on 9082. | https://github.com/pytorch/xla/issues/9201 | closed | ["documentation", "CI"] | 2025-05-19T21:21:48Z | 2025-05-21T18:38:49Z | 0 | yaoshiang |
pytorch/xla | 9,199 | Simplify device count external API calls | Currently there are many external APIs related getting the number of devices associate with PyTorch XLA. Those that I could find were: - "global_runtime_device_count": returns the total number of devices across all processes/hosts, but it has "@functools.lru_cache()" - "global_device_count": returns the total number o... | https://github.com/pytorch/xla/issues/9199 | open | ["usability", "documentation"] | 2025-05-19T19:26:46Z | 2025-06-04T05:52:28Z | 4 | pgmoka |
huggingface/diffusers | 11,580 | Can diffusers support loading and running FLUX with fp8 ? | This is how I use diffusers to load flux model: ``` import torch from diffusers import FluxPipeline pipe = FluxPipeline.from_pretrained( "/ckptstorage/repo/pretrained_weights/black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16, ) device = torch.device(f"cuda:{device_number}" if torch.cuda.is_available() e... | https://github.com/huggingface/diffusers/issues/11580 | open | [] | 2025-05-19T12:18:13Z | 2025-12-12T19:30:33Z | 5 | EmmaThompson123 |
huggingface/lerobot | 1,124 | How to add force data to lerobot and models? | As title said, I use a force sensor on SO100 arm and want to record the data in lerobot dataset then train with the force data. How to do it? force data looks like: a list: [x1, y1, z1, x2, y2, z2, x3, y3, z3, x4, y4, z4, x5, y5, z5] (15 d list) Thanks! | https://github.com/huggingface/lerobot/issues/1124 | closed | [] | 2025-05-19T07:48:20Z | 2025-05-19T13:36:44Z | null | milong26 |
huggingface/diffusers | 11,575 | Hidream Model loading takes too long — any way to speed it up? | Hi, thanks for this great project. I'm running Hidream with this library in a serverless environment and facing major delays during model loading. It can be very frustrating, especially for time-sensitive or ephemeral deployments. I've tried everything I could think of to reduce the loading time, but nothing has work... | https://github.com/huggingface/diffusers/issues/11575 | open | [] | 2025-05-19T00:49:00Z | 2025-05-23T12:55:05Z | 6 | Me-verner |
huggingface/optimum | 2,275 | ONNX export for ColPali | Hi Optimum, I have created a small tutorial how to export the ColPali late-interaction VLM in this [notebook](https://gist.github.com/kstavro/9bcdf930f0e69626dd5aa9aa5f09f867), but I think it shouldn't be too difficult to integrate it to Optimum as well. However, as far as I have seen, there is not much support for l... | https://github.com/huggingface/optimum/issues/2275 | closed | [] | 2025-05-18T18:56:22Z | 2025-06-11T13:56:43Z | 2 | kstavro |
huggingface/transformers | 38,190 | Gibberish generations with FSDP2 and MixedPrecisionPolicy | ### System Info ``` transformers.__version__='4.51.2' torch.__version__='2.6.0+cu124' sys.version='3.10.17 (main, Apr 16 2025, 15:03:57) [GCC 12.1.1 20220628 (Red Hat 12.1.1-3)]' ``` ### Who can help? @SunMarc @zach-huggingface ### Information - [ ] The official example scripts - [x] My own modified scripts ### T... | https://github.com/huggingface/transformers/issues/38190 | closed | ["bug"] | 2025-05-18T11:56:08Z | 2025-08-29T09:36:57Z | 17 | dlvp |
pytorch/torchtitan | 1,202 | How to run the tests in the tests directory | Looking for how-to documentation to run the tests in the tests directory. | https://github.com/pytorch/torchtitan/issues/1202 | closed | ["documentation", "good first issue"] | 2025-05-16T17:33:46Z | 2025-05-20T04:02:02Z | null | githubsgi |
huggingface/transformers | 38,181 | Add a way for `callbacks` to get `trainer` handler | When I want to implement differential privacy for the model, I customize the gradient clipping before `optimizer.step()`, then add custom noise to the model after `optimizer.step()`. I cannot get `Trainer.optimizer` in the `callback` function, it shows as `None`. Is it possible to get the reference of `Trainer` directly... | https://github.com/huggingface/transformers/issues/38181 | closed | [] | 2025-05-16T16:01:35Z | 2025-05-19T12:17:06Z | 1 | MinzhiYoyo |
pytorch/helion | 46 | [QST] Compiler Pipeline | @jansel @yf225 Very cool project. Is there any documentation on how helion leverages inductor to generate triton kernels? Trying to understand the overlap between dynamo and helion. My naive take is that dynamo parses general python code to an fx graph that is then passed to inductor whereas helion parses a subs... | https://github.com/pytorch/helion/issues/46 | closed | ["question"] | 2025-05-16T12:30:52Z | 2025-08-25T21:28:38Z | null | jeromeku |
pytorch/TensorRT | 3,522 | ❓ [Question] Manually Annotate Quantization Parameters in FX Graph | ## ❓ Question is there a way to manually annotate quantization parameters that will be respected throughout torch_tensorrt conversion (e.g. manually adding q/dq nodes, or specifying some tensor metadata) via dynamo? thank you! | https://github.com/pytorch/TensorRT/issues/3522 | open | ["question"] | 2025-05-16T07:38:33Z | 2025-06-02T15:35:40Z | null | patrick-botco |
huggingface/open-r1 | 645 | How to set vllm max-model-len? | I use qwen2.5-7b-Instruct to run grpo, and open yarn, to accommodate a longer window (greater than 32768). But following error exists: ... | https://github.com/huggingface/open-r1/issues/645 | closed | [] | 2025-05-16T03:28:50Z | 2025-06-12T08:45:15Z | null | huyongquan |
huggingface/transformers | 38,165 | Gemma 3 Pipeline does not accept dictionary with no images | ### System Info System info not really relevant as the bug is root caused in my description below. - `transformers` version: 4.51.3 - Platform: Windows-10-10.0.26100-SP0 - Python version: 3.11.9 - Huggingface_hub version: 0.31.2 - Safetensors version: 0.5.3 - Accelerate version: 1.7.0 - Accelerate config: not foun... | https://github.com/huggingface/transformers/issues/38165 | closed | ["bug"] | 2025-05-16T01:34:15Z | 2025-06-23T08:03:03Z | 6 | sheldonlai |
pytorch/xla | 9,178 | Code sample for basic mark sharding doesn't work | ## 📚 Documentation This document: https://docs.pytorch.org/xla/master/learn/api-guide.html#module-torch_xla.distributed.spmd has an important code sample to demonstrate sharding tensors across devices. It doesn't work - there are imports and setup that are not included. More broadly, all of these samples should go... | https://github.com/pytorch/xla/issues/9178 | open | ["distributed", "documentation"] | 2025-05-15T17:28:02Z | 2025-05-19T13:59:30Z | 0 | yaoshiang |
pytorch/xla | 9,177 | make CI build fast | ## 🐛 Bug The CI build takes ~2 hours, significantly affects dev velocity. Judging from https://github.com/pytorch/xla/actions/runs/14986142268/job/42100348515, the `Build PyTorch/XLA` step seems the bottleneck (it takes 1h15m and blocks a whole bunch of downstream test jobs). If we can speed this up, we may shove a... | https://github.com/pytorch/xla/issues/9177 | open | ["tech debt", "CI", "build"] | 2025-05-15T16:48:36Z | 2025-05-15T16:48:36Z | 0 | zhanyong-wan |
huggingface/lerobot | 1,114 | How to collect data and train the policy from Lerobot totally out of the leader arm only by learning from demonstration using the main arm such as XARM or UR series | | https://github.com/huggingface/lerobot/issues/1114 | closed | ["question", "robots", "stale"] | 2025-05-15T15:31:13Z | 2025-12-31T02:35:25Z | null | David-Kingsman |
pytorch/data | 1,489 | Implement a Cache node | ### 🚀 The feature At some point, there were a [`InMemoryCacheHolder`](https://docs.pytorch.org/data/0.9/generated/torchdata.datapipes.iter.InMemoryCacheHolder.html?highlight=cache#torchdata.datapipes.iter.InMemoryCacheHolder) datapipe. However, this has been removed from the new node design. This would be very usefu... | https://github.com/meta-pytorch/data/issues/1489 | open | [] | 2025-05-15T09:47:19Z | 2025-05-20T04:25:09Z | 1 | leleogere |
huggingface/transformers | 38,147 | How to check the number of tokens processed or the load of each expert in the Qwen3 MoE model during inference? | | https://github.com/huggingface/transformers/issues/38147 | closed | [] | 2025-05-15T09:21:29Z | 2025-05-15T13:36:53Z | null | wumaotegan |
huggingface/diffusers | 11,561 | FluxFillPipeline Support load IP Adapter. | ### Model/Pipeline/Scheduler description 'FluxFillPipeline' object has no attribute 'load_ip_adapter'. I really need this, Thanks! ### Open source status - [ ] The model implementation is available. - [ ] The model weights are available (Only relevant if addition is not a scheduler). ### Provide useful links for the ... | https://github.com/huggingface/diffusers/issues/11561 | closed | ["help wanted", "Good second issue"] | 2025-05-15T08:58:42Z | 2025-06-17T08:48:28Z | 6 | PineREN |
huggingface/lerobot | 1,111 | Unrecognized argument policy.path. How to load a pretrained model? | When I run this command: ``` python lerobot/scripts/control_robot.py --robot.type so100 --control.type record --control.fps 30 --control.single_task "Grasp a yellow tape and put it to yellow square." --control.repo_id a_cam_1/result --control.tags '["tutorial"]' --control.warmup_time_s 5 --control.episode_time_s 30 --c... | https://github.com/huggingface/lerobot/issues/1111 | closed | ["bug"] | 2025-05-15T03:13:27Z | 2025-06-24T06:20:08Z | null | milong26 |
pytorch/xla | 9,175 | Add documentation on multi-controller | ## 📚 Documentation Add documentation demonstrating multi-node coordination. Start with 2 machines, each with [n] TPUs, and demonstrate ssh into each machine to run the same script with an all-reduce. Reference necessary information for network configuration to allow two hosts to communicate on GCP (optional: AWS and ... | https://github.com/pytorch/xla/issues/9175 | open | ["documentation"] | 2025-05-15T02:50:00Z | 2025-05-19T13:58:20Z | 0 | yaoshiang |
huggingface/diffusers | 11,555 | `device_map="auto"` supported for diffusers pipelines? | ### Describe the bug Hey dear diffusers team, for `DiffusionPipline`, as I understand (hopefully correctly) from [this part of the documentation](https://huggingface.co/docs/diffusers/v0.33.1/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained.device_map), it should be possible to specify `device_ma... | https://github.com/huggingface/diffusers/issues/11555 | open | ["bug"] | 2025-05-14T16:49:32Z | 2025-05-19T09:44:29Z | 4 | johannaSommer |
pytorch/torchtitan | 1,192 | document the usage of environment variables | This is one of the community requests. Similarly, we should also document the inductor flag usages. Format can be a dedicated `.md` under `docs/`. | https://github.com/pytorch/torchtitan/issues/1192 | open | ["documentation", "better engineering", "high priority", "triage review"] | 2025-05-14T08:41:36Z | 2025-05-14T08:41:40Z | 0 | tianyu-l |
huggingface/lerobot | 1,107 | Does Pi0 use PaliGemma VLM pretrained model weights? | I attempted to finetune the Pi0 model, but noticed that it does not download the pretrained weights of Paligemma from Hugging Face. Specifically, I found that Pi0 initializes the VLM with: ```python self.paligemma = PaliGemmaForConditionalGeneration(config=config.paligemma_config) ``` instead of using: ```python Aut... | https://github.com/huggingface/lerobot/issues/1107 | closed | ["bug", "question", "policies"] | 2025-05-14T06:47:15Z | 2025-10-08T08:44:03Z | null | lxysl |
huggingface/lerobot | 1,106 | How to convert image mode to video mode lerobot dataset? | | https://github.com/huggingface/lerobot/issues/1106 | open | ["question", "dataset"] | 2025-05-14T03:54:42Z | 2025-08-08T16:42:33Z | null | hairuoliu1 |
huggingface/transformers.js | 1,316 | May I ask how to set the HF_TOKEN on the browser side? | ### Question May I ask how to set the HF_TOKEN on the browser side? ![Image](https://github.com/user-attachments/assets/5d90fe28-b6b7-462c-9799-edc05ba17f83) The following is my code: ``` const model = await AutoModel.from_pretrained("briaai/RMBG-2.0", { config: { model_type: "custom", }, headers: { '... | https://github.com/huggingface/transformers.js/issues/1316 | open | ["question"] | 2025-05-14T01:43:02Z | 2025-05-27T21:53:45Z | null | dengbupapapa |
huggingface/xet-core | 321 | How to resume DL of partial existing file using xet + huggingface-cli download if not previously downloaded using HF tools / cache? | How to resume DL of partial existing file using xet + huggingface-cli download if not previously downloaded using HF tools / cache? I guess there may be a way in the scenario I had but by my mistake apparently I chose some incorrect usage and caused the deletion of the 95% complete partial local file instead of resumi... | https://github.com/huggingface/xet-core/issues/321 | closed | [] | 2025-05-13T22:16:02Z | 2025-05-16T17:48:45Z | null | ghchris2021 |
huggingface/chat-ui | 1,819 | Correct syntax of .env: what are those backticks for multiline strings? | I have read the suggestion of checking discussions but I was unable to find an answer so something very basic looks like it is missing here. In the documentation there are many examples suggesting of putting long values in env var surrounded by backticks. However when I do this I get errors like: JSON5: invalid char... | https://github.com/huggingface/chat-ui/issues/1819 | open | ["support"] | 2025-05-13T12:21:43Z | 2025-05-23T09:37:09Z | 1 | sciabarracom |
huggingface/optimum | 2,262 | New Release to Support `transformers>=4.51.0`? | ### Feature request The latest release (`1.24.0`) is 4 months old. There has been around 38 commits since the last release. Will there be a new release soon? ### Motivation There is a medium CVE related to `transformers==4.48.1` that is the latest compatible version. GHSA-fpwr-67px-3qhx I am also blocked from upgra... | https://github.com/huggingface/optimum/issues/2262 | closed | [] | 2025-05-13T07:46:15Z | 2025-05-13T22:27:08Z | 2 | yxtay |
huggingface/lerobot | 1,101 | ValueError: No integer found between bounds [low_factor=np.float32(-0.001953125), upp_factor=np.float32(-0.001953125)] | ### System Info ```Shell 2025,ubantu,python3.10. when doing teleoperation ``` ### Information - [ ] One of the scripts in the examples/ folder of LeRobot - [x] My own task or dataset (give details below) ### Reproduction python lerobot/scripts/control_robot.py --robot.type=so100 --robot.cameras='{}' --contro... | https://github.com/huggingface/lerobot/issues/1101 | closed | ["question"] | 2025-05-13T05:06:35Z | 2025-06-19T14:25:08Z | null | qingx-cyber |
pytorch/torchtitan | 1,184 | [Question] CP and DP | Hi, this is a really great repo! Thanks for open-sourcing it! I am reading the code of how torchtitan handles the multi-dimensional parallelism. It seems the `cp` is a part of the mesh dimensions interacting with `dp_shard`, `dp_replicate` etc. My understanding of `cp` is that it is orthogonal to other parallelisms. F... | https://github.com/pytorch/torchtitan/issues/1184 | closed | ["question", "module: context parallel"] | 2025-05-13T03:30:10Z | 2025-05-13T17:19:22Z | null | galalalala |
huggingface/diffusers | 11,542 | What's the difference between 'example/train_text_to_image_lora.py' and 'example/research_projects/lora/train_text_to_image_lora.py' ? | I want to use the "--train_text_encoder" argument, but it only exists in the latter script. | https://github.com/huggingface/diffusers/issues/11542 | closed | [] | 2025-05-13T01:41:19Z | 2025-06-10T20:35:10Z | 2 | night-train-zhx |
huggingface/lerobot | 1,097 | UnboundLocalError: local variable 'action' referenced before assignment | May I ask where the problem lies? It occurred during the evaluation of the strategy and I have been searching for a long time without finding a solution. (lerobot) wzx@wzx:~/lerobot$ python lerobot/scripts/control_robot.py \ > --robot.type=so101 \ > --control.type=record \ > --control.fps=30 \ > --control.singl... | https://github.com/huggingface/lerobot/issues/1097 | closed | ["bug", "question"] | 2025-05-12T16:06:27Z | 2025-06-19T14:08:57Z | null | incomple42 |
huggingface/lerobot | 1,093 | List of available task | Thank you for your effort. Can you provide a list of available tasks (not just environments) for better understanding and usage? | https://github.com/huggingface/lerobot/issues/1093 | closed | ["question"] | 2025-05-10T06:18:21Z | 2025-10-17T12:03:32Z | null | return-sleep |
huggingface/transformers | 38,052 | `.to` on a `PreTrainedModel` throws a Pyright type check error. What is the correct way to put a model to the device that does not throw type check errors? | ### System Info (venv) nicholas@B367309:tmp(master)$ transformers-cli env Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.51.1 - Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.39 - Python version: 3.12.3 - Huggingface_hub version:... | https://github.com/huggingface/transformers/issues/38052 | closed | ["bug"] | 2025-05-09T19:01:15Z | 2025-06-29T08:03:07Z | null | nickeisenberg |
huggingface/finetrainers | 401 | how to train wan using multi-node | ### Feature request Hi! I still wonder the multi-node training of Wan2.1 14B. Do you support FSDP across nodes? ### Motivation Currently the memory restraint is very harsh for long video LoRA fine-tuning ### Your contribution N/A | https://github.com/huggingface/finetrainers/issues/401 | open | [] | 2025-05-09T18:11:07Z | 2025-05-09T18:11:07Z | null | Radioheading |
pytorch/torchtitan | 1,179 | FSDP2+DDP vs 2D Device Mesh FSDP2 | I have a question regarding FSDP2 + DDP, in torchtitan codebase it is used as FSDP2 -> DDP. In FSDP2 doc it is said that you can use 2d device mesh to apply MiCS equivalent in deepspeed which IIUC is FSDP wrapped in DDP. Is there any difference between those 2 methods that I should be aware of, or are they functionally... | https://github.com/pytorch/torchtitan/issues/1179 | closed | [] | 2025-05-09T18:02:56Z | 2025-05-10T16:52:15Z | 2 | S1ro1 |
pytorch/torchtitan | 1,177 | Can we support outputting checkpoints directly in .pt format? | Today we need to do an extra conversion step according to this README: https://github.com/pytorch/torchtitan/blob/main/docs/checkpoint.md ``` python -m torch.distributed.checkpoint.format_utils dcp_to_torch outputs/checkpoint/step-100 /tmp/checkpoint.pt ``` I think we should **provide an option for users to specify w... | https://github.com/pytorch/torchtitan/issues/1177 | open | ["enhancement", "module: checkpoint"] | 2025-05-09T16:01:50Z | 2025-08-21T03:18:12Z | 8 | andrewor14 |
huggingface/lerobot | 1,091 | Diffusion policy for different tasks instead of PushT | Thank you all for the great job. I want to know if I can train the diffusion policy for different tasks besides the PushT task. How to achieve that? If the task is a new custom task with custom dataset, is there any feasible solution to solve that? Thank you for your help! | https://github.com/huggingface/lerobot/issues/1091 | closed | ["question", "policies", "stale"] | 2025-05-09T15:44:20Z | 2025-12-31T02:35:27Z | null | siqisiqisiqisiqi |
huggingface/lerobot | 1,086 | push_to_the_hub error | ### System Info ```Shell - `lerobot` version: 0.1.0 - Platform: macOS-14.6.1-arm64-arm-64bit - Python version: 3.10.13 - Huggingface_hub version: 0.30.2 - Dataset version: 3.5.0 - Numpy version: 2.2.5 - PyTorch version (GPU?): 2.7.0 (False) - Cuda version: N/A - Using GPU in script?: <fill in> ``` ### Information - ... | https://github.com/huggingface/lerobot/issues/1086 | closed | ["question"] | 2025-05-09T03:48:09Z | 2025-10-17T11:55:25Z | null | jungwonshin |
pytorch/xla | 9,129 | set_mat_mul_precision is flakey | ## 🐛 Bug set_mat_mul_precision seems to allow switching the precision within a single process... sometimes, like in the precision_tutorial.py/ipynb. But in the unit test test_mat_mul_precision, there's an example of a test that switches the precision unsuccessfully. ## To Reproduce One unit test in test_mat_mul_pr... | https://github.com/pytorch/xla/issues/9129 | open | ["bug", "runtime"] | 2025-05-09T03:22:22Z | 2025-05-12T12:23:12Z | 1 | yaoshiang |
pytorch/xla | 9,118 | Add installation instructions to `benchmarks/README.md` | ## 📚 Documentation The [`benchmarks/README.md`](https://github.com/pytorch/xla/blob/master/benchmarks/README.md) does not contain the installation instructions, which is crucial for running the benchmarks. It requires installing the [`pytorch/benchmark`](https://github.com/pytorch/benchmark) repo and other libraries... | https://github.com/pytorch/xla/issues/9118 | closed | ["documentation", "benchmarking"] | 2025-05-08T17:51:31Z | 2025-05-22T17:40:05Z | 1 | haifeng-jin |
huggingface/trl | 3,424 | [GRPO] How to train model using vLLM and model parallelism on one node? | I tried to start GRPO trainer with vLLM and model parallelism on a single node with 8 GPUs (8 x A100 80G). My plan was to use one GPU as the vLLM server and other 7 GPUs to load model with model parallelism (e.g., `device_map="auto"`) ``` CUDA_VISIBLE_DEVICES=0 trl vllm-serve --model <model_path> & CUDA_VISIBLE_DEVIC... | https://github.com/huggingface/trl/issues/3424 | open | [] | 2025-05-08T17:22:19Z | 2025-12-02T22:48:13Z | null | zhiqihuang |
huggingface/lerobot | 1,082 | When add openvla oft policy? | | https://github.com/huggingface/lerobot/issues/1082 | closed | ["question", "policies", "stale"] | 2025-05-08T09:16:16Z | 2025-12-31T02:35:30Z | null | zmf2022 |
huggingface/text-generation-inference | 3,213 | Whether it supports Huawei Atlas300 graphics card? | ### System Info Does the tgi inference framework support Huawei Atlas300I graphics cards? Could you help come up with a compatible solution? ### Information - [x] Docker - [ ] The CLI directly ### Tasks - [ ] An officially supported command - [ ] My own modifications ### Reproduction . ### Expected behavior C... | https://github.com/huggingface/text-generation-inference/issues/3213 | open | [] | 2025-05-08T03:18:30Z | 2025-05-08T03:18:38Z | 0 | fxb392 |
pytorch/serve | 3,416 | Adding vendor RBLN(Rebellions) | TorchServe has a varying structure for different accelerator types through recently added #3371. Although [Rebellions](https://rebellions.ai/) provides a guide on how to utilize `TorchServe with the RBLN(Rebellions) NPUs` through its official document page(https://docs.rbln.ai/software/model_serving/torchserve/torchse... | https://github.com/pytorch/serve/issues/3416 | open | [] | 2025-05-08T00:49:45Z | 2025-05-08T00:49:45Z | 0 | rebel-ysseo |
pytorch/pytorch | 153,108 | Introduce unbacked friendly is_known_contiguous and use it instead of is_contiguous in all locations where there is a general path for not know_contiguous | title. cc @chauhang @penguinwu @ezyang @bobrenjc93 | https://github.com/pytorch/pytorch/issues/153108 | closed | ["triaged", "oncall: pt2", "module: dynamic shapes", "data dependent error"] | 2025-05-07T23:10:19Z | 2025-09-27T01:23:17Z | null | laithsakka |
huggingface/trl | 3,419 | [GRPO] How to do gradient accumulation over sampled outputs? | Greetings, I am wondering if we have this feature to do gradient accumulation over sampled outputs. For example, if I have `num_generations = 4`, so we have a single query `q1`, we have `completions = [o1, o2, o3, o4]`. I want to set that `per_device_train_batch_size=2, gradient_accumulation_steps=2`. So that the GPU o... | https://github.com/huggingface/trl/issues/3419 | closed | [] | 2025-05-07T17:49:36Z | 2025-05-09T06:26:29Z | null | SpaceHunterInf |