| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers | 38,918 | Lack of IDE-Specific Authentication Instructions in Hugging Face "Quickstart" Documentation | Explanation:
I’m currently exploring the Transformers library and want to understand its architecture in order to make meaningful contributions. I started with the Quickstart page, particularly the setup section, which provides instructions for getting started with the Hugging Face Hub.
However, I noticed that the do... | https://github.com/huggingface/transformers/issues/38918 | closed | [] | 2025-06-19T17:16:32Z | 2025-06-24T18:48:17Z | 4 | marcndo |
huggingface/datasets | 7,627 | Creating a HF Dataset from lakeFS with S3 storage takes too much time! | Hi,
I’m new to HF datasets and I tried to create datasets based on data versioned in **lakeFS** _(**MinIO** S3 bucket as storage backend)_
Here I’m using ±30000 PIL images from MNIST; however, it is taking around 12 min to execute, which is a lot!
From what I understand, it is loading the images into cache then buil... | https://github.com/huggingface/datasets/issues/7627 | closed | [] | 2025-06-19T14:28:41Z | 2025-06-23T12:39:10Z | 1 | Thunderhead-exe |
huggingface/lerobot | 1,351 | Need help with dataset and training. | # What this is for
I was attracted by smolvla and am new to smolvla_base, so I would like to ask a few questions before trying this model.
Several parts:
1) dataset
2) simulation
3) real world
## dataset
### Two cameras ?
I have read three datasets, including
https://huggingface.co/datasets/lerobot/svla_so101_pickplac... | https://github.com/huggingface/lerobot/issues/1351 | closed | [
"question",
"policies",
"dataset"
] | 2025-06-19T04:03:43Z | 2025-10-17T11:47:56Z | null | hbj52152 |
huggingface/candle | 2,997 | Implement Conv3D support for compatibility with Qwen-VL and similar models | Several vision-language models such as Qwen-VL and its variants make use of 3D convolution layers (Conv3D) in their architecture, especially for handling video or temporal spatial data. Currently, Candle does not support Conv3D operations, which makes it impossible to run or port such models natively.
In order to supp... | https://github.com/huggingface/candle/issues/2997 | open | [] | 2025-06-19T02:57:20Z | 2025-10-10T16:51:20Z | 1 | maximizemaxwell |
pytorch/torchrec | 3,114 | Which lightning strategy to use with torchrec optimizers? | Hi, thank you for this great work. I would like to know which [distributed strategy](https://github.com/Lightning-AI/pytorch-lightning/blob/76d3d22c5997398ffb5296cf500c723a176c0a06/src/lightning/pytorch/trainer/trainer.py#L95) to use with lightning trainer. I see two potential avenues:
1. DDP strategy: [following this ... | https://github.com/meta-pytorch/torchrec/issues/3114 | open | [] | 2025-06-18T19:25:10Z | 2025-06-19T06:04:22Z | 0 | JacobHelwig |
huggingface/accelerate | 3,633 | how to save a model with FSDP2 ? | Hello everyone, I’m confused about how to save model weights using FSDP2. I keep running into OOM (out-of-memory) issues when trying to save a trained 8B model with FSDP2. Interestingly, memory is sufficient during training, but saving the model requires too much memory.
I would like each rank to save only its own wei... | https://github.com/huggingface/accelerate/issues/3633 | closed | [] | 2025-06-18T11:41:05Z | 2025-06-18T15:36:37Z | null | colinzhaoxp |
huggingface/datasets | 7,624 | #Dataset Make "image" column appear first in dataset preview UI | Hi!
#Dataset
I’m currently uploading a dataset that includes an `"image"` column (PNG files), along with some metadata columns. The dataset is loaded from a .jsonl file. My goal is to have the "image" column appear as the first column in the dataset card preview UI on the :hugs: Hub.
However, at the moment, the `"im... | https://github.com/huggingface/datasets/issues/7624 | closed | [] | 2025-06-18T09:25:19Z | 2025-06-20T07:46:43Z | 2 | jcerveto |
huggingface/agents-course | 550 | [QUESTION] Diagram of the multi-agent architecture | [Unit 2.1 Multi-Agent Systems](https://huggingface.co/learn/agents-course/unit2/smolagents/multi_agent_systems#multi-agent-systems) contains [an image](https://mermaid.ink/img/pako:eNp1kc1qhTAQRl9FUiQb8wIpdNO76eKubrmFks1oRg3VSYgjpYjv3lFL_2hnMWQOJwn5sqgmelRWleUSKLAtFs09jqhtoWuYUFfFAa6QA9QDTnpzamheuhxn8pt40-6l13UtS0ddhtQ... | https://github.com/huggingface/agents-course/issues/550 | open | [
"question"
] | 2025-06-18T08:58:58Z | 2025-06-18T08:58:58Z | null | st143575 |
pytorch/vision | 9,110 | RoIHeads.postprocess_detections boxes slicing error occurs when removing predictions with the background label | ### 🐛 Describe the bug
**Bug Report: Incorrect Box Slicing in Faster R-CNN's postprocess_detections**
### Minimal Reproduction Code
```python
import torch
import torchvision
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
data = torch.zeros((1, 3, 1080, 1920), dtype=torch.float32)
d... | https://github.com/pytorch/vision/issues/9110 | closed | [
"bug",
"question"
] | 2025-06-18T08:55:33Z | 2025-09-04T14:52:39Z | null | FeiFanMoKe |
pytorch/pytorch | 156,191 | Dynamo does not know how to trace method `__len__` of class `<unknown type>` with torch.logging calls | ### 🐛 Describe the bug
Whenever we use any logging function, there is a graph break due to calling `__len__` on an unknown type. I dug into the logging source code and set a breakpoint, and the `root.handlers` object is definitely a standard list, but torch.compile isn't able to parse that.
I know that there is there ... | https://github.com/pytorch/pytorch/issues/156191 | open | [
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2025-06-17T16:54:31Z | 2025-06-17T19:27:01Z | null | aboubezari |
huggingface/lerobot | 1,337 | how to work with ur robot, and collect the data and fine tune the model? | https://github.com/huggingface/lerobot/issues/1337 | closed | [
"question",
"policies",
"dataset"
] | 2025-06-17T09:51:16Z | 2025-10-17T11:49:17Z | null | mmlingyu | |
huggingface/diffusers | 11,730 | Add `--lora_alpha` and metadata handling in training scripts follow up | With #11707, #11723 we pushed some small changes to the way we save and parse metadata for trained LoRAs, which also allow us to add a `--lora_alpha` arg to the Dreambooth LoRA training scripts, making LoRA alpha also configurable.
This issue is to ask for help from the community to bring these changes to the other t... | https://github.com/huggingface/diffusers/issues/11730 | closed | [
"good first issue",
"contributions-welcome"
] | 2025-06-17T09:29:24Z | 2025-06-24T10:58:54Z | 8 | linoytsaban |
huggingface/trl | 3,605 | How to convert my multiturn dialogue dataset? | I have created a multiturn dialogue dataset. During the training process, the assistant's reply needs to be based on the user's reply and historical records in the previous round. First, the user's reply is labeled, and then the corresponding reply sentence is generated. In other words, the assistant's reply needs to r... | https://github.com/huggingface/trl/issues/3605 | closed | [
"🏋 Reward"
] | 2025-06-17T09:07:47Z | 2025-09-22T17:46:35Z | null | Miaoqinghong |
huggingface/lerobot | 1,333 | SO-100 Follower: Severe wrist_roll motor instability causing unwanted rotation during teleoperation | ## Problem Description
The SO-100 Follower robot arm experiences severe instability in the `wrist_roll` motor during teleoperation, causing unwanted and uncontrollable rotation that significantly impacts usability. The motor exhibits extreme sensitivity and appears to be completely out of control in the default config... | https://github.com/huggingface/lerobot/issues/1333 | open | [
"question",
"policies"
] | 2025-06-17T07:10:23Z | 2025-12-05T12:17:16Z | null | TKDRYU104 |
huggingface/safetensors | 624 | Interest in Parallel Model Training and Xformers Saving Support (Bug?) (SOLVED) | ### Feature request
I would like to request official support for xformers (link: https://github.com/facebookresearch/xformers) and parallel model training: https://huggingface.co/docs/transformers/v4.13.0/en/parallelism for the safetensor saving file format if this does not currently exist. This safetensors saving err... | https://github.com/huggingface/safetensors/issues/624 | closed | [] | 2025-06-17T03:20:15Z | 2025-06-18T22:01:11Z | 1 | viasky657 |
huggingface/lerobot | 1,330 | Could you update the repository to enable the evaluation of SmolVLA's performance? | Could you update the repository to enable the evaluation of SmolVLA's performance? | https://github.com/huggingface/lerobot/issues/1330 | closed | [
"question",
"policies"
] | 2025-06-17T02:38:22Z | 2025-10-17T11:50:22Z | null | Pandapan01 |
huggingface/transformers | 38,851 | Should `compute_metrics` only run on the main process when doing DDP? | Hi, I want to know when doing training and evaluation on a multi-GPU setup (DDP using trainer and accelerate), does `compute_metrics` only need to be run on the main process?
The reason being that `trainer` itself already does `gather_for_metrics` ([here](https://github.com/huggingface/transformers/blob/v4.51-release... | https://github.com/huggingface/transformers/issues/38851 | closed | [] | 2025-06-17T00:09:43Z | 2025-07-25T08:02:33Z | 2 | TIE666 |
pytorch/xla | 9,371 | Failing `torch_xla._XLAC._xla_custom_call()` with `RuntimeError: Bad StatusOr access: UNIMPLEMENTED: No registered implementation for custom call to my_lib.my_op.default for platform CUDA` | ## ❓ Questions and Help
During execution of `torch_xla.stablehlo.exported_program_to_stablehlo()`, it fails with `RuntimeError: Bad StatusOr access: UNIMPLEMENTED: No registered implementation for custom call to my_lib.my_op.default for platform CUDA`. For more context, `my_op` is registered under a custom library as ... | https://github.com/pytorch/xla/issues/9371 | open | [
"bug",
"stablehlo"
] | 2025-06-16T21:01:05Z | 2025-06-24T18:55:50Z | 4 | hsjts0u |
pytorch/xla | 9,366 | PyTorch/XLA custom Triton kernel export to StableHLO | I'd like to export a model to StableHLO with a simple custom Triton kernel. Following the [guide here](https://docs.pytorch.org/xla/master/features/triton.html) on Pytorch/XLA with custom GPU kernels. However, I am encountering errors with the [torch.export](https://docs.pytorch.org/xla/master/features/stablehlo.html) ... | https://github.com/pytorch/xla/issues/9366 | open | [
"enhancement",
"xla:gpu",
"Triton"
] | 2025-06-16T18:28:42Z | 2025-06-23T19:55:53Z | 4 | annabellej |
huggingface/lerobot | 1,324 | Where is the control_robot.py script? | It is mentioned in the readme in the Walkthrough section that there is a script called control_robot.py. However, I cannot see it in the main branch | https://github.com/huggingface/lerobot/issues/1324 | closed | [] | 2025-06-16T15:57:34Z | 2025-06-18T11:06:11Z | null | AbdElRahmanFarhan |
huggingface/agents-course | 547 | [QUESTION] Possible mistake in transformers size in terms of parameters | Hey,
Thanks for the great course!
I have a question on what looks to me like an inconsistency.
In the [unit1/what-are-llms](https://huggingface.co/learn/agents-course/unit1/what-are-llms) section, when explaining the 3 types of transformers, in the Typical Size, we can see:
Decoders:
Typical Size: Billions (in the U... | https://github.com/huggingface/agents-course/issues/547 | open | [
"question"
] | 2025-06-16T14:43:29Z | 2025-06-16T14:43:29Z | null | jonoillar |
huggingface/transformers.js | 1,341 | FireFox compatible models | ### Question
I am fairly new to everything here and kind of just vibe code while I learn JS, but I use Zen browser and enjoy making it more like Arc over my summer. I was wondering if it was possible to expose the native Firefox AI and be able to prompt it, which I was able to do [here](https://github.com/Anoms12/Fire... | https://github.com/huggingface/transformers.js/issues/1341 | open | [
"question"
] | 2025-06-16T12:43:39Z | 2025-06-16T12:47:44Z | null | 12th-devs |
huggingface/lerobot | 1,319 | How to debug or inspect the health of Feetech servos in so101 setup? | Hi, I'm working with the `so101` robot and running into issues with the Feetech servos.
I would like to ask:
1. Are there any recommended tools or procedures for debugging Feetech servos?
2. How can I check the health of a servo (e.g. temperature, load, internal error)?
Any help or pointers would be greatly apprecia... | https://github.com/huggingface/lerobot/issues/1319 | open | [
"question",
"robots"
] | 2025-06-16T08:58:32Z | 2025-08-12T10:01:41Z | null | DIMARIA123 |
huggingface/lerobot | 1,318 | How to use my own dataset to train pi0 or smolVLA | I have a dataset that I collected and converted to Lerobot format. This dataset has not been uploaded to huggingface. I want to use this dataset to train `pi0` or `smolvla`. How should I set it up?
I have tried to use only `dataset.root`, but it prompts that `dataset.repo_id` needs to be entered. What should I do? | https://github.com/huggingface/lerobot/issues/1318 | closed | [
"question",
"policies"
] | 2025-06-16T08:40:50Z | 2025-10-17T11:51:54Z | null | xliu0105 |
huggingface/lerobot | 1,316 | [Question] SmolVLA LIBERO / MetaWorld evaluation | Hello, thank you for open sourcing this wonderful repository. I have read the SmolVLA paper impressively and tried to run some evaluations.

In Section 4.5 of the paper, under Simulation Evaluation, it seems that you have fine-tu... | https://github.com/huggingface/lerobot/issues/1316 | closed | [
"question",
"policies",
"simulation"
] | 2025-06-16T06:28:50Z | 2025-12-10T22:11:17Z | null | tykim0507 |
huggingface/agents-course | 546 | [QUESTION] Can i solve this final assignment with free versions? | First, the **best way to get a response fast is to ask the community** in our Discord server: https://www.hf.co/join/discord
However, if you prefer, you can ask here, please **be specific**.
I would like to solve the final assignment, but I failed with free tools. I tried to take inspiration from leaderboard toppers; they us... | https://github.com/huggingface/agents-course/issues/546 | open | [
"question"
] | 2025-06-16T06:13:37Z | 2025-06-16T06:13:37Z | null | mehdinathani |
huggingface/datasets | 7,617 | Unwanted column padding in nested lists of dicts | ```python
from datasets import Dataset
dataset = Dataset.from_dict({
    "messages": [
        [
            {"a": "...",},
            {"b": "...",},
        ],
    ]
})
print(dataset[0])
```
What I get:
```
{'messages': [{'a': '...', 'b': None}, {'a': None, 'b': '...'}]}
```
What I want:
```
{'messages': [{'a': '... | https://github.com/huggingface/datasets/issues/7617 | closed | [] | 2025-06-15T22:06:17Z | 2025-06-16T13:43:31Z | 1 | qgallouedec |
pytorch/torchtitan | 1,301 | Slow checkpoint saving time (6 mins to save an 8B model checkpoint in sync mode) | It takes ~6 minutes to save a checkpoint using non async mode. Is this expected?
### Sync mode
```
[rank0]:[titan] 2025-06-15 21:31:48,968 - root - INFO - TensorBoard logging enabled. Logs will be saved at ./outputs/tb/20250615-2131
[rank0]:[titan] 2025-06-15 21:31:48,969 - root - INFO - CUDA capacity: NVIDIA H100 ... | https://github.com/pytorch/torchtitan/issues/1301 | closed | [
"question",
"module: checkpoint"
] | 2025-06-15T21:42:47Z | 2025-06-23T16:34:52Z | null | vwxyzjn |
huggingface/transformers.js | 1,340 | Audio-to-Audio task | ### Question
Hi there.
I would like to know how to run **Audio-to-Audio models** with _transformers.js_.
I haven't had any success finding material about this. If there is no way, is there a schedule for adding this?
Thanks! | https://github.com/huggingface/transformers.js/issues/1340 | open | [
"question"
] | 2025-06-15T17:58:54Z | 2025-10-13T04:45:39Z | null | LuSrodri |
huggingface/open-r1 | 677 | Error from E2B executor: cannot access local variable 'sandbox' where it is not associated with a value | Hi there,
I encountered a bug while following the sandbox setup instructions exactly as provided. Here’s what I’m seeing:

Has anyone experienced this before? Any advice on how to resolve it would be greatly appreciated!
Thank ... | https://github.com/huggingface/open-r1/issues/677 | closed | [] | 2025-06-14T19:08:22Z | 2025-07-22T06:55:38Z | null | juyongjiang |
pytorch/examples | 1,355 | `language_translation` has a typo which makes the loaded tgt tensor invalid | For the `_yield_token` implementation in `src/data.py`, the third argument `src` is expected to be `True` or `False`
```
# Turns an iterable into a generator
def _yield_tokens(iterable_data, tokenizer, src):
    # Iterable data stores the samples as (src, tgt) so this will help us select just one language or the other
in... | https://github.com/pytorch/examples/issues/1355 | closed | [] | 2025-06-14T12:13:35Z | 2025-06-16T13:55:52Z | 0 | zwzmzd |
pytorch/xla | 9,356 | Transition torch_xla::ShardingSec to torch_xla::OpSharding | This is primarily for the sake of documentation and consistency. | https://github.com/pytorch/xla/issues/9356 | open | [
"distributed",
"documentation"
] | 2025-06-13T23:07:34Z | 2025-06-13T23:07:34Z | 0 | pgmoka |
pytorch/TensorRT | 3,571 | ❓ [Question] Can I export a serialized engine from Torch-TensorRT targeting TensorRT 10.3.0.26? | ## ❓ Question
Hello, I am attempting to export a serialized engine from Torch-TRT. I require TensorRT version 10.3.0.26, as I am planning to use this engine with a Nvidia DeepStream container that requires that TensorRT version. I attempted to use torch-tensorrt==2.5.0, but this version is listed as using builtin Tens... | https://github.com/pytorch/TensorRT/issues/3571 | closed | [
"question"
] | 2025-06-13T16:44:40Z | 2025-06-16T20:06:15Z | null | geiche735 |
pytorch/torchtitan | 1,291 | Using official HuggingFace script to convert DCP weights to HF format, the outputs are not human-readable | DCP -> torch (in PyTorch, see https://github.com/pytorch/torchtitan/blob/main/docs/checkpoint.md)
torch -> HF (from [HF](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py), although missing params.json if saved from DCP)
 | Hi,
For the llama3-8b model (which has GQA, with num_kv_heads=8, num_heads=32), I see the KV replication being done inside the Attention module in model.py
Will this lead to additional communication volume for ring attention (with passKV) wherein we'll be circulating 32 heads instead of 8?
Afaik flash attention kern... | https://github.com/pytorch/torchtitan/issues/1283 | open | [
"question",
"module: context parallel"
] | 2025-06-11T22:18:04Z | 2025-06-12T16:08:59Z | null | rghadia |
huggingface/transformers.js | 1,339 | Model is cached, but still reloads from network? | ### Question
I have this code in a React project :
```
import { env, pipeline } from "@xenova/transformers";
const model = await pipeline("translation", "Xenova/opus-mt-de-en");
let transText = await model("hallo, ich bin hier");
```
When I inspect the browser cache, I see relevant files in "cache storage". (xenov... | https://github.com/huggingface/transformers.js/issues/1339 | closed | [
"question"
] | 2025-06-11T16:19:26Z | 2025-06-27T06:06:25Z | null | patrickinminneapolis |
huggingface/peft | 2,583 | Lora transfer learning | Hello, I am training a lora model with the flux fill pipeline using diffusers+peft+accelerate. I already have a general-purpose lora model for my application, which was trained for 5k steps on a large dataset. Now, I want to do transfer learning to finetune on a very small dataset but want to train from the previous lora mode... | https://github.com/huggingface/peft/issues/2583 | closed | [] | 2025-06-11T12:00:25Z | 2025-07-20T15:04:05Z | 4 | hardikdava |
huggingface/transformers | 38,750 | Is it a good choice to early error when `output_attentions=True` and attn implementation not equal to `eager` | ### System Info
Before this PR [38288](https://github.com/huggingface/transformers/pull/38288), the program would run smoothly even when we set `output_attentions=True` and the attn implementation was not `eager`, as it would fall back to eager mode; after this PR, it throws an error directly: [L342](https://github.c... | https://github.com/huggingface/transformers/issues/38750 | closed | [
"bug"
] | 2025-06-11T11:05:48Z | 2025-06-25T08:00:06Z | 2 | kaixuanliu |
huggingface/lerobot | 1,262 | use smolVLA, How to know the current task is completed | I use smolVLA to do a wiping task, it will keep doing the task again and again, how to judge the task is completed, thank you | https://github.com/huggingface/lerobot/issues/1262 | open | [
"question",
"policies"
] | 2025-06-11T08:48:03Z | 2025-08-12T10:04:14Z | null | haoyankai |
huggingface/transformers.js | 1,338 | Question about supporting Float16Array | ### Question
I am trying transformers.js with WebGPU. The performance is great, but I found that transformers.js returns a Float32Array where the model is quantized to `fp16`:
```javascript
const extractor = await pipeline(
  "feature-extraction",
  "bge-small-zh-v1.5",
  {
    device: "webgpu",
dty... | https://github.com/huggingface/transformers.js/issues/1338 | open | [
"question"
] | 2025-06-11T07:29:19Z | 2025-07-03T05:50:56Z | null | xmcp |
huggingface/transformers | 38,745 | [Bug][InformerForPredict] The shape will cause a problem | ### System Info
When I set `input_size = 1` in the InformerConfig, I found a bug, but I don't know how to fix it.
- Function Name : `create_network_inputs`
```
time_feat = (
    torch.cat(
        (
            past_time_features[:, self._past_length - self.config.context_length :, ...],
... | https://github.com/huggingface/transformers/issues/38745 | closed | [
"bug"
] | 2025-06-11T07:22:06Z | 2025-07-20T11:41:45Z | 11 | 2004learner |
huggingface/transformers | 38,740 | [DOCS] Add `pruna` as optimization framework | ### Feature request
Have a section on Pruna AI within the documentation. We did [a similar PR for diffusers](https://github.com/huggingface/diffusers/pull/11688) and thought it would be nice to show how to optimize transformers models too.
### Motivation
Have a section on Pruna AI within the documentation to show... | https://github.com/huggingface/transformers/issues/38740 | open | [
"Feature request"
] | 2025-06-11T04:52:33Z | 2025-07-16T08:56:52Z | 8 | davidberenstein1957 |
huggingface/sentence-transformers | 3,390 | How to create a customized model architecture that fits sentence-transformer's training framework? | I'd like to train a two tower model that takes categorical features, floats features in one tower, and the other tower just encodes a document using an out of the box embedding. Then the outputs from both towers are feed into sentence transformers loss function. All the training configuration should reuse sentence tr... | https://github.com/huggingface/sentence-transformers/issues/3390 | open | [] | 2025-06-11T03:07:42Z | 2025-06-12T05:05:54Z | null | HuangLED |
pytorch/examples | 1,353 | tensor_parallel_example.py and sequence_parallel_example.py | The primary difference between the two files are as follows. The TP case , only see 1 allreduce per iteration - is that what is expected ? Seems to be same as DDP ! In the SP case, see 1 allgather and 1 reduce -scatter per iteration.
```
# Custom parallelization plan for the model
sp_model = parallelize_module(... | https://github.com/pytorch/examples/issues/1353 | open | [] | 2025-06-11T01:10:08Z | 2025-10-30T09:12:25Z | 2 | githubsgi |
huggingface/lerobot | 1,258 | Leader Servo Numbering different from script to documentation | First thank you for sharing this amazing work!
I am initializing the servos for the leader arm and I noticed that the numbering for the Wrist Roll and Wrist Pitch is different from the documentation when I ran the script:

wris... | https://github.com/huggingface/lerobot/issues/1258 | open | [
"documentation",
"question"
] | 2025-06-10T21:03:03Z | 2025-08-12T10:04:29Z | null | FaboNo |
huggingface/transformers | 38,733 | GRPO per_device_eval_batch_size can't be set as 1, when there is only 1 GPU | `eval batch size must be evenly divisible by the number of generations per prompt. ` When I only have one GPU, I cannot set `per_device_eval_batch_size=1` because there will be no reasonable G to choose from. Is it possible to automatically calculate a value similar to the number of gradient accumulation steps to achie... | https://github.com/huggingface/transformers/issues/38733 | closed | [] | 2025-06-10T14:58:11Z | 2025-06-11T09:45:32Z | 0 | CasanovaLLL |
huggingface/lerobot | 1,254 | [Feature Proposal] Planning a new user-friendly simulation environment for new tasks and data collection | Hello and bonjour! First and foremost, I really wanted to thank the team and community for making this wonderful repo. It really helps and guides beginners in this field. And I also wanted to contribute to the community.
Reading the issues here, I found a lot of people trying to run without a physical robot. But wit... | https://github.com/huggingface/lerobot/issues/1254 | open | [
"question",
"simulation"
] | 2025-06-10T12:36:13Z | 2025-08-12T10:04:42Z | null | Bigenlight |
huggingface/lerobot | 1,252 | Failed to sync read 'Present_Position' on ids=[2,3,4,6]after 1 tries. [TxRxResult] There is no status packet | my arm is koch,when I set the motors ids and baudrates, it report error:
Failed to sync read 'Present_Position' on ids=[2,3,4,6]after 1 tries. [TxRxResult] There is no status packet | https://github.com/huggingface/lerobot/issues/1252 | open | [
"question",
"robots"
] | 2025-06-10T10:21:05Z | 2025-09-01T02:24:25Z | null | huazai665 |
pytorch/torchtitan | 1,278 | [Qes] Is `torch.float32` as the default dtype when training? | I ran the example config, and found the parameter dtype of model is `torch.float32`. I don't understand why we use this as the default dtype, why not half precision? And I found the only way to change it to half precision is enabling fsdp and set mix dtype to half. | https://github.com/pytorch/torchtitan/issues/1278 | closed | [] | 2025-06-10T09:30:35Z | 2025-06-12T06:13:43Z | 2 | foreverlms |
huggingface/lerobot | 1,251 | where is async inference | hi,thx for your SmolVLA
I have a question:**where is the async inference?**
the eval.py in script doesn't seem for SmolVLA inference
hope for your early reply,thx in advance | https://github.com/huggingface/lerobot/issues/1251 | closed | [] | 2025-06-10T07:44:38Z | 2025-06-30T11:35:25Z | null | JuilieZ |
huggingface/transformers.js | 1,336 | node.js WebGPU compatibility and WASM performance in web enviornment | ### Question
Hello!
I've been running some performance benchmarks on whisper models and noticed that the web environment (running in react renderer in electron, separate worker with WASM) produced slower transcription results than the python counterpart (e.g. 1400ms vs 400ms per batch) - both utilizing the same numbe... | https://github.com/huggingface/transformers.js/issues/1336 | open | [
"question"
] | 2025-06-10T06:05:36Z | 2025-06-11T06:53:35Z | null | devnarekm |
huggingface/transformers | 38,709 | `get_video_features` in XCLIPModel always returns `pooled_output` | ### System Info
https://github.com/huggingface/transformers/blob/f4fc42216cd56ab6b68270bf80d811614d8d59e4/src/transformers/models/x_clip/modeling_x_clip.py#L1376
Hi
The `get_video_features` function is hardcoded to always return the `pooled_output`. But sometimes, it might be beneficial to get the `last_hidden_state... | https://github.com/huggingface/transformers/issues/38709 | closed | [
"bug"
] | 2025-06-10T00:51:37Z | 2025-07-18T08:02:50Z | 4 | Vishu26 |
huggingface/lerobot | 1,242 | SmolVLA Gym Simulation - Release? | Hello,
I've trained smolvla_base for 200K steps. I'm trying to run inference and visualize it like we do for aloha or pusht. Could anyone guide me on this?
I don't have a robot arm, so Gym simulation is something I'm looking for; when will it be released? | https://github.com/huggingface/lerobot/issues/1242 | closed | [
"question",
"policies",
"visualization"
] | 2025-06-09T13:05:38Z | 2025-10-17T11:00:57Z | null | Jaykumaran |
huggingface/smollm | 78 | how to continuously pretrain VLM base model | As titled.
How can I pretrain a VLM base model? | https://github.com/huggingface/smollm/issues/78 | open | [
"Image",
"Video"
] | 2025-06-09T07:04:57Z | 2025-07-29T12:50:50Z | null | allenliuvip |
huggingface/text-generation-inference | 3,259 | Enable passing arguments to chat templates | ### Feature request
I would like to enable passing parameters to a chat template when using the messages API. Something like:
```python
qwen3_model = HuggingFaceModel(...)
predictor = qwen3_model.deploy(...)
predictor.predict({
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
... | https://github.com/huggingface/text-generation-inference/issues/3259 | open | [] | 2025-06-09T06:04:27Z | 2025-06-09T07:53:17Z | 2 | alexshtf |
huggingface/datasets | 7,600 | `push_to_hub` is not concurrency safe (dataset schema corruption) | ### Describe the bug
Concurrent processes modifying and pushing a dataset can overwrite each others' dataset card, leaving the dataset unusable.
Consider this scenario:
- we have an Arrow dataset
- there are `N` configs of the dataset
- there are `N` independent processes operating on each of the individual configs (... | https://github.com/huggingface/datasets/issues/7600 | closed | [] | 2025-06-07T17:28:56Z | 2025-07-31T10:00:50Z | 4 | sharvil |
huggingface/lerobot | 1,226 | 404 Not Found | [lerobot](https://github.com/huggingface/lerobot/tree/main)/[examples](https://github.com/huggingface/lerobot/tree/main/examples)
/10_use_so100.md/
This is supposed to be a tutorial but cannot be opened???
404 Not Found!!!
| https://github.com/huggingface/lerobot/issues/1226 | closed | [
"documentation",
"question"
] | 2025-06-07T09:02:37Z | 2025-06-08T21:26:07Z | null | luk-e158 |
huggingface/transformers | 38,656 | Potential Memory Leak or Caching in Fast Image Processor | ### System Info
Hi team,
Thank you for your great work on `transformers`!
While using the `AutoProcessor` with `use_fast=True`, I noticed that there seems to be a memory leak or possibly some form of persistent caching when processing images. Even after deleting the processor and clearing the CUDA cache, approximate... | https://github.com/huggingface/transformers/issues/38656 | closed | [
"bug"
] | 2025-06-07T08:46:48Z | 2025-08-12T13:02:37Z | 8 | yhyang201 |
huggingface/transformers | 38,654 | The visualization of image input in Qwen2.5-VL | The image input of Qwen2.5-VL is processed by processor and then saved as tensor in inputs['pixel_values'].
I tried to restore the image, using tensor in inputs['pixel_values'], but I found that the restored image patches were in disorder.
So how to restore the image from inputs['pixel_values'] in a proper way?
For ex... | https://github.com/huggingface/transformers/issues/38654 | closed | [] | 2025-06-07T08:15:44Z | 2025-06-10T09:04:04Z | 2 | Bytes-Lin |
pytorch/pytorch | 155,391 | how to save the fx graph with output tensor shapes ? | ### 🐛 Describe the bug
# When I use **f.write** to save the fx graph, it doesn't have output tensor shapes
> refer to https://www.doubao.com/chat/7948299479012098
```
with open("fx_graph.py", "w") as f:
    f.write(graph_module.code)
```
* its dump is similar to
```
def forward(self, inputs_1, labels_1):
view =... | https://github.com/pytorch/pytorch/issues/155391 | closed | [] | 2025-06-07T02:35:38Z | 2025-06-07T02:58:25Z | null | vfdff |
huggingface/lerobot | 1,223 | smolvla introduces an asynchronous inference stack decoupling perception and action prediction? | Why is this not implemented in the code? | https://github.com/huggingface/lerobot/issues/1223 | closed | [
"question",
"policies"
] | 2025-06-07T01:23:24Z | 2025-06-08T21:25:04Z | null | zmf2022 |
huggingface/transformers | 38,650 | Support of Qwen3 GGUF model | Hi, I am getting the following error when I want to use the GGUF model with Qwen3
"ValueError: GGUF model with architecture qwen3 is not supported yet."
I have the latest transformers and gguf-0.17.0
```
self.tokenizer = AutoTokenizer.from_pretrained(model_name, gguf_file= "Qwen3-0.6B-Q2_K_L.gguf",use_fast=True)
... | https://github.com/huggingface/transformers/issues/38650 | closed | [] | 2025-06-06T20:11:23Z | 2025-07-15T08:02:59Z | 2 | Auth0rM0rgan |
huggingface/diffusers | 11,675 | Error in loading the pretrained lora weights | Hi, I am using the script https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora_sdxl.py to train a lora.
An error is raised on https://github.com/huggingface/diffusers/blob/73a9d5856f2d7ae3637c484d83cd697284ad3962/examples/text_to_image/train_text_to_image_lora_sdxl.py#L131... | https://github.com/huggingface/diffusers/issues/11675 | closed | [] | 2025-06-06T17:09:45Z | 2025-06-07T07:40:14Z | 1 | garychan22 |
huggingface/text-generation-inference | 3,257 | if use chat.completions, text+image inference return incorrect output because of template issue | ### System Info
common in all platform
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
text-generation-launcher --model-id=llava-hf/llava-v1.6-mistral-7b-hf --max-input-tokens 4096 --max-batch-prefill-tokens 16384 -... | https://github.com/huggingface/text-generation-inference/issues/3257 | open | [] | 2025-06-06T13:06:20Z | 2025-06-06T13:11:22Z | 2 | sywangyi |
huggingface/nanotron | 372 | datatrove needs numpy>=2.0.0 but nanotron 0.4 requires numpy<2, how to fix? | https://github.com/huggingface/nanotron/issues/372 | open | [] | 2025-06-06T12:12:39Z | 2025-11-22T14:44:01Z | null | lxyyang | |
pytorch/xla | 9,303 | Runtime is already initialized. Do not use the XLA ' RuntimeError: Runtime is already initialized. Do not use the XLA device before calling xmp.spawn. | ## 🐛 Bug
-- Block 13 ALT: Direct xmp.spawn (Consolidated) ---
torch_xla and xmp imported for Block 13.
Defining hyperparameters for training function...
Hyperparameters for training function defined.
Setting XLA/TPU specific environment variables for xmp.spawn...
XRT_TPU_CONFIG already set: localservice;0;localhost:51... | https://github.com/pytorch/xla/issues/9303 | open | ["question"] | 2025-06-06T01:12:22Z | 2025-06-10T23:04:30Z | null | pojoba02 |
pytorch/pytorch | 155,242 | Partitioner loses Inplace ops where source is constant | ### 🐛 Describe the bug
If backward contains some constant compute, e.g. result of joint constant propagation:
```
POST_JOINT_CONST_FOLDING:graph():
237 %primals_1 : [num_users=1] = placeholder[target=primals_1]
238 %primals_2 : [num_users=2] = placeholder[target=primals_2]
239 %tangents_1 : [num_users=1] = p... | https://github.com/pytorch/pytorch/issues/155242 | closed | ["triaged", "module: correctness (silent)", "module: aotdispatch"] | 2025-06-05T17:34:38Z | 2025-06-11T12:50:03Z | null | IvanKobzarev |
huggingface/transformers | 38,613 | MDX Errors | ### System Info
Ubuntu 24.04.2 LTS, CPython 3.11.12, transformers==4.53.0.dev0
@stevhliu I'm trying to contribute to the model cards. I forked the latest transformers and I ran the scripts, from the home page and then I want to the documents page. I'm having issues with the doc builder. I keep receiving the errors... | https://github.com/huggingface/transformers/issues/38613 | closed | ["bug"] | 2025-06-05T14:19:45Z | 2025-06-06T20:12:36Z | 7 | rileyafox |
pytorch/ao | 2,310 | [Question] Combining QAT and Sparsity Training | First of all, thank you for all the time and effort invested in this project to make (large) models more accessible.
I am fairly new to optimizing my models using sparsity, and therefore, wanted to ask if my understanding of this library is correct.
In general, I would like to train my model using sparsity and QAT.
Fo... | https://github.com/pytorch/ao/issues/2310 | closed | ["question"] | 2025-06-05T13:03:12Z | 2025-06-20T12:37:47Z | null | CaptainDario |
huggingface/diffusers | 11,661 | [BUG]: Using args.max_train_steps even if it is None in diffusers/examples/flux-control | ### Describe the bug
Under [https://github.com/huggingface/diffusers/tree/main/examples/flux-control](examples/flux-control) there are two files showing how to fine tune flux-control:
- [train_control_flux.py](https://github.com/huggingface/diffusers/blob/main/examples/flux-control/train_control_flux.py)
- [train_cont... | https://github.com/huggingface/diffusers/issues/11661 | closed | ["bug"] | 2025-06-05T07:18:06Z | 2025-06-05T09:26:26Z | 0 | Markus-Pobitzer |
huggingface/lerobot | 1,203 | Could you please upload the config.json file for smolvla? |
Could you please upload the config.json file for smolvla? Thank you very much!
FileNotFoundError: config.json not found on the HuggingFace Hub in lerobot/smolvla_base
| https://github.com/huggingface/lerobot/issues/1203 | closed | ["question"] | 2025-06-05T06:59:12Z | 2025-06-11T14:56:56Z | null | Pandapan01 |
huggingface/transformers | 38,601 | Contribute to Transformers on windows natively without WSL | ### System Info
### System info
OS: Windows 11
Python: 3.13.3 and 3.10
Git: 2.49.0
CMake: 4.0.2
Msys64: Pacman v6.1.0 - libalpm v14.0.0
Pip: 25.1.1
Setuptools: 80.9.0
Visual studio C++ build tools
### NOTE: I followed the steps here [Contribute to 🤗 Transformers](https://huggingface.co/docs/transformers/en/contrib... | https://github.com/huggingface/transformers/issues/38601 | closed | ["bug"] | 2025-06-05T04:14:12Z | 2025-07-27T08:02:54Z | 4 | ghost |
pytorch/torchtitan | 1,262 | Checkpointer Feature Enhancements | This document tracks and describes the essential checkpointing features still to be added to TorchTitan.
- [ ] **Full `state_dict` saving**
- Support exporting the complete (unsharded) model `state_dict`; many existing formats only handle full `state_dict`.
- https://github.com/pytorch/torchtitan/pull/1219 is WI... | https://github.com/pytorch/torchtitan/issues/1262 | open | ["enhancement", "better engineering", "module: checkpoint"] | 2025-06-04T20:44:27Z | 2025-08-21T03:20:05Z | 3 | fegin |
huggingface/diffusers | 11,657 | Custom Wan diffusion Lora runs without error but doesn't apply effect and gives warning: No LoRA keys associated to WanTransformer3DModel found with the prefix='transformer'. | ### Describe the bug
I run the diffusers pipe using the standard process with a custom diffusers trained lora:
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.scheduler = scheduler
pipe.load_lora_weights("lora/customdiffusers_lora.safetensors")
etc...
it runs without error but... | https://github.com/huggingface/diffusers/issues/11657 | closed | ["bug"] | 2025-06-04T19:50:14Z | 2025-09-12T03:32:17Z | 3 | st-projects-00 |
huggingface/transformers | 38,576 | A local variable 'image_seq_length' leading to UnboundLocalError: cannot access local variable 'image_seq_length' where it is not associated with a value | ### System Info
- `transformers` version: 4.52.3
- Platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35
- Python version: 3.12.2
- Huggingface_hub version: 0.32.2
- Safetensors version: 0.5.3
- Accelerate version: 0.26.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?):... | https://github.com/huggingface/transformers/issues/38576 | closed | ["bug"] | 2025-06-04T09:06:04Z | 2025-06-04T12:20:33Z | null | IceGiraffe |
huggingface/lerobot | 1,195 | ros2_control support | Hello,
I was thinking that it would be great to use the robot with ros2_control :
- to test code developped with the ROS2 framework:
- for education purposes : the robot is great, easily and not expensive to build (thank you for the work achieved), transporteable in a case, etc.
Do you have any knowledge of an exist... | https://github.com/huggingface/lerobot/issues/1195 | open | ["enhancement", "question"] | 2025-06-03T15:31:53Z | 2025-11-27T16:30:08Z | null | baaluidnrey |
huggingface/diffusers | 11,648 | how to load lora weight with fp8 transfomer model? | Hi, I want to run fluxcontrolpipeline with transformer_fp8 reference the code :
https://huggingface.co/docs/diffusers/api/pipelines/flux#quantization
```
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, FluxTransformer2DModel, FluxControlPipeline
from transformers import BitsAndB... | https://github.com/huggingface/diffusers/issues/11648 | open | [] | 2025-06-03T10:31:23Z | 2025-06-19T12:37:35Z | null | Johnson-yue |
huggingface/candle | 2,986 | How to reset gradient before each batch | In Pytorch, you would call `optimizer.zero_grad` to zero the gradients before every batch. How do you do this in candle? | https://github.com/huggingface/candle/issues/2986 | open | [] | 2025-06-03T10:17:52Z | 2025-06-03T10:17:52Z | null | lokxii |
huggingface/transformers | 38,544 | Paligemma model card needs update | Hi
I found a minor problem with paligemma model card. How can I raise a PR to fix it ? I am first time contributor. I raised PR. Whom should I mention to review it ?
https://huggingface.co/google/paligemma-3b-pt-896 | https://github.com/huggingface/transformers/issues/38544 | closed | [] | 2025-06-03T06:55:14Z | 2025-07-14T16:23:52Z | 7 | punitvara |
pytorch/torchtitan | 1,257 | Question about fixed std=0.02 initialization of `w1` in `moe.py` | Hi torchtitan team,
Thanks for the great work on this project! I had a question regarding a detail in the code at moe.py#L92
https://github.com/pytorch/torchtitan/blob/768cde131105bde624160029d808e94649faf0f4/torchtitan/experiments/llama4/model/moe.py#L92
I noticed that `w1` is initialized with a fixed standard devi... | https://github.com/pytorch/torchtitan/issues/1257 | open | ["question", "triage review"] | 2025-06-03T04:06:53Z | 2025-08-21T07:03:44Z | null | trestad |
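For context on the trade-off being asked about: a common alternative to a fixed std=0.02 is GPT-2-style depth scaling, where residual-output projections are shrunk by 1/sqrt(2·num_layers) so that summing over the residual branches keeps activation variance roughly constant at init. A sketch of the difference (variable names are illustrative, not torchtitan's):

```python
import math
import torch

def depth_scaled_std(base_std=0.02, num_layers=32):
    # GPT-2-style residual scaling: shrink output projections so that
    # 2*num_layers stacked residual branches don't inflate activation
    # variance at initialization.
    return base_std / math.sqrt(2 * num_layers)

torch.manual_seed(0)
w_fixed = torch.nn.init.trunc_normal_(torch.empty(1024, 1024), std=0.02)
w_scaled = torch.nn.init.trunc_normal_(
    torch.empty(1024, 1024), std=depth_scaled_std(num_layers=32)
)
print(w_fixed.std().item(), w_scaled.std().item())
```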
huggingface/transformers | 38,541 | `eager_attention_forward` and `repeat_kv` code duplication | I see the two functions appear in a lot of places in the code base. Shall we unify them into a single place?
And can we treat `eager_attention_forward` as another option in [`ALL_ATTENTION_FUNCTIONS`](https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L6186)? Any concerns? | https://github.com/huggingface/transformers/issues/38541 | closed | [] | 2025-06-03T00:57:16Z | 2025-06-10T10:27:25Z | 3 | ChengLyu |
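For readers landing here, the duplicated helper in question is small; a sketch of its contract (expanding grouped KV heads to match the query heads, equivalent to `repeat_interleave` on the head dimension):

```python
import torch

def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
    """Expand (batch, num_kv_heads, seq, head_dim) to
    (batch, num_kv_heads * n_rep, seq, head_dim) for grouped-query
    attention. Uses expand() so no copy happens until the reshape."""
    batch, num_kv_heads, slen, head_dim = hidden_states.shape
    if n_rep == 1:
        return hidden_states
    hidden_states = hidden_states[:, :, None, :, :].expand(
        batch, num_kv_heads, n_rep, slen, head_dim
    )
    return hidden_states.reshape(batch, num_kv_heads * n_rep, slen, head_dim)

kv = torch.randn(2, 4, 16, 64)   # 4 KV heads
out = repeat_kv(kv, n_rep=8)     # serve 32 query heads
assert out.shape == (2, 32, 16, 64)
```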
pytorch/tutorials | 3,373 | [BUG] Running `make html-noplot` yields errors. | ### Add Link
I ran the following command about 10 hours ago, around 12:20:00 utc and it gave me errors. (I am being specific about the time, because I was unable to find a release that I could point to).
`git clone --depth 1 https://github.com/pytorch/tutorials.git`
### Describe the bug
## What errors did you encoun... | https://github.com/pytorch/tutorials/issues/3373 | open | ["bug", "build issue"] | 2025-06-02T23:37:26Z | 2025-06-03T20:01:28Z | 5 | phonokoye |
huggingface/chat-ui | 1,843 | can you make a release? | The current codebase is far away from the official release in November, maybe you can stabilize and release current code? | https://github.com/huggingface/chat-ui/issues/1843 | open | ["enhancement"] | 2025-06-02T21:26:51Z | 2025-07-21T20:44:03Z | 1 | antonkulaga |
huggingface/transformers | 38,527 | Why do you remove sample_indices_fn for processor.apply_chat_template? | Just as shown in the picture, since 4.52 processor.apply_chat_template does no longer support sample_indices_fn but the args doc is still there.
<img width="712" alt="Image" src="https://github.com/user-attachments/assets/e055d5f5-4800-4eb7-8054-0f41a9be5707" /> | https://github.com/huggingface/transformers/issues/38527 | closed | [] | 2025-06-02T12:34:23Z | 2025-06-03T02:44:22Z | 1 | futrime |
huggingface/optimum | 2,284 | Error when exporting DinoV2 with Registers | When trying :
` python -m scripts.convert --quantize --model_id facebook/dinov2-with-registers-small`
I Got :
`ValueError: Trying to export a dinov2-with-registers model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggi... | https://github.com/huggingface/optimum/issues/2284 | closed | ["Stale"] | 2025-06-02T08:53:55Z | 2025-07-04T02:16:54Z | 1 | elkizana |
huggingface/agents-course | 523 | [QUESTION] The final quiz of Unit 1, always crashes with dataset not found | First, the **best way to get a response fast is to ask the community** in our Discord server: https://www.hf.co/join/discord
However, if you prefer you can ask here, please **be specific**.
Dataset 'agents-course/unit_1_quiz' doesn't exist on the Hub or cannot be accessed.
The full log is:
```
Traceback (most rec... | https://github.com/huggingface/agents-course/issues/523 | open | ["question"] | 2025-06-02T07:58:01Z | 2025-06-02T07:58:01Z | null | abcnishant007 |
huggingface/peft | 2,563 | Integrate Lily | ### Feature request
This request proposes integrating Lily (Low-Rank Interconnected Adaptation across Layers), accepted to ACL 2025 Findings, into the PEFT library.
Paper: https://arxiv.org/pdf/2407.09946
Repo: https://github.com/yibozhong/lily
### Motivation
Lily aims to directly make the rank of each indivi... | https://github.com/huggingface/peft/issues/2563 | closed | [] | 2025-06-02T07:23:30Z | 2025-12-18T14:03:32Z | 15 | yibozhong |
huggingface/lerobot | 1,180 | dataset training | How many episodes do you recommend making for each file when learning the dataset? Can I create about 400 episodes by putting different tasks in each episode? Or can I create the same task data for each file and combine multiple files? | https://github.com/huggingface/lerobot/issues/1180 | closed | ["question", "dataset"] | 2025-06-01T15:59:47Z | 2025-10-08T12:54:48Z | null | bruce577 |
huggingface/lerobot | 1,177 | [Question] Why using a kernel device for IP cameras? | I'm wondering why, when we have an IP camera (by using DroidCam on Android for instance), the team decided to plug the IP camera into a loopback device in `/dev/videoX` instead of directly reading the video stream in the code with Opencv `cv2.VideoCapture(url)`. I understand doing this allows controlling FPS & resoluti... | https://github.com/huggingface/lerobot/issues/1177 | closed | ["question", "robots", "stale"] | 2025-05-31T05:24:21Z | 2025-12-31T02:35:18Z | null | godardt |
pytorch/xla | 9,272 | Improve documentation for running benchmark unit tests | ## 📚 Documentation
Currently, in the `benchmarks/` directory, the `README.md` file only specified to use `make -C ...` to run the unit tests for the benchmarking code. The python tests like `test_benchmark_model.py` is not run.
We need better instructions on how to run the python unit tests.
Currently, I have to ad... | https://github.com/pytorch/xla/issues/9272 | open | ["documentation", "benchmarking"] | 2025-05-30T22:01:17Z | 2025-06-04T12:10:55Z | 1 | haifeng-jin |
huggingface/transformers | 38,501 | torch.compile fails for gemma-3-1b-it | ### System Info
- `transformers` version: 4.52.4
- Platform: Linux-6.15.0-1-MANJARO-x86_64-with-glibc2.41
- Python version: 3.12.8
- Huggingface_hub version: 0.32.3
- Safetensors version: 0.5.3
- Accelerate version: 1.7.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.... | https://github.com/huggingface/transformers/issues/38501 | closed | ["bug"] | 2025-05-30T21:01:41Z | 2025-06-02T20:45:54Z | 6 | InCogNiTo124 |
pytorch/xla | 9,269 | Torch model parameters as HLO constants | ## ❓ Questions and Help
Hello, I am wondering if there is a way to bake model parameters into the produced HLO model as constants. For Torch-XLA it seems like model parameters are treated as additional input args which makes it difficult to port this into openxla/xla for execution in cpp. The HLO produced from Jax alre... | https://github.com/pytorch/xla/issues/9269 | open | ["question"] | 2025-05-30T20:18:01Z | 2025-06-13T04:35:59Z | null | drewjenks01 |
huggingface/transformers | 38,500 | Unable to deploy Gemma 3 on AWS SageMaker due to lack of support in tranfomers release | hi,
it seems when i deploy the model
```
huggingface_model = HuggingFaceModel(
model_data=model_s3_uri,
role=role,
transformers_version="4.49.0",
pytorch_version="2.6.0",
py_version="py312",
)
predictor = huggingface_model.deploy(
instance_type="ml.g5.48xlarge",
initial_instance_cou... | https://github.com/huggingface/transformers/issues/38500 | closed | [] | 2025-05-30T17:10:22Z | 2025-07-08T08:02:37Z | 2 | ehrun32 |
huggingface/transformers | 38,499 | ModernBERT for MLM outputs incorrect hidden state shape. | ### System Info
When using `ModernBERTForMaskedLM` with `output_hidden_states=True` the hidden state is not correctly padded when it is returned. A minimal example is included below:
```
import torch
from transformers import AutoTokenizer, ModernBertForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("answerdotai/... | https://github.com/huggingface/transformers/issues/38499 | closed | ["bug"] | 2025-05-30T17:02:55Z | 2025-07-08T08:02:39Z | 2 | jfkback |
huggingface/lerobot | 1,174 | [Question] Multi-Rate Sensor and Discrete Event Handling in `lerobot` | Hello `lerobot` Team,
First off, huge thanks for building such an awesome open-source project!
I'm currently exploring `lerobot` for a project and have some critical questions regarding its data handling, specifically for multi-rate sensors and discrete events. My understanding from the README is that `lerobot` recor... | https://github.com/huggingface/lerobot/issues/1174 | open | ["question", "dataset"] | 2025-05-30T09:04:13Z | 2025-12-17T10:44:46Z | null | MilkClouds |
huggingface/transformers | 38,489 | VLM reverse mapping logic in modeling_utils.py save_pretrained not doing anything? | ### System Info
transformers version: 4.52.3
Platform: Ubuntu 24.04
Python version: 3.11.0
Huggingface_hub version: 0.32.2
Safetensors version: 0.5.3
Accelerate version: 1.7.0
Accelerate config: not found
DeepSpeed version: not installed
PyTorch version (GPU?): 2.7.0+cu126 (H100)
Tensorflow version (GPU?): not install... | https://github.com/huggingface/transformers/issues/38489 | closed | ["bug"] | 2025-05-30T08:55:57Z | 2025-05-30T13:08:58Z | 6 | rolandtannous |
huggingface/diffusers | 11,637 | How to load lora weight in distribution applications? | If I want to use xDiT with 2 GPU inference FluxControlPipeline, how should I do
I write a xFuserFluxControlPipeline class, but it can not load lora weight with right way
xFuserFluxTransformer in 1GPU have some parameters and another GPU have others.
How should I do ?? | https://github.com/huggingface/diffusers/issues/11637 | open | [] | 2025-05-30T07:14:50Z | 2025-06-03T10:15:51Z | null | Johnson-yue |
huggingface/peft | 2,558 | GraLoRA support? | ### Feature request
will the library support the [GraLoRA](https://arxiv.org/abs/2505.20355) technique?
### Motivation
GraLoRA addresses a fundamental limitation of LoRA: overfitting when the bottleneck is widened.
The technique seems to more closely approximate full fine-tuning; hybrid GraLoRA gets the best of bot... | https://github.com/huggingface/peft/issues/2558 | closed | [] | 2025-05-29T18:36:27Z | 2025-07-15T15:04:20Z | 10 | DiTo97 |
huggingface/lerobot | 1,171 | sync_read.py | Hi, I am currently testing the functions in the STServo_Python folder to work with my STS3215 motors. When I run the sync_read.py script, I encounter an issue caused by the addParam(self, sts_id) function returning False. I tried several things, but I can't get past the error.
I made sure that the motor IDs are correct... | https://github.com/huggingface/lerobot/issues/1171 | closed | ["bug", "question", "robots", "stale"] | 2025-05-29T15:33:16Z | 2025-12-31T02:35:19Z | null | Baptiste-le-Beaudry |