| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/smolagents | 842 | How to pass custom type variables to tools | I'm working on a Telegram bot and using the `smolagents` library to create agents that handle reminders. The issue I'm facing is related to passing the `context` object (which is specific to each message received by the bot) to a tool function (`add_reminder`). The `context` object is required to access the `job_queue... | https://github.com/huggingface/smolagents/issues/842 | closed | [] | 2025-02-28T23:04:49Z | 2025-03-01T23:45:40Z | null | ebravofm |
pytorch/xla | 8,774 | The "Pytorch/XLA overview" is very long, goes into advanced topics, and is overall intimidating for new users. | ## 📚 Documentation<br>The "Pytorch/XLA overview" includes many advanced topics that go beyond an "overview", including how to specifically convert Stable Diffusion to run on TPUs (which is more of a Guide) and how to profile (which is more of a Tutorial). The result is an intimidating introduction for potential users of... | https://github.com/pytorch/xla/issues/8774 | open | ["enhancement", "documentation"] | 2025-02-28T20:47:25Z | 2025-06-03T17:34:09Z | 2 | yaoshiang |
pytorch/xla | 8,773 | Document the virtual device mesh | ## 📚 Documentation<br>We need to explain what a "mesh" is. The current documentation in https://pytorch.org/xla/master/perf/spmd_basic.html#mesh doesn't explain it very well. For example, it doesn't say what specifying `device_ids is almost always np.array(range(num_devices)).` does. | https://github.com/pytorch/xla/issues/8773 | closed | ["enhancement", "documentation"] | 2025-02-28T19:16:32Z | 2025-03-16T23:33:32Z | 1 | tengyifei |
pytorch/xla | 8,772 | Parametrize test_aten_xla_tensor tests | ## 🚀 Feature<br>Parametrize the test_aten_xla_tensor tests. Inspired by https://github.com/pytorch/xla/pull/8734#discussion_r1968768218.<br>Example of a test_aten_xla_tensor test: [test_aten_xla_tensor_1](https://github.com/pytorch/xla/blob/2675e6892c6f955fc2baf88d85dfdfa72062273c/test/cpp/test_aten_xla_tensor_1.cpp)<br>## Motiv... | https://github.com/pytorch/xla/issues/8772 | open | ["enhancement", "usability", "testing"] | 2025-02-28T18:19:20Z | 2025-03-06T03:06:31Z | 2 | pgmoka |
pytorch/pytorch | 148,196 | [inductor][triton] Decide how to deprecate "old triton versions" | ### 🚀 The feature, motivation and pitch<br>Right now we have a mess of at least 3 "versions" of Triton - i.e. commit ranges that we are compatible with.<br>This is beneficial for a few reasons:<br>* Ability to bisect old versions of Triton<br>* Compatibility with users who have different (i.e. old) versions of Triton installed ... | https://github.com/pytorch/pytorch/issues/148196 | open | ["triaged", "oncall: pt2", "module: inductor"] | 2025-02-28T17:18:12Z | 2025-03-04T15:39:46Z | null | davidberard98 |
huggingface/sentence-transformers | 3,254 | How to train SentenceTransformer with multiple negatives? | I have a dataset like: {'anchor':str,'postive':str,negative:list[str]}<br>It seems invalid with the example code:<br>```python<br>model = SentenceTransformer(model_path)<br>extend_position_embeddings(model._first_module().auto_model, max_length)<br>loss = CachedMultipleNegativesRankingLoss(model, mini_batch_size=16)<br>tr... | https://github.com/huggingface/sentence-transformers/issues/3254 | closed | [] | 2025-02-28T15:01:19Z | 2025-06-13T05:04:35Z | null | rangehow |
huggingface/lerobot | 789 | How to run eval with mujoco sim? | Now, running eval.py only outputs to the command line. How to run eval with a mujoco sim? | https://github.com/huggingface/lerobot/issues/789 | closed | ["simulation", "stale"] | 2025-02-28T10:42:46Z | 2025-10-08T11:57:42Z | null | mmlingyu |
huggingface/lerobot | 788 | offline run convert_dataset_v1_to_v2.py | I need help!!!!!<br>For example, when I run convert_dataset_v1_to_v2.py, it prompts the following:<br><br>and what is train.parquet?<br><br>how to solve it... | https://github.com/huggingface/lerobot/issues/788 | closed | ["bug", "question", "dataset", "stale"] | 2025-02-28T06:41:43Z | 2025-10-09T21:54:09Z | null | ximiluuuu |
pytorch/torchtitan | 903 | [Possible PR discuss] Will a PR of training HF model be welcomed? | Hi! We are in the process of developing a novel training framework for Reinforcement Learning (RL) following TorchTitan. Recently, we've developed a feature to support direct training from Hugging Face (HF) models and loading safetensors in an online sharded fashion. This may substantially cut down the cost of adapti... | https://github.com/pytorch/torchtitan/issues/903 | open | ["huggingface integration", "community help wanted"] | 2025-02-28T03:13:40Z | 2025-03-04T08:09:14Z | 7 | junjzhang |
pytorch/torchtitan | 902 | Question about triton in deepseek implementation | I noticed that some adaptations related to DeepSeek have already been merged. I would like to understand why Triton is being used for the implementation. In certain scenarios, such as on ARM architecture or other privateuse1 backends, Triton is not yet fully supported. Have you considered making the use of Triton an option... | https://github.com/pytorch/torchtitan/issues/902 | closed | ["question"] | 2025-02-28T02:55:48Z | 2025-08-21T03:13:51Z | null | zqwenn |
pytorch/xla | 8,765 | Settle on a consistent logging methodology and document it | It would be useful for PyTorch/XLA to provide easy-to-use debugging logs. To do so, we need to:<br>1) Settle on a specific logging methodology<br>2) Document it for further use<br>3) Document how to activate these logs | https://github.com/pytorch/xla/issues/8765 | open | ["enhancement", "usability", "documentation"] | 2025-02-27T19:28:20Z | 2025-03-05T20:19:25Z | 0 | pgmoka |
pytorch/xla | 8,764 | "Too many open files" error documenting for multi-processing | In multiprocessing cases, we can get a "Too many open files" error from too many processes opening at the same time. This can be confusing as this is a common error for file opening. We should add more information to the error to make this issue easier to track. | https://github.com/pytorch/xla/issues/8764 | open | ["enhancement", "usability", "documentation"] | 2025-02-27T19:08:27Z | 2025-03-05T20:19:12Z | 0 | pgmoka |
pytorch/xla | 8,763 | Improve Logging methodology and documentation | Standardize a logging method which can be leveraged with debugging flags.<br>Afterwards, document how to get these logs in our documentation. | https://github.com/pytorch/xla/issues/8763 | open | ["enhancement", "usability", "documentation"] | 2025-02-27T18:57:29Z | 2025-03-11T16:48:58Z | 0 | pgmoka |
pytorch/xla | 8,762 | Centralize API guide docs | Centralize API guide docs. Right now, for users interested in our APIs, there are a couple of places they might go to:<br>- https://github.com/pytorch/xla/blob/6f423d0bb284190cf1b12d8a943a334e57b4df28/docs/source/learn/api-guide.rst<br>- https://pytorch.org/xla/release/r2.6/learn/api-guide.html<br>- https://github.com/pytorch/xla/b... | https://github.com/pytorch/xla/issues/8762 | open | ["enhancement", "documentation"] | 2025-02-27T18:54:49Z | 2025-03-05T20:18:36Z | 0 | pgmoka |
pytorch/xla | 8,761 | Create full tutorial example for transitioning Pytorch to Pytorch XLA | It would be useful for new users to have a basic example showing the differences between the two. | https://github.com/pytorch/xla/issues/8761 | open | ["enhancement", "documentation"] | 2025-02-27T18:53:29Z | 2025-03-28T17:54:03Z | 3 | pgmoka |
pytorch/xla | 8,760 | Add profiling documentation | [re: issues/8743](https://github.com/pytorch/xla/issues/8743#issuecomment-2686428336)<br>This issue has a request for adding documentation on the `start_trace` and `stop_trace` API, but we currently don't have any documentation around profiling. Who can I work with to get some profiling documentation written? Thanks!... | https://github.com/pytorch/xla/issues/8760 | open | ["enhancement", "documentation"] | 2025-02-27T17:48:34Z | 2025-03-12T00:08:59Z | 3 | mikegre-google |
huggingface/sentence-transformers | 3,252 | How to train sentence transformers with multiple machines? | The [docs](https://sbert.net/docs/sentence_transformer/training/distributed.html) describe how to train sentence transformers with multiple GPUs.<br>But both my model and my data are huge, and training sentence transformers with 8 GPUs in one single machine is still very slow.<br>Does sentence transformers support training u... | https://github.com/huggingface/sentence-transformers/issues/3252 | open | [] | 2025-02-27T13:37:02Z | 2025-02-27T13:37:02Z | null | awmoe |
huggingface/diffusers | 10,917 | Is lumina-2.0 script correct? | I wrote a script based on the one provided [here](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_lumina2.py).<br>It gets stuck on a loss around 0.5, and I think that is a lot, isn't it? | https://github.com/huggingface/diffusers/issues/10917 | open | [] | 2025-02-27T11:17:00Z | 2025-02-28T15:46:43Z | 3 | Riko0 |
huggingface/open-r1 | 444 | How to increase the context window from 4k to 32k on qwen models ? | Hello,<br>I'm trying to distill a subset of the [OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/openr1-220k-math) dataset into my Qwen/Qwen2.5-Math-7B-Instruct. I want to do this via a custom SFT pipeline in order to see if I can match the results obtained in the evaluations.<br>However I'm struggling increasing... | https://github.com/huggingface/open-r1/issues/444 | closed | [] | 2025-02-27T10:27:43Z | 2025-07-24T23:56:12Z | null | Jeremmmyyyyy |
huggingface/trl | 2,972 | How many H20 (96GB) GPUs are needed to train Qwen7B with the GRPO algorithm? | I want to use the GRPO algorithm to train Qwen7B, but I failed using 4 H20 (96GB) GPUs with the trl library. I would like to know how many H20 GPUs are needed. | https://github.com/huggingface/trl/issues/2972 | open | ["❓ question", "🏋 GRPO"] | 2025-02-27T04:12:16Z | 2025-03-14T02:22:36Z | null | Tuziking |
pytorch/ao | 1,790 | An error was encountered setting torch._dynamo.decorators.mark_unbacked | Hello, I want the batch setup to be dynamic and I use torch._dynamo.mark_dynamic to set it. But I found that recompile is triggered when batch is 1 and 2. Then I used torch._dynamo.decorators.mark_unbacked but it quantizes incorrectly. Can you look at this problem?<br>My environment:<br>torch: 2.5.0<br>torchao: 0.8.0<br>This is th... | https://github.com/pytorch/ao/issues/1790 | open | ["question", "quantize_", "triaged"] | 2025-02-27T03:10:43Z | 2025-03-06T19:07:34Z | null | songh11 |
pytorch/torchtitan | 897 | Moving train.py to torchtitan submodule makes run_train.sh fail with "Can not find module" | ### Bug description<br>Hi team,<br>I noticed a recent change which moved train.py from the top-level folder in the project to the torchtitan subfolder. This caused the failure of run_train.sh with the following error msg.<br>It caused the following error with import message "from torchtitan.components.checkpoint import CheckpointMana... | https://github.com/pytorch/torchtitan/issues/897 | closed | [] | 2025-02-27T00:11:02Z | 2025-03-23T01:42:01Z | 3 | jianiw25 |
pytorch/xla | 8,757 | Document on how to profile with torch_xla | ## 📚 Documentation<br>I found we don't have a doc/guide on how to profile with torch_xla. We should add this because getting a profile is essential for performance analysis. | https://github.com/pytorch/xla/issues/8757 | closed | ["enhancement", "documentation"] | 2025-02-26T23:27:20Z | 2025-12-02T00:18:03Z | null | lsy323 |
pytorch/serve | 3,394 | Rename open_inference_grpc.proto package name | Hi Team,<br>Starting from 0.10.0, TorchServe introduced [open_inference_grpc.proto](https://github.com/pytorch/serve/blob/v0.10.0/frontend/server/src/main/resources/proto/open_inference_grpc.proto) to allow PyTorch GRPC APIs to follow the KServe open inference V2 protocol. However, I am wondering why the [package name](https:... | https://github.com/pytorch/serve/issues/3394 | open | [] | 2025-02-26T21:49:57Z | 2025-02-26T21:50:25Z | 0 | jwang20250226 |
huggingface/lerobot | 779 | Is there a way for a robot arm with kinesthetic teaching function to collect data using lerobot? | Hello, I have a robot arm with a kinesthetic teaching function. I guess I can teach my robot the first time, and collect data from the second time using lerobot? I'm here to ask: is this easy to achieve by modifying the control_robot.py file? Thanks | https://github.com/huggingface/lerobot/issues/779 | closed | ["question", "stale"] | 2025-02-26T17:50:51Z | 2025-10-16T02:28:54Z | null | yzzueong |
huggingface/diffusers | 10,910 | ValueError: Attempting to unscale FP16 gradients. | ### Describe the bug<br>I encountered the following error when running train_text_to_image_lora.py: ValueError: Attempting to unscale FP16 gradients.<br>The script I am running is as follows:<br>export MODEL_NAME="CompVis/stable-diffusion-v1-4"<br>export DATASET_NAME="lambdalabs/naruto-blip-captions"<br>accelerate launch --mixed_... | https://github.com/huggingface/diffusers/issues/10910 | closed | ["bug"] | 2025-02-26T14:43:57Z | 2025-03-18T17:43:08Z | 4 | Messimanda |
huggingface/transformers.js | 1,209 | Is NFD type normalizer supported? | ### Question<br>Hi,<br>I was trying the following code on the browser which uses [dewdev/language_detection](https://huggingface.co/dewdev/language_detection):<br>`import { pipeline, Pipeline } from '@huggingface/transformers';<br>export class DetectLanguage {<br>  private modelid: string \| null = null;<br>  private detectPipeline: ... | https://github.com/huggingface/transformers.js/issues/1209 | closed | ["question"] | 2025-02-26T08:48:08Z | 2025-02-26T14:41:38Z | null | adewdev |
pytorch/FBGEMM | 3,737 | How to install this on Windows x64 | I can't pip install FBGEMM, and I've looked through [https://download.pytorch.org/whl/fbgemm-gpu/](https://download.pytorch.org/whl/fbgemm-gpu/); it seems like all the wheels only support Linux (with 'manylinux' in the name).<br>I just want to use torchrec on Windows, and I wonder how to download FBGEMM.<br>Thank you | https://github.com/pytorch/FBGEMM/issues/3737 | open | [] | 2025-02-26T05:58:05Z | 2025-05-09T00:50:07Z | null | Elllllllvin |
huggingface/open-r1 | 436 | Why is the reward low and not increasing in grpo training? How to solve? | my config:<br># Model arguments<br>model_name_or_path: ../experiment/models/Qwen2.5-1.5B-Instruct<br>#model_revision: main<br>torch_dtype: bfloat16<br>attn_implementation: flash_attention_2<br># Data training arguments<br>dataset_name: ../experiment/datasets/NuminaMath-TIR/data<br>dataset_configs:<br>- default<br>system_prompt: "You are a helpful A... | https://github.com/huggingface/open-r1/issues/436 | open | [] | 2025-02-26T05:12:18Z | 2025-02-27T01:06:53Z | null | AXy1527 |
huggingface/lerobot | 773 | How to overwrite the code to collect action data from another robot? | Hey, I have got a problem when I try to overwrite the code of lerobot to collect action data from my own robot. Here's the detail. My robot is a single six-joint robot arm, so I make a new RobotConfig, which only contains the info of the camera. And then I overwrite the function 'teleop_step' in file manipulator.py. I a... | https://github.com/huggingface/lerobot/issues/773 | closed | ["question", "stale"] | 2025-02-26T03:33:09Z | 2025-10-16T02:28:56Z | null | tjh-flash |
pytorch/data | 1,456 | Discussion: DCP APIs and broader contracts for rescalability | After much discussion, it was decided that the best approach to implementing rescalability would be to implement rescaling in the base file reader, in order to maintain low overhead and avoid proliferation of logical shard objects (see #1372 , #1455, [torchtitan PR](https://github.com/pytorch/torchtitan/pull/376)). How... | https://github.com/meta-pytorch/data/issues/1456 | open | [] | 2025-02-25T23:14:45Z | 2025-04-21T13:03:30Z | 2 | daviswer |
huggingface/lerobot | 771 | Example of training a policy with PI0? | Is there an example config file for training a policy with the PI0 policy? | https://github.com/huggingface/lerobot/issues/771 | closed | ["question", "policies"] | 2025-02-25T19:39:51Z | 2025-04-03T16:44:44Z | null | pqrsqwewrty |
huggingface/diffusers | 10,904 | CLIP Score Evaluation without Pre-processing. | I am referring to [Evaluating Diffusion Models](https://huggingface.co/docs/diffusers/main/en/conceptual/evaluation), specifically the quantitative evaluation using the CLIP score example.<br>We have images of shape (6, 512, 512, 3).<br>CLIP score is calculated using `"openai/clip-vit-base-patch16"`.<br>However, as far as I can... | https://github.com/huggingface/diffusers/issues/10904 | open | ["stale"] | 2025-02-25T16:51:44Z | 2025-03-28T15:03:20Z | 1 | e-delaney |
huggingface/lerobot | 769 | How to convert my ALOHA hdf5 data type to your dataset format? | | https://github.com/huggingface/lerobot/issues/769 | closed | ["question", "dataset", "stale"] | 2025-02-25T14:07:13Z | 2025-10-16T02:28:58Z | null | return-sleep |
pytorch/pytorch | 147,850 | The issue where opt_output in fx_graph_runnable.py is inconsistent with the actual output when testing run_repro(acc=True) | ### 🐛 Describe the bug<br>Conclusion<br>✅ Use .clone() before modifying tensors from expand(), view(), or as_strided().<br>✅ Ensure tensors are .contiguous() before operations.<br>✅ Debug with x.is_contiguous() to check memory layout.<br>If the issue persists, share a code snippet for further debugging!<br>### Versions<br>Conclusio... | https://github.com/pytorch/pytorch/issues/147850 | closed | [] | 2025-02-25T12:23:49Z | 2025-03-03T16:56:35Z | null | MovieTrack |
pytorch/serve | 3,393 | map workers and GPUs, deviceIds not considered in ts_config | tl;dr: using my existing configuration shows no effect when using the "deviceIds" property.<br>I am successfully hosting three different models on a server with two GPUs.<br>Each model can be run on a single GPU, but one is more demanding - so I'd like to control the distribution of workers per GPU.<br>The deviceIds property... | https://github.com/pytorch/serve/issues/3393 | open | [] | 2025-02-25T12:23:11Z | 2025-02-26T14:37:27Z | 0 | RuDevKu |
huggingface/diffusers | 10,901 | HunyuanVideo in diffusers uses negative_prompt but generates wrong video | ### Describe the bug<br>Diffusers added support for negative_prompt in hunyuan_video recently, but when I use negative_prompt and set **guidance_scale** and **true_cfg_scale**, I get a video with all black elements. Maybe I set wrong parameters or saving the video fails.<br>How can I fix my problem? Thanks<br>### Reproduction<br>import torch<br>... | https://github.com/huggingface/diffusers/issues/10901 | open | ["bug", "stale"] | 2025-02-25T11:08:43Z | 2025-07-15T07:19:15Z | 2 | philipwan |
huggingface/optimum | 2,200 | Bug exporting Whisper? | ### System Info<br>Hi! I'm exporting some fine-tuned whisper models, small and base, fine-tuned in English or Spanish. In some cases I've detected that the tokenizer.json is 2.423KB and in other cases 3.839, with the tokenizer.json exported for the same language. I have some models in English where the tokenizer w... | https://github.com/huggingface/optimum/issues/2200 | open | ["bug"] | 2025-02-25T09:45:02Z | 2025-03-05T20:58:30Z | 1 | AlArgente |
huggingface/diffusers | 10,899 | Whether lohaconfig is supported in the convert_state_dict_to_diffusers method | In the train_text_to_image_lora.py file:<br>unet_lora_config = LoraConfig(<br>    r=cfg.rank,<br>    lora_alpha=cfg.rank,<br>    init_lora_weights="gaussian",<br>    target_modules=["to_k", "to_q", "to_v", "to_out.0"],<br>)<br>modified to:<br>unet_lora_config = LoHaConfig(<br>    r=cfg.rank,<br>    alpha=cfg.rank,<br>    ... | https://github.com/huggingface/diffusers/issues/10899 | open | ["stale"] | 2025-02-25T08:39:08Z | 2025-03-27T15:03:17Z | 2 | llm8047 |
pytorch/data | 1,452 | Open for contribution on utility nodes like `Filter`, `Shuffler`, `Header`, `Cycler`? | Hi, do you think this kind of nodes would be in the scope of Torchdata? Then I'm down to open a PR to add them. with remaining and testing, for sure.<br>```python<br>import logging<br>import random<br>from collections import deque<br>from typing import Any, Callable, Deque, Dict, Optional, TypeVar, Optional<br>from torchdata.nodes imp... | https://github.com/meta-pytorch/data/issues/1452 | open | [] | 2025-02-25T03:36:59Z | 2025-02-25T05:08:09Z | 1 | keunwoochoi |
pytorch/torchtitan | 885 | Possible to integrate DeepEP? | ref: https://github.com/deepseek-ai/DeepEP | https://github.com/pytorch/torchtitan/issues/885 | open | [] | 2025-02-25T03:24:56Z | 2026-01-05T17:13:54Z | 5 | airlsyn |
pytorch/xla | 8,740 | Add single processing to Getting Started Instructions | In our initial README document, we currently only have instructions on multi-processing steps for getting started. We should add information on single processing. | https://github.com/pytorch/xla/issues/8740 | closed | ["documentation"] | 2025-02-25T01:15:38Z | 2025-03-27T17:30:35Z | 0 | pgmoka |
huggingface/sentence-transformers | 3,246 | How to save the merged model trained with peft? | I am working on fine-tuning a 7B model and, due to the size, we trained it with LoRA by following the guidance (https://sbert.net/examples/training/peft/README.html):<br>```python<br>peft_config = LoraConfig(<br>    task_type=TaskType.FEATURE_EXTRACTION,<br>    inference_mode=False,<br>    r=8,<br>    lora_alpha=32,<br>    ... | https://github.com/huggingface/sentence-transformers/issues/3246 | closed | [] | 2025-02-25T00:56:20Z | 2025-12-05T12:33:48Z | null | chz816 |
huggingface/datasets | 7,420 | better correspondence between cached and saved datasets created using from_generator | ### Feature request<br>At the moment `.from_generator` can only create a dataset that lives in the cache. The cached dataset cannot be loaded with `load_from_disk` because the cache folder is missing `state.json`. So the only way to convert this cached dataset to a regular one is to use `save_to_disk`, which needs to create a... | https://github.com/huggingface/datasets/issues/7420 | open | ["enhancement"] | 2025-02-24T22:14:37Z | 2026-01-05T15:16:35Z | 3 | vttrifonov |
pytorch/torchtitan | 883 | [Evaluation] Minimal support for downstream tasks | Hello and thanks for the great work,<br>For now torchtitan only has an evaluation on train loss. Do you have in mind to provide minimal support for a downstream task, for example a general knowledge score on MMLU?<br>The aim would be to provide the minimum necessary to accomplish a downstream task, a bit like the minim... | https://github.com/pytorch/torchtitan/issues/883 | closed | ["enhancement", "high priority", "triage review"] | 2025-02-24T16:07:57Z | 2025-07-10T12:30:00Z | 14 | K-H-Ismail |
huggingface/open-r1 | 413 | How many resources are required to train deepseek r1 671b using grpo? | . | https://github.com/huggingface/open-r1/issues/413 | open | [] | 2025-02-24T11:55:12Z | 2025-02-24T11:55:12Z | null | LiuShixing |
huggingface/safetensors | 577 | Could I get safetensors without lazy loading? | ### System Info<br>I see safe_open and deserialize; it seems that both are lazy loading.<br>So if I want to load a safetensors file without lazy loading, how could I do it? Thanks.<br>### Information<br>- [ ] The official example scripts<br>- [ ] My own modified scripts<br>### Reproduction<br>I use sglang, and in sglang model_loader/weig... | https://github.com/huggingface/safetensors/issues/577 | open | [] | 2025-02-24T07:55:33Z | 2025-03-13T16:51:49Z | 1 | voidxb |
pytorch/xla | 8,738 | support more op in jaten.py | ## ❓ Questions and Help<br>Hi, I want to convert the llama2-7b model, and I want to use jlibrary.register_jax_composite to composite some ops.<br>Now I need to composite the below 2 ops: torch.nn.RMSNorm and transformers.models.llama.modeling_llama.LlamaRotaryEmbedding.<br>Do you have a plan to add the above 2 ops in jaten.py?<br>[xla](https://... | https://github.com/pytorch/xla/issues/8738 | closed | ["question", "torchxla2"] | 2025-02-24T06:10:50Z | 2025-03-04T06:06:09Z | null | raninbowlalala |
huggingface/trl | 2,941 | How to dynamically adjust params during grpo training? | How to dynamically adjust params during training? For example, I want to adopt a smaller num_generations (8) at the beginning of grpo training, and enlarge it to 32 and also adopt a larger temperature from the 50th step. | https://github.com/huggingface/trl/issues/2941 | open | ["❓ question", "🏋 GRPO"] | 2025-02-24T02:08:52Z | 2025-02-24T07:49:10Z | null | Tomsawyerhu |
huggingface/open-r1 | 406 | How many GPU hours you take to train a simple model? | I wonder how many hours you take to use this repo to train a simple model, like DeepSeek-R1-Distill-Qwen-1.5B or DeepSeek-R1-Distill-Qwen-7B, if on 8 H100? | https://github.com/huggingface/open-r1/issues/406 | closed | [] | 2025-02-24T00:27:52Z | 2025-02-24T06:31:31Z | null | Red-Scarff |
huggingface/safetensors | 576 | How to access header with python | Is there a way to access the header in Python to know the offsets of each tensor data? | https://github.com/huggingface/safetensors/issues/576 | closed | [] | 2025-02-23T17:42:46Z | 2025-03-13T16:58:36Z | null | justinchuby |
huggingface/diffusers | 10,878 | How to expand peft.LoraConfig | If expanding peft.LoraConfig, how to modify it to accommodate more LoRA? | https://github.com/huggingface/diffusers/issues/10878 | open | ["stale"] | 2025-02-23T14:01:11Z | 2025-03-25T15:03:28Z | null | llm8047 |
huggingface/diffusers | 10,874 | Does it support adding LoHa method | Does it support adding the LoHa method?<br>Where can I modify it? | https://github.com/huggingface/diffusers/issues/10874 | open | ["stale"] | 2025-02-23T12:06:14Z | 2025-03-25T15:03:41Z | 3 | llm8047 |
huggingface/diffusers | 10,872 | [Feature request] Please add from_single_file support in SanaTransformer2DModel to support first Sana Apache licensed model | **Is your feature request related to a problem? Please describe.**<br>We all know the Sana model is very good but unfortunately the LICENSE is restrictive.<br>Recently a Sana finetuned model was released under the Apache LICENSE. Unfortunately SanaTransformer2DModel does not support from_single_file to use it.<br>**Describe the solution... | https://github.com/huggingface/diffusers/issues/10872 | closed | ["help wanted", "Good second issue", "contributions-welcome", "roadmap"] | 2025-02-23T11:36:21Z | 2025-03-10T03:08:32Z | 5 | nitinmukesh |
pytorch/ao | 1,764 | [QST] Tensor subclass serialization | Pardon the naive question, trying to understand how to implement a basic tensor subclass.<br>The problem I'm encountering is that the tensor subclass loses its attributes after calling torch.save on a state dict containing the subclass, likely due to the use of `swap_tensors`.<br>Minimal repro:<br>```python<br>from io import Byte... | https://github.com/pytorch/ao/issues/1764 | open | ["question"] | 2025-02-23T03:25:05Z | 2025-03-01T19:32:57Z | null | jeromeku |
huggingface/lerobot | 761 | How to convert from custom dataset format to LeRobotDataset format? | I'm trying to train a LeRobot model on some custom data I've recorded on a custom robot, but first, I need to convert that custom data into the correct format for LeRobotDataset. I'm guessing that an example of how to do this is in the `pusht_zarr.py` file.<br>Questions:<br>1) Is the example in `pusht_zarr.py` the proper w... | https://github.com/huggingface/lerobot/issues/761 | closed | [] | 2025-02-22T02:35:36Z | 2025-02-25T19:39:08Z | null | pqrsqwewrty |
huggingface/trl | 2,922 | How to support multi-device VLLM inference in the GRPO Trainer | https://github.com/huggingface/trl/blob/e5ae703d352b29537159180087ef8bd4b41bf625/trl/trainer/grpo_trainer.py#L439-L461<br>In the current GRPO implementation, VLLM can only run on a single GPU, which becomes a performance bottleneck. For example, in an 8-GPU setup, the remaining 7 GPUs have to wait for 1 GPU to complete i... | https://github.com/huggingface/trl/issues/2922 | open | ["✨ enhancement", "🏋 GRPO"] | 2025-02-21T09:24:51Z | 2025-03-14T02:45:21Z | null | 0x404 |
huggingface/safetensors | 575 | How to change the model weights in safetensors? | ### Feature request<br>For example, I want to change some weight with shape [K,K,C] into [K,K,C/2]; how can I achieve this hacking?<br>### Motivation<br>N/A<br>### Your contribution<br>N/A | https://github.com/huggingface/safetensors/issues/575 | open | [] | 2025-02-21T03:36:27Z | 2025-03-13T16:59:32Z | null | JulioZhao97 |
pytorch/torchtitan | 875 | RuntimeError: Got mixed torch.Tensor and DTensor, need to convert all torch.Tensor to DTensor before calling distributed operators | When I ran the llama3-8b model with cp on a third-party device, I ran into a problem with the error message:<br>`RuntimeError: npu.npu_fusion_attention.default: got mixed torch.Tensor and DTensor, need to convert all torch.Tensor to DTensor before calling distributed operators.`<br>npu_fusion_attention is called in the torc... | https://github.com/pytorch/torchtitan/issues/875 | closed | ["question", "module: context parallel", "module: dtensor"] | 2025-02-21T03:23:27Z | 2025-02-28T08:30:44Z | null | aahehehe |
huggingface/transformers.js | 1,201 | Unable to convert Janus models to ONNX | ### Question<br>I see that @xenova has successfully exported Janus-1.3B and Janus-Pro-1B to ONNX, presumably using some version of scripts/convert.py. We are interested in exporting Janus-Pro-7B to ONNX as well, but have not been able to do so using this script (nor any other path). Attempting to convert either of the prev... | https://github.com/huggingface/transformers.js/issues/1201 | open | ["question"] | 2025-02-20T17:55:00Z | 2025-08-19T12:55:58Z | null | turneram |
huggingface/datasets | 7,415 | Shard Dataset at specific indices | I have a dataset of sequences, where each example in the sequence is a separate row in the dataset (similar to LeRobotDataset). When running `Dataset.save_to_disk` how can I provide indices where it's possible to shard the dataset such that no episode spans more than 1 shard. Consequently, when I run `Dataset.load_from... | https://github.com/huggingface/datasets/issues/7415 | open | [] | 2025-02-20T10:43:10Z | 2025-02-24T11:06:45Z | 3 | nikonikolov |
huggingface/trl | 2,913 | How to specify the GPU used by vllm | https://github.com/huggingface/trl/blob/a92e00e810762548787fadd5c4a5e6fc13a4928a/trl/trainer/grpo_trainer.py#L392<br>I have an 8-GPU server, of which only the last two GPUs are available, and I set CUDA_VISIBLE_DEVICE=6,7; the value of torch.cuda.device_count() is 2. I want to load vllm onto GPU 6, and I set vllm_device=... | https://github.com/huggingface/trl/issues/2913 | closed | ["❓ question"] | 2025-02-20T10:32:30Z | 2025-02-21T03:14:13Z | null | xiaolizh1 |
huggingface/open-r1 | 381 | how to set sampling parameters when doing evaluation | As you said, you use greedy decoding to reproduce deepseek's evaluation results, and I get a different score, so there may be something not aligned. So I want to know how to set the sampling parameters and how to see them when I use 'evaluate.py' to do evaluation. | https://github.com/huggingface/open-r1/issues/381 | open | [] | 2025-02-20T08:41:26Z | 2025-02-24T06:57:59Z | null | ItGirls |
huggingface/open-r1 | 380 | How to set cuda device for your data generation pipeline | Hi author, thanks for your work.<br>When I use your pipeline to generate a dataset (deepseek-ai/DeepSeek-R1-Distill-Qwen-7B),<br>I find I can not set the device with os.environ.<br><br>It is actually always on cuda:0; how can I set it correctly? ... | https://github.com/huggingface/open-r1/issues/380 | open | [] | 2025-02-20T07:06:44Z | 2025-02-20T07:06:44Z | null | Aristo23333 |
pytorch/xla | 8,728 | Debug XLA using GDB | ## ❓ Questions and Help<br>I would like to debug XLA code using gdb via the C++/Python Debugger, which means that I need a _XLAC.cpython-310-x86_64-linux-gnu.so built in debug mode to have debug symbols, just like DCMAKE_BUILD_TYPE=Debug. I don't know how to get this artifact.<br>Thanks for your help. | https://github.com/pytorch/xla/issues/8728 | closed | [] | 2025-02-20T03:22:41Z | 2025-02-20T08:00:00Z | 2 | yuanfz98 |
huggingface/transformers | 36,293 | Bug in v4.49 where the attention mask is ignored during generation (t5-small) | ### System Info
Hi all!
First, thank you very much for your hard work and for making these features available.
I'm seeing a bug after updating to v4.49 where the output changes even though the attention mask should be masking padded values. Below is a script to reproduce the error.
It will tokenize two prompts, and then... | https://github.com/huggingface/transformers/issues/36293 | closed | [
"bug"
] | 2025-02-20T02:16:23Z | 2025-02-20T16:28:11Z | null | bdhammel |
pytorch/xla | 8,727 | Create a site map or centralize links in README | ## 📚 Documentation
Add repo map to https://github.com/pytorch/xla/blob/master/README.md. Currently we have many helpful links, but they are spread around the repo. We should have a location with these centralized to help people find useful documentation easily. | https://github.com/pytorch/xla/issues/8727 | closed | [
"documentation"
] | 2025-02-20T00:04:49Z | 2025-03-24T18:58:57Z | 1 | pgmoka |
pytorch/xla | 8,726 | Add documentation on xla_native_functions.yaml categories | ## 📚 Documentation
Add more information to https://github.com/pytorch/xla/blob/60160233ad413f030da1e7e383cc85950bcf347c/codegen/xla_native_functions.yaml#L3 on what the different categories mean in terms of lowering operations | https://github.com/pytorch/xla/issues/8726 | open | [
"documentation"
] | 2025-02-19T21:57:04Z | 2025-02-20T12:54:47Z | 2 | pgmoka |
pytorch/xla | 8,725 | Add operation lowering unit tests to test_operations.py | ## 🚀 Feature
We should expand test/test_operations to check whether operations are being lowered. We have previously seen issues caused by this (see https://github.com/pytorch/xla/issues/4032 and https://github.com/pytorch/xla/issues/8713). An example of this test can be seen in https://github.com/pytorch/xl... | https://github.com/pytorch/xla/issues/8725 | open | [
"testing"
] | 2025-02-19T20:18:28Z | 2025-03-04T22:56:09Z | 1 | pgmoka |
pytorch/torchtitan | 862 | SimpleFSDP vs. FSDP2 | Hi @tianyu-l , just came across [SimpleFSDP](https://arxiv.org/pdf/2411.00284) and its [implementation](https://github.com/facebookresearch/capi/blob/main/fsdp.py) (nice project!).
In the paper, SimpleFSDP is extensively compared with FSDP2. May I know if torchtitan is going to support it or there is a way to somehow ... | https://github.com/pytorch/torchtitan/issues/862 | closed | [
"question"
] | 2025-02-19T20:16:58Z | 2025-02-20T08:18:36Z | null | yenchenlin |
huggingface/optimum-nvidia | 176 | How to run whisper after #133 | I see that previously, whisper could be run as follows: [https://github.com/huggingface/optimum-nvidia/blob/whisper-inference/examples/automatic-speech-recognition/whisper.py](https://github.com/huggingface/optimum-nvidia/blob/whisper-inference/examples/automatic-speech-recognition/whisper.py)
But after #133 the code... | https://github.com/huggingface/optimum-nvidia/issues/176 | open | [] | 2025-02-19T17:45:01Z | 2025-02-19T17:45:01Z | null | huggingfacename |
pytorch/xla | 8,722 | Add args documentation to xla.launch | ## 📚 Documentation
In https://github.com/pytorch/xla/blob/60160233ad413f030da1e7e383cc85950bcf347c/torch_xla/torch_xla.py#L212, we should have arguments be documented to note that:
1) The callable function's first argument is the process id;
2) The args tuple is passed to the callable function afterwards.
The patter... | https://github.com/pytorch/xla/issues/8722 | closed | [
"documentation"
] | 2025-02-19T17:44:44Z | 2025-02-20T18:21:22Z | 1 | pgmoka |
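The calling convention the issue asks to document can be sketched with an illustrative stand-in (not the real `torch_xla.launch` implementation): each spawned process receives its process index as the first positional argument, followed by the unpacked `args` tuple.

```python
def launch_like(fn, args=(), nprocs=2):
    """Illustrative stand-in for the documented convention:
    process i runs fn(i, *args)."""
    return [fn(index, *args) for index in range(nprocs)]

def worker(index, greeting):
    return f"{greeting} from process {index}"

print(launch_like(worker, args=("hello",), nprocs=2))
# -> ['hello from process 0', 'hello from process 1']
```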
huggingface/peft | 2,388 | ValueError: Target module Qwen2_5_VisionTransformerPretrainedModel is not supported. | ## Context
I'm finetuning the Qwen2.5-VL model with Swift for data extraction using LoRA. I'm not sure of the correct way to save and upload the adapter so that it can be reloaded correctly.
In short, I followed these steps
```python
# load model
model, processor = get_model_tokenizer(
'Qwen/Qwen2.5-VL-3B-Inst... | https://github.com/huggingface/peft/issues/2388 | closed | [] | 2025-02-19T15:09:17Z | 2025-04-09T16:23:53Z | 8 | samuellimabraz |
huggingface/trl | 2,905 | How to use GRPOTrainer to train a LLM for code generation? What is the format of the dataset? | https://github.com/huggingface/trl/issues/2905 | open | [] | 2025-02-19T12:38:13Z | 2025-02-19T12:38:13Z | null | xiangxinhello | |
huggingface/open-r1 | 370 | how to train GRPO on 2 nodes (16 GPUs) | How to train GRPO on 2 nodes (16 GPUs)? Many thanks for providing a working example. | https://github.com/huggingface/open-r1/issues/370 | closed | [] | 2025-02-19T09:15:14Z | 2025-03-26T11:36:03Z | null | glennccc |
huggingface/finetrainers | 267 | How to save the best performing checkpoint during LoRA fine-tuning on Hunyuan Video? | In the HunyuanVideo training scripts, we can save checkpoints every 500 steps by passing `--checkpointing_steps 500`. The final model is saved through the following code:
```python
if accelerator.is_main_process:
transformer = unwrap_model(accelerator, self.transformer)
if self.args.training_type == "lora":
... | https://github.com/huggingface/finetrainers/issues/267 | open | [] | 2025-02-19T07:49:11Z | 2025-02-21T01:39:30Z | null | dingangui |
huggingface/lerobot | 748 | [pi0] confusion about the state embedding dimension in `embed_suffix` | ### System Info
```Shell
- `lerobot` version: 0.1.0
- Platform: Linux-5.14.0-284.86.1.el9_2.x86_64-x86_64-with-glibc2.35
- Python version: 3.11.11
- Huggingface_hub version: 0.28.1
- Dataset version: 3.2.0
- Numpy version: 1.26.4
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Cuda version: 12040
- Using GPU in script?... | https://github.com/huggingface/lerobot/issues/748 | closed | [
"question",
"policies",
"stale"
] | 2025-02-19T03:33:01Z | 2025-10-20T02:31:45Z | null | IrvingF7 |
pytorch/tutorials | 3,272 | Introduction to Libuv TCPStore Backend | Thanks for the [article](https://github.com/pytorch/tutorials/blob/main/intermediate_source/TCPStore_libuv_backend.rst). Wondering if you can provide some details about the contents of the TCPStore and what its role in c10d is. | https://github.com/pytorch/tutorials/issues/3272 | closed | [
"question"
] | 2025-02-18T20:56:09Z | 2025-04-16T17:57:44Z | null | githubsgi |
huggingface/transformers.js | 1,198 | whisper: how to get streaming word level timestamps? (automatic-speech-recognition) | ### Question
## Goal
- streaming
- word level timestamps
## Issue
`on_chunk_start` / `on_chunk_end` are not called when using `return_timestamps: "word"`.
These callbacks only provide timestamps with `return_timestamps: true`
I also tried to decode tokens, as Iβve seen it in the demo, but that uses callbacks that n... | https://github.com/huggingface/transformers.js/issues/1198 | open | [
"question"
] | 2025-02-18T15:29:42Z | 2025-02-20T04:45:48Z | null | getflourish |
huggingface/diffusers | 10,817 | auto_pipeline missing SD3 contol nets | ### Describe the bug
Hey, auto_pipeline seems to be missing the ControlNet variants for SD3:
venv\Lib\site-packages\diffusers\pipelines\auto_pipeline.py
### Reproduction
Load an SD3 model checkpoint with a ControlNet using any of the auto pipes; you will just get the non-ControlNet variants, as it's not set in ... | https://github.com/huggingface/diffusers/issues/10817 | closed | [
"bug",
"help wanted",
"contributions-welcome"
] | 2025-02-18T12:54:40Z | 2025-02-24T16:21:03Z | 3 | JoeGaffney |
huggingface/lerobot | 746 | How should I run the model on my own datasets in different envs, which is not clearly explained in the README? | I want to run the diffusion model on my own real-world arm datasets, which differ from the example env in input format and in observation and action dims.
I've seen some YAML files that store these parameters in earlier versions of the repo, but I can't find them in the newest version. So should I write th... | https://github.com/huggingface/lerobot/issues/746 | closed | [
"question",
"policies",
"dataset",
"stale"
] | 2025-02-18T12:33:07Z | 2025-10-19T02:32:17Z | null | shi-akihi |
pytorch/pytorch | 147,374 | [ONNX] How to export triton custom kernels as custom ops? | ### π Describe the bug
can't export a Triton custom-op kernel when using torch.onnx.export(dynamo=True).
I have used triton_op and wrap_triton to wrap this Triton kernel:
```python
import torch
from torch.library import triton_op, wrap_triton
import triton
from triton import language as tl
@triton.jit
def add_kernel(
... | https://github.com/pytorch/pytorch/issues/147374 | closed | [
"module: onnx",
"triaged"
] | 2025-02-18T12:11:20Z | 2025-02-19T22:57:49Z | null | zzq96 |
pytorch/xla | 8,715 | Pytorch XLA XMP Spawn Error | ## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
I'm trying to run a very simple example that just prints "Hello World" from each TPU, using the torch_xla version from the vllm-tpu Docker image.
## To Reproduce
<!--
It is really important for the team to have a quick ... | https://github.com/pytorch/xla/issues/8715 | closed | [
"distributed"
] | 2025-02-18T06:29:06Z | 2025-02-20T18:59:56Z | 3 | BabyChouSr |
pytorch/ao | 1,724 | [Question] Static Quantization for Open-Source LLMs | ## Description
Hi, I am a beginner in quantization and would like to experiment with INT8 dynamic and static quantization on open-source LLMs.
* For dynamic quantization, I found that `int8_dynamic_activation_int8_weight` is available in `torchao/quantization/quant_api.py`.
* For static quantization, I did not find an... | https://github.com/pytorch/ao/issues/1724 | open | [
"question",
"quantize_"
] | 2025-02-18T02:32:20Z | 2025-02-19T13:13:44Z | null | yang-ahuan |
huggingface/lerobot | 741 | Inquiry on Implementing NoMaD Model (Transformers and Diffusion Policy) | I am planning to implement the NoMaD model, which combines Transformers and Diffusion Policy, within the LeRobot project. Before proceeding, I wanted to check if anyone else is currently working on or has already started implementing this model.
For reference, here are the relevant resources:
Website: https://general... | https://github.com/huggingface/lerobot/issues/741 | closed | [
"question",
"stale"
] | 2025-02-17T19:57:23Z | 2025-10-08T20:56:42Z | null | vaishanth-rmrj |
pytorch/torchtitan | 852 | How to define Custom Communication Operations for Custom Operators in Distributed Settings | Thank you for your awesome project. I would like to ask how to solve the following issue:
I have implemented the logcumsumexp operator, where the input placement is Shard(-1) and the output placement is Replicate(). To obtain the final result, I need to create a custom all-reduce operator (instead of using the conven... | https://github.com/pytorch/torchtitan/issues/852 | closed | [
"question",
"module: dtensor"
] | 2025-02-17T16:49:25Z | 2025-08-21T03:07:29Z | null | Doraemonzzz |
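On the math behind such a custom all-reduce: per-shard logsumexp partials combine with a log-add-exp, not a plain sum. A plain-Python sketch of the reduction rule (the DTensor/ProcessGroup wiring is omitted):

```python
import math

def logaddexp(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    big, small = max(a, b), min(a, b)
    return big + math.log1p(math.exp(small - big))

# Each rank holds logsumexp over its own shard; the "all-reduce"
# combines them pairwise with logaddexp instead of plain sum.
partials = [math.log(2.0), math.log(3.0), math.log(5.0)]
total = partials[0]
for p in partials[1:]:
    total = logaddexp(total, p)
print(round(math.exp(total), 6))  # -> 10.0
```

In a real implementation this combine function would back a custom reduce op registered for the Shard(-1) → Replicate() redistribution, rather than the built-in SUM.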
pytorch/serve | 3,392 | How to run the benchmark scripts on the local model ? | How to run the benchmark scripts on the local model ?
I tried the following, but it fails with `ModelNotFoundException`:
python benchmark_ab.py --config benchmark_config.json
```
{
"url": "./model_store/custom_model.mar",
"requests": 100,
"concurrency": 10,
"input": "kitten_small.jpg",
"exec_env": "local... | https://github.com/pytorch/serve/issues/3392 | closed | [] | 2025-02-17T14:16:47Z | 2025-02-17T14:53:01Z | null | ranipakeyur |
pytorch/torchtitan | 850 | "Universal" Checkpointing | Is there an equivalent of Deepspeed [Universal Checkpointing](https://github.com/deepspeedai/DeepSpeed/blob/master/blogs/deepspeed-ucp/README.md) currently for distributed checkpointing, DTensor and FSDP2? That is, how to use torch-native tooling to convert from a checkpoint with a given sharded / parallelism config t... | https://github.com/pytorch/torchtitan/issues/850 | closed | [
"question",
"module: checkpoint"
] | 2025-02-17T12:32:39Z | 2025-06-05T06:28:04Z | null | jeromeku |
huggingface/lerobot | 738 | convert simulation data of insertion from v1 to v2 | I cannot convert using the script (datasets/v2/convert_dataset_v1_to_v2.py), which requires a RobotConfig that I don't have.
I just want to convert your data on lerobot/act_aloha_sim_transfer_cube_human | https://github.com/huggingface/lerobot/issues/738 | closed | [
"question",
"dataset",
"stale"
] | 2025-02-17T11:00:38Z | 2025-10-08T08:59:52Z | null | AbdElrahmanMostafaRifaat1432 |
huggingface/open-r1 | 340 | About the data used in SFT, how to set SFTConfig.dataset_text_field? | How to use HuggingFaceH4/Bespoke-Stratos-17k in SFT.
I find there are two fields in the data, "system" and "conversations". So, when I download this data to finetune an LLM such as Qwen2.5-1.5B-Instruct, how should I organize it? In TRL, SFTConfig has a default parameter named dataset_text_field; its default va... | https://github.com/huggingface/open-r1/issues/340 | open | [] | 2025-02-17T07:06:14Z | 2025-02-20T08:59:49Z | null | ItGirls |
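TRL's SFTTrainer reads a single text column (`dataset_text_field`, `"text"` by default), so one approach is to flatten "system" plus "conversations" into one string. A hedged sketch — it assumes each conversation turn looks like `{"from": ..., "value": ...}`, which should be checked against the actual Bespoke-Stratos-17k layout, and the role markers are arbitrary:

```python
def to_text(example):
    """Flatten a {'system', 'conversations'} example into one 'text' string."""
    parts = [f"<|system|>\n{example['system']}"]
    for turn in example["conversations"]:
        role = "user" if turn["from"] == "user" else "assistant"
        parts.append(f"<|{role}|>\n{turn['value']}")
    return {"text": "\n".join(parts)}

example = {
    "system": "You are a helpful assistant.",
    "conversations": [
        {"from": "user", "value": "Hi"},
        {"from": "assistant", "value": "Hello!"},
    ],
}
print(to_text(example)["text"])
```

One could then run `dataset = dataset.map(to_text)` and leave `dataset_text_field` at its default `"text"`; using the tokenizer's chat template, when available, is usually preferable to hand-rolled markers.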
huggingface/finetrainers | 264 | How to set --precompute_conditions for CogvideoI2V training? | Because I don't find this feature in Image2Video training.
Does it exist? | https://github.com/huggingface/finetrainers/issues/264 | open | [] | 2025-02-17T06:00:50Z | 2025-03-05T03:49:05Z | null | BlackTea-c |
huggingface/diffusers | 10,805 | is there an inpainting dataset and parameters example provided for XL training? | **What API design would you like to have changed or added to the library? Why?**
**What use case would this enable or better enable? Can you give us a code example?**
Hi @patil-suraj, thanks for the convenient script! Are there code and dataset examples for running the script: https://github.c... | https://github.com/huggingface/diffusers/issues/10805 | closed | [] | 2025-02-17T01:56:14Z | 2025-02-17T02:03:09Z | 2 | fire2323 |
huggingface/gsplat.js | 109 | Info request: How to update individual points in splat? | I would like to update the positions of individual points dynamically in order to create animations and effects.
What would be the optimal way to do it?
| https://github.com/huggingface/gsplat.js/issues/109 | open | [] | 2025-02-16T18:11:14Z | 2025-02-16T18:43:23Z | null | sjovanovic |
huggingface/diffusers | 10,803 | SANARubber a flexible version of SANA with i2i and multidiffusion/regional diffusion | ### Model/Pipeline/Scheduler description
I made a pipeline that is as reliable as the basic SANA pipeline but more flexible: it runs an array of functions that together do everything the original pipeline does, which makes combinations easy when necessary.
Here's the link; enjoy:
https://github.com/alexblattner/SANARubbe... | https://github.com/huggingface/diffusers/issues/10803 | open | [
"stale"
] | 2025-02-16T15:08:11Z | 2025-03-19T15:03:31Z | 1 | alexblattner |
huggingface/candle | 2,774 | Dumb Question: How to do forward hooks ? | For example I want to extract activations of intermediate layers. How do I register forward hooks similar to PyTorch or is there a similar/comparable paradigm in candle for this ? | https://github.com/huggingface/candle/issues/2774 | open | [] | 2025-02-16T12:41:26Z | 2025-02-16T12:41:26Z | null | pzdkn |
huggingface/diffusers | 10,799 | Effective region mask for controlnet | Hi, I just want to ask: is there any way to use ControlNet with a mask, like [this](https://github.com/Mikubill/sd-webui-controlnet/discussions/2831)?
As you know, ComfyUI and WebUI support an effective region (a mask limiting where the ControlNet takes effect).
But I can't find how to do this with diffusers. | https://github.com/huggingface/diffusers/issues/10799 | closed | [
"stale"
] | 2025-02-15T17:42:20Z | 2025-04-03T04:01:37Z | 8 | Suprhimp |
huggingface/swift-coreml-diffusers | 102 | Question: how to use in my own swift project for inference? | How would I run diffusers on-device across all Apple devices in my Swift Xcode project? | https://github.com/huggingface/swift-coreml-diffusers/issues/102 | open | [] | 2025-02-15T15:56:36Z | 2025-02-15T15:56:36Z | null | SpyC0der77 |
pytorch/pytorch | 147,263 | How to trigger several independent communications simultaneously? | For example, in training with 4 GPUs, I divide the GPUs into pairs and create two communication groups: group1 = dist.new_group([0, 1]) and group2 = dist.new_group([2, 3]). If I want to run independent dist.all_gather operations within both communication groups simultaneously, it results in an error. I'd like to ask ho... | https://github.com/pytorch/pytorch/issues/147263 | open | [
"oncall: distributed",
"triaged"
] | 2025-02-15T11:47:10Z | 2025-04-23T20:54:39Z | null | Ind1x1 |
huggingface/transformers.js | 1,194 | How do I know which ONNX transformation models are available? (Errors when loading models with CDN) | ### Question
I am using a CDN to load the models, as shown in the code below.
I filtered the models on the Hugging Face Hub the way you recommend (text-generation, transformers.js) and used the id of the model I found. As I understand it, to change the model, I only need to change the model id.
However, I get an error for... | https://github.com/huggingface/transformers.js/issues/1194 | open | [
"question"
] | 2025-02-15T10:31:32Z | 2025-02-16T14:02:08Z | null | mz-imhj |
huggingface/open-r1 | 333 | how to use tensorboard instead of wandb? | https://github.com/huggingface/open-r1/issues/333 | closed | [] | 2025-02-15T08:00:06Z | 2025-02-15T08:02:35Z | null | ngrxmu |