| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers | 10,796 | Docs for HunyuanVideo LoRA? | ### Describe the bug
As it seems like LoRA loading on HunyuanVideo has been implemented, I wonder where I can find the docs on this? Are they missing?
### Reproduction
Search for HunyuanVideo and LoRA
### Logs
```shell
```
### System Info
As it is the online docs...
### Who can help?
@stevhliu @sayakpaul | https://github.com/huggingface/diffusers/issues/10796 | closed | [
"bug",
"stale"
] | 2025-02-15T04:31:34Z | 2025-06-10T20:52:28Z | 9 | tin2tin |
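For readers landing on this row: a minimal sketch of the standard diffusers LoRA-loading path, assuming it applies to HunyuanVideo as the issue suggests; the LoRA repo id below is a placeholder, not a real checkpoint.

```python
import torch
from diffusers import HunyuanVideoPipeline

# Base pipeline; bfloat16 keeps memory manageable.
pipe = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo", torch_dtype=torch.bfloat16
)
# Standard diffusers LoRA API; replace with a real LoRA repo id.
pipe.load_lora_weights("your-username/your-hunyuanvideo-lora")
```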
huggingface/open-r1 | 328 | How to set generation sampling parameters? | Need to use deepseek reference settings of temperature=0.6, top_p=0.95.
Greedy sampling does poorly on AIME:
## r1-1.5B
- AIME24: 23.33%
Tried to refer to lighteval docs and ran into issues using model config:
```
model: # Model specific parameters
base_params:
model_args: "pretrained=Qwen/Qwen2.5-7B-Instruct... | https://github.com/huggingface/open-r1/issues/328 | open | [] | 2025-02-14T21:42:28Z | 2025-02-20T03:28:53Z | null | rawsh |
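As a point of reference, a hedged sketch of DeepSeek's recommended sampling settings in plain vLLM (the issue itself is about plumbing these through lighteval's model config):

```python
from vllm import LLM, SamplingParams

# DeepSeek's reference settings for R1-style models.
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=4096)

llm = LLM(model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")
out = llm.generate(["What is 7 * 6?"], params)
print(out[0].outputs[0].text)
```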
pytorch/xla | 8,710 | Expand troubleshoot instructions | ## 📚 Documentation
Expand troubleshoot instructions in https://github.com/pytorch/xla/blob/6f423d0bb284190cf1b12d8a943a334e57b4df28/docs/source/learn/troubleshoot.md to include common errors, and new debugging strategies. | https://github.com/pytorch/xla/issues/8710 | open | [
"documentation"
] | 2025-02-14T18:54:01Z | 2025-02-14T18:54:22Z | 0 | pgmoka |
pytorch/xla | 8,709 | Add more info to TPU_TOPOLOGY errors | ## 📚 Documentation
Currently if a VM is created utilizing an OS that does not support training on the TPU we get a TPU_TOPOLOGY OS error. We should add to our documentation to make these errors, and their solutions clearer. | https://github.com/pytorch/xla/issues/8709 | open | [
"documentation"
] | 2025-02-14T18:49:40Z | 2025-02-14T18:49:40Z | 0 | pgmoka |
pytorch/vision | 8,905 | Can the `_make_divisible_function` be explained better? | ### 📚 The doc issue
I'm referring to the following function: https://github.com/pytorch/vision/blob/main/torchvision/models/_utils.py#L76 I've no doubt that it is correct, but why does it sometimes round down the input and why is the threshold set to 90%? Is the formula from a well-known paper?
### Suggest a potent... | https://github.com/pytorch/vision/issues/8905 | closed | [] | 2025-02-14T17:16:42Z | 2025-02-14T17:37:01Z | 1 | bjourne |
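For context, the function in question is short; the sketch below mirrors its logic. The rounding rule and the 90% floor come from the original TensorFlow MobileNet code, which torchvision's own comment credits.

```python
def _make_divisible(v: float, divisor: int, min_value: int | None = None) -> int:
    """Round a channel count v to a nearby multiple of divisor."""
    if min_value is None:
        min_value = divisor
    # int(v + divisor / 2) // divisor * divisor is round-half-up to a multiple.
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    # Never round down by more than 10%, so layer capacity is roughly preserved.
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v

print(_make_divisible(37, 8))   # 40: nearest multiple of 8
print(_make_divisible(36, 16))  # 48: rounding down to 32 would cut more than 10%
```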
huggingface/trl | 2,864 | How to train GRPO on 2 GPUs, one for training, one for vllm | ### Reproduction
When I use `Qwen2.5-3B-instruct` to train GRPO, the vllm device always hits OOM when loading weights. I used two GPUs with 32GB of memory each, one device for training and the other for vllm. I don't know why a 3B model uses so much memory on `device 1`.
I was wondering if there is a way for me to contribute a recently ac... | https://github.com/huggingface/trl/issues/2864 | closed | [] | 2025-02-14T12:17:46Z | 2025-03-24T15:04:11Z | 2 | SpeeeedLee |
pytorch/serve | 3,391 | How can a user specify an envelope? | ### 📚 The doc issue
The `service_envelope` parameter has disappeared from the documentation:
https://pytorch.org/serve/configuration.html#other-properties
The KServe documentation states that this parameter is deprecated:
https://kserve.github.io/website/0.11/modelserving/v1beta1/torchserve/#create-model-storage-wit... | https://github.com/pytorch/serve/issues/3391 | open | [] | 2025-02-14T07:24:11Z | 2025-02-14T07:24:11Z | 0 | yurkoff-mv |
pytorch/pytorch | 147,187 | [torch.export] How to export a model with kv cache | ### 🐛 Describe the bug
In an attention layer, kv cache needs a variable number "start_pos" from outside.
(may related to https://github.com/pytorch/pytorch/issues/146990)
Here is a simplified model for reproducing the issue:
```python
import torch
from torch import nn
class Cache(nn.Module):
def __init__(self... | https://github.com/pytorch/pytorch/issues/147187 | open | [
"oncall: pt2",
"oncall: export"
] | 2025-02-14T06:15:41Z | 2025-02-18T19:20:39Z | null | exeex |
huggingface/optimum | 2,189 | PEFT to ONNX conversion | ### System Info
```shell
Hello!
I have a fine-tuned LLM model from Hugging Face saved in PEFT format, and it’s about 2.1 GB. When we convert it to ONNX, its size nearly doubles to about 4.1 GB. What causes this significant increase in model size after converting from PEFT to ONNX? Is there any bug under this conversi... | https://github.com/huggingface/optimum/issues/2189 | open | [
"bug"
] | 2025-02-13T18:21:05Z | 2025-03-10T13:58:28Z | 2 | morteza89 |
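Not an official answer, but a frequent cause of this roughly 2x growth is a half-precision checkpoint being exported as float32. A sketch for checking the merged model's dtype before export, with placeholder repo ids:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Placeholder ids; substitute your base model and adapter.
base = AutoModelForCausalLM.from_pretrained("base-model-id", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "adapter-id")
merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights

# If this prints torch.float32, the ONNX graph will be about twice the fp16 size.
print(next(merged.parameters()).dtype)
```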
pytorch/data | 1,442 | what dataloader to use for torchdata.nodes nodes? | Hi, thanks for reviving torchdata. I was able to move on to `0.10.1` for lots of my existing datapipes. It seems to work pretty nicely.
Question: am I supposed to use `torchdata.nodes.Loader` or `torchdata.stateful_dataloader.StatefulDataLoader` for my data nodes? Or just `torch.utils.data.DataLoader`? I'm getting co... | https://github.com/meta-pytorch/data/issues/1442 | closed | [] | 2025-02-13T17:32:53Z | 2025-10-24T04:07:52Z | 16 | keunwoochoi |
pytorch/pytorch | 147,076 | How to check grads in each step of model? | Hi there:
I've implemented a PyTorch version of [Retrieval-based-Voice-Conversion (RVC for short)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI) [here](https://github.com/ElinLiu0/RVCTorch/blob/master/POC_Torch.ipynb).
The question is, when I want to export my implementation pipeline into ON... | https://github.com/pytorch/pytorch/issues/147076 | closed | [
"module: onnx",
"triaged"
] | 2025-02-13T09:01:49Z | 2025-02-20T07:56:31Z | null | ElinLiu0 |
huggingface/agents-course | 113 | Show how to use Inference Providers for inference | Can be helpful for students to explore different models easily.
| https://github.com/huggingface/agents-course/issues/113 | open | [] | 2025-02-13T07:46:01Z | 2025-02-13T08:04:58Z | null | pcuenca |
pytorch/torchtitan | 840 | profiling | A few questions:
1. Is it based on Kineto or something else?
2. I am only seeing CPU activities (e.g. python) - do I have to do anything special to see GPU activities?
| https://github.com/pytorch/torchtitan/issues/840 | closed | [
"question"
] | 2025-02-13T01:27:00Z | 2025-02-20T19:55:53Z | null | githubsgi |
pytorch/pytorch | 146,990 | How to export a model using topk with a variable number of neighbour? | ### 🐛 Describe the bug
The export error is the following, but it may not be the only one; it's the first one raised.
``torch._dynamo.exc.UserError: Could not guard on data-dependent expression u7 >= 0 (unhinted: u7 >= 0). (Size-like symbols: none)``
```python
import contextlib
import io
import logging
import warnings... | https://github.com/pytorch/pytorch/issues/146990 | closed | [
"triaged",
"oncall: pt2",
"oncall: export"
] | 2025-02-12T16:02:20Z | 2025-02-26T17:45:40Z | null | xadupre |
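A hedged sketch of the usual workaround for this class of export error: convert the data-dependent value with `.item()` and give the compiler explicit bounds via `torch._check`. Whether this alone suffices depends on the model.

```python
import torch

class TopK(torch.nn.Module):
    def forward(self, x: torch.Tensor, k_tensor: torch.Tensor):
        k = k_tensor.item()  # becomes an unbacked symbolic int under export
        # Supply the bounds the tracer cannot infer, avoiding the
        # "Could not guard on data-dependent expression" failure.
        torch._check(k >= 1)
        torch._check(k <= x.shape[-1])
        return torch.topk(x, k)

ep = torch.export.export(TopK(), (torch.randn(4, 16), torch.tensor(5)))
```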
pytorch/pytorch | 146,977 | How to install Torch version that supports RTX 5090 on Windows? - CUDA kernel errors might be asynchronously reported at some other API call | I have purchased RTX 5090 just to test AI apps
Currently getting this error on any app
I need torch for Python 3.10 venv on Windows
I am ok with installing nightly version etc just install command please
```
Traceback (most recent call last):
File "E:\trellis_v5\TRELLIS\app.py", line 401, in <module>
pipeline... | https://github.com/pytorch/pytorch/issues/146977 | closed | [
"high priority",
"needs reproduction",
"module: build",
"module: windows",
"module: cuda",
"triaged"
] | 2025-02-12T12:43:57Z | 2025-03-01T09:47:47Z | null | FurkanGozukara |
pytorch/xla | 8,702 | Links misssing in CONTRIBUTING.md for Additional steps for GPU. | ## 📚 Documentation
<!-- A clear and concise description of what content is an issue. -->
I was visiting the CONTRIBUTING.md doc and trying to build a GPU version, but in the "Additional steps for GPU" part, the link to the referenced guide is missing.
but I could not find a clear answer to my current problem. I currently have a customized model based on the [ALBERT transformer](https://huggingface.co... | https://github.com/huggingface/optimum-neuron/issues/782 | closed | [
I currently have a customized model based on the [ALBERT transformer](https://huggingface.co... | https://github.com/huggingface/optimum-neuron/issues/782 | closed | [
"Stale"
] | 2025-02-11T23:36:13Z | 2025-03-20T08:05:40Z | null | efemaer |
huggingface/diffusers | 10,772 | Sana Controlnet Support | **Is your feature request related to a problem? Please describe.**
The first controlnet for Sana has appeared, so the feature is to add the sana controlnet to the diffusers pipeline https://github.com/NVlabs/Sana/blob/main/asset/docs/sana_controlnet.md
**Describe the solution you'd like.**
Be able to use the sana cont... | https://github.com/huggingface/diffusers/issues/10772 | closed | [
"help wanted",
"Good second issue",
"contributions-welcome",
"roadmap"
] | 2025-02-11T22:39:10Z | 2025-04-13T13:49:40Z | 5 | jloveric |
huggingface/smolagents | 610 | Is this normal? I'm getting this a lot | Hey, is this normal?
Also, is `out: None` OK as well? | https://github.com/huggingface/smolagents/issues/610 | closed | [
"question"
] | 2025-02-11T22:05:27Z | 2025-03-19T07:12:32Z | null | Mhdaw |
pytorch/ao | 1,701 | Model size after quantization | Why are the resulting model sizes inconsistent after I apply these three quantization methods to the same model?
```Python
from torchao.quantization import quantize_, int8_weight_only
quantize_(new_model, int8_weight_only())
# from torchao.quantization import quantize_, int8_dynamic_activation_int8_weight
# qu... | https://github.com/pytorch/ao/issues/1701 | open | [
"question",
"quantize_"
] | 2025-02-11T19:32:29Z | 2025-02-12T08:54:01Z | null | TaylorYangX |
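A small helper, as a sketch, for comparing the three results by raw tensor bytes; packed quantized layouts may not report storage the way you expect, so treat this as an estimate:

```python
import torch

def state_dict_megabytes(model: torch.nn.Module) -> float:
    # Sum the raw storage of every tensor in the state dict.
    total = sum(t.numel() * t.element_size() for t in model.state_dict().values())
    return total / 1e6

# e.g. print(state_dict_megabytes(new_model)) after each quantize_ call
```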
huggingface/agents-course | 77 | [QUESTION] Why am I able to select multiple options in Quick Quiz? | In quick quizzes, since only a single answer is correct, shouldn't you be able to choose only a single option instead of selecting all at once to see the correct answer?
| https://github.com/huggingface/agents-course/issues/77 | closed | [
"question"
] | 2025-02-11T17:35:31Z | 2025-02-13T07:20:59Z | null | Devrajsinh-Gohil |
pytorch/ao | 1,699 | [DOC] Questions on Integrating a New CPU Operator into TorchAO? | I'm working on integrating a **CPU operator** into TorchAO and have a few questions regarding the process:
### How can I add a New **_CPU Operator_** in 'torchao/csrc':
* What is the recommended approach for adding a new CPU operator in the 'csrc' directory?
* Are there any specific guidelines or templates I should ... | https://github.com/pytorch/ao/issues/1699 | open | [
"question",
"cpu"
] | 2025-02-11T12:03:02Z | 2025-02-13T01:53:33Z | null | Zijie-Tian |
pytorch/pytorch | 146,889 | How to customize a torch.Tensor() method to access the underlying data structure of a PyTorch tensor. | ### 🐛 Describe the bug
1. How to customize a torch.Tensor() method and call PyTorch's THPVariable_pynew function to obtain the underlying data structure of the original Tensor.

tensor = torch.Tensor(3,4).to("new_one") -> initM... | https://github.com/pytorch/pytorch/issues/146889 | open | [
"triaged",
"tensor subclass"
] | 2025-02-11T07:18:54Z | 2025-04-14T17:40:25Z | null | xiangxinhello |
pytorch/torchtitan | 831 | converging.md | In the [page](https://github.com/pytorch/torchtitan/blob/main/docs/converging.md), can someone please clarify the following.
1. How many (dp) and what type of GPUs were used for the [chart](https://github.com/pytorch/torchtitan/blob/main/docs/converging.md#test-results)?
2. What is FSDP 8: 8 GPUs or FP8?
3. | https://github.com/pytorch/torchtitan/issues/831 | closed | [
"question"
] | 2025-02-11T04:15:19Z | 2025-03-17T19:13:39Z | null | githubsgi |
huggingface/agents-course | 66 | [QUESTION] About the **Thought: Internal Reasoning and the Re-Act Approach** section of UNIT 1 | I am a bit confused about the ReAct prompting example at the end of the **Thought: Internal Reasoning and the Re-Act Approach** section in Unit 1. The figure label describes it as an example of **ReAct**, but the image itself mentions "Zero-shot CoT." Could you please take a look at this section and clarify? I would re... | https://github.com/huggingface/agents-course/issues/66 | closed | [
"question"
] | 2025-02-11T03:54:26Z | 2025-02-13T07:30:13Z | null | saidul-islam98 |
huggingface/datasets | 7,390 | Re-add py.typed | ### Feature request
The motivation for removing py.typed no longer seems to apply. Would a solution like [this one](https://github.com/huggingface/huggingface_hub/pull/2752) work here?
### Motivation
MyPy support is broken. As more type checkers come out, such as RedKnot, these may also be broken. It would be goo... | https://github.com/huggingface/datasets/issues/7390 | open | [
"enhancement"
] | 2025-02-10T22:12:52Z | 2025-08-10T00:51:17Z | 1 | NeilGirdhar |
pytorch/torchtitan | 828 | Any optimized suggestions for fast save ema/model/optim and resume training from them all. | By using dcp.async_save, we can save the model and optimizer asynchronously, preventing them from blocking the training process. However, if I also want to save the EMA (Exponential Moving Average) model, the typical approach would be to create another async_save call for the EMA. According to the documentation, it's "... | https://github.com/pytorch/torchtitan/issues/828 | closed | [
"question",
"module: distributed_state_dict"
] | 2025-02-10T10:39:16Z | 2025-02-13T07:39:35Z | null | tangjiasheng |
huggingface/lerobot | 707 | is there option to run on parallel gpu | I have two 4090 GPUs and I wonder if there is an option to run them in parallel while finetuning the model.
I have found this parameter here,
but I don't actually understand what you mean by mp,
so if there is option for parallel gpu pl... | https://github.com/huggingface/lerobot/issues/707 | closed | [
"question"
] | 2025-02-10T09:34:13Z | 2025-05-14T20:51:43Z | null | AbdElrahmanMostafaRifaat1432 |
huggingface/lerobot | 706 | adapt_to_pi_aloha parameter | I am finetuning pi0 on a static aloha dataset and I found the following parameter: `adapt_to_pi_aloha: false`
in /lerobot/common/policies/pi0/configuration_pi0.py,
but when I set it to true the first loss increased from 0.17 to 4.7;
should I set it to true or not knowing that I want the predicted actions to be in alo... | https://github.com/huggingface/lerobot/issues/706 | open | [
"question",
"configuration"
] | 2025-02-10T09:24:45Z | 2025-07-24T08:15:35Z | null | AbdElrahmanMostafaRifaat1432 |
huggingface/chat-ui | 1,708 | Generation failed occur | When I ask the model, I get a generation error.
The base model is Llama 3 1B.
Below is my .env.local config. | https://github.com/huggingface/chat-ui/issues/1708 | open | [
"support"
] | 2025-02-10T08:12:56Z | 2025-02-12T07:48:47Z | 5 | mondayjowa |
huggingface/open-r1 | 260 | How to use tensor_parallel_size for vllm in GRPO? | GRPO uses vllm to load the reference model for data sampling. The limitation is that tensor parallelism is not supported.
What if the reference model is larger than one GPU can hold, for example 72B on 40GB H800s?
Is there any setting where we can set the tensor_parallel_size for the vllm params?
```
if self.accelerator.... | https://github.com/huggingface/open-r1/issues/260 | open | [] | 2025-02-10T07:17:07Z | 2025-02-20T12:21:15Z | null | bannima |
huggingface/trl | 2,814 | How to use tensor_parallel_size for vllm reference in GRPO? | GRPO uses vllm to load the reference model for data sampling. The limitation is that tensor parallelism is not supported.
What if the reference model is larger than one GPU can hold, for example 72B on 40GB H800s?
Is there any setting where we can set the tensor_parallel_size for the vllm params?
```
if self.accelerator... | https://github.com/huggingface/trl/issues/2814 | open | [
"⚡accelerate",
"🏋 GRPO"
] | 2025-02-10T07:09:47Z | 2025-03-04T11:40:13Z | null | bannima |
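For what it's worth, a sketch of the knobs GRPOConfig did expose around trl 0.14/0.15: a dedicated generation device and a memory fraction, but, to my knowledge at the time of these issues, no tensor_parallel_size.

```python
from trl import GRPOConfig

config = GRPOConfig(
    output_dir="out",
    use_vllm=True,
    vllm_device="cuda:7",             # reserve one GPU for generation
    vllm_gpu_memory_utilization=0.8,  # cap vllm's share of that GPU
)
```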
huggingface/diffusers | 10,755 | Difference in Output When Using PIL.Image vs numpy.array for Image and Mask Input. | Hi.
I get different results when providing the image and mask as input using PIL.Image versus numpy.array. Why does this happen?
Is there an issue with my normalization method?
| pillow | array |
|---|---|
function? | I'm trying to move from scipy to torchaudio.
Here is my code below:
```python
from torchaudio.functional.filtering import filtfilt
from scipy import signal
bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)
audio = sample_input
print(f"Audio contains nan: {torch.isnan(torch.from_numpy(audio).float().to(torch... | https://github.com/pytorch/audio/issues/3879 | closed | [] | 2025-02-10T02:56:31Z | 2025-02-10T08:55:03Z | null | ElinLiu0 |
huggingface/trl | 2,813 | What is the minimum GPU requirement in gigabytes for TRL intensive training? | | https://github.com/huggingface/trl/issues/2813 | open | [] | 2025-02-10T02:52:07Z | 2025-02-11T08:41:56Z | null | lonngxiang |
huggingface/transformers.js | 1,188 | It seems like the Xenova/swin2SR-classical-sr-x2-64 model only works with image URLs? How to implement partial output with it? | ### Question
I was having fun with the React demo and the Xenova/swin2SR-classical-sr-x2-64 model.
https://huggingface.co/Xenova/swin2SR-classical-sr-x2-64
I tried to give an object URL to the upscaler function but it doesn't work; I wonder if it only accepts image URLs.
Also I want to know how to do partial output like the translate react... | https://github.com/huggingface/transformers.js/issues/1188 | open | [
"question"
] | 2025-02-10T02:18:32Z | 2025-02-16T00:50:36Z | null | codenoobforreal |
huggingface/transformers.js | 1,186 | Which undocumented transformersJS Generator parameters are supported? crapCheck ran fine. | ### Question
Sorry to bug you again Josh @xenova I was trying a set of generator parameters and things were working fine without errors so I tried the parameter "crapCheck" and it also ran without errors so now I am worried if anything works. In the docs it seems that these are supported:
Supported Parameters ... | https://github.com/huggingface/transformers.js/issues/1186 | open | [
"question"
] | 2025-02-09T05:35:57Z | 2025-02-09T05:35:57Z | null | hpssjellis |
pytorch/torchtitan | 827 | How to design TP plan for `nn.GLU` | Hi guys, I'm encountering a challenge in designing TP plans for gated MLP, i.e., [nn.GLU](https://pytorch.org/docs/stable/generated/torch.nn.GLU.html#torch.nn.GLU) with packed weights `w12 = [w1, w2]`, followed by a down proj `w3`
The plan for separated `w1` and `w2` is quite straightforward
```
layer_tp_plan = {
... | https://github.com/pytorch/torchtitan/issues/827 | closed | [
"question"
] | 2025-02-08T23:24:47Z | 2025-02-12T19:43:22Z | null | yzhangcs |
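For reference, the separated-weights plan the issue calls straightforward would look roughly like this (module paths are illustrative); the packed `w12` case needs a custom ParallelStyle that shards each half of the fused weight.

```python
from torch.distributed.tensor.parallel import ColwiseParallel, RowwiseParallel

layer_tp_plan = {
    # Shard both up projections column-wise so the elementwise
    # gate * up product stays local to each rank.
    "feed_forward.w1": ColwiseParallel(),
    "feed_forward.w2": ColwiseParallel(),
    # Shard the down projection row-wise; its output gets all-reduced.
    "feed_forward.w3": RowwiseParallel(),
}
```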
huggingface/lighteval | 545 | couldn't find it in the cached files and it looks like Elron/bleurt-tiny-512, how to set the model path? | How to set the eval model path?
## Eval
When I use the script to eval a model on MATH-500:
`NUM_GPUS=8 # Set to 8 for 32B and 70B models
MODEL=Deepseek_R1_distill/Qwen2.5-32B-Open-R1-Distill/
MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilisation=0.8,tensor_parallel_size=$NUM_GPUS"
... | https://github.com/huggingface/lighteval/issues/545 | closed | [] | 2025-02-08T07:26:28Z | 2025-05-15T15:27:30Z | null | bannima |
huggingface/open-r1 | 240 | How to do knowledge distillation training | In the deepseek r1 technical report, there is a small model based on distillation at the end; deepseek r1, as the teacher model, qwen and llama, as the student model, do SFT based on distilled data. However, it seems that the process of knowledge distillation is not involved here(open r1), that is, the process of the ... | https://github.com/huggingface/open-r1/issues/240 | open | [] | 2025-02-08T06:50:20Z | 2025-02-27T08:16:02Z | null | RyanOvO |
huggingface/transformers.js-examples | 42 | How to stop the transformerJS webGPU models when they chat for too long. | @xenova Hi Josh.
I am making several very capable TransformerJS single page applications and I really like what they are doing. My demo index page is [here](https://hpssjellis.github.io/my-examples-of-transformersJS/public/index.html), but I can't seem to stop any of my examples if they are taking too long and then b... | https://github.com/huggingface/transformers.js-examples/issues/42 | closed | [] | 2025-02-08T04:38:51Z | 2025-02-08T22:05:23Z | null | hpssjellis |
huggingface/lerobot | 692 | How to evaluate policy on real robot and sim environment | I am working on evaluating a trained policy on a real robot and in a simulated environment (Isaac Gym). However, I am uncertain about the process and communication mechanisms involved.
My questions are:
- Evaluating on a real robot:
> How do I retrieve real-time observations from the real robot with Lerobot?
- Eval... | https://github.com/huggingface/lerobot/issues/692 | closed | [
"question",
"simulation"
] | 2025-02-07T13:40:27Z | 2025-10-17T11:20:29Z | null | ShiyaoExtendQA |
huggingface/diffusers | 10,743 | Support zero-3 for FLUX training | ### Describe the bug
Due to memory limitations, I am attempting to use Zero-3 for Flux training on 8 GPUs with 32GB each. I encountered a bug similar to the one reported in this issue: https://github.com/huggingface/diffusers/issues/1865. I made modifications based on the solution proposed in this pull request: https:... | https://github.com/huggingface/diffusers/issues/10743 | closed | [
"bug"
] | 2025-02-07T12:50:44Z | 2025-10-27T09:33:59Z | 9 | xiaoyewww |
pytorch/pytorch | 146,682 | How to get last layer hidden state of transformer model while converting model to onnx format? |
I am currently working with a model that has been exported to the ONNX format. For my project, I need to extract the last layer hidden states during inference. However, I couldn’t find any documentation or example that explains how to achieve this using an ONNX-exported model.
Whether the ONNX format retains the cap... | https://github.com/pytorch/pytorch/issues/146682 | closed | [
"module: onnx",
"triaged"
] | 2025-02-07T08:35:07Z | 2025-03-03T20:42:20Z | null | Jianshu-She |
huggingface/alignment-handbook | 210 | Problem with multi-epoch training | Hi, I ran the ORPO code with 1 epoch and there was no issue. But when I tried to run the code with 5 epochs, I got the following error right at the start of the second epoch:
```
RuntimeError: Tensors of the same index must be on the same device and the same dtype except `step` tensors that can be CPU and float32 notwi... | https://github.com/huggingface/alignment-handbook/issues/210 | open | [] | 2025-02-07T04:50:41Z | 2025-02-07T04:50:41Z | 0 | sowmaster |
pytorch/executorch | 8,282 | Advise on how to run the training example on iOS | ### 🚀 The feature, motivation and pitch
Hello team,
I was wondering if it is possible to run the `train_xor` or a similar training example on an iOS device.
So be able to do
`#import <executorch/extension/training/training_module.h>`
I have followed this guide: https://pytorch.org/executorch/main/apple-runtime and... | https://github.com/pytorch/executorch/issues/8282 | closed | [
"triaged",
"module: ios",
"module: training"
] | 2025-02-06T18:57:43Z | 2025-09-02T16:46:06Z | null | YuanTingHsieh |
huggingface/smolagents | 521 | authenticated sessions with smolagents (how to be logged in during browser use) | **Is your feature request related to a problem? Please describe.**
I would like smolagents to be able to use websites with my login credentials.
**Describe the solution you'd like**
Either a way to give Helium credentials, or a way to use my actual browser, like: https://github.com/browser-use/browser-use/blob/main/ex... | https://github.com/huggingface/smolagents/issues/521 | open | [
"enhancement"
] | 2025-02-06T15:51:53Z | 2025-02-06T15:51:53Z | null | rawwerks |
huggingface/open-r1 | 210 | How to push own dataset to hub with train and test dataset? | How do I push my own dataset to the hub along with the training and test datasets?
```python
train_distiset = pipeline.run(dataset=train_dataset)
test_distiset = pipeline.run(dataset=test_dataset)
```
There is a problem with the code above. | https://github.com/huggingface/open-r1/issues/210 | closed | [] | 2025-02-06T15:28:15Z | 2025-02-08T05:59:13Z | null | JACKYLUO1991 |
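The usual way to get both splits into one hub repo is a DatasetDict rather than two separate push calls; a minimal sketch with a placeholder repo id:

```python
from datasets import Dataset, DatasetDict

train_ds = Dataset.from_dict({"prompt": ["2 + 2 = ?"], "answer": ["4"]})
test_ds = Dataset.from_dict({"prompt": ["3 + 3 = ?"], "answer": ["6"]})

# One repo, two named splits.
DatasetDict({"train": train_ds, "test": test_ds}).push_to_hub("your-username/your-dataset")
```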
huggingface/peft | 2,364 | docs: broken links to boft | ### System Info
on page: https://huggingface.co/docs/peft/v0.14.0/en/conceptual_guides/oft
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give d... | https://github.com/huggingface/peft/issues/2364 | closed | [] | 2025-02-06T14:48:16Z | 2025-02-07T10:14:44Z | 1 | makelinux |
huggingface/open-r1 | 207 | DeepSeek RL-Zero: How to clone DeepSeek RL-Zero? | How to clone DeepSeek RL-Zero? | https://github.com/huggingface/open-r1/issues/207 | open | [] | 2025-02-06T13:45:33Z | 2025-02-06T13:45:33Z | null | win10ogod |
pytorch/pytorch | 146,575 | How to pip3 torch==2.1.0.dev20230822+cu118 |
> I’ve tried installing this specific version multiple times, but the issue keeps occurring.
pip3 install torch==2.1.0.dev20230822+cu118
```
ERROR: Could not find a version that satisfies the requirement torch==2.1.0.dev20230822+cu118 (from versions: 1.13.0, 1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.1.2, 2.2.0, 2.2.1, 2... | https://github.com/pytorch/pytorch/issues/146575 | closed | [
"module: binaries",
"triaged"
] | 2025-02-06T06:07:34Z | 2025-02-06T15:14:25Z | null | minhphi1712 |
huggingface/smolagents | 501 | How to run open_deep_research? | How to run open_deep_research? | https://github.com/huggingface/smolagents/issues/501 | closed | [
"bug"
] | 2025-02-05T13:35:52Z | 2025-03-19T07:28:22Z | null | win4r |
pytorch/ao | 1,665 | NF4Tensor and DDP | I am trying to use `NF4Tensor` weights in my model and wrap it with `DistributedDataParallel`, but get the following error:
```
[rank0]: model = DistributedDataParallel(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/path/to/venv/lib/python3.12/site-packages/torch/nn/parallel/distributed.py", line... | https://github.com/pytorch/ao/issues/1665 | closed | [
"question"
] | 2025-02-05T12:12:27Z | 2025-02-18T02:35:05Z | null | psinger |
pytorch/torchtitan | 821 | WARNING - When using FSDP, it's recommended to enable config.force_recompute_fp8_weight_in_bwd. | Not necessarily an issue, but I see this log quite a lot when I enable Float8. I can open a PR to turn it on, but was wondering if it was intentional. Thanks for the great library! | https://github.com/pytorch/torchtitan/issues/821 | closed | [
"question",
"module: fsdp"
] | 2025-02-05T05:04:38Z | 2025-02-18T18:32:34Z | null | c0g |
huggingface/trl | 2,768 | How to log more metrics with wandb when using GRPO trainer and accelerate | ### Reproduction
```python
def correctness_reward_func(prompts, completions, answer, **kwargs) -> list[float]:
responses = [completion[0]["content"] for completion in completions]
q = prompts[0][-1]["content"]
extracted_responses = [extract_xml_answer(r) for r in responses]
# Get current step from tr... | https://github.com/huggingface/trl/issues/2768 | open | [
"✨ enhancement",
"⚡accelerate",
"🏋 GRPO"
] | 2025-02-05T03:59:10Z | 2025-02-05T03:59:54Z | null | andrewsiah |
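One workaround people use, sketched here under the assumption that wandb is the active tracker: log from inside the reward function with commit=False so the values attach to the trainer's next committed step; guarding on wandb.run keeps non-main processes quiet.

```python
import wandb

def correctness_reward_func(prompts, completions, answer, **kwargs) -> list[float]:
    responses = [completion[0]["content"] for completion in completions]
    rewards = [1.0 if a in r else 0.0 for r, a in zip(responses, answer)]
    if wandb.run is not None:  # only the main process has an active run
        # commit=False merges these into the trainer's next logged step.
        wandb.log({"rewards/correctness_mean": sum(rewards) / len(rewards)}, commit=False)
    return rewards
```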
pytorch/ao | 1,664 | Tensor subclass methods for `DTensor` and `FSDP2` | Is there a protocol / interface that a tensor subclass must implement in order to be used with `DTensor` primitives and for training with `FSDP2`?
I've been walking through `NF4` as an example as it [covers both](https://github.com/search?q=repo%3Apytorch%2Fao+FSDP2+and+NF4&type=pullrequests). However, the methods ar... | https://github.com/pytorch/ao/issues/1664 | open | [
"question"
] | 2025-02-05T00:40:54Z | 2025-02-05T23:33:35Z | null | jeromeku |
pytorch/torchtitan | 818 | Is user-defined initializers a must-have for FSDP2? | ```
with torch.device("meta"):
model = Transformer()
for module in model.modules():
if isinstance(module, TransformerBlock):
fully_shard(module)
fully_shard(model)
for tensor in itertools.chain(model.parameters(), model.buffers()):
assert tensor.device == torch.device("meta")
# Allocate buffers and ... | https://github.com/pytorch/torchtitan/issues/818 | closed | [
"question"
] | 2025-02-04T22:00:45Z | 2025-02-05T18:03:29Z | null | goldhuang |
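To sketch how the materialization step usually follows, assuming every module defines reset_parameters; note that resetting sharded DTensor parameters initializes local shards, so the RNG stream differs from single-device init.

```python
# Continuing after fully_shard(...) on the meta-device model:
model.to_empty(device="cuda")            # allocate sharded storage, uninitialized
for module in model.modules():
    if hasattr(module, "reset_parameters"):
        module.reset_parameters()        # re-run each module's own initializer
```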
huggingface/open-r1 | 183 | How to directly input embeddings into the model? | My data are embeddings of the tokens (i.e., already after tokenization), is there a way of directly inputting the embeddings into the DeepSeek open-r1 model?
For example, when I use the BERT model via Hugging Face, I can simply input the embeddings using the "inputs_embeds" parameter:
```
from transformers import Ber... | https://github.com/huggingface/open-r1/issues/183 | open | [] | 2025-02-04T21:10:13Z | 2025-02-04T21:10:13Z | null | CCCC1800 |
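Since the distilled R1 checkpoints are ordinary transformers causal LMs, the same inputs_embeds interface should apply; a hedged sketch:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)

ids = tok("hello world", return_tensors="pt").input_ids
embeds = model.get_input_embeddings()(ids)  # (batch, seq, hidden)
out = model(inputs_embeds=embeds)           # bypass the embedding lookup
print(out.logits.shape)
```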
huggingface/open-r1 | 180 | How to launch GRPO with vLLM on multi-node slurm? | How to write sbatch script to run GRPO with vLLM on multiple nodes? What should be `--num_processes`? Is [GRPOTrainer](https://github.com/huggingface/trl/blob/1f344c9377d87cd348d92b78f27afea8e66563d7/trl/trainer/grpo_trainer.py#L288-L298) compatible with multinode training? | https://github.com/huggingface/open-r1/issues/180 | open | [] | 2025-02-04T16:58:50Z | 2025-03-14T15:55:18Z | null | pbelevich |
huggingface/lerobot | 678 | The inverse kinematic solution code of so-100 | Is there any inverse kinematics code for the SO-100 that just needs the x, y input on my desk and can then move to the target coordinate?
Thanks for any response. | https://github.com/huggingface/lerobot/issues/678 | open | [
"question",
"robots"
] | 2025-02-04T03:58:17Z | 2025-10-15T16:55:01Z | null | gxy-1111 |
huggingface/diffusers | 10,710 | Is DDUF format supported? | I checked this PR, https://github.com/huggingface/diffusers/pull/10037 and it is merged
```
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained(
"DDUF/FLUX.1-dev-DDUF", dduf_file="FLUX.1-dev.dduf", torch_dtype=torch.bfloat16
)
image = pipe(
"photo a cat holding a sig... | https://github.com/huggingface/diffusers/issues/10710 | closed | [] | 2025-02-03T17:42:37Z | 2025-02-23T17:56:26Z | 4 | nitinmukesh |
huggingface/trl | 2,754 | How to do multi-node training for GRPO with DeepSpeed + vLLM? | ### Multi-Node Request
I am interested in doing multi-node (4 x 8 GPUs) reinforcement fine-tuning of 8B (or 14B) models using GRPO. However, given that at least 1 GPU needs to be assigned to vLLM, I am not sure how to exactly run multi-node setup? Would it be possible for you to share a simple set of scripts (config ... | https://github.com/huggingface/trl/issues/2754 | closed | [
"🚀 deepspeed",
"🏋 GRPO"
] | 2025-02-03T16:03:23Z | 2025-03-22T12:51:19Z | null | nikhilchandak |
pytorch/ao | 1,653 | [Doc] gemlite version | What gemlite version is required/supported? Can we specify this in the readme? | https://github.com/pytorch/ao/issues/1653 | closed | [
"topic: documentation",
"question"
] | 2025-02-03T14:26:29Z | 2025-05-02T18:00:20Z | null | bhack |
pytorch/text | 2,283 | import torchtext fails | ## 🐛 Bug
Today I installed torchtext in my Linux Ubuntu. When I tried to import torchtext into python, torchtext failed.
Details
1. Ubuntu 24.04.1 LTS
2. Python 3.12.3
3. PyTorch Version 2.5.1+cu124 (running fine)
4. During the torchtext install I saw messages suggesting that the version is 0.18, which accor... | https://github.com/pytorch/text/issues/2283 | open | [] | 2025-02-03T01:20:48Z | 2025-02-03T01:20:48Z | 0 | JuanVargas |
huggingface/lerobot | 673 | configure_motor.py says it's increasing the max acceleration of feetech motors, but is decreasing it | I built my SO ARM 100s before reading the huggingface instructions, so I am trying to retroactively setup the servos properly. I looked into configure_motor.py to see what it was doing so I could configure it manually, and I notice that for Feetech motors it sets Maximum_Acceleration to 254 to " speedup acceleration a... | https://github.com/huggingface/lerobot/issues/673 | closed | [
"question",
"robots"
] | 2025-02-01T18:46:30Z | 2025-04-07T15:52:20Z | null | jbrownkramer |
huggingface/lerobot | 672 | Limited Range of Motion in 'Elbow Flex' Motor on SO-100 Follower Arm | # Issue: Limited Range of Motion in 'Elbow Flex' Motor on SO-100 Follower Arm
## Description
In my build of the SO-100 arm, the follower arm exhibits an issue where the motor labeled **'elbow flex'** is restricted to a movement range of approximately **90 degrees from the rest position**.
## Steps Taken to Troublesho... | https://github.com/huggingface/lerobot/issues/672 | closed | [
"question",
"robots",
"stale"
] | 2025-02-01T15:01:59Z | 2025-10-20T02:31:48Z | null | ParzivalExtrimis |
pytorch/pytorch | 146,241 | How to perform BF16 matrix multiplication so that multiplication is done in BF16 and summation is done in FP32 efficiently using pytorch API? | ### 🚀 The feature, motivation and pitch
NVIDIA's cutlass library can perform BF16 matrix multiplication so that multiplication is done in BF16 and summation is done in FP32 for improved numerical stability. For example, consider the following snippet from [this code example from flash-attention](https://github.com/Da... | https://github.com/pytorch/pytorch/issues/146241 | closed | [
"module: cuda",
"triaged",
"module: linear algebra",
"module: python frontend",
"matrix multiplication"
] | 2025-02-01T13:13:18Z | 2025-04-18T05:02:40Z | null | Wongboo |
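To my understanding, PyTorch already accumulates bf16 cuBLAS GEMMs in fp32 by default, and exposes a switch that forbids reduced-precision split reductions; a sketch:

```python
import torch

# Disallow bf16 reduced-precision reductions so accumulation stays in fp32.
torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction = False

a = torch.randn(1024, 1024, device="cuda", dtype=torch.bfloat16)
b = torch.randn(1024, 1024, device="cuda", dtype=torch.bfloat16)
c = a @ b  # multiply in bf16, accumulate in fp32
```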
pytorch/xla | 8,660 | Torch XLA Model all_gather does not work with tensors of different sizes along dimension 0 | ## 🐛 Bug
Torch XLA Model all_gather works with tensors of same size along `dim=0`, but if tensor sizes are different along `dim=0`, it hangs.
## To Reproduce
Save this code in `test_all_gather.py`
```
import torch
import torch_xla.core.xla_model as xm
import torch_xla.runtime as xr
import torch_xla.distributed.xla... | https://github.com/pytorch/xla/issues/8660 | open | [
"enhancement",
"distributed",
"usability"
] | 2025-01-31T22:02:27Z | 2025-03-04T22:52:46Z | 6 | ajayvohra2005 |
huggingface/sentence-transformers | 3,207 | How to increase batch size by using multiple gpus? | Hello! My fine-tuned model needs a large batch size to get the best performance. I have multiple GPUs with 40GB VRAM each. How can I use them together to enlarge the batch size? Currently I can only set the batch size to 3 per GPU, and it seems the GPUs won't share the data. How can I make the total batch size 24? | https://github.com/huggingface/sentence-transformers/issues/3207 | open | [] | 2025-01-31T18:00:08Z | 2025-02-19T10:36:28Z | null | 13918763630 |
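With the v3 trainer, launching the same script under torchrun is the standard route; a sketch of the arithmetic. Note that losses using in-batch negatives still only see the per-device batch unless you use a cached/gathering variant such as CachedMultipleNegativesRankingLoss.

```python
from sentence_transformers import SentenceTransformerTrainingArguments

# Launch with: torchrun --nproc_per_node=8 train.py
# effective batch = per_device_train_batch_size * num_gpus = 3 * 8 = 24
args = SentenceTransformerTrainingArguments(
    output_dir="out",
    per_device_train_batch_size=3,
)
```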
pytorch/torchtitan | 813 | HSDP causes loss instability | I have a codebase forked from torchtitan with minor changes. FSDP trains very well with minimal instability, but HSDP on the same codebase exhibits loss spikes.
Is there some reason for this you folks can think of? Note that I have implemented gradient accumulation in my fork, though without changing any sharding beha... | https://github.com/pytorch/torchtitan/issues/813 | closed | [
"question",
"module: fsdp"
] | 2025-01-31T03:27:09Z | 2025-08-21T03:06:46Z | null | apkumar |
pytorch/vision | 8,889 | Torchvision 0.20.1 looks for jpeg9 on MacOS, while depending on libjpeg-turbo which only provides jpeg8 | ### 🐛 Describe the bug
Hi, I tried to create a new conda environment torch + torchvision + torchaudio + blas accelerate on a MacOS 14.
Post installation, when I try to import the torchvision library, I get a warning about missing libjpeg9.
I have added more details below. Just wanted to bring this to your attentio... | https://github.com/pytorch/vision/issues/8889 | open | [] | 2025-01-30T16:57:13Z | 2025-09-22T13:02:58Z | 4 | IMG-PRCSNG |
huggingface/optimum | 2,174 | Support for ONNX export of SeamlessM4TModel | ### Feature request
Add SeamlessM4Tv2 Model support to onnx_export_from_model.
### Motivation
Being able to deploy SeamlessM4Tv2 models to production using onnx.
### Your contribution
I got the speech-to-text model to ONNX, but I'm not able to generate the audio as expected, even though I'm trying to give the tgt... | https://github.com/huggingface/optimum/issues/2174 | closed | [
"Stale"
] | 2025-01-30T15:10:31Z | 2025-03-18T02:07:02Z | 3 | AlArgente |
pytorch/pytorch | 145,978 | What is the recommended way to use Distributed Checkpointing Save/Load with HSDP? | ### 🐛 Describe the bug
There are torch distributed checkpointing examples in [torch/distributed/checkpoint/examples](https://github.com/pytorch/pytorch/tree/main/torch/distributed/checkpoint/examples). All of these examples use FSDP. Running these examples out of the box has no issues, the loaded checkpoint state mat... | https://github.com/pytorch/pytorch/issues/145978 | open | [
"oncall: distributed",
"triaged",
"release notes: distributed (checkpoint)",
"oncall: distributed checkpointing"
] | 2025-01-29T22:24:11Z | 2025-04-08T15:58:03Z | null | gkroiz |
huggingface/diffusers | 10,683 | Would anyone consider a diffusers export_to_frames utility fuction? | **Is your feature request related to a problem? Please describe.**
The current `export_to_video` function in Hugging Face's Diffusers library exports a compressed video, but it's not straightforward for users to obtain raw, lossless PNG frames from a list of frames. This can be a problem for users who need to work with... | https://github.com/huggingface/diffusers/issues/10683 | open | [
"stale"
] | 2025-01-29T17:30:21Z | 2025-03-26T15:04:10Z | 4 | lovetillion |
huggingface/transformers.js | 1,174 | How to create a new onnx TTS model like mms-tts-eng | ### Question
First of all, congratulations on such a great library!
I would like to ask for your guidance and assistance in creating a new onnx model similar to the following one:
https://huggingface.co/Xenova/mms-tts-eng/tree/main
…but for the Malagasy language:
https://huggingface.co/facebook/mms-tts-mlg ... | https://github.com/huggingface/transformers.js/issues/1174 | closed | [
"question"
] | 2025-01-29T16:02:13Z | 2025-02-05T12:48:57Z | null | elloza |
huggingface/open-r1 | 113 | What is the GPU resource required to run Open-R1 (Deepseek-R1) locally? | I am trying to run it using Ollama with Open WebUI in a docker container. Does it require a dedicated GPU with high VRAM, or is an integrated GPU enough?
Which model (8 billion, 9 billion, 12 billion) can run with each amount of GPU VRAM? | https://github.com/huggingface/open-r1/issues/113 | open | [] | 2025-01-29T14:08:47Z | 2025-01-29T21:17:17Z | null | ruidazeng |
huggingface/open-r1 | 100 | What is the compute needed for GRPO for 7B R1-Distill model? | Anybody who has tried GRPO over any of the R1-Distill models: what is the minimum GPU compute requirement to run the training?
Let's say for R1-Distill-Qwen-7B ?
I am talking about this from the README:
### GRPO
```
accelerate launch --config_file configs/zero3.yaml src/open_r1/grpo.py \
--output_dir DeepSeek-R1-... | https://github.com/huggingface/open-r1/issues/100 | open | [] | 2025-01-29T03:01:03Z | 2025-02-10T09:17:47Z | null | iamansinha |
huggingface/diffusers | 10,677 | Support for training with Grayscale images? | I am trying to train an unconditional diffusion model on grayscale images using your [pipeline](https://huggingface.co/docs/diffusers/training/unconditional_training). When running training with the default parameters I discovered inferred images that contained colour (specifically green). Where it learnt such colours ... | https://github.com/huggingface/diffusers/issues/10677 | open | [
"stale"
] | 2025-01-28T22:25:19Z | 2025-02-28T15:02:57Z | 1 | DavidGill159 |
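One approach, sketched under the assumption that the training script's UNet and transforms can be overridden, is to keep everything single-channel instead of letting grayscale images be promoted to RGB:

```python
from diffusers import UNet2DModel
from torchvision import transforms

# Keep images single-channel end to end.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])

model = UNet2DModel(
    sample_size=64,
    in_channels=1,   # grayscale in
    out_channels=1,  # grayscale out
)
```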
pytorch/torchtitan | 811 | FSDP checkpoints don't load when run is restarted with greater world size | A checkpoint is saved from an 8-GPU run with `dp_shard ` set to 8 and all other parallelisms set to 1. My understanding is that this is configured as an FSDP run.
The checkpoint is resumed from 16 GPUs with `dp_shard` now set to 16. When loading the checkpoint, we get this error:
```
[rank0]: Traceback (most recent c... | https://github.com/pytorch/torchtitan/issues/811 | closed | [
"bug",
"documentation",
"enhancement",
"module: fsdp"
] | 2025-01-28T21:38:09Z | 2025-02-07T01:22:26Z | 4 | darkmirage |
huggingface/diffusers | 10,675 | Difference in Flux scheduler configuration max_shift | ### Describe the bug
Could you please check if the value of 1.16 here...
https://github.com/huggingface/diffusers/blob/658e24e86c4c52ee14244ab7a7113f5bf353186e/src/diffusers/pipelines/flux/pipeline_flux.py#L78
...is intentional or maybe a typo?
`max_shift` is 1.15 both in the model configuration...
https://huggingfa... | https://github.com/huggingface/diffusers/issues/10675 | closed | [
"bug",
"good first issue",
"help wanted",
"contributions-welcome"
] | 2025-01-28T20:35:58Z | 2025-02-18T06:54:58Z | 2 | dxqb |
huggingface/transformers.js | 1,171 | Does the image generation model support using LoRA? | ### Question
I would like to implement an image generation feature to my website using a image generation model and a LoRA. Is LoRA supported in transformers.js? | https://github.com/huggingface/transformers.js/issues/1171 | open | [
"question"
] | 2025-01-28T19:48:38Z | 2025-02-11T23:11:27Z | null | hunkim98 |
pytorch/xla | 8,642 | Make Mixtral pallas kernels Dynamo/AOTAutograd traceable | Similar to https://github.com/pytorch/xla/issues/8633, we'll need to refactor pallas kernels needed by Mixtral (e.g. GMM) into PyTorch custom ops in order to use scan in Mixtral. | https://github.com/pytorch/xla/issues/8642 | open | [
"enhancement",
"pallas"
] | 2025-01-28T19:29:33Z | 2025-02-13T13:15:27Z | 1 | tengyifei |
huggingface/diffusers | 10,672 | Please support callback_on_step_end for following pipelines | **Is your feature request related to a problem? Please describe.**
Missing callback_on_step_end in these pipeline takes away the capability to show the progress in UI
**Describe the solution you'd like.**
Please support callback_on_step_end
**Describe alternatives you've considered.**
N.A.
**Additional context.**
1.... | https://github.com/huggingface/diffusers/issues/10672 | closed | [
"good first issue",
"help wanted",
"contributions-welcome"
] | 2025-01-28T16:26:56Z | 2025-02-16T17:28:58Z | 2 | nitinmukesh |
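For pipelines that do support it, the callback contract looks like this (model id illustrative); the request is to extend the same hook to the listed pipelines.

```python
import torch
from diffusers import StableDiffusionPipeline

def on_step_end(pipeline, step, timestep, callback_kwargs):
    print(f"denoising step {step}, t={timestep}")  # drive a UI progress bar here
    return callback_kwargs  # must return the (possibly modified) kwargs

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of a cat", callback_on_step_end=on_step_end).images[0]
```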
huggingface/transformers.js | 1,170 | Processing in image encoding for Florence 2 | ### Question
Hi,
while having a look at the code for generation with the Florence 2 model, I've noticed something weird. The original code for inference uses the [_encode_image](https://huggingface.co/microsoft/Florence-2-base-ft/blob/main/modeling_florence2.py#L2599) method for creating image features. However, look... | https://github.com/huggingface/transformers.js/issues/1170 | closed | [
"question"
] | 2025-01-27T16:13:28Z | 2025-03-02T14:37:52Z | null | ir2718 |
huggingface/text-generation-inference | 2,956 | How to give custom model code for TGI to run. | Is there a way to give custom model inference code for TGI to run during invocation? | https://github.com/huggingface/text-generation-inference/issues/2956 | open | [] | 2025-01-27T10:37:55Z | 2025-01-27T10:37:55Z | null | ashwani-bhat |
huggingface/diffusers | 10,662 | Feature Request: Image-to-Image Fine-Tuning Example | Hello, and thank you for maintaining this amazing repository!
While working with the Diffusers library, I noticed there is a folder containing fine-tuning examples for text-to-image models but not for image-to-image fine-tuning.
Since image-to-image models have many use cases (e.g., style transfer, image restoration, ... | https://github.com/huggingface/diffusers/issues/10662 | closed | [] | 2025-01-27T08:33:39Z | 2025-02-07T08:27:44Z | 6 | YanivDorGalron |
pytorch/xla | 8,632 | [scan] Avoid re-tracing the combine function on every call | ## 🚀 Feature
It should be possible to somehow cache the traced graphs in `torch_xla.experimental.scan` so we don't trace on every call.
## Motivation
Today `torch_xla.experimental.scan` and `scan_layers` traces the user function with both AOTAutograd (to get the backward) and with LazyTensor (to lower them to HLO).... | https://github.com/pytorch/xla/issues/8632 | closed | [
"enhancement",
"good first issue",
"performance"
] | 2025-01-27T06:30:47Z | 2025-06-19T20:02:13Z | 21 | tengyifei |
huggingface/finetrainers | 248 | How to load full finetune for inference? | ### Feature request
### Motivation
It seems like there is only a LoRA inference example in README.md.
### Your contribution
Test the full finetune (LTX-Video, CogVideoX). | https://github.com/huggingface/finetrainers/issues/248 | closed | [] | 2025-01-27T03:49:57Z | 2025-01-27T06:27:18Z | null | BlackTea-c |
pytorch/text | 2,282 | combining TEXT.build_vocab with BERT Embedding | ## ❓ Questions and Help
**Description**
Hi, we can use glove embedding when building vocab, using
something like:
```
MIN_FREQ = 2
TEXT.build_vocab(train_data,
min_freq = MIN_FREQ,
vectors = "glove.6B.300d",
unk_init = torch.Tensor.normal_)
```
<!-- Please send q... | https://github.com/pytorch/text/issues/2282 | open | [] | 2025-01-27T02:11:21Z | 2025-01-27T02:11:21Z | 0 | muhalfian |
huggingface/Google-Cloud-Containers | 143 | Route to /generate and /metrics | Hello team, thanks for supporting :)
Inside the https://github.com/huggingface/text-generation-inference/blob/main/router/src/server.rs file,
there is a route definition for Google Cloud, as below.
#[cfg(feature = "google")]
{
tracing::info!("Built with `google` feature");
tracing::info!(
... | https://github.com/huggingface/Google-Cloud-Containers/issues/143 | closed | [
"question"
] | 2025-01-27T02:02:28Z | 2025-01-31T11:44:05Z | null | jk1333 |
huggingface/optimum | 2,171 | Adding Phi3 support in BetterTransformer (to use the microsoft/phi-4 model) | ### Feature request
Hello,
Is it possible to add the phi3 architecture to BetterTransformer supported models?
### Motivation
Nan
### Your contribution
Nan | https://github.com/huggingface/optimum/issues/2171 | closed | [
"Stale"
] | 2025-01-26T19:10:34Z | 2025-03-04T02:05:22Z | 2 | majdabd |
huggingface/transformers.js | 1,167 | How to create and use a customized voice in a tts pipeline? | ### Question
Hi transformers.js community!
I am new here and I'd like to ask how to create a new voice and use it inside the current tts pipeline. I just created a Next.js project and I can run the text-to-speech model from the tutorial, like the following code:
```
const synthesizer = await pipeline('text-to-speech', 'Xeno... | https://github.com/huggingface/transformers.js/issues/1167 | open | [
"question"
] | 2025-01-26T17:44:57Z | 2025-02-11T02:55:40Z | null | gonggqing |
huggingface/open-r1 | 56 | How to supervise non-math data? | I see the accuracy reward can only check numerical equality. But what if my question is an MCQ asking for an option?
I did a quick check and found it's not working.
```
from math_verify import parse, verify
# Parse the gold and answer
# If you know that gold will only contain latex or expr (no latex env), use
# parse(... | https://github.com/huggingface/open-r1/issues/56 | open | [] | 2025-01-26T14:30:13Z | 2025-01-26T17:52:58Z | null | Luodian |
huggingface/diffusers | 10,655 | How to use custom dataset in train_dreambooth_flux.py. | Hi. What if I want to train two images with two different prompts, something like m1.jpeg, m1.txt; m2.jpeg, m2.txt?
The default example only shows all images sharing one instance prompt. Thanks for the help! | https://github.com/huggingface/diffusers/issues/10655 | closed | [] | 2025-01-26T11:53:01Z | 2025-01-27T19:43:55Z | null | rooooc |
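The stock DreamBooth script assumes one shared instance_prompt; per-image captions are normally handled with an imagefolder dataset plus metadata.jsonl, as the text-to-image fine-tuning scripts do. A sketch:

```python
from datasets import load_dataset

# Folder layout:
#   data/m1.jpeg
#   data/m2.jpeg
#   data/metadata.jsonl containing lines like:
#     {"file_name": "m1.jpeg", "text": "a photo of m1 ..."}
#     {"file_name": "m2.jpeg", "text": "a photo of m2 ..."}
ds = load_dataset("imagefolder", data_dir="data", split="train")
print(ds[0]["text"])
```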
huggingface/open-r1 | 46 | how to train on MultiNode MultiGPU | | https://github.com/huggingface/open-r1/issues/46 | open | [] | 2025-01-26T04:57:11Z | 2025-02-19T14:00:44Z | null | yuepengs |
huggingface/transformers.js | 1,166 | Why isn't transformers using filesystem API instead of Cache API? | ### Question
I find the cache API quite limiting when it comes to user experience. I am curious why transformers.js is not utilizing filesystem API. Is there any practical difficulty in it?
| https://github.com/huggingface/transformers.js/issues/1166 | open | [
"question"
] | 2025-01-25T14:12:38Z | 2025-02-08T12:09:16Z | null | Nithur-M |
huggingface/open-r1 | 23 | How to contribute | Hello there 👋!
Replicating all parts of DeepSeek's R1 pipeline is going to take a community effort, especially with dataset curation and creation. If you would like to contribute, please explore the issues linked below. | https://github.com/huggingface/open-r1/issues/23 | open | [] | 2025-01-25T13:55:31Z | 2025-05-06T13:32:10Z | null | lewtun |