| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/torchtitan | 764 | FSDP 2 doesn't pad tensors? | Hi, I ran my model with FSDP 2; one of the linear layers has a dim that's not divisible by the world size (128), and so I got the following error:
```
torch.Size([...]) is not divisible by FSDP world size 128.
```
FSDP 1 circumvents this issue by padding the tensors. Is this not supported by FSDP 2? If not, will ... | https://github.com/pytorch/torchtitan/issues/764 | open | [
"question",
"module: fsdp"
] | 2024-12-29T21:55:50Z | 2025-02-13T01:51:43Z | null | cassanof |
pytorch/torchchat | 1,446 | Supply Local Weights to an LLM instead of Downloading Weights from HuggingFace | ### 🚀 The feature, motivation and pitch
I have a local copy of the Llama weights and I want to supply those weights to create a chat application. Please include a CLI flag to do so.
### Alternatives
_No response_
### Additional context
_No response_
### RFC (Optional)
_No response_ | https://github.com/pytorch/torchchat/issues/1446 | closed | [
"documentation",
"triaged"
] | 2024-12-29T20:14:26Z | 2025-01-06T01:54:19Z | 2 | sgupta1007 |
pytorch/data | 1,418 | torch.node datawriter | ### 📚 The doc issue
Can we add an example/migration file entry for a `torch.node` datawriter (if this is already possible with the current API)?
See:
https://github.com/pytorch/pytorch/issues/140296#issuecomment-2563190801
### Suggest a potential alternative/fix
_No response_ | https://github.com/meta-pytorch/data/issues/1418 | open | [] | 2024-12-27T13:49:24Z | 2024-12-27T13:49:24Z | 0 | bhack |
pytorch/pytorch | 143,906 | How to correctly asynchronously copy a GPU tensor to a CPU tensor in another process without introducing blocking? | ### 🐛 Describe the bug
I am developing a distributed PyTorch application designed to asynchronously transfer data from a GPU process to a CPU process, ensuring that GPU computations remain non-blocking. In my current implementation, I utilize the non-blocking copy_ method to transfer data from a GPU tensor to a CPU... | https://github.com/pytorch/pytorch/issues/143906 | open | [
"needs reproduction",
"oncall: distributed",
"triaged"
] | 2024-12-27T11:22:11Z | 2025-01-03T18:13:46Z | null | zhanghb55 |
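The usual single-process answer to this question combines a side CUDA stream, a pinned host buffer, and an event; a minimal sketch follows (a cross-process handoff, which the issue body asks about, would still need shared or IPC memory on top of this):
```python
import torch

device = torch.device("cuda")
copy_stream = torch.cuda.Stream()

gpu_tensor = torch.randn(1024, 1024, device=device)
# Pinned host memory is required for the copy to be truly asynchronous.
cpu_tensor = torch.empty(gpu_tensor.shape, dtype=gpu_tensor.dtype, pin_memory=True)

done = torch.cuda.Event()
with torch.cuda.stream(copy_stream):
    copy_stream.wait_stream(torch.cuda.current_stream())  # see the producer's writes
    cpu_tensor.copy_(gpu_tensor, non_blocking=True)
    done.record()

# GPU compute on the default stream continues here without blocking...
done.synchronize()  # block only at the point the CPU actually needs the data
```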
huggingface/trl | 2,523 | How to solve the situation where the tokenizer of the reward model is inconsistent with the tokenizer of the actor model? | https://github.com/huggingface/trl/issues/2523 | open | [
"❓ question"
] | 2024-12-27T09:43:06Z | 2024-12-28T06:26:16Z | null | stephen-nju | |
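Since the issue body is empty, the standard workaround is worth sketching: decode with the actor's tokenizer and re-encode with the reward model's. The checkpoint names below are hypothetical placeholders:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

actor_tok = AutoTokenizer.from_pretrained("actor-model")    # placeholder ids
reward_tok = AutoTokenizer.from_pretrained("reward-model")
reward_model = AutoModelForSequenceClassification.from_pretrained("reward-model")

def score_responses(response_ids: torch.Tensor) -> torch.Tensor:
    # Round-trip through text: decode with the actor's tokenizer,
    # then re-encode with the reward model's tokenizer before scoring.
    texts = actor_tok.batch_decode(response_ids, skip_special_tokens=True)
    inputs = reward_tok(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        return reward_model(**inputs).logits.squeeze(-1)
```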
huggingface/peft | 2,298 | Qdora support | ### Feature request
is it possible to use qdora with peft?
### Motivation
QDoRA is better than QLoRA and performs like full fine-tuning.
### Your contribution
```
peft_config = LoraConfig(
r=8,
lora_alpha=32,
lora_dropout=0.1,
qdora=True # adding qdora
)
``` | https://github.com/huggingface/peft/issues/2298 | closed | [] | 2024-12-27T04:47:54Z | 2025-01-03T12:26:58Z | 2 | imrankh46 |
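For reference, recent peft versions expose DoRA through `use_dora=True` on `LoraConfig` rather than a `qdora` flag; pairing it with a 4-bit base model gives a QDoRA-style setup. A sketch, with the model id as a placeholder:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit base model (the "Q" part), via bitsandbytes.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder model id
    quantization_config=bnb_config,
)

# DoRA is switched on inside LoraConfig.
peft_config = LoraConfig(r=8, lora_alpha=32, lora_dropout=0.1, use_dora=True)
model = get_peft_model(model, peft_config)
```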
huggingface/smolagents | 2 | How to call OpenAI-like models through an API? | How to call OpenAI-like models through an API? | https://github.com/huggingface/smolagents/issues/2 | closed | [] | 2024-12-27T04:34:35Z | 2024-12-29T21:58:10Z | null | win4r |
huggingface/datasets | 7,347 | Converting Arrow to WebDataset TAR Format for Offline Use | ### Feature request
Hi,
I've downloaded an Arrow-formatted dataset offline using Hugging Face's datasets library:
```
import json
from datasets import load_dataset
dataset = load_dataset("pixparse/cc3m-wds")
dataset.save_to_disk("./cc3m_1")
```
now I need to convert it to WebDataset's TAR form... | https://github.com/huggingface/datasets/issues/7347 | closed | [
"enhancement"
] | 2024-12-27T01:40:44Z | 2024-12-31T17:38:00Z | 4 | katie312 |
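A minimal conversion sketch using webdataset's `TarWriter`, assuming a `train` split and the `jpg`/`txt` column names of the source dataset (a real export would also shard into multiple tars):
```python
import webdataset as wds
from datasets import load_from_disk

ds = load_from_disk("./cc3m_1")["train"]  # split name assumed

with wds.TarWriter("cc3m-000000.tar") as sink:
    for i, example in enumerate(ds):
        # Dict keys become file extensions inside the tar; __key__ names the sample.
        sink.write({
            "__key__": f"{i:09d}",
            "jpg": example["jpg"],  # column names assumed
            "txt": example["txt"],
        })
```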
huggingface/transformers.js | 1,118 | Trying to use custom finetuned Whisper Model with | ### Question
@xenova I am trying to use our own fine-tuned Whisper model https://huggingface.co/medxcribe/whisper-base.en with
https://huggingface.co/spaces/Xenova/whisper-web. I have uploaded it into a separate repo for reference: https://huggingface.co/medxcribe/whisper-base-onnx.en.
We have converted the fine tun... | https://github.com/huggingface/transformers.js/issues/1118 | open | [
"question"
] | 2024-12-26T20:18:36Z | 2024-12-26T20:18:36Z | null | vijaim |
huggingface/finetrainers | 153 | How to generate result of validation and resolution. | Hi author:
I am using your Hunyuan fine-tuning bash script to fine-tune a LoRA on my own dataset with an original resolution of 1080p. However, I find the model can only run on videos whose height and width are both divisible by 32. Can the model also be trained on 360p or 720p video, and why? | https://github.com/huggingface/finetrainers/issues/153 | closed | [] | 2024-12-26T15:21:22Z | 2025-01-10T23:38:39Z | null | Aristo23333 |
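The divisibility constraint typically comes from the VAE/patch downsampling factors, so resolutions like 360p and 720p are usually snapped down to the nearest multiple of 32 first; a trivial sketch:
```python
# Minimal sketch: snap a resolution down to the nearest multiple of 32 before training.
def snap_down(x: int, base: int = 32) -> int:
    return (x // base) * base

print(snap_down(1080), snap_down(1920))  # 1056 1920
print(snap_down(720), snap_down(1280))   # 704 1280
```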
huggingface/lerobot | 597 | Inquiry About Support for RDT-1B Model | Hi,
I would like to extend my heartfelt thanks for maintaining such an outstanding codebase. Your dedication and hard work have significantly contributed to advancements in the robotics field, and I truly appreciate the resources and support your community provides.
I am reaching out to inquire whether there are an... | https://github.com/huggingface/lerobot/issues/597 | closed | [
"question",
"policies",
"stale"
] | 2024-12-26T11:12:58Z | 2025-10-08T20:52:51Z | null | Robert-hua |
huggingface/diffusers | 10,383 | [Request] Optimize HunyuanVideo Inference Speed with ParaAttention | Hi guys,
First and foremost, I would like to commend you for the incredible work on the `diffusers` library. It has been an invaluable resource for my projects.
I am writing to suggest an enhancement to the inference speed of the `HunyuanVideo` model. We have found that using [ParaAttention](https://github.com/ch... | https://github.com/huggingface/diffusers/issues/10383 | closed | [
"roadmap"
] | 2024-12-25T15:07:53Z | 2025-01-16T18:05:15Z | 10 | chengzeyi |
huggingface/lerobot | 596 | How to achieve multiple tasks on the basis of LeRobot ? | LeRobot can achieve single tasks (such as inserting, transferring blocks, etc.), how to achieve multiple tasks on the basis of LeRobot (such as first recognizing objects and classifying, and then putting objects in order in boxes, etc.)?"
Please give me some ideas. | https://github.com/huggingface/lerobot/issues/596 | closed | [
"question",
"stale"
] | 2024-12-25T12:20:37Z | 2025-10-17T11:38:20Z | null | wangwisdom |
huggingface/diffusers | 10,375 | [low priority] Please fix links in documentation | https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuan_video
Both links are broken
Make sure to check out the Schedulers [guide](https://huggingface.co/docs/diffusers/main/en/using-diffusers/schedulers.md) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse co... | https://github.com/huggingface/diffusers/issues/10375 | closed | [] | 2024-12-25T09:04:33Z | 2024-12-28T20:01:27Z | 0 | nitinmukesh |
huggingface/diffusers | 10,374 | Is there any plan to support TeaCache for training-free acceleration? | TeaCache is a training-free inference acceleration method for visual generation. TeaCache currently supports HunyuanVideo, CogVideoX, Open-Sora, Open-Sora-Plan and Latte. TeaCache can speedup HunyuanVideo 2x without much visual quality degradation. For example, the inference for a 720p, 129-frame video takes around 5... | https://github.com/huggingface/diffusers/issues/10374 | open | [
"wip"
] | 2024-12-25T05:00:23Z | 2025-01-27T01:28:53Z | 4 | LiewFeng |
huggingface/chat-ui | 1,633 | docker run is not working | I'm running the following:
```
docker run -p 3000:3000 --env-file env.local huggingface/chat-ui
```
The env file has the following set: `HF_TOKEN`, `MONGODB_URL` and `MODELS`. The container prints the following:
```
Listening on 0.0.0.0:3000
```
However, on hitting the `localhost:3000`, I get a blank page wit... | https://github.com/huggingface/chat-ui/issues/1633 | open | [
"support"
] | 2024-12-23T08:36:09Z | 2025-01-06T07:30:46Z | 1 | sebastiangonsal |
huggingface/peft | 2,293 | Is it possible to add LoRA on specific head? | ### Feature request
Could I add LoRA only to some selected heads on the model?
I read some documentation [here](https://huggingface.co/docs/peft/developer_guides/custom_models), but am still not sure about how to implement my goal.
### Motivation
Current LoRA Config can allow users to decide where matrices to add L... | https://github.com/huggingface/peft/issues/2293 | closed | [] | 2024-12-22T19:57:54Z | 2025-12-14T10:07:49Z | 12 | SpeeeedLee |
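peft does not target individual attention heads, but it can restrict adapters to chosen modules in chosen layers; a sketch (module names and layer indices depend on the architecture and are assumptions here):
```python
from peft import LoraConfig

# Per-head injection isn't directly supported, but LoRA can be limited to
# particular projection modules in particular layers.
config = LoraConfig(
    r=8,
    target_modules=["q_proj", "v_proj"],  # module names vary by architecture
    layers_to_transform=[0, 1, 2, 3],     # only these decoder layers get adapters
)
```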
huggingface/datasets | 7,344 | HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access SlimPajama-627B or c4 on TPUs | ### Describe the bug
I am trying to run some trainings on Google's TPUs using Huggingface's DataLoader on [SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B) and [c4](https://huggingface.co/datasets/allenai/c4), but I end up running into `429 Client Error: Too Many Requests for URL` error when ... | https://github.com/huggingface/datasets/issues/7344 | closed | [] | 2024-12-22T16:30:07Z | 2025-01-15T05:32:00Z | 2 | clankur |
huggingface/diffusers | 10,345 | safetensor streaming in from_single_file_loading() | can we add support for streaming safetensors while loading using `from_single_file`.
source:https://github.com/run-ai/runai-model-streamer
example:
```python
from runai_model_streamer import SafetensorsStreamer
file_path = "/path/to/file.safetensors"
with SafetensorsStreamer() as streamer:
streamer.str... | https://github.com/huggingface/diffusers/issues/10345 | closed | [
"stale"
] | 2024-12-22T13:27:46Z | 2025-01-21T15:07:58Z | 2 | AbhinavJangra29 |
pytorch/xla | 8,516 | how to release tpu memory after del diffusers pipeline | ## ❓ Questions and Help
I create a pipeline:
`
pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.bfloat16).to(torch_xla.core.xla_model.xla_device())
pipeline.to('cpu')
pipeline = StableAudioPipeline.from_pretrained("stabilityai/stable-audio-open-1.0", ... | https://github.com/pytorch/xla/issues/8516 | closed | [
"duplicate",
"question",
"xla:tpu"
] | 2024-12-22T11:03:38Z | 2025-02-13T13:40:42Z | null | ghost |
pytorch/torchchat | 1,436 | If scripts need `bash`, don't say to use `sh` | ### 🐛 Describe the bug
On Debian systems, sh isn't bash, it's [dash](https://en.wikipedia.org/wiki/Almquist_shell#Dash). I haven't tested every script, but https://github.com/pytorch/torchchat/blob/main/docs/quantization.md says to run `sh torchchat/utils/scripts/build_torchao_ops.sh`, but this script fails unless ru... | https://github.com/pytorch/torchchat/issues/1436 | closed | [
"bug",
"documentation",
"actionable",
"Quantization",
"triaged"
] | 2024-12-22T06:43:48Z | 2024-12-23T06:49:43Z | 2 | swolchok |
pytorch/ao | 1,456 | [Bug] Unable to Obtain Quantized Weights Independently | **Description**
Thank you so much for your excellent work! I have been trying out a few demos to better understand your project.
While running [this demo](https://github.com/pytorch/ao/tree/main/torchao/quantization#a16w8-int8-weightonly-quantization), I attempted to independently print the quantized weight value... | https://github.com/pytorch/ao/issues/1456 | closed | [
"question",
"triaged"
] | 2024-12-22T02:55:18Z | 2024-12-24T06:53:03Z | null | Mingbo-Lee |
huggingface/accelerate | 3,309 | deepspeed zero3 how to save custom model? | DeepSpeedEngine(
(module): LLMDecoder(
(model): Qwen2ForSequenceClassification(
(model): Qwen2Model(
(embed_tokens): Embedding(151936, 1536)
(layers): ModuleList(
(0-27): 28 x Qwen2DecoderLayer(
(self_attn): Qwen2SdpaAttention(
(q_proj): Linear(in_... | https://github.com/huggingface/accelerate/issues/3309 | closed | [] | 2024-12-21T17:01:17Z | 2025-01-30T15:06:45Z | null | NLPJCL |
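Under ZeRO-3 the parameters live sharded across ranks, so the usual pattern is to let accelerate gather them before saving; a minimal sketch, assuming `model` is the custom model from the issue:
```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = accelerator.prepare(model)  # the custom LLMDecoder from the issue
# ... training loop ...

# get_state_dict() consolidates the ZeRO-3 shards into full weights.
state_dict = accelerator.get_state_dict(model)
if accelerator.is_main_process:
    torch.save(state_dict, "custom_model.bin")
```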
pytorch/xla | 8,515 | multi_queries_paged_attention_kernel fails with Llama3 70B on a TPU-v4-16 with sequence length of 256 | I'm running Llama3 70B with vllm on a TPU-v4-16, when using the flash attention kernel i'm able to go up to 16k, but using multi_queries_paged_attention with sequence length 256, it seems that the page table is taking too much smem.
@vanbasten23 @WoosukKwon any idea how to address this (i'm familiar with pallas progr... | https://github.com/pytorch/xla/issues/8515 | open | [
"performance",
"pallas",
"xla:tpu"
] | 2024-12-21T14:23:04Z | 2025-02-13T13:43:19Z | 2 | OhadRubin |
huggingface/diffusers | 10,334 | Sana broke on MacOS. Grey images on MPS, NaN's on CPU. | ### Describe the bug
Just started to play with Sana, was excited when I saw it was coming to Diffusers as the NVIDIA supplied code was full of CUDA only stuff.
Ran the example code, changing cuda to mps and got a grey image.
)` when passing the input ten... | https://github.com/pytorch/xla/issues/8510 | closed | [] | 2024-12-20T17:51:33Z | 2025-01-08T21:59:14Z | 4 | JmeanJmy |
pytorch/torchtitan | 757 | [question]can't disable CP for specific (unsupported) SDPA op | ## Problem
Currently the API of context parallel has five problems.
1. It only supports applying CP to the whole model. If we have some cross-attn in the prep part of the model with an unsupported shape, it's impossible to apply CP, since `_context_parallel` always overrides all SDPA and needs to wrap the whole backward.
2. no shard/unshar... | https://github.com/pytorch/torchtitan/issues/757 | open | [
"enhancement",
"module: context parallel"
] | 2024-12-20T11:00:23Z | 2025-03-12T10:30:52Z | 3 | FindDefinition |
huggingface/sentence-transformers | 3,141 | How to load ModernBERT model correctly? | Hi Teams,
I want to ask how to properly load [ModernBERT](https://huggingface.co/blog/modernbert) using SentenceTransformer?
The main difficulty I met is about the weight loading of prediction head as defined [here](https://github.com/huggingface/transformers/blob/f42084e6411c39b74309af4a7d6ed640c01a4c9e/src/tran... | https://github.com/huggingface/sentence-transformers/issues/3141 | closed | [] | 2024-12-20T06:52:44Z | 2024-12-24T03:08:47Z | null | Hannibal046 |
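For embedding use, the MLM prediction head isn't needed at all, so one answer is to assemble the encoder and a pooling layer by hand; a sketch using the public module API:
```python
from sentence_transformers import SentenceTransformer, models

# Wrap the raw encoder (no prediction head needed for embeddings).
word = models.Transformer("answerdotai/ModernBERT-base")
pooling = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="mean")
model = SentenceTransformer(modules=[word, pooling])

embeddings = model.encode(["ModernBERT as a sentence encoder."])
```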
huggingface/picotron | 15 | Difference between picotron and nanotron | What is the difference between picotron and [nanotron](https://github.com/huggingface/nanotron)? Why huggingface team rolled out two hybrid-parallelism framework? | https://github.com/huggingface/picotron/issues/15 | closed | [
"question"
] | 2024-12-19T12:48:57Z | 2024-12-20T10:17:25Z | null | cailun01 |
huggingface/diffusers | 10,302 | Using FP8 for inference without CPU offloading can introduce noise. | ### Describe the bug
If I use ```pipe.enable_model_cpu_offload(device=device)```, the model can perform inference correctly after warming up. However, if I comment out this line, the inference results are noisy.
### Reproduction
```python
from diffusers import (
FluxPipeline,
FluxTransformer2DModel... | https://github.com/huggingface/diffusers/issues/10302 | open | [
"bug"
] | 2024-12-19T12:39:06Z | 2025-03-10T14:18:58Z | 6 | todochenxi |
huggingface/candle | 2,674 | [Question] How to create a autograd function like in PyTorch? How to customize forward and backward process? | https://github.com/huggingface/candle/issues/2674 | open | [] | 2024-12-19T07:02:04Z | 2024-12-19T07:02:15Z | null | VanderBieu | |
huggingface/blog | 2,551 | How to process and visualize the segment output tokens? | How to process the segment tokens and generate segmentation masks? what the output means?

| https://github.com/huggingface/blog/issues/2551 | open | [] | 2024-12-19T03:11:15Z | 2024-12-19T03:11:15Z | null | 00mmw |
pytorch/ao | 1,437 | Segmentation Fault Running Int8 Quantized Model on GPU | Hi! We got into segmentation fault error when trying to run model inference on gpu. Below is a minimal example from the tutorial ([link](https://pytorch.org/docs/stable/quantization.html#post-training-static-quantization)):
```
import torch
import time
# define a floating point model where some layers could be ... | https://github.com/pytorch/ao/issues/1437 | closed | [
"question",
"triaged"
] | 2024-12-18T19:51:48Z | 2025-01-23T19:16:09Z | null | wendywangwwt |
pytorch/TensorRT | 3,331 | ❓ [Question] Jetson AGX Orin build and install torch_tensorrt wheel file Failed | ## ❓ Question
I follow this [tutorial](https://pytorch.org/TensorRT/getting_started/installation.html) to install Torch-TensorRT, but in the last step:
```
cuda_version=$(nvcc --version | grep Cuda | grep release | cut -d ',' -f 2 | sed -e 's/ release //g')
export TORCH_INSTALL_PATH=$(python -c "import torch, o... | https://github.com/pytorch/TensorRT/issues/3331 | open | [
"question"
] | 2024-12-18T18:55:56Z | 2024-12-18T20:30:20Z | null | breknddone |
huggingface/transformers | 35,316 | How to use a custom Image Processor? | I want to use the processor in the form of `auto_map` but when using `AutoProcessor.from_pretrained`, I am unable to load the custom `ImageProcessor`.
The root cause lies in the use of the `transformers_module` to initialize the class in `ProcessorMixin`.
https://github.com/huggingface/transformers/blob/c7e48053... | https://github.com/huggingface/transformers/issues/35316 | closed | [] | 2024-12-18T12:04:33Z | 2024-12-19T02:53:43Z | null | glamourzc |
huggingface/diffusers | 10,281 | Request to implement FreeScale, a new diffusion scheduler | ### Model/Pipeline/Scheduler description
FreeScale is a tuning-free method for higher-resolution visual generation, unlocking the 8k image generation for pre-trained SDXL! Compared to direct inference by SDXL, FreeScale brings negligible additional memory and time costs.
.
The reason is that the loader creates multiple proces... | https://github.com/huggingface/diffusers/issues/10280 | closed | [
"bug"
] | 2024-12-18T06:02:41Z | 2025-01-10T10:11:05Z | 4 | wlhee |
pytorch/xla | 8,497 | API guide code snippets don't work | ## 📚 Documentation
Trying to follow the example here: https://github.com/pytorch/xla/blob/master/API_GUIDE.md#running-on-a-single-xla-device
The Python code snippet doesn't work, as `MNIST()`, `nn`, and `optim` are all undefined.
| https://github.com/pytorch/xla/issues/8497 | closed | [
"bug",
"documentation"
] | 2024-12-17T23:14:45Z | 2025-05-20T15:55:40Z | 6 | richardsliu |
huggingface/optimum-neuron | 750 | Document how to use Qwen 2.5 | ### Feature request
Qwen 2.5 7B Instruct on EC2 with HF DL AMI
Qwen 2.5 7B Instruct on Sagemaker with HF DLC Neuronx TGI
Maybe something for the code version too?
Dependency of adding the model to the cache
### Motivation
increase AMI and DLC usage
### Your contribution
doc | https://github.com/huggingface/optimum-neuron/issues/750 | closed | [
"Stale"
] | 2024-12-17T16:03:25Z | 2025-01-22T08:04:54Z | null | pagezyhf |
pytorch/serve | 3,375 | 503 InternalServerException, prediction failed | ### 🐛 Describe the bug
Hello, my inference request is returning a 503 InternalServerException, prediction failed. How can I resolve this issue? Below are the specific request, inference response, and torchserve logs. Additional note: I am using Docker to run the service, and the inference works fine with the gRPC A... | https://github.com/pytorch/serve/issues/3375 | closed | [] | 2024-12-17T04:02:49Z | 2024-12-17T08:43:24Z | 1 | Jax29 |
pytorch/torchtitan | 743 | Model init with HuggingFace model | I am writing a simple script to run FSDP2 (`fully_shard`) on the `pythia-1b` model available on HuggingFace. I am currently running the model on 1 node with 2 devices. I was following the meta-device initialisation from the [FSDP2 docs](https://github.com/pytorch/torchtitan/blob/main/docs/fsdp.md). However, I think the... | https://github.com/pytorch/torchtitan/issues/743 | open | [
"bug",
"question",
"module: checkpoint",
"huggingface integration"
] | 2024-12-16T05:45:04Z | 2025-04-22T18:38:22Z | null | neeldani |
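For reference, a non-meta-device FSDP2 sketch over a Hugging Face model looks roughly like this; the import path of `fully_shard` has moved between torch releases, so treat it as an assumption:
```python
from torch.distributed._composable.fsdp import fully_shard  # location varies by torch version
from transformers import AutoModelForCausalLM

# Pythia is a GPT-NeoX model, so its transformer blocks live under gpt_neox.layers.
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1b")
for block in model.gpt_neox.layers:
    fully_shard(block)   # shard each block as its own FSDP unit
fully_shard(model)       # then shard the remainder (embeddings, head)
```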
pytorch/torchtitan | 742 | Low bit Optimizers & FA-3 | 1. hi have there been any tests with fa-3 and low bit optimizers from torchao like FP8adam for 8bit adam? i see divergence in training when resuming a FA-2 checkpoint with FA-3 or when using 8BITADAMW | https://github.com/pytorch/torchtitan/issues/742 | open | [
"bug",
"question"
] | 2024-12-16T03:56:22Z | 2025-01-07T00:55:59Z | null | asahni-sc |
huggingface/accelerate | 3,294 | How to run accelerate with PYTORCH_ENABLE_MPS_FALLBACK | ### System Info
```Shell
MacOS
transformers>=4.35.1
datasets[audio]>=2.14.7
accelerate>=0.24.1
matplotlib
wandb
tensorboard
Cython
- `Accelerate` version: 1.2.1
- Platform: macOS-14.7.1-arm64-arm-64bit
- `accelerate` bash location: .venv/bin/accelerate
- Python version: 3.12.3
- Numpy version: 2.0.2
... | https://github.com/huggingface/accelerate/issues/3294 | closed | [] | 2024-12-15T07:03:41Z | 2025-01-23T15:06:57Z | null | mirodil-ml |
pytorch/audio | 3,863 | How to install or download avutil-<VERSION>.dll and others on Windows Python venv not Conda! | I am reading this page and there is only information for Conda.
I am not using Conda; I am using a Python venv.
So how do I install, or where do I get, these DLL files?
https://pytorch.org/audio/stable/installation.html#optional-dependencies
`When searching for FFmpeg installation, TorchAudio looks for library files wh... | https://github.com/pytorch/audio/issues/3863 | closed | [] | 2024-12-14T13:15:01Z | 2024-12-14T13:48:42Z | null | FurkanGozukara |
pytorch/tutorials | 3,186 | Writing a gradient tutorial, focused on leaf vs non leaf tensors. | There is no tutorial that specifically talks about requires_grad, retain_grad, and leaf tensor/ non-leaf tensors and how they interact with each other. Can I write a tutorial specifically talking about this topic? This will be useful when gradients are used in unusual places, as is the case for the deep dream algorithm... | https://github.com/pytorch/tutorials/issues/3186 | closed | [
"advanced",
"tutorial-proposal",
"docathon-h1-2025",
"hard"
] | 2024-12-14T06:44:48Z | 2025-08-20T23:30:53Z | 5 | JitheshPavan |
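The proposed tutorial's core distinction fits in a few lines; a minimal worked example:
```python
import torch

x = torch.randn(3, requires_grad=True)  # leaf tensor: created by the user
y = x * 2                                # non-leaf: produced by an operation
y.retain_grad()                          # opt in to keeping its .grad
z = y.sum()
z.backward()

print(x.is_leaf, y.is_leaf)  # True False
print(x.grad)                # populated for leaves by default (here: [2., 2., 2.])
print(y.grad)                # populated only because of retain_grad() (here: [1., 1., 1.])
```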
huggingface/diffusers | 10,223 | Where should I obtain the lora-sdxl-dreambooth-id in Inference | ### Describe the bug
I tried to upload the download link from the README file generated during training, but an error indicated it was incorrect. Where should I obtain the lora-id for Inference?
### Reproduction
README.md:
---
base_model: /data/ziqiang/czc/diffusers/examples/dreambooth/model
library_name: diffuse... | https://github.com/huggingface/diffusers/issues/10223 | open | [
"bug",
"stale"
] | 2024-12-14T06:34:56Z | 2025-02-07T15:03:24Z | 5 | Zarato2122 |
pytorch/torchchat | 1,424 | Misaligned AOTI input; potential perf gains by fixing? | ### 🐛 Describe the bug
Picked up in https://github.com/pytorch/torchchat/pull/1367, and worked around via https://github.com/pytorch/pytorch/pull/143236, it appears the input to the torchchat AOTI runner is not 16 byte aligned.
While the PR from pytorch/pytorch eases this constraint, this may be indicative of pot... | https://github.com/pytorch/torchchat/issues/1424 | open | [
"bug",
"actionable",
"Compile / AOTI",
"triaged"
] | 2024-12-14T01:11:30Z | 2024-12-17T23:35:29Z | 1 | Jack-Khuu |
pytorch/xla | 8,492 | How to do multi-machine SPMD/FSDPv2 training with TPU? | ## ❓ Questions and Help
I saw https://github.com/pytorch/xla/issues/6362, but there's no example training script to be found. For example, if I have multiple TPU v3-8 VMs, how would I achieve this with SPMD/FSDPv2?
I'm currently sending the commands to all TPU VMs this way:
```
python3.10 podrun --include-local -- hos... | https://github.com/pytorch/xla/issues/8492 | closed | [
"question",
"distributed"
] | 2024-12-13T18:47:39Z | 2025-05-05T12:34:29Z | null | radna0 |
huggingface/lerobot | 575 | Gello dataset converter | I made a converter for the [Gello](https://wuphilipp.github.io/gello_site/) dataset format (pickles containing dicts with all the observations).
If this is of interest, I am willing to contribute it back here.
The current code can be found [here](https://github.com/tlpss/lerobot/blob/tlpss-dev/lerobot/common/da... | https://github.com/huggingface/lerobot/issues/575 | closed | [
"enhancement",
"question",
"dataset",
"stale"
] | 2024-12-13T15:47:58Z | 2025-10-08T08:50:40Z | null | tlpss |
huggingface/diffusers | 10,207 | KolorsPipeline does not support from_single_file | from diffusers import KolorsPipeline
KolorsPipeline.from_single_file("models/kolrs-8steps.safetensors")
How does KolorsPipeline load a single-file model? | https://github.com/huggingface/diffusers/issues/10207 | open | [
"stale",
"single_file"
] | 2024-12-13T09:44:46Z | 2025-01-12T15:02:46Z | 3 | Thekey756 |
huggingface/sentence-transformers | 3,134 | How to set a proper batchsize when using CachedMultipleNegativesRankingLoss? | When using the [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss), I tried different batchsize(per_device_train_batch_size) setting, and found that 512 was the maximum. When batchsize was greater than 512, GPU memory OOM was happened.
... | https://github.com/huggingface/sentence-transformers/issues/3134 | open | [] | 2024-12-13T09:25:34Z | 2024-12-27T13:46:17Z | null | awmoe |
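For context, the cached variant decouples GPU memory from the logical batch size via its `mini_batch_size` argument; a minimal sketch:
```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
# GradCache processes the large logical batch in chunks of mini_batch_size,
# so peak memory tracks the chunk size while the loss still sees the full batch.
loss = losses.CachedMultipleNegativesRankingLoss(model, mini_batch_size=32)
```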
huggingface/sentence-transformers | 3,133 | How to avoid the long time waiting before start training? | Dear developer,
Thanks for the great sentence-transformers library!
I am finetuning the [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) using my own data following the tutorial from: https://sbert.net/docs/sentence_... | https://github.com/huggingface/sentence-transformers/issues/3133 | open | [] | 2024-12-13T09:10:32Z | 2024-12-25T03:46:50Z | null | awmoe |
pytorch/torchtitan | 735 | [question]FSDP2 have more peak active memory/reserved memory than FSDP1 | ## Environment
OS: Ubuntu
GPU: 8x GPU
torch: torch-2.6.0.dev20241212+cu124
DDP: 4-way Tensor Parallel * 2-way FSDP
## Problem
I'm using FSDP+TP in my model and follow torchtitan code. when I switch fsdp1 to fsdp2, the memory usage showed by `nvidia-smi` increases by 10GB, also the peak active memory is greatly ... | https://github.com/pytorch/torchtitan/issues/735 | closed | [
"question"
] | 2024-12-13T08:42:49Z | 2024-12-18T11:31:23Z | null | FindDefinition |
pytorch/torchtitan | 734 | using fsdp2 wrapper Flux(text to image) model , gradient is inconsistent with fsdp1 | i use register_full_backward_hook print grad when backward like this way:
```
def print_grad_hook(name):
def hook(module, grad_input, grad_output):
print(f"Layer Name: {name},Grad input: {grad_input},Grad output: {grad_output}")
return hook
for name, layer in model.named_children():
layer.reg... | https://github.com/pytorch/torchtitan/issues/734 | closed | [
"question"
] | 2024-12-13T07:59:32Z | 2025-08-21T02:58:13Z | null | yanmj0601 |
huggingface/lighteval | 447 | [BUG] How to eval a large-scale model using 1dp+8pp? | ## Describe the bug
I tried to eval a large-scale model using 1dp+8pp with accelerate. I use a command like the following:
```
accelerate launch --multi_gpu --num_processes=1 run_evals_accelerate.py \
--model_args="pretrained=<path to model on the hub>" \
--model_parallel \
--tasks <task parameters> \
... | https://github.com/huggingface/lighteval/issues/447 | closed | [
"bug"
] | 2024-12-13T03:56:36Z | 2025-01-02T11:20:20Z | null | mxjmtxrm |
pytorch/vision | 8,803 | OpenGL interoperability | ### 🚀 The feature
Zero-copy transfer of data between PyTorch and OpenGL on GPU by including "OpenGL interoperability" from CUDA in torchvision.
### Motivation, pitch
I am working on a real-time machine learning graphics project which uses OpenGL both as an intermediate processing step in the model and to visualize ... | https://github.com/pytorch/vision/issues/8803 | open | [] | 2024-12-12T16:04:11Z | 2024-12-12T16:04:11Z | 0 | cajoek |
huggingface/diffusers | 10,196 | How to finetune Flux-dev full params, 80G OOM ... | I am using the [train_dreambooth_flux](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_flux.py) script to fine-tune the `flux-dev` model with full parameters using DeepSpeed Stage 2. However, I am still encountering out-of-memory issues on an 80GB GPU. Are there any solutions ava... | https://github.com/huggingface/diffusers/issues/10196 | open | [
"training"
] | 2024-12-12T09:24:18Z | 2025-08-20T13:19:20Z | null | huangjun12 |
huggingface/chat-ui | 1,627 | Cookie “hf-chat” has been rejected because there is an existing “secure” cookie. | ## Bug description
I use `ghcr.io/huggingface/chat-ui-db:latest` to host `ChatUI` in docker. If `PUBLIC_ORIGIN="http://localhost"` in `.env.local` and visit `ChatUI` through `http://localhost:3000`, it works well. Then I try to replace `localhost` by my domain name `qiangwulab.sjtu.edu.cn`. For the sake of testing, ... | https://github.com/huggingface/chat-ui/issues/1627 | open | [
"bug"
] | 2024-12-12T07:04:26Z | 2024-12-12T07:04:26Z | 0 | ljw20180420 |
pytorch/xla | 8,486 | 2 questions for the composite op feature | ## ❓ Questions and Help
Glad to see that the [composite op feature](https://github.com/pytorch/xla/blob/master/docs/source/features/stablehlo.md#preserving-high-level-pytorch-operations-in-stablehlo-by-generating-stablehlocomposite) is added to Torch-XLA. I have tried this feature and got some questions, hope to get a... | https://github.com/pytorch/xla/issues/8486 | closed | [
"question",
"stablehlo"
] | 2024-12-12T02:37:57Z | 2025-05-05T12:32:51Z | null | Zantares |
pytorch/ao | 1,403 | ImportError: cannot import name 'weight_only_quant_qconfig' from 'torchao.quantization' (R:\CogVideoX_v3\CogVideo\venv\Lib\site-packages\torchao\quantization\__init__.py) | I am trying to use [CogVideoX1.5-5B-I2V](https://huggingface.co/THUDM/CogVideoX1.5-5B-I2V) with following
I am on Windows
Everything installed but still getting this error - version 0.7.0
```
Traceback (most recent call last):
File "R:\CogVideoX_v3\CogVideo\inference\gradio_composite_demo\app.py", line 4... | https://github.com/pytorch/ao/issues/1403 | closed | [
"question",
"triaged"
] | 2024-12-11T23:43:15Z | 2024-12-12T01:45:57Z | null | FurkanGozukara |
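`weight_only_quant_qconfig` is not a name torchao 0.7 exports; the current entry points are `quantize_` plus a config helper such as `int8_weight_only`. A sketch of the names that do exist (whether the CogVideo app accepts them is a separate question):
```python
import torch
from torchao.quantization import quantize_, int8_weight_only

model = torch.nn.Sequential(torch.nn.Linear(64, 64))  # stand-in for the real model
quantize_(model, int8_weight_only())  # quantizes Linear weights to int8 in place
```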
huggingface/diffusers | 10,190 | How to use fluxfill to replace background? | I want to use fluxfill to change the background, but I find that the prompt words are almost useless, and the output image looks more like the original image.
I have tested multiple guidance_scale values, but found that the resulting image is more related to the original image and less related to the prompt. | https://github.com/huggingface/diffusers/issues/10190 | closed | [] | 2024-12-11T10:48:27Z | 2025-05-23T12:12:28Z | null | babyta |
huggingface/sentence-transformers | 3,132 | How to train a model with DDP for TSDAE | Hello, I want to train a model using the TSDAE method.
Is there any way to train with DDP (multi-GPU)?
I already read your sample code.
But I'm not sure how to apply DenoisingAutoEncoderDataset in SentenceTransformerTrainer.
([[v3] Training refactor - MultiGPU, loss logging, bf16, etc](https://github.com/UKPLab/sen... | https://github.com/huggingface/sentence-transformers/issues/3132 | closed | [] | 2024-12-11T10:39:30Z | 2024-12-11T14:04:32Z | null | OnAnd0n |
pytorch/TensorRT | 3,317 | ❓ [Question] Jetson AGX Orin Install in Jetpack 6.1 Build did NOT complete successfully | ## ❓ Question
I follow this [tutorial](https://pytorch.org/TensorRT/getting_started/installation.html) to install Torch-TensorRT, but in the last step:
```
# build and install torch_tensorrt wheel file
python setup.py --use-cxx11-abi install --user
```
some errors happened:
```
using CXX11 ABI build
Jetpack ... | https://github.com/pytorch/TensorRT/issues/3317 | open | [
"question"
] | 2024-12-11T09:21:09Z | 2024-12-18T19:16:46Z | null | breknddone |
huggingface/diffusers | 10,180 | Can't load multiple loras when using Flux Control LoRA | ### Describe the bug
I was trying out the FluxControlPipeline with the Control LoRA introduced in #9999 , but had issues loading in multiple loras.
For example, if I load the depth lora first and then the 8-step lora, it errors on the 8-step lora, and if I load the 8-step lora first and then the depth lora, it err... | https://github.com/huggingface/diffusers/issues/10180 | closed | [
"bug",
"help wanted",
"lora"
] | 2024-12-10T21:40:24Z | 2024-12-20T09:00:33Z | 11 | jonathanyin12 |
huggingface/transformers | 35,186 | How to convert my Mask2Former model (ResNet-50 backbone) to Hugging Face transformer | ### System Info
```shell
- `transformers` version: 4.34.0
- Platform: Linux-6.8.0-31-generic-x86_64-with-glibc2.17
- Python version: 3.8.20
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.5
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
... | https://github.com/huggingface/transformers/issues/35186 | closed | [] | 2024-12-10T19:17:22Z | 2025-01-18T08:03:21Z | null | yujunwei04 |
huggingface/datasets | 7,318 | Introduce support for PDFs | ### Feature request
The idea (discussed in the Discord server with @lhoestq ) is to have a Pdf type like Image/Audio/Video. For example [Video](https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py) was recently added and contains how to decode a video file encoded in a dictionary like {"pat... | https://github.com/huggingface/datasets/issues/7318 | open | [
"enhancement"
] | 2024-12-10T16:59:48Z | 2024-12-12T18:38:13Z | 6 | yabramuvdi |
huggingface/diffusers | 10,172 | Raise an error when `len(gligen_images )` is not equal to `len(gligen_phrases)` in `StableDiffusionGLIGENTextImagePipeline` | To whom it may concern,
I found that when using `StableDiffusionGLIGENTextImagePipeline`, there is no error raised when `len(gligen_images )` is not equal to `len(gligen_phrases)`. And when I dig into the source code, it seems that these two features are zipped together in a for loop during the preprocessing. I gues... | https://github.com/huggingface/diffusers/issues/10172 | closed | [] | 2024-12-10T14:25:48Z | 2024-12-11T08:59:44Z | 1 | abcdefg133hi |
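The requested guard is a one-liner; a sketch of the kind of validation the issue is proposing:
```python
def check_gligen_inputs(gligen_phrases, gligen_images):
    # Fail loudly instead of silently zip-truncating the longer list.
    if len(gligen_phrases) != len(gligen_images):
        raise ValueError(
            f"`gligen_phrases` has {len(gligen_phrases)} entries but "
            f"`gligen_images` has {len(gligen_images)}; they must match, "
            "since each phrase is grounded by the image at the same index."
        )
```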
huggingface/lerobot | 568 | Do I need two SO 100 arms to get started? | I have printed and assembled one arm, the follower version. Do I need two arms to record datasets and do testing? | https://github.com/huggingface/lerobot/issues/568 | closed | [
"question",
"robots"
] | 2024-12-10T13:31:50Z | 2025-10-08T08:45:58Z | null | rabhishek100 |
pytorch/ao | 1,397 | "Where is the overloaded function for torch.nn.functional.linear(aqt, original_weight_tensor, bias)? " | Here is an example
int8_dynamic_activation_int8_weight
aqt:
AffineQuantizedTensor(tensor_impl=PlainAQTTensorImpl(data=tensor([[ 5, -2, 24, ..., 17, 73, 54],
[ -30, -19, -53, ..., -9, -33, 55],
[ -7, -20, -28, ..., 47, 71, -15],
...,
[ 36, 8... | https://github.com/pytorch/ao/issues/1397 | open | [] | 2024-12-10T10:05:42Z | 2024-12-11T06:41:30Z | null | Lenan22 |
pytorch/torchtitan | 724 | Issue: Loss Discrepancy Between FSDP1 and FSDP2 with AdamW Optimizer | We observed a loss discrepancy between FSDP1 and FSDP2 while training with the AdamW optimizer. Are you aware of any known issues with the AdamW optimizer and FSDP2 that might contribute to this behavior? | https://github.com/pytorch/torchtitan/issues/724 | closed | [
"question"
] | 2024-12-09T19:45:45Z | 2025-08-21T02:57:39Z | null | Teng-xu |
pytorch/torchtitan | 723 | Context parallelism understanding | Hi,
We have recently been testing the CP parallelism strategy for a 2D configuration: FSDP+CP.
From what we know, CP slices the sequence length; since the attention kernel needs to compute attention over the whole sequence, each GPU needs to gather all the sharded KV cache using some collective communicati... | https://github.com/pytorch/torchtitan/issues/723 | open | [
"question",
"module: context parallel"
] | 2024-12-09T03:07:27Z | 2024-12-20T21:45:48Z | null | jinsong-mao |
huggingface/transformers | 35,152 | How to load the weight of decoder.embed_tokens.weight separately from the shared weight? | ### System Info
- `transformers` version: 4.46.3
- Platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.17
- Python version: 3.8.20
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorfl... | https://github.com/huggingface/transformers/issues/35152 | closed | [
"bug"
] | 2024-12-08T15:46:55Z | 2025-01-22T08:03:52Z | null | SoSongzhi |
pytorch/ao | 1,390 | AO and Automated Mixed Precision | Can we clarify in the readme what are the best practices to use ao at inference with a pytorch AMP trainer model/checkpoint? | https://github.com/pytorch/ao/issues/1390 | open | [
"topic: documentation",
"question"
] | 2024-12-08T13:52:15Z | 2025-03-17T20:46:24Z | null | bhack |
huggingface/datasets | 7,311 | How to get the original dataset name with username? | ### Feature request
The issue is related to Ray Data (https://github.com/ray-project/ray/issues/49008), which requires checking whether the dataset is the original one right after `load_dataset`, when the parquet files are already available on the HF Hub.
The solution used now is to get the dataset name, config and split, then `... | https://github.com/huggingface/datasets/issues/7311 | open | [
"enhancement"
] | 2024-12-08T07:18:14Z | 2025-01-09T10:48:02Z | null | npuichigo |
huggingface/lerobot | 555 | To bulid my own policy, but have errors TypeError: '>' not supported between instances of 'int' and 'dict' | I improved the act policy in lerobot framework and created a new policy named myact. I mainly did the following:
Create the my_act folder in the lerobot/common/policies/ path
Create 'configuration_my_act.py' and 'modeling_my_act.py' in the + my_act folder
Create lerobot/configs/policy/myact yaml, which is modified t... | https://github.com/huggingface/lerobot/issues/555 | closed | [
"enhancement",
"question"
] | 2024-12-07T09:10:35Z | 2025-04-07T16:08:38Z | null | zhouzhq2021 |
huggingface/diffusers | 10,144 | Why is the Mochi diffusers video output worse than the Mochi official code? | ### Describe the bug
The quality of video is worse.
### Reproduction
Run the code with official prompt
### Logs
_No response_
### System Info
diffusers@main
### Who can help?
@a-r-r-o-w @yiyixuxu | https://github.com/huggingface/diffusers/issues/10144 | closed | [
"bug",
"stale"
] | 2024-12-07T05:53:57Z | 2025-01-07T15:38:38Z | 10 | foreverpiano |
huggingface/peft | 2,264 | Guidance Needed on Two-Stage Fine-Tuning with LoRA(SFT and DPO) for Model Adaptation | # I am planning to perform a two-stage fine-tuning process and need some guidance on how to proceed.
## First Stage
1. Load Base Model: I start by loading the base model, qwen1.5 32B.
2. Apply LoRA Fine-Tuning: I then apply LoRA fine-tuning to this base model and obtain a new model state.
3. Save Adapter Mode... | https://github.com/huggingface/peft/issues/2264 | closed | [] | 2024-12-06T13:35:20Z | 2025-01-06T10:50:09Z | 5 | none0663 |
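A common way to hand off between the two stages is to merge the SFT adapter into the base weights before starting DPO; a sketch, with the paths and model id as placeholders:
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Stage 1 -> Stage 2 handoff: bake the SFT adapter into the base weights,
# then start DPO from the merged model with a fresh adapter.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-32B")   # placeholder id
model = PeftModel.from_pretrained(base, "path/to/sft-adapter")    # placeholder path
model = model.merge_and_unload()
model.save_pretrained("merged-sft-model")
```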
huggingface/transformers | 35,118 | How to load local transformers? | transformers==4.47.0.dev0
I want to use my local transformers. I tried setting `sys.path.insert(0, 'xxx/transformers/src')` and `PYTHONPATH=xxx/transformers/src`, but they don't work.
Please tell me why. | https://github.com/huggingface/transformers/issues/35118 | closed | [] | 2024-12-06T10:07:57Z | 2024-12-12T04:05:08Z | null | yiyexy |
pytorch/xla | 8,466 | Useful Q8 Kernels For TPUs/XLA Support | ## ❓ Questions and Help
I'm looking at this repo here [KONAKONA666/q8_kernels](https://github.com/KONAKONA666/q8_kernels).
The Q8 functions being used are [located here](https://github.com/KONAKONA666/q8_kernels/tree/main/q8_kernels/functional), the [cuda kernels here](https://github.com/KONAKONA666/q8_kernels/t...
"question",
"fp8"
] | 2024-12-06T07:01:59Z | 2025-02-13T15:17:36Z | null | radna0 |
huggingface/lerobot | 552 | Rounding to int32 makes robot less precise. Do we have a solid reason for doing this? | ### System Info
```Shell
Latest LeRobot. MacOS
```
### Information
- [X] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
1) Run teleoperation
2) Measure preciseness with rounding and without.
at lerobot/common/robot_devices/robots/manipula... | https://github.com/huggingface/lerobot/issues/552 | closed | [
"bug",
"question",
"stale"
] | 2024-12-05T16:31:49Z | 2025-10-08T13:08:50Z | null | 1g0rrr |
huggingface/tokenizers | 1,696 | How to determine the splicing logic in post_processor based on the sentence to be tokenized? | For example,
```python
def post_processor(self, token_ids_0, token_ids_1=None):
if "cls" in token_ids_0:
return processors.TemplateProcessing(
single=f"{cls} $A {sep}",
pair=f"{cls} $A {sep} $B {cls}",
special_tokens=[
... | https://github.com/huggingface/tokenizers/issues/1696 | open | [] | 2024-12-05T14:05:13Z | 2024-12-05T14:05:13Z | null | gongel |
huggingface/peft | 2,262 | Could you provide example code for AdaLoRA finetuning decoder-only model? | ### Feature request
The current [example of AdaLoRA](https://github.com/huggingface/peft/blob/b2922565c4c4445706a87cf7b988c828b451fe61/examples/conditional_generation/peft_adalora_seq2seq.py) is on **facebook/bart-base**. Since AdaLoRA requires hand-crafted calculations on loss, would it be possible to provide me som... | https://github.com/huggingface/peft/issues/2262 | closed | [] | 2024-12-05T12:03:31Z | 2025-01-18T15:03:29Z | 4 | SpeeeedLee |
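A decoder-only sketch of the two pieces that differ from plain LoRA — the `AdaLoraConfig` and the per-step `update_and_allocate` call — with module names and step counts as assumptions:
```python
from peft import AdaLoraConfig, get_peft_model

config = AdaLoraConfig(
    init_r=12,
    target_r=8,
    lora_alpha=32,
    total_step=1000,                      # total optimizer steps, assumed
    target_modules=["q_proj", "v_proj"],  # typical decoder-only attention names
)
model = get_peft_model(base_model, config)  # base_model: your causal LM

# In the training loop, after loss.backward() and optimizer.step(), AdaLoRA
# reallocates its rank budget across modules:
#     model.base_model.update_and_allocate(global_step)
```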
pytorch/xla | 8,454 | How to auto-convert back to bfloat16 after conv1 and conv2 | ## ❓ Questions and Help
I have a tensor with dtype torch.bfloat16 on a Kaggle v3-8; after the conv1 and conv2 operations the return type is torch.float32. Is there any way (environment variable or otherwise) to convert the return type back to torch.bfloat16? | https://github.com/pytorch/xla/issues/8454 | open | [
"question"
] | 2024-12-05T09:59:35Z | 2025-02-13T14:35:46Z | null | ghost |
huggingface/diffusers | 10,129 | Does StableDiffusion3 have an image2image pipeline with ControlNet? | I want to use `ControlNet` with `StableDiffusion3`, providing a prompt, an original image, and a control image as inputs. However, I found that the `StableDiffusion3ControlNetPipeline` only supports prompts and control images as inputs. The `StableDiffusionControlNetImg2ImgPipeline` allows for providing a prompt, an or... | https://github.com/huggingface/diffusers/issues/10129 | closed | [
"New pipeline/model",
"contributions-welcome"
] | 2024-12-05T09:40:03Z | 2025-01-02T20:02:33Z | 1 | ZHJ19970917 |
huggingface/diffusers | 10,128 | Is there any plan to support fastercache? | Expect to support fastercache, https://github.com/Vchitect/FasterCache | https://github.com/huggingface/diffusers/issues/10128 | closed | [
"wip",
"performance"
] | 2024-12-05T09:11:19Z | 2025-03-21T04:05:06Z | 4 | songh11 |
huggingface/datasets | 7,306 | Creating new dataset from list loses information. (Audio Information Lost - either Datatype or Values). | ### Describe the bug
When creating a dataset from a list of datapoints, information about the individual items is lost.
Specifically, when creating a dataset from a list of datapoints (from another dataset), either the datatype is lost or the values are lost. See the examples below.
-> What is the best way to create... | https://github.com/huggingface/datasets/issues/7306 | open | [] | 2024-12-05T09:07:53Z | 2024-12-05T09:09:38Z | 0 | ai-nikolai |
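Passing the original schema explicitly avoids re-inference from raw Python values; a sketch, where `datapoints` and the feature names are assumed from the issue:
```python
from datasets import Audio, Dataset, Features, Value

features = Features({"audio": Audio(sampling_rate=16_000), "text": Value("string")})
# Or reuse the source schema directly: features=old_dataset.features
new_ds = Dataset.from_list(datapoints, features=features)
```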
huggingface/lerobot | 549 | Low accuracy for act policy on pushT env | The highest success rate is 44%, as n_decoder_layers=7. Are there any other tricks for this? | https://github.com/huggingface/lerobot/issues/549 | closed | [
"question",
"policies",
"stale"
] | 2024-12-05T06:18:06Z | 2025-10-19T02:32:37Z | null | KongCDY |
huggingface/Google-Cloud-Containers | 128 | Can we use Multi-LORA CPU | Hi,
I'm currently following this doc: https://huggingface.co/docs/google-cloud/en/examples/gke-tgi-multi-lora-deployment
After getting a bug, "Can’t scale up due to exceeded quota", and doing some research, I suspect that my free trial ($300) account is not able to increase the GPU quota (even though I have activated my account to n... | https://github.com/huggingface/Google-Cloud-Containers/issues/128 | open | [
"question"
] | 2024-12-05T05:42:51Z | 2024-12-12T10:06:43Z | null | AndrewNgo-ini |
huggingface/peft | 2,260 | Is it possible to support the transformer engine when using Lora in Megatron? | ### Feature request
I am currently using the Megatron framework and want to use Lora for training. I saw that the Megatron format is supported at https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/tp_layer.py RowParallelLinear and ColumnParallelLinear do the adaptation. But if I use the transformer eng... | https://github.com/huggingface/peft/issues/2260 | closed | [] | 2024-12-05T03:24:15Z | 2025-01-12T15:03:29Z | 3 | liulong11 |
huggingface/diffusers | 10,120 | memory consumption of dreambooth+SD3 | Hi, I am running dreambooth SD3 with a single A100 GPU, I reduced resolution to 256; but it still need more memory than a single A100 has? I am wondering is this huge memory consumption normal?
```
!python train_dreambooth_sd3.py \
--pretrained_model_name_or_path="stabilityai/stable-diffusion-3-medium-diffusers"... | https://github.com/huggingface/diffusers/issues/10120 | closed | [
"bug",
"stale",
"training"
] | 2024-12-04T19:39:04Z | 2025-01-27T01:30:18Z | 5 | KolvacS-W |
pytorch/xla | 8,451 | Is it possible to execute jax code in torch_xla? | ## Is it possible to execute jax code in torch_xla?
After reading the docs, I realized that customized kernels via JAX Pallas can be adopted as kernels. I wonder if it is possible to execute JAX code in torch_xla. It seems torch_xla._XLAC._xla_tpu_custom_call only accepts custom kernels. Is there a way to execute jax ... | https://github.com/pytorch/xla/issues/8451 | closed | [] | 2024-12-04T17:54:55Z | 2024-12-08T12:24:51Z | 2 | lime-j |
huggingface/diffusers | 10,112 | Detail-Daemon diffusers | **Describe the solution you'd like.**
Detail-Daemon: https://github.com/Jonseed/ComfyUI-Detail-Daemon
How to implement Detail-Daemon in diffusers, as seen in https://github.com/Jonseed/ComfyUI-Detail-Daemon. Will there be a better official component in the future? | https://github.com/huggingface/diffusers/issues/10112 | open | [
"wip",
"consider-for-modular-diffusers"
] | 2024-12-04T09:14:39Z | 2025-01-03T18:01:24Z | 10 | NicholasCao |
pytorch/gloo | 399 | How to specify ai_family explicitly | we note that gloo supports ipv4 and ipv6 by setting ai_family = AF_UNSPEC and deciding a real one at runtime. However, in our cluster, we got an exception about ai_family mismatching. Our cluster contains both ipv4 and ipv6 network stacks. How can we specify ai_family explicitly?
We run pyroch, and get below exception... | https://github.com/pytorch/gloo/issues/399 | open | [] | 2024-12-04T08:30:50Z | 2025-02-10T09:06:52Z | null | NEWPLAN |
huggingface/lerobot | 547 | How to make a custom LeRobotDataset with v2? | Hi folks, thanks for the amazing open source work!
I am trying to make a custom dataset to use with the LeRobotDataset format.
The readme says to copy the example scripts here which I've done, and I have a working format script of my own.
https://github.com/huggingface/lerobot/blob/8e7d6970eaf5a64b8af6ec45586d... | https://github.com/huggingface/lerobot/issues/547 | closed | [
"question",
"dataset",
"stale"
] | 2024-12-04T08:00:19Z | 2025-10-08T08:28:34Z | null | alik-git |
huggingface/lerobot | 545 | Poor success rate in complex scenarios | Hi I used Moss robot to play with and train ACT policy, when it comes to one lego piece, it can finish grabbing task at high success rate after recording 50+ episodes with different pose & location variants, but generalization on multi-piece random location is not promising.
When I started to add complexity (for exa... | https://github.com/huggingface/lerobot/issues/545 | closed | [
"question",
"policies",
"stale"
] | 2024-12-04T06:20:31Z | 2025-10-08T08:28:45Z | null | mydhui |
huggingface/frp | 14 | where is the code of frpc-gradio-0.3 | https://github.com/huggingface/frp/issues/14 | closed | [] | 2024-12-04T05:37:34Z | 2025-03-11T00:55:39Z | null | BoyuanJiang | |
pytorch/tutorials | 3,174 | 💡 [REQUEST] - Tutorial for exporting popular class of models, showing the unique challenges faced and how to address them | ### 🚀 Describe the improvement or the new tutorial
The gaming community cares about certain classes of models like pose estimation, instance segmentation, video classification. When we try to export OSS implementations of these models, we run into unique challenges with `torch.export`
Currently, we have tutorial... | https://github.com/pytorch/tutorials/issues/3174 | closed | [
"module: export"
] | 2024-12-03T20:35:42Z | 2025-01-21T18:22:54Z | null | agunapal |
pytorch/xla | 8,430 | Request for Wheel with Older GLIBC | ## ❓ Questions and Help
Hi, I have installed torch-xla from https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/12.1/torch_xla-2.5.0-cp311-cp311-manylinux_2_28_x86_64.whl. "manylinux_2_28" indicates that it is compiled with GLIBC 2.28. However, when I installed and tried to import torch_xla, it said GLIBC ... | https://github.com/pytorch/xla/issues/8430 | open | [
"question",
"build"
] | 2024-12-03T17:50:24Z | 2025-02-13T14:47:34Z | null | ASU-ScopeX-Lab |