| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/peft | 2,255 | Is this the right way to check whether a model has been trained as expected? | I'd like to check whether my PEFT model has been trained as intended, i.e. whether the PEFT weights have changed, but not the base weights. The following code works, but I'm sure a PEFT specialist will suggest a better way.
```python
import tempfile
import torch
from datasets import load_dataset
from peft impo... | https://github.com/huggingface/peft/issues/2255 | closed | [] | 2024-12-03T17:36:00Z | 2024-12-04T12:01:37Z | 5 | qgallouedec |
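A library-agnostic sketch of the check this issue asks about: snapshot the weights before and after training, then diff the two snapshots. The parameter names below are hypothetical; with a real PEFT model you would copy `model.state_dict()` at both points.

```python
def changed_params(before, after, atol=1e-8):
    """Return the names of parameters whose values differ between two
    state-dict-like mappings (name -> list of floats)."""
    return {
        name
        for name, old in before.items()
        if any(abs(a - b) > atol for a, b in zip(old, after[name]))
    }

# Hypothetical weight snapshots taken before and after PEFT training:
before = {"base.weight": [0.10, 0.20], "lora_A.weight": [0.00, 0.00]}
after = {"base.weight": [0.10, 0.20], "lora_A.weight": [0.30, -0.10]}

# Only the adapter weights should have moved; the base model stays frozen.
assert changed_params(before, after) == {"lora_A.weight"}
```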
huggingface/peft | 2,251 | a guide to add a new fine-tuning method in the doc | ### Feature request
Hello, I am a researcher in the fine-tuning area. Could you publish a guide in the docs on adding a new fine-tuning method? I think researchers like me would be glad to experiment with their methods based on this repo.
### Motivation
Researchers like me would be glad to experiment with their methods based on this repo, but d... | https://github.com/huggingface/peft/issues/2251 | closed | [] | 2024-12-03T13:46:02Z | 2024-12-04T02:12:35Z | 2 | YF-T |
pytorch/vision | 8,777 | Documentation for the expected input dimension of the model class | ### 📚 The doc issue
The built-in models are really convenient. However, the documentation usually does not specify the expected input dimensions, so I always find it troublesome to confirm the correct input dimensions for the model class I want to use.
For example:
https://pytorch.org/vision/main/models... | https://github.com/pytorch/vision/issues/8777 | closed | [] | 2024-12-02T17:55:40Z | 2024-12-03T10:30:23Z | 2 | hzhz2020 |
huggingface/diffusers | 10,076 | Do we have any script to convert from HF format to the original format? | **Is your feature request related to a problem? Please describe.**
scripts/convert_cogvideox_to_diffusers.py
in this script, we can convert cogvideox -> diffusers. Do we have the opposite script?
cc @yiyixuxu
| https://github.com/huggingface/diffusers/issues/10076 | open | [
"good first issue",
"contributions-welcome",
"conversion script"
] | 2024-12-02T07:49:34Z | 2024-12-02T18:22:50Z | 1 | foreverpiano |
huggingface/trl | 2,424 | How to calculate the loss of multi-turn dialogue training data? | In a single data entry containing multiple turns of dialogue, abbreviated as Q1 + A1 + Q2 + A2, does this project calculate the loss only for the last answer of the multi-turn dialogue, or for each answer? | https://github.com/huggingface/trl/issues/2424 | closed | [
"❓ question",
"🏋 SFT"
] | 2024-12-02T07:47:17Z | 2025-01-20T02:47:34Z | null | NUMB1234 |
huggingface/diffusers | 10,074 | how to install diffusers 0.32.0 | FluxFillPipeline requires diffusers >= 0.32.0, but I don't know how to install it. Can anyone help me? Thanks in advance | https://github.com/huggingface/diffusers/issues/10074 | closed | [] | 2024-12-02T07:05:24Z | 2024-12-02T19:11:34Z | null | babyta |
huggingface/diffusers | 10,070 | Xformers info, memory efficient attention unavailable | ### Describe the bug
I just started learning Stable Diffusion on Win11. After I installed xformers, I found several memory_efficient_attention entries are unavailable. Is it possible to make them available? Thanks for any help.
### Reproduction
xFormers 0.0.28.post3
memory_efficient_attention.ckF: ... | https://github.com/huggingface/diffusers/issues/10070 | open | [
"bug",
"stale"
] | 2024-12-01T16:14:21Z | 2025-01-01T15:03:09Z | 1 | Stareshine |
huggingface/Google-Cloud-Containers | 126 | Deployment error on GKE | Hello!
I deployed Gemma 2 2b it on GKE with autopilot mode following these instructions https://cloud.google.com/kubernetes-engine/docs/tutorials/serve-gemma-gpu-tgi#autopilot. There's this error: Node scale up in zones us-central1-c associated with this pod failed: GCE quota exceeded. Pod is at risk of not being sched... | https://github.com/huggingface/Google-Cloud-Containers/issues/126 | closed | [
"question"
] | 2024-12-01T14:09:29Z | 2025-01-07T08:39:07Z | null | piksida |
huggingface/lerobot | 538 | questions about loading a dataset locally, making my own policy, and using headless eval mode | Hello, I'm trying to download a dataset from Hugging Face locally and then load this dataset from local files. For example, 'aloha_sim_insertion_scripted_image'; its format is many 'episode_000000.parquet' files. How can I load this format with the LeRobotDataset() func or in other ways?
Second, I want to create my ow... | https://github.com/huggingface/lerobot/issues/538 | closed | [
"question",
"stale"
] | 2024-12-01T03:32:06Z | 2025-10-19T02:32:41Z | null | zhouzhq2021 |
huggingface/lerobot | 536 | How auto calibration works | Are there any details about run_arm_auto_calibration_moss and run_arm_auto_calibration_so100 we can refer to? I read the code but couldn't fully understand it.
When should we use auto calibration instead of the manual calibration that calculates the homing_offset of the rotated (90d) pose?
What to check whether my underst... | https://github.com/huggingface/lerobot/issues/536 | closed | [
"question",
"robots",
"stale"
] | 2024-11-30T18:04:23Z | 2025-10-08T08:37:24Z | null | wzds2015 |
pytorch/torchtitan | 709 | First Shard Group Save and Load Checkpoint for HSDP | Based on my understanding, current strategy:
1. All ranks currently read and load the checkpoint.
2. All ranks also save and write the checkpoint.
I have a question regarding the HSDP case:
If different shard groups write data to storage, could this lead to data corruption?
Ideally, should only the first shard... | https://github.com/pytorch/torchtitan/issues/709 | closed | [
"question"
] | 2024-11-29T22:20:42Z | 2025-01-08T07:52:58Z | null | qsh-zh |
huggingface/accelerate | 3,269 | 🤨Question: What if model has float16 dtype and `mixed_precision` is set to fp16 as well? | As the title:
**🤨Question: What if model has float16 dtype and `mixed_precision` is set to fp16 as well?**
- Will it compute in the original float16, as if Auto-Mixed-Precision never existed,
- or will some modules that easily overflow (e.g. BatchNorm, LayerNorm) be upcast to float32, as AMP's fp32->fp16 path does?... | https://github.com/huggingface/accelerate/issues/3269 | closed | [] | 2024-11-29T17:55:58Z | 2025-01-07T15:33:26Z | null | townwish4git |
huggingface/chat-macOS | 36 | Document how to download and install a local model | 1st, thanks very much for this work!
I'm a bit of a newbie here.
The 'Get' button takes you to a web page for the example; however, chat-macOS instructions are not part of the options. Also, where do you place the downloaded model for the "add +" option, and where do the models go? Is there a way to configure where model... | https://github.com/huggingface/chat-macOS/issues/36 | open | [] | 2024-11-29T17:18:43Z | 2024-11-29T17:18:43Z | null | deepcoder |
pytorch/rl | 2,618 | [Feature Request] Provide documentation on how to use CatFrames with a data collector and replay buffer for images | ## Motivation
Using CatFrames for inference is fairly straightforward and is already well documented.
That being said, using CatFrames to reconstruct a stack of frames when sampling from the replay buffer is not so straightforward, I find (subjective), and is not explicitly documented for images (objective).
Using fr... | https://github.com/pytorch/rl/issues/2618 | open | [
"enhancement"
] | 2024-11-29T16:57:42Z | 2024-11-29T16:58:06Z | null | AlexandreBrown |
pytorch/TensorRT | 3,307 | ❓ [Question] TensorRT Export Failure with Large Input Sizes | ## ❓ Question
<!-- Your question -->
I'm trying to export a torch model that processes large inputs (e.g., 8192x2048). I have noticed that `torch_tensorrt.compile` fails with inputs greater than 4096x2048 (I haven't tried them all, only powers of 2). Specifically, the conversion fails for convolution and ReLU ope... | https://github.com/pytorch/TensorRT/issues/3307 | open | [
"question"
] | 2024-11-29T16:01:14Z | 2024-12-04T15:53:40Z | null | AndreaBrg |
huggingface/diffusers | 10,055 | Training script for a Controlnet based on SD3 does not work | ### Describe the bug
Hi @sayakpaul and all others :)
The training script for a Control-net based on Stable Diffusion 3 seems not to work.
**RuntimeError: Given groups=1, weight of size [1536, 17, 2, 2], expected input[4, 16, 64, 64] to have 17 channels, but got 16 channels instead**
I tried to follow th... | https://github.com/huggingface/diffusers/issues/10055 | open | [
"bug",
"stale"
] | 2024-11-29T13:46:29Z | 2025-02-03T15:03:46Z | 17 | Putzzmunta |
huggingface/diffusers | 10,050 | Is there any img2img KDiffusion equivalent of StableDiffusionKDiffusionPipeline? | ### Model/Pipeline/Scheduler description
I'm working on result alignment between diffusers and A1111 webui.
In the txt2img scenario, I can achieve this via `StableDiffusionKDiffusionPipeline`; refer to https://github.com/huggingface/diffusers/issues/3253.
But in the img2img scenario, is there any equivalent KDiffusion pipeline?
I... | https://github.com/huggingface/diffusers/issues/10050 | open | [
"stale"
] | 2024-11-29T07:47:11Z | 2024-12-29T15:03:05Z | 2 | juju812 |
huggingface/diffusers | 10,043 | F5-TTS Integration | ### Model/Pipeline/Scheduler description
F5-TTS is a fully non-autoregressive text-to-speech system based on flow matching with Diffusion Transformer (DiT).
It has excellent voice cloning capabilities, and audio generation is of quite high quality.
### Open source status
- [X] The model implementation is available.... | https://github.com/huggingface/diffusers/issues/10043 | open | [
"help wanted",
"contributions-welcome"
] | 2024-11-28T11:14:18Z | 2025-11-02T18:46:02Z | 11 | nityanandmathur |
pytorch/pytorch | 141,746 | How to specify the port for processes with rank > 1 in the Gloo communication backend? | In Pytorch, when performing distributed training using gloo as the communication backend, you only need to specify master_addr and master_port; other processes will actively connect and use random ports for initialization. May I ask if it is possible for other processes to perform initialization by specifying the port?... | https://github.com/pytorch/pytorch/issues/141746 | open | [
"oncall: distributed",
"triaged"
] | 2024-11-28T02:07:49Z | 2024-12-19T03:52:52Z | null | tecaccc |
huggingface/lerobot | 533 | How to merge multiple recorded datasets? | Hi, Thank you so much for the automatic resume during data recording,sometimes ubstable camera issues or other situations (e.g. do not have enough time to finish recording) might cause process stopping.
I was wondering is there anyway to merge multiple recorded datasets? for instance I have two datasets 'cube grabbi... | https://github.com/huggingface/lerobot/issues/533 | closed | [
"question",
"dataset"
] | 2024-11-28T01:53:28Z | 2025-10-08T08:33:31Z | null | mydhui |
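LeRobot's on-disk format also involves parquet files and metadata, so this is not a full merge tool; it only sketches the core bookkeeping step of renumbering `episode_index` so episodes from several recordings stay contiguous (the record shapes below are hypothetical):

```python
def merge_episode_sets(*datasets):
    """Concatenate per-episode records from several recordings,
    shifting 'episode_index' so indices stay contiguous."""
    merged, offset = [], 0
    for episodes in datasets:
        for ep in episodes:
            merged.append({**ep, "episode_index": ep["episode_index"] + offset})
        offset += len(episodes)
    return merged


# Two hypothetical recordings, each starting its indices at 0:
grab = [{"episode_index": 0, "task": "grab"}, {"episode_index": 1, "task": "grab"}]
place = [{"episode_index": 0, "task": "place"}]

merged = merge_episode_sets(grab, place)
assert [e["episode_index"] for e in merged] == [0, 1, 2]
```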
huggingface/transformers | 34,981 | How to Log Training Loss at Step Zero in Hugging Face Trainer or SFT Trainer? | ### Feature request
log train loss on start
----
I'm using the Hugging Face `Trainer` (or `SFTTrainer`) for fine-tuning, and I want to log the training loss at step 0 (before any training steps are executed). I know there's an `eval_on_start` option for evaluation, but I couldn't find a direct equivalent for trai... | https://github.com/huggingface/transformers/issues/34981 | open | [
"Feature request"
] | 2024-11-28T00:24:43Z | 2024-11-29T07:35:28Z | null | brando90 |
huggingface/transformers.js | 1,055 | Support for Typescript docs | ### Question
I have been trying to implement server-side sentiment analysis using this [tutorial](https://huggingface.co/docs/transformers.js/main/en/tutorials/next#prerequisites), but it's in JavaScript. I looked through the docs, but there seems to be no information on implementing it using TypeScript. So far I have in... | https://github.com/huggingface/transformers.js/issues/1055 | open | [
"question"
] | 2024-11-26T21:38:54Z | 2024-11-27T02:20:59Z | null | SadmanYasar |
huggingface/datasets | 7,299 | Efficient Image Augmentation in Hugging Face Datasets | ### Describe the bug
I'm using the Hugging Face datasets library to load images in batches and would like to apply a torchvision transform to handle the inconsistent image sizes in the dataset and apply some on-the-fly image augmentation. The only approach I can think of is using the collate_fn, but that seems quite inefficient.
... | https://github.com/huggingface/datasets/issues/7299 | open | [] | 2024-11-26T16:50:32Z | 2024-11-26T16:53:53Z | 0 | fabiozappo |
huggingface/lerobot | 527 | Is there a `select_actions` abstraction? | This line references a `select_actions` function which doesn't seem to exist. This functionality (abstract away access to the future action queue, instead of just returning the first action) would be useful - did it use to / will it exist?
https://github.com/huggingface/lerobot/blob/96c7052777aca85d4e55dfba8f81586103b... | https://github.com/huggingface/lerobot/issues/527 | closed | [
"question",
"policies",
"stale"
] | 2024-11-26T14:22:31Z | 2025-10-08T08:33:51Z | null | genemerewether |
huggingface/diffusers | 10,025 | attention mask for transformer Flux | ### Describe the bug
Is it possible to get back the `attention_mask` argument in the flux attention processor?
```python
hidden_states = F.scaled_dot_product_attention(query, key, value, dropout_p=0.0, is_causal=False, attn_mask=attention_mask)
```
https://github.com/huggingface/diffusers/blob/main/src/diffusers/mo... | https://github.com/huggingface/diffusers/issues/10025 | closed | [
"bug"
] | 2024-11-26T08:51:20Z | 2024-12-05T00:22:37Z | 19 | christopher5106 |
huggingface/accelerate | 3,263 | How to load checkpoint shards one by one to avoid OOM error? | ### System Info
```Shell
- `Accelerate` version: 1.1.0
- Platform: Linux-5.10.112-005.ali5000.al8.x86_64-x86_64-with-glibc2.17
- `accelerate` bash location: /home/admin/anaconda3/envs/llama_factory/bin/accelerate
- Python version: 3.10.14
- Numpy version: 1.26.4
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- ... | https://github.com/huggingface/accelerate/issues/3263 | closed | [] | 2024-11-26T08:25:37Z | 2025-01-06T15:06:50Z | null | amoyplane |
pytorch/torchtitan | 700 | Is `autocast` needed with FSDP2? | Hi, is it necessary to wrap the forward pass in `autocast` when using FSDP2? I noticed that the `torchtitan` training loop does not.
If I wrap in `torch.autocast(device_type="cuda", dtype=torch.bfloat16)` my matmuls will be `bfloat16`, but my softmaxes (say) will be in `float32`. This behavior requires the autocast ... | https://github.com/pytorch/torchtitan/issues/700 | closed | [
"question"
] | 2024-11-25T22:32:13Z | 2024-12-05T15:51:06Z | null | garrett361 |
pytorch/vision | 8,749 | Pretrained weights for ResNet[18, 34, 50, 101] are incorrect | ### 🐛 Describe the bug
Hi,
I have been trying to run the pretrained ResNet models. The model weights seem to be incorrect. Below is code to reproduce the erroneous results:
```python
import torch
from torchvision.models import resnet18, ResNet18_Weights
from PIL import Image
resnet = resnet18(weights=ResNet18... | https://github.com/pytorch/vision/issues/8749 | closed | [] | 2024-11-25T22:17:58Z | 2024-11-27T18:24:38Z | 3 | longyuxi |
huggingface/lerobot | 525 | Train a RL agent (without initial dataset) | Hi,
I'm currently working on trying to integrate the following environment in the repo : https://github.com/perezjln/gym-lowcostrobot
I would like to use it for learning a RL agent in sim and try it out on the real robot after.
However, the current training script requires a local or online pre-recorded da... | https://github.com/huggingface/lerobot/issues/525 | closed | [
"enhancement",
"question",
"simulation"
] | 2024-11-25T20:02:38Z | 2025-04-07T16:19:01Z | null | alexcbb |
huggingface/chat-ui | 1,592 | Add Markdown support for user messages | ## Describe your feature request
In PR #1562, a WYSIWYG editor was added to the text input area; however, when a text is sent, it is displayed in unrendered markdown. The idea is to use `marked` to conditionally render certain elements in the user's sent message into markdown, and leave others untouched.
The... | https://github.com/huggingface/chat-ui/issues/1592 | open | [
"enhancement"
] | 2024-11-25T17:26:10Z | 2024-11-27T20:42:19Z | 2 | Mounayer |
huggingface/accelerate | 3,260 | How to Properly Resume Multi-GPU Training with accelerate launch Without OOM or Loss Issues? | I encountered an issue while running multi-GPU training using `accelerate launch`. I am using 4 GPUs for training, and during the process, I save my model state using:
```python
accelerator.save_state(state_path)
```
Later, I attempt to resume training by loading the model parameters with:
```python
acceler... | https://github.com/huggingface/accelerate/issues/3260 | closed | [] | 2024-11-25T17:19:06Z | 2025-05-29T10:26:13Z | null | tqxg2018 |
pytorch/xla | 8,413 | Review documentation in the docs/source/contribute directory | ## 📚 Documentation
Review content in the docs/source/learn directory to improve readability and ensure it aligns with Google documentation standards.
| https://github.com/pytorch/xla/issues/8413 | closed | [
"documentation"
] | 2024-11-25T17:13:51Z | 2025-06-02T21:59:49Z | 2 | mikegre-google |
huggingface/chat-ui | 1,589 | Models using OpenAI endpoint have caching enabled | When using models that are currently using the OpenAI endpoint type on HuggingChat (Nemotron, llama 3.2, qwen coder) they seem to have caching enabled.
This means retrying will just reload the previous response extremely quickly. This is not the intended behaviour and does not match what is happening when using the T... | https://github.com/huggingface/chat-ui/issues/1589 | closed | [
"huggingchat"
] | 2024-11-25T12:47:01Z | 2025-03-12T12:56:00Z | 1 | nsarrazin |
pytorch/pytorch | 141,473 | How to use torch.compile + HF model? | ### 🐛 Describe the bug
Problem: There seem to be 2 ways of using torch.compile with a HF model, neither of which works for all the ways model inference is called, which is one of 3 possible methods: `generate()`, `forward()`, and `__call__()`.
## Option 1: `model = torch.compile(model)`
This works if we us... | https://github.com/pytorch/pytorch/issues/141473 | open | [
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2024-11-25T05:19:31Z | 2024-11-26T04:21:05Z | null | SilverSoldier |
huggingface/diffusers | 10,004 | how to use kohya sd-scripts flux loras with text encoder keys in diffusers? | resulting lora weights from setting train text encoder to true is incompatible with diffusers load_lora_weights. the script networks/convert_flux_lora.py does not convert the text encoder keys either. | https://github.com/huggingface/diffusers/issues/10004 | open | [
"contributions-welcome"
] | 2024-11-23T20:54:30Z | 2025-03-16T15:39:25Z | null | neuron-party |
pytorch/pytorch | 141,422 | What is "recompilation profiler" in doc? (Seems to have a dangling link) | ### 📚 The doc issue
https://pytorch.org/docs/stable/torch.compiler_faq.html says:

But clicking on it leads nowhere. I would appreciate it if I could know how to debug this excessive recompilation issue.
### Sugg... | https://github.com/pytorch/pytorch/issues/141422 | open | [
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2024-11-23T06:01:44Z | 2024-11-26T23:22:21Z | null | fzyzcjy |
pytorch/torchtitan | 696 | [question] Need clarification on the purpose and performance benefits of GarbageCollection class | For the [impl](https://github.com/pytorch/torchtitan/blob/5525d7723175a1b4477bde3034a96f803b6c3fae/torchtitan/utils.py#L104)
I have several questions about the motivation and use cases for this class:
Could you provide examples of scenarios where this class can improve performance, compared against the default Python... | https://github.com/pytorch/torchtitan/issues/696 | closed | [
"documentation",
"question"
] | 2024-11-23T04:39:20Z | 2024-11-26T00:25:12Z | null | qsh-zh |
huggingface/transformers.js | 1,050 | How to lengthen the Whisper max audio length? | ### Question
I'm working from the [webgpu-whisper](https://github.com/huggingface/transformers.js/tree/main/examples/webgpu-whisper) demo, and I'm having a hard time lengthening the maximum audio input allowed. I made the following changes:
```js
-const MAX_AUDIO_LENGTH = 30; // seconds
+const MAX_AUDIO_LENGTH = 12... | https://github.com/huggingface/transformers.js/issues/1050 | closed | [
"question"
] | 2024-11-22T17:50:50Z | 2024-11-26T03:59:03Z | null | stinoga |
huggingface/diffusers | 9,996 | Flux.1 cannot load standard transformer in nf4 | ### Describe the bug
Loading different flux transformer models is fine, except for nf4.
It works for 1% of fine-tunes provided on Hugging Face, but it doesn't work for 99% of standard fine-tunes available on CivitAI.
example of such model: <https://civitai.com/models/118111?modelVersionId=1009051>
*note*: I'm using `... | https://github.com/huggingface/diffusers/issues/9996 | open | [
"bug",
"wip"
] | 2024-11-22T16:55:11Z | 2024-12-28T19:56:54Z | 16 | vladmandic |
huggingface/diffusers | 9,990 | How to diagnose problems in training custom inpaint model | ### Discussed in https://github.com/huggingface/diffusers/discussions/9989
<div type='discussions-op-text'>
<sup>Originally posted by **Marquess98** November 22, 2024</sup>
What I want to do is to perform image inpainting when the input is a set of multimodal images, using sdxl as the pre trained model. But the... | https://github.com/huggingface/diffusers/issues/9990 | closed | [] | 2024-11-22T03:16:50Z | 2024-11-23T13:37:53Z | null | Marquess98 |
pytorch/executorch | 7,030 | how to build a llama2 runner binary with the Vulkan backend on an Intel x86 server | ### 📚 The doc issue
https://pytorch.org/executorch/stable/native-delegates-executorch-vulkan-delegate.html
https://pytorch.org/executorch/stable/build-run-vulkan.html
Dear helper, the above documentation describes how to build the LLaMA runner binary on Android with the Vulkan backend; however, I can't find how to build the ... | https://github.com/pytorch/executorch/issues/7030 | closed | [
"module: vulkan",
"triaged"
] | 2024-11-22T03:16:40Z | 2025-12-18T21:39:49Z | null | l2002924700 |
pytorch/xla | 8,405 | Einsum is not added to the supported list for autocast | We noticed that einsum is not added to the supported ops list for low precision policy in autocast, is there a reason for that? Does this op have some issues in the support?
| https://github.com/pytorch/xla/issues/8405 | closed | [
"enhancement"
] | 2024-11-21T17:25:01Z | 2025-02-17T14:31:09Z | 3 | avizon-aws |
pytorch/torchtitan | 687 | Question about FSDP2 + FP8 all gather | Does FSDP2 work with both FP8 allgather and FP8 linear? | https://github.com/pytorch/torchtitan/issues/687 | closed | [
"question"
] | 2024-11-21T17:13:39Z | 2024-11-21T23:52:06Z | null | sbhavani |
huggingface/Google-Cloud-Containers | 123 | Querying PaliGemma VLMs | My collaborators and I are trying to use your very useful containers to deploy and use Google's PaliGemma models on GCS/Vertex. I was wondering what is the best way to query the model with images, especially if the images are stored locally? I see that there is an [example showing this for Llama Vision](https://github.... | https://github.com/huggingface/Google-Cloud-Containers/issues/123 | closed | [
"question"
] | 2024-11-21T14:52:41Z | 2024-12-04T16:31:01Z | null | kanishkamisra |
huggingface/diffusers | 9,983 | Using StableDiffusionControlNetImg2ImgPipeline with enable_vae_tiling(), the patch seems fixed at 512 x 512; where should I set the relevant parameters? | ```python
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "a beautiful landscape photograph"
pipe.enable_vae_tiling()
``` | https://github.com/huggingface/diffusers/issues/9983 | closed | [] | 2024-11-21T09:21:24Z | 2024-12-02T08:32:52Z | null | reaper19991110 |
huggingface/datatrove | 305 | How to read text files | Hey all is there any text reader in the repo?
I have text files where each line is a document/data sample.
Are there any readers which can read these kind of files directly? | https://github.com/huggingface/datatrove/issues/305 | open | [] | 2024-11-21T06:55:21Z | 2025-05-16T10:51:33Z | null | srinjoym-cerebras |
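A minimal sketch of a line-per-document reader, independent of datatrove's actual reader classes (the `{"id": ..., "text": ...}` record shape is an assumption):

```python
import os
import tempfile


def read_documents(path, text_key="text"):
    """Yield one document dict per non-empty line of a UTF-8 text file."""
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f):
            line = line.rstrip("\n")
            if line:
                yield {"id": i, text_key: line}


# Tiny demo: two documents, with a blank line that gets skipped.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("first doc\n\nsecond doc\n")
docs = list(read_documents(f.name))
os.unlink(f.name)
assert [d["text"] for d in docs] == ["first doc", "second doc"]
```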
huggingface/diffusers | 9,979 | flux img2img controlnet channels error | ### Describe the bug
When I use flux's img2img controlnet for inference, a channel error occurs.
### Reproduction
```python
import numpy as np
import torch
import cv2
from PIL import Image
from diffusers.utils import load_image
from diffusers import FluxControlNetImg2ImgPipeline, FluxControlNetPipeline
fr... | https://github.com/huggingface/diffusers/issues/9979 | closed | [
"bug",
"good first issue",
"help wanted",
"contributions-welcome"
] | 2024-11-21T03:39:12Z | 2025-04-23T20:43:51Z | 10 | wen020 |
huggingface/diffusers | 9,976 | ControlNet broken from_single_file | ### Describe the bug
The controlnet loader from_single_file was originally added via #4084,
and the method `ControlNet.from_single_file()` works for non-converted controlnets.
But for controlnets in safetensors format that contain an already-converted state_dict, it errors out.
It's not reasonable to expect the user to k... | https://github.com/huggingface/diffusers/issues/9976 | closed | [
"bug"
] | 2024-11-20T13:46:14Z | 2024-11-22T12:22:53Z | 7 | vladmandic |
pytorch/xla | 8,402 | Kaggle Notebook: model return loss None on TPU | ## ❓ Questions and Help
Hi, I received a loss of None when training the model. Can anyone help?
Simple reproduction Kaggle notebook [link](https://www.kaggle.com/code/liondude/notebook548442067d)
```
import os
import time
import pandas as pd
import numpy as np
from tqdm import tqdm
import datasets
import torch
... | https://github.com/pytorch/xla/issues/8402 | closed | [
"question"
] | 2024-11-20T09:50:51Z | 2025-02-17T14:32:56Z | null | hiwamk |
pytorch/pytorch | 141,118 | Dynamo: how to deal with multiple inheritance (nn.Module/MutableMapping)? | ### 🐛 Describe the bug
TensorDict is a MutableMapping object, and is treated as such by torch.compile:
```python
import torch
from tensordict import TensorDict
td = TensorDict(a=1, b=2, c=True)
@torch.compile(fullgraph=True)
def add1(td):
return TensorDict(**td)+1
add1(td)
```
We also have a `... | https://github.com/pytorch/pytorch/issues/141118 | closed | [
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-dicts",
"dynamo-nn-modules"
] | 2024-11-20T09:01:58Z | 2024-12-10T19:22:18Z | null | vmoens |
pytorch/pytorch | 141,116 | How to fuse batchnorm to conv2d in the graph exported by torch.export | I used the torch.export to export my CNN model in eval mode,but the op batchnorm still exists. how to eliminate it. Is there some options in torch.export.export function or I should write a fusion pass by myself.
Thanks.
code:
```
import torch
import torch.nn as nn
class CNN(nn.Module):
def __init__(self):
... | https://github.com/pytorch/pytorch/issues/141116 | open | [
"oncall: pt2",
"oncall: export"
] | 2024-11-20T07:46:28Z | 2024-11-20T19:06:46Z | null | TingfengTang |
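The fusion itself is just arithmetic on the weights, independent of how the pass is wired into torch.export. For each output channel with BatchNorm parameters gamma, beta, running mean mu and variance var, the folded conv uses w' = w * s and b' = (b - mu) * s + beta with s = gamma / sqrt(var + eps). A scalar-per-channel sketch of that arithmetic:

```python
import math


def fold_batchnorm(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold per-channel BatchNorm parameters into a conv's weight scale
    and bias, so bn(conv(x)) == conv'(x) with the returned parameters."""
    scale = [g / math.sqrt(v + eps) for g, v in zip(gamma, var)]
    w_f = [wi * s for wi, s in zip(w, scale)]
    b_f = [(bi - m) * s + bt for bi, m, s, bt in zip(b, mean, scale, beta)]
    return w_f, b_f


# One channel, treating the conv weight as a scalar multiplier for simplicity.
w_f, b_f = fold_batchnorm(w=[2.0], b=[1.0], gamma=[0.5], beta=[0.1],
                          mean=[1.0], var=[4.0], eps=0.0)
x = 3.0
z = 2.0 * x + 1.0                              # original conv output
bn = 0.5 * (z - 1.0) / math.sqrt(4.0) + 0.1    # BatchNorm applied to it
assert abs((w_f[0] * x + b_f[0]) - bn) < 1e-9  # folded conv matches
```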
pytorch/ao | 1,315 | How to trigger torchao unit tests? | We plan to run unit tests when we switch to different torch versions and triton versions.
How should we leverage torchao's unit tests to make sure new torch and triton versions are working?
Thanks! | https://github.com/pytorch/ao/issues/1315 | closed | [] | 2024-11-19T22:50:34Z | 2024-12-05T01:43:54Z | null | goldhuang |
huggingface/lerobot | 515 | ACT is working, but not Diffusion | Hello Team,
Your work is so good. I am currently working on creating some nice policies with the LeRobot repo, architecture, and software. I tried ACT on my robot; it is working fine and able to execute the tasks it learnt in the evaluation.
I tried training Diffusion policy, multiple times with different params and ... | https://github.com/huggingface/lerobot/issues/515 | closed | [
"question",
"policies",
"stale"
] | 2024-11-19T18:58:28Z | 2025-11-30T02:37:09Z | null | Kacchan16 |
huggingface/transformers.js | 1,042 | how can i pass embeddings or context to a text2text-generation model | ### Question
I downloaded the model locally. I found that there doesn't seem to be an API that allows me to pass embeddings. How can I make this model understand the context?
Then I tried to pass the context content to this model, but the model didn't seem to accept it and output the following words.
The code i... | https://github.com/huggingface/transformers.js/issues/1042 | closed | [
"question"
] | 2024-11-19T18:32:45Z | 2024-11-20T05:34:45Z | null | electroluxcode |
huggingface/transformers.js | 1,041 | Full preload example | ### Question
Hello!
I'm looking for a full "preload model" nodejs example.
Say I do this:
```ts
import { env } from '@huggingface/transformers';
env.allowRemoteModels = false;
env.localModelPath = '/path/to/local/models/';
```
how do I "get" the model to that path? I want to download it when building... | https://github.com/huggingface/transformers.js/issues/1041 | closed | [
"question"
] | 2024-11-19T12:34:04Z | 2024-11-26T12:44:55Z | null | benjick |
pytorch/benchmark | 2,543 | How to get benchmark statistics? | I'm building a CI to test some models on certain types of devices. I want get benchmark statistics like which model cases failed? which tests were skipped and why? These statistics will be used to generate a table like this:
<table>
<tr>
<th rowspan="2">Devices</th>
<th colspan="2">BERT_pytorch</th>
... | https://github.com/pytorch/benchmark/issues/2543 | closed | [] | 2024-11-19T09:36:22Z | 2025-02-11T08:15:40Z | null | shink |
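One way to assemble such a table once per-test outcomes are collected (the result-dict shape below is an assumption, not a torchbenchmark API):

```python
def render_results(results, devices, models):
    """Render {(device, model): outcome} as a markdown table,
    one row per device and one column per model."""
    header = "| Devices | " + " | ".join(models) + " |"
    sep = "|---" * (len(models) + 1) + "|"
    rows = [
        "| " + dev + " | "
        + " | ".join(results.get((dev, m), "n/a") for m in models) + " |"
        for dev in devices
    ]
    return "\n".join([header, sep] + rows)


# Hypothetical CI outcomes for one model on two devices:
results = {
    ("gpu-a", "BERT_pytorch"): "pass",
    ("gpu-b", "BERT_pytorch"): "skip: OOM",
}
table = render_results(results, ["gpu-a", "gpu-b"], ["BERT_pytorch"])
assert table.splitlines()[0] == "| Devices | BERT_pytorch |"
```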
pytorch/torchchat | 1,388 | eval doc does not pass test | ### 🐛 Describe the bug
https://github.com/pytorch/torchchat/pull/1383 enables `run-docs evaluation` to extract a test script from the eval documentation
and run the evaluation script. In turn, this extracts the command
```
python3 torchchat.py eval stories15M --tasks wikitext --limit 10
```
from the eval doc as a t... | https://github.com/pytorch/torchchat/issues/1388 | closed | [
"documentation"
] | 2024-11-19T05:38:54Z | 2024-12-10T04:41:51Z | 2 | mikekgfb |
pytorch/ao | 1,310 | [NF4] Various bugs in how NF4 handles `.to()` to move to a different device | Reproduction
```python
import torch
from torch import nn
from torchao.dtypes.nf4tensor import to_nf4
x = torch.randn(1024, 1024)
x_nf4 = to_nf4(x)
print(x_nf4.cuda()) # this will dequantize NF4 -> unwanted
print(x_nf4.to(device="cuda")) # this will raise error
print(x_nf4.to("cuda")) # this will do the ... | https://github.com/pytorch/ao/issues/1310 | closed | [
"bug"
] | 2024-11-19T04:31:35Z | 2024-11-26T06:19:03Z | null | gau-nernst |
pytorch/torchchat | 1,385 | Update dead link in https://github.com/pytorch/torchchat/blob/main/docs/quantization.md | ### 🐛 Describe the bug
There is a dead link https://github.com/pytorch/torchchat/blob/main/torchchat/utils/quantize.py#L1260-L1266 in https://github.com/pytorch/torchchat/blob/main/docs/quantization.md like `See the available quantization schemes [here](https://github.com/pytorch/torchchat/blob/main/torchchat/utils/q... | https://github.com/pytorch/torchchat/issues/1385 | closed | [
"documentation",
"Quantization"
] | 2024-11-19T01:34:54Z | 2024-12-09T22:37:22Z | 4 | yanbing-j |
pytorch/xla | 8,390 | [TPU][torch.compile] How to introduce in-place custom Ops through Pallas? | ## ❓ Questions and Help
Hi torch.xla team, thank you so much for the great work on making PyTorch available on XLA devices! We have had a great experience with it so far.
We are exploring the idea of adding custom Pallas kernels in the graph and using them along with `torch.compile(..., backend='openxla')` for TPUs.... | https://github.com/pytorch/xla/issues/8390 | closed | [] | 2024-11-18T19:03:23Z | 2024-11-18T19:08:50Z | null | xinli-sw |
pytorch/xla | 8,389 | Prepare a subsection to educate users on the PyTorch workloads on AI-Hypercomputer | ## 📚 Documentation
AI-Hypercomputer is where customers and users can find optimized implementation of representative models.
Please add a section in the PyTorchXLA README page (and the html documentation) that introduces this concept and points the users to the following resource: https://github.com/AI-Hypercomp... | https://github.com/pytorch/xla/issues/8389 | closed | [
"documentation"
] | 2024-11-18T18:48:24Z | 2024-12-10T00:24:25Z | 1 | miladm |
huggingface/transformers.js | 1,038 | script.convert tfjs model to onnx support | ### Question
I'm using tfjs-node to create an image-classifier model;
but I'm stuck with how to convert model.json to a format that can be used by optimum or script.convert to convert it to a onnx file.
I'm able to convert to a graph model using
```
tensorflowjs_converter --input_format=tfjs_layers_model \ --... | https://github.com/huggingface/transformers.js/issues/1038 | open | [
"question"
] | 2024-11-18T15:42:46Z | 2024-11-19T10:08:28Z | null | JohnRSim |
huggingface/chat-ui | 1,573 | Include chat-ui in an existing React application | Hello,
Is it possible to integrate / embed chat-ui in an existing application, like a React component?
For example, to add a chat module to an existing website with the UI of chat-ui.
As is the case with Chainlit : https://docs-prerelease.chainlit.io/customisation/react-frontend | https://github.com/huggingface/chat-ui/issues/1573 | open | [
"enhancement"
] | 2024-11-18T14:11:58Z | 2024-11-18T14:15:17Z | 0 | martin-prillard |
huggingface/optimum | 2,097 | TFJS support model.json to ONNX conversion | ### Feature request
Currently using node to create an image-classifier model.json with tfjs
- I don't think Optimum supports this format to convert to onnx?
It would be nice to just use optimum and point to model.json.
### Motivation
Currently I'm creating the model converting it to graph and then converting t... | https://github.com/huggingface/optimum/issues/2097 | open | [
"exporters",
"tflite"
] | 2024-11-18T12:55:05Z | 2024-11-19T10:22:35Z | 0 | JohnRSim |
huggingface/optimum-benchmark | 294 | How to Use a Local Model When Calling the Python API | 
| https://github.com/huggingface/optimum-benchmark/issues/294 | closed | [] | 2024-11-18T06:36:24Z | 2024-12-09T12:23:30Z | null | WCSY-YG |
pytorch/xla | 8,388 | Need help validating TPU/XLA devices support for ComfyUI. | ## ❓ Questions and Help
I'm working on adding initial XLA support to ComfyUI https://github.com/comfyanonymous/ComfyUI/pull/5657 and would greatly appreciate any feedback or validation from the community. Specifically, I'm looking for:
- Testing across different XLA-compatible hardware (e.g., TPUs or GPUs with XLA ... | https://github.com/pytorch/xla/issues/8388 | open | [
"question"
] | 2024-11-17T23:09:49Z | 2025-02-17T18:13:57Z | null | radna0 |
huggingface/lerobot | 511 | Minimum Requirements - Running Policies in production/ Training Policies | I was wondering what types of hardware can policies trained using lerobot can run on. Lets say I wanted to run policies in production on say a raspberry pi. Is it possible to run training on beefier hardware and then deploy policies to lower-end hardware to run? Is it better to record with various cameras or just use t... | https://github.com/huggingface/lerobot/issues/511 | closed | [
"question"
] | 2024-11-17T17:34:50Z | 2025-04-07T16:23:41Z | null | rkeshwani |
huggingface/transformers.js | 1,035 | How can I implement partial output in the react demo? | ### Question
Hello! I am reading the Transformers.js documentation for "[Building a react application](https://huggingface.co/docs/transformers.js/tutorials/react)", but I encountered an issue at [step 4](https://huggingface.co/docs/transformers.js/tutorials/react#step-4-connecting-everything-together).
I don't kn... | https://github.com/huggingface/transformers.js/issues/1035 | open | [
"question"
] | 2024-11-17T11:29:22Z | 2024-12-02T23:00:13Z | null | DikkooXie |
huggingface/lerobot | 510 | Do we have to compulsory use trossen robotics robots for this repo? | Or any robot will work fine?
Also one more question.
Do we have to use depth camera or simple camera will work fine? | https://github.com/huggingface/lerobot/issues/510 | closed | [
"question",
"robots"
] | 2024-11-17T11:14:52Z | 2025-04-07T16:27:40Z | null | hemangjoshi37a |
huggingface/diffusers | 9,942 | Unable to install pip install diffusers>=0.32.0dev | ### Describe the bug
I am installing the following version
pip install diffusers>=0.32.0dev
However it does nothing
```
(c:\aitools\CogVideo\cv_venv) C:\aitools\CogVideo>pip install diffusers>=0.32.0dev
(c:\aitools\CogVideo\cv_venv) C:\aitools\CogVideo>
```
I even uninstalled the previous version
```... | https://github.com/huggingface/diffusers/issues/9942 | closed | [
"bug"
] | 2024-11-17T10:26:19Z | 2024-11-17T12:27:23Z | 0 | nitinmukesh |
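The silent no-op above has a shell-level cause: the unquoted `>=` is parsed as output redirection, so pip never sees the version constraint. A small demonstration, plus the quoted forms (the pip commands are left as comments since they would hit the network):

```shell
# The shell treats ">=0.32.0dev" as a redirection, so pip only ever sees
# "pip install diffusers"; the rest becomes a file named "=0.32.0dev":
echo demo >=0.32.0dev
ls "=0.32.0dev"        # the "version constraint" ended up as a filename
rm "=0.32.0dev"

# Quote the specifier so pip receives it (add --pre for a pre-release),
# or install the development version straight from source:
# pip install --pre "diffusers>=0.32.0"
# pip install git+https://github.com/huggingface/diffusers
```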
huggingface/candle | 2,622 | How to compute `Atan2` for tensors? | I am trying to implement DeepPhase in candle but I am struggling figuring out how to calculate the phase angles from two tensors using `atan2` operation. | https://github.com/huggingface/candle/issues/2622 | open | [] | 2024-11-16T16:45:36Z | 2024-11-17T14:21:50Z | null | cryscan |
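Where a framework lacks a fused `atan2` op, the phase can be recovered from plain `atan` with quadrant corrections — a minimal pure-Python reference of that identity (this is not candle's API; candle's elementwise ops could apply the same branches via masks):

```python
import math

def atan2_from_atan(y, x):
    # Quadrant-corrected arctangent built from plain atan, the usual
    # fallback when a backend exposes atan but not atan2.
    if x > 0:
        return math.atan(y / x)
    if x < 0:
        return math.atan(y / x) + (math.pi if y >= 0 else -math.pi)
    # x == 0: straight up, straight down, or the origin
    return math.copysign(math.pi / 2, y) if y != 0 else 0.0

# Elementwise over two component "tensors" (plain lists stand in here):
for y, x in [(0.0, 1.0), (1.0, 0.0), (1.0, -1.0), (-1.0, -1.0)]:
    assert abs(atan2_from_atan(y, x) - math.atan2(y, x)) < 1e-9
print("matches math.atan2")
```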
pytorch/xla | 8,387 | Can Triton be used with XLA/TPU devices? | ## ❓ Questions and Help
I see that there are docs for triton support but only for GPU? Is it possible for TPU to use triton?
| https://github.com/pytorch/xla/issues/8387 | closed | [] | 2024-11-16T09:46:06Z | 2024-12-11T06:21:18Z | 1 | radna0 |
pytorch/torchchat | 1,380 | What is the future plan of model expansion? | ### 🚀 The feature, motivation and pitch
I see current torchchat only support a few kinds of model, like llama based(liked) architecture, or pre-defined Transformer architecture models. Is there any plan to support other kinds of model architecture in the future? which kinds of model you're considering to add? If ther... | https://github.com/pytorch/torchchat/issues/1380 | open | [
"enhancement",
"Question",
"triaged"
] | 2024-11-15T23:33:01Z | 2025-03-31T20:39:15Z | null | jenniew |
huggingface/transformers.js | 1,032 | How to identify which models will work with transformers.js? | ### Question
I've tried multiple models from MTEB dashboard (e.g. `jinaai/jina-embeddings-v3`, `jinaai/jina-embeddings-v2`, `dunzhang/stella_en_400M_v5`), but none of them work.
It's not clear which models will work?
```ts
const generateGteSmallEmbedding = await pipeline(
'feature-extraction',
'dunzhang/s... | https://github.com/huggingface/transformers.js/issues/1032 | open | [
"question"
] | 2024-11-15T22:13:00Z | 2024-12-22T02:41:43Z | null | punkpeye |
huggingface/datasets | 7,291 | Why return_tensors='pt' doesn't work? | ### Describe the bug
I tried to add input_ids to dataset with map(), and I used the return_tensors='pt', but why I got the callback with the type of List?

### Steps to reproduce the bug
pytorch/torchtitan | 678 | … deploy mesh with torchtian? | Under the 128k long sequence, the activation value memory increases significantly.
CP8 + TP8 seems necessary (they reduce the activation value memory almost linearly), but there is still as much as 50G of activation value memory.
Recomputing the activations of the MLP can reduce it by about 9G, while the recalculati... | https://github.com/pytorch/torchtitan/issues/678 | closed | [
"enhancement",
"question"
] | 2024-11-15T03:36:20Z | 2025-02-26T06:40:07Z | null | medivh-xp |
huggingface/diffusers | 9,930 | [PAG] - Adaptive Scale bug | ### Describe the bug
I am looking for the purpose of the PAG adaptive scale? Because I was passing a value in it, for example 5.0, and passing 3.0 in the PAG scale, according to the implemented code we will have a negative number and the scale will return 0 and the PAG will not be applied and I did not find an expla... | https://github.com/huggingface/diffusers/issues/9930 | open | [
"bug",
"stale"
] | 2024-11-15T02:00:19Z | 2024-12-15T15:03:05Z | 1 | elismasilva |
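A simplified, hypothetical illustration of the interaction described above (the names and the linear decay are assumptions, not the exact diffusers implementation): once the adaptive term outgrows the base PAG scale, the effective scale clamps to zero and PAG is effectively disabled.

```python
# Hypothetical sketch of an "adaptive" PAG scale decaying over the sampling
# trajectory; negative values are clamped, which disables PAG entirely.
def effective_pag_scale(pag_scale, adaptive_scale, progress):
    """progress runs from 0.0 (first step) to 1.0 (last step)."""
    scale = pag_scale - adaptive_scale * progress
    return max(scale, 0.0)

print(effective_pag_scale(3.0, 5.0, 1.0))  # 0.0 -> PAG disabled late in sampling
print(effective_pag_scale(3.0, 5.0, 0.2))  # 2.0 -> still active early on
```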
huggingface/safetensors | 541 | [Question] Safetensors seem to block the main thread -- but torch.save does not? | I have the following code in my training loop:
```
if rank == 0:
t = Thread(
target=save_file,
args=(model_sd, f"{cfg.model_dir}/model_{step + 1}.safetensors"),
daemon=True
)
... | https://github.com/huggingface/safetensors/issues/541 | open | [] | 2024-11-15T00:37:55Z | 2025-02-26T09:51:23Z | 4 | vedantroy |
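A hedged sketch of one common workaround for the question above: if the serializer holds the GIL, a daemon `Thread` still stalls the training loop, whereas a separate *process* does not. `pickle` and the paths below are illustrative stand-ins, not the safetensors API (assumes a Unix host for the fork start method):

```python
# Offload a blocking checkpoint save into a child process so it cannot
# hold the trainer's GIL; pickle stands in for the real serializer.
import multiprocessing as mp
import os
import pickle
import tempfile

def save_worker(path, payload):
    with open(path, "wb") as f:
        pickle.dump(payload, f)

ctx = mp.get_context("fork")  # Unix-only start method, assumed here
path = os.path.join(tempfile.mkdtemp(), "model_1.ckpt")
proc = ctx.Process(target=save_worker, args=(path, {"w": [0.0] * 1024}))
proc.start()
# ... the training loop keeps running here, unblocked ...
proc.join()
print(os.path.exists(path))  # True
```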
pytorch/xla | 8,380 | How are PJRT asynchronous executions throttled by torch_xla? | ## 🐛 Bug
Here at AWS we have a single PJRT device plugin for both PyTorch and JAX, and recently we've made implements to our device plugin to make it work better with JAX. I.e. now `PJRT_LoadedExecutable_Execute()` is fully asynchronous, we queue up an execution and return immediately, and expect the caller to wait... | https://github.com/pytorch/xla/issues/8380 | closed | [] | 2024-11-14T18:39:43Z | 2024-11-27T17:59:21Z | 7 | mcuiaws |
pytorch/torchtitan | 677 | Fine-Tuning Llama Model with Large Context and Customized Dataset Using Torchtitan | Hi,
I am trying to fine-tune a Llama model with a large context size, and I found that to efficiently shard activations across multiple GPUs, I need to use Torchtitan. Here are some questions related to my setup:
See related issue: [meta-llama/llama-recipes#785](https://github.com/meta-llama/llama-recipes/issues/... | https://github.com/pytorch/torchtitan/issues/677 | closed | [
"enhancement",
"question"
] | 2024-11-14T17:29:52Z | 2024-12-17T16:11:20Z | null | Amerehei |
huggingface/peft | 2,216 | How to specify the coefficients of loading lora during inference? | https://github.com/huggingface/peft/issues/2216 | closed | [] | 2024-11-14T11:47:00Z | 2024-11-18T11:30:03Z | null | laolongboy | |
huggingface/chat-ui | 1,565 | Is there any place that uses this environment variable? | https://github.com/huggingface/chat-ui/blob/ab349d0634ec4cf68a781fd7afc5e7fdd6bb362f/.env#L59-L65
It seems like it can be deleted. | https://github.com/huggingface/chat-ui/issues/1565 | closed | [] | 2024-11-14T11:12:49Z | 2024-11-14T11:17:04Z | 2 | calycekr |
huggingface/diffusers | 9,927 | HeaderTooLarge when train controlnet with sdv3 | ### Describe the bug
Hello, I tried diffuser to train controlnet with sdv3 but it didn't start training and send `safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge` feedback. I don't know how to handle it.
### Reproduction
Follow the README_v3 guide.
### Logs
```shell
(diffusers) [... | https://github.com/huggingface/diffusers/issues/9927 | closed | [
"bug"
] | 2024-11-14T07:28:03Z | 2024-11-21T13:02:05Z | 3 | Viola-Siemens |
huggingface/datasets | 7,290 | `Dataset.save_to_disk` hangs when using num_proc > 1 | ### Describe the bug
Hi, I've encountered a small issue when saving datasets that led to the saving taking up to multiple hours.
Specifically, [`Dataset.save_to_disk`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.save_to_disk) is a lot slower when using `num_proc>1` than... | https://github.com/huggingface/datasets/issues/7290 | open | [] | 2024-11-14T05:25:13Z | 2025-11-24T09:43:03Z | 4 | JohannesAck |
pytorch/executorch | 6,846 | How to Apply Different Quantization Settings Per Layer in ExecuTorch? | Dear @kimishpatel @jerryzh168 @shewu-quic
I want to split a model(eg, Llama-3.2-3B) into multiple layers and apply different quantization settings(qnn_8a8w, qnn_16a4w...) to each layer.
Has such a method been tested in ExecuTorch?
If not, could you suggest how this can be achieved?
Thank you | https://github.com/pytorch/executorch/issues/6846 | open | [
"partner: qualcomm",
"triaged",
"module: quantization"
] | 2024-11-14T02:48:39Z | 2024-12-23T19:32:53Z | null | crinex |
huggingface/trl | 2,356 | How to train from scratch? Can you provide the code | ### System Info
train from scratch
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
train from scratch
### Expected behavior
train from scr... | https://github.com/huggingface/trl/issues/2356 | closed | [
"❓ question"
] | 2024-11-14T02:39:41Z | 2024-12-13T23:00:20Z | null | sankexin |
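On "train from scratch": the usual pattern is to build the model from a config (random initialization) instead of `from_pretrained`, then pass it to the trainer as normal. A hedged sketch with arbitrary tiny GPT-2 dimensions:

```python
# Construct a randomly initialized model from a config; the dimensions
# here are made up and deliberately tiny.
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(n_layer=2, n_head=2, n_embd=64, vocab_size=1000)
model = GPT2LMHeadModel(config)  # random weights, no download
print(model.num_parameters() > 0)  # True
```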
huggingface/sentence-transformers | 3,054 | 'scale' hyperparameter in MultipleNegativesRankingLoss | I am looking through the MultipleNegativesRankingLoss.py code and I have question about the 'scale' hyperparameter. Also known as the 'temperature', the scale is used to stretch or compress the range of output values from the similarity function. A larger scale creates greater distinction between positive and negative ... | https://github.com/huggingface/sentence-transformers/issues/3054 | closed | [
"question"
] | 2024-11-14T00:11:23Z | 2025-01-16T13:54:45Z | null | gnatesan |
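The sharpening effect asked about above can be seen numerically: MultipleNegativesRankingLoss multiplies cosine similarities by the scale before the softmax/cross-entropy, which concentrates probability on the positive pair. A pure-Python illustration with made-up similarity values:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Cosine similarities for one anchor: positive pair first, then two
# in-batch negatives (values are made up).
sims = [0.9, 0.7, 0.1]

for scale in (1.0, 20.0):
    probs = softmax([s * scale for s in sims])
    # a larger scale puts far more mass on the positive (~0.98 at scale 20)
    print(scale, [round(p, 3) for p in probs])
```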
huggingface/diffusers | 9,924 | Can we get more schedulers for flow based models such as SD3, SD3.5, and flux | It seems advanced schedulers such as DDIM and DPM++ 2M do work with flow-based models such as SD3, SD3.5, and Flux.
However, I only see 2 flow based schedulers in diffusers codebase:
FlowMatchEulerDiscreteScheduler, and'
FlowMatchHeunDiscreteScheduler
I tried to use DPMSolverMultistepScheduler, but it do... | https://github.com/huggingface/diffusers/issues/9924 | open | [
"wip",
"scheduler"
] | 2024-11-14T00:07:56Z | 2025-01-14T18:31:12Z | 40 | linjiapro |
pytorch/torchtitan | 676 | Very low wps with H200 Gpus | Hello, I am running the multinode_trainer.slurm (llama3_70b.toml) on 4 nodes that have 32 H200 Gpus. However, wps is only around ~200. Any ideas what can cause this slowness?
[output.txt](https://github.com/user-attachments/files/17740634/output.txt)
[multinode_trainer.slurm.txt](https://github.com/user-attachme... | https://github.com/pytorch/torchtitan/issues/676 | closed | [
"question"
] | 2024-11-13T23:59:00Z | 2025-02-26T04:16:21Z | null | aniltrkkn |
pytorch/xla | 8,379 | Confusing text in bazel.md | ## 📚 Documentation
The bazel.md file contains the following text:
Bazel brings in [pybind11](https://github.com/pybind/pybind11) embeded python and links against it to provide libpython to the plugin using this mechanism. Python headers are also sourced from there instead of depending on the system version. Thes... | https://github.com/pytorch/xla/issues/8379 | open | [
"documentation",
"build"
] | 2024-11-13T23:11:00Z | 2025-11-13T00:46:46Z | 3 | mikegre-google |
pytorch/executorch | 6,813 | How to convert tokenizer of SmolLM model as accepted by executorch | Hi,
I am trying to convert [SmolLm-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-135M-Instruct) model to .pte format and then run on an android device.
I have been successful in converting the model but executorch requires the tokenizer in either .bin format or .model format which can then be converted i... | https://github.com/pytorch/executorch/issues/6813 | open | [
"triaged",
"module: extension",
"module: user experience"
] | 2024-11-13T11:19:13Z | 2025-12-18T20:16:46Z | null | Arpit2601 |
huggingface/pytorch-image-models | 2,332 | [BUG] How to customize the number of classification heads | **Describe the bug**
**To Reproduce**
Steps to reproduce the behavior:
from timm.models import create_model
checkpoint_path = "/nas_mm_2/yinxiaofei.yxf/open_source_model/InternViT-300M-448px/tmp/timm__vit_intern300m_patch14_448.ogvl_dist/model.safetensors"
model = create_model('vit_intern300m_patch14_448',chec... | https://github.com/huggingface/pytorch-image-models/issues/2332 | closed | [
"bug"
] | 2024-11-12T08:08:50Z | 2024-11-12T15:28:42Z | null | JarvisFei |
pytorch/xla | 8,371 | TPU Trillium Base Docker Image cannot initialize | ## TPU initialization is failed
When I started tpu v6e-4 TPU Vm with v2-alpha-tpuv6e base image, with pip enviroment and xla updates I can clearly initialized tpus. However when I start to dockerize my pipelie, it fails to initialize TPUs. I tried so much tpu xla base images but I could not achieve to initialize. Th... | https://github.com/pytorch/xla/issues/8371 | open | [
"bug",
"xla:tpu"
] | 2024-11-12T07:38:53Z | 2025-02-18T12:43:11Z | 9 | hsebik |
huggingface/unity-api | 30 | [QUESTION] | I have a simple game built in unity and I'm using this Hugging face API client for voice parsing. I'm trying to understand when I build and run the game, and want to distribute it to many users, how do I keep the same api key every time so that users can install and run voice control it without any issue? | https://github.com/huggingface/unity-api/issues/30 | closed | [
"question"
] | 2024-11-12T02:35:52Z | 2024-11-20T01:46:16Z | null | harshal-14 |
pytorch/vision | 8,721 | make processing of arbitrary inputs to transforms.v2 public and document it | ### 🚀 The feature
Supporting arbitrary input structures in custom transforms is very important in the case of transform compositions:
```python
tr = Compose([RandomCrop((128,128), CustomTransform])
```
This can be done by inheriting from `torchvision.transforms.v2.Transform` and implementing the **private** `._tr... | https://github.com/pytorch/vision/issues/8721 | closed | [] | 2024-11-11T13:48:03Z | 2024-12-09T12:39:09Z | 3 | liopeer |
huggingface/swift-transformers | 140 | How to use customized tokenizer? | Hello. I am writing this post because I have a question about loading the tokenizer model. I am trying to use a pre-trained tokenizer in a Swift environment. After training, how do I apply the byproduct .model and .vocab files so that I can use the tokenizer I trained in Swift while using the swift-transformer API? I w... | https://github.com/huggingface/swift-transformers/issues/140 | open | [
"tokenization"
] | 2024-11-11T09:36:14Z | 2025-09-10T13:19:10Z | null | cch1219 |
pytorch/audio | 3,852 | Can anyone provide a real-time pretrain model for Visual Speech Recognition? | ### 📚 The doc issue
I don't have the LRS3 dataset, I can't use the author's real time recipe, I would like to ask if I can directly request the trained MODEL? I would like to ask the author if he can provide the trained mods directly, or if there is anyone who has the download point of LRS3, thank you!
### Suggest a... | https://github.com/pytorch/audio/issues/3852 | open | [] | 2024-11-11T06:19:57Z | 2024-11-11T06:19:57Z | 0 | bernie-122 |