| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/serve | 3,296 | Integrating a TorchServe-hosted model with a third-party application | I have an application that takes an image, converts it into base64, and creates an input request for an API call.
The input schema structure created by my application looks something like this,
{
"instances":
[
{
"base64": "base64 string of image",
"mode_type": "some value"
... | https://github.com/pytorch/serve/issues/3296 | open | [] | 2024-08-22T06:18:20Z | 2024-08-22T16:27:35Z | 1 | tarunsk1998 |
huggingface/sentence-transformers | 2,900 | how to keep `encode_multi_process` output on the GPU | I saw this [example](https://github.com/UKPLab/sentence-transformers/blob/master/examples/applications/semantic-search/semantic_search.py) where we can do the following:
`query_embedding = embedder.encode(query, convert_to_tensor=True)`
`hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=5)`
I r... | https://github.com/huggingface/sentence-transformers/issues/2900 | open | [] | 2024-08-21T21:05:35Z | 2024-08-21T21:07:39Z | null | anshuchen |
pytorch/TensorRT | 3,109 | ❓ [Question] how to specify dynamic shape when using torch_tensorrt.save | ## ❓ Question
<!-- Your question -->
I was following [the documentation](https://pytorch.org/TensorRT/user_guide/dynamic_shapes.html#dynamic-shapes) on compiling a model with dynamic input shape. When saving the compiled graph module (following [this](https://pytorch.org/TensorRT/user_guide/saving_models.html)), th... | https://github.com/pytorch/TensorRT/issues/3109 | closed | [
"question"
] | 2024-08-21T18:35:28Z | 2024-09-26T20:38:44Z | null | Qi-Zha0 |
pytorch/ao | 724 | What is the difference between WeightNormSparsifier and torch.nn.utils.prune.l1_unstructured ? | https://github.com/pytorch/ao/issues/724 | open | [
"question"
] | 2024-08-21T18:14:19Z | 2024-08-23T15:03:35Z | null | mayank64ce | |
huggingface/parler-tts | 116 | How to use the Italian language? | Is it possible to use an Italian-style speaker? I've tried many prompts but all of them are in an English style | https://github.com/huggingface/parler-tts/issues/116 | open | [] | 2024-08-21T15:24:57Z | 2025-06-18T13:20:22Z | null | piperino11 |
huggingface/chat-ui | 1,423 | Generated answers with Llama 3 include <|start_header_id|>assistant<|end_header_id|> | ## Bug description
I have set up a local endpoint serving Llama 3. All the answers I get from it start with `<|start_header_id|>assistant<|end_header_id|>`.
## Steps to reproduce
Set up Llama 3 in a local endpoint. In my `.env.local`, it is defined as the following:
```
MODELS=`[
{
"name": "lla... | https://github.com/huggingface/chat-ui/issues/1423 | closed | [
"support"
] | 2024-08-21T11:56:47Z | 2024-08-26T14:31:53Z | 5 | erickrf |
huggingface/trl | 1,955 | How to fine-tune LLaVA using PPO | Does LLaVA support training with PPO?
If not, what modifications do I need to make to enable this support? | https://github.com/huggingface/trl/issues/1955 | open | [
"✨ enhancement",
"👁️ VLM"
] | 2024-08-21T07:34:30Z | 2024-08-26T11:13:46Z | null | Yufang-Liu |
pytorch/xla | 7,897 | Import "torch_xla.core.xla_model" could not be resolved | I am getting issues with torch_xla.core.xla_model, and while installing the package I also get errors: "ERROR: Could not find a version that satisfies the requirement torch-xla (from versions: none)
ERROR: No matching distribution found for torch-xla"
My installed Python version is Python 3.10.0.
Any solution?
| https://github.com/pytorch/xla/issues/7897 | closed | [
"question"
] | 2024-08-21T05:25:35Z | 2025-04-01T12:26:48Z | null | hiralU |
huggingface/diffusers | 9,235 | Is there any way to get diffusers-v0.27.0.dev0? | Is there any way to get diffusers-v0.27.0.dev0? I want to compare the difference between diffusers-v0.27.0.dev0 and branches that develop on it in another project, but I didn't find it on the releases or tags page. | https://github.com/huggingface/diffusers/issues/9235 | closed | [] | 2024-08-21T03:42:11Z | 2024-08-21T05:10:26Z | 2 | D222097 |
huggingface/llm.nvim | 108 | How to use proxy env var | I am unable to communicate with any http endpoints because I am behind a corporate proxy that uses self-signed certificates. Typically we use the http_proxy and https_proxy environment variables for this purpose, but I can't see any obvious configurations that I can add to my lua config to make this work.
I have tri... | https://github.com/huggingface/llm.nvim/issues/108 | open | [] | 2024-08-20T18:52:54Z | 2024-08-20T18:53:36Z | null | SethARhodes |
huggingface/huggingface_hub | 2,468 | How can I modify this repo-files downloader Jupyter notebook script to improve download speed? Perhaps multiple downloads at the same time? | The code below works but it is just slow.
How can I speed it up? The machine has much more bandwidth, and I really need to download lots of AI models to test.
Thank you
```
import os
import requests
import hashlib
from huggingface_hub import list_repo_files, hf_hub_url, hf_hub_download
from huggingface_hub.utils ... | https://github.com/huggingface/huggingface_hub/issues/2468 | closed | [] | 2024-08-20T15:13:13Z | 2024-08-27T16:22:14Z | null | FurkanGozukara |
pytorch/xla | 7,890 | In spmd training of multiple machines, xp.trace is problematic | ## ❓ Questions and Help
I printed all the thunks that were executed and found that a lot of them didn't appear in my TensorBoard. And the front-to-back order is also wrong.
I trace according to this example:https://github.com/pytorch/xla/blob/master/test/spmd/test_train_spmd_imagenet.py#L318-L3... | https://github.com/pytorch/xla/issues/7890 | open | [
"question"
] | 2024-08-20T12:48:39Z | 2025-04-01T12:28:34Z | null | mars1248 |
huggingface/datasets | 7,116 | datasets cannot handle nested json if features is given. | ### Describe the bug
I have a json named temp.json.
```json
{"ref1": "ABC", "ref2": "DEF", "cuts":[{"cut1": 3, "cut2": 5}]}
```
I want to load it.
```python
ds = datasets.load_dataset('json', data_files="./temp.json", features=datasets.Features({
'ref1': datasets.Value('string'),
'ref2': datasets.Value... | https://github.com/huggingface/datasets/issues/7116 | closed | [] | 2024-08-20T12:27:49Z | 2024-09-03T10:18:23Z | 3 | ljw20180420 |
huggingface/datasets | 7,113 | Stream dataset does not iterate if the batch size is larger than the dataset size (related to drop_last_batch) | ### Describe the bug
Hi there,
I use streaming and interleaving to combine multiple datasets saved in jsonl files. The size of dataset can vary (from 100ish to 100k-ish). I use dataset.map() and a big batch size to reduce the IO cost. It was working fine with datasets-2.16.1 but this problem shows up after I upgr... | https://github.com/huggingface/datasets/issues/7113 | closed | [] | 2024-08-20T08:26:40Z | 2024-08-26T04:24:11Z | 1 | memray |
pytorch/serve | 3,290 | model_yaml_config usage is not explained well enough | ### 📚 The doc issue
### Expected :
The [documentation ](https://github.com/pytorch/serve/blob/master/docs/configuration.md#config-model)about `model_yaml_config` sounds as if we could use it as below in `config.properties` and access it later.
- file name : `config.properties`
- content :
```
inference_addres... | https://github.com/pytorch/serve/issues/3290 | open | [] | 2024-08-20T00:34:32Z | 2024-08-26T18:49:27Z | 1 | Foundsheep |
pytorch/torchchat | 1,041 | Improve support for and documentation of custom models | ### 🚀 The feature, motivation and pitch
torchchat supports adding models to the "known_model" list and has CLI support for local models not hosted in torchchat's model list, but this could be better documented.
### Alternatives
_No response_
### Additional context
Some PR's Related to this theme:
* https://github.com/pytorc... | https://github.com/pytorch/torchchat/issues/1041 | closed | [
"documentation",
"enhancement",
"Known Gaps",
"triaged"
] | 2024-08-19T16:43:48Z | 2025-02-04T18:22:48Z | 1 | Jack-Khuu |
huggingface/diffusers | 9,216 | I made a pipeline that lets you use any number of models at once | ### Model/Pipeline/Scheduler description
Here's how to do it:
from rubberDiffusers import StableDiffusionRubberPipeline
pipe=StableDiffusionRubberPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", torch_dtype=torch.float32,local_files_only=True,safety_checker=None, requires_safety_checker=False,
)
... | https://github.com/huggingface/diffusers/issues/9216 | open | [
"stale"
] | 2024-08-19T11:46:08Z | 2024-09-21T15:03:31Z | 3 | alexblattner |
pytorch/torchtitan | 528 | How to train using bfloat16? | Hi! I have a quick question: how do I train using bfloat16? I found the default setting uses fp32.
I changed "data_parallel_degree" to 4 (my number of GPUs) but it still did not use bfloat16.
Thanks in advance! | https://github.com/pytorch/torchtitan/issues/528 | closed | [] | 2024-08-19T07:38:12Z | 2024-08-20T13:45:47Z | null | zyushun |
pytorch/ao | 704 | Question: How to use Float8InferenceLinear with FSDP1/2? | Hey Team,
I'm trying to use FSDP1/2 with Float8InferenceLinear but there seem to be some issues (with torch 2.3.1+cu118). Do you suggest bumping to a higher version of torch and trying again, or maybe using the training setup without the inference layer? I also tried using the Float8Linear layer without using the quanti... | https://github.com/pytorch/ao/issues/704 | open | [
"float8",
"inference"
] | 2024-08-19T07:33:07Z | 2024-08-26T02:40:18Z | null | qingquansong |
huggingface/transformers | 32,873 | How to use 【examples/pytorch/contrastive-image-text】 to run inference | ### Feature request
I have reviewed the training code for CLIP and successfully executed it. Now, I want to use the obtained model for inference testing.
### Motivation
I would like to test the performance of the model I have trained.
### Your contribution
I hope I can get an example script for inference testing... | https://github.com/huggingface/transformers/issues/32873 | open | [
"Feature request"
] | 2024-08-19T05:54:54Z | 2024-08-19T08:33:50Z | null | rendaoyuan |
pytorch/TensorRT | 3,098 | ❓ [Question] When using torch_tensorrt.compile to optimize Mask2Former's multi_scale_deformable_attn layer, an error occurs. | ## ❓ Question
<!-- Your question -->
I was preparing to export a TRT model for Mask2Former using the command **optimized_model = torch_tensorrt.compile(model, inputs=imgs, enabled_precisions={torch.half})**, where model is a Mask2Former loaded through mmseg.
However, I encountered an error at the line **value_l_ =... | https://github.com/pytorch/TensorRT/issues/3098 | open | [
"question"
] | 2024-08-19T03:03:03Z | 2024-09-24T18:38:56Z | null | edition3234 |
huggingface/chat-ui | 1,415 | Bad request: Task not found for this model | Hi all,
I am facing the following issue when using HuggingFaceEndpoint for my custom finetuned model in my repository "Nithish-2001/RAG-29520hd0-1-chat-finetune" which is public with gradio.
llm_name: Nithish-2001/RAG-29520hd0-1-chat-finetune
Traceback (most recent call last):
File "/usr/local/lib/python3.10/... | https://github.com/huggingface/chat-ui/issues/1415 | open | [
"support"
] | 2024-08-18T09:33:10Z | 2024-08-25T22:38:00Z | 1 | NITHISH-Projects |
pytorch/TensorRT | 3,095 | ❓ [Question] Why does the speed (fps) of torch-tensorrt perform so badly in `torch.multiprocessing`? | ## ❓ Question
Hello, dear developer:
Thank you for your amazing work!
Why does the speed (fps) of torch-tensorrt perform so badly in `torch.multiprocessing`?
Currently I use `torch.multiprocessing` to create and run 3 processes (on 1 GPU) for resnet18, resnet50 and resnet101 at the same time. But I find their speeds o... | https://github.com/pytorch/TensorRT/issues/3095 | open | [
"question"
] | 2024-08-17T08:32:46Z | 2025-04-15T13:54:47Z | null | zhongqiu1245 |
pytorch/torchx | 945 | Using torchx as a SDK | ## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
Before submitting, please ensure you have gone through our
[documentation](https://pytorch.org/torchx).
### Question
The examples on the documentation refer to using torchx via the cli implem... | https://github.com/meta-pytorch/torchx/issues/945 | open | [] | 2024-08-17T03:21:30Z | 2024-08-19T14:18:45Z | 1 | juinquok |
huggingface/sentence-transformers | 2,893 | how to finetune sentence-transformers with unsupervised methods? | how to finetune sentence-transformers with unsupervised methods? for semantic search | https://github.com/huggingface/sentence-transformers/issues/2893 | closed | [] | 2024-08-17T02:32:09Z | 2024-08-18T02:51:29Z | null | keyuchen21 |
huggingface/diffusers | 9,205 | Can we pass output_attentions=True to DiT model such as pixart to get attention output? | Can we pass output_attentions=True to DiT model such as pixart to get attention output? Like using output_attentions=True in transformer? | https://github.com/huggingface/diffusers/issues/9205 | open | [
"stale"
] | 2024-08-16T17:26:14Z | 2024-09-16T15:02:42Z | 1 | foreverpiano |
huggingface/datatrove | 266 | How to look into the processed data? | Hi,
After running `tokenize_from_hf_to_s3.py`, I would like to inspect the resulting data. But I find that the current data is in a binary file (`.ds`). is there a way to allow me to look into the data?
Thanks! | https://github.com/huggingface/datatrove/issues/266 | open | [] | 2024-08-16T16:54:45Z | 2024-08-29T15:26:35Z | null | shizhediao |
huggingface/trl | 1,934 | How to Save the PPOTrainer? | The previous issue for this question https://github.com/huggingface/trl/issues/1643#issue-2294886330 is closed but remained unanswered. If I do `ppo_trainer.save_pretrained('path/to/a/folder')` and then `ppo_trainer.from_pretrained('path/to/that/folder')`, I get this error:
ValueError: tokenizer must be a PreTrained... | https://github.com/huggingface/trl/issues/1934 | closed | [] | 2024-08-16T09:41:39Z | 2024-10-07T14:57:51Z | null | ThisGuyIsNotAJumpingBear |
huggingface/parler-tts | 109 | How many epochs of training did you do? What is the accuracy? | How many epochs of training did you do? What is the accuracy? | https://github.com/huggingface/parler-tts/issues/109 | open | [] | 2024-08-16T09:35:31Z | 2024-08-16T09:35:31Z | null | xuezhongfei2008 |
pytorch/torchchat | 1,038 | How to deploy a new model with torchchat? | I want to use torchchat to load a trained model directly from local storage. How do I change torchchat/config/data/models.json? Do I need to change `download_and_convert` in download.py? And what other files may need to be changed? | https://github.com/pytorch/torchchat/issues/1038 | open | [
"bug"
] | 2024-08-16T09:33:29Z | 2024-08-19T18:24:37Z | null | liu8060 |
huggingface/diffusers | 9,195 | Problem with Flux Schnell bfloat16 multiGPU | ### Describe the bug
Hello! I set device_map='balanced' and get images generated in 2.5 minutes (expected in 12-20 seconds), while in pipe.hf_device_map it shows that the devices are distributed like this:
```
{
"transformer": "cuda:0",
"text_encoder_2": "cuda:2",
"text_encoder": "cuda:0",
"vae": "cuda:1"
... | https://github.com/huggingface/diffusers/issues/9195 | closed | [
"bug"
] | 2024-08-16T06:30:54Z | 2025-12-05T06:38:14Z | 26 | OlegRuban-ai |
pytorch/TensorRT | 3,092 | ❓ [Question] Is there any way to deploy on a single machine with multi-gpus? | ## ❓ Question
<!-- Your question -->
## What you have already tried
<!-- A clear and concise description of what you have already done. -->
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0):
- CPU Architecture:
- OS (e.... | https://github.com/pytorch/TensorRT/issues/3092 | open | [
"question"
] | 2024-08-16T02:01:21Z | 2024-08-16T17:58:02Z | null | SZ-ing |
pytorch/pytorch | 133,643 | How to Manage CPU Memory Usage in PyTorch After Moving Model to CPU? | ### 📚 The doc issue
Hi everyone,
I'm currently working on a deep learning project using PyTorch, and I've run into some issues with managing CPU memory after transferring a model to the GPU.
Specifically, I'm loading a pre-trained model using PyTorch, then moving the model to the GPU. However, I've noticed that ... | https://github.com/pytorch/pytorch/issues/133643 | closed | [] | 2024-08-15T23:23:15Z | 2024-08-16T20:43:11Z | null | prisnguyen |
pytorch/xla | 7,858 | [Bug] Notebook `Stable Diffusion with PyTorch/XLA 2.0` is outdated | ## 🐛 Bug
Official Notebook `Stable Diffusion with PyTorch/XLA 2.0` is outdated
## To Reproduce:
Run [Stable Diffusion with PyTorch/XLA 2.0 Notebook](https://github.com/pytorch/xla/blob/master/contrib/kaggle/pytorch-xla-2-0-on-kaggle.ipynb) on Kaggle TPU VM v3-8
## Environment
Kaggle TPU VM v3-8
## Expect... | https://github.com/pytorch/xla/issues/7858 | open | [
"bug",
"documentation",
"xla:tpu"
] | 2024-08-15T11:21:01Z | 2025-05-02T23:15:34Z | 2 | steveepreston |
pytorch/xla | 7,857 | Why does the communication in my spmd training have control-predecessors | ## ❓ Questions and Help
In my formal training task, there are some control-predecessors in the communication operator, but the single test I constructed cannot reproduce this situation. I would like to know under what circumstances these control-predecessors can be generated.
```
all-gather-start.12 = (f32[256]{0}, ... | https://github.com/pytorch/xla/issues/7857 | closed | [
"question",
"distributed"
] | 2024-08-15T11:17:08Z | 2025-04-01T12:33:52Z | null | mars1248 |
huggingface/diffusers | 9,184 | What is the correct way to apply the dictionary with the control strengths (called “scales”) but with blocks? | ### Describe the bug
I have managed to apply the basic dictionary, as the documentation mentions
```
adapter_weight_scales = { "unet": { "down": 1, "mid": 0, "up": 0} }
pipe.set_adapters("Lora1", adapter_weight_scales)
```
and it already works for N number of LORAS that I want to load, for example
```
ada... | https://github.com/huggingface/diffusers/issues/9184 | closed | [
"bug"
] | 2024-08-15T06:05:42Z | 2024-08-17T00:54:28Z | null | Eduardishion |
pytorch/xla | 7,855 | How to sync TPUs when using a pod with more than 1 VM in SPMD | ## ❓ Questions and Help
Generally we feel that since in SPMD most of the work is under the hood, it's hard to understand what is required from us when using it in order to sync between TPUs on a pod with multiple VMs.
We would like to know the stages of syncing in that case, and how it is different from the regular... | https://github.com/pytorch/xla/issues/7855 | closed | [
"question",
"distributed"
] | 2024-08-14T18:51:04Z | 2025-04-01T12:35:29Z | null | dudulightricks |
pytorch/xla | 7,854 | Using mark_sharding vs. MpDeviceLoader with input_sharding=xs.ShardingSpec | ## ❓ Questions and Help
If we have a few tensors in a batch with different sizes and we use mark_sharding on each of them, do we lose something compared to input_sharding=xs.ShardingSpec in the MpDeviceLoader (which only works for a single size of tensor in the batch)? @JackCaoG | https://github.com/pytorch/xla/issues/7854 | closed | [
"question",
"distributed"
] | 2024-08-14T18:41:34Z | 2025-04-01T12:36:56Z | null | dudulightricks |
pytorch/xla | 7,850 | SPMD - how to use different dataloader on each VM of a TPU pod in SPMD | ## ❓ Questions and Help
While in SPMD mode, if we run the train command of a model on all the VMs together (single program, multiple machines), each VM has its own dataloader using CPU cores.
Then, when we use mark_sharding on the batch, it practically copies the batch of the first VM (rank 0) to all the TPUs and ignores ... | https://github.com/pytorch/xla/issues/7850 | closed | [
"question",
"distributed"
] | 2024-08-14T17:50:09Z | 2025-04-01T12:41:07Z | null | dudulightricks |
huggingface/diffusers | 9,180 | Pipeline has no attribute '_execution_device' | ### Describe the bug
Hello, I implemented my own custom pipeline referring StableDiffusionPipeline (RepDiffusionPipeline), but there are some issues
I called "accelerator.prepare" properly, and mapped the models to the device (with ".to(accelerator.device)")
But when I call pipeline and the '__call__' function is call... | https://github.com/huggingface/diffusers/issues/9180 | open | [
"bug",
"stale"
] | 2024-08-14T14:43:15Z | 2025-11-18T13:22:52Z | 33 | choidaedae |
pytorch/vision | 8,588 | size mismatch for rpn | ### 🐛 Describe the bug
I created a Mask R-CNN model using a set of parameters that I saved in a JSON file. Once the model was trained, I saved the weights using `torch.save(model.state_dict(), "MaskRCNN.pt")`. Later, I recreated the same model and loaded the saved weights `model.load_state_dict(torch.load("MaskRCNN... | https://github.com/pytorch/vision/issues/8588 | closed | [] | 2024-08-14T11:08:41Z | 2024-08-15T09:49:41Z | 4 | FiReTiTi |
pytorch/xla | 7,849 | Is it possible to free TPU memory without restarting in PyTorch/XLA? | ## 📚 Documentation
I have tried to move a TPU tensor to CPU or delete the tensor. However, the memory is not released.
https://colab.research.google.com/drive/1pTTDu_eJssUwjsrjBDiiyo6tlOEZTjMf?usp=sharing
<!-- A clear and concise description of what content is an issue. -->
| https://github.com/pytorch/xla/issues/7849 | closed | [] | 2024-08-14T10:48:37Z | 2024-08-26T01:25:00Z | 6 | fengyang0317 |
huggingface/diffusers | 9,174 | [Quantization] bring quantization to diffusers core | Now that we have a working PoC (#9165) of NF4 quantization through `bitsandbytes` and also [this](https://huggingface.co/blog/quanto-diffusers) through `optimum.quanto`, it's time to bring in quantization more formally in `diffusers` 🎸
In this issue, I want to devise a rough plan to attack the integration. We are g... | https://github.com/huggingface/diffusers/issues/9174 | closed | [
"quantization"
] | 2024-08-14T08:05:34Z | 2024-10-21T04:42:46Z | 15 | sayakpaul |
huggingface/diffusers | 9,172 | Why rebuild a VAE in the inference stage? | Thanks for your effort on diffusion models.
I want to know why we need to rebuild a VAE in the inference stage. I think it will introduce extra GPU cost.
https://github.com/huggingface/diffusers/blob/a85b34e7fdc0a5fceb11aa0fa6199bd9afaca396/examples/text_to_image/train_text_to_image_sdxl.py#L1217C16-L1223C24
| https://github.com/huggingface/diffusers/issues/9172 | open | [
"stale"
] | 2024-08-14T05:52:38Z | 2024-11-14T15:03:55Z | 2 | WilliammmZ |
huggingface/candle | 2,413 | How to load multiple safetensors with json format | For such a task:
https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main/transformer
how should safetensors be loaded?
| https://github.com/huggingface/candle/issues/2413 | open | [] | 2024-08-14T04:50:37Z | 2025-06-11T19:05:05Z | null | oovm |
pytorch/pytorch | 133,397 | Don't know how to explain but here's the error | ### 🐛 Describe the bug
File "C:\Users\USER\Downloads\pytorch\main.py", line 3, in <module>
import torch
File "C:\Users\USER\AppData\Local\Programs\Python\Python312\Lib\site-packages\torch\__init__.py", line 148, in <module>
raise err
OSError: [WinError 126] The specified module could not be found. Err... | https://github.com/pytorch/pytorch/issues/133397 | closed | [
"module: windows"
] | 2024-08-14T02:53:02Z | 2024-08-15T00:59:59Z | null | Nohj9984 |
huggingface/diffusers | 9,170 | Do SDXL and ControlNet require more than 36G of GPU memory? | ### Describe the bug
https://github.com/huggingface/diffusers/blob/15eb77bc4cf2ccb40781cb630b9a734b43cffcb8/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl.py
line73---line113
I run the demo with a 24G GPU and hit OOM every time.
So must I run SDXL with 48G?
@yiyixuxu @sayakpaul @DN6 tks
### Reprod... | https://github.com/huggingface/diffusers/issues/9170 | closed | [
"bug"
] | 2024-08-14T01:46:35Z | 2024-11-13T08:49:22Z | 3 | henbucuoshanghai |
huggingface/trl | 1,927 | How to use kto_pair loss in the latest version? | I can see that the kto_pair loss type is no longer available in the latest version of the DPO trainer. You suggest using KTOTrainer instead.
But kto_pair loss worked much better than kto_trainer on my dataset, so how do I continue to use kto_pair if I'm using the latest version of the trl library?
thanks a lot! | https://github.com/huggingface/trl/issues/1927 | closed | [
"🏋 DPO",
"🏋 KTO"
] | 2024-08-13T15:59:25Z | 2024-10-20T16:56:21Z | null | vincezengqiang |
pytorch/xla | 7,846 | Is pytorch xla spmd working as expected? | ## 🐛 Bug
I tried to run [test_train_spmd_linear_model.py](https://github.com/pytorch/xla/blob/master/test/spmd/test_train_spmd_linear_model.py) with `sharding='batch'`. The input data sharding is {devices=[8,1]0,1,2,3,4,5,6,7}, which is expected. However, after a linear layer, the fc1 output sharding becomes 'replicat... | https://github.com/pytorch/xla/issues/7846 | closed | [] | 2024-08-13T14:43:50Z | 2024-09-01T12:58:48Z | 3 | fengyang0317 |
huggingface/autotrain-advanced | 728 | [BUG] Deprecated positional argument(s) used in SFTTrainer, please use the SFTConfig to set these arguments instead. How to mitigate this? | ### Prerequisites
- [X] I have read the [documentation](https://hf.co/docs/autotrain).
- [X] I have checked other issues for similar problems.
### Backend
Local
### Interface Used
CLI
### CLI Command
```
!autotrain --config path-to.yml
```
```
task: llm-sft
base_model: teknium/OpenHermes-2.... | https://github.com/huggingface/autotrain-advanced/issues/728 | closed | [
"bug"
] | 2024-08-13T05:00:10Z | 2024-08-13T12:31:19Z | null | jackswl |
huggingface/diffusers | 9,164 | The dog example of train_dreambooth_lora_flux.py does not converge | ### Describe the bug
```
export MODEL_NAME="black-forest-labs/FLUX.1-dev"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="trained-flux-lora"
accelerate launch train_dreambooth_lora_flux.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=$INSTANCE_DIR \
--output_dir=$OUTPUT_DIR \
-... | https://github.com/huggingface/diffusers/issues/9164 | closed | [
"bug"
] | 2024-08-13T03:08:10Z | 2024-08-13T10:23:23Z | 7 | chongxian |
pytorch/xla | 7,837 | Make `tpu-info` more visible to the community | ## 📚 Documentation
We highlighted tpu-info in the [PyTorch/XLA 2.4 release](https://cloud.google.com/blog/products/ai-machine-learning/pytorch-xla-2-4-improves-pallas-and-adds-eager-mode?e=13802955). I understand we have a [CoLab demo page](https://colab.sandbox.google.com/drive/1aMYTONPE4f3BtZpRq1_jPcRcIiSKtoY9?us... | https://github.com/pytorch/xla/issues/7837 | closed | [
"usability"
] | 2024-08-12T19:17:30Z | 2024-08-17T06:39:58Z | 5 | miladm |
huggingface/text-embeddings-inference | 380 | How do I deploy to Vertex? | How do I deploy to Vertex? I think I saw a feature=google setting in the code which supports compatibility with Vertex. Please guide. | https://github.com/huggingface/text-embeddings-inference/issues/380 | closed | [] | 2024-08-12T17:15:30Z | 2024-10-17T10:19:02Z | null | pulkitmehtaworkmetacube |
pytorch/vision | 8,585 | Cant find nms function in code? | ### 🐛 Describe the bug
I am looking for a method in torch, but for the love of god I cannot find the function definition!
The reason I need to find it is that I need to get rid of the torch dependency and I want to try to convert it into numpy.
I am speaking about torchvision.ops.nms()
This method is locate... | https://github.com/pytorch/vision/issues/8585 | closed | [] | 2024-08-12T12:17:23Z | 2024-08-12T12:26:58Z | 1 | asusdisciple |
pytorch/xla | 7,832 | 80B model how to shard restore in spmd training | ## ❓ Questions and Help
In PyTorch we can use FSDP meta init to shard-restore my big model (e.g. with 80B parameters); in torch_xla I only find sharded saving, e.g. here: https://github.com/pytorch/xla/blob/master/torch_xla/experimental/distributed_checkpoint/manager.py#L257.
Is there a way to recover the original pytorch... | https://github.com/pytorch/xla/issues/7832 | closed | [
"question",
"distributed"
] | 2024-08-12T11:52:00Z | 2025-04-01T12:50:25Z | null | mars1248 |
pytorch/pytorch | 133,205 | How to use libtorch in a c++11 project? | ### 🐛 Describe the bug
c++14_warning.h:32:2: error: #error This file requires compiler and library support for the forthcoming ISO C++ 2014 standard. This support is currently experimental, and must be enabled with the -std=c++1y or -std=gnu++1y compiler options.
#error This file requires compiler and library support... | https://github.com/pytorch/pytorch/issues/133205 | closed | [
"module: docs",
"module: cpp",
"triaged"
] | 2024-08-12T08:45:32Z | 2024-09-24T02:03:36Z | null | zhb0920 |
huggingface/trl | 1,916 | How to Add PEFT to PPO Trainer or PPO Config | I am trying to realize RLHF through PPO.
May I ask how I can use PEFT in RLHF/PPO? I can see this parameter in DPOTrainer. However, I cannot see it in PPOTrainer.
| https://github.com/huggingface/trl/issues/1916 | closed | [
"✨ enhancement",
"🧒 good second issue",
"🏋 PPO"
] | 2024-08-12T01:02:07Z | 2024-11-18T10:54:10Z | null | ZhichaoWang970201 |
huggingface/trl | 1,915 | How to DPO LLaVA? | Thank you for the great work!
I run DPO on LLaVA using the raw `/trl/examples/scripts/dpo_visual.py` code with the command
`CUDA_VISIBLE_DEVICES=0 accelerate launch examples/scripts/dpo_visual.py --dataset_name HuggingFaceH4/rlaif-v_formatted --model_name_or_path llava-hf/llava-1.5-7b-hf --per_device_train_batch_... | https://github.com/huggingface/trl/issues/1915 | closed | [] | 2024-08-11T00:57:38Z | 2024-08-11T01:23:16Z | null | ooooohira |
huggingface/transformers.js | 887 | VSCode Interpolation | ### Question
I'm finding that VSCode is extremely slow when reading type definitions from the `@xenova/transformers` path. Is there anything I might be doing wrong? I've noticed that it uses JS comments to define the types instead of a type definition file, is the issue I am having a known issue with using that type o... | https://github.com/huggingface/transformers.js/issues/887 | closed | [
"question"
] | 2024-08-11T00:08:30Z | 2024-08-25T01:55:36Z | null | lukemovement |
huggingface/diffusers | 9,140 | Diffusers model not working as well as repo ckpt model | Hi,
When I try to run Stable Diffusion v1-5 or InstructPix2Pix through the diffusers pipeline with .from_pretrained(), it downloads the models from Hugging Face, and using the inference code given on Hugging Face the results are not good at all, in the sense that there is still noise in the gener... | https://github.com/huggingface/diffusers/issues/9140 | closed | [
"stale"
] | 2024-08-09T09:34:30Z | 2024-12-14T12:13:15Z | 6 | kunalkathare |
pytorch/TensorRT | 3,075 | ❓ [Question] failed to run the `examples/dynamo/vgg16_fp8_ptq.py` example | ## ❓ Question
I'm trying to run the `examples/dynamo/vgg16_fp8_ptq.py` example but got the following error:
```
Traceback (most recent call last):
File "/home/wh/generative_action/SynHSI/vgg_quat.py", line 232, in <module>
exp_program = torch.export.export(model, (input_tensor,))
File "/home/wh/miniconda3/en... | https://github.com/pytorch/TensorRT/issues/3075 | open | [
"question"
] | 2024-08-09T08:01:14Z | 2024-08-23T22:06:56Z | null | broken-dream |
pytorch/xla | 7,823 | [XLA:GPU compile Error] nvcc fatal : Unsupported gpu architecture 'compute_35' | detail:
NVIDIA-SMI 550.54.15 Driver Version: 550.54.15 CUDA Version: 12.4
Support for Kepler GPUs was removed from CUDA 12.x. How can I compile torch_xla for GPU with CUDA 12.x (the GPU guide uses CUDA 12.x)? Really confused, thanks for any reply.
:
File ... | https://github.com/pytorch/text/issues/2270 | open | [] | 2024-08-08T23:25:46Z | 2024-09-18T09:00:08Z | 1 | fizwit |
pytorch/vision | 8,570 | RandomPhotometricDistort has undocumented channel shuffle feature | ### 🐛 Describe the bug
The documentation for RandomPhotometricDistort neither exposes the channel shuffle behavior as a parameter or lists in the description that this is a possibility.
https://pytorch.org/vision/stable/generated/torchvision.transforms.v2.RandomPhotometricDistort.html#torchvision.transforms.v2.R... | https://github.com/pytorch/vision/issues/8570 | closed | [] | 2024-08-08T19:14:05Z | 2024-08-13T02:50:14Z | 1 | chadrockey |
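The channel-shuffle behavior the issue above describes permutes an image's color channels with some probability. A minimal stdlib sketch of the idea — illustrative only, not torchvision's actual implementation; the function name and the list-of-channel-planes representation are assumptions:

```python
import random

def maybe_shuffle_channels(channels, p=0.5, rng=None):
    """Randomly permute channel planes (e.g. [R, G, B]) with probability p."""
    rng = rng or random
    if rng.random() < p:
        order = list(range(len(channels)))
        rng.shuffle(order)
        return [channels[i] for i in order]
    return channels
```

With `p=0.0` the input is returned unchanged, which is one way such a behavior could be exposed (and documented) as an explicit parameter.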
huggingface/transformers.js | 885 | TimeSformer on the web | ### Question
Glad to see this repo! If I want to use TimeSformer on the web, any suggestion or guide for it? Where can I learn from this repo or it's a totally different things? Thanks in advance! | https://github.com/huggingface/transformers.js/issues/885 | open | [
"question"
] | 2024-08-08T17:59:13Z | 2024-08-11T09:02:47Z | null | tomhsiao1260 |
pytorch/functorch | 1,146 | Strange behaviour of autograd.functional.jacobian when vectorize=True and strategy=‘forward-mode’ | I calculate the Jacobian of a neural network with respect to its 14 input variables. The network has an output of 9015, meaning I have 126210 gradients. Because I have some complex calculations in my neural network I cannot use jacrev/jacfwd, see [ jacfwd and jacrev are fundamentally broken for complex inputs #94397 ](... | https://github.com/pytorch/functorch/issues/1146 | closed | [] | 2024-08-08T12:51:16Z | 2024-08-09T11:27:07Z | 0 | dezenn |
pytorch/TensorRT | 3,073 | ❓ Cannot figure out the following error: AttributeError: module 'torch_tensorrt' has no attribute 'ptq'. | ## ❓ Question
I am encountering an AttributeError when trying to use the ptq module from Torch-TensorRT on google colab.
I am attempting to run this line of code
calibrator = torch_tensorrt.ptq.DataLoaderCalibrator(...)
## Environment
- PyTorch Version (e.g., 1.0): 2.4.0+cu121
- CUDA Version: 12.2
- Pyth... | https://github.com/pytorch/TensorRT/issues/3073 | closed | [
"question"
] | 2024-08-08T11:37:08Z | 2024-08-09T06:04:57Z | null | ImaanIbrar |
huggingface/cookbook | 163 | Incorrect markdown table rendering in Colab in "How to use Inference Endpoints to Embed Documents" | There is an issue with the rendering of the Inference Endpoints table in Colab in [How to use Inference Endpoints to Embed Documents](https://huggingface.co/learn/cookbook/automatic_embedding_tei_inference_endpoints). Although the table correctly renders on HF cookbook webpage:
<img width="610" alt="image" src="http... | https://github.com/huggingface/cookbook/issues/163 | closed | [] | 2024-08-08T11:16:40Z | 2024-08-08T16:22:48Z | null | sergiopaniego |
huggingface/alignment-handbook | 192 | Constant training loss in the model adapter card | Hello,
I could fine-tune a model using a small dataset and I see that the validation loss decreases, while the training loss remains the same in the model card.
I don't think this is normal, even though the new task I try to teach the model is similar to what it already does, I think it should be able to learn fr... | https://github.com/huggingface/alignment-handbook/issues/192 | closed | [] | 2024-08-08T09:35:40Z | 2024-08-08T13:29:00Z | 1 | Michelet-Gaetan |
huggingface/optimum | 1,985 | Correct example to use TensorRT? | ### System Info
```shell
optimum: 1.20.0
os: ubuntu 20.04 with RTX 2080TI
python: 3.10.14
```
### Who can help?
@michaelbenayoun @JingyaHuang @echarlaix
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `exam... | https://github.com/huggingface/optimum/issues/1985 | open | [
"bug"
] | 2024-08-08T08:46:14Z | 2024-08-29T11:24:35Z | 2 | sherlcok314159 |
huggingface/diffusers | 9,127 | flux.1-dev device_map didn't work | I try to use device_map to use multiple gpu's, but it not worked, how can I use all my gpus?
| https://github.com/huggingface/diffusers/issues/9127 | closed | [] | 2024-08-08T08:30:33Z | 2024-11-26T02:11:03Z | 33 | hznnnnnn |
pytorch/tutorials | 2,994 | [Reinforcement Learning] - help on cartpole tutorial | hello im completely new to machine learning and just trying to learn. im getting this warning an none of the figure are showing up (libEGL warning: DRI2: failed to authenticate) does anyone know what i could be missing or what might be the cause? im running this in unraid on a VM with a graphics card passed thru with i... | https://github.com/pytorch/tutorials/issues/2994 | closed | [
"question"
] | 2024-08-08T03:33:14Z | 2024-08-09T03:47:52Z | null | Misticfury |
pytorch/vision | 8,569 | Allow ffmpeg-python backend for torchvision.io.write_video? | ### 🚀 The feature
Create another backend for torchvision.io.write_video which uses ffmpeg-python as a backend, but which otherwise has exactly the same interface/functionality.
### Motivation, pitch
torchvision.io.write_video currently calls PyAV, which in turn is a wrapper for ffmpeg. [PyAV has an issue](https://g... | https://github.com/pytorch/vision/issues/8569 | closed | [] | 2024-08-08T01:14:07Z | 2024-10-11T11:53:49Z | 1 | adaGrad1 |
huggingface/diffusers | 9,120 | [ar] Translating docs to Arabic (العربية) | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐.
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/diffusers/blob/m... | https://github.com/huggingface/diffusers/issues/9120 | closed | [] | 2024-08-07T21:04:54Z | 2024-10-29T08:14:24Z | 2 | AhmedAlmaghz |
huggingface/chat-ui | 1,394 | I need to reload to get the response | 
i am using LLama 3.1 70B to chat, but it is so slow to get response and i need to reload to get response , is it because the model is overload ? | https://github.com/huggingface/chat-ui/issues/1394 | closed | [
"support"
] | 2024-08-07T09:31:03Z | 2024-08-15T06:56:59Z | 2 | renaldy-therry |
huggingface/chat-ui | 1,393 | Generation Error with Ollama - Inconsistent Output Generation | Hi,
I'm experiencing issues while running GEMMA2 on Ollama. Specifically, I'm encountering the following problems:
Error on Message Generation:
Whenever a new chat is created, every message results in the error:
Error: Generation failed, in the back end
No output is generated,on the front end.
... | https://github.com/huggingface/chat-ui/issues/1393 | open | [
"support"
] | 2024-08-07T09:02:19Z | 2024-08-07T11:05:19Z | 1 | juanjuanignacio |
huggingface/chat-ui | 1,392 | Cannot send the message and get response in hugging chat | I cannot send message and get a response from llm, and i cannot click "activate" to change model in huggingchat (https://huggingface.co/chat/) | https://github.com/huggingface/chat-ui/issues/1392 | closed | [
"support",
"huggingchat"
] | 2024-08-07T08:37:01Z | 2024-08-07T09:06:59Z | 4 | renaldy-therry |
pytorch/executorch | 4,579 | how to realize the sliding window of kv cache? | hello,
now I want to realize the sliding window of kv cache, so dynamic allocation and reclamation of memory needs to be realized. could you please teach me how to realize the dynamic allocation and reclamation of memory in the transformer?
Thank you in advanced. | https://github.com/pytorch/executorch/issues/4579 | closed | [] | 2024-08-07T07:05:42Z | 2024-08-15T05:04:51Z | null | l2002924700 |
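A sliding-window KV cache keeps only the most recent N key/value entries and evicts the oldest on append. A minimal Python sketch of the bookkeeping — this only illustrates the eviction policy with deques and does not touch ExecuTorch's memory planning, which is what the question is really about:

```python
from collections import deque

class SlidingWindowKVCache:
    """Keep at most `window` key/value pairs; the oldest entries are evicted."""

    def __init__(self, window):
        self.keys = deque(maxlen=window)
        self.values = deque(maxlen=window)

    def append(self, k, v):
        # deque(maxlen=...) drops the oldest entry automatically when full,
        # modeling reclamation of the evicted position's slot
        self.keys.append(k)
        self.values.append(v)

    def contents(self):
        return list(self.keys), list(self.values)
```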
huggingface/text-embeddings-inference | 371 | how to support a SequenceClassification model | ### Feature request
I have a model can be run by transformers.AutoModelForSequenceClassification.from_pretrained, how can i serve it in TEI
### Motivation
to support more models
### Your contribution
YES | https://github.com/huggingface/text-embeddings-inference/issues/371 | closed | [] | 2024-08-06T10:45:00Z | 2024-10-17T10:24:09Z | null | homily707 |
huggingface/chat-ui | 1,387 | CopyToClipBoardBtn in ChatMessage.svelte has a bug? | https://github.com/huggingface/chat-ui/blob/6de97af071c69aa16e8f893adebb46f86bdeeaff/src/lib/components/chat/ChatMessage.svelte#L378-L384
When compared to other components, classNames is the only difference here.
When rendered, the icon appears faint in the browser.
Is there a reason for this, or is it a bug?
h... | https://github.com/huggingface/chat-ui/issues/1387 | closed | [
"bug",
"good first issue",
"front"
] | 2024-08-06T04:59:45Z | 2024-08-12T09:35:21Z | 5 | calycekr |
huggingface/diffusers | 9,092 | Fluxpipeline report model_index.json not found | ### Describe the bug
I use the Fluxpipeline and report no file model_index.json.
I read other issue and set the `revision="refs/pr/3"`,but it doesn't work, how can i do to solve this problem and how to use the T5xxl as text encoder? thanks for your help
### Reproduction
```
import torch
from diffusers impor... | https://github.com/huggingface/diffusers/issues/9092 | closed | [
"bug"
] | 2024-08-06T01:48:40Z | 2024-08-06T02:25:03Z | 3 | chongxian |
huggingface/trl | 1,900 | How to speed up PPOTrainer .generate()? | During PPO, I'm finding that `.generate()` is extremely slow. The following call takes ~3 and a half minutes for batch size of 64 with a 1.4B parameter policy LM:
```
ppo_trainer.generate(
input_token_ids_list,
pad_token_id=policy_model_tokenizer.eos_token_id,
retu... | https://github.com/huggingface/trl/issues/1900 | closed | [] | 2024-08-05T18:35:31Z | 2024-10-01T06:35:50Z | null | RylanSchaeffer |
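One common mitigation for slow generation is to amortize the per-call overhead by generating over mini-batches of prompts rather than one sample at a time. A generic chunking helper for that pattern — this is a plain-Python sketch, not a trl API:

```python
def chunked(items, size):
    """Yield successive fixed-size slices; the last chunk may be shorter."""
    for i in range(0, len(items), size):
        yield items[i:i + size]
```

Each chunk would then be passed to a single generate call so the GPU processes many prompts per forward pass instead of looping per sample.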
huggingface/chat-ui | 1,386 | System role problem running Gemma 2 on vLLM | Hello,
In running chat ui and trying some models, with phi3 and llama i had no problem but when I run gemma2 in vllm Im not able to make any good api request,
in env.local:
{
"name": "google/gemma-2-2b-it",
"id": "google/gemma-2-2b-it",
"chatPromptTemplate": "{{#each messages}}{{#ifUser}}<start_of_turn>us... | https://github.com/huggingface/chat-ui/issues/1386 | closed | [
"support"
] | 2024-08-05T13:22:10Z | 2024-11-07T21:39:47Z | 5 | juanjuanignacio |
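Gemma-2's chat template accepts only user/model turns, so requests carrying a system role are typically rejected. A common workaround is folding the system message into the first user turn before templating. A stdlib sketch — the message dicts follow the usual role/content convention, and the function name is illustrative:

```python
def fold_system_into_user(messages):
    """Merge a leading system message into the first user turn."""
    if not messages or messages[0].get("role") != "system":
        return list(messages)
    system = messages[0]["content"]
    rest = [dict(m) for m in messages[1:]]
    for m in rest:
        if m["role"] == "user":
            m["content"] = f"{system}\n\n{m['content']}"
            break
    return rest
```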
pytorch/TensorRT | 3,060 | ❓ [Question] function `torch._ops.aten.aten::_to_copy` not currently supported with dynamic input shape | ## ❓ Question
I'm trying to compile a model with dynamic input shape but told that the `function torch._ops.aten.aten::_to_copy` is not currently supported:
```Traceback (most recent call last):
File "/home/wh/generative_action/SynHSI/test_module.py", line 325, in <module>
model = torch_tensorrt.compile(mod... | https://github.com/pytorch/TensorRT/issues/3060 | open | [
"question"
] | 2024-08-05T12:20:32Z | 2024-12-12T18:33:18Z | null | broken-dream |
huggingface/optimum | 1,981 | [GPTQQuantizer] How to use multi-GPU for GPTQQuantizer? | ### System Info
```shell
hello:
I encountered an out-of-memory error while attempting to quantize a model using GPTQQuantizer. The error seems to be related to the large size of the model weights. Below is the quantization code I used:
from optimum.gptq import GPTQQuantizer
quantizer = GPTQQuantizer(
bi... | https://github.com/huggingface/optimum/issues/1981 | closed | [
"bug"
] | 2024-08-05T07:58:11Z | 2024-08-08T02:19:18Z | null | RunTian1 |
huggingface/datasets | 7,087 | Unable to create dataset card for Lushootseed language | ### Feature request
While I was creating the dataset which contained all documents from the Lushootseed Wikipedia, the dataset card asked me to enter which language the dataset was in. Since Lushootseed is a critically endangered language, it was not available as one of the options. Is it possible to allow entering la... | https://github.com/huggingface/datasets/issues/7087 | closed | [
"enhancement"
] | 2024-08-04T14:27:04Z | 2024-08-06T06:59:23Z | 2 | vaishnavsudarshan |
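Even when a language is missing from the card editor's dropdown, the dataset card's YAML metadata can usually carry an ISO 639-3 code directly. A sketch of the front matter — assuming `lut`, the ISO 639-3 code for Lushootseed:

```yaml
---
language:
  - lut   # ISO 639-3 code for Lushootseed
---
```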
huggingface/diffusers | 9,076 | Add a better version of 'callback_on_step_end' for FluxPipeline | **Is your feature request related to a problem? Please describe.**
There is a huge delay before starting the inference and once the 4th step is complete and there is no callback for that and it feels like it is stuck, just want a more responsive version.
```
prompt = "A cat holding a sign that says hello world"
ima... | https://github.com/huggingface/diffusers/issues/9076 | closed | [
"stale"
] | 2024-08-04T10:34:04Z | 2024-11-23T00:24:14Z | 3 | nayan-dhabarde |
pytorch/data | 1,309 | what's the exact plan for torchdata now? | hi, as a user of torchdata, i'm very happy to see the resurrection of the project.
i have a question about the development plan. from the README, i see:
> torchdata repo to be an iterative enhancement of torch.utils.data.DataLoader
this is somewhat surprising. although the current Datapipes seem to have variou... | https://github.com/meta-pytorch/data/issues/1309 | closed | [] | 2024-08-04T00:25:26Z | 2024-08-04T00:27:17Z | 1 | keunwoochoi |
pytorch/xla | 7,805 | Kaggle Notebooks: TPU detected but wont use | ## ❓ Questions and Help
Hi All,
I Have this code
```
import optuna
from torch.optim.lr_scheduler import ReduceLROnPlateau
# Assuming dataset is already defined
train_size = int(0.8 * len(dataset))
val_size = len(dataset) - train_size
train_dataset, val_dataset = random_split(dataset, [train_size, val_size]... | https://github.com/pytorch/xla/issues/7805 | closed | [
"question",
"xla:tpu"
] | 2024-08-03T16:32:58Z | 2025-04-01T12:55:08Z | null | MichaelSchroter |
huggingface/diffusers | 9,069 | TypeError: expected np.ndarray (got numpy.ndarray) | ### Describe the bug
```
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.to("cuda")
prompt = "A cat holding a sign that says hello world"
# Depending on the variant being used, the pipeline call will slightly ... | https://github.com/huggingface/diffusers/issues/9069 | closed | [
"bug"
] | 2024-08-03T12:45:03Z | 2024-10-27T06:43:32Z | 11 | xiangyumou |
pytorch/torchchat | 1,001 | [Raspbian] streamlit GUI interface does not work / no documentation how to install | ### 🐛 Describe the bug
from #985:
> 2. If you're interested in debugging the browser, feel free to spin up another issue with the error message from this
> > streamlit run torchchat.py -- browser llama3
Thanks, I will. I suspect it's pretty straightforward - there's no streamlit installed on my system. I ... | https://github.com/pytorch/torchchat/issues/1001 | closed | [
"bug",
"Browser"
] | 2024-08-03T03:24:10Z | 2024-08-06T00:32:41Z | null | sunshinesfbay |
pytorch/pytorch | 132,559 | How to fix tensor.numpy() not supported for torch.export with strict=False | ### 🐛 Describe the bug
This is trying to do a BE task to unblock https://github.com/pytorch/pytorch/pull/130977. The problem is very similar to https://github.com/pytorch/pytorch/pull/120261, though that one uses torch.export with strict=True.
# repro:
```
import numpy as np
import torch
class MyNumpyModel(t... | https://github.com/pytorch/pytorch/issues/132559 | open | [
"module: numpy",
"tensor subclass",
"module: functionalization",
"export-triage-review",
"oncall: export"
] | 2024-08-02T23:04:39Z | 2024-08-06T18:42:05Z | null | henrylhtsang |
pytorch/xla | 7,803 | [question] Seeking information on low-level TPU interaction and libtpu.so API | I'm looking to build an automatic differentiation library for TPUs without using high-level front-ends like TensorFlow/JAX/PyTorch-XLA, but I'm finding information about lower-level TPU usage is practically non-existent.
Specifically, I'm interested in:
1. How to interact with TPUs at a lower level than what's typi... | https://github.com/pytorch/xla/issues/7803 | closed | [
"question",
"xla:tpu"
] | 2024-08-02T10:16:01Z | 2025-04-01T12:56:19Z | null | notlober |
huggingface/evaluate | 611 | How to customize my own evaluator and metrics? | I'm facing a task on VQA, where I need to compute [VQA accuracy](https://visualqa.org/evaluation.html) as follows:
```math
\text{Acc}(ans) = \min{ \left\{ \frac{\text{\# humans that said } ans }{3}, 1 \right\} }
```
I have following questions:
1. Do I need to customize my o... | https://github.com/huggingface/evaluate/issues/611 | closed | [] | 2024-08-02T08:37:47Z | 2024-08-15T02:26:30Z | null | Kamichanw |
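The VQA accuracy formula quoted above is straightforward to implement directly, independent of any `evaluate` plumbing. A minimal sketch:

```python
def vqa_accuracy(answer, human_answers):
    """Acc(ans) = min(#humans that said ans / 3, 1)."""
    return min(human_answers.count(answer) / 3, 1.0)
```

An answer matching at least three of the human annotations scores full credit; fewer matches earn proportional credit.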
huggingface/diffusers | 9,055 | ImportError: cannot import name 'StableDiffusionLoraLoaderMixin' from 'diffusers.loaders' | ### Describe the bug
I get this error in diffusers versions 25,26,27,28,29, how can I solve it?
### Reproduction
import ast
import gc
import inspect
import math
import warnings
from collections.abc import Iterable
from typing import Any, Callable, Dict, List, Optional, Union
import torch
import torch.nn.... | https://github.com/huggingface/diffusers/issues/9055 | closed | [
"bug"
] | 2024-08-02T07:58:16Z | 2024-08-02T09:32:12Z | 2 | MehmetcanTozlu |
huggingface/optimum | 1,980 | Issue converting moss-moon-003-sft-int4 model to ONNX format | ### System Info
```shell
I've been working with the owlv2 model and have encountered an issue while attempting to convert it into ONNX format using the provided command:
optimum-cli export onnx --task text-generation -m"/HDD/cz/tools/moss/" --trust-remote-code "HDD/cz/moss_onnx/"
Unfortunately, I'm facing the follow... | https://github.com/huggingface/optimum/issues/1980 | open | [
"bug",
"onnx"
] | 2024-08-02T01:18:46Z | 2024-10-08T15:51:12Z | 0 | ZhiChengWHU |
pytorch/executorch | 4,510 | How to link custom ops? | Hi!
I'm trying to integrate some of quantized MatMul C++ kernels into Executorch and I'm having a bad time: the documentation is very vague about what exactly I need to include/link for ATen to pick up my ops.
I would greatly appreciate any help in trying to make it work.
### Overview:
Source code for the d... | https://github.com/pytorch/executorch/issues/4510 | closed | [] | 2024-08-01T21:16:01Z | 2024-08-21T21:09:03Z | null | BlackSamorez |
huggingface/transformers | 32,376 | AutoModel how to modify config? | ```
config = AutoConfig.from_pretrained(
**self.params, trust_remote_code=True
)
config.vision_config.use_flash_attn = False
print(config.vision_config)
self.model = AutoModel.from_pretrained(
**self.params, t... | https://github.com/huggingface/transformers/issues/32376 | closed | [] | 2024-08-01T12:40:44Z | 2024-08-02T02:30:22Z | null | lucasjinreal |
huggingface/diffusers | 9,039 | how to load_lora_weights in FlaxStableDiffusionPipeline | ### Describe the bug
how to load lora in FlaxStableDiffusionPipeline, there are no load_lora_weights in FlaxStableDiffusionPipeline
### Reproduction
N/A
### Logs
_No response_
### System Info
kaggle tpu vm
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/9039 | closed | [
"bug",
"stale"
] | 2024-08-01T11:23:52Z | 2024-10-15T03:23:54Z | null | ghost |