repo stringclasses 147
values | number int64 1 172k | title stringlengths 2 476 | body stringlengths 0 5k | url stringlengths 39 70 | state stringclasses 2
values | labels listlengths 0 9 | created_at timestamp[ns, tz=UTC]date 2017-01-18 18:50:08 2026-01-06 07:33:18 | updated_at timestamp[ns, tz=UTC]date 2017-01-18 19:20:07 2026-01-06 08:03:39 | comments int64 0 58 | user stringlengths 2 28 |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/accelerate | 2,164 | how to get same timestamp in different subprocesses while using accelerate launch | I would like to get a unique timestamp to name my result folder like below
```
def get_time_string() -> str:
x = datetime.datetime.now()
return f"{(x.year - 2000):02d}{x.month:02d}{x.day:02d}-{x.hour:02d}{x.minute:02d}{x.second:02d}"
```
However, it sometimes will get a different timestamp in different... | https://github.com/huggingface/accelerate/issues/2164 | closed | [] | 2023-11-17T06:36:00Z | 2023-11-29T07:30:04Z | null | shliu0 |
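The usual fix for this accelerate issue is to generate the stamp once on the main process and broadcast it, rather than calling `datetime.now()` independently on each rank. A minimal sketch — the accelerate calls in the comments are assumptions about the typical API, and only the formatter itself is exercised here:

```python
import datetime

def format_run_id(x: datetime.datetime) -> str:
    # Same YYMMDD-HHMMSS format as the snippet above.
    return f"{(x.year - 2000):02d}{x.month:02d}{x.day:02d}-{x.hour:02d}{x.minute:02d}{x.second:02d}"

# Hypothetical usage with accelerate (assuming broadcast_object_list):
#   from accelerate.utils import broadcast_object_list
#   stamp = [format_run_id(datetime.datetime.now())] if accelerator.is_main_process else [None]
#   broadcast_object_list(stamp)   # every rank now sees the main process's value
#   run_dir = f"results/{stamp[0]}"
```

Because only rank 0 ever formats the time, skew between subprocess clocks can no longer produce mismatched folder names.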
huggingface/open_asr_leaderboard | 14 | How to run calc_rtf.py? Cannot reproduce rtf results. | There is no guide on how to execute calc_rtf.py. For example, this one https://github.com/huggingface/open_asr_leaderboard/blob/main/transformers/calc_rtf.py references 4469669.mp3. But there is no such file in the repo from what I see.
So the results are not reproducible.
Same for https://github.com/huggingface/... | https://github.com/huggingface/open_asr_leaderboard/issues/14 | open | [] | 2023-11-16T21:14:31Z | 2023-11-16T21:14:31Z | null | galv |
huggingface/transformers.js | 397 | [Question] Tokenizing a base64 string is very slow? | Hi! I happened to be encoding some files using transformers.js and one of the files happened to have some base64 in it. What I noticed is that tokenizing base64 takes an enormously long time relative to the number of tokens produced. Tokenizing a string of English text to the same number of tokens is far quicker.
For example:
... | https://github.com/huggingface/transformers.js/issues/397 | closed | [
"question"
] | 2023-11-16T20:27:51Z | 2023-11-17T19:48:57Z | null | samlhuillier |
huggingface/transformers.js | 396 | [Question] How to use transformer.js in langchain | Hi all, I'm writing a custom LLM to use transformer.js with langchain. Does a structure like this make sense? Any advice for optimizing it or best practices to apply?
Any suggestions or feedback would be greatly appreciated!
```
import { pipeline } from "@xenova/transformers";
import { LLM } from "langcha... | https://github.com/huggingface/transformers.js/issues/396 | open | [
"question"
] | 2023-11-16T17:27:52Z | 2023-12-21T16:27:28Z | null | mrddter |
huggingface/autotrain-advanced | 349 | How to reload the checkpoints for LLM finetuning? | May I ask how to resume from the latest checkpoint using `autotrain llm` if it crashed? I only found one from the `dreambooth` trainers, but I cannot find `resume_from_checkpoint` anywhere else.
I was wondering if it has currently not fully supported this feature yet or I was missing something? It would be supe... | https://github.com/huggingface/autotrain-advanced/issues/349 | closed | [
"stale"
] | 2023-11-16T11:51:25Z | 2024-02-02T08:58:47Z | null | xihajun |
huggingface/trl | 1,004 | Guidance on how to fix the scheduler and ConstantLengthDataset | Hello,
I want to fix the issue related to the `ConstantLengthDataset` not knowing the dataset's length in advance.
Besides having a broken progressbar and a wrong epoch count, the only problem I see is related to the scheduler, as most of us are training using cosine with warmup; if we want a complete cycle, the ... | https://github.com/huggingface/trl/issues/1004 | closed | [] | 2023-11-16T10:58:30Z | 2024-01-05T15:05:18Z | null | tcapelle |
huggingface/diffusers | 5,816 | low attention to prompt in SDXL | Hi,
One of the differences between DALL·E 3 and SDXL is that SDXL pays less attention to the prompt.
Is there a way to solve this problem? For example, could changing the text encoder to another one help?
Thanks
| https://github.com/huggingface/diffusers/issues/5816 | closed | [
"question",
"stale"
] | 2023-11-16T07:24:15Z | 2024-01-09T15:06:55Z | null | saeedkhanehgir |
huggingface/transformers | 27,526 | How to pre-populate the transformers cache and build it into a docker image? | ### System Info
Linux ubuntu 22.04
Docker 24.05
I am not sure if this is the right place for this issue. Apologies if it isn't; please direct me to the right place.
I have been using transformer in docker images that are deployed at runpod/replicate. The containers of the images could go cold and be relaunch... | https://github.com/huggingface/transformers/issues/27526 | closed | [] | 2023-11-16T02:53:54Z | 2023-12-24T08:03:44Z | null | lanyusan |
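One way to avoid the cold-start download described above is to populate the cache at image-build time so the weights ship inside a layer. A sketch of such a Dockerfile — the base image, model id (`bert-base-uncased` here), and cache path are placeholders, not the reporter's actual setup:

```dockerfile
FROM python:3.11-slim
RUN pip install --no-cache-dir transformers torch
# Point the Hugging Face cache at a fixed path and warm it during the build,
# so containers that go cold don't re-download weights at relaunch.
ENV HF_HOME=/opt/hf-cache
RUN python -c "from transformers import AutoModel, AutoTokenizer; \
AutoModel.from_pretrained('bert-base-uncased'); \
AutoTokenizer.from_pretrained('bert-base-uncased')"
```

At runtime, `from_pretrained` with the same `HF_HOME` finds the cached files and skips the network entirely.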
pytorch/benchmark | 2,040 | How to run test_bench.py with ROCM? | Hi @xuzhao9,
I don't know how to create a dockerfile for AMD ROCM, is there any example?
Best Regards
| https://github.com/pytorch/benchmark/issues/2040 | closed | [
"module: rocm",
"ciflow/rocm"
] | 2023-11-15T14:16:59Z | 2024-03-18T22:00:08Z | null | jinsong-mao |
pytorch/TensorRT | 2,471 | ❓ [Question] How to compile a model when the input is a list of tensors | ## ❓ Question
I am trying to follow the tutorial [here](https://pytorch.org/TensorRT/tutorials/serving_torch_tensorrt_with_triton.html) and am stuck at compiling the model with tensor-rt. The model i am using takes a list of tensors as inputs and hence i could not get the following compile code to work as i cannot g... | https://github.com/pytorch/TensorRT/issues/2471 | closed | [
"question"
] | 2023-11-15T09:50:36Z | 2025-11-24T17:44:36Z | null | HeChengHui |
pytorch/vision | 8,118 | missing labels in FER2013 test data | ### ๐ Describe the bug
The file **test.csv** has no label column, so the labels in the test split all have value None:
```
from torchvision.datasets import FER2013
dat = FER2013(root='./', split='test')
print(dat[0][1])
```
Adding labels to the file raises a RuntimeError, presumably because of a resulting diffe... | https://github.com/pytorch/vision/issues/8118 | closed | [
"enhancement",
"help wanted",
"module: datasets"
] | 2023-11-15T09:01:24Z | 2024-06-04T10:21:51Z | 8 | dtafler |
huggingface/optimum | 1,538 | Optimum supports AMDGPU? | ### Feature request
Onnxruntime supports AMD ROCm.
How can this be compiled with Optimum?
### Motivation
Our company is currently testing AMD GPUs and has learned that Optimum can accelerate inference on CUDA. We are not sure whether it will support ROCm in the future.
### Your contribution
none | https://github.com/huggingface/optimum/issues/1538 | closed | [] | 2023-11-15T04:15:21Z | 2024-01-09T16:10:39Z | 1 | taikai-zz |
huggingface/tokenizers | 1,391 | How to split special tokens in encode? | I have converted a slow tokenizer into a PreTrainedTokenizerFast and got a tokenizer.json file. But I found that this tokenizer did not split special tokens. Here is my add_special_tokens call for tokenizer.json:
` tokenizer.add_special_tokens(
[
AddedToken("[gMASK]", normalized=True, single_word=... | https://github.com/huggingface/tokenizers/issues/1391 | closed | [] | 2023-11-15T03:41:22Z | 2024-01-04T06:26:38Z | null | leizhao1234 |
pytorch/TensorRT | 2,468 | ❓ [Question] New release of torch-tensorRT with PyTorch 2.1 | ## ❓ Question
New release of torch-tensorrt with PyTorch 2.1
## What you have already tried
Is there going to be a new release? or is this supported now through torch.compile only?
| https://github.com/pytorch/TensorRT/issues/2468 | closed | [
"question"
] | 2023-11-14T23:42:50Z | 2025-01-21T17:21:34Z | null | agunapal |
pytorch/TensorRT | 2,465 | ERROR: INVALID_ARGUMENT: getPluginCreator could not find plugin Mod version 1 | I want to use TensorRT to accelerate a VisionEncoderDecoderModel. I used the following code to convert it to ONNX, and it was successful.
```
from transformers import VisionEncoderDecoderModel
def model_converter():
model = VisionEncoderDecoderModel.from_pretrained("./examples/data")
model.to(device)
mode... | https://github.com/pytorch/TensorRT/issues/2465 | closed | [
"question"
] | 2023-11-14T09:37:12Z | 2023-11-15T01:34:35Z | null | lin-lcx |
pytorch/executorch | 1,203 | How to load original images for model inference | Hi, I am investigating `examples/portable/executor_runner/executor_runner.cpp`.
And on the [PrepareInputTensors](https://github.com/pytorch/executorch/blob/47900c96388453c83d9a6706151c0c2157fbfabd/examples/portable/executor_runner/executor_runner.cpp#L154), [method of PrepareInputTensor](https://github.com/pytorc... | https://github.com/pytorch/executorch/issues/1203 | closed | [
"need-user-input",
"triaged"
] | 2023-11-14T05:11:42Z | 2024-01-15T07:12:37Z | null | EarthMu |
huggingface/diffusers | 5,786 | How to load a precomputed dataset in the cache folder on a different machine? | **Is your feature request related to a problem? Please describe.**
Some Slurm clusters have a limit on time allocation, so I'd like to precompute the dataset on my local machine and then move it to a location on the cluster to reuse it directly.
**Describe the solution you'd like**
I saw load dataset automatica... | https://github.com/huggingface/diffusers/issues/5786 | closed | [
"question",
"stale"
] | 2023-11-14T02:26:00Z | 2024-01-09T15:07:14Z | null | linnanwang |
huggingface/alignment-handbook | 22 | How to perform full-parameter finetuning without A100 GPUs | Hi, thank you for your great work! I'd like to reproduce full-parameter fine-tuning with DPO training. However, I only have 10 Nvidia A40 GPUs (46 GB memory each).
I tried the command
`CUDA_VISIBLE_DEVICES=2,3,4,5,6,7,8,9 ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/deeps... | https://github.com/huggingface/alignment-handbook/issues/22 | open | [] | 2023-11-14T01:33:41Z | 2024-02-14T13:47:16Z | null | ChenDRAG |
huggingface/controlnet_aux | 83 | How to get keypoints output .json file like original OpenPose ? | https://github.com/huggingface/controlnet_aux/issues/83 | open | [] | 2023-11-13T21:55:35Z | 2023-11-17T21:04:49Z | null | mayank64ce | |
huggingface/chat-ui | 550 | Can this ui be run on a colab? | I am wondering if this ui can be used inside a colab. | https://github.com/huggingface/chat-ui/issues/550 | closed | [
"question"
] | 2023-11-13T16:58:35Z | 2023-11-15T16:17:10Z | null | amida47 |
huggingface/text-generation-inference | 1,258 | How to deal with a bias=True model | ### Feature request
How to deploy a model with bias=True. Example: vinai/PhoGPT-7B5-Instruct
### Motivation
.
### Your contribution
. | https://github.com/huggingface/text-generation-inference/issues/1258 | closed | [
"Stale"
] | 2023-11-13T09:20:08Z | 2024-01-20T01:46:38Z | null | anhnh2002 |
huggingface/trl | 985 | How to set the epoch number in SFTTrainer? | Here is my example code:
```
from datasets import load_dataset
from trl import SFTTrainer

dataset = load_dataset("imdb", split="train")

trainer = SFTTrainer(
    "sshleifer/tiny-gpt2",
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=512,
)
trainer.train()
``` | https://github.com/huggingface/trl/issues/985 | closed | [] | 2023-11-12T20:02:31Z | 2023-11-14T18:29:53Z | null | KlausikPL |
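In trl, the epoch count comes from the `TrainingArguments` object passed to `SFTTrainer` via its `args` parameter, e.g. `args=TrainingArguments(num_train_epochs=3, ...)`. When a trainer must instead be driven by `max_steps` (for example with a packed dataset of known size), the conversion is mechanical; a stdlib-only sketch with illustrative names:

```python
import math

def epochs_to_max_steps(num_examples: int, per_device_batch: int,
                        grad_accum: int, num_epochs: int) -> int:
    # One optimizer step consumes per_device_batch * grad_accum examples
    # (single device assumed); the result is what you'd pass as max_steps.
    steps_per_epoch = math.ceil(num_examples / (per_device_batch * grad_accum))
    return steps_per_epoch * num_epochs
```

For instance, 1000 examples at batch 8 with gradient accumulation 4 gives 32 optimizer steps per epoch.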
huggingface/diffusers | 5,774 | How to fine-tune Stable Diffusion on a custom dataset {caption, image}? | I need to fine-tune SD on a custom {caption, image} dataset with a custom size. Could you please give me a tutorial for this task? | https://github.com/huggingface/diffusers/issues/5774 | closed | [
"stale"
] | 2023-11-12T14:52:23Z | 2024-01-09T15:07:21Z | null | npk7264 |
huggingface/diffusers | 5,772 | Is webdataset faster than the default huggingface datasets? | ### Describe the bug
Hi, I see there is a large scale training example https://github.com/huggingface/diffusers/blob/controlnet_webdatasets/examples/controlnet/train_controlnet_webdatasets.py using webdatasets, which suggests that webdatasets may have better data loading performance than huggingface datasets that is o... | https://github.com/huggingface/diffusers/issues/5772 | closed | [
"question",
"stale"
] | 2023-11-12T08:40:22Z | 2024-01-09T15:07:23Z | null | Luciennnnnnn |
huggingface/chat-ui | 549 | How can I use this offline with local models? | I really like the web_search feature, can I somehow use it with local models? I tried but I dont see any bat files to launch it. | https://github.com/huggingface/chat-ui/issues/549 | closed | [
"support"
] | 2023-11-11T23:59:09Z | 2023-11-20T21:38:27Z | 9 | iChristGit |
huggingface/diffusers | 5,766 | Image+Image+Text to Image | Maybe a dumb question, but I can't seem to find a good way to do multi-image-to-image modeling. I looked into Multi-ControlNet, but I can't tell how to use it. I'm trying to train a model that takes in 2 images and a prompt:
1. a template base image (e.g. a photo of a room in someone's house with a painting on the... | https://github.com/huggingface/diffusers/issues/5766 | closed | [
"question",
"stale"
] | 2023-11-11T20:15:27Z | 2024-01-09T15:07:25Z | null | tval2 |
huggingface/optimum | 1,531 | Pytorch + TensorRT support | ### Feature request
Is it possible to start supporting Pytorch and TensorRT inference optimizations? There are a lot of use cases where it could be useful, and optimum seems to already have a lot of good tooling to enable this.
### Motivation
Using Pytorch or TensorRT in production is painful today, and requires a l... | https://github.com/huggingface/optimum/issues/1531 | closed | [
"feature-request",
"Stale"
] | 2023-11-11T17:27:47Z | 2025-02-27T02:04:37Z | 2 | youssefadr |
huggingface/optimum | 1,530 | AnimateDiff support? | ### Feature request
Hi!
Can you please support AnimateDiff for ONNX in the future? It would be great for both DirectML GPU and CPU users.
Kind regards
### Motivation
Not a bug, just a feature that I would really like to see for us DirectML and CPU ONNX users.
### Your contribution
I would, but I don't know an... | https://github.com/huggingface/optimum/issues/1530 | closed | [
"feature-request",
"Stale"
] | 2023-11-11T14:21:25Z | 2025-03-01T02:08:38Z | 1 | Amin456789 |
huggingface/autotrain-advanced | 338 | How to | I successfully trained the sharded Mistral 7B model on Google Colab using autotrain.
Now, how can I do inference? I am unable to merge the adapter with the base model. Can someone please share the inference code with me? | https://github.com/huggingface/autotrain-advanced/issues/338 | closed | [
"stale"
] | 2023-11-11T12:58:24Z | 2024-05-06T13:35:52Z | null | eviIgenius |
huggingface/diffusers | 5,761 | The cost of the consistency decoder | ### Describe the bug
I replaced the original VAE decoder of a Stable Diffusion model with the Consistency Decoder, and then CUDA out of memory occurs. My question is: how large is the Consistency Decoder compared to the original VAE decoder?
- `diffusers` version: 0.23.0
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35... | https://github.com/huggingface/diffusers/issues/5761 | closed | [
"question",
"stale"
] | 2023-11-11T03:54:20Z | 2024-01-09T15:07:30Z | null | Luciennnnnnn |
pytorch/serve | 2,785 | How to batch process in the intermediate node in a TorchServe workflow | Hi, I need some help with the TorchServe workflow. Currently, I use TorchServe to orchestrate my server model and logic to work together, which could be represented in the graph below.
```mermaid
stateDiagram-v2
[*] --> PreProcess
PreProcess --> Model_A
Model_A --> IntermediaProcess
PreProce... | https://github.com/pytorch/serve/issues/2785 | closed | [] | 2023-11-11T02:47:56Z | 2023-11-27T08:18:12Z | null | RTae |
huggingface/candle | 1,319 | Question: How to edit specific indices of a tensor? | Hello everybody,
While developing beam search for candle-sampling, I have run into a small issue where it appears there is no way to edit specific indices of a tensor after creation. For example, in Python the following works for lists (and very similar for pytorch tensors):
```python
values = [[1,2,3],[4,5,6]]
... | https://github.com/huggingface/candle/issues/1319 | closed | [] | 2023-11-11T01:10:42Z | 2023-11-26T15:53:19Z | null | EricLBuehler |
huggingface/datasets | 6,400 | Safely load datasets by disabling execution of dataset loading script | ### Feature request
Is there a way to disable execution of dataset loading script using `load_dataset`? This is a security vulnerability that could lead to arbitrary code execution.
Any suggested workarounds are welcome as well.
### Motivation
This is a security vulnerability that could lead to arbitrary code e... | https://github.com/huggingface/datasets/issues/6400 | closed | [
"enhancement"
] | 2023-11-10T23:48:29Z | 2024-06-13T15:56:13Z | 4 | irenedea |
huggingface/diffusers | 5,758 | how to run huggingface model in replicate | ### Describe the bug
I am trying to run the code from https://medium.com/ai-artistry/streamlining-ai-agent-development-with-autogen-and-llava-b84fb0d25262 by adding https://huggingface.co/LLaVA-VL/llava_plus_v0_7b instead of the Replicate code.
My question is: what are the challenges in running the huggingface model using Replicate?
somet... | https://github.com/huggingface/diffusers/issues/5758 | closed | [
"bug"
] | 2023-11-10T20:31:04Z | 2023-11-11T03:33:51Z | null | andysingal |
pytorch/tutorials | 2,670 | 💡 [REQUEST] - Tutorial of USB for Semi-Supervised Learning | ### Describe the improvement or the new tutorial
This tutorial helps people to get a basic usage understanding of the Semi-Supervised Learning codebase [USB](https://github.com/microsoft/Semi-supervised-learning) - benchmark. We will show how to use the API provided in USB to train Semi-Supervised Algorithms, e.g... | https://github.com/pytorch/tutorials/issues/2670 | closed | [] | 2023-11-10T16:02:32Z | 2023-12-07T15:57:32Z | 0 | Hhhhhhao |
huggingface/diffusers | 5,756 | How do we generate an LCM LoRA of an existing model? | I generated a DreamBooth model from SDXL base 1.0.
To get the speed boost of LCM, I need to generate an LCM LoRA from this model.
How do we do it? I don't see documentation. | https://github.com/huggingface/diffusers/issues/5756 | closed | [
"stale"
] | 2023-11-10T15:44:52Z | 2023-12-27T13:28:38Z | null | FurkanGozukara |
pytorch/tutorials | 2,669 | 💡 [REQUEST] - A Tutorial on Whole Slide Image Classification using PyTorch and TIAToolbox | ### Describe the improvement or the new tutorial
Whole Slide Images are the digital data format from which pathologists and computational pathology researchers investigate cancer growth. Due to their enormous image resolutions and file sizes (on the order of several gigabytes), conventional image processing me...
"module: vision",
"docathon-h2-2023"
] | 2023-11-10T14:32:47Z | 2023-12-19T06:57:38Z | 1 | Abdol |
huggingface/chat-ui | 548 | MaxListenersExceededWarning: Possible EventEmitter memory leak detected. | Running dev, there are no errors until I try to write into the chat interface on the website, locally hosted in WSL2 (Win11).
It worked before I updated to version v0.6.0.
error message in web ui:

Error message in ter... | https://github.com/huggingface/chat-ui/issues/548 | closed | [
"support"
] | 2023-11-10T13:56:03Z | 2023-11-16T20:02:07Z | 7 | patchie |
huggingface/sentence-transformers | 2,355 | How to fine-tune a CLIP model with custom data | I want to train on my custom data to get high-accuracy embeddings of my image data.
Are there any scripts or documentation that would be helpful?
thank you. | https://github.com/huggingface/sentence-transformers/issues/2355 | closed | [] | 2023-11-10T07:27:23Z | 2023-12-25T03:23:20Z | null | unmo |
huggingface/diffusers | 5,742 | where is the Parameter Description? | https://github.com/huggingface/diffusers/issues/5742 | closed | [] | 2023-11-10T07:07:03Z | 2023-11-13T18:01:56Z | null | MRG-DOT | |
pytorch/vision | 8,107 | cannot install torch==2.0.0 torchvision==0.15.2 | ### 🐛 Describe the bug
For some reason, I cannot do:
```
pip install torch==2.0.0 torchvision==0.15.2 --index-url https://download.pytorch.org/whl/cu118
```
But I can install them separately with `--no-deps` and `torchvision` seems to work just fine. Why is this the case? Isn't `torchvision==0.15` supposed t... | https://github.com/pytorch/vision/issues/8107 | closed | [] | 2023-11-10T02:13:06Z | 2023-11-10T14:26:40Z | 1 | wemoveon2 |
huggingface/setfit | 436 | 【question】Could you tell me the latest embedding model usable by SetFit? | Hi!
This is not a bug report but a question.
From my understanding, when we use SetFit, we have to choose an embedding model from Sentence Transformers.
But those models now feel kind of old, and I would like to know the latest embedding model that can be used by SetFit.
Thank you in adv | https://github.com/huggingface/setfit/issues/436 | closed | [
"question"
] | 2023-11-10T02:10:01Z | 2023-11-12T01:02:24Z | null | Yongtae723 |
pytorch/serve | 2,780 | example of integrating deepspeed fastgen into TorchServe | ### 🚀 The feature
Provide an example of integrating deepspeed fastgen in TorchServe.
### Motivation, pitch
deepspeed fastgen was published in mii.
### Alternatives
_No response_
### Additional context
_No response_ | https://github.com/pytorch/serve/issues/2780 | open | [
"future",
"example"
] | 2023-11-09T19:32:46Z | 2023-11-09T19:32:46Z | 0 | lxning |
pytorch/xla | 5,784 | Is there a Bug with AllGather backprop algorithm? | https://github.com/pytorch/xla/blob/d5d023063bfa8ecb4629f621f9b5890bc8396f58/torch_xla/core/functions.py#L66C1-L66C1
In the aforementioned line, we see the class
```
class AllGather(torch.autograd.Function):
@staticmethod
def forward(ctx, input, dim):
ctx.dim = dim
ctx.ordinal = xm.get_ordinal()... | https://github.com/pytorch/xla/issues/5784 | open | [
"question",
"distributed"
] | 2023-11-09T18:30:24Z | 2025-04-28T12:21:19Z | null | mathephysicist |
pytorch/pytorch | 113,370 | Incorrect stride when permuting shapes where a zero dimension is present. | ### 🐛 Describe the bug
I ran into a problem while permuting the following tensor (to convert into a complex dtype):
```python
>>> torch.view_as_complex(torch.empty(1,0,2,100,100).permute(0,1,3,4,2).contiguous())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Tensor mus... | https://github.com/pytorch/pytorch/issues/113370 | open | [
"triaged",
"module: edge cases",
"module: empty tensor"
] | 2023-11-09T17:16:14Z | 2024-02-23T18:06:34Z | null | rehno-lindeque |
huggingface/datasets | 6,394 | TorchFormatter images (H, W, C) instead of (C, H, W) format | ### Describe the bug
Using .set_format("torch") leads to images having shape (H, W, C), the same as in numpy.
However, pytorch normally uses (C, H, W) format.
Maybe I'm missing something but this makes the format a lot less useful as I then have to permute it anyways.
If not using the format it is possible to ... | https://github.com/huggingface/datasets/issues/6394 | closed | [] | 2023-11-09T16:02:15Z | 2024-04-11T12:40:16Z | 9 | Modexus |
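The permute this report mentions is cheap; for a tensor the idiomatic call is `img.permute(2, 0, 1)` (standard torch semantics). A dependency-free sketch of the same (H, W, C) → (C, H, W) relayout on nested lists:

```python
def hwc_to_chw(img):
    # img: nested list with shape (H, W, C); the result has shape (C, H, W),
    # the layout torch modules such as Conv2d expect.
    channels = len(img[0][0])
    return [[[pixel[c] for pixel in row] for row in img] for c in range(channels)]
```

Each output "plane" collects one channel value from every pixel, which is exactly what `permute(2, 0, 1)` does without copying.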
huggingface/transformers.js | 386 | [Question] Any plan to rewrite the js in TypeScript? | I'm doing it for my own usage, although I'm losing the benefit of upgrades.
Typings are useful, you know :)
While doing it I found this,
in models.js, line 1027 :
```javascript
let sampledTokens = sampler(logits);
```
should be
```javascript
let sampledTokens = sampler.sample(logits);
``` | https://github.com/huggingface/transformers.js/issues/386 | closed | [
"question"
] | 2023-11-09T13:41:10Z | 2023-11-15T18:18:39Z | null | pnocera |
huggingface/candle | 1,304 | How to repeat_interleave on a Tensor? | There is a [repeat_interleave](https://pytorch.org/docs/stable/generated/torch.repeat_interleave.html) function in PyTorch, but I can't find an analog in candle.
I need convert `tensor([[6110, 1]])` to `tensor([[6110, 1], [6110, 1], [6110, 1]])`
I found some examples [like](https://github.com/huggingface/candle/blob/... | https://github.com/huggingface/candle/issues/1304 | closed | [] | 2023-11-09T06:31:04Z | 2023-11-09T08:16:19Z | null | bragovo |
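For this particular shape the desired result is just a repeat along dim 0 — in candle that would be something like `t.repeat((3, 1))` (an assumption about the API surface, not a confirmed answer from the thread). The semantics, sketched in plain Python:

```python
def repeat_rows(rows, repeats):
    # Pure-Python analogue of repeating a 2-D tensor along dim 0, i.e.
    # torch.repeat_interleave(t, repeats, dim=0) when every row repeats equally.
    return [list(row) for row in rows for _ in range(repeats)]
```

Note that for a single-row input, `repeat` and `repeat_interleave` along dim 0 coincide; they differ only when interleaving multiple distinct rows.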
huggingface/diffusers | 5,709 | How to run a Stable Diffusion pipeline using multithreading in FastAPI? | Hi, I have created a Stable Diffusion API using FastAPI and it works perfectly fine when sequential requests are made. I have tried to implement multithreading in the API to run multiple requests concurrently, but the problem is that every request's output generation time is dependent on the total number of requests that a... | https://github.com/huggingface/diffusers/issues/5709 | closed | [
"stale"
] | 2023-11-08T16:19:45Z | 2024-01-09T15:07:46Z | null | minkvirparia |
huggingface/gsplat.js | 23 | How do you set up initial camera position? | When loading a splat file, I'd like to set the initial camera position to a specific location. How can this be achieved? | https://github.com/huggingface/gsplat.js/issues/23 | closed | [
"enhancement",
"question"
] | 2023-11-08T16:04:04Z | 2023-11-11T16:35:57Z | null | reconlabs-chris |
huggingface/safetensors | 381 | Would a CLI to perform the convert operation be useful? | ### Feature request
Would it be possible to add to this repo a CLI tool that uses the library to convert files stored in different formats to safetensors?
It would also be useful to have a way, from the command line, to introspect a model and find some properties about it (layers, metadata, ...).
###... | https://github.com/huggingface/safetensors/issues/381 | closed | [
"Stale"
] | 2023-11-08T15:39:02Z | 2024-01-02T01:48:28Z | 2 | remyleone |
huggingface/transformers | 27,361 | Add how to preprocess mask for finetuning with SAM | ### Feature request
The [SAM image processor](https://github.com/huggingface/transformers/blob/main/src/transformers/models/sam/image_processing_sam.py) takes images as input and resizes them so that the longest edge is 1024 (using default values). This is the size expected as input for the SAM model.
For inference, th... | https://github.com/huggingface/transformers/issues/27361 | closed | [
"Feature request",
"Vision"
] | 2023-11-08T11:53:31Z | 2024-01-08T16:40:38Z | null | rwood-97 |
huggingface/chat-ui | 546 | Custom Theme | I want to change the UI layout yet still be able to update the code in order to enjoy the new features as they are released.
Is there a way to add my changes in a way that would be similar to a theme? or an outside addon?
| https://github.com/huggingface/chat-ui/issues/546 | closed | [] | 2023-11-08T08:26:43Z | 2023-11-15T09:32:22Z | 2 | kaplanyaniv |
pytorch/executorch | 1,162 | How to deploy llama2 on Qualcomm Snapdragon chips through ExecuTorch? | Excuse me, if I need to deploy llama2 on a Qualcomm Snapdragon chip through ExecuTorch and want to use the NPU as the inference compute unit, what do I need to do?
The chip spec I'm currently using is the SG885G-WF: https://www.quectel.com/product/wi-fi-bt-sg885g-wf-smart-module | https://github.com/pytorch/executorch/issues/1162 | closed | [
"need-user-input",
"partner: qualcomm",
"triaged"
] | 2023-11-07T12:32:59Z | 2025-02-03T18:21:13Z | null | tensorflowt |
huggingface/datasets | 6,388 | How to create a 3d medical image dataset? | ### Feature request
I am new to huggingface. After looking through the `datasets` docs, I can't find how to create a dataset containing 3d medical images (ending with '.mhd', '.dcm', '.nii').
### Motivation
help us to upload 3d medical dataset to huggingface!
### Your contribution
I'll submit a PR if I find a way to... | https://github.com/huggingface/datasets/issues/6388 | open | [
"enhancement"
] | 2023-11-07T11:27:36Z | 2023-11-07T11:28:53Z | null | QingYunA |
huggingface/datasets | 6,387 | How to load existing downloaded dataset ? | Hi @mariosasko @lhoestq @katielink
Thanks for your contribution and hard work.
### Feature request
First, I download a dataset as normal by:
```
from datasets import load_dataset
dataset = load_dataset('username/data_name', cache_dir='data')
```
The dataset format in `data` directory will be:
```
... | https://github.com/huggingface/datasets/issues/6387 | closed | [
"enhancement"
] | 2023-11-06T22:51:44Z | 2023-11-16T18:07:01Z | null | liming-ai |
huggingface/gsplat.js | 15 | Does it work with polycam models? | Hello! Thank you for your work, it looks very promising. Got it working with the README file... Just tried it with a .ply object out of polycam and got error
```
Uncaught (in promise) RangeError: byte length of Float32Array should be a multiple of 4
at new Float32Array (<anonymous>)
at R.setData (Scene.ts... | https://github.com/huggingface/gsplat.js/issues/15 | closed | [
"question"
] | 2023-11-06T21:15:51Z | 2023-11-10T18:26:55Z | null | karen-pal |
pytorch/tutorials | 2,655 | Why multiply sqrt(d_model) before TransformerEncoderLayer? | Hi,
Thank you so much for the tutorial! I notice that in https://github.com/pytorch/tutorials/blob/main/beginner_source/transformer_tutorial.py#L92, you multiply sqrt(d_model) before TransformerEncoderLayer. May I ask why we need to do this?
Thanks! | https://github.com/pytorch/tutorials/issues/2655 | closed | [
"question"
] | 2023-11-06T19:48:45Z | 2023-11-06T20:13:28Z | null | yuzhenmao |
huggingface/chat-ui | 545 | Chat-UI throws a 403 Forbidden when accessing settings | When viewing the settings page after first setup, the settings page gives the error ```Failed to load resource: the server responded with a status of 403 (Forbidden) settings:1``` in the console, without any explanation of what or why.
Setup:
```yaml
services:
# Chat ui webserver
chat-ui:
container_nam... | https://github.com/huggingface/chat-ui/issues/545 | closed | [
"support"
] | 2023-11-06T15:09:33Z | 2024-02-15T21:03:04Z | 5 | IT-Guy007 |
pytorch/audio | 3,688 | Why does `transforms.TimeStretch` return a tensor of type `complex64`? | ### 🐛 Describe the bug
Good day!
https://pytorch.org/audio/2.1.0/generated/torchaudio.transforms.TimeStretch.html#torchaudio.transforms.TimeStretch.forward:
> Stretched spectrogram. The resulting tensor is of the same dtype as the input spectrogram, but the number of frames is changed to `ceil(num_frame / rate)... | https://github.com/pytorch/audio/issues/3688 | closed | [] | 2023-11-05T12:02:57Z | 2023-11-10T10:25:51Z | 4 | kuraga |
huggingface/alignment-handbook | 9 | How to finetune or lora on custom dataset | How to finetune or lora on custom dataset | https://github.com/huggingface/alignment-handbook/issues/9 | open | [] | 2023-11-05T02:38:33Z | 2024-11-11T07:52:57Z | null | universewill |
huggingface/peft | 1,080 | Add docs on how to merge adapters after 4bit QLoRA with PEFT 0.6 | ### Feature request
there has been some controversy on how to correctly **merge the adapters with the base model after 4bit LoRA** training.
to me it seems there are two ways to merge and save:
- ChrisHayduk https://gist.github.com/ChrisHayduk/1a53463331f52dca205e55982baf9930
- TheBloke https://github.com/Th... | https://github.com/huggingface/peft/issues/1080 | closed | [] | 2023-11-04T10:07:16Z | 2023-11-17T22:22:06Z | null | geronimi73 |
huggingface/huggingface_hub | 1,801 | Entire operation gets cancelled when 1 file fails when using api.upload_folder - how to make it iterative | I am using the code below. I uploaded around 80 GB of files and the entire operation failed just because one png failed to upload for some reason.
I see the uploaded repo has 0 changes.
How can I make it iterative, so that after each file upload it is committed to the repo?
I don't need commit or file history. Just upload newer file... | https://github.com/huggingface/huggingface_hub/issues/1801 | closed | [
"bug"
] | 2023-11-04T00:20:00Z | 2023-11-26T09:09:35Z | null | FurkanGozukara |
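One workaround for the issue above is to commit file-by-file instead of in one giant commit, so a single failure doesn't discard everything already transferred. A sketch — `upload_one` is a stand-in you would wire to something like `HfApi.upload_file` (an assumption about the intended use; the real call creates one commit per file):

```python
from pathlib import Path

def upload_folder_iteratively(folder: str, upload_one) -> list:
    # Walk the folder and upload (commit) one file at a time.
    # upload_one(local_path, path_in_repo) is expected to raise on failure.
    done = []
    for p in sorted(Path(folder).rglob("*")):
        if p.is_file():
            rel = p.relative_to(folder).as_posix()
            try:
                upload_one(p, rel)
                done.append(rel)
            except Exception:
                # Skip the bad file and keep going; files already uploaded
                # stay committed instead of the whole operation rolling back.
                continue
    return done
```

The trade-off is a noisier commit history, which matches the reporter's statement that history doesn't matter here.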
pytorch/xla | 5,768 | How to provide sharding annotation for MpDeviceLoader when data has different dimensions | ## โ Questions and Help
Let's say my dataloader yields a dict when iterated over, and the members of this dict have different dimensions:
```python
{
"input_ids": shape = (batch, seq),
"masks": shape = (batch, seq, seq),
}
```
`pl.MpDeviceLoader` appears to only be able to provide one sharding annotation... | https://github.com/pytorch/xla/issues/5768 | closed | [
"question",
"distributed"
] | 2023-11-03T20:43:19Z | 2025-04-28T12:30:11Z | null | hanzhi713 |
pytorch/pytorch | 112,876 | How to handle CVE vulnerabilities in underlying operating system? | Hello,
The base images for Cuda are pretty old (2.1.0-cuda11.8 was pushed more than a month ago) how should we act to get latest security updates from the Ubuntu base image? | https://github.com/pytorch/pytorch/issues/112876 | open | [
"triaged",
"module: docker",
"security"
] | 2023-11-03T17:32:14Z | 2023-11-06T22:34:04Z | null | bjorn-ali-goransson |
huggingface/transformers.js | 378 | Security issue - content security policy - script unsafe-eval | Context:
I use @xenova/transformers 2.6.2 npm package from a web application to do image classifcations. Here is the gist of my setup:
```js
const modelPath = 'own-domain/models-and-wasm/'
env.localModelPath = "/";
env.useBrowserCache = true;
env.backends.onnx.wasm.wasmPaths = modelPath;
const classifier =... | https://github.com/huggingface/transformers.js/issues/378 | open | [
"question"
] | 2023-11-03T13:50:30Z | 2023-11-06T13:44:57Z | null | stiano |
huggingface/diffusers | 5,643 | How to use the ip adapter controlnet? | Hi, I can't use this specific controlnet because it's from here: https://huggingface.co/lllyasviel/sd_control_collection/tree/main
and the format doesn't allow from_pretrained. When I use from_single_file, I get:
```
stable_diffusion/convert_from_ckpt.py", line 422, in convert_ldm_unet_checkpoint
new_checkp... | https://github.com/huggingface/diffusers/issues/5643 | closed | [] | 2023-11-03T13:34:44Z | 2023-11-13T15:12:29Z | null | alexblattner |
huggingface/dataset-viewer | 2,050 | Should we support video datasets? | Like https://huggingface.co/datasets/commaai/commavq
There was a previous intent in datasets: https://github.com/huggingface/datasets/pull/5339 | https://github.com/huggingface/dataset-viewer/issues/2050 | closed | [
"question",
"feature request"
] | 2023-11-03T13:33:00Z | 2023-12-11T15:04:08Z | null | severo |
huggingface/distil-whisper | 16 | How to use ONNX model? | Hello there,
I'm interested in using the ONNX model, as I saw that you are providing the weights for it.
I tried to use it with `optimum` library, but didn't manage to make it work.
Could someone indicate in which direction I should look into?
Thank you so much for this repository and the work you put into it. ... | https://github.com/huggingface/distil-whisper/issues/16 | open | [] | 2023-11-03T11:51:44Z | 2023-11-07T07:36:50Z | null | H-G-11 |
huggingface/dataset-viewer | 2,049 | Retry jobs that finish with `ClientConnection` error? | Maybe here: https://github.com/huggingface/datasets-server/blob/f311a9212aaa91dd0373e5c2d4f5da9b6bdabcb5/chart/env/prod.yaml#L209
Internal conversation on Slack: https://huggingface.slack.com/archives/C0311GZ7R6K/p1698224875005729
Anyway: I'm wondering if we can have the error now that the dataset scripts are dis... | https://github.com/huggingface/dataset-viewer/issues/2049 | closed | [
"question",
"improvement / optimization",
"P2"
] | 2023-11-03T11:28:19Z | 2024-02-06T17:29:45Z | null | severo |
huggingface/transformers.js | 377 | GPU Acceleration to increase performance | Do we have any option to use GPU to increase performance of model loading and detection?
As currently in Object Detection it's taking around 10 seconds. If we want to do this on GPU, can we do that?
Running below lines through web worker, increases overall UI experience but not increases any performance.
```
cons... | https://github.com/huggingface/transformers.js/issues/377 | closed | [
"question"
] | 2023-11-03T07:44:05Z | 2024-10-18T13:30:08Z | null | milind-yadav |
pytorch/serve | 2,766 | How to auto-scale model replicas in a single GPU based EC2 instance based on number-of-requests-in-queue ? | Hi team, I mainly had 1 question and 1 observation:
---
### **Question:**
- **I was not able to locate any resource explaining ways to auto-scale ML model in torch-serve on single GPU instance.**
- I did had a look at the model configuration documentation which explained the 2 parameters: min-workers and max-w... | https://github.com/pytorch/serve/issues/2766 | closed | [
"triaged"
] | 2023-11-02T16:03:49Z | 2023-11-26T18:39:03Z | null | yogendra-yatnalkar |
pytorch/serve | 2,765 | How to auto-scale model replicas in a single GPU based EC2 instance based on time_of_request_in_queue | https://github.com/pytorch/serve/issues/2765 | closed | [] | 2023-11-02T15:39:16Z | 2023-11-02T17:46:34Z | null | yogendra-yatnalkar | |
huggingface/distil-whisper | 11 | [Speculative Decoding] How to run speculative decoding for batch_size > 1? | Transformers 4.35 only supports speculative decoding for batch size == 1. In order to use speculative decoding for batch size > 1, please make sure to use this branch: https://github.com/huggingface/transformers/pull/26875
To do so, you need to install transformers as follows:
```
pip install git+https://github.... | https://github.com/huggingface/distil-whisper/issues/11 | open | [] | 2023-11-02T14:19:55Z | 2024-10-03T13:12:22Z | null | patrickvonplaten |
pytorch/vision | 8,090 | to_pil_image different results depending on numpy/torch input | ### 🐛 Describe the bug
to_pil_image has different behaviour depending on torch or numpy input. This is not documented as far as I can see. There is a note that numpy is expected to be HWC, whereas torch is expected to be CHW, but that's not relevant here.
```python
import torch
from torchvision.transforms.functi... | https://github.com/pytorch/vision/issues/8090 | closed | [] | 2023-11-02T12:46:29Z | 2023-11-08T08:51:45Z | 5 | rb-synth |
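One way to sidestep the numpy-vs-torch discrepancy described in the report above is to normalize the input to a `uint8` HWC array yourself before calling `to_pil_image`. The helper below is a hypothetical sketch (pure NumPy; the float-in-[0, 1] scaling mirrors what torchvision applies to float tensors, and the CHW heuristic is an assumption for 1- or 3-channel images):

```python
import numpy as np

def to_uint8_hwc(img):
    """Normalize an image to uint8 HWC so PIL conversion behaves the same
    regardless of whether the data started as a torch tensor or a numpy array."""
    arr = np.asarray(img)  # also accepts a CPU torch tensor
    # Heuristic CHW -> HWC for 1- or 3-channel images
    if arr.ndim == 3 and arr.shape[0] in (1, 3) and arr.shape[-1] not in (1, 3):
        arr = np.transpose(arr, (1, 2, 0))
    if np.issubdtype(arr.dtype, np.floating):
        # Same convention torchvision uses for float *tensors*: scale [0, 1] to [0, 255]
        arr = (np.clip(arr, 0.0, 1.0) * 255.0).round().astype(np.uint8)
    return arr
```

`PIL.Image.fromarray(to_uint8_hwc(x))` then yields identical results for both input types, since the ambiguity is resolved before PIL is involved.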
huggingface/chat-ui | 542 | Request: more clarity on JSON response from custom models | Note: duplicate from https://huggingface.co/spaces/huggingchat/chat-ui/discussions/309, not sure which is the proper place to post.
I followed the guide chat-ui to deploy a version in gcp, and I love the chat interface.
I would love to hook it up to one of my custom models, so I specified
```
"endpoints": [{"ur... | https://github.com/huggingface/chat-ui/issues/542 | open | [
"support"
] | 2023-11-02T10:31:53Z | 2023-11-03T19:44:02Z | 1 | thubreg |
huggingface/distil-whisper | 8 | Where is the model? | Link to HF leads to empty files section. | https://github.com/huggingface/distil-whisper/issues/8 | closed | [] | 2023-11-02T08:47:23Z | 2023-11-02T17:31:08Z | null | lkmdhertg |
pytorch/xla | 5,762 | how to use torch-xla with huggingface transformers | ## ❓ Questions and Help
I am fine-tuning the model provided by huggingface, modify a model from pytorch to torch-xla and run it. but it will freeze when running. Is there something wrong here?
dataset as follows:
https://github.com/zyds/transformers-code/blob/master/01-Getting%20Started/04-model/ChnSentiCorp_htl_... | https://github.com/pytorch/xla/issues/5762 | closed | [] | 2023-11-02T08:43:46Z | 2023-11-03T01:34:51Z | null | markc-614 |
huggingface/candle | 1,241 | How to reduce memory usage of backpropagation? | I implemented the [tiny NeRF example](https://github.com/bmild/nerf/blob/master/tiny_nerf.ipynb) using `candle` here: https://github.com/laptou/nerfy/blob/fc50dbd61c4012d1f12f556a72474b59a8b3c158/examples/tiny_nerf.rs
The example, which is written using TensorFlow, runs fine on my laptop. My `candle` implementation ... | https://github.com/huggingface/candle/issues/1241 | open | [] | 2023-11-02T03:38:32Z | 2025-09-10T05:14:01Z | null | laptou |
huggingface/candle | 1,240 | Demo showing how to load in candle computer vision model using webcam | ```
use anyhow::Result; // Automatically handle the error types
use opencv::{
prelude::*,
videoio,
highgui
}; // Note, the namespace of OpenCV is changed (to better or worse). It is no longer one enormous.
fn main() -> Result<()> { // Note, this is anyhow::Result
// Open a GUI window
highgu... | https://github.com/huggingface/candle/issues/1240 | open | [] | 2023-11-02T03:38:19Z | 2023-11-02T06:24:11Z | null | bazylhorsey |
huggingface/candle | 1,239 | How inference on a new model, have to hand written model.rs manually? | Just wonder if there scripts convert a pth or onnx to candle format maybe? | https://github.com/huggingface/candle/issues/1239 | closed | [] | 2023-11-02T03:32:11Z | 2023-11-02T07:03:54Z | null | lucasjinreal |
huggingface/safetensors | 375 | How do I load the tensors in Rust? | Hi,
I am unable to find good documentation to read the weights in rust. I want to write gpt2 from scratch, and want to be able to load the HF weights. Since, I only plan to use the ndarray library, I want to be able to load the FP32 tensors somehow. Please help.
In python I do:
```python
# Load model directly... | https://github.com/huggingface/safetensors/issues/375 | closed | [
"Stale"
] | 2023-11-02T02:11:11Z | 2024-01-02T01:48:31Z | 5 | arunpatro |
huggingface/safetensors | 374 | safetensor.*.save_file the parameter name to set the incoming tensors change from "tensors" to "tensor_dict" | ### Feature request
In Jax, torch, and paddle is:
> tensors (Dict[str, torch.Tensor]) โ The incoming tensors. Tensors need to be contiguous and dense.
Check: https://huggingface.co/docs/safetensors/api/torch#safetensors.torch.save
In Numpy:
> tensor_dict (Dict[str, np.ndarray]) โ The incoming tensors. T... | https://github.com/huggingface/safetensors/issues/374 | closed | [
"Stale"
] | 2023-11-02T00:41:14Z | 2024-01-02T01:48:32Z | 2 | csaybar |
huggingface/safetensors | 373 | Stream load models (load model larger than system memory) | ### Feature request
I'm not very familiar with the details, but I'd like to load a 20GB model while having only 8 GB system memory.
Currently, safetensors loads the entire model into system memory.
Is it possible to load models incrementally/as a stream?
Related:
https://github.com/turboderp/exllama/issues/2... | https://github.com/huggingface/safetensors/issues/373 | closed | [
"Stale"
] | 2023-11-01T16:14:18Z | 2024-01-03T01:48:07Z | 6 | erikschul |
huggingface/text-embeddings-inference | 59 | how to resolve this compile error? | ### System Info
cargo 1.73.0 (9c4383fb5 2023-08-26)
gcc (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2)
cuda 11.8
v100
```
"-Wl,-Bdynamic" "-llayernorm" "-lcudart" "-lstdc++" "-lcuda" "-lnvrtc" "-lcurand" "-lcublas" "-lcublasLt" "-lssl" "-lcrypto" "-lgcc_s" "-lutil" "-lrt" "-lpthread" "-lm" "-ldl" "-lc" "-Wl,--eh-fram... | https://github.com/huggingface/text-embeddings-inference/issues/59 | closed | [] | 2023-10-31T11:35:02Z | 2023-11-02T07:52:18Z | null | kingder |
pytorch/tutorials | 2,630 | 💡 [REQUEST] - <title>An inbuilt function to retrieve a list of datasets categorised by problem type (e.g., classification, regression, clustering). | ### 🚀 Describe the improvement or the new tutorial
PyTorch has inbuilt function to list all datasets.
`import torchvision.datasets as datasets
# Get a list of all datasets
all_datasets = datasets.__all__
# Print the list of datasets
print(all_datasets)
`
Rather than focusing on getting al... | https://github.com/pytorch/tutorials/issues/2630 | open | [] | 2023-10-31T09:08:51Z | 2023-11-01T16:06:56Z | 1 | xd932 |
huggingface/optimum | 1,497 | about LCM onnx model | Hi!
can someone please tell how we can use the LCM model in onnx? i see u guys made an script to run it in onnx, but what about the model? can we simply use the normal stable diffusion script onnx conversation for lcm model too? or we have to wait someone make an conversation script?
or could someone upload onnx ... | https://github.com/huggingface/optimum/issues/1497 | closed | [
"bug"
] | 2023-10-31T08:57:16Z | 2024-01-04T14:21:54Z | 6 | Amin456789 |
pytorch/executorch | 1,117 | [build Error initializing DaemonStateData] how to fix it | hi,
I reference [the tutorial](https://pytorch.org/executorch/stable/getting-started-setup.html#building-a-runtime) to install the buck2-x86_64-unknown-linux-musl.zst on my PC.
And I want to build
```
/tmp/buck2 build //examples/portable/executor_runner:executor_runner --show-output
```
and face the build fail... | https://github.com/pytorch/executorch/issues/1117 | closed | [] | 2023-10-31T05:49:12Z | 2024-01-23T10:08:57Z | null | kris-himax |
pytorch/pytorch | 112,454 | Inductor chooses too large of a block size in cases where the `YBLOCK` dimension is too large. | ### 🐛 Describe the bug
```python
import torch
torch.set_default_device('cuda')
@torch.compile
def f(x, y):
return x.t() + y
f(torch.randn(2**25, 128), torch.randn(128, 2**25))
```
The concrete issue is that this results in us potentially choosing a config like `XBLOCK=256, YBLOCK=512`, which req... | https://github.com/pytorch/pytorch/issues/112454 | closed | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 2023-10-31T00:18:12Z | 2023-11-07T01:48:02Z | null | Chillee |
huggingface/dataset-viewer | 2,038 | How to pass single quote in /filter endpoint "where" parameter? | See `https://huggingface.co/datasets/albertvillanova/lm_en_dummy2/viewer/default/train?f[meta][value]='{'file': 'file_4.txt'}'`
From `https://datasets-server.huggingface.co/filter?dataset=albertvillanova/lm_en_dummy2&config=default&split=train&where=meta='{'file': 'file_4.txt'}'`, we get:
```
{"error":"Parameter... | https://github.com/huggingface/dataset-viewer/issues/2038 | closed | [
"bug",
"documentation",
"P1"
] | 2023-10-30T22:21:24Z | 2023-11-02T17:22:54Z | null | severo |
huggingface/datasets | 6,364 | ArrowNotImplementedError: Unsupported cast from string to list using function cast_list | Hi,
I am trying to load a local csv dataset(similar to explodinggradients_fiqa) using load_dataset. When I try to pass features, I am facing the mentioned issue.
CSV Data sample(golden_dataset.csv):
Question | Context | answer | groundtruth
"what is abc?"... | https://github.com/huggingface/datasets/issues/6364 | closed | [] | 2023-10-30T20:14:01Z | 2023-10-31T19:21:23Z | 2 | divyakrishna-devisetty |
pytorch/pytorch | 112,369 | In the func Tensor.to, how can I make privateuse lazy init | ### ๐ Describe the bug
I'm using privateuse1 to add our backend. My customer finds that the following code works in cuda, but not in my backend.
They use `Tensor.to()` with a device string that has no index, for example "cuda".
```
import torch
tensor_a = torch.rand(2).to("cuda")
```
Privateuse1 uses the... | https://github.com/pytorch/pytorch/issues/112369 | closed | [
"module: internals",
"triaged"
] | 2023-10-30T06:39:18Z | 2024-01-09T20:12:12Z | null | huihoaan |
huggingface/diffusers | 5,575 | How to set the "transformer_in" layer's hidden size in LoRA training? | ### Describe the bug
I modify the code for text-to-image [lora](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py) as Figure 1,
<img width="908" alt="image" src="https://github.com/huggingface/diffusers/assets/52530394/0639998b-8106-49d9-8761-c58014095e7e">
Howev... | https://github.com/huggingface/diffusers/issues/5575 | closed | [
"bug",
"stale"
] | 2023-10-30T03:44:32Z | 2024-01-10T15:07:20Z | null | lxycopper |
huggingface/diffusers | 5,574 | How to train a part of UNet attention parameters with LoRA | ### Describe the bug
I adapt the LoRA training code in # to train my model.
And I only want to update the parameters in "down block", so I comment out the code for other attention blocks:
<img width="909" alt="image" src="https://github.com/huggingface/diffusers/assets/52530394/6b204ad8-e201-43b0-ab97-5d29a936e3c... | https://github.com/huggingface/diffusers/issues/5574 | closed | [
"bug",
"stale"
] | 2023-10-30T02:58:07Z | 2023-12-08T15:05:16Z | null | lxycopper |
pytorch/TensorRT | 2,419 | ❓ [Question] How do the dtypes work with torch.compile(backend="torch_tensorrt"). Getting error. | ## ❓ Question
I tried the following script to load a resnet50 model and test a sample input -
```python
import torch_tensorrt
import torch
# Load a pre-trained ResNet50 model
x = torch.randn(1, 3, 224, 224, device='cuda').half()
model = torch.hub.load(
'pytorch/vision:v0.6.0', 'resnet50', pretrained=... | https://github.com/pytorch/TensorRT/issues/2419 | closed | [
"question"
] | 2023-10-28T16:48:28Z | 2023-10-30T17:24:55Z | null | shreyansh26 |
huggingface/transformers.js | 372 | [Question] onnxruntime_binding.node issue on mac electron app | Hi,
I'm getting this error on an intel macbook running an electron forge app:
```
(node:63267) UnhandledPromiseRejectionWarning: Error: Cannot find module '../bin/napi-v3/darwin/x64/onnxruntime_binding.node'
Require stack:
- /Users/sam/Desktop/electron-forge-react-typescript-tailwind/.webpack/main/index.js
- /Use... | https://github.com/huggingface/transformers.js/issues/372 | closed | [
"question"
] | 2023-10-28T00:34:05Z | 2023-11-01T21:56:19Z | null | samlhuillier |
huggingface/transformers | 27,107 | How to export a Marian model in rust ? | Most models based on Marian are also available in rust, such as : Helsinki-NLP/opus-mt-en-roa
Is it possible to do this using transformers ?
Did you assist Helsinki-NLP in exporting the models to Rust? | https://github.com/huggingface/transformers/issues/27107 | closed | [] | 2023-10-27T13:01:13Z | 2023-12-05T08:03:53Z | null | flutter-painter |
pytorch/vision | 8,071 | How to tell if Faster RCNN Detection model is overfitting | I'm confused as to how I can tell if the Faster RCNN Detection model I'm training is overfitting or not given that the validation loss is not computed in the `evaluate` function seen [here](https://github.com/pytorch/vision/blob/main/references/detection/engine.py#L75C1-L115C26) and below.
Any help would be greatly ... | https://github.com/pytorch/vision/issues/8071 | open | [] | 2023-10-27T00:03:39Z | 2025-12-22T11:12:36Z | null | 1andDone |
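A common workaround for this (torchvision detection models only return their loss dict in train mode, so the reference `evaluate` loop never sees a validation loss) is to keep the model in `model.train()` but wrap the forward pass in `torch.no_grad()`. A hedged sketch — the function name is ours, not torchvision's:

```python
import torch

def evaluate_loss(model, data_loader):
    """Average the total detection loss over a validation loader
    without updating any weights."""
    was_training = model.training
    model.train()  # loss heads only run in train mode; no_grad prevents updates
    total, batches = 0.0, 0
    with torch.no_grad():
        for images, targets in data_loader:
            loss_dict = model(images, targets)  # dict of scalar loss tensors
            total += sum(loss.item() for loss in loss_dict.values())
            batches += 1
    model.train(was_training)  # restore the previous mode
    return total / max(batches, 1)
```

Comparing this average against the training loss across epochs gives the usual overfitting signal. Caveat: ordinary BatchNorm layers still update running statistics in train mode even under `no_grad` (torchvision's detection backbones use FrozenBatchNorm2d, so they are unaffected, but custom models may need those layers frozen explicitly).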
huggingface/chat-ui | 535 | API format? | ok, so this may be a dumb question, but i am not sure where else to ask it. So if we use this repo to deploy our app on HF, what is the format of the API parameters for calling our space? | https://github.com/huggingface/chat-ui/issues/535 | closed | [] | 2023-10-26T21:56:22Z | 2023-10-27T15:01:57Z | 3 | silvacarl2 |
pytorch/tutorials | 2,624 | ~ PyTorch Docathon H2 2023 ~ | # ~ PyTorch Docathon H2 2023 ~
We have a large backlog of issues that we want to address and it's a great opportunity for you to start contributing to PyTorch. We have limited this docathon to the [pytorch/tutorials](https://github.com/pytorch/tutorials/pulls?q=is%3Apr+is%3Aopen+label%3Adocathon-h2-2023+) and [pytorch... | https://github.com/pytorch/tutorials/issues/2624 | open | [
"docathon-h2-2023"
] | 2023-10-26T16:14:39Z | 2023-11-06T17:50:19Z | 3 | sekyondaMeta |