| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/pytorch | 100,800 | [cpu inductor] where is silently incorrect when SIMD code is generated. | ### 🐛 Describe the bug
```python
import torch
input_tensor = torch.ones(3, 3)
def f(x):
    return torch.where(torch.ones_like(x).to(torch.bool), torch.zeros_like(x), torch.ones_like(x) * 2)
res1 = f(input_tensor)
print(res1)
jit_func = torch.compile(f)
res2 = jit_func(input_tensor)
print(res2)
```... | https://github.com/pytorch/pytorch/issues/100800 | closed | [
"triaged",
"module: inductor"
] | 2023-05-06T13:03:01Z | 2023-05-10T02:16:14Z | null | kshitij12345 |
huggingface/transformers.js | 104 | [Question] npm install error in windows | I install transformers.js with npm but I get an error:
```
2135 info run canvas@2.11.2 install node_modules/canvas node-pre-gyp install --fallback-to-build --update-binary
2136 info run sharp@0.32.1 install node_modules/sharp (node install/libvips && node install/dll-copy && prebuild-install) || (node install/can-... | https://github.com/huggingface/transformers.js/issues/104 | closed | [
"question"
] | 2023-05-06T09:13:41Z | 2023-05-06T12:48:23Z | null | DominguitoLamo |
pytorch/TensorRT | 1,889 | Multi-GPU: optimize for cuda:1 but model also gets pushed on cuda:0, why??? | ## ❓ Question
I have two GPUs in my system. When I optimize my model for the cuda:1 device, the model somehow ALSO gets loaded onto the cuda:0 device (probably because that's the default device?). This happens during the optimization process, which is called with:
`optModel = torch_tensorrt::torchscript::compile(model... | https://github.com/pytorch/TensorRT/issues/1889 | closed | [
"question"
] | 2023-05-05T11:43:50Z | 2023-07-06T15:04:44Z | null | bjaeger1 |
huggingface/datasets | 5,818 | Ability to update a dataset | ### Feature request
The ability to load a dataset, add or change something, and save it back to disk.
Maybe it's possible, but I can't work out how to do it, e.g. this fails:
```py
import datasets
dataset = datasets.load_from_disk("data/test1")
dataset = dataset.add_item({"text": "A new item"})
dataset.sav... | https://github.com/huggingface/datasets/issues/5818 | open | [
"enhancement"
] | 2023-05-04T01:08:13Z | 2023-05-04T20:43:39Z | 3 | davidgilbertson |
pytorch/data | 1,149 | [RFC] Performance Profiling Tools | ### 🚀 The feature
1. Store usage statistics in `Prefetcher`
- By tracking statistics within `Prefetcher`, we can reasonably determine whether upstream processes or downstream processes are faster. For example, the emptiness of the buffer queue may imply consumers are faster than producers. Users can insert this... | https://github.com/meta-pytorch/data/issues/1149 | open | [
"topic: new feature"
] | 2023-05-03T22:01:19Z | 2023-05-30T11:27:53Z | 3 | NivekT |
pytorch/TensorRT | 1,882 | ❓ [Question] Request for a model which is supported by Torch-TRT(FX) | ## ❓ Question
I'm trying to evaluate the Torch-TensorRT tool, using FX backend for running models in the C++ library.
My goal is to convert models which are not fully supported by TRT, and accelerate them by running some of the sub-graphs on TRT(as explained by this notebook- https://github.com/pytorch/TensorRT/blo... | https://github.com/pytorch/TensorRT/issues/1882 | closed | [
"question",
"No Activity"
] | 2023-05-03T13:53:40Z | 2023-11-17T00:02:12Z | null | DanielLevi6 |
huggingface/datasets | 5,815 | Easy way to create a Kaggle dataset from a Huggingface dataset? | I'm not sure whether this is more appropriately addressed with HuggingFace or Kaggle. I would like to somehow directly create a Kaggle dataset from a HuggingFace Dataset.
While Kaggle does provide the option to create a dataset from a URI, that URI must point to a single file. For example:

- [X]... | https://github.com/huggingface/optimum/issues/1024 | open | [
"bug"
] | 2023-05-02T09:42:15Z | 2023-06-12T11:40:23Z | 4 | piegu |
pytorch/kineto | 756 | urgent!!! profiler: Profiler is not initialized: skipping step() invocation | I got this warning when using the torch profiler for profiling; the steps are merged into one:
[W kineto_shim.cpp:330] Profiler is not initialized: skipping step() invocation
[W kineto_shim.cpp:330] Profiler is not initialized: skipping step() invocation
[W kineto_shim.cpp:330] Profiler is not initialized: skipping step(... | https://github.com/pytorch/kineto/issues/756 | closed | [
"question"
] | 2023-05-01T23:35:54Z | 2024-04-23T15:28:55Z | null | Johnsonms |
pytorch/TensorRT | 1,872 | ❓ [Question] How do you ....? | ## ❓ Question
How to compile torch-tensorrt for NVIDIA Jetson TX2 (jetpack4.6)
## What you have already tried
Hi, @kneatco
I have the same issue when I downgraded the numpy version from 1.19.5 to 1.19.4.
I did following steps.
1. Downloading docker image for TX2 (jetpack=4.6)
```
#... | https://github.com/pytorch/TensorRT/issues/1872 | closed | [
"question"
] | 2023-05-01T13:53:19Z | 2023-05-19T18:30:16Z | null | godhj93 |
pytorch/TensorRT | 1,871 | ❓ [Question] torch.fx.proxy.TraceError: Proxy object cannot be iterated | ## ❓ Question
I'm trying to convert an nn.Module of ASLfeat (PyTorch) to a runtime Torch-TensorRT model (for C++)
The steps I followed are the same as written in- https://github.com/pytorch/TensorRT/blob/main/examples/fx/fx2trt_example_next.py
But for some reason, the tracing step fails every time.
The error me... | https://github.com/pytorch/TensorRT/issues/1871 | closed | [
"question",
"No Activity",
"component: fx"
] | 2023-05-01T11:24:21Z | 2023-08-21T00:02:11Z | null | DanielLevi6 |
huggingface/datasets | 5,809 | wiki_dpr details for Open Domain Question Answering tasks | Hey guys!
Thanks for creating the wiki_dpr dataset!
I am currently trying to combine wiki_dpr and my own datasets, but I don't know how to compute the embedding values the same way as wiki_dpr.
As an experiment, I embedded the text of id="7" of wiki_dpr, but the result was very different from wiki_dpr. | https://github.com/huggingface/datasets/issues/5809 | closed | [] | 2023-04-30T06:12:04Z | 2023-07-21T14:11:00Z | 1 | yulgok22 |
pytorch/hub | 328 | Need help on how to contribute | Hello everyone.
I wanted to add the SimpleNet architecture from 2016, which outperforms VGGNets, ResNet18, ResNet34 and the like while being a plain CNN with 5M to 9M parameters, to the PyTorch Hub.
I read the docs but I'm a bit confused. Could you kindly help me get this sorted out?
Here are my issues:
1.where exactl... | https://github.com/pytorch/hub/issues/328 | closed | [] | 2023-04-29T14:52:26Z | 2023-05-03T09:56:37Z | null | Coderx7 |
pytorch/pytorch | 100,293 | How to get nn.MultiheadAttention mid layer output | ### 📚 The doc issue
Hello, I have a question about MultiheadAttention (MA for short). It is not about the [doc explanation](https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html?highlight=multiheadattention#torch.nn.MultiheadAttention), but about using this module. I want to plot a heatmap(CAM) for... | https://github.com/pytorch/pytorch/issues/100293 | closed | [] | 2023-04-28T23:30:29Z | 2023-04-30T05:51:23Z | null | Lucky-Light-Sun |
huggingface/datasets | 5,805 | Improve `Create a dataset` tutorial | Our [tutorial on how to create a dataset](https://huggingface.co/docs/datasets/create_dataset) is a bit misleading.
1. In **Folder-based builders** section it says that we have two folder-based builders as standard builders, but we also have similar builders (that can be created from directory with data of required f... | https://github.com/huggingface/datasets/issues/5805 | open | [
"documentation"
] | 2023-04-28T13:26:22Z | 2024-07-26T21:16:13Z | 4 | polinaeterna |
huggingface/dataset-viewer | 1,104 | Delete finished jobs immediately? | Currently, finished jobs are deleted after 7 days by an index. See https://github.com/huggingface/datasets-server/blob/259fd092c12d240d9b8d733c965c4b9362e90684/libs/libcommon/src/libcommon/queue.py#L144
But we never use the finished jobs, so:
- we could delete them immediately after finishing
- we could reduce the... | https://github.com/huggingface/dataset-viewer/issues/1104 | closed | [
"question",
"improvement / optimization"
] | 2023-04-28T11:49:10Z | 2023-05-31T12:20:38Z | null | severo |
pytorch/pytorch | 100,181 | [Dynamo] How to better handle customized list/dict | ### 🐛 Describe the bug
This is a pattern I found from Meta internal user case:
```
import torch
import logging
import torch._dynamo
from typing import Any, List, Optional
torch._logging.set_logs(dynamo=logging.DEBUG)
class _non_none_list(list):
    def append(self, obj: Any):
        if obj is not None:
... | https://github.com/pytorch/pytorch/issues/100181 | closed | [
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2023-04-27T16:26:39Z | 2023-05-03T04:25:40Z | null | yanboliang |
pytorch/TensorRT | 1,861 | ❓ [Question] Binding index warnings while using fx backend | ## ❓ Question
I want to convert a torch model (from a Python nn.Module) to a runtime model (in C++), using the torch.fx capabilities. That will allow me to accelerate a model that isn't fully supported by TensorRT.
The model I'm using is-
`class Model(nn.Module):`
` def __init__(self):`
` super().__init__(... | https://github.com/pytorch/TensorRT/issues/1861 | closed | [
"question",
"No Activity"
] | 2023-04-27T13:31:59Z | 2023-08-10T00:02:37Z | null | DanielLevi6 |
huggingface/transformers.js | 102 | How to convert Whisper Large v2 | Hello!
How do I convert the whisper-large-v2 model to ONNX?
I'm using this command
`python3.9 -m scripts.convert --model_id whisper-large-v2 --quantize --task automatic-speech-recognition`
But when I try to connect the converted model I get the following error:
`Error: File not found. Could not locate "encode... | https://github.com/huggingface/transformers.js/issues/102 | closed | [
"question"
] | 2023-04-27T13:30:33Z | 2023-05-31T13:18:33Z | null | hotmeatballs |
huggingface/datasets | 5,797 | load_dataset is case sensitive? | ### Describe the bug
Is the load_dataset() function case-sensitive?
### Steps to reproduce the bug
The following two calls get totally different behavior.
1. load_dataset('mbzuai/bactrian-x','en')
2. load_dataset('MBZUAI/Bactrian-X','en')
### Expected behavior
Compare 1 and 2.
1 will download all 52 subsets, sh... | https://github.com/huggingface/datasets/issues/5797 | open | [] | 2023-04-26T18:19:04Z | 2023-04-27T11:56:58Z | 2 | haonan-li |
huggingface/chat-ui | 122 | Add pre-prompt | cc @OlivierDehaene
> Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is neede... | https://github.com/huggingface/chat-ui/issues/122 | closed | [] | 2023-04-26T15:58:55Z | 2023-04-26T16:46:05Z | 1 | coyotte508 |
pytorch/TensorRT | 1,860 | ❓ [Question] Runtimes for timm + TensorRT | ## ❓ Question
I created a script to compare inference runtimes with `torch`, `torch.compile` and `torch_tensorrt.compile` for any timm model, input shape, and dtype, and some runtimes are worse using TensorRT. Why?
## What you have already tried
I used [latest NVIDIA pytorch container](https://catalog.ngc.nvid... | https://github.com/pytorch/TensorRT/issues/1860 | closed | [
"question",
"No Activity"
] | 2023-04-26T15:19:14Z | 2024-10-04T15:58:16Z | null | SimJeg |
huggingface/setfit | 367 | Massive Text Embedding Benchmark (MTEB) Leaderboard | https://huggingface.co/spaces/mteb/leaderboard
Can we use all of these with setfit? | https://github.com/huggingface/setfit/issues/367 | closed | [
"question"
] | 2023-04-26T09:18:27Z | 2023-12-05T14:48:55Z | null | vahuja4 |
huggingface/huggingface.js | 165 | Add E2E where the module is downloaded (or linked) to a TS project | To prevent things like #164 | https://github.com/huggingface/huggingface.js/issues/165 | closed | [
"tooling"
] | 2023-04-25T20:23:17Z | 2023-05-07T09:18:47Z | null | coyotte508 |
pytorch/TensorRT | 1,858 | ❓ [Question] Why was this Repo renamed to TensorRT ? | Thank you all for the great work on Torch-TensorRT.
It's been a pleasure to see it evolve since the days of TRTorch.
This repo went through multiple names, but I think the current one is extremely confusing: if I clone both this repo and the original TensorRT repo, I now have two TensorRT folders.
This is extrem... | https://github.com/pytorch/TensorRT/issues/1858 | closed | [
"question"
] | 2023-04-25T12:03:06Z | 2023-05-02T10:08:41Z | null | MatthieuToulemont |
huggingface/transformers.js | 100 | Whisper on webGPU? | Somewhat related to [this thread](https://github.com/xenova/transformers.js/issues/20).
Is it within scope to implement a WebGPU-accelerated version of Whisper?
Not sure if this helps, but there is a [C port for Whisper with CPU implementation](https://github.com/ggerganov/whisper.cpp), and as mentioned in [this... | https://github.com/huggingface/transformers.js/issues/100 | closed | [
"question"
] | 2023-04-25T09:34:10Z | 2024-10-18T13:30:07Z | null | sandorkonya |
pytorch/data | 1,140 | Shuffle batches across workers | ### 🚀 The feature
I have a Dataloader with n workers. My understanding is that each worker constructs a full batch independently, which is then served by the dataloader. My samples are large, so I cannot increase the shuffle buffer size in each worker. Is there a way to perform the batching and shuffling only in the ... | https://github.com/meta-pytorch/data/issues/1140 | closed | [] | 2023-04-24T15:53:08Z | 2023-04-28T02:49:08Z | 2 | platers |
pytorch/text | 2,159 | how to use Field, RawField with torchtext 0.15.0, don't need lower version | ## 🐛 Bug
**Describe the bug**
- PyTorch Version (e.g., 1.0): 1.12
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:3.8
- CUDA/cuDNN version: 10.2
- GPU ... | https://github.com/pytorch/text/issues/2159 | open | [] | 2023-04-22T03:17:29Z | 2023-04-23T07:51:49Z | null | cqray1990 |
huggingface/optimum | 1,002 | Add a README & log at export | ### Feature request
The logs of the ONNX export are insightful.
Moreover, it would be good to automatically generate a README/JSON containing:
* which params were used at export
* For decoders, how to use the obtained `.onnx` models, as it can be a bit involved for somebody who does not use the Optimum ORT integr... | https://github.com/huggingface/optimum/issues/1002 | open | [
"feature-request",
"onnx",
"tflite"
] | 2023-04-21T15:31:43Z | 2023-04-21T15:31:43Z | 0 | fxmarty |
huggingface/optimum | 999 | Remove attention mask creation for batch size = 1 when using SDPA | ### Feature request
Some pieces of transformers code are not useful when using SDPA with batch size = 1, for example:
https://github.com/huggingface/transformers/blob/874c7caf1966b1d0ee2749046703ada7a12ed797/src/transformers/models/gpt2/modeling_gpt2.py#L804-L822
https://github.com/huggingface/transformers/blob/87... | https://github.com/huggingface/optimum/issues/999 | closed | [
"feature-request",
"bettertransformer",
"Stale"
] | 2023-04-21T14:41:04Z | 2025-05-29T02:14:32Z | 1 | fxmarty |
pytorch/serve | 2,253 | Troubled me too, How to solve this problem in TorchServe 0.7.1 | Just to let you know that I have the same kind of issue on Windows server 2019, with TorchServe 0.7.1.
From Anaconda Prompt (run as administrator), I run `torchserve --start ...`, everything goes fine including the inference test on the served model. I stop the `torchserve --start ...` command with C... | https://github.com/pytorch/serve/issues/2253 | closed | [
"triaged",
"windows"
] | 2023-04-21T04:09:42Z | 2023-10-28T19:39:28Z | null | Z863058 |
pytorch/TensorRT | 1,845 | ❓ [Question] Can I use TensorRT8.5.3.1 and torch1.10.1 with torch_TensorRT? | ## ❓ Question
I found that when I pip install the torch_tensorrt corresponding to TensorRT 8.5.3.1, torch must be 1.13. Can I use TensorRT 8.5.3.1 and torch 1.10.1 with torch_tensorrt?
And if I use the C++ torch_tensorrt, can I avoid this situation?
<!-- A clear and concise description of what yo... | https://github.com/pytorch/TensorRT/issues/1845 | closed | [
"question"
] | 2023-04-20T12:34:27Z | 2023-04-23T08:05:33Z | null | Yoh-Z |
pytorch/TensorRT | 1,844 | ❓ [Question] Internal Error-given invalid tensor name | ## ❓ Question
I want to convert a torch model (from Python) to a runtime model (in C++), using the torch.fx capabilities. That will allow me to accelerate a model that isn't fully supported by TensorRT.
I understand that this flow is experimental, so I used the examples which are given in this repository.
By using... | https://github.com/pytorch/TensorRT/issues/1844 | closed | [
"question"
] | 2023-04-20T10:35:35Z | 2023-04-27T16:10:46Z | null | DanielLevi6 |
huggingface/optimum | 987 | Does Optimum support converting the BLIP-2 model to ONNX? | Hi, does Optimum support converting the BLIP-2 model to ONNX? | https://github.com/huggingface/optimum/issues/987 | closed | [] | 2023-04-20T07:07:53Z | 2023-04-21T11:45:41Z | 1 | joewale |
pytorch/serve | 2,242 | How to send a JSON body to TorchServe | I'd like to make a POST request to TorchServe with application/json as its content-type, instead of a file. The data could be `{"text": "hi"}`. Is that possible?
In the docs it is shown how you can send binary file data
```
import requests
res = requests.post("http://localhost:8080/predictions/squeezenet1_1", file... | https://github.com/pytorch/serve/issues/2242 | closed | [] | 2023-04-19T16:17:51Z | 2023-04-19T22:47:55Z | null | nihiluis |
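Yes — TorchServe accepts an application/json POST; on the handler side the payload arrives as raw bytes under the request's `body` (or `data`) key. A minimal sketch of the handler-side parsing step, assuming TorchServe's standard custom-handler contract (the simulated batch below is illustrative, not a real request):

```python
import json

def parse_json_request(batch):
    # In a TorchServe custom handler, preprocess() receives a list of
    # requests; each request is a dict whose "body" (or "data") entry
    # holds the raw bytes of the POST body.
    body = batch[0].get("body") or batch[0].get("data")
    if isinstance(body, (bytes, bytearray)):
        body = body.decode("utf-8")
    if isinstance(body, str):
        body = json.loads(body)
    return body

# Simulated application/json request, as a client would send with
# requests.post(url, json={"text": "hi"}):
print(parse_json_request([{"body": b'{"text": "hi"}'}]))  # {'text': 'hi'}
```

Client-side, `requests.post(url, json=...)` sets the Content-Type header to application/json automatically.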
huggingface/setfit | 364 | Understanding the trainer parameters | I am looking at the SetFit example with SetFitHead:
```
# Create trainer
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss_class=CosineSimilarityLoss,
    metric="accuracy",
    batch_size=16,
    num_iterations=20, # The number of text pairs to... | https://github.com/huggingface/setfit/issues/364 | closed | [
"question"
] | 2023-04-19T15:19:42Z | 2023-11-24T13:22:31Z | null | vahuja4 |
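For context on `num_iterations` in the snippet above: SetFit's contrastive sampler draws, for each training sample, `num_iterations` positive pairs and `num_iterations` negative pairs, so the total pair count scales linearly with the dataset size. A small sketch of that arithmetic (the 8-sample training set is hypothetical, and this reading of the parameter is an assumption based on SetFit's documented pair generation):

```python
def num_contrastive_pairs(num_samples: int, num_iterations: int) -> int:
    # Each training sample contributes num_iterations positive and
    # num_iterations negative text pairs.
    return num_samples * num_iterations * 2

# Hypothetical 8-shot training set with the num_iterations=20 from above:
print(num_contrastive_pairs(8, 20))  # 320
```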
huggingface/diffusers | 3,151 | What is the format of the training data | Hello, I'm training LoRA, but I don't know what the data format looks like.
The error is as follows:
--caption_column' value 'text' needs to be one of: image
What is the data format? | https://github.com/huggingface/diffusers/issues/3151 | closed | [
"stale"
] | 2023-04-19T07:51:16Z | 2023-08-04T10:20:18Z | null | WGS-note |
pytorch/TensorRT | 1,835 | ❓ [Question] Is torch-tensorrt compiled code device agnostic? | Thanks for this wonderful repo!
Is the torch-tensorrt compiled code runnable on any (Nvidia) device or should it be compiled on the target device? I know that the usual tensorrt programs (compiled from onnx) need to be compiled on the target device. I would expect the same from torch-tensorrt. However, the docs on ... | https://github.com/pytorch/TensorRT/issues/1835 | closed | [
"question"
] | 2023-04-18T16:44:56Z | 2023-04-18T17:05:52Z | null | FabianSchuetze |
huggingface/setfit | 360 | Token padding makes ONNX inference 6x slower, is attention_mask being used properly? | Here's some code that loads in my ONNX model and tokenizes 293 short examples. The longest length in the set is 153 tokens:
```python
input_text = test_ds['text']
import onnxruntime
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer(
    input_text,
... | https://github.com/huggingface/setfit/issues/360 | open | [
"question"
] | 2023-04-18T15:33:01Z | 2023-04-19T05:40:02Z | null | bogedy |
huggingface/datasets | 5,767 | How to use Distill-BERT with different datasets? | ### Describe the bug
- `transformers` version: 4.11.3
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.12.0+cu102 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxL... | https://github.com/huggingface/datasets/issues/5767 | closed | [] | 2023-04-18T06:25:12Z | 2023-04-20T16:52:05Z | 1 | sauravtii |
huggingface/transformers.js | 93 | [Feature Request] "slow tokenizer" format (`vocab.json` and `merges.txt`) | Wondering whether this code is supposed to work (or some variation on the repo URL - I tried a few different things):
```js
await import("https://cdn.jsdelivr.net/npm/@xenova/transformers@1.4.2/dist/transformers.min.js");
let tokenizer = await AutoTokenizer.from_pretrained("https://huggingface.co/cerebras/Cerebras-G... | https://github.com/huggingface/transformers.js/issues/93 | closed | [
"question"
] | 2023-04-18T05:11:31Z | 2023-04-23T07:41:27Z | null | josephrocca |
huggingface/datasets | 5,766 | Support custom feature types | ### Feature request
I think it would be nice to allow registering custom feature types with the 🤗 Datasets library. For example, allow to do something along the following lines:
```
from datasets.features import register_feature_type # this would be a new function
@register_feature_type
class CustomFeature... | https://github.com/huggingface/datasets/issues/5766 | open | [
"enhancement"
] | 2023-04-17T15:46:41Z | 2024-03-10T11:11:22Z | 4 | jmontalt |
huggingface/transformers.js | 92 | [Question] ESM module import in the browser (via jsdelivr) | Wondering how to import transformers.js as a module (as opposed to `<script>`) in the browser? I've tried this:
```js
let { AutoTokenizer } = await import("https://cdn.jsdelivr.net/npm/@xenova/transformers@1.4.2/dist/transformers.min.js");
```
But it doesn't seem to export anything. I might be making a mistake here... | https://github.com/huggingface/transformers.js/issues/92 | closed | [
"question"
] | 2023-04-17T10:06:55Z | 2023-04-22T19:17:56Z | null | josephrocca |
pytorch/serve | 2,236 | How to get image name | I use curl http://localhost:8080/predictions/resnet-18 -T kitten_small.jpg
I want to get the image name, like kitten_small.jpg, but the data in the handler is only the image. | https://github.com/pytorch/serve/issues/2236 | closed | [] | 2023-04-17T08:31:55Z | 2023-10-28T19:39:20Z | null | zzh1230 |
pytorch/data | 1,132 | torchdata.datapipes.map.Shuffler should return a MapDataPipe | ### 🐛 Describe the bug
Hello. I am working on mixing two speech datasets, both of which are indexable. Using MapDataPipe, I shuffle one of the speech datasets and zip them together with one Zipper:
```python
import torchdata.datapipes as dp
dp1 = dp.map.SequenceWrapper([0, 1, 2, 3, 4, 5]) # speech 1
... | https://github.com/meta-pytorch/data/issues/1132 | closed | [] | 2023-04-17T02:29:55Z | 2023-04-18T14:56:05Z | 7 | quancs |
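The behavior the issue asks for can be pictured without torchdata: a map-style shuffler is just a random permutation of indices, so the shuffled view stays index-addressable and can be zipped with a second map-style dataset. A plain-Python sketch of that idea (the toy data mirrors the snippet above):

```python
import random

speech1 = [0, 1, 2, 3, 4, 5]              # first map-style dataset
speech2 = ["a", "b", "c", "d", "e", "f"]  # second map-style dataset

# A map-style shuffle is a permutation: item i of the shuffled view is
# speech1[perm[i]], so random access is preserved.
perm = list(range(len(speech1)))
random.Random(0).shuffle(perm)

shuffled_view = [speech1[i] for i in perm]
mixed = list(zip(shuffled_view, speech2))
print(len(mixed))  # 6
```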
huggingface/optimum | 973 | How to run the encoder part only of the model transformed by BetterTransformer? | ### Feature request
If I want to run the encoder part of the model, e.g., "bert-large-uncased", skipping the word embedding stage, I could run with `nn.TransformerEncoder` in PyTorch eager mode. How could I implement the BetterTransformer version of the encoder?
```
encoder_layer = nn.TransformerEncoderLayer(d_mode... | https://github.com/huggingface/optimum/issues/973 | closed | [
"Stale"
] | 2023-04-17T02:29:44Z | 2025-06-04T02:15:33Z | 2 | WarningRan |
huggingface/datasets | 5,759 | Can I load in list of list of dict format? | ### Feature request
My JSONL dataset has the following format:
```
[{'input':xxx, 'output':xxx},{'input:xxx,'output':xxx},...]
[{'input':xxx, 'output':xxx},{'input:xxx,'output':xxx},...]
```
I try to use `datasets.load_dataset('json', data_files=path)` or `datasets.Dataset.from_json`, it raises
```
File "site-p... | https://github.com/huggingface/datasets/issues/5759 | open | [
"enhancement"
] | 2023-04-16T13:50:14Z | 2023-04-19T12:04:36Z | 1 | LZY-the-boys |
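One workaround for the format above: the `datasets` JSON loader expects one JSON object per line, so each list-of-dicts line can be wrapped under a single key and rewritten as standard JSON Lines before loading. A stdlib-only sketch of that preprocessing (the key name `conversation` is an arbitrary choice):

```python
import json

# Two lines of the original file, each a JSON list of dicts:
raw_lines = [
    '[{"input": "a", "output": "b"}, {"input": "c", "output": "d"}]',
    '[{"input": "e", "output": "f"}]',
]

# Wrap each top-level list under one key so every line becomes a JSON
# object, which is the shape load_dataset("json", ...) can ingest.
fixed = [json.dumps({"conversation": json.loads(line)}) for line in raw_lines]
print(fixed[1])  # {"conversation": [{"input": "e", "output": "f"}]}
```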
huggingface/setfit | 358 | Domain adaptation | Does setfit cover Adapter Transformers? https://arxiv.org/pdf/2007.07779.pdf | https://github.com/huggingface/setfit/issues/358 | closed | [
"question"
] | 2023-04-16T12:44:50Z | 2023-12-05T14:49:36Z | null | Elahehsrz |
huggingface/diffusers | 3,120 | The controlnet trained by diffusers scripts produce always same result no matter what the input images is | ### Describe the bug
I trained a ControlNet with the base model Chilloutmix-Ni and the dataset Abhilashvj/vto_hd_train, using the train_controlnet.py script provided in the diffusers repo.
After training I got a ControlNet model.
When I inference the image with the model, if I use the same prompt and seed, no matter how I cha... | https://github.com/huggingface/diffusers/issues/3120 | closed | [
"bug",
"stale"
] | 2023-04-16T11:16:58Z | 2023-07-08T15:03:12Z | null | garyhxfang |
pytorch/data | 1,131 | What does it mean for a DataPipe to be 'replicable'? | ### 📚 The doc issue
The [ReadingService docs](https://pytorch.org/data/main/reading_service.html?highlight=replicable) describe the different sharding options and note that one applies to replicable and one to non-replicable datapipes, but it's not really explained what that means.
Indirectly related, I'm also confused by th... | https://github.com/meta-pytorch/data/issues/1131 | open | [] | 2023-04-15T03:27:12Z | 2023-05-27T21:47:09Z | 4 | lendle |
pytorch/TensorRT | 1,824 | ❓ [Question] Pytorch 2.0 Compatability? | ## ❓ Question
Thanks for this repo. Is TensorRT compatible with pytorch 2.0? I see that the latest release targets pytorch 1.13. Is there some way I can use TensorRT with pytorch 2.0?
| https://github.com/pytorch/TensorRT/issues/1824 | closed | [
"question"
] | 2023-04-14T17:38:33Z | 2023-04-22T21:21:34Z | null | FabianSchuetze |
huggingface/transformers.js | 87 | Can whisper-tiny speech-to-text translate to English as well as transcribe foreign language? | I know there is a separate translation engine (t5-small), but I'm wondering if speech-to-text with whisper-tiny (not whisper-tiny.en) can return English translation alongside the foreign-language transcription? -- I read Whisper.ai can do this. It seems like it would just be a parameter, but I don't know where to loo... | https://github.com/huggingface/transformers.js/issues/87 | closed | [
"enhancement",
"question"
] | 2023-04-14T16:23:14Z | 2023-06-23T19:07:31Z | null | patrickinminneapolis |
pytorch/pytorch | 99,143 | No documentation to show how to implement aten::view for custom backend | ### 📚 The doc issue
The original code is:
```py
x = torch.empty([1024], device='privateuseone:0')
y = x.view([2, -1]) # raise error by missing aten::view
```
Then I get following errors:
```txt
NotImplementedError: Could not run 'aten::view' with arguments from the 'PrivateUse1' backend. This could be be... | https://github.com/pytorch/pytorch/issues/99143 | open | [
"module: cpp-extensions",
"module: docs",
"triaged"
] | 2023-04-14T11:36:09Z | 2024-04-16T16:18:30Z | null | ghostplant |
huggingface/text-generation-inference | 182 | Is bert-base-uncased supported? | Hi,
I'm trying to deploy bert-base-uncased model by [v0.5.0](https://github.com/huggingface/text-generation-inference/tree/v0.5.0), but got an error: ValueError: BertLMHeadModel does not support `device_map='auto'` yet.
<details>
```
root@nick-test1-8zjwg-135105-worker-0:/usr/local/bin# ./text-generation-launch... | https://github.com/huggingface/text-generation-inference/issues/182 | open | [
"question"
] | 2023-04-14T07:26:05Z | 2023-11-17T09:20:30Z | null | nick1115 |
huggingface/setfit | 355 | ONNX conversion of multi-output classifier | Hi,
I am trying to do ONNX conversion for a multilabel model using the MultiOutputClassifier
`model = SetFitModel.from_pretrained(model_id, multi_target_strategy="multi-output")`.
When I tried `export_onnx(model.model_body, model.model_head, opset=12,
output_path=output_path)`, i... | https://github.com/huggingface/setfit/issues/355 | open | [
"question"
] | 2023-04-13T22:08:13Z | 2023-04-20T17:00:48Z | null | jackiexue1993 |
pytorch/examples | 1,136 | examples/imagenet/main.py Multiple GPUs use for training | By setting up multiple GPUs for use, the model and data are automatically loaded onto these GPUs for training. What is the difference between this approach and single-node multi-GPU distributed training?
| https://github.com/pytorch/examples/issues/1136 | open | [] | 2023-04-13T12:05:39Z | 2023-04-30T01:18:17Z | 1 | Ansor-ZJJ |
pytorch/tutorials | 2,284 | [BUG] - module 'torch' has no attribute '_six' | ### Add Link
https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html
### Describe the bug
When I try to run the data loader section, it keeps returning this error of torch not having the attribute _six. I made sure that my dataroot is right and the files are there but it just doesn't seem to fix the problem... | https://github.com/pytorch/tutorials/issues/2284 | closed | [
"question"
] | 2023-04-13T04:46:34Z | 2024-11-20T14:19:23Z | null | vanilladucky |
huggingface/transformers.js | 84 | [Question] New demo type/use case: semantic search (SemanticFinder) | Hi @xenova,
first of all thanks for the amazing library - it's awesome to be able to play around with the models without a backend!
I just created [SemanticFinder](https://do-me.github.io/SemanticFinder/), a semantic search engine in the browser with the help of transformers.js and [sentence-transformers/all-Mini... | https://github.com/huggingface/transformers.js/issues/84 | closed | [
"question"
] | 2023-04-12T18:57:38Z | 2025-10-13T05:03:30Z | null | do-me |
huggingface/diffusers | 3,075 | Create a Video ControlNet Pipeline | **Is your feature request related to a problem? Please describe.**
Stable Diffusion video generation lacks precise movement control and composition control. This is not surprising, since the model was not trained or fine-tuned with videos.
**Describe the solution you'd like**
By following an analogous extension p... | https://github.com/huggingface/diffusers/issues/3075 | closed | [
"question"
] | 2023-04-12T17:51:35Z | 2023-04-13T16:21:28Z | null | jfischoff |
huggingface/setfit | 352 | False Positives | I built a model using a multi-label dataset, but I see that I am getting many false-positive outputs during inference.
For eg:
FIRST NOTICE OF LOSS SENT TO AGENT'S CUSTOMER ACTIVITY ---> This is predicted as 'Total Loss' (Total Loss is one of my labels given fed through the dataset).
I see that there... | https://github.com/huggingface/setfit/issues/352 | closed | [
"question"
] | 2023-04-12T17:42:44Z | 2023-05-18T16:19:27Z | null | cassinthangam4996 |
huggingface/setfit | 349 | Hard Negative Mining vs random sampling | Has anyone tried doing hard negative mining when generating the sentence pairs as opposed to random sampling? @tomaarsen - is random sampling the default? | https://github.com/huggingface/setfit/issues/349 | open | [
"question"
] | 2023-04-12T09:24:53Z | 2023-04-15T16:04:27Z | null | vahuja4 |
pytorch/android-demo-app | 311 | What is MemoryFormat.CHANNELS_LAST? | And what is bitmapToFloat32Tensor?
Thx. | https://github.com/pytorch/android-demo-app/issues/311 | open | [] | 2023-04-12T02:03:49Z | 2023-04-12T02:03:49Z | null | NeighborhoodCoding |
huggingface/tokenizers | 1,216 | What is the correct way to remove a token from the vocabulary? | I see that it works when I do something like this
```
del tokenizer.get_vocab()[unwanted_token]
```
~~And then it will work when running encode~~, but when I save the model the unwanted tokens remain in the json. Is there a blessed way to remove unwanted tokens?
EDIT:
Now that I tried again, I see that it does not a... | https://github.com/huggingface/tokenizers/issues/1216 | closed | [
"Stale"
] | 2023-04-11T15:40:48Z | 2024-02-10T01:47:15Z | null | tvallotton |
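For the saved-model side of this question: a serialized `tokenizer.json` keeps the vocabulary under `model.vocab`, so an unwanted token can be dropped from that mapping and the remaining ids re-packed before reloading. A hedged sketch operating on an in-memory state dict (the layout mirrors the tokenizers serialization format, but treat it as an assumption; BPE models would also need matching edits to `model.merges`):

```python
def remove_token(state: dict, unwanted: str) -> dict:
    # Drop the token from the vocab and re-number the survivors so ids
    # stay contiguous (embedding rows are indexed by these ids).
    vocab = state["model"]["vocab"]
    vocab.pop(unwanted, None)
    for new_id, token in enumerate(sorted(vocab, key=vocab.get)):
        vocab[token] = new_id
    return state

state = {"model": {"vocab": {"hello": 0, "<unwanted>": 1, "world": 2}}}
print(remove_token(state, "<unwanted>")["model"]["vocab"])
# {'hello': 0, 'world': 1}
```

Writing the edited dict back with `json.dump` produces a file the library can reload; note that deleting from the copy returned by `tokenizer.get_vocab()` (as in the snippet above) never touches the underlying model, which is why the saved JSON was unchanged.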
huggingface/optimum | 964 | onnx conversion for custom trained trocr base stage1 | ### Feature request
I have trained the base stage1 trocr on my custom dataset having multiline images. The trained model gives good results while using the default torch format for loading the model. But while converting the model to onnx, the model detects only first line or part of it in first line. I have used th... | https://github.com/huggingface/optimum/issues/964 | open | [
"onnx"
] | 2023-04-11T10:10:23Z | 2023-10-16T14:20:42Z | 1 | Mir-Umar |
huggingface/datasets | 5,727 | load_dataset fails with FileNotFound error on Windows | ### Describe the bug
Although I can import and run the datasets library in a Colab environment, I cannot successfully load any data on my own machine (Windows 10) despite following the install steps:
(1) create conda environment
(2) activate environment
(3) install with: `conda install -c huggingface -c conda-...` | https://github.com/huggingface/datasets/issues/5727 | closed | [] | 2023-04-10T23:21:12Z | 2023-07-21T14:08:20Z | 4 | joelkowalewski |
pytorch/examples | 1,131 | New examples requested | Hi everyone, @svekars and I are looking to increase the number of new contributions to pytorch/examples, this might be especially interesting to you if you've never contributed to an open source project before.
At a high level, we're looking for new interesting models.
So here's what you need to do
1. Check out ... | https://github.com/pytorch/examples/issues/1131 | closed | [
"good first issue"
] | 2023-04-10T19:49:49Z | 2025-07-05T19:17:22Z | 58 | msaroufim |
pytorch/serve | 2,224 | How to prevent torchserve unloading my models in case of inactivity? | ### 📚 The doc issue
According to my experience, even though I wasn't able to find it in the documentation, torchserve unloads a model after some time of inactivity. When the inference API for that model is next invoked, it loads the model into memory again, increasing total inference time.
Can I control that behavior an... | https://github.com/pytorch/serve/issues/2224 | open | [
"triaged",
"sagemaker"
] | 2023-04-10T12:32:26Z | 2023-05-08T21:51:39Z | null | petrovicu |
huggingface/datasets | 5,725 | How to limit the number of examples in dataset, for testing? | ### Describe the bug
I am using this command:
`data = load_dataset("json", data_files=data_path)`
However, I want to add a parameter, to limit the number of loaded examples to be 10, for development purposes, but can't find this simple parameter.
### Steps to reproduce the bug
In the description.
### Expected beh... | https://github.com/huggingface/datasets/issues/5725 | closed | [] | 2023-04-10T08:41:43Z | 2023-04-21T06:16:24Z | 3 | ndvbd |
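For the development-time limiting asked about above, the first N records of a JSON-Lines source can be taken with the stdlib alone (a sketch; the in-memory buffer stands in for the real file, and this is not the `datasets` API):

```python
import io
import json
from itertools import islice

# Stand-in for an on-disk JSON-Lines file.
raw = io.StringIO('{"x": 1}\n{"x": 2}\n{"x": 3}\n{"x": 4}\n')

# Take only the first 2 records for a quick development run.
limited = [json.loads(line) for line in islice(raw, 2)]
print(limited)  # [{'x': 1}, {'x': 2}]
```

With the `datasets` library itself, a split slice such as `split="train[:10]"` is the usual way to load only a handful of examples.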
huggingface/transformers.js | 75 | [Question] WavLM support | This is a really good project. I was wondering if WavLM is supported in the project; I wanted to run a voice conversion model in the browser, and also HiFi-GAN for voice synthesis.
| https://github.com/huggingface/transformers.js/issues/75 | closed | [
"question"
] | 2023-04-08T09:36:03Z | 2023-09-08T13:17:07Z | null | Ashraf-Ali-aa |
huggingface/datasets | 5,719 | Array2D feature creates a list of list instead of a numpy array | ### Describe the bug
I'm not sure if this is expected behavior or not. When I create a 2D array using `Array2D`, the data has list type instead of numpy array. I think it should not be the expected behavior especially when I feed a numpy array as input to the data creation function. Why is it converting my array int... | https://github.com/huggingface/datasets/issues/5719 | closed | [] | 2023-04-07T21:04:08Z | 2023-04-20T15:34:41Z | 4 | offchan42 |
huggingface/datasets | 5,716 | Handle empty audio | Some audio paths exist, but they are empty, and an error is reported when reading them. How can the filter function be used to skip empty audio paths?
When an audio file is empty, resampling breaks:
`array, sampling_rate = sf.read(f) array = librosa.resample(array, orig_sr=sampling_rate, target_... | https://github.com/huggingface/datasets/issues/5716 | closed | [] | 2023-04-07T09:51:40Z | 2023-09-27T17:47:08Z | 2 | zyb8543d |
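One way to avoid the crash described above is to screen out zero-byte files before decoding. A stdlib sketch (file names are illustrative; with `datasets`, the same predicate could be passed to `dataset.filter`):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    good = os.path.join(d, "a.wav")
    empty = os.path.join(d, "b.wav")
    with open(good, "wb") as f:
        f.write(b"\x00" * 44)  # stand-in bytes, not a valid WAV header
    open(empty, "wb").close()  # a zero-byte "audio" file

    # Keep only paths whose files actually contain data.
    kept = [p for p in (good, empty) if os.path.getsize(p) > 0]
    n_kept = len(kept)
print(n_kept)  # 1
```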
huggingface/setfit | 344 | How do I handle multiple text columns? | The text is not in one column; there are many columns. For example, the text columns are "sex", "title", "weather". What should I do? | https://github.com/huggingface/setfit/issues/344 | closed | [
"question"
] | 2023-04-07T01:51:21Z | 2023-04-10T00:45:38Z | null | freecui |
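Since SetFit expects a single text column, a common approach (an assumption, not stated in the thread) is to join the columns into one string per row, keeping field markers so the model can tell them apart:

```python
rows = [
    {"sex": "F", "title": "Engineer", "weather": "sunny"},
    {"sex": "M", "title": "Nurse", "weather": "rainy"},
]

def to_text(row):
    # "field: value" markers preserve which column each value came from.
    return " ; ".join(f"{k}: {v}" for k, v in row.items())

texts = [to_text(r) for r in rows]
print(texts[0])  # sex: F ; title: Engineer ; weather: sunny
```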
huggingface/transformers.js | 71 | [Question] How to run test suit | Hi @xenova,
I want to work on adding new features, but when I try to run the tests of the project I get this error:
```
Error: File not found. Could not locate "/Users/yonatanchelouche/Desktop/passive-project/transformers.js/models/onnx/quantized/distilbert-base-uncased-finetuned-sst-2-english/sequence-classific... | https://github.com/huggingface/transformers.js/issues/71 | closed | [
"question"
] | 2023-04-06T17:03:09Z | 2023-05-15T17:38:46Z | null | chelouche9 |
pytorch/text | 2,145 | Loading vectors into a GPU | ## 🚀 Feature
Is there any way for loading vectors based on device with torchtext.vocab.Vectors class?
| https://github.com/pytorch/text/issues/2145 | closed | [] | 2023-04-06T15:38:38Z | 2023-04-14T18:04:46Z | 4 | saeeddhqan |
pytorch/functorch | 1,123 | Can I call torch.utils.data.WeightedRandomSampler inside vmap? | Dear Experts,
I am trying to accelerate a series of weighted sampling (i.e., transition using a stochastic matrix) using vmap.
Basically, I am trying to accelerate the code from here: https://discuss.pytorch.org/t/best-way-to-implement-series-of-weighted-random-sampling-for-transition-w-stochastic-matrix/176713 usi... | https://github.com/pytorch/functorch/issues/1123 | closed | [] | 2023-04-04T23:47:08Z | 2023-04-04T23:55:33Z | 1 | kwmaeng91 |
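What the question asks vmap to vectorize is one weighted draw per row of a stochastic matrix. A stdlib sketch of that per-row semantics (degenerate weights are used so the result is deterministic):

```python
import random

matrix = [
    [0.0, 1.0, 0.0],  # row that always transitions to state 1
    [1.0, 0.0, 0.0],  # row that always transitions to state 0
]

# One weighted draw per row -- the loop a batched sampler replaces.
draws = [random.choices(range(3), weights=row, k=1)[0] for row in matrix]
print(draws)  # [1, 0]
```

In torch, `torch.multinomial` applied to a 2-D probability tensor already samples per row in a batched fashion, which may sidestep the need for vmap here.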
huggingface/transformers.js | 69 | How to convert bloomz model | While converting the [bloomz](https://huggingface.co/bigscience/bloomz-7b1l) model, I am getting the 'invalid syntax' error. Is conversion limited to only predefined model types?
If not, please provide the syntax for converting the above model with quantization.
(I will run the inference in nodejs and not in browse... | https://github.com/huggingface/transformers.js/issues/69 | closed | [
"question"
] | 2023-04-04T14:51:16Z | 2023-04-09T02:01:49Z | null | bil-ash |
huggingface/transformers.js | 68 | [Feature request] whisper word level timestamps | I am new to both transformers.js and whisper, so I am sorry for a lame question in advance.
I am trying to make the [return_timestamps](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline.__call__) parameter work...
I managed to customize [script.js](https:/... | https://github.com/huggingface/transformers.js/issues/68 | closed | [
"enhancement",
"question"
] | 2023-04-04T10:57:05Z | 2023-07-09T22:48:31Z | null | jozefchutka |
huggingface/datasets | 5,705 | Getting next item from IterableDataset took forever. | ### Describe the bug
I have a large dataset, about 500GB. The format of the dataset is parquet.
I then load the dataset and try to get the first item
```python
def get_one_item():
dataset = load_dataset("path/to/datafiles", split="train", cache_dir=".", streaming=True)
dataset = dataset.filter(lambda... | https://github.com/huggingface/datasets/issues/5705 | closed | [] | 2023-04-04T09:16:17Z | 2023-04-05T23:35:41Z | 2 | HongtaoYang |
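Part of what the report above observes is inherent to filtering a stream: `next(iter(...))` cannot return until the filter has consumed records up to the first match. A small sketch of that cost:

```python
scanned = 0

def stream():
    """Yields records, counting how many were produced."""
    global scanned
    for i in range(1_000_000):
        scanned += 1
        yield i

# The filter must pull half a million records before the first hit.
first_match = next(x for x in stream() if x >= 500_000)
print(first_match, scanned)  # 500000 500001
```

On a 500 GB Parquet dataset in streaming mode, the same effect (plus remote I/O and decoding) can make fetching the first filtered item take a very long time.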
huggingface/optimum | 952 | Enable AMP for BetterTransformer | ### Feature request
Allow for the `BetterTransformer` models to be inferenced with AMP.
### Motivation
Models transformed with `BetterTransformer` raise error when used with AMP:
`bettertransformers.models.base`
```python
...
def forward_checker(self, *args, **kwargs):
if torch.is_autocast_ena... | https://github.com/huggingface/optimum/issues/952 | closed | [] | 2023-04-04T09:14:00Z | 2023-07-26T17:08:42Z | 6 | viktor-shcherb |
huggingface/controlnet_aux | 18 | When using openpose, what is the format of the input image? RGB format, or BGR format? | [two broken image attachments omitted]
I saw that the image in BGR format is used as input in the open_pose/body.py file, but the hug... | https://github.com/huggingface/controlnet_aux/issues/18 | open | [] | 2023-04-04T03:58:38Z | 2023-04-04T11:23:33Z | null | ZihaoW123 |
huggingface/datasets | 5,702 | Is it possible or how to define a `datasets.Sequence` that could potentially be either a dict, a str, or None? | ### Feature request
Hello! Apologies if my question sounds naive:
I was wondering if it’s possible, or how one would go about defining a 'datasets.Sequence' element in datasets.Features that could potentially be either a dict, a str, or None?
Specifically, I’d like to define a feature for a list that contains 18... | https://github.com/huggingface/datasets/issues/5702 | closed | [
"enhancement"
] | 2023-04-04T03:20:43Z | 2023-04-05T14:15:18Z | 4 | gitforziio |
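One common workaround for the question above (an assumption, not from the thread): when a list may hold dicts, strings, or None, serialize each element to a JSON string so the feature stays a plain sequence of strings, and decode on read:

```python
import json

mixed = [{"a": 1}, "plain text", None]

# Encode: every element becomes a str, a single uniform type.
encoded = [json.dumps(x) for x in mixed]

# Decode: the original heterogeneous values round-trip losslessly.
decoded = [json.loads(x) for x in encoded]
print(decoded == mixed)  # True
```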
pytorch/text | 2,139 | torchtext.vocab.Vectors(..).__getitem__ does not work | ## ❓ Questions and Help
I loaded a model:
```python
vects = torchtext.vocab.Vectors('text5-emb.txt')
```
And when I want to know whether a vocab is in the dataset or not, I run this:
```python
if "the" in vects:
```
and the code stops here. I waited for a long time but it does not do anything.
Then, I loa... | https://github.com/pytorch/text/issues/2139 | closed | [] | 2023-04-03T17:54:45Z | 2023-04-04T13:52:53Z | 0 | saeeddhqan |
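A likely explanation for the hang (an assumption, not confirmed in the thread): when a class defines `__getitem__` but not `__contains__`, Python's `in` operator falls back to iterating `obj[0], obj[1], ...`, which never terminates if `__getitem__` never raises `IndexError`. A sketch:

```python
class NoContains:
    def __getitem__(self, i):
        return i  # never raises IndexError, so `in` would iterate forever

class WithContains(NoContains):
    def __contains__(self, item):
        return item == "the"  # answers membership directly

# Safe: __contains__ short-circuits the endless fallback iteration.
found = "the" in WithContains()
print(found)  # True
```

For torchtext's `Vectors`, checking membership in its `stoi` dictionary (e.g. `"the" in vects.stoi`) avoids the fallback entirely.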
huggingface/dataset-viewer | 1,011 | Remove authentication by cookie? | Currently, to be able to return the contents for gated datasets, all the endpoints check the request credentials if needed. The accepted credentials are: HF token, HF cookie, or a JWT in `X-Api-Key`. See https://github.com/huggingface/datasets-server/blob/ecb861b5e8d728b80391f580e63c8d2cad63a1fc/services/api/src/api/au... | https://github.com/huggingface/dataset-viewer/issues/1011 | closed | [
"question",
"P2"
] | 2023-04-03T12:12:56Z | 2024-03-13T09:48:38Z | null | severo |
huggingface/transformers.js | 63 | [Model request] Helsinki-NLP/opus-mt-ru-en (marian) | Sorry for this noob question; can somebody give me some guidance on how to convert and use
https://huggingface.co/Helsinki-NLP/opus-mt-ru-en/tree/main
thank you | https://github.com/huggingface/transformers.js/issues/63 | closed | [
"enhancement",
"question"
] | 2023-03-31T09:18:28Z | 2023-08-20T08:00:38Z | null | eviltik |
huggingface/safetensors | 222 | Might not be related, but wanted to ask: could there be a C++ version? | Hello, I want to ask 2 questions:
1. Will safetensors provide a C++ version? It looks more convenient than pth or onnx;
2. Is it possible to load safetensors into some inference lib other than PyTorch, such as onnxruntime etc.?
"Stale"
] | 2023-03-31T05:14:29Z | 2023-12-21T01:47:58Z | 5 | lucasjinreal |
huggingface/transformers.js | 62 | [Feature request] nodejs caching | Hi, thank you for your work.
I'm a nodejs user and I read that there is no model cache implementation right now, and that you are working on it.
Do you have an idea of when you will be able to push a release with a cache implementation?
Just asking because I was about to code it on my side. | https://github.com/huggingface/transformers.js/issues/62 | closed | [
"enhancement",
"question"
] | 2023-03-31T04:27:57Z | 2023-05-15T17:26:55Z | null | eviltik |
huggingface/dataset-viewer | 1,001 | Add total_rows in /rows response? | Should we add the number of rows in a split (eg. in field `total_rows`) in response to /rows?
It would help avoid sending a request to /size to get it.
It would also help fix a bad query.
eg: https://datasets-server.huggingface.co/rows?dataset=glue&config=ax&split=test&offset=50000&length=100 returns:
```js... | https://github.com/huggingface/dataset-viewer/issues/1001 | closed | [
"question",
"improvement / optimization"
] | 2023-03-30T13:54:19Z | 2023-05-07T15:04:12Z | null | severo |
pytorch/xla | 4,837 | How to run XLA compilation thru MLIR | ## ❓ Questions and Help
Hi,
Is there a way to switch pytorch->XLA to compilation through the MLIR chain? (StableHLO/MHLO/LMHLO etc.) Or will it appear only after the switch to the openxla/xla repository? (I see such pull requests in the list, but according to OpenXLA community meeting slides, these repositories should have the s... | https://github.com/pytorch/xla/issues/4837 | closed | [] | 2023-03-30T13:16:28Z | 2023-05-22T19:32:41Z | null | MUR-83 |
huggingface/dataset-viewer | 999 | Use the huggingface_hub webhook server? | See https://github.com/huggingface/huggingface_hub/pull/1410
The /webhook endpoint could live in its own pod with the huggingface_hub webhook server. Is it useful for our project? Feel free to comment. | https://github.com/huggingface/dataset-viewer/issues/999 | closed | [
"question",
"refactoring / architecture"
] | 2023-03-30T08:44:49Z | 2023-06-10T15:04:09Z | null | severo |
huggingface/datasets | 5,687 | Document to compress data files before uploading | In our docs to [Share a dataset to the Hub](https://huggingface.co/docs/datasets/upload_dataset), we tell users to upload their data files directly, like CSV, JSON, JSON-Lines, text,... However, these extensions are not tracked by Git LFS by default, as they are not in the `.gitattributes` file. Therefore, if they are t... | https://github.com/huggingface/datasets/issues/5687 | closed | [
"documentation"
] | 2023-03-30T06:41:07Z | 2023-04-19T07:25:59Z | 3 | albertvillanova |
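Following the issue above, compressing a JSON-Lines file before upload can be done with the stdlib alone (file names are illustrative):

```python
import gzip
import json
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "data.jsonl")
    dst = src + ".gz"

    # Write a tiny JSON-Lines file, then gzip it.
    with open(src, "w") as f:
        for i in range(3):
            f.write(json.dumps({"id": i}) + "\n")
    with open(src, "rb") as f_in, gzip.open(dst, "wb") as f_out:
        f_out.write(f_in.read())

    # The compressed copy still yields the same records.
    with gzip.open(dst, "rt") as f:
        n_records = sum(1 for _ in f)
print(n_records)  # 3
```

The `datasets` JSON loader can generally read such gzipped files directly, so compression does not complicate loading.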
pytorch/xla | 4,831 | Increasing rendezvous timeout patience? | ## ❓ Questions and Help
Hi, this might be a basic question but how do I increase the timeout of `xm.rendezvous()`? I'm training a large model and due to the system we're training on saving can take >5 minutes which results in timeout errors such as
`2023-03-29 13:52:59 172.16.96.171 [1] RuntimeError: tensorflow/c... | https://github.com/pytorch/xla/issues/4831 | closed | [
"question",
"distributed"
] | 2023-03-29T18:38:42Z | 2025-05-05T13:20:41Z | null | bram-w |
huggingface/datasets | 5,685 | Broken Image render on the hub website | ### Describe the bug
Hi :wave:
Not sure if this is the right place to ask, but I am trying to load a huge amount of datasets on the hub (:partying_face: ) but I am facing a little issue with the `image` type
 issue I think we should add a note about the order of patterns that is used to find splits, see [my comment](https://github.com/huggingface/datasets/issues/5650#issuecomment-1488412527). Also we should reference this page in pages about packaged load... | https://github.com/huggingface/datasets/issues/5681 | closed | [
"documentation"
] | 2023-03-29T11:44:49Z | 2023-04-03T18:31:11Z | 2 | polinaeterna |
pytorch/tutorials | 2,273 | [BUG] - Chatbot Tutorial - Unterminated string starting at: line 1 column 91 (char 90) | ### Add Link
https://pytorch.org/tutorials/beginner/chatbot_tutorial.html#chatbot-tutorial
### Describe the bug
I downloaded the zip and extracted it.
Now I got this error:
```
Processing corpus into lines and conversations...
---------------------------------------------------------------------------
JSO... | https://github.com/pytorch/tutorials/issues/2273 | open | [
"bug",
"question"
] | 2023-03-28T21:29:07Z | 2024-11-09T02:31:22Z | null | levalencia |
pytorch/audio | 3,206 | How to train a wav2vec 2.0 pretrained model from scratch? | ### 🚀 The feature
There is an example for hubert training [here](https://github.com/pytorch/audio/tree/main/examples/self_supervised_learning), but there is no example about wav2vec 2.0.
### Motivation, pitch
I'm working on SSL with/without a pretrained model, to continue training the pretrained model like wav2vec 2.0 on ... | https://github.com/pytorch/audio/issues/3206 | closed | [
"triaged"
] | 2023-03-27T13:26:38Z | 2023-04-23T09:57:51Z | null | kobenaxie |
pytorch/pytorch | 97,654 | Where is engine_layer_visualize.py? Was it removed? | ### 🐛 Describe the bug
Where is engine_layer_visualize.py? Was it removed?
### Versions
Where is engine_layer_visualize.py? Was it removed? | https://github.com/pytorch/pytorch/issues/97654 | closed | [] | 2023-03-27T08:45:04Z | 2023-03-27T18:20:59Z | null | cqray1990 |
huggingface/datasets | 5,671 | How to use `load_dataset('glue', 'cola')` | ### Describe the bug
I'm new to use HuggingFace datasets but I cannot use `load_dataset('glue', 'cola')`.
- I was stuck on the following problem:
```python
from datasets import load_dataset
cola_dataset = load_dataset('glue', 'cola')
------------------------------------------------------------------------... | https://github.com/huggingface/datasets/issues/5671 | closed | [] | 2023-03-26T09:40:34Z | 2023-03-28T07:43:44Z | 2 | makinzm |
pytorch/data | 1,110 | `scan` support | ### 🚀 The feature
How does one create an `IterDataPipe` with [`scan`/`fold`](http://learnyouahaskell.com/higher-order-functions) semantics?
### Motivation, pitch
Necessary for pipelines that require some kind of state, eg. label encoding for an unknown number of labels.
### Alternatives
_No response_
### Additi... | https://github.com/meta-pytorch/data/issues/1110 | open | [
"good first issue",
"help wanted"
] | 2023-03-24T18:24:33Z | 2023-03-24T22:19:53Z | 3 | samuela |
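Until a dedicated datapipe exists, scan semantics (including the label-encoding use case mentioned) can be emulated with `itertools.accumulate`, threading state through the stream (a sketch, not the torchdata API):

```python
from itertools import accumulate

labels = ["cat", "dog", "cat", "bird"]

def encode(state, label):
    # state = (mapping built so far, id emitted for the previous label)
    mapping, _ = state
    if label not in mapping:
        mapping = {**mapping, label: len(mapping)}  # assign a fresh id
    return (mapping, mapping[label])

# accumulate yields the initial state first; [1:] drops it.
ids = [i for _, i in accumulate(labels, encode, initial=({}, None))][1:]
print(ids)  # [0, 1, 0, 2]
```

The same fold could be wrapped in a custom `IterDataPipe.__iter__` to carry state across an unbounded stream.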
pytorch/android-demo-app | 305 | I am doing object detection. The app works fine with the Android Studio emulator, but when run on a device it shows the interface as expected and all other buttons work; when detection is pressed, nothing happens. What might be the issue? | https://github.com/pytorch/android-demo-app/issues/305 | open | [] | 2023-03-24T12:07:25Z | 2023-05-05T15:17:16Z | null | som1233 | |
huggingface/optimum | 918 | Support for LLaMA | ### Feature request
A support for exporting LLaMA to ONNX
### Motivation
It would be great to have one, to apply optimizations and so on
### Your contribution
I could try implementing support, but I would need help with the model config, even though it should be pretty similar to what is already done with GPT-J | https://github.com/huggingface/optimum/issues/918 | closed | [
"onnx"
] | 2023-03-23T21:07:30Z | 2023-04-17T14:32:37Z | 2 | nenkoru |