| repo (string) | number (int64) | title (string) | body (string) | url (string) | state (string) | labels (list) | created_at (timestamp, UTC) | updated_at (timestamp, UTC) | comments (int64) | user (string) |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/dataset-viewer | 455 | what to do with /is-valid? | Currently, the endpoint /is-valid is not documented in https://redocly.github.io/redoc/?url=https://datasets-server.huggingface.co/openapi.json (but it is in https://github.com/huggingface/datasets-server/blob/main/services/api/README.md).
It's not used in the dataset viewer in moonlanding, but https://github.com/hu... | https://github.com/huggingface/dataset-viewer/issues/455 | closed | [
"question"
] | 2022-07-22T19:29:08Z | 2022-08-02T14:16:24Z | null | severo |
pytorch/torchx | 567 | [exploratory] TorchX Dashboard | ## Description
<!-- concise description of the feature/enhancement -->
Add a new `torchx dashboard` command that will launch a local HTTP server that allows users to view all of their jobs with statuses, logs and integration with any ML specific extras such as artifacts, Tensorboard, etc.
## Motivation/Backgroun... | https://github.com/meta-pytorch/torchx/issues/567 | open | [
"enhancement",
"RFC",
"cli"
] | 2022-07-22T19:28:51Z | 2022-08-02T21:23:14Z | 1 | d4l3k |
pytorch/torchx | 566 | add a TORCHX_JOB_ID environment variable to all jobs launched via runner | ## Description
<!-- concise description of the feature/enhancement -->
As part of the future experiment tracking we want to be able to have the application know its own identity. When we launch a job we return the full job id (i.e. `kubernetes://session/app_id`) but the app itself doesn't have this exact same job... | https://github.com/meta-pytorch/torchx/issues/566 | open | [
"enhancement",
"module: runner",
"tracking"
] | 2022-07-22T18:22:24Z | 2022-07-22T21:28:02Z | 0 | d4l3k |
pytorch/functorch | 979 | ImportError: ~/.local/lib/python3.9/site-packages/functorch/_C.so: undefined symbol: _ZNK3c1010TensorImpl16sym_sizes_customEv | Hi All,
I was running an older version of PyTorch ( - built from source) with FuncTorch ( - built from source), and somehow I've broken the older version of functorch. When I import functorch I get the following error,
```
import functorch
#returns ImportError: ~/.local/lib/python3.9/site-packages/functorch/_C.so... | https://github.com/pytorch/functorch/issues/979 | closed | [] | 2022-07-22T14:51:13Z | 2022-07-25T19:22:04Z | 24 | AlphaBetaGamma96 |
huggingface/datasets | 4,736 | Dataset Viewer issue for deepklarity/huggingface-spaces-dataset | ### Link
https://huggingface.co/datasets/deepklarity/huggingface-spaces-dataset/viewer/deepklarity--huggingface-spaces-dataset/train
### Description
Hi Team,
I'm getting the following error on a uploaded dataset. I'm getting the same status for a couple of hours now. The dataset size is `<1MB` and the format is cs... | https://github.com/huggingface/datasets/issues/4736 | closed | [
"dataset-viewer"
] | 2022-07-22T12:14:18Z | 2022-07-22T13:46:38Z | 1 | dk-crazydiv |
pytorch/TensorRT | 1,199 | Can't import torch_tensorrt | ERROR:
from torch.fx.passes.pass_manager import PassManager
ModuleNotFoundError: No module named 'torch.fx.passes.pass_manager'
- PyTorch Version : 1.11
- CPU Architecture: jetson AGX xavier
- OS (e.g., Linux):
- How you installed PyTorch: nvidia forum wheel
- Build command you used (if compiling ... | https://github.com/pytorch/TensorRT/issues/1199 | closed | [
"question",
"channel: linux-jetpack",
"component: fx"
] | 2022-07-22T08:00:34Z | 2022-09-02T18:04:29Z | null | sanath-tech |
huggingface/datasets | 4,732 | Document better that loading a dataset passing its name does not use the local script | As reported by @TrentBrick here https://github.com/huggingface/datasets/issues/4725#issuecomment-1191858596, it could be clearer that loading a dataset by passing its name does not use its (modified) local script.
What he did:
- he installed `datasets` from source
- he modified locally `datasets/the_pile/... | https://github.com/huggingface/datasets/issues/4732 | closed | [
"documentation"
] | 2022-07-22T06:07:31Z | 2022-08-23T16:32:23Z | 3 | albertvillanova |
pytorch/TensorRT | 1,198 | ❓ [Question] Where can we get VGG-16 checkpoint pretrained on CIFAR-10? | ## ❓ Question
To get $pwd/vgg16_ckpts/ckpt_epoch110.pth, I tried to run the script named [python3 finetune_qat.py](https://github.com/pytorch/TensorRT/tree/v1.1.1/examples/int8/training/vgg16#quantization-aware-fine-tuning-for-trying-out-qat-workflows).
However, the script needs VGG-16 pretrained model at 100-epo... | https://github.com/pytorch/TensorRT/issues/1198 | closed | [
"question"
] | 2022-07-22T05:06:34Z | 2022-07-22T05:13:32Z | null | zinuok |
pytorch/TensorRT | 1,197 | ❓ [Question] Where can we get 'trained_vgg16_qat.jit.pt'? | ## ❓ Question
Where can we get 'trained_vgg16_qat.jit.pt' ?
the link in [test_qat_trt_accuracy.py](https://github.com/pytorch/TensorRT/blob/master/tests/py/test_qat_trt_accuracy.py#L74)
doesn't work now. | https://github.com/pytorch/TensorRT/issues/1197 | closed | [
"question"
] | 2022-07-22T04:38:53Z | 2022-07-22T04:46:46Z | null | zinuok |
pytorch/serve | 1,753 | how to return the predictions in JSON format (in JSON string and JSON header)? | I was using torchserve for a production service; I was able to return the predictions as a JSON string, but I was unable to get the response with a JSON header. | https://github.com/pytorch/serve/issues/1753 | closed | [
"triaged_wait",
"support"
] | 2022-07-22T04:04:26Z | 2022-07-24T16:50:32Z | null | Vincentwei1021 |
pytorch/functorch | 977 | Hessian (w.r.t inputs) calculation in PyTorch differs from FuncTorch | Hi All,
I've been trying to calculate the Hessian of the output of my network with respect to its inputs within FuncTorch. I had a version within PyTorch that supports batches, however, they seem to disagree with each other and I have no idea why they don't give the same results. Something is clearly wrong, I know m... | https://github.com/pytorch/functorch/issues/977 | closed | [] | 2022-07-21T12:11:09Z | 2022-08-01T19:37:18Z | 18 | AlphaBetaGamma96 |
pytorch/benchmark | 1,046 | How to add a new backend? | Hello, I want to add a new backend to run the benchmark **without** modifying this repo's code. In the torchdynamo repo, I use the @create_backend decorator to do this, but I can't find a suitable interface in this repo. | https://github.com/pytorch/benchmark/issues/1046 | closed | [] | 2022-07-20T08:45:36Z | 2022-07-27T22:47:49Z | null | zzpmiracle |
huggingface/datasets | 4,719 | Issue loading TheNoob3131/mosquito-data dataset | 
So my dataset is public in the Huggingface Hub, but when I try to load it using the load_dataset command, it shows that it is downloading the files, but throws a ValueError. When I went to my directory to ... | https://github.com/huggingface/datasets/issues/4719 | closed | [] | 2022-07-19T17:47:37Z | 2022-07-20T06:46:57Z | 2 | thenerd31 |
pytorch/TensorRT | 1,189 | ❓ [Question] Why the GPU memory has doubled when I loaded model from Torch-TensorRT by Pytorch? | ## ❓ Question
<!-- Your question -->
When I'm using Pytorch to load a model from a Torch-TensorRT (torch.jit.load(*.ts)) file, the model's GPU memory has doubled (1602MB to 3242MB of GPU memory from nvidia-smi). At the same time, the gradients of the model tensors are not included. What I'm concerned about is that the context m... | https://github.com/pytorch/TensorRT/issues/1189 | closed | [
"question",
"No Activity",
"performance"
] | 2022-07-19T10:21:14Z | 2023-03-26T00:02:18Z | null | Jancapcc |
huggingface/datasets | 4,711 | Document how to create a dataset loading script for audio/vision | Currently, in our docs for Audio/Vision/Text, we explain how to:
- Load data
- Process data
However we only explain how to *Create a dataset loading script* for text data.
I think it would be useful to add the same for Audio/Vision, as these have some specificities different from Text.
See, for example:
... | https://github.com/huggingface/datasets/issues/4711 | closed | [
"documentation"
] | 2022-07-19T08:03:40Z | 2023-07-25T16:07:52Z | 1 | albertvillanova |
huggingface/optimum | 306 | `ORTModelForConditionalGeneration` did not have `generate()` module after converting from `T5ForConditionalGeneration` | ### System Info
```shell
Machine: Apple M1 Pro
Optimum version: 1.3.0
Transformers version: 4.20.1
Onnxruntime version: 1.11.1
# Question
How to inference a quantized onnx model from class ORTModelForConditionalGeneration (previously using T5ForConditionalGeneration). I've successfully converted T5ForConditiona... | https://github.com/huggingface/optimum/issues/306 | closed | [
"bug"
] | 2022-07-19T07:14:48Z | 2022-07-19T09:29:09Z | 2 | tiketdatailham |
pytorch/TensorRT | 1,188 | ❓ [Question] Cannot install torch-tensorrt package | Hi! Can someone explain why this error occurs
```shell
(tf-gpu-11.6) C:\Users\myxzlpltk>pip install torch-tensorrt -f https://github.com/NVIDIA/Torch-TensorRT/releases
Looking in links: https://github.com/NVIDIA/Torch-TensorRT/releases
Collecting torch-tensorrt
Using cached torch-tensorrt-0.0.0.post1.tar.gz (9.0 k... | https://github.com/pytorch/TensorRT/issues/1188 | closed | [
"question",
"channel: windows"
] | 2022-07-19T01:48:13Z | 2024-02-26T17:16:23Z | null | myxzlpltk |
pytorch/TensorRT | 1,186 | ❓ [Question] Python Package for V1.1.1 Release? | ## ❓ Question
Does the latest release include the python package for supporting JP5.0 too?
- PyTorch Version (e.g., 1.0): 1.11
- CPU Architecture: Arm64
- Python version: 3.8
- CUDA version: 11.4
| https://github.com/pytorch/TensorRT/issues/1186 | closed | [
"question",
"release: patch",
"channel: linux-jetpack"
] | 2022-07-18T15:20:13Z | 2022-07-18T21:47:06Z | null | haichuanwang001 |
huggingface/datasets | 4,694 | Distributed data parallel training for streaming datasets | ### Feature request
Any documentation for `load_dataset(streaming=True)` in (multi-node multi-GPU) DDP training?
### Motivation
Given a bunch of data files, it is expected to split them onto different GPUs. Is there a guide or documentation?
### Your contribution
Does it require manually spli... | https://github.com/huggingface/datasets/issues/4694 | open | [
"enhancement"
] | 2022-07-17T01:29:43Z | 2023-04-26T18:21:09Z | 6 | cyk1337 |
pytorch/data | 661 | DataLoader2 with reading service | For user dev and onboarding experience of the data component, we will provide examples, tutorials, up-to-date documentations as well as the operational support. We added a simple train loop example. This is to further track adding the uscase and example of DataLoader2 with different reading services. | https://github.com/meta-pytorch/data/issues/661 | closed | [
"documentation"
] | 2022-07-15T17:29:41Z | 2022-11-10T23:07:24Z | 2 | dahsh |
huggingface/datasets | 4,684 | How to assign new values to Dataset? | 
Hi, if I want to change some values of the dataset, or add new columns to it, how can I do it?
For example, I want to change all the labels of the SST2 dataset to `0`:
```python
from datasets import l... | https://github.com/huggingface/datasets/issues/4684 | closed | [
"enhancement"
] | 2022-07-15T04:17:57Z | 2023-03-20T15:50:41Z | 2 | beyondguo |
pytorch/data | 655 | DataLoader2 with OSS datasets/datapipes | For the user dev and onboarding experience of the data component, we will provide examples, tutorials, and up-to-date documentation as well as operational support. We added a simple train loop example. This is to further track adding the use case and example of DataLoader2 with open source datasets/datapipes. | https://github.com/meta-pytorch/data/issues/655 | closed | [] | 2022-07-14T17:51:13Z | 2022-11-10T23:06:20Z | 2 | dahsh |
huggingface/datasets | 4,682 | weird issue/bug with columns (dataset iterable/stream mode) | I have a dataset online (CloverSearch/cc-news-mutlilingual) that has a bunch of columns, two of which are "score_title_maintext" and "score_title_description". the original files are jsonl formatted. I was trying to iterate through via streaming mode and grab all "score_title_description" values, but I kept getting key... | https://github.com/huggingface/datasets/issues/4682 | open | [] | 2022-07-14T13:26:47Z | 2022-07-14T13:26:47Z | 0 | eunseojo |
pytorch/torchx | 557 | how do I run the script and use script args | ## ❓ Questions and Help
how do I run the script and use the script_args --
torchx run --scheduler local_cwd --scheduler_args log_dir=/tmp dist.ddp -j 1x2 --script dlrm_main.py --epoch 30
when i test dlrm by next code
```shell
torchx run --scheduler local_cwd --scheduler_args log_dir=/tmp dist... | https://github.com/meta-pytorch/torchx/issues/557 | closed | [] | 2022-07-14T08:50:39Z | 2023-07-03T19:51:50Z | 3 | davidxiaozhi |
pytorch/examples | 1,022 | How to build a generator for a layout 2 image GANs with images of size 256 and 512 | Hello, I am new to GANs and I need your help:
Could you please help me make the model accept image sizes of 256x256 and 512x512?
I included the generator model for 128x128
`import torch
import torch.nn as nn
import torch.nn.functional as F
from math import *
from models.bilinear import crop_bbox_batch
... | https://github.com/pytorch/examples/issues/1022 | closed | [] | 2022-07-13T15:45:09Z | 2022-07-16T17:13:15Z | null | TahaniFennir |
pytorch/data | 648 | Chainer/Concater from single datapipe? | The `Concater` datapipe takes multiple DPs as input. Is there a class that would take a **single** datapipe of iterables instead? Something like this:
```py
class ConcaterIterable(IterDataPipe):
def __init__(self, source_datapipe):
self.source_datapipe = source_datapipe
def __iter__(self):
... | https://github.com/meta-pytorch/data/issues/648 | closed | [
"good first issue"
] | 2022-07-13T14:19:43Z | 2023-03-14T20:25:01Z | 9 | NicolasHug |
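The `ConcaterIterable` requested in pytorch/data#648 above can be sketched in plain Python (stdlib only; this is not the torchdata `IterDataPipe` API, which the real implementation would subclass):

```python
from itertools import chain


class ConcaterIterable:
    """Flatten a single iterable of iterables into one stream.

    Plain-Python stand-in for the datapipe asked about in pytorch/data#648.
    """

    def __init__(self, source):
        self.source = source

    def __iter__(self):
        # chain.from_iterable lazily yields from each inner iterable in turn
        return chain.from_iterable(self.source)


print(list(ConcaterIterable([[1, 2], [3], [4, 5]])))  # [1, 2, 3, 4, 5]
```

As the issue thread notes, `itertools.chain.from_iterable` already captures the whole behavior; the datapipe wrapper only exists so the flattening step composes with other pipes.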
huggingface/optimum | 290 | Quantized Model size difference when using Optimum vs. Onnxruntime | Package versions


… |
huggingface/datasets | 4,675 | … | `…`, an arrow table is returned and I am unable to use it by passing it to a PyTorch DataLoader: please see the code below.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
ds = load_dataset(
... | https://github.com/huggingface/datasets/issues/4675 | open | [
"bug"
] | 2022-07-12T15:04:04Z | 2022-07-14T14:17:46Z | 1 | BlueskyFR |
pytorch/functorch | 956 | Batching rule for searchsorted implementation | Hi,
Thanks for the great work, really enjoying functorch in my work. I have encountered the following when using vmap on a function which uses torch.searchsorted:
UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::searchsorted.Tensor. Please file us an issue o... | https://github.com/pytorch/functorch/issues/956 | closed | [
"actionable"
] | 2022-07-12T06:36:04Z | 2022-07-18T13:49:42Z | 6 | mingu6 |
pytorch/data | 637 | [TODO] Create dependency on TorchArrow? |
This issue is generated from the TODO line
https://github.com/pytorch/data/blob/2f29adba451e1b87f1c0c654557d9dd98673fdd8/torchdata/datapipes/iter/util/dataframemaker.py#L15
| https://github.com/meta-pytorch/data/issues/637 | open | [] | 2022-07-11T17:34:07Z | 2022-07-11T17:34:07Z | 0 | VitalyFedyunin |
huggingface/datasets | 4,671 | Dataset Viewer issue for wmt16 | ### Link
https://huggingface.co/datasets/wmt16
### Description
[Reported](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/12#62cb83f14c7f35284e796f9c) by a user of AutoTrain Evaluate. AFAIK this dataset was working 1-2 weeks ago, and I'm not sure how to interpret this error.
```
Status cod... | https://github.com/huggingface/datasets/issues/4671 | closed | [
"dataset-viewer"
] | 2022-07-11T08:34:11Z | 2022-09-13T13:27:02Z | 6 | lewtun |
huggingface/optimum | 276 | Force write of vanilla onnx model with `ORTQuantizer.export()` | ### Feature request
Force write of the non-quantized onnx model with `ORTQuantizer.export()`, or add an option to force write.
### Motivation
Currently, if the `onnx_model_path` already exists, we don't write the non-quantized model into the indicated path.
https://github.com/huggingface/optimum/blob/04a2a6d290c... | https://github.com/huggingface/optimum/issues/276 | closed | [] | 2022-07-09T08:44:27Z | 2022-07-11T10:38:48Z | 2 | fxmarty |
pytorch/data | 580 | [Linter] Ability to disable some lints | ### 🚀 The feature
There are several options to disable specific linters.
Option 1. Disable with `linter-ignore: code`
Pros:
- Similar to known syntax of various linters
Cons:
- Need to modify code of datasets to disable something
```
datapipe = datapipe.sharding_filter().shuffle() # linter-ignore... | https://github.com/meta-pytorch/data/issues/580 | open | [] | 2022-07-08T17:25:25Z | 2022-07-15T21:23:17Z | 3 | VitalyFedyunin |
pytorch/pytorch | 81,103 | [Discussion] How to add MPS extension with custom kernel? | ### 🚀 The feature, motivation and pitch
Hi,
I am working on adding MPS op for MPS backend with a custom kernel.
Here is an example:
https://github.com/grimoire/TorchMPSCustomOpsDemo
I am new to Metal. I am not sure if it is a good way (or the right way) to add such an op. There are some things I want to discuss:
... | https://github.com/pytorch/pytorch/issues/81103 | closed | [
"module: cpp-extensions",
"triaged",
"enhancement",
"topic: docs",
"module: mps"
] | 2022-07-08T12:32:14Z | 2023-07-28T17:11:42Z | null | grimoire |
pytorch/pytorch.github.io | 1,071 | Where is documented the resize and crop in EfficientNet for torchvision v0.12.0 | ## 📚 Documentation
Hello, I do not see anywhere what resize and center crop were used for training the efficientNet_bx models.
Where is that information?
I saw it in the torchvision v0.13.0 documentation or code ([for example](https://github.com/pytorch/vision/blob/main/torchvision/models/efficientnet.py#L... | https://github.com/pytorch/pytorch.github.io/issues/1071 | closed | [] | 2022-07-08T12:20:23Z | 2022-07-22T22:06:23Z | null | mjack3 |
pytorch/vision | 6,249 | Error when create_feature_extractor in AlexNet | ### 🐛 Describe the bug
When I try to obtain the feature of layer "classifier.4" in AlexNet, the program has reported an error. The code is as follows:
```
import torch
from torchvision.models import alexnet, AlexNet_Weights
from torchvision.models.feature_extraction import create_feature_extractor
model = alex... | https://github.com/pytorch/vision/issues/6249 | closed | [
"question",
"module: models",
"topic: feature extraction"
] | 2022-07-08T09:28:06Z | 2022-07-08T10:11:43Z | null | githwd2016 |
pytorch/vision | 6,247 | Probable missing argument for swin transformer | Hello,
When I inspect the swin transformer code in the original swin repo, mmdetection, or detectron2, I notice that there is a parameter called `drop_path_rate` which I cannot see in the torchvision repo. Maybe I am overlooking it. Is there a similar parameter, and is it an important one?
Thank... | https://github.com/pytorch/vision/issues/6247 | closed | [
"question",
"module: models"
] | 2022-07-08T08:21:58Z | 2022-07-11T13:17:40Z | null | artest08 |
pytorch/functorch | 940 | Question on how to batch over both: inputs and tangent vectors | I want to compute the jacobian vector product of a function F from R^d to R^D. But I need to do this at a batch of points x_1, ..., x_n in R^d and a batch of tangent vectors v_1, ..., v_m in R^d. Namely, for all i = 1, ..., n and j = 1, ..., m I need to compute the nxm jacobian vector products: J_F(x_i) * v_j.
Is th... | https://github.com/pytorch/functorch/issues/940 | open | [] | 2022-07-07T14:57:28Z | 2022-07-12T17:47:23Z | null | sgstepaniants |
pytorch/serve | 1,725 | Serving other framework models with Torchserve? | Hi everyone.
As in the title, I want to ask if torchserve can serve other framework models or pytorch models only?
For example, I have a model written in mxnet. This is the snippet code of `initialize` method in my custom handler.
```python
def initialize(self, context):
properties = context.system_pro... | https://github.com/pytorch/serve/issues/1725 | closed | [
"help wanted",
"question"
] | 2022-07-06T09:08:44Z | 2022-07-13T07:58:10Z | null | vuongdanghuy |
huggingface/optimum | 262 | How can I set the number of threads for an Optimum exported model? | ### System Info
```shell
optimum==1.2.3
onnxruntime==1.11.1
onnx==1.12.0
transformers==4.20.1
python version 3.7.13
```
### Who can help?
@JingyaHuang @echarlaix
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task i... | https://github.com/huggingface/optimum/issues/262 | closed | [
"bug"
] | 2022-07-06T06:53:30Z | 2022-09-19T11:25:23Z | 1 | MiladMolazadeh |
huggingface/optimum | 257 | Optimum Inference next steps | # What is this issue for?
This issue is a list of potential next steps for improving inference experience using `optimum`. The current list applies to the main namespace of optimum but should be soon extended to other namespaces including `intel`, `habana`, `graphcore`.
## Next Steps/Features
- [x] #199
- [... | https://github.com/huggingface/optimum/issues/257 | closed | [
"inference",
"Stale"
] | 2022-07-06T05:02:12Z | 2025-09-13T02:01:29Z | 1 | philschmid |
pytorch/TensorRT | 1,166 | ❓ [Question] How to run Torch-Tensorrt on JETSON AGX ORIN? | ## ❓ Question
**Not able to run Torch-Tensorrt on Jetson AGX ORIN**
As per the [release note](https://github.com/pytorch/TensorRT/discussions/1043), it is mentioned that current release doesn't have support for Jetpack 5.0DP but ORIN only supports Jetpack 5.0DP (I might be wrong but inferring from this [Jetpack Archi... | https://github.com/pytorch/TensorRT/issues/1166 | closed | [
"question",
"channel: linux-jetpack"
] | 2022-07-05T19:46:00Z | 2022-08-11T02:55:46Z | null | krmayankb |
pytorch/functorch | 933 | Cannot import vmap after new release | I am installing functorch on google colab; when I don't specify the version, it installs version 0.2.2 and PyTorch version 1.12.0, and uninstall currently installed PyTorch 1.11.0 on colab. But, in the line where I import vmap, it throws an error that functorch is not compatible with PyTorch 1.12.0:
```
RuntimeErro... | https://github.com/pytorch/functorch/issues/933 | open | [] | 2022-07-05T18:47:06Z | 2022-08-08T14:31:27Z | 4 | KananMahammadli |
pytorch/vision | 6,239 | n classes in ConvNeXt model | ### 🐛 Describe the bug
Hi,
I'm trying to train a ConvNeXt tiny model as a binary classifier by loading the model architecture and pretrained weights from torchvision.models.
I use the following two lines of code to load the model and change the number of output nodes:
>num_classes=2
model_ft = models.conv... | https://github.com/pytorch/vision/issues/6239 | closed | [
"question",
"module: models"
] | 2022-07-05T17:47:40Z | 2022-07-06T08:13:15Z | null | jrsykes |
pytorch/vision | 6,235 | Creating a `cache-dataset` for Video classification. | Hello, now I am trying to test the video classification model R(2+1)D on Kinetics400. However the speed of loading data is so slow. I believe the loading speed can be improved by caching the data but I am not sure how to cache video files. In the code also, it is mentioned. I want to know to cache video files? is cache... | https://github.com/pytorch/vision/issues/6235 | closed | [
"question",
"module: reference scripts",
"module: video"
] | 2022-07-05T04:27:54Z | 2022-07-05T08:28:20Z | null | yakhyo |
huggingface/datasets | 4,621 | ImageFolder raises an error with parameters drop_metadata=True and drop_labels=False when metadata.jsonl is present | ## Describe the bug
If you pass `drop_metadata=True` and `drop_labels=False` when a `data_dir` contains at least one `metadata.jsonl` file, you will get a KeyError. This is probably not a very useful case but we shouldn't get an error anyway. Asking users to move metadata files manually outside `data_dir` or pass fe...
"bug"
] | 2022-07-04T11:21:44Z | 2022-07-15T14:24:24Z | 0 | polinaeterna |
pytorch/audio | 2,526 | Need more detail and a tutorial on how to use the language model to decrease the word error rate. | ### 📚 The doc issue
1. How do we build our own language model and add it to the language model, such as wav2vec2? However many of the solutions from the doc require using another library.
2. If 1 requires training the language model again, then It looks like we can use our own text file for the language model to... | https://github.com/pytorch/audio/issues/2526 | open | [] | 2022-07-03T11:05:05Z | 2022-07-18T21:02:59Z | null | AliceSum |
huggingface/datasets | 4,619 | np arrays get turned into native lists | ## Describe the bug
When attaching an `np.array` field, it seems that it automatically gets turned into a list (see below). Why is this happening? Could it lose precision? Is there a way to make sure this doesn't happen?
## Steps to reproduce the bug
```python
>>> import datasets, numpy as np
>>> dataset = datas... | https://github.com/huggingface/datasets/issues/4619 | open | [
"bug"
] | 2022-07-02T17:54:57Z | 2022-07-03T20:27:07Z | 3 | ZhaofengWu |
pytorch/tutorials | 1,961 | Update SpaCy to latest. | The old `spacy==2.3.2` is out of date, and I cannot install it (due to build failure). Is it possible to remove the version constraint? | https://github.com/pytorch/tutorials/issues/1961 | closed | [
"dependencies"
] | 2022-07-02T11:01:23Z | 2022-12-09T17:47:43Z | 2 | evan0greenup |
pytorch/tutorials | 1,960 | Question: how to run an individual tutorial? | I don't want to run `make doc`; I just want to run a specific individual tutorial.
Is it safe to run it directly as a script?
"question"
] | 2022-07-02T10:59:34Z | 2022-08-01T21:15:19Z | null | evan0greenup |
pytorch/TensorRT | 1,156 | ❓ [Question] Support for CUDA 11.6? | ## Does the latest version support CUDA 11.6❓
Pytorch officially supports CUDA 11.6; however, the docs say torch_tensorrt supports CUDA 11.3 at max. But in some issues it is said that CUDA version 11.6 is used. Is CUDA 11.6 officially supported by torch_tensorrt?
## Environment
- PyTorch Version (e.g., 1.0): any
- C... | https://github.com/pytorch/TensorRT/issues/1156 | closed | [
"question",
"component: dependencies"
] | 2022-07-01T11:41:24Z | 2022-08-12T03:16:44Z | null | alercelik |
pytorch/data | 564 | [RFC] Restricting `IterDataPipe` to have method `__iter__` as a generator function without method `__next__` | ### 🚀 The feature
** Note that this is a RFC to solely discuss the design. There is currently no plan to implement this feature. This issue serves as a developer documentation of the current design and the complexity/issue that we encounter with certain aspects of `IterDataPipe`. It also provides a space to discuss... | https://github.com/meta-pytorch/data/issues/564 | open | [] | 2022-06-30T20:39:17Z | 2022-06-30T20:41:30Z | 0 | NivekT |
huggingface/datasets | 4,603 | CI fails recurrently and randomly on Windows | As reported by @lhoestq,
The windows CI is currently flaky: some dependencies like `aiobotocore`, `multiprocess` and `seqeval` sometimes fail to install.
In particular it seems that building the wheels fail. Here is an example of logs:
```
Building wheel for seqeval (setup.py): started
Running command 'C:\to... | https://github.com/huggingface/datasets/issues/4603 | closed | [
"bug"
] | 2022-06-30T10:59:58Z | 2022-06-30T13:22:25Z | 0 | albertvillanova |
pytorch/vision | 6,221 | Customize FasterRCNN | Hi,
I've been trying, unsuccessfully, to customize a bit the implementation of FasterRCNN proposed by torchvision. For example, one thing I would like to do would be to write a customized [postprocess_detections](https://github.com/pytorch/vision/blob/87cde716b7f108f3db7b86047596ebfad1b88380/torchvision/models/dete...
"question",
"module: models",
"topic: object detection"
] | 2022-06-30T09:40:50Z | 2022-07-06T14:15:49Z | null | paullixo |
huggingface/dataset-viewer | 430 | Shuffle the rows? | see https://github.com/huggingface/moon-landing/issues/3375 | https://github.com/huggingface/dataset-viewer/issues/430 | closed | [
"question",
"feature request",
"P2"
] | 2022-06-30T08:31:20Z | 2023-09-08T13:41:42Z | null | severo |
pytorch/TensorRT | 1,150 | ❓ [Question] The same inputs producing very different outputs via pytorch & TensorRT. | ## ❓ Question
<!-- Your question -->
Hey, guys!
I'm new to TensorRT. After the environment setup, I was very excited to try the official demo on this page: [Resnet50-example.](https://pytorch.org/TensorRT/_notebooks/Resnet50-example.html). I got very different outputs when running inference with the same inputs via pytorch ... | https://github.com/pytorch/TensorRT/issues/1150 | closed | [
"bug",
"question",
"No Activity",
"performance"
] | 2022-06-29T10:10:01Z | 2023-03-26T00:02:20Z | null | Amoko |
pytorch/vision | 6,216 | EfficientNet_v2 models not loading through torchvision | ### 🐛 Describe the bug
I am trying to train efficient_v2 classification models on custom dataset using
[this script](https://github.com/pytorch/vision/tree/f75272fa704452a1d9405126c3a09e2d7432d489/references/classification)
I used following command
```
python3 train.py --model efficientnet_v2 --batch-size 128 ... | https://github.com/pytorch/vision/issues/6216 | closed | [
"question",
"module: models"
] | 2022-06-29T09:12:09Z | 2022-06-29T11:09:27Z | null | suyashhchougule |
huggingface/datasets | 4,591 | Can't push Images to hub with manual Dataset | ## Describe the bug
If I create a dataset including an 'Image' feature manually, decoded images are not pushed when pushing to the Hub;
instead it looks for the image where the local image path is/used to be.
This doesn't (at least didn't use to) happen with imagefolder. I want to build the dataset manually because it is compli... | https://github.com/huggingface/datasets/issues/4591 | closed | [
"bug"
] | 2022-06-29T00:01:23Z | 2022-07-08T12:01:36Z | 1 | cceyda |
pytorch/serve | 1,713 | How to specify which gpu is to be used for serve? | ### 🚀 The feature
```console
:~$ lspci | grep VGA
0000:00:02.0 VGA compatible controller: Intel Corporation Alder Lake-P Integrated Graphics Controller (rev 0c)
0000:01:00.0 VGA compatible controller: NVIDIA Corporation GA103M [GeForce RTX 3080 Ti Mobile] (rev a1)
:~$ glxinfo | egrep -i "device|memory"
D... | https://github.com/pytorch/serve/issues/1713 | closed | [
"triaged_wait",
"support"
] | 2022-06-28T19:35:33Z | 2022-07-07T02:13:46Z | null | jiapei-nexera |
huggingface/dataset-viewer | 423 | Add terms of service to the API? | See https://swagger.io/specification/#info-object
Maybe to mention a rate-limiter, if we implement one | https://github.com/huggingface/dataset-viewer/issues/423 | closed | [
"question"
] | 2022-06-28T11:27:16Z | 2022-09-16T17:30:38Z | null | severo |
pytorch/vision | 6,206 | Wrong for pytorch-nightly version | ### 🐛 Describe the bug
The error is below:
Traceback (most recent call last):
File "/home/hxj/PycharmProjects/ImageNetTrain/main.py", line 9, in <module>
weights = P.models.ResNet50_Weights.IMAGENET1K_V1
AttributeError: module 'torchvision.prototype.models' has no attribute 'ResNet50_Weights'
### Versions
... | https://github.com/pytorch/vision/issues/6206 | open | [
"question",
"module: models"
] | 2022-06-27T08:44:00Z | 2022-06-27T08:55:49Z | null | wwwsent |
huggingface/datasets | 4,571 | move under the facebook org? | ### Link
https://huggingface.co/datasets/gsarti/flores_101
### Description
It seems like streaming isn't supported for this dataset:
```
Server Error
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://dl.fbaipublicfiles.com/flores101/dataset... | https://github.com/huggingface/datasets/issues/4571 | open | [] | 2022-06-26T11:19:09Z | 2023-09-25T12:05:18Z | 3 | lewtun |
huggingface/datasets | 4,570 | Dataset sharding non-contiguous? | ## Describe the bug
I'm not sure if this is a bug; it's more likely normal behavior, but I wanted to double-check.
Is it normal that `datasets.shard` does not produce chunks that, when concatenated, produce the original ordering of the sharded dataset?
This might be related to this pull request (https://github.com/huggi... | https://github.com/huggingface/datasets/issues/4570 | closed | [
"bug"
] | 2022-06-26T08:34:05Z | 2022-06-30T11:00:47Z | 5 | cakiki |
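The behavior asked about here comes down to striped versus contiguous sharding. A pure-Python sketch (my understanding of the `contiguous` flag's semantics, not a copy of the library code) shows why the default round-robin shards do not concatenate back in order:

```python
def shard_indices(n, num_shards, index, contiguous=False):
    """Return the row indices shard `index` would receive out of `n` rows.

    contiguous=False: round-robin striding, so concatenating the shards
    interleaves rows instead of restoring the original order.
    contiguous=True: consecutive blocks, so concatenation preserves order.
    """
    if contiguous:
        div, mod = divmod(n, num_shards)
        start = div * index + min(index, mod)
        stop = start + div + (1 if index < mod else 0)
        return list(range(start, stop))
    return list(range(index, n, num_shards))

# Round-robin shards of 10 rows into 3 shards: [0,3,6,9] + [1,4,7] + [2,5,8]
striped = sum((shard_indices(10, 3, i) for i in range(3)), [])
# Contiguous shards: [0,1,2,3] + [4,5,6] + [7,8,9]
blocks = sum((shard_indices(10, 3, i, contiguous=True) for i in range(3)), [])
```

Both partitions cover every row exactly once; only the contiguous one is order-preserving under concatenation.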
huggingface/datasets | 4,569 | Dataset Viewer issue for sst2 | ### Link
https://huggingface.co/datasets/sst2
### Description
Not sure what is causing this, however it seems that `load_dataset("sst2")` also hangs (even though it downloads the files without problem):
```
Status code: 400
Exception: Exception
Message: Give up after 5 attempts with Connectio... | https://github.com/huggingface/datasets/issues/4569 | closed | [
"dataset-viewer"
] | 2022-06-26T07:32:54Z | 2022-06-27T06:37:48Z | 2 | lewtun |
pytorch/data | 550 | DataLoader2 should reset when a new iterator is created? | When a new iterator is created, `DataLoader2` currently resumes from when it was left off rather than resetting and starting from the beginning again (see code snippet below). This is divergent from the behavior of the original `DataLoader`. Users likely expect the latter behavior and we should properly reset the state... | https://github.com/meta-pytorch/data/issues/550 | closed | [] | 2022-06-24T18:19:09Z | 2022-08-26T21:02:39Z | 1 | NivekT |
pytorch/data | 549 | DataLoader2.__len__() ? | This is somewhat related to https://github.com/pytorch/data/issues/533
As described in https://github.com/pytorch/data/issues/533#issuecomment-1163381945, we like to check the `len()` of the DataLoader in torchvision in our logging utils.
Are there plans to implement `__len__()` on `DataLoader2`? | https://github.com/meta-pytorch/data/issues/549 | open | [] | 2022-06-24T17:10:39Z | 2022-07-06T19:21:39Z | 1 | NicolasHug |
pytorch/data | 538 | Warn about pickle-ablity when using `dp.map(some_local_function)` ? | `torchdata` issues a warning about pickle when we use lambdas (which is great!)
Another kind of function that isn't compatible with pickle are local functions. Would it be possible to throw the same warning there?
```py
import torchdata
import pickle
def make_dp():
def f(x): # local function, not pic... | https://github.com/meta-pytorch/data/issues/538 | closed | [] | 2022-06-23T13:02:33Z | 2022-06-27T21:48:29Z | 1 | NicolasHug |
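The limitation the issue describes is easy to demonstrate without torchdata at all: pickle serializes functions by qualified name, and a function defined inside another function has no importable name. A minimal standalone demo:

```python
import pickle

def top_level(x):          # module-level: pickled by qualified name, works
    return x + 1

def make_local():
    def f(x):              # local: pickle cannot resolve make_local.<locals>.f
        return x + 1
    return f

blob = pickle.dumps(top_level)   # succeeds, returns bytes

try:
    pickle.dumps(make_local())
    local_picklable = True
except (pickle.PicklingError, AttributeError):
    local_picklable = False      # local functions fail to pickle
```

So a warning for local functions would fire in exactly the same situations as the existing lambda warning.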
huggingface/dataset-viewer | 416 | Remove the Kubernetes CPU "limits"? | https://github.com/robusta-dev/alert-explanations/wiki/CPUThrottlingHigh-%28Prometheus-Alert%29#why-you-dont-need-cpu-limits
> ## Why you don't need CPU limits
>
> As long as your pod has a CPU request, [Kubernetes maintainers like Tim Hockin recommend not using limits at all](https://twitter.com/thockin/status/1... | https://github.com/huggingface/dataset-viewer/issues/416 | closed | [
"question"
] | 2022-06-23T12:26:39Z | 2022-07-22T13:15:41Z | null | severo |
huggingface/dataset-viewer | 415 | Expose an endpoint with the column types/modalities of each dataset? | It could be used on the Hub to find all the "images" or "audio" datasets.
By the way, the info is normally already in the datasets-info.json (.features) | https://github.com/huggingface/dataset-viewer/issues/415 | closed | [
"question"
] | 2022-06-23T10:36:01Z | 2022-09-16T17:32:45Z | null | severo |
pytorch/data | 533 | `len(dataloader)` in distributed setting is different with datapipes and with map-style datasets | In a distributed setting, `len(dataloader)` will return:
- `len(dataset) // (batch_size * num_GPUs)` if `dataset` is a map-style dataset
- `len(dataset) // batch_size` if `dataset` is a datapipe
This discrepancy makes it a bit difficult to work with torchvision's training recipes, where we often need the size o... | https://github.com/meta-pytorch/data/issues/533 | open | [] | 2022-06-22T16:32:01Z | 2022-06-22T16:57:09Z | 2 | NicolasHug |
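The two formulas from the report can be written out directly (hypothetical helper names; the arithmetic is just what the issue states, with floor division for drop-last batching):

```python
def mapstyle_loader_len(dataset_len, batch_size, num_gpus):
    # map-style path: a distributed sampler splits rows across GPUs first,
    # so each process sees only its own share of batches
    return dataset_len // (batch_size * num_gpus)

def datapipe_loader_len(dataset_len, batch_size):
    # datapipe path (as reported): the length ignores the number of GPUs
    return dataset_len // batch_size

# e.g. 1000 samples, batch size 10, 4 GPUs:
per_gpu = mapstyle_loader_len(1000, 10, 4)   # batches each process iterates
reported = datapipe_loader_len(1000, 10)     # what len() reports for datapipes
```

The gap between `per_gpu` and `reported` is exactly the factor of `num_gpus` that makes epoch-progress logging inconsistent between the two dataset types.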
huggingface/datasets | 4,542 | [to_tf_dataset] Use Feather for better compatibility with TensorFlow ? | To have better performance in TensorFlow, it is important to provide lists of data files in supported formats. For example sharded TFRecords datasets are extremely performant. This is because tf.data can better leverage parallelism in this case, and load one file at a time in memory.
It seems that using `tensorflow_... | https://github.com/huggingface/datasets/issues/4542 | open | [
"generic discussion"
] | 2022-06-22T14:42:00Z | 2022-10-11T08:45:45Z | 48 | lhoestq |
huggingface/dataset-viewer | 413 | URL design | Currently, the API is available at the root, ie: https://datasets-server.huggingface.co/rows?...
This can lead to some issues:
- if we add other services, such as /doc or /search, the API will share the namespace with these other services. This means that we must take care of avoiding collisions between services an... | https://github.com/huggingface/dataset-viewer/issues/413 | closed | [
"question"
] | 2022-06-22T07:13:24Z | 2022-06-28T08:48:02Z | null | severo |
pytorch/pytorch | 80,007 | when forward uses **kwargs, how to construct the example_inputs parameter in jit.trace? | ### 🐛 Describe the bug
import torch
import torch.nn as nn
class Model(nn.Module):
    def forward(self, **kwargs):
        # kwargs contains more than dozens of tensors
        pass
model = Model()
trace_model = torch.jit.trace(model, example_inputs=??)
### Versions
PyTorch version: 1.6.0+cu101
Is debug build: False
CUDA u... | https://github.com/pytorch/pytorch/issues/80007 | open | [
"oncall: jit"
] | 2022-06-22T03:20:17Z | 2023-03-11T03:33:15Z | null | zyDotwei |
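Trace-style APIs pass example inputs positionally, so one common workaround (sketched here in pure Python; in a real torch setup the wrapper would be an `nn.Module` whose `forward` does the same mapping, and the class/parameter names below are my own) is an adapter that fixes a key order and converts positional arguments back into `**kwargs`:

```python
class KwargsToPositional:
    """Adapt a **kwargs-only callable to a positional signature.

    A tracer can then be given a plain tuple of example inputs whose
    order matches `keys`.
    """
    def __init__(self, fn, keys):
        self.fn = fn
        self.keys = list(keys)

    def __call__(self, *args):
        # zip the fixed key order back onto the positional arguments
        return self.fn(**dict(zip(self.keys, args)))

def model(**kwargs):   # stands in for Model.forward(self, **kwargs)
    return sum(kwargs.values())

wrapped = KwargsToPositional(model, ["a", "b", "c"])
result = wrapped(1, 2, 3)   # equivalent to model(a=1, b=2, c=3)
```

With such a wrapper, `example_inputs` becomes an ordinary tuple of tensors in the agreed key order.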
huggingface/datasets | 4,538 | Dataset Viewer issue for Pile of Law | ### Link
https://huggingface.co/datasets/pile-of-law/pile-of-law
### Description
Hi, I would like to turn off the dataset viewer for our dataset without enabling access requests. To comply with upstream dataset creator requests/licenses, we would like to make sure that the data is not indexed by search engines... | https://github.com/huggingface/datasets/issues/4538 | closed | [
"dataset-viewer"
] | 2022-06-22T02:48:40Z | 2022-06-27T07:30:23Z | 5 | Breakend |
pytorch/TensorRT | 1,138 | problem build in jetson nano jetpack4.6 | ## ❓ Question
Hello,
I tried to compile torch-tensorrt on the Jetson Nano (JetPack 4.6) and got this error.
Suggestions, please.
Thanks
bazel build //:libtorchtrt --platforms //toolchains:jetpack_4.6 --verbose_failures
jetson@jetson-desktop:~/TensorRT$ bazel build //:libtorchtrt --platforms //toolchains:jetpack_4.6 -... | https://github.com/pytorch/TensorRT/issues/1138 | closed | [
"question",
"channel: linux-jetpack"
] | 2022-06-21T16:45:05Z | 2022-09-02T18:08:45Z | null | Sylia-C |
pytorch/functorch | 892 | Figure out how to get test coverage for more compositions of transforms | ## Motivation
Currently, we only test the following compositions:
- vmap
- jvp
- vjp
- vmap x jvp
- vmap x vjp
- vjp x vjp
- vjp x vmap
This has caught most of our bugs, but users still come to us with code that doesn't work due to it not being one of the above compositions. For example:
- vmap x vmap can... | https://github.com/pytorch/functorch/issues/892 | closed | [
"actionable"
] | 2022-06-21T14:34:58Z | 2022-09-15T15:01:19Z | null | zou3519 |
pytorch/serve | 1,701 | curl 404 ResourceNotFoundException | Hello,
I am stuck with an error and I am not sure what it means.
when I do `curl "http://localhost:8080/models"` I get :
`{
"code": 404,
"type": "ResourceNotFoundException",
"message": "Requested resource is not found, please refer to API document."
}`
I make an `.mar` file for my model with
`... | https://github.com/pytorch/serve/issues/1701 | open | [
"help wanted",
"question"
] | 2022-06-21T14:15:31Z | 2023-01-31T16:04:09Z | null | ma-batita |
pytorch/serve | 1,699 | How to properly understand MaxBatchDelay | From the documentation https://github.com/pytorch/serve/blob/master/docs/management_api.md#register-a-model
The parameter `maxBatchDelay` is the maximum delay for batch aggregation. It will wait this amount of time before aggregating all the requests (please correct me if I am wrong) into batches. Now, on the user sid... | https://github.com/pytorch/serve/issues/1699 | closed | [
"documentation",
"benchmark"
] | 2022-06-20T19:01:44Z | 2023-08-18T02:53:37Z | null | Hegelim |
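The semantics described above (wait up to the delay for a full batch, then serve whatever arrived) can be sketched with a blocking queue. This is an illustration of the described behavior, not TorchServe's actual implementation, and the function names are mine:

```python
import queue
import time

def collect_batch(requests, max_batch_size, max_batch_delay_s):
    """Block until either max_batch_size requests have arrived or
    max_batch_delay_s has elapsed, then return the (possibly partial) batch."""
    batch = []
    deadline = time.monotonic() + max_batch_delay_s
    while len(batch) < max_batch_size:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break  # delay exhausted: hand off whatever we have
        try:
            batch.append(requests.get(timeout=remaining))
        except queue.Empty:
            break  # timed out waiting for the next request
    return batch

q = queue.Queue()
q.put("req-1")
q.put("req-2")
# Only 2 of 8 slots fill, so the call returns after ~50 ms with 2 requests.
partial = collect_batch(q, max_batch_size=8, max_batch_delay_s=0.05)
```

On the client side this means a lone request is held for up to the full delay before inference starts, which is the latency trade-off the parameter controls.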
pytorch/serve | 1,698 | Confused about Cumulative Inference Duration vs. PredictionTime | ### 📚 The doc issue
I am running a model on TorchServe and I am trying to see how long it takes for inference.
If I use logging and view the logs, then I can see there is something called PredictionTime (screenshot omitted from this extract), which is a bit confusing.
```
from torchdata.datapipes.iter import S3FileLister, S3FileLoader
s3_prefixes = ['s3://bucket-name/folder/', ...]
dp_s3_urls = S3FileLister(s3_p... | https://github.com/meta-pytorch/data/issues/523 | closed | [] | 2022-06-20T15:52:53Z | 2022-06-23T00:17:12Z | 4 | enric1994 |
pytorch/TensorRT | 1,136 | ❓ [Question] unable to save the model in TorchScript format? | ## ❓ Question
I'm trying to save my model in TorchScript format, but unfortunately I'm getting an error.
## What you have already tried
```torch.jit.script(model)```
## Environment
python
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0):1.11.0+cu113
-... | https://github.com/pytorch/TensorRT/issues/1136 | closed | [
"question",
"bug: triaged [not a bug]"
] | 2022-06-20T10:43:11Z | 2022-07-05T20:57:03Z | null | IamExperimenting |
pytorch/TensorRT | 1,134 | ❓ [Question] Why is the TensorRT model slower? | ## ❓ Question
<!-- Your question -->
Why is the TensorRT model slower? I have tried TensorRT on an MHA (multi-head attention) model, but found it is even slower than the JIT-scripted model.
## What you have already tried
I tested the original model, the jit scripted model, the jit model after optimization, and the Tens... | https://github.com/pytorch/TensorRT/issues/1134 | closed | [
"question",
"No Activity",
"performance"
] | 2022-06-20T06:55:23Z | 2023-11-09T09:01:52Z | null | geekinglcq |
pytorch/TensorRT | 1,133 | ❓ [Question] How to install torch_tensorrt python API in ubuntu 20.04? | ## ❓ Question
I want to install the ```torch_tensorrt``` Python API on Ubuntu 20.04. Could you please provide a step-by-step installation procedure? I tried
```pip3 install torch-tensorrt -f https://github.com/NVIDIA/Torch-TensorRT/releases```
when I try to import the module
```import torch_tensorrt```
I'm get... | https://github.com/pytorch/TensorRT/issues/1133 | closed | [
"question",
"component: build system",
"component: packaging",
"component: dependencies"
] | 2022-06-19T14:50:36Z | 2022-12-15T17:24:39Z | null | IamExperimenting |
pytorch/serve | 1,692 | TorchServe How to Curl Multiple Images Properly | I am using TorchServe to potentially serve a model from MMOCR (https://github.com/open-mmlab/mmocr), and I have several questions:
1. I tried to do inference on hundreds of images together using batch mode by using & to concatenate curl commands together, such as suggested here https://github.com/pytorch/serve/issues/... | https://github.com/pytorch/serve/issues/1692 | open | [
"documentation",
"help wanted",
"perf"
] | 2022-06-17T18:54:26Z | 2024-08-04T15:18:11Z | null | Hegelim |
huggingface/datasets | 4,522 | Try to reduce the number of datasets that require manual download | > Currently, 41 canonical datasets require manual download. I checked their scripts and I'm pretty sure this number can be reduced to ≈ 30 by not relying on bash scripts to download data, hosting data directly on the Hub when the license permits, etc. Then, we will mostly be left with datasets with restricted access, w...
huggingface/dataset-viewer | 394 | Implement API pagination? | Should we add API pagination right now? Maybe useful for the "technical" endpoints like https://datasets-server.huggingface.co/queue-dump-waiting-started or https://datasets-server.huggingface.co/cache-reports
https://simonwillison.net/2021/Jul/1/pagnis/
| https://github.com/huggingface/dataset-viewer/issues/394 | closed | [
"question"
] | 2022-06-17T08:54:41Z | 2022-08-01T19:02:00Z | null | severo |
pytorch/TensorRT | 1,129 | ❓ [Question] Torch traced model conversion with List[torch.Tensor] input | Is it possible to convert a torch-traced model that accepts a List[torch.Tensor] input to a TRT TorchScript module?
| https://github.com/pytorch/TensorRT/issues/1129 | closed | [
"question",
"component: core"
] | 2022-06-17T08:17:26Z | 2022-08-12T01:53:15Z | null | ArmenGhambaryan |
huggingface/dataset-viewer | 390 | How to best manage the datasets that we cannot process due to RAM? | The dataset worker pod is killed (OOMKilled) for:
```
bigscience/P3
Graphcore/gqa-lxmert
echarlaix/gqa-lxmert
```
and the split worker pod is killed (OOMKilled) for:
```
imthanhlv/binhvq_news21_raw / started / train
openclimatefix/nimrod-uk-1km / sample / train/test/validation
PolyAI/minds14 / zh-CN / t... | https://github.com/huggingface/dataset-viewer/issues/390 | closed | [
"bug",
"question"
] | 2022-06-17T08:04:45Z | 2022-09-19T09:42:36Z | null | severo |
huggingface/dataset-viewer | 388 | what happened to the pods? | ```
$ k get pods -w
...
datasets-server-prod-datasets-worker-776b774978-g7mpk 1/1 Evicted 0 73m
DEBUG: 2022-06-16 18:42:46,966 - datasets_server.worker - try to process a split job
datasets-server-prod-datasets-worker-776b774978-cdb4b 0/1 Pendin... | https://github.com/huggingface/dataset-viewer/issues/388 | closed | [
"question"
] | 2022-06-16T19:46:00Z | 2022-06-17T07:48:20Z | null | severo |
pytorch/functorch | 882 | Can I use jvp with vmap? | Hi, experts.
I want to use jvp with vmap, so that I can run jvp for each sample in a batch.
However, unlike the jacrev example, jvp does not return a callable function, so I am not sure if it is compatible with vmap.
It seems like vjp returns a function like jacrev, so might be usable, but can I use jvp with vmap?... | https://github.com/pytorch/functorch/issues/882 | closed | [] | 2022-06-16T18:40:05Z | 2022-06-16T19:00:52Z | 2 | kwmaeng91 |
huggingface/pytorch_block_sparse | 17 | What is "custom" "custom-back" in dispatch_policies.h? | Hi! I am learning SGEMM and found that dispatch_policies.h has a "Custom" and a "CustomBack" policy. I am not sure what these mean. Thank you!!! | https://github.com/huggingface/pytorch_block_sparse/issues/17 | open | [] | 2022-06-16T05:46:42Z | 2022-06-16T05:46:42Z | null | ziyuhuang123 |
huggingface/datasets | 4,507 | How to let `load_dataset` return a `Dataset` instead of `DatasetDict` in customized loading script | If the dataset does not need splits (i.e., no training/validation split; it is more like a single table), how can I make the `load_dataset` function return a `Dataset` object directly rather than a `DatasetDict` object with only one key-value pair?
Or I can paraphrase the question in the following way: how to skip `_spl... | https://github.com/huggingface/datasets/issues/4507 | closed | [
"enhancement"
] | 2022-06-15T18:56:34Z | 2022-06-16T10:40:08Z | 2 | liyucheng09 |
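The shape of the question can be illustrated in plain Python: the loader builds one dataset per split and wraps them in a dict-like container, and passing a split name selects a single entry instead. This is a toy sketch of that wrapping behavior, with stand-in classes and data of my own; it is not the library's code:

```python
class Dataset(list):
    """Stand-in for a single-split dataset."""

class DatasetDict(dict):
    """Stand-in for the mapping of split name -> Dataset."""

def load_dataset(split=None):
    # Illustrative only: with no split argument, the caller gets the
    # whole mapping; naming a split returns that single Dataset.
    splits = {"train": Dataset([1, 2]), "test": Dataset([3])}
    if split is None:
        return DatasetDict(splits)
    return splits[split]

whole = load_dataset()          # dict-like, one entry per split
train = load_dataset("train")   # a single Dataset
```

In the real API the same selection is done with the `split=` argument to `load_dataset`, which returns a `Dataset` directly.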
pytorch/torchx | 520 | [torchx/ray] Is elastic training on ray clusters supported? | ## 🐛 Bug
Hi, I would like to know the current state of running elastic training on ray clusters.
I tried to repeat some experiments([notebook](https://colab.research.google.com/drive/1vVCpgQ9z_1SN8K9CJxUT2LtvUDN0AlND?usp=sharing)) in this [blog](https://www.anyscale.com/blog/large-scale-distributed-training-with-t... | https://github.com/meta-pytorch/torchx/issues/520 | open | [
"question",
"ray"
] | 2022-06-15T18:25:55Z | 2022-06-22T21:34:39Z | 7 | ntlm1686 |
huggingface/datasets | 4,504 | Can you please add the Stanford dog dataset? | ## Adding a Dataset
- **Name:** *Stanford dog dataset*
- **Description:** *The dataset covers 120 classes for a total of 20,580 images. You can find the dataset here http://vision.stanford.edu/aditya86/ImageNetDogs/*
- **Paper:** *http://vision.stanford.edu/aditya86/ImageNetDogs/*
- **Data:** *[link to the Github... | https://github.com/huggingface/datasets/issues/4504 | closed | [
"good first issue",
"dataset request"
] | 2022-06-15T15:39:35Z | 2024-12-09T15:44:11Z | 16 | dgrnd4 |
huggingface/datasets | 4,502 | Logic bug in arrow_writer? | https://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L475-L488
I got an error, and I found it's caused by `batch_examples` being `{}`. I wonder if the code should be as follows:
```
- if batch_examples and len(next(iter(batch_examples.values())... | https://github.com/huggingface/datasets/issues/4502 | closed | [] | 2022-06-15T14:50:00Z | 2022-06-18T15:15:51Z | 10 | changjonathanc |
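The suggested fix relies on `and` short-circuiting over a falsy empty dict, so `next(iter(...))` is never evaluated for `{}`. A minimal standalone demo of the guard (hypothetical function name; only the boolean logic is taken from the issue):

```python
def should_write(batch_examples):
    # `bool({})` is False, so for an empty batch the right-hand side is
    # never evaluated and next(iter(...)) cannot raise StopIteration.
    return bool(batch_examples) and len(next(iter(batch_examples.values()))) > 0

empty = should_write({})               # safe: short-circuits before iterating
empty_col = should_write({"col": []})  # a column exists but holds no rows
nonempty = should_write({"col": [1, 2]})
```

Without the leading `batch_examples` check, the `next(iter(...))` call on an empty dict would raise `StopIteration`, which matches the failure described above.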
huggingface/optimum | 219 | Support to wav2vec2 | ### Feature request
Is there any plan to include a wav2vec2 class in optimum?
```python
from optimum.onnxruntime.configuration import AutoQuantizationConfig
from optimum.onnxruntime import ORTQuantizer
# The model we wish to quantize
model_checkpoint = "facebook/wav2vec2-base-960h"
# The type of quantization t... | https://github.com/huggingface/optimum/issues/219 | closed | [] | 2022-06-15T12:47:42Z | 2022-07-08T10:34:33Z | 4 | asr-lord |
pytorch/serve | 1,687 | How to install torchserve from source? | ### 🚀 The feature
Without using
`pip install torchserve` and `docker pull pytorch/torchserve`, how can I install **torchserve** from this open-source repo?
I can build `model-archiver` and `workflow-archiver`, but how can I build `torchserve` itself from source?
### Motivation, pitch
Without using
`pip install torch... | https://github.com/pytorch/serve/issues/1687 | closed | [] | 2022-06-14T18:03:48Z | 2022-06-15T03:05:30Z | null | jiapei-nexera |
huggingface/dataset-viewer | 373 | Add support for building GitHub Codespace dev environment | Add support for building a GitHub Codespace dev environment (as it was done for the [moon landing](https://github.com/huggingface/moon-landing/pull/3188) project) to make it easier to contribute to the project. | https://github.com/huggingface/dataset-viewer/issues/373 | closed | [
"question"
] | 2022-06-14T14:37:58Z | 2022-09-19T09:05:26Z | null | mariosasko |