| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/datasets | 4,491 | Dataset Viewer issue for Pavithree/test | ### Link
https://huggingface.co/datasets/Pavithree/test
### Description
I have extracted the subset of original eli5 dataset found at hugging face. However, while loading the dataset It throws ArrowNotImplementedError: Unsupported cast from string to null using function cast_null error. Is there anything missi... | https://github.com/huggingface/datasets/issues/4491 | closed | [
"dataset-viewer"
] | 2022-06-14T13:23:10Z | 2022-06-14T14:37:21Z | 1 | Pavithree |
pytorch/examples | 1,012 | Using SLURM for Imagenet training on multiple nodes | In the pytorch imagenet example of this repo, it says that for multiple nodes we have to run the command on each node like below:

Since I am using a shared HPC cluster with SLURM, I cannot actively know... | https://github.com/pytorch/examples/issues/1012 | closed | [
"distributed"
] | 2022-06-14T09:39:59Z | 2022-07-10T20:11:43Z | 2 | b0neval |
pytorch/pytorch | 79,495 | How to stacked RGB images | ### 🚀 The feature, motivation and pitch
Hi, pytorch support teams.
I want to stack a RGB images.
I want to construct a 3D or 4D RGB tensor.
And, create a GAN model using these tensor.
How do I define how to create such a tensor?
I would like to stack the attached 2D RGB images.
Or can you extract each RGB ele... | https://github.com/pytorch/pytorch/issues/79495 | closed | [] | 2022-06-14T02:40:40Z | 2022-06-14T18:01:50Z | null | kazuma0606 |
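The stacking question in the row above is usually answered with `torch.stack`, which inserts a new batch dimension. The sketch below shows the idea with NumPy, whose `np.stack` has the same semantics; the zero/one arrays are hypothetical stand-ins for real images.

```python
import numpy as np

# Two hypothetical RGB images in channels-first layout (3 x H x W).
img_a = np.zeros((3, 4, 4), dtype=np.float32)
img_b = np.ones((3, 4, 4), dtype=np.float32)

# Stacking along a new leading axis gives a 4-D batch (N x 3 x H x W),
# the layout GAN models typically consume; torch.stack(tensors, dim=0)
# behaves the same way on torch tensors.
batch = np.stack([img_a, img_b], axis=0)
print(batch.shape)  # (2, 3, 4, 4)
```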
pytorch/tutorials | 1,945 | Calculating accuracy. | How can i calculate the accuracy of the model on seq2seq with attention chatbot? | https://github.com/pytorch/tutorials/issues/1945 | closed | [
"question"
] | 2022-06-13T22:34:03Z | 2022-08-17T20:26:00Z | null | OmarHaitham520 |
pytorch/torchx | 514 | Launching hello world job on Kubernetes and getting logs | ## 📚 Documentation
## Link
<!-- link to the problematic documentation -->
https://pytorch.org/torchx/0.1.0rc2/quickstart.html
## What does it currently say?
<!-- copy paste the section that is wrong -->
`torchx run --scheduler kubernetes my_component.py:greet --image "my_app:latest" --user "your name"`
The ... | https://github.com/meta-pytorch/torchx/issues/514 | open | [
"documentation"
] | 2022-06-13T14:20:20Z | 2022-06-13T16:50:35Z | 1 | vishakha-ramani |
pytorch/TensorRT | 1,114 | How can i compile CUDA C in this project❓ [Question] How do you ....? | ## ❓ Question
I want compile tensorrt plugin in this project. But I do not know how to use bazel to compile the cuda c.
## What you have already tried
<!-- A clear and concise description of what you have already done. -->
## Environment
> Build information about Torch-TensorRT can be found by turning on... | https://github.com/pytorch/TensorRT/issues/1114 | closed | [
"question"
] | 2022-06-13T11:27:52Z | 2022-06-20T22:11:37Z | null | p517332051 |
pytorch/serve | 1,684 | How to decode the gRPC PredictionResponse string efficiently | ### 📚 The doc issue
There is no documentation about decoding the received bytes form PredictionResponse into torch tensor efficiently. Currently, the only working solution is using `ast.literal_eval`, which is extremely slow.
```
response = inference_stub.Predictions(
inference_pb2.PredictionsReques... | https://github.com/pytorch/serve/issues/1684 | open | [
"documentation"
] | 2022-06-13T10:47:16Z | 2022-09-20T11:50:44Z | null | IamMohitM |
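A hedged side note on the decoding question above: if the handler can be changed to return the tensor's raw bytes (with dtype and shape agreed out of band), the payload can be decoded without any text parsing via `frombuffer` — shown with NumPy below (`torch.frombuffer` is the in-torch analogue). The `payload` here is synthetic, not an actual TorchServe response.

```python
import numpy as np

# Synthetic stand-in for a PredictionResponse payload: six float32 values
# serialized as raw bytes (a handler could emit the tensor's bytes directly).
payload = np.arange(6, dtype=np.float32).tobytes()

# Decoding is a zero-parse reinterpretation of the buffer, far cheaper
# than running ast.literal_eval on a stringified list.
decoded = np.frombuffer(payload, dtype=np.float32).reshape(2, 3)
print(decoded.tolist())  # [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]]
```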
pytorch/pytorch | 79,384 | torch.load() fails on MPS backend ("don't know how to restore data location") | ### 🐛 Describe the bug
```bash
# warning: 5.8GB file
wget https://huggingface.co/Cene655/ImagenT5-3B/resolve/main/model.pt
```
```python
import torch
torch.load('./model.pt', map_location='mps')
```
Error thrown [from serialization.py](https://github.com/pytorch/pytorch/blob/bd1a35dfc894eced537b825e556983... | https://github.com/pytorch/pytorch/issues/79384 | closed | [
"module: serialization",
"triaged",
"module: mps"
] | 2022-06-12T19:30:24Z | 2022-08-06T09:25:21Z | null | Birch-san |
huggingface/datasets | 4,478 | Dataset slow during model training | ## Describe the bug
While migrating towards 🤗 Datasets, I encountered an odd performance degradation: training suddenly slows down dramatically. I train with an image dataset using Keras and execute a `to_tf_dataset` just before training.
First, I have optimized my dataset following https://discuss.huggingface.co/... | https://github.com/huggingface/datasets/issues/4478 | open | [
"bug"
] | 2022-06-11T19:40:19Z | 2022-06-14T12:04:31Z | 5 | lehrig |
pytorch/pytorch | 79,332 | How to reimplement same behavior in AdaptiveAvgPooling2D | ### 📚 The doc issue
Hi, am trying written an op which should mimic behavior in Pytorch's AdaptiveAvgPooling, but I can not align the result.
Here is what I do:
```
def test_pool():
a = np.fromfile("in.bin", dtype=np.float32)
a = np.reshape(a, [1, 12, 25, 25])
a = torch.as_tensor(a)
b = F.... | https://github.com/pytorch/pytorch/issues/79332 | closed | [] | 2022-06-11T02:06:59Z | 2022-08-18T11:39:51Z | null | lucasjinreal |
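For readers hitting the same alignment problem as the row above: PyTorch's adaptive average pooling is commonly described as computing output bin i from input indices [floor(i·L/out), ceil((i+1)·L/out)). A minimal 1-D pure-Python sketch of that rule (illustrative only, not the library code):

```python
import math

def adaptive_avg_pool_1d(xs, out_size):
    """Average-pool xs down to out_size bins using the bin boundaries
    commonly attributed to PyTorch's AdaptiveAvgPool implementation:
    bin i averages input indices [floor(i*L/out), ceil((i+1)*L/out))."""
    length = len(xs)
    out = []
    for i in range(out_size):
        lo = (i * length) // out_size
        hi = math.ceil((i + 1) * length / out_size)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

print(adaptive_avg_pool_1d([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], 3))  # [1.5, 3.5, 5.5]
```

For 2-D inputs the same boundaries apply independently along each spatial axis; note that when L is not a multiple of out, bins overlap, which is a frequent source of mismatches against hand-rolled reimplementations.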
pytorch/functorch | 867 | Why is using vmap(jacrev) for BatchNorm2d in non-tracking mode not working? | Hi, experts.
I am trying to use vmap(jacrev) to calculate the per-sample jacobian in a batch for my network during inference. However, when there is BatchNorm2d, it does not work. Because during inference, BatchNorm2d is simply applying the statistics previously tracked (and not doing any inter-sample operations), I t... | https://github.com/pytorch/functorch/issues/867 | closed | [] | 2022-06-11T00:15:32Z | 2022-07-18T18:44:14Z | 6 | kwmaeng91 |
pytorch/pytorch | 79,106 | How to find the code in '...'? | https://github.com/pytorch/pytorch/blob/4305f8e9bda34f18eb7aacab51c63651cfc61802/torch/storage.py#L34
Here, I want to read the detailed code in `.cuda` func, however, I do not find any code about this api? 😢
Hope someone could help me!❤
cc @ngimel | https://github.com/pytorch/pytorch/issues/79106 | closed | [
"module: cuda",
"triaged"
] | 2022-06-08T02:49:10Z | 2022-06-13T20:44:10Z | null | juinshell |
pytorch/data | 574 | Support offloading data pre-processing to auxiliary devices | ### 🚀 The feature, motivation and pitch
Occasionally one might find that their GPU is idle due to a bottleneck on the input data pre-processing pipeline (which might include data loading/filtering/manipulation/augmentation/etc). In these cases one could improve resource utilization by offloading some of the pre-pro... | https://github.com/meta-pytorch/data/issues/574 | open | [
"feature",
"module: dataloader",
"triaged",
"module: data"
] | 2022-06-07T10:12:00Z | 2022-07-06T18:12:47Z | 2 | czmrand |
pytorch/kineto | 615 | How to limit the scope of the profiler? | I am wondering if it is possible to limit the scope of the profiler to a particular part of the neural network. Currently, I am trying to analyze the bottleneck of my model using the following pseudocode:
```
import torch.profiler as profiler
with profiler.profile(
activities=[
pr... | https://github.com/pytorch/kineto/issues/615 | closed | [] | 2022-06-06T20:34:35Z | 2022-06-21T17:57:42Z | null | hyhuang00 |
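On the scoping question above: the usual PyTorch answer is wrapping just the sub-network call in `torch.profiler.record_function("label")` inside the `profiler.profile(...)` context, so that region appears as its own entry. The pure-Python context manager below sketches the same scoping idea with a plain timer (names here are hypothetical):

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def scoped(name):
    # Only the code inside this `with` block is attributed to `name`,
    # mirroring how torch.profiler.record_function scopes one region
    # of the forward pass inside a larger profiling run.
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

with scoped("backbone"):
    sum(range(10_000))  # stand-in for the part of the network of interest

print(sorted(timings))  # ['backbone']
```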
pytorch/torchx | 510 | Implement an HPO builtin | ## Description
Add a builtin component for launching HPO (hyper-parameter optimization) jobs. At a high-level something akin to:
```
# for grid search
$ torchx run -s kubernetes hpo.grid_search --paramspacefile=~/parameters.json --component dist.ddp
# for bayesian search
$ torchx run -s kubernetes hpo.bayesia... | https://github.com/meta-pytorch/torchx/issues/510 | open | [
"enhancement",
"module: components"
] | 2022-06-03T20:06:10Z | 2022-10-27T01:55:08Z | 0 | kiukchung |
huggingface/datasets | 4,439 | TIMIT won't load after manual download: Errors about files that don't exist | ## Describe the bug
I get the message from HuggingFace that it must be downloaded manually. From the URL provided in the message, I got to UPenn page for manual download. (UPenn apparently want $250? for the dataset??) ...So, ok, I obtained a copy from a friend and also a smaller version from Kaggle. But in both c... | https://github.com/huggingface/datasets/issues/4439 | closed | [
"bug"
] | 2022-06-02T16:35:56Z | 2022-06-03T08:44:17Z | 3 | drscotthawley |
pytorch/vision | 6,124 | How to timing 'model.to(device)' correctly? | I am using pytorch's api in my python code to measure time for different layers of resnet152 to device(GPU, V-100).However, I cannot get a stable result.
Here is my code:
```python
import torch.nn as nn
device = torch.device('cuda:3' if torch.cuda.is_available() else 'cpu')
model = torchvision.models.resnet152(pre... | https://github.com/pytorch/vision/issues/6124 | closed | [
"question"
] | 2022-06-02T11:55:14Z | 2022-06-06T08:34:34Z | null | juinshell |
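A likely cause of the unstable `model.to(device)` timings asked about above is CUDA's asynchronous execution: without a `torch.cuda.synchronize()` before each clock read, the timer measures kernel launch, not completion. The helper below sketches the pattern with the standard library only; in real GPU use `sync` would be `torch.cuda.synchronize` (CPU-only demo here).

```python
import time

def timed(fn, *args, sync=None):
    """Run fn(*args) and return (result, elapsed seconds).

    For GPU work, pass sync=torch.cuda.synchronize so queued kernels
    drain before each clock read; otherwise elapsed time reflects only
    the asynchronous launch, which is why per-layer numbers jitter.
    """
    if sync is not None:
        sync()                       # finish any previously queued work
    start = time.perf_counter()
    result = fn(*args)
    if sync is not None:
        sync()                       # wait until this call really completed
    return result, time.perf_counter() - start

# CPU-only demonstration with a cheap computation.
result, elapsed = timed(sum, range(1000))
print(result)  # 499500
```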
pytorch/functorch | 848 | AOTAutograd makes unsafe assumptions on how the backward pass will look like | ## Context: how AOTAutograd works today
Given a function `f`:
- AOTAutograd traces out `run_forward_and_backward_f(*args, *grad_outputs)` to produce `forward_and_backward_trace`
- AOTAutograd partitions `forward_and_backward_trace` into a forward_trace and a backward_trace
- AOTAutograd compiles the forward_trace... | https://github.com/pytorch/functorch/issues/848 | open | [] | 2022-06-01T18:18:28Z | 2023-02-01T01:10:36Z | 4 | zou3519 |
huggingface/dataset-viewer | 332 | Change moonlanding app token? | Should we replace `dataset-preview-backend`with `datasets-server`:
- here: https://github.com/huggingface/moon-landing/blob/f2ee3896cff3aa97aafb3476e190ef6641576b6f/server/models/App.ts#L16
- and here: https://github.com/huggingface/moon-landing/blob/82e71c10ed0b385e55a29f43622874acfc35a9e3/server/test/end_to_end_app... | https://github.com/huggingface/dataset-viewer/issues/332 | closed | [
"question"
] | 2022-06-01T09:29:12Z | 2022-09-19T09:33:33Z | null | severo |
huggingface/dataset-viewer | 325 | Test if /valid is a blocking request | https://github.com/huggingface/datasets-server/issues/250#issuecomment-1142013300
> > the requests to /valid are very long: do they block the incoming requests?)
> Depends on if your long running query is blocking the GIL or not. If you have async calls, it should be able to switch and take care of other requests, ... | https://github.com/huggingface/dataset-viewer/issues/325 | closed | [
"bug",
"question"
] | 2022-05-31T13:43:20Z | 2022-09-16T17:39:20Z | null | severo |
huggingface/datasets | 4,419 | Update `unittest` assertions over tuples from `assertEqual` to `assertTupleEqual` | **Is your feature request related to a problem? Please describe.**
So this is more a readability improvement rather than a proposal, wouldn't it be better to use `assertTupleEqual` over the tuples rather than `assertEqual`? As `unittest` added that function in `v3.1`, as detailed at https://docs.python.org/3/library... | https://github.com/huggingface/datasets/issues/4419 | closed | [
"enhancement"
] | 2022-05-30T12:13:18Z | 2022-09-30T16:01:37Z | 3 | alvarobartt |
huggingface/datasets | 4,417 | how to convert a dict generator into a huggingface dataset. | ### Link
_No response_
### Description
Hey there, I have used seqio to get a well distributed mixture of samples from multiple dataset. However the resultant output from seqio is a python generator dict, which I cannot produce back into huggingface dataset.
The generator contains all the samples needed for ... | https://github.com/huggingface/datasets/issues/4417 | closed | [
"question"
] | 2022-05-29T16:28:27Z | 2022-09-16T14:44:19Z | null | StephennFernandes |
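On the generator-to-dataset question above: recent versions of `datasets` expose `Dataset.from_generator(gen_callable)` directly; on versions without it, one can materialize the generator column-wise and hand the result to `Dataset.from_dict`. A pure-Python sketch of that conversion (the `samples` generator is a hypothetical stand-in for the seqio output):

```python
def generator_to_columns(gen):
    """Collect a generator of per-example dicts into one columnar dict,
    the layout that datasets.Dataset.from_dict expects."""
    columns = {}
    for example in gen:
        for key, value in example.items():
            columns.setdefault(key, []).append(value)
    return columns

def samples():  # hypothetical stand-in for the seqio mixture generator
    yield {"text": "hello", "label": 0}
    yield {"text": "world", "label": 1}

columns = generator_to_columns(samples())
print(columns)  # {'text': ['hello', 'world'], 'label': [0, 1]}
# ds = datasets.Dataset.from_dict(columns)  # requires the datasets library
```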
pytorch/pytorch | 78,365 | How to calculate the gradient of the previous layer when the gradient of the latter layer is given? | Hi, there. Can someone help me solve this problem? if the gradients of a certain layer is known, how can I use the API in torch to calculate the gradient of the previous layer?I would appreciate it if anyone could reply me in time. | https://github.com/pytorch/pytorch/issues/78365 | closed | [] | 2022-05-26T16:05:40Z | 2022-05-31T14:46:40Z | null | mankasto |
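The chained-gradient question above maps onto `torch.autograd.grad(later_output, earlier_tensor, grad_outputs=known_grad)`, which applies the chain rule with a caller-supplied upstream gradient. For a single linear layer y = W·x the rule reduces to dL/dx = Wᵀ·(dL/dy), sketched here in pure Python with made-up numbers:

```python
def linear_backward_input(W, grad_y):
    """Given y = W @ x and a known later-layer gradient dL/dy,
    return dL/dx = W^T @ dL/dy for the previous layer (chain rule)."""
    rows, cols = len(W), len(W[0])
    return [sum(W[i][j] * grad_y[i] for i in range(rows)) for j in range(cols)]

W = [[1.0, 2.0],
     [3.0, 4.0]]
grad_y = [1.0, 1.0]  # hypothetical gradient arriving from the later layer

print(linear_backward_input(W, grad_y))  # [4.0, 6.0]
```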
pytorch/data | 469 | Suggestion: Dataloader with RPC-based workers | ### 🚀 The feature
A dataloader which communicates with its workers via torch.distributed.rpc API.
### Motivation, pitch
Presently, process-based workers for Dataloader mean the workers live on the same server/PC as the process consuming that data. This incurs the following limitations:
- the pre-processing w... | https://github.com/meta-pytorch/data/issues/469 | closed | [] | 2022-05-26T11:14:13Z | 2024-01-30T09:29:17Z | 2 | nlgranger |
pytorch/examples | 1,010 | Accessing weights of a pre-trained model | Hi,
Can you share how to print weights and biases for each layer of a pre-trained Alexnet model?
Regards,
Nivedita | https://github.com/pytorch/examples/issues/1010 | closed | [] | 2022-05-26T06:50:13Z | 2022-06-02T00:11:56Z | 1 | nivi1501 |
pytorch/TensorRT | 1,091 | ❓ [Question] Linking error with PTQ function | ## ❓ Question
I am getting a linking error when using `torch_tensorrt::ptq::make_int8_calibrator`. I am using the Windows build based on CMake, so I'm not sure if it's a problem with the way it was built, but I suspect not since I can use functions from ::torchscript just fine.
I am trying to create a barebones p... | https://github.com/pytorch/TensorRT/issues/1091 | closed | [
"question",
"component: quantization",
"channel: windows"
] | 2022-05-26T01:19:17Z | 2022-09-02T17:45:50Z | null | jonahclarsen |
pytorch/torchx | 503 | add `torchx list` command and `Runner.list` APIs | ## Description
<!-- concise description of the feature/enhancement -->
Add a `torchx list` and `Runner/Scheduler.list` methods. This would allow listing all jobs the user has launched and see their status when tracking multiple different jobs.
## Motivation/Background
<!-- why is this feature/enhancement impor... | https://github.com/meta-pytorch/torchx/issues/503 | closed | [
"enhancement",
"module: runner",
"cli"
] | 2022-05-25T21:02:11Z | 2022-09-21T21:52:31Z | 10 | d4l3k |
pytorch/TensorRT | 1,089 | I wonder if torch_tensorrt support mixed precisions for different layer | **Is your feature request related to a problem? Please describe.**
I write a converter and plugin, but plugin only support fp32, then if I convert with enabled_precisions: torch.int8, then error happend
**Describe the solution you'd like**
if different layer can use different precisions, i can use fp32 this plugin... | https://github.com/pytorch/TensorRT/issues/1089 | closed | [
"question"
] | 2022-05-25T10:07:21Z | 2022-05-30T06:05:07Z | null | pupumao |
huggingface/dataset-viewer | 309 | Scale the worker pods depending on prometheus metrics? | We could scale the number of worker pods depending on:
- the size of the job queue
- the available resources
These data are available in prometheus, and we could use them to autoscale the pods. | https://github.com/huggingface/dataset-viewer/issues/309 | closed | [
"question"
] | 2022-05-25T09:56:05Z | 2022-09-19T09:30:49Z | null | severo |
huggingface/dataset-viewer | 307 | Add a /metrics endpoint on every worker? | https://github.com/huggingface/dataset-viewer/issues/307 | closed | [
"question"
] | 2022-05-25T09:52:28Z | 2022-09-16T17:40:55Z | null | severo | |
pytorch/data | 454 | Make `IterToMap` loading more lazily | ### 🚀 The feature
Currently, `IterToMap` starts to load all data from prior `IterDataPipe` when the first `__getitem__` is invoked here.
https://github.com/pytorch/data/blob/13b574c80e8732744fee6ab9cb7e35b5afc34a3c/torchdata/datapipes/iter/util/converter.py#L78
We can stop loading data from prior `IterDataPipe` w... | https://github.com/meta-pytorch/data/issues/454 | open | [
"help wanted"
] | 2022-05-24T14:14:30Z | 2022-06-02T08:24:35Z | 7 | ejguan |
pytorch/data | 453 | Fix installation document for nightly and official release | ### 📚 The doc issue
In https://github.com/pytorch/data#local-pip-or-conda, we talk about the commands would install nightly pytorch and torchdata, which is actually the official release.
We should change this part and add another section for nightly installation
### Suggest a potential alternative/fix
_No respon... | https://github.com/meta-pytorch/data/issues/453 | closed | [
"documentation"
] | 2022-05-24T14:07:13Z | 2022-05-24T17:33:20Z | 0 | ejguan |
pytorch/torchx | 498 | Document .torchxconfig behavior in home directory | ## 📚 Documentation
## Link
<!-- link to the problematic documentation -->
https://pytorch.org/torchx/main/runner.config.html
Context: https://fb.workplace.com/groups/140700188041197/posts/326515519459662/?comment_id=328106399300574&reply_comment_id=328113552633192
## What does it currently say?
<!-- copy... | https://github.com/meta-pytorch/torchx/issues/498 | open | [
"documentation"
] | 2022-05-23T18:39:05Z | 2022-06-16T00:04:19Z | 2 | d4l3k |
pytorch/serve | 1,647 | How to return n images instead of 1? | Hi,
I am trying to deploy a DALL-E type model, in which you get as input a text and you receive as output a couple of images.
```
outputs = []
for i, image in enumerate(images):
byte_output = io.BytesIO()
output.convert('RGB').save(byte_output, format='JPEG')
bin_img_data = byte_output.getvalue... | https://github.com/pytorch/serve/issues/1647 | closed | [] | 2022-05-23T15:13:07Z | 2022-05-23T17:21:30Z | null | mhashas |
pytorch/data | 436 | Is our handling of open files safe? | Our current strategy is to wrap all file handles in a [`StreamWrapper`](https://github.com/pytorch/pytorch/blob/88fca3be5924dd089235c72e651f3709e18f76b8/torch/utils/data/datapipes/utils/common.py#L154). It dispatches all calls to wrapped object and adds a `__del__` method:
```py
class StreamWrapper:
def __init... | https://github.com/meta-pytorch/data/issues/436 | closed | [] | 2022-05-23T10:37:11Z | 2023-01-05T15:05:51Z | 3 | pmeier |
huggingface/sentence-transformers | 1,562 | Why is "max_position_embeddings" 514 in sbert where as 512 in bert | Why is "max_position_embeddings" different in sbert then in Bert? | https://github.com/huggingface/sentence-transformers/issues/1562 | open | [] | 2022-05-22T17:27:01Z | 2022-05-22T20:52:40Z | null | omerarshad |
pytorch/TensorRT | 1,076 | ❓ [Question] What am I missing to install TensorRT v1.1.0 in a Jetson with JetPack 4.6 | ## ❓ Question
I am getting some errors trying to install TensorRT v1.1.0 in a Jetson with JetPack 4.6 for using with Python3
## What you have already tried
I followed the Official installation of Pytorch v1.10.0 by using binaries according to the [ offical Nvidia Forum](https://forums.developer.nvidia.com/t/py... | https://github.com/pytorch/TensorRT/issues/1076 | closed | [
"question",
"channel: linux-jetpack"
] | 2022-05-20T13:56:30Z | 2022-05-20T22:35:42Z | null | mjack3 |
pytorch/data | 433 | HashChecker example is broken | https://github.com/pytorch/data/blob/6a8415b1ced33e5653f7a38c93f767ac8e1c7e79/torchdata/datapipes/iter/util/hashchecker.py#L36-L48
Running this will raise a `StopIteration`. The reason is simple: we want to read from a stream that was already exhausted by the hash checking. The docstring tells us that much
https:... | https://github.com/meta-pytorch/data/issues/433 | closed | [
"documentation",
"good first issue"
] | 2022-05-20T11:44:59Z | 2022-05-23T22:29:38Z | 1 | pmeier |
pytorch/functorch | 823 | Dynamic shape error in vmap with jacrev of jacrev | I'd like to compute the following expression in a vectorized way: first take the derivative wrt. to the data, and then take the derivative of this expression wrt. the parameters. I tried implementing it like this
```
func, params, buffer = make_functional_with_buffers(network)
vmap(jacrev(jacrev(func, 2), 0), (None,... | https://github.com/pytorch/functorch/issues/823 | closed | [] | 2022-05-20T10:41:39Z | 2022-05-25T12:12:20Z | 5 | zimmerrol |
pytorch/data | 432 | The developer install instruction are outdated | https://github.com/pytorch/data/blob/6a8415b1ced33e5653f7a38c93f767ac8e1c7e79/CONTRIBUTING.md?plain=1#L49-L56
While debugging #418 it took my quite a while to figure out that I need to set
https://github.com/pytorch/data/blob/6a8415b1ced33e5653f7a38c93f767ac8e1c7e79/tools/setup_helpers/extension.py#L41
for th... | https://github.com/meta-pytorch/data/issues/432 | closed | [
"documentation"
] | 2022-05-20T08:35:01Z | 2022-06-10T20:04:08Z | 3 | pmeier |
huggingface/datasets | 4,374 | extremely slow processing when using a custom dataset | ## processing a custom dataset loaded as .txt file is extremely slow, compared to a dataset of similar volume from the hub
I have a large .txt file of 22 GB which i load into HF dataset
`lang_dataset = datasets.load_dataset("text", data_files="hi.txt")`
further i use a pre-processing function to clean the d... | https://github.com/huggingface/datasets/issues/4374 | closed | [
"bug",
"question"
] | 2022-05-19T14:18:05Z | 2023-07-25T15:07:17Z | null | StephennFernandes |
huggingface/optimum | 198 | Posibility to load an ORTQuantizer or ORTOptimizer from Onnx | FIrst, thanks a lot for this library, it make work so much easier.
I was wondering if it's possible to quantize and then optimize a model (or the reverse) but looking at the doc, it seems possible to do so only by passing a huggingface vanilla model.
Is it possible to do so with already compiled models?
Lik... | https://github.com/huggingface/optimum/issues/198 | closed | [] | 2022-05-18T20:19:23Z | 2022-06-30T08:33:58Z | 1 | ierezell |
pytorch/pytorch | 77,732 | multiprocessing: how to put a model which copied from main thread in the shared_queue | ### 🐛 Describe the bug
1. If I shared a model in cuda, it raises
```RuntimeError: Attempted to send CUDA tensor received from another process; this is not currently supported. Consider cloning before sending.```
Specifically, I accept a model from the main process and return a duplication create by using ```copy... | https://github.com/pytorch/pytorch/issues/77732 | closed | [
"module: multiprocessing",
"triaged"
] | 2022-05-18T07:41:34Z | 2022-06-29T08:18:00Z | null | Force1ess |
pytorch/vision | 6,034 | Question about center-ness branch in FCOS | Hi, thank you for your great work. I'm learning FCOS these days. I find some differences about position of center-ness between code and paper. In paper(https://arxiv.org/abs/1904.01355), the center-ness branch is put together with the classification branch.
 to TRT engine? Is there any Python API to do that?
## What you have already tried
In examples, I found
```cpp
auto engine = torch_tensorrt::ts::convert_method_to_trt_engine(mod, "forward", compile_spec);
```
in https://github.com/pytorch/TensorRT/blob... | https://github.com/pytorch/TensorRT/issues/1070 | closed | [
"question"
] | 2022-05-17T07:36:45Z | 2022-05-23T16:16:13Z | null | lingffff |
pytorch/pytorch | 77,589 | How to handle __module__ attribute for Public API bindings | While working on the NN onboarding lab (with corresponding closed PR: #77425 ), after registering the functional version of new module in `torch/nn/functional.py` The following test would fail ` pytest test/test_public_bindings.py` with:
```Bash
Full list:
# torch.nn.functional.bias:
- Is public: it is an attribu... | https://github.com/pytorch/pytorch/issues/77589 | open | [
"module: tests",
"triaged"
] | 2022-05-16T20:40:52Z | 2022-05-17T14:37:45Z | null | drisspg |
huggingface/datasets | 4,352 | When using `dataset.map()` if passed `Features` types do not match what is returned from the mapped function, execution does not except in an obvious way | ## Describe the bug
Recently I was trying to using `.map()` to preprocess a dataset. I defined the expected Features and passed them into `.map()` like `dataset.map(preprocess_data, features=features)`. My expected `Features` keys matched what came out of `preprocess_data`, but the types i had defined for them did not... | https://github.com/huggingface/datasets/issues/4352 | open | [
"bug"
] | 2022-05-14T17:55:15Z | 2022-05-16T15:09:17Z | null | plamb-viso |
huggingface/optimum | 191 | Not possible to configure GPU in pipelines nor leveraging batch_size parallelisation | When setting the `device` variable in the `pipeline` function/class to `>= 0`, an error appears `AttributeError: 'ORTModelForCausalLM' object has no attribute 'to' - when running in GPU`. This was initially reported in #161 so opening this issue to encompass supporting the `device` parameter in the ORT classes. This is... | https://github.com/huggingface/optimum/issues/191 | closed | [
"inference"
] | 2022-05-14T05:05:51Z | 2022-09-05T08:37:46Z | 4 | axsaucedo |
pytorch/vision | 6,011 | Imagenet Version not documented? | ### 📚 The doc issue
Hello torchvision team,
First, thanks for the epic work you are all putting into this tool! I would like to know the exact version of imagenet used at pertaining different models in torchvision, for research purposes regarding model inversion. All of them use the 2012 Imagenet Dataset version o... | https://github.com/pytorch/vision/issues/6011 | open | [
"question"
] | 2022-05-13T11:24:32Z | 2022-05-13T11:51:24Z | null | tudorcebere |
huggingface/datasets | 4,343 | Metrics documentation is not accessible in the datasets doc UI | **Is your feature request related to a problem? Please describe.**
Search for a metric name like "seqeval" yields no results on https://huggingface.co/docs/datasets/master/en/index . One needs to go look in `datasets/metrics/README.md` to find the doc. Even in the `README.md`, it can be hard to understand what the met... | https://github.com/huggingface/datasets/issues/4343 | closed | [
"enhancement",
"Metric discussion"
] | 2022-05-13T07:46:30Z | 2022-06-03T08:50:25Z | 1 | fxmarty |
huggingface/optimum | 183 | about run_glue.py | how to enable GPU when run run_glue.py | https://github.com/huggingface/optimum/issues/183 | closed | [] | 2022-05-12T12:13:16Z | 2022-06-23T13:35:25Z | 1 | yichuan-w |
huggingface/dataset-viewer | 255 | Create a custom nginx image? | I think it would be clearer to create a custom nginx image, in /services/reverse-proxy, than the current "hack" with a template and env vars on the official nginx image.
This way, all the services (API, worker, reverse-proxy) would follow the same flow. | https://github.com/huggingface/dataset-viewer/issues/255 | closed | [
"question"
] | 2022-05-12T08:48:12Z | 2022-09-16T17:43:30Z | null | severo |
huggingface/datasets | 4,323 | Audio can not find value["bytes"] | ## Describe the bug
I wrote down _generate_examples like:

but where is the bytes?

## ... | https://github.com/huggingface/datasets/issues/4323 | closed | [
"bug"
] | 2022-05-12T08:31:58Z | 2022-07-07T13:16:08Z | 9 | YooSungHyun |
pytorch/pytorch | 77,341 | The input of the forward part of my model is a tuple, which cannot be converted to onnx format according to the existing methods. Can you tell me how to solve it | ### π Describe the bug
import torch
import torch.nn as nn
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.conv1 = nn.Linear(32, 16)
self.relu1 = nn.ReLU(inplace=True)
self.relu2 = nn.ReLU(inplace=True)
self.fc = nn.Linear(32, 2)
... | https://github.com/pytorch/pytorch/issues/77341 | closed | [
"module: onnx",
"triaged"
] | 2022-05-12T06:38:49Z | 2022-05-18T01:04:49Z | null | singaln |
pytorch/extension-ffi | 26 | How to fix "undefined symbol: state error" once importing a c shared library? | I'm trying to import the compiled c shared library "_crop_and_resize.so", but I am receiving below error!
pytorch version = 1.9.0+cu102
Torchvision version = 0.9.1
python version = 3.6.10
```
>>> import _crop_and_resize as _backend
Traceback (most recent call last):
File "<stdin>", line 1, in <module... | https://github.com/pytorch/extension-ffi/issues/26 | closed | [] | 2022-05-12T00:01:49Z | 2022-05-14T22:33:53Z | null | Abbsalehi |
pytorch/examples | 1,004 | error: the following arguments are required: DIR | Excuse me! How can I deal with this problem?
<img width="1227" alt="image" src="https://user-images.githubusercontent.com/58496897/167763473-f5d2a189-3ac5-4e77-9451-c6817065d5ed.png"> | https://github.com/pytorch/examples/issues/1004 | closed | [] | 2022-05-11T03:31:07Z | 2022-07-01T16:07:30Z | 1 | Elijah123463 |
pytorch/pytorch | 77,228 | How can i remove 'lib/libtorch_cuda.so' gracefully to make deploy more small. 【Questions and Help】 | i want import torch in my project . and i will not use 'cuda' clearly .
how can i to remove 'lib/libtorch_cuda.so' gracefully to make deploy package more smaller. (serverless deploy)
i remove lib/libtorch_cuda.so ,then cmd 'python3 index.py' . the result show...
**Traceback (most recent call last):
File... | https://github.com/pytorch/pytorch/issues/77228 | closed | [
"triaged"
] | 2022-05-11T03:27:31Z | 2022-05-12T00:26:04Z | null | wangping886 |
huggingface/dataset-viewer | 241 | Setup the users directly in the images, not in Kubernetes? | See the second point in https://snyk.io/blog/10-kubernetes-security-context-settings-you-should-understand/: using `runAsUser` / `runAsGroup` is a (relative) security risk.
| https://github.com/huggingface/dataset-viewer/issues/241 | closed | [
"question"
] | 2022-05-10T15:15:49Z | 2022-09-19T08:57:20Z | null | severo |
pytorch/TensorRT | 1,049 | ❓ [Question] How can I move the converted tensorRT model in a Jetson system? | ## ❓ Question
I optimized a pytorch module with torch-TensorRT. How can I move the engine to a Jetson?
## What you have already tried
I tried torch.jit.load('trt_traced_model.ts')
but get **__torch__.torch.classes.tensorrt.Engine** error
## Environment
> Build information about Torch-TensorRT can... | https://github.com/pytorch/TensorRT/issues/1049 | closed | [
"question"
] | 2022-05-10T15:08:47Z | 2022-05-10T15:45:51Z | null | mjack3 |
huggingface/datasets | 4,304 | Language code search does direct matches | ## Describe the bug
Hi. Searching for bcp47 tags that are just the language prefix (e.g. `sq` or `da`) excludes datasets that have added extra information in their language metadata (e.g. `sq-AL` or `da-bornholm`). The example codes given in the [tagging app](https://huggingface.co/spaces/huggingface/datasets-taggin... | https://github.com/huggingface/datasets/issues/4304 | open | [
"bug"
] | 2022-05-10T11:59:16Z | 2022-05-10T12:38:42Z | 1 | leondz |
pytorch/TensorRT | 1,047 | can torch-tensorrt-1.1.0 support libtorch1.9 and cuda10.2? | ## ❓ Question
I want to know if torch-tensorrt-1.1.0 can be compiled with libtorch1.9 and cuda-10.2 ?
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.9.0):
- CPU Architecture: x86
- OS (e.g., Linux): linux
- CUDA version:10.... | https://github.com/pytorch/TensorRT/issues/1047 | closed | [
"question"
] | 2022-05-10T11:54:58Z | 2022-05-11T07:27:45Z | null | f291400 |
pytorch/TensorRT | 1,045 | ❓ __torch__.torch.classes.tensorrt.Engine what does it mean? | Hello community and thanks for this repo.
## ❓ Question
How can I load a tensorRT model after using torch.jit.save?
## What you have already tried
```
import torch
model = torch.jit.load('trt_model.torch-tensorrt') # give error __torch__.torch.classes.tensorrt.Engine
```
## Environment
> Build in... | https://github.com/pytorch/TensorRT/issues/1045 | closed | [
"question"
] | 2022-05-10T09:56:05Z | 2022-09-03T02:25:25Z | null | mjack3 |
pytorch/data | 391 | Allow users to provide `auth` and other data to `HttpReader` | ### 🚀 The feature
This should extend the functionality of `HttpReader` to send more complicated POST request.
For authentication, users don't necessarily need to provide via `http://user:password@domain.com/`. They should be able to provide `auth` to the `HttpReader` and relay it to `request`.
https://github.com/... | https://github.com/meta-pytorch/data/issues/391 | closed | [
"good first issue",
"help wanted"
] | 2022-05-09T22:36:27Z | 2022-05-11T19:28:14Z | 3 | ejguan |
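As a sketch of what relaying `auth` amounts to, basic authentication is just an extra request header rather than credentials embedded in the URL. This stand-in uses only the standard library (the issue mentions relaying to `request`; the helper name here is hypothetical, not torchdata's API):

```python
import base64
import urllib.request


def request_with_basic_auth(url: str, user: str, password: str) -> urllib.request.Request:
    """Build a urllib Request carrying an HTTP Basic Auth header,
    instead of embedding credentials in the URL itself."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})
```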
pytorch/TensorRT | 1,034 | torch_tensorrt.compile dynamic input shape failed | ## dynamic input shape failed


if set min_shape=[1,3,h, h] and op_shape= [1,3, h, h] a... | https://github.com/pytorch/TensorRT/issues/1034 | closed | [
"question",
"component: core",
"No Activity"
] | 2022-05-09T08:25:50Z | 2022-08-21T00:02:41Z | null | f291400 |
pytorch/pytorch | 77,016 | Where is fx2trt fx to tensorrt tool? | ### 📚 The doc issue
I found a PR:
https://github.com/jerryzh168/pytorch/tree/fb09fd4ab4ba618db148f9dfc035be589efb9355/torch/fx/experimental/fx2trt
which contains the fx2trt tool. Where did it go in mainstream PyTorch code?
### Suggest a potential alternative/fix
_No response_ | https://github.com/pytorch/pytorch/issues/77016 | open | [
"triaged",
"module: fx"
] | 2022-05-07T08:43:04Z | 2022-07-20T21:25:20Z | null | lucasjinreal |
pytorch/serve | 1,609 | How to set model batch size with TS_ environmental var | ## 📚 Documentation
Hi, I can't seem to figure out how to set the batch size with an environmental parameter.
My `config.properties` looks like this:
```
inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081
number_of_netty_threads=32
enable_envvars_config=true
job_queue_size=1000
... | https://github.com/pytorch/serve/issues/1609 | closed | [] | 2022-05-05T14:25:43Z | 2022-05-09T21:52:41Z | null | austinmw |
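For context on the record above: batch size is a per-model setting in TorchServe rather than a flat `config.properties` key, so a plain `TS_` environment variable cannot target it directly; it is typically set either at model registration time (the `batch_size` query parameter of the management API's register call) or through the per-model `models` JSON block in `config.properties`. A sketch of the latter — the model name and values here are hypothetical:

```properties
# Hypothetical per-model entry; batchSize/maxBatchDelay are the relevant keys.
models={\
  "mymodel": {\
    "1.0": {\
        "marName": "mymodel.mar",\
        "batchSize": 8,\
        "maxBatchDelay": 100\
    }\
  }\
}
```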
pytorch/vision | 5,945 | Training recipe for these weights | https://github.com/pytorch/vision/blob/62740807c18e68bb0acd85895dca527f9a655bd5/torchvision/models/vision_transformer.py#L377
Does anyone know how these weights were generated. Where they training from scratch only on ImageNet 1k or was it pre-trained on ImageNet 21k? Looking at the original Vision transformer paper... | https://github.com/pytorch/vision/issues/5945 | closed | [
"question",
"module: models"
] | 2022-05-04T21:07:25Z | 2022-05-05T16:49:12Z | null | briancheung |
pytorch/serve | 1,606 | How to distribute multi models to each gpu? | I have two models (model0, model1) and two GPUs (gpu0, gpu1). I want to assign model0 to gpu0 and model1 to gpu1, meaning the work of model0 will always run on gpu0 and the work of model1 on gpu1.
How can I do this?
Is it possible to implement this via the serve configuration or handle.py?
Could you help me? Thank you very much! | https://github.com/pytorch/serve/issues/1606 | open | [
"enhancement"
] | 2022-05-04T16:08:39Z | 2022-05-12T01:56:17Z | null | dzcmingdi |
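There is no built-in per-model GPU pinning shown in the record above; one common workaround is to pick the device inside a custom handler. A pure-Python stand-in for the assignment logic (the handler integration itself is omitted, and the function name is hypothetical):

```python
def pick_device(model_index: int, num_gpus: int) -> str:
    """Map each model to a fixed GPU, round-robin over the available devices.

    In a TorchServe handler one would derive model_index from the model
    name (or worker properties) and pass the resulting string to
    torch.device(...); this stand-in only computes the assignment.
    """
    if num_gpus <= 0:
        return "cpu"
    return f"cuda:{model_index % num_gpus}"
```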
pytorch/data | 382 | The protocol of fsspec can be a list of strings rather than a single string | ### 🐛 Describe the bug
https://github.com/pytorch/data/blob/92d18b088eb43b9805bed5c90a0afca87292a338/torchdata/datapipes/iter/load/fsspec.py#L61-L62
The `fs.protocol` can be a list rather than a string. For example, for `s3` it will return the list `['s3', 's3a']`.
Then, there will be an error due to `self.root.sta... | https://github.com/meta-pytorch/data/issues/382 | closed | [
"good first issue"
] | 2022-05-03T21:47:05Z | 2022-05-04T16:50:16Z | 1 | ejguan |
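The fix implied by the record above is to treat `fs.protocol` as possibly a list of aliases. A small normalization helper of the kind such a patch would need (the name is hypothetical, not torchdata's actual code):

```python
def normalize_protocol(protocol) -> tuple:
    """fsspec filesystems may report their protocol as either a string
    ('file') or a list of aliases (['s3', 's3a']); normalize to a tuple
    so callers can test membership uniformly."""
    if isinstance(protocol, (list, tuple)):
        return tuple(protocol)
    return (protocol,)
```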
pytorch/TensorRT | 1,019 | Missing 3 input files: libnvinfer_plugin.so, libcudnn.so and libnvinfer.so | ## ❓ Question
I've been looking at all the great progress done previously when it comes to using Torch-TensorRT on Windows.
I made progress to the point that it seems like only 1 thing is missing. I'm missing the 3 .so mentioned above.
How are they supposed to be built? Am I missing something? Is there any fix tha... | https://github.com/pytorch/TensorRT/issues/1019 | closed | [
"question",
"channel: windows"
] | 2022-05-03T01:10:39Z | 2022-08-01T16:01:45Z | null | fschvart |
pytorch/TensorRT | 1,014 | ❓ [Question] Building torch_tensorrt.lib on Windows | ## ❓ Question
I am wondering how to build the torch_tensorrt.lib on Windows.
## What you have already tried
I have followed #960 and #856 (with the same WORKSPACE as the latter) and managed to successfully build torch_tensorrt.dll. However, I need the .lib file in order to compile my Libtorch program. I tried ... | https://github.com/pytorch/TensorRT/issues/1014 | closed | [
"question",
"channel: windows"
] | 2022-04-29T14:24:59Z | 2022-09-02T18:09:26Z | null | jonahclarsen |
huggingface/datasets | 4,238 | Dataset caching policy | ## Describe the bug
I cannot clear the cache of my dataset files, even though I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error:
```
[/usr/local/lib/python3.7/dist-packages/d... | https://github.com/huggingface/datasets/issues/4238 | closed | [
"bug"
] | 2022-04-27T10:42:11Z | 2022-04-27T16:29:25Z | 3 | loretoparisi |
huggingface/datasets | 4,235 | How to load VERY LARGE dataset? | ### System Info
```shell
I am using the transformers Trainer when running into this issue.
The Trainer requires a torch.utils.data.Dataset as input, which loads the whole dataset into memory at once. Therefore, when the dataset is too large to load, there's nothing I can do except use IterDataset, which loads samples of da... | https://github.com/huggingface/datasets/issues/4235 | closed | [
"bug"
] | 2022-04-27T07:50:13Z | 2023-07-25T15:07:57Z | 1 | CaoYiqingT |
pytorch/TensorRT | 1,006 | [Question] Doesn't torch tensorrt support LSTM-based decoder optimization? | ## ❓ Question
Doesn't Torch-TensorRT support LSTM-based decoder optimization? I am asking because, in a seq2seq model, the structure learned in the forward pass and the structure used at test time (beam search, sequence inference, ...) are different, and the optimized model cannot be used by inputting only training fo... | https://github.com/pytorch/TensorRT/issues/1006 | closed | [
"question",
"No Activity"
] | 2022-04-27T06:50:45Z | 2022-11-10T00:02:45Z | null | koliaok |
huggingface/datasets | 4,230 | Why does the `conll2003` dataset on huggingface only contain the `en` subset? Where is the German data? | 
But on huggingface datasets:

Where is the German data? | https://github.com/huggingface/datasets/issues/4230 | closed | [
"enhancement"
] | 2022-04-27T00:53:52Z | 2023-07-25T15:10:15Z | null | beyondguo |
huggingface/datasets | 4,221 | Dictionary Feature | Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which as far as I know doesn't fit well with the values and structures supported by Value and Sequence. Is there any suggested workaround, or am I missing something?
Thank you in advance. | https://github.com/huggingface/datasets/issues/4221 | closed | [
"question"
] | 2022-04-26T12:50:18Z | 2022-04-29T14:52:19Z | null | jordiae |
pytorch/TensorRT | 1,001 | ❓ [Question] How to differentiate a Torch-TensorRT model from a pure TorchScript model? | ## ❓ Question
<!-- Your question -->
I'm developing a C++ inference server to deploy Torch-TensorRT models and TorchScript models. Since the Torch-TensorRT compilation process is done AOT, is there a way to know whether a given .pt model file is a Torch-TensorRT model or a pure TorchScript model?
Thanks! | https://github.com/pytorch/TensorRT/issues/1001 | closed | [
"question"
] | 2022-04-26T12:29:20Z | 2022-04-27T02:05:50Z | null | tiandi111 |
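One AOT-friendly heuristic for the question above: a TorchScript `.pt` file is a zip archive, and a Torch-TensorRT compiled module serializes its engines under the `__torch__.torch.classes.tensorrt.Engine` class, so the archive can be scanned for that byte string without loading the model. This is a sketch of a heuristic, not an official API:

```python
import io
import zipfile


def mentions_tensorrt_engine(pt_file) -> bool:
    """Scan a TorchScript archive (path or file-like object) for the
    serialized TensorRT engine class name. Returns True for archives
    that appear to be Torch-TensorRT compiled modules."""
    with zipfile.ZipFile(pt_file) as zf:
        return any(b"tensorrt.Engine" in zf.read(name) for name in zf.namelist())
```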
pytorch/vision | 5,872 | Keypoint RCNN visibility flag for keypoints | ### 🚀 The feature
Hello All,
This is only my first day posting a request here so I apologize for any errors on my part. Also, sorry for the long post below.
The purpose of this post is to request an improvement/correction for the visibility flag behavior of Keypoint RCNN. Based on my results and those of other ... | https://github.com/pytorch/vision/issues/5872 | open | [
"question",
"topic: object detection"
] | 2022-04-24T21:44:35Z | 2024-08-26T08:33:51Z | null | mbadal1996 |
pytorch/torchx | 470 | Improve torchx/resources README | ## 📚 Documentation
## Link
https://github.com/pytorch/torchx/tree/main/resources
## What does it currently say?
```
**Creating EKS cluster**
eksctl create cluster -f torchx-dev-eks.yml
**Creating KFP**
kfctl apply -V -f torchx-dev-kfp.yml
```
## What should it say?
For the **Creating EKS Cluster** i... | https://github.com/meta-pytorch/torchx/issues/470 | closed | [
"documentation"
] | 2022-04-22T18:03:56Z | 2022-06-02T21:26:12Z | 1 | kiukchung |
pytorch/PiPPy | 149 | Figure out how to get `**kwargs` working with MetaTracer | https://github.com/pytorch/PiPPy/pull/138/files#diff-6d49246d94990874a38b3d05e50ea765d5c0a75270de5eec6dcda377f934976dR251
Michael B from HF is also looking into this, maybe we'll figure something out together | https://github.com/pytorch/PiPPy/issues/149 | closed | [] | 2022-04-21T16:34:00Z | 2022-06-10T18:19:27Z | null | jamesr66a |
pytorch/vision | 5,845 | about paste_mask_in_image question in mask rcnn | First of all, thanks for your great work.
Recently, I was studying the Mask R-CNN code in this repo. I have some questions, and I hope you can answer them when you have time.
First question: why do I need to expand the mask and box when mapping the mask back to the original scale? I read the original Mask R-CNN paper, ... | https://github.com/pytorch/vision/issues/5845 | closed | [
"question",
"topic: object detection"
] | 2022-04-21T08:52:39Z | 2022-05-18T00:51:04Z | null | WZMIAOMIAO |
pytorch/torchx | 464 | Volcano job scheduling issues due to bad upgrade | This is an after the fact issue to help anyone who stumbles upon it later resolve the issue.
## Pod won't schedule due to CreateContainerConfigError
```
Warning Failed 12m (x12 over 15m) kubelet Error: couldn't find key VC_PYTHON-0_HOSTS in ConfigMap default/torchxcomponentspython-bwg4m0sktd9mwc-svc
``... | https://github.com/meta-pytorch/torchx/issues/464 | closed | [
"bug",
"documentation",
"kubernetes"
] | 2022-04-20T19:14:15Z | 2022-04-20T20:30:06Z | 0 | d4l3k |
pytorch/vision | 5,838 | return_layers problem about fasterrcnn_mobilenet_v3_large_fpn | ### 🐛 Describe the bug
There may be a problem with the setting of return_layers in fasterrcnn_mobilenet_v3_large_fpn. If the default setting is used, the resolution of the collected feature maps is all the same. As a result, detection of small targets becomes worse.
https://github.com/pytorch/vision/blob/e... | https://github.com/pytorch/vision/issues/5838 | closed | [
"question",
"module: models",
"topic: object detection"
] | 2022-04-20T04:32:20Z | 2022-04-21T07:47:05Z | null | WZMIAOMIAO |
pytorch/data | 364 | Linter for DataPipe/DataLoader2 | ### 🚀 The feature
This issue proposes the addition of a linter for DataPipes and DataLoader2. The linter can analyze the graph of DataPipes and input arguments to DataLoaderV, and inform the users if any errors may occur ahead of time. The incomplete list of issues that the linter may try to analyze and raise is be... | https://github.com/meta-pytorch/data/issues/364 | open | [
"help wanted"
] | 2022-04-19T21:49:54Z | 2023-04-11T16:58:51Z | 5 | NivekT |
pytorch/TensorRT | 987 | ❓ [Question] How do you add CUDA kernels used for implemented plugins? | ## ❓ Question
How do you add CUDA kernels used for implemented plugins? I have developed my own implementations of several layers that are not yet supported by Torch-TensorRT. I'm not familiar with the Bazel compilation flow and I would like to know how to compile .cu files in Torch-TensorRT.
Current provided... | https://github.com/pytorch/TensorRT/issues/987 | closed | [
"question",
"No Activity",
"component: plugins"
] | 2022-04-19T15:59:59Z | 2022-08-12T00:02:25Z | null | david-PHR |
huggingface/datasets | 4,181 | Support streaming FLEURS dataset | ## Dataset viewer issue for '*name of the dataset*'
https://huggingface.co/datasets/google/fleurs
```
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implemented in str... | https://github.com/huggingface/datasets/issues/4181 | closed | [
"dataset bug"
] | 2022-04-19T11:09:56Z | 2022-07-25T11:44:02Z | 9 | patrickvonplaten |
pytorch/pytorch | 76,023 | How to disable check onnx in torch.onnx.export in pytorch1.11 version? | ### 📚 The doc issue
The old parameters were removed; now how can the ONNX check be disabled when exporting?
### Suggest a potential alternative/fix
Also, why was this feature removed? Some ONNX models using customized ops cannot pass the check. | https://github.com/pytorch/pytorch/issues/76023 | closed | [
"module: onnx",
"triaged",
"onnx-needs-info"
] | 2022-04-19T08:26:42Z | 2022-05-05T04:57:24Z | null | lucasjinreal |
pytorch/TensorRT | 985 | Error Code 1: Myelin (Compiled against cuBLASLt 10.2.2.0 but running against cuBLASLt 11.4.2.0.) | Hi, I am using TensorRT on images in Python but am getting this issue.
**I am using Yolort to infer on an image.**
https://github.com/zhiqwang/yolov5-rt-stack
```
import os
import torch
import cv2
from yolort.utils import Visualizer
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
cuda_visible = "0"
os.environ["CUD... | https://github.com/pytorch/TensorRT/issues/985 | closed | [
"question"
] | 2022-04-19T06:40:10Z | 2022-04-20T10:02:08Z | null | IamNaQi |
huggingface/optimum | 147 | Support for electra model | I came across this tool and it looks very interesting, but I am trying to use an ELECTRA model and I can see this is not supported, as shown by this:
`"electra is not supported yet. Only ['albert', 'bart', 'mbart', 'bert', 'ibert', 'camembert', 'distilbert', 'longformer', 'marian', 'roberta', 't5', 'xlm-roberta', 'gpt2', 'gpt-neo'... | https://github.com/huggingface/optimum/issues/147 | closed | [] | 2022-04-15T11:03:21Z | 2022-04-21T07:24:48Z | 1 | OriAlpha |
pytorch/TensorRT | 977 | ❓ [Question] how to enable "torch fallback" | ## ❓ Question
I was told that torch-trt is able to partially convert a graph to TensorRT while keeping the unsupported parts running on the Torch runtime.
I have also found some 'Torch Fallback' / 'torch_fallback' strings in the source code.
So I generated a module containing `torch.argmax`, which is not supported by to... | https://github.com/pytorch/TensorRT/issues/977 | closed | [
"question"
] | 2022-04-15T08:25:12Z | 2022-04-15T09:28:54Z | null | WingEdge777 |
pytorch/pytorch | 75,723 | [ONNX] How to export fx quantized model to onnx? | ### 🚀 The feature, motivation and pitch
FX is great! How to export fx quantized model to onnx?
### Alternatives
Currently, I have traced the quantized int8 model to torchscript, it works OK.
### Additional context
I just wonder, If torch already supported export fx model to onnx, how to do it? I got error:
```
... | https://github.com/pytorch/pytorch/issues/75723 | closed | [
"module: onnx",
"triaged",
"onnx-needs-info",
"module: fx"
] | 2022-04-13T07:40:14Z | 2022-11-15T23:44:03Z | null | lucasjinreal |
huggingface/tokenizers | 979 | What is the correct format for file for tokenizer.train_from_files? | I am trying to use this library and train a new model with my own data. But before I start building my corpora, I want to understand what file format should I be looking for, if I am feeding it to [`train_from_files`](https://docs.rs/tokenizers/0.11.3/tokenizers/tokenizer/struct.TokenizerImpl.html#method.train_from_fil... | https://github.com/huggingface/tokenizers/issues/979 | closed | [] | 2022-04-12T22:54:39Z | 2022-04-14T07:05:58Z | null | winston0410 |
pytorch/examples | 987 | What accuracy should we expect when training Alexnet from scratch on ImageNet? | ## 📚 Documentation
The README https://github.com/pytorch/examples/blob/main/imagenet/README.md is very helpful when getting started with training AlexNet.
We are able to successfully train AlexNet to approximately 56% top-1 and 79% top-5 accuracy on the validation set. But this is still a fair bit below Krizhev... | https://github.com/pytorch/examples/issues/987 | open | [
"reproducibility"
] | 2022-04-11T20:56:15Z | 2023-01-12T03:26:38Z | 8 | yoderj |
huggingface/datasets | 4,141 | Why is the dataset not visible under the dataset preview section? | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| https://github.com/huggingface/datasets/issues/4141 | closed | [
"dataset-viewer"
] | 2022-04-11T08:36:42Z | 2022-04-11T18:55:32Z | 0 | Nid989 |
huggingface/datasets | 4,139 | Dataset viewer issue for Winoground | ## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to authorization, but I can access the files f... | https://github.com/huggingface/datasets/issues/4139 | closed | [
"dataset-viewer",
"dataset-viewer-gated"
] | 2022-04-11T06:11:41Z | 2022-06-21T16:43:58Z | 11 | alcinos |
huggingface/datasets | 4,138 | Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract() | ## Dataset viewer issue for 'MalakhovIlya/RuREBus'
**Link:** https://huggingface.co/datasets/MalakhovIlya/RuREBus
**Description**
Using os.walk(topdown=False) in DatasetBuilder causes the following error:
Status code: 400
Exception: TypeError
Message: xwalk() got an unexpected keyword argument 'topdow... | https://github.com/huggingface/datasets/issues/4138 | closed | [] | 2022-04-11T02:07:13Z | 2022-04-19T03:15:46Z | 5 | iluvvatar |
huggingface/datasets | 4,134 | ELI5 supporting documents | If I am using dense search to create supporting documents for ELI5, how much time will it take? I read somewhere that it takes about 18 hours. | https://github.com/huggingface/datasets/issues/4134 | open | [
"question"
] | 2022-04-08T23:36:27Z | 2022-04-13T13:52:46Z | null | saurabh-0077 |
huggingface/dataset-viewer | 204 | Reduce the size of the endpoint responses? | Currently, the data contains a lot of redundancy, for example every row of the `/rows` response contains three fields for the dataset, config and split, and their value is the same for all the rows. It comes from a previous version in which we were able to request rows for several configs or splits at the same time.
C... | https://github.com/huggingface/dataset-viewer/issues/204 | closed | [
"question"
] | 2022-04-08T15:31:35Z | 2022-08-24T18:03:38Z | null | severo |
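The redundancy described above can be removed by hoisting fields whose value repeats across every row into a single shared object. A hypothetical compaction sketch (the endpoint does not actually do this; field names follow the `/rows` response described in the record):

```python
def compact_rows_response(rows, shared_keys=("dataset", "config", "split")):
    """Hoist fields that are identical across all rows into a 'shared'
    object, shrinking the payload without losing information."""
    if not rows:
        return {"shared": {}, "rows": []}
    shared = {
        k: rows[0][k]
        for k in shared_keys
        if all(r.get(k) == rows[0][k] for r in rows)
    }
    slim = [{k: v for k, v in r.items() if k not in shared} for r in rows]
    return {"shared": shared, "rows": slim}
```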
pytorch/text | 1,677 | what is currently the ideal effective torchtext pipeline for almost any nlp tasks | ## searching the ideal torchtext pipeline
**Description**
hey there, so ive been using the legacy version of torchtext for quite sometime as it provides easier ways to load custom dataset and custom pretrained word embeddings locally and i can semlessly implement it for seq2seq, text classification, pos tagging, ... | https://github.com/pytorch/text/issues/1677 | open | [] | 2022-04-07T13:29:10Z | 2022-04-07T13:29:10Z | null | StephennFernandes |
pytorch/data | 352 | DataLoader tutorial does not handle num_workers > 0 | I just wanted to document an issue with the tutorials https://pytorch.org/data/beta/tutorial.html#working-with-dataloader
The code in the tutorial will not work when running multiple DataLoader processes as the datapipe will be duplicated across workers:
```py
dl = DataLoader(dataset=datapipe, batch_size=2, ... | https://github.com/meta-pytorch/data/issues/352 | closed | [
"documentation"
] | 2022-04-07T13:00:41Z | 2022-06-10T20:02:57Z | 3 | NicolasHug |
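In torchdata, the fix for the duplication described above is to insert a `sharding_filter` into the pipe so each worker keeps only its share of the elements. A pure-Python illustration of the round-robin sharding it performs (not the actual datapipe code):

```python
def shard_for_worker(items, worker_id: int, num_workers: int):
    """Round-robin sharding: each worker keeps every num_workers-th item,
    so batches are no longer duplicated across DataLoader workers."""
    return [item for i, item in enumerate(items) if i % num_workers == worker_id]
```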