| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/pytorch | 54,212 | How to update a Wiki page? | ## ❓ Questions and Help
The `-k` option for filtering tests with a string can no longer be used with `python`, and should be used with `pytest` now.
Pull requests can't be submitted for the Wiki, so I couldn't suggest an update to https://github.com/pytorch/pytorch/wiki/Writing-tests-in-PyTorch-1.8.
Please ... | https://github.com/pytorch/pytorch/issues/54212 | closed | [
"module: docs",
"module: tests",
"triaged"
] | 2021-03-17T21:31:48Z | 2021-03-18T15:10:54Z | null | imaginary-person |
pytorch/FBGEMM | 553 | Is it possible to speed up matrix multiplication by adjusting the values of the Packing parameters under the same hardware environment? | Hi! I am reading the source code of FBGEMM and am interested in the CPU optimization part. I found that FBGEMM sets Packing parameters for each ISA separately. I am curious whether the values of these parameters are determined empirically or by a certain algorithm? Is it possible to speed up matrix multiplication by adjus... | https://github.com/pytorch/FBGEMM/issues/553 | closed | [
"question"
] | 2021-03-17T05:03:16Z | 2021-03-25T07:39:09Z | null | umiswing |
pytorch/pytorch | 53,993 | How to set AMP to all-fp16 training? | Hello, I would like to ask how to set up AMP training entirely in fp16, similar to apex's O1/O2/O3 modes? Thank you very much!
cc @mcarilli @ptrblck | https://github.com/pytorch/pytorch/issues/53993 | closed | [
"triaged",
"module: amp (automated mixed precision)"
] | 2021-03-15T08:02:23Z | 2021-03-16T03:04:16Z | null | sky-fly97 |
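The question above maps onto PyTorch's native AMP, which replaced apex's opt levels. A hedged sketch of the O1-like flow on a toy linear model (the model, shapes, and learning rate are illustrative, not from the issue); the closest analogue to an all-fp16 O3 run is noted in the final comment:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Native AMP (closest to apex O1): autocast chooses fp16 per op, GradScaler
# guards against gradient underflow. Both become no-ops without CUDA.
use_cuda = torch.cuda.is_available()
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

x, y = torch.randn(8, 4), torch.randn(8, 2)
with torch.cuda.amp.autocast(enabled=use_cuda):
    loss = nn.functional.mse_loss(model(x), y)
scaler.scale(loss).backward()
scaler.step(opt)
scaler.update()

# A rough O3-style "everything fp16" run is model.half() plus .half() inputs
# and no autocast; it is faster but numerically fragile.
```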
pytorch/pytorch | 53,957 | Is pytorch 1.8.0 incompatible with cuda 11.2 or what is the reason for this error? | I have spent all day trying to upgrade cuda to 11.2 and get it working with pytorch. At the moment I believe I should have a fully working version of Cuda 11.2, yet I still get the following error when I try to run my pytorch code, which normally works without issues.
```
Traceback (most recent call last):
File ... | https://github.com/pytorch/pytorch/issues/53957 | open | [
"module: cuda",
"triaged"
] | 2021-03-13T06:02:04Z | 2021-03-24T14:13:31Z | null | tueboesen |
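For errors like the one above, a quick first check is which CUDA runtime the installed wheel was built against; the binary wheels bundle their own CUDA runtime, so only the NVIDIA driver has to be new enough, not the system toolkit. A small diagnostic sketch:

```python
import torch

print(torch.__version__)          # e.g. 1.8.0
print(torch.version.cuda)         # CUDA the binary was built with (None on CPU-only builds)
print(torch.cuda.is_available())  # False often indicates a driver/runtime mismatch
```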
pytorch/pytorch | 53,888 | How to shift columns (or rows) in a tensor with different offsets? | `torch.roll` function is only able to shift columns (or rows) with same offsets. But I want to shift columns with different offsets. Suppose the input tensor is
```
[[1,2,3],
[4,5,6],
[7,8,9]]
```
Say, to shift with offset `i` for the i-th column, the expected output is
```
[[1,8,6],
[4,2,9],
[7,5,3]]
``... | https://github.com/pytorch/pytorch/issues/53888 | closed | [
"triaged",
"module: advanced indexing"
] | 2021-03-12T10:11:21Z | 2021-03-13T05:05:51Z | null | changmenseng |
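The per-column shift asked about above can be sketched with `torch.gather`; a minimal example assuming the offset-`i`-for-column-`i` pattern from the issue:

```python
import torch

x = torch.tensor([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])
rows, cols = x.shape
offsets = torch.arange(cols)  # shift column i down by i, as in the issue

# shifted[r, c] = x[(r - offsets[c]) % rows, c], gathered along dim 0
row_idx = (torch.arange(rows).unsqueeze(1) - offsets.unsqueeze(0)) % rows
shifted = x.gather(0, row_idx)
print(shifted.tolist())  # [[1, 8, 6], [4, 2, 9], [7, 5, 3]]
```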
pytorch/FBGEMM | 540 | Is it possible to generate SPMDM kernels with asmjit? | Hi all,
Thanks for sharing such a high-performance GEMM library.
After reading through the source code, I found that only the U8S8S32AC* kernels are generated with asmjit.
Is it possible to port the SpMDM code to asmjit? I'm trying to optimize SpMDM by myself.
Thanks!
Yang | https://github.com/pytorch/FBGEMM/issues/540 | closed | [
"question"
] | 2021-03-12T03:08:40Z | 2021-03-17T16:38:56Z | null | YangWang92 |
pytorch/vision | 3,547 | How to train a classifier with a custom class count while also wanting pretrained=True? | It gives an error:
```
size mismatch for fc.weight: copying a param with shape torch.Size([1000, 1024]) from checkpoint, the shape in current model is torch.Size([42, 1024]).
``` | https://github.com/pytorch/vision/issues/3547 | closed | [
"question",
"module: models"
] | 2021-03-11T09:35:42Z | 2021-03-19T18:06:32Z | null | lucasjinreal |
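The size mismatch above comes from loading a 1000-class pretrained head into a 42-class model. The usual pattern is to build the model with `pretrained=True` first and only then replace `fc`; a sketch with a hypothetical stand-in module (used here to avoid downloading real weights; the 1024 width matches the error message):

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    """Hypothetical stand-in for a torchvision model whose classifier is `fc`."""
    def __init__(self, num_classes=1000):
        super().__init__()
        self.backbone = nn.Linear(16, 1024)
        self.fc = nn.Linear(1024, num_classes)
    def forward(self, x):
        return self.fc(self.backbone(x))

model = Net(num_classes=1000)                   # load with the original 1000-class head
model.fc = nn.Linear(model.fc.in_features, 42)  # then swap in the custom head
out = model(torch.randn(2, 16))
print(out.shape)  # torch.Size([2, 42])
```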
pytorch/pytorch | 53,693 | How to use torch.distributions.Normal/log_prob in libtorch? | I don't find a class like torch.distributions in libtorch, so is there any way to get the log_prob of a tensor?
cc @yf225 @glaringlee @fritzo @neerajprad @alicanb @vishwakftw @nikitaved | https://github.com/pytorch/pytorch/issues/53693 | closed | [
"module: distributions",
"module: cpp",
"triaged"
] | 2021-03-10T07:23:51Z | 2021-03-10T15:33:47Z | null | scirocc |
pytorch/pytorch | 53,678 | [FX] Regression from 1.8: FX can no longer trace functions where the first element of an int list is a Proxy | ```
import torch
import torch.fx as fx
def f(x):
return torch.reshape(x, (x.shape[0], -1))
mod = fx.symbolic_trace(f)
print(mod.code)
```
In 1.8 this worked, but it was broken by this PR, which fails since it verifies that the first element of the list is an integer (while it's actually a Proxy): https... | https://github.com/pytorch/pytorch/issues/53678 | open | [
"triaged",
"module: fx"
] | 2021-03-10T02:13:32Z | 2022-07-20T21:23:30Z | null | Chillee |
pytorch/pytorch | 53,676 | How to concatenate a variable number of tensors | ## ❓ Questions and Help
How to concatenate a variable number of tensors using `torch.cat() `. For example, I have three layers and I need to concatenate the output of these layers as below:
```
for layer in self.layers:
src = layer(src, src_mask)
# I have three layer... | https://github.com/pytorch/pytorch/issues/53676 | closed | [] | 2021-03-10T01:36:15Z | 2021-03-10T08:34:26Z | null | aimanmutasem |
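A list plus a single `torch.cat` call handles a variable number of outputs; a minimal sketch (layer type and sizes are illustrative, not from the issue):

```python
import torch
import torch.nn as nn

layers = nn.ModuleList(nn.Linear(8, 8) for _ in range(3))
src = torch.randn(4, 8)

outputs = []
for layer in layers:
    src = layer(src)
    outputs.append(src)  # collect every layer's output

out = torch.cat(outputs, dim=-1)  # works for any number of layers
print(out.shape)  # torch.Size([4, 24])
```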
pytorch/TensorRT | 391 | ❓ [Question] PyTorch 1.8 Support | ## ❓ Question
<!-- Your question -->
## What you have already tried
PyTorch 1.8(stable) is released recently.
When will TRTorch be compatible to PyTorch 1.8? | https://github.com/pytorch/TensorRT/issues/391 | closed | [
"question"
] | 2021-03-09T05:57:53Z | 2021-03-22T21:50:54Z | null | developer0hye |
pytorch/pytorch | 53,584 | How to delete Module from GPU? (libtorch C++) | All the demos only show how to load model files. But how do you unload the model from the GPU and free up the GPU memory?
I tried this, but it doesn't work.
```cpp
model.~Module();
c10::cuda::CUDACachingAllocator::emptyCache();
```
cc @yf225 @glaringlee | https://github.com/pytorch/pytorch/issues/53584 | open | [
"module: cpp-extensions",
"module: cpp",
"triaged"
] | 2021-03-09T02:55:03Z | 2021-03-11T03:11:09Z | null | ZhiZe-ZG |
pytorch/pytorch | 53,580 | how to use logging in libtorch C++ ? any example ? Many thanks | ## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)... | https://github.com/pytorch/pytorch/issues/53580 | closed | [
"module: cpp",
"triaged"
] | 2021-03-09T02:34:56Z | 2021-03-10T02:58:13Z | null | yulinhuyang |
pytorch/serve | 1,001 | How to deploy a sentence transformer from UKPLab on the cloud | Hi community,
How could I practically deploy a pre-trained sentence transformer from UKPLab on the cloud?
I saw issue #681 and the customisation proposed, but didn't know whether it was intended for the cloud.
Secondly, once deployed on the cloud, how do I configure it at scale?
Thanks ! | https://github.com/pytorch/serve/issues/1001 | closed | [
"triaged_wait"
] | 2021-03-08T20:24:40Z | 2021-05-13T16:51:01Z | null | mattvan83 |
pytorch/tutorials | 1,401 | Dynamic Quantization for GPT2 model from huggingface. | Hi,
Reproducibility required: PyTorch version 1.4.0
I am trying to use the ```torch.quantization.quantize_dynamic``` function to quantize the ```pre_trained``` DistilGPT2 model from Hugging-face.
As most transformer blocks in this model are made up of the ```nn.Conv1d``` modules, there occurs a problem while p... | https://github.com/pytorch/tutorials/issues/1401 | open | [
"question",
"module: quantization"
] | 2021-03-08T15:06:23Z | 2023-03-09T19:37:48Z | null | mriganktiwari |
pytorch/pytorch | 53,395 | How to stop dist.init_process_group from hanging (or deadlocking) on a DGX A100? | ## 🐛 Bug
DDP deadlocks on a new dgx A100 machine with 8 gpus
## To Reproduce
Run this self contained code:
```
"""
For code used in distributed training.
"""
from typing import Tuple
import torch
import torch.distributed as dist
import os
from torch import Tensor
import torch.multiprocessing... | https://github.com/pytorch/pytorch/issues/53395 | closed | [
"oncall: distributed"
] | 2021-03-05T19:14:08Z | 2023-06-08T10:36:24Z | null | brando90 |
pytorch/pytorch | 53,348 | How to obtain the gradient of a tensor when an in-place operation is included? | ## ❓ How to obtain the gradient of a tensor when an in-place operation is included?
For simplicity, here is the code to describe the question: when using `res = ma @ mb` in pytorch, we can easily obtain the gradient of ma by calling some backward function, e.g. `(res**2).sum().backward(); print(ma.grad)`. But when this mult... | https://github.com/pytorch/pytorch/issues/53348 | closed | [
"module: autograd",
"triaged"
] | 2021-03-05T09:33:53Z | 2021-03-06T02:53:54Z | null | Leiwx52 |
pytorch/vision | 3,509 | simple API discussion about the AutoAugment | ## ❓ Questions and Help
question about the user interface API
[transforms/autoaugment.py](https://github.com/pytorch/vision/blob/7b9d30eb7c4d92490d9ac038a140398e0a690db6/torchvision/transforms/autoaugment.py)
The current usage would be `AutoAugment(AutoAugmentPolicy('cifar10'))`, but since the policy is just a... | https://github.com/pytorch/vision/issues/3509 | closed | [
"question",
"module: transforms"
] | 2021-03-05T06:19:31Z | 2021-03-07T02:58:29Z | null | ain-soph |
pytorch/examples | 889 | Low training accuracy using pre-trained model | Hello,
I am trying to evaluate a pre-trained mobilenetv2 model from torchvision on the ImageNet training dataset using this script.
To do so, I modify lines 235-237 to perform validation on the train loader instead of the val loader:
```
if args.evaluate:
validate(train_loader, model, criterion, args)... | https://github.com/pytorch/examples/issues/889 | open | [
"help wanted",
"vision"
] | 2021-03-04T15:15:11Z | 2022-03-09T21:10:33Z | 2 | AndreiXYZ |
pytorch/pytorch | 53,264 | How to convert a trained .torch model to .mlmodel | Hi, I need help converting the .torch to .mlmodel; while doing it I faced an error. After researching I found no solution for it and am posting for help.
the error:
<img width="1009" alt="Screenshot 2021-03-01 at 10 24 01 PM" src="https://user-images.githubusercontent.com/35099512/109978249-b2154d80-7d23-1... | https://github.com/pytorch/pytorch/issues/53264 | open | [
"oncall: mobile"
] | 2021-03-04T14:27:11Z | 2021-03-12T05:28:21Z | null | NaveenTg |
pytorch/serve | 989 | How to get the URL parameters within the custom inference handler? | Hi guys, recently I've been writing a custom service handler for yolov5. However, I have no idea how to get the URL parameters in my inference handler.
For example:
```
curl -XPOST http://localhost:8080/predictions/yolo?my_parameter=123 -T@sample.jpg
```
How can I get the value of ``my_parameter`` in my custom... | https://github.com/pytorch/serve/issues/989 | open | [
"triaged_wait"
] | 2021-03-03T09:48:50Z | 2023-11-07T12:42:08Z | null | neoragex2002 |
huggingface/datasets | 1,973 | Question: what gets stored in the datasets cache and why is it so huge? | I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is stored in there and why is it so large? I don't think I noticed this problem before, and it seems to be related to the new version of the datasets library. Any in... | https://github.com/huggingface/datasets/issues/1973 | closed | [] | 2021-03-02T14:35:53Z | 2021-03-30T14:03:59Z | null | ioana-blue |
pytorch/pytorch | 53,101 | How to compile torch/lib/c10d/ProcessGroupNCCL.cpp | I want to modify `ProcessGroupNCCL.cpp` to add some print statements, but I don't know how to recompile this file.
It is located at [https://github.com/pytorch/pytorch/tree/v1.7.1/torch/lib/c10d](https://github.com/pytorch/pytorch/tree/v1.7.1/torch/lib/c10d).
I'm using pytorch 1.7.1 installed by anaconda.
cc @... | https://github.com/pytorch/pytorch/issues/53101 | closed | [
"oncall: distributed"
] | 2021-03-02T09:06:49Z | 2021-03-04T03:14:39Z | null | 1013801464 |
pytorch/text | 1,218 | how to load data using TabularDataset and the new nightly torchtext experimental dataloader | the `torchtext.data.TabularDataset` returns an iterable of objects that cannot further be split into batches, or (x,y) sets of values. making it impossible to use the new `torchtext.vocab.Vocab` to build vocab using `Counter`
**my use-case code:**
tokenize = lambda x:x.split(" ")
konkani = Field(sequential=True... | https://github.com/pytorch/text/issues/1218 | closed | [] | 2021-02-26T08:08:49Z | 2021-02-26T16:47:50Z | null | StephennFernandes |
pytorch/pytorch | 52,850 | How to skip the images in a custom dataset and deal with None values? | I have an object detection dataset with RGB images and annotations in Json. I use a custom DataLoader class to read the images and the labels. One issue that I’m facing is that I would like to skip images when training my model if/when labels don’t contain certain objects.
For example, If one image doesn’t contain a... | https://github.com/pytorch/pytorch/issues/52850 | open | [
"module: dataloader",
"triaged"
] | 2021-02-25T18:04:33Z | 2021-02-25T22:04:56Z | null | srinivasgln |
pytorch/vision | 3,451 | Can't compile master: requires nightly PyTorch? | I have installed torch 1.7.1 and g++ 7.5.0. Do I need a nightly PyTorch version to compile nightly torchvision 0.9.0?
`pip install git+https://github.com/pytorch/vision --no-dependencies`: [log.txt](https://github.com/pytorch/vision/files/6037409/log.txt)
| https://github.com/pytorch/vision/issues/3451 | closed | [
"question"
] | 2021-02-24T16:28:35Z | 2021-02-24T18:18:30Z | null | vadimkantorov |
pytorch/vision | 3,436 | Windows CPU build missing on PyPI? | ## 🐛 Bug
Is there a reason the CPU build of `torchvision` is not pushed to PyPI anymore?
## To Reproduce
Steps to reproduce the behavior:
1. `pip install torch==1.7.1 torchvision==0.8.2 torchaudio==0.7.1`
Output:
```
Collecting torch==1.7.1
Downloading torch-1.7.1-cp38-cp38-win_amd64.whl (184.0 MB)... | https://github.com/pytorch/vision/issues/3436 | closed | [
"question",
"windows",
"topic: binaries"
] | 2021-02-23T11:54:25Z | 2021-03-09T11:25:53Z | null | 1enn0 |
pytorch/audio | 1,298 | How to compute log filter bank energy in torchaudio compared with python_speech_feature? | ## ❓ I want to reproduce the result of computing log filter-bank energies with the python_speech_feature library, using torchaudio.
This is my code, and I see that the results differ:
```
# load audio data by librosa
path_audio = "audio_a.wav"
y, sr = librosa.load(path_audio, sr=16000, offset=0.5, duration=0.4... | https://github.com/pytorch/audio/issues/1298 | closed | [] | 2021-02-23T10:20:25Z | 2021-02-23T16:34:42Z | null | trangtv57 |
pytorch/vision | 3,429 | Inconsistency between the pretrained models and labels | I notice that the labels for the provided pretrained models are not consistent.
For example vgg16 class 1 is different from Resnet50 class 1.
Can you let us know where we can find the corresponding labels for each model?
For VGG I notice one that looks like this:
```{
"0": [
"n01440764",
"t... | https://github.com/pytorch/vision/issues/3429 | closed | [
"question",
"module: models",
"module: reference scripts"
] | 2021-02-22T22:50:42Z | 2021-03-31T08:46:32Z | null | seyeeet |
pytorch/text | 1,193 | Looking for an example on how to use BucketIterator with a transformer model? | I would appreciate an end-to-end example. The examples that I found stop with the BucketIterator. It is unclear what to do with it.
| https://github.com/pytorch/text/issues/1193 | closed | [
"legacy"
] | 2021-02-20T02:41:12Z | 2024-07-12T11:58:25Z | null | sorenwacker |
pytorch/vision | 3,421 | error making: python-torchvision-cuda | I can't build the AUR package `python-torchvision-cuda` on Arch Linux
```sh
=========================================================================================== short test summary info ===========================================================================================
FAILED test/test_functional_tens... | https://github.com/pytorch/vision/issues/3421 | closed | [
"question",
"topic: build"
] | 2021-02-19T19:53:46Z | 2021-02-21T23:01:29Z | null | chiboreache |
pytorch/cpuinfo | 53 | Cpuinfo in sparc | I was able to compile pytorch on Debian 10 with a SPARC processor. However, when it runs, it gives an error that it does not recognize the cpuinfo information, and it uses only one of the 32 existing processors. I would like to know if I can modify something to use at least one 16-core socket. On several occasions I w... | https://github.com/pytorch/cpuinfo/issues/53 | open | [
"question"
] | 2021-02-19T17:48:14Z | 2024-01-11T00:57:03Z | null | alerenato |
pytorch/TensorRT | 344 | [Question ][Error ] at least 4 dimensions are required for input | ## ❓ Question
Hi I managed to compile TRTorch but it gives me very weird results when I apply it to a simple Conv2d model.
The model is as follows :
```
class DummyModel(torch.nn.Module):
def __init__(self,):
super().__init__()
self.conv = torch.nn.Conv2d(in_channels=3, out_channels=10, ke... | https://github.com/pytorch/TensorRT/issues/344 | closed | [
"question"
] | 2021-02-17T14:59:17Z | 2021-02-17T17:36:29Z | null | MatthieuToulemont |
pytorch/vision | 3,406 | RetinaNet: TypeError: __init__() got an unexpected keyword argument 'trainable_backbone_layers' | ## 🐛 Bug
`retinanet_resnet50_fpn` throws an error while passing `trainable_backbone_layers` as an argument.
## To Reproduce
Steps to reproduce the behavior:
```python
import torchvision
model = torchvision.models.detection.retinanet_resnet50_fpn(trainable_backbone_layers=2)
```
```
~/gridai/venv/lib... | https://github.com/pytorch/vision/issues/3406 | closed | [
"question"
] | 2021-02-16T05:20:25Z | 2021-02-27T17:22:53Z | null | kaushikb11 |
pytorch/vision | 3,397 | Bug Report: No module named 'torchvision.models.mobilenetv2' | ## ❓ Questions and Help
Hi there, I encountered a bug when running the following line
>>> import torch
>>> res = torch.hub.load('pytorch/vision', 'resnet50')
the error is:
-------------------------------------begin of error info---------------------------------
Using cache found in /root/.cache/torch/hu... | https://github.com/pytorch/vision/issues/3397 | closed | [
"question"
] | 2021-02-15T12:24:50Z | 2021-02-15T14:35:27Z | null | DemonsHunter |
pytorch/vision | 3,392 | How to compile arbitrary nn modules with jit pytorch? ( RuntimeError: builtin cannot be used as a value, with a dict) | ## 🐛 Bug
Similar to https://github.com/pytorch/vision/issues/1675.
Simple, I compare my value to a dict and it throws an error.
```
"""
if type(json_data) is dict:
~~~~ <--- HERE
```
## To Reproduce
Simple, any code that has a comparison with a dict:
... | https://github.com/pytorch/vision/issues/3392 | closed | [
"invalid"
] | 2021-02-12T21:11:15Z | 2021-02-17T16:29:07Z | null | brando90 |
huggingface/sentence-transformers | 753 | What is 'sentence_embedding' of a Sentence Transformer Model? | Hey, I try to understand where this comes from. It is just mentioned here [link](https://github.com/UKPLab/sentence-transformers/blob/9932965c92a06835eda255dac7eacd53f48c5cd7/sentence_transformers/SentenceTransformer.py#L144)
But seems not be used anywhere than. Because this feature is used in the losses like Onlin... | https://github.com/huggingface/sentence-transformers/issues/753 | open | [] | 2021-02-11T20:48:07Z | 2021-02-12T14:03:59Z | null | PaulForInvent |
pytorch/pytorch | 52,147 | Pointer passed where number is expected for PYTORCH_CUDA_FUSER_JIT_OPT_LEVEL leading to crash | ## 🐛 Bug
The CUDA API expects a `void**` for option values in functions like `cuModuleLoadDataEx`. The documentation is unclear about what that should be, but according to other sources (see below) the value should simply be the value cast to a `void*`, not a pointer to that value.
Hence the code at https:/... | https://github.com/pytorch/pytorch/issues/52147 | open | [
"oncall: jit"
] | 2021-02-11T17:04:53Z | 2021-02-11T17:44:14Z | null | Flamefire |
pytorch/TensorRT | 338 | ❓ [Question] What is the correct way to create a trtorch::CompileSpec for a single input? | ## ❓ Question
My network has a single input of the following shape [1, 3, 224, 224]. I am trying to create the trtorch::CompileSpec as follows
`auto compile_settings = trtorch::CompileSpec({1, 3, 224, 224});` however I am getting the following output
````
terminate called after throwing an instance of 'trtorch::... | https://github.com/pytorch/TensorRT/issues/338 | closed | [
"question"
] | 2021-02-10T14:23:19Z | 2021-02-11T08:04:42Z | null | federicohml |
pytorch/tutorials | 1,354 | Tensors tutorial broken? | It looks like a lot of content is missing from this tutorial: https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#sphx-glr-beginner-blitz-tensor-tutorial-py. | https://github.com/pytorch/tutorials/issues/1354 | closed | [] | 2021-02-10T09:58:54Z | 2021-02-12T07:20:38Z | 2 | Attila94 |
pytorch/TensorRT | 337 | ❓ [Question] Why bazel is not able to find libcudart-xxxxxxx.so.11.0? | ## ❓ Question
I cloned the TRTorch repo and tried to play with it using some sample code. I created a folder for this playground in the root path (next to WORKSPACE) and added the corresponding `BUILD` and `cpp` files. However, when executing `bazel build //adv_test:adv_torchscript --distdir third_party/dist_dir/x86_64-linux-gnu/` ... | https://github.com/pytorch/TensorRT/issues/337 | closed | [
"question"
] | 2021-02-09T16:33:25Z | 2021-02-09T21:34:10Z | null | federicohml |
pytorch/TensorRT | 335 | ❓ [Question] Typo in "/py/README.md" | ## ❓ Question
<!-- Your question -->
There is a typo in the example in "/py/README.md"
## Example Usage
``` python
import torch
import torchvision
import trtorch
# Get a model
model = torchvision.models.alexnet(pretrained=True).eval().cuda()
# Create some example data
data = torch.randn((1, 3, 224, 22... | https://github.com/pytorch/TensorRT/issues/335 | closed | [
"question"
] | 2021-02-09T07:43:17Z | 2021-02-09T23:57:31Z | null | developer0hye |
pytorch/TensorRT | 334 | ❓ [Question] Typo in "core/conversion/conversionctx/ConversionCtx.cpp " | ## ❓ Question
<!-- Your question -->
There is a typo in "core/conversion/conversionctx/ConversionCtx.cpp"
https://github.com/NVIDIA/TRTorch/blob/6442fce997e1506d859fab789527fe1e282f683f/core/conversion/conversionctx/ConversionCtx.cpp#L57-L62
This is a typo, right?
## What you have already tried
<!-- A c... | https://github.com/pytorch/TensorRT/issues/334 | closed | [
"question"
] | 2021-02-09T07:36:32Z | 2021-02-09T23:57:40Z | null | developer0hye |
pytorch/pytorch | 51,859 | Need help when using torch jit with a thread pool (how to use at::set_num_threads correctly) | Hi, I'm trying to use a thread pool of size N to manage N torch::jit::Module instances, and I want to assign one thread to each individual torch::jit::Module. I'm currently wrapping one torch::jit::Module with a wrapper class, and in the constructor I call at::set_num_threads(1) and at::set_num_interop_threads(1), b... | https://github.com/pytorch/pytorch/issues/51859 | closed | [
"oncall: jit"
] | 2021-02-07T13:09:51Z | 2021-02-12T08:23:52Z | null | w1d2s |
pytorch/TensorRT | 326 | ❓ [Question] Is there a way to do multithreaded half-precision compilation? | ## ❓ Question
I want to compile a Torch script in a different thread than the main thread in a C++ program. However, doing so with half precision for large networks will result in a Segmentation fault.
Here's a program that extracts what I want to do:
https://github.com/SakodaShintaro/trtorch-test/blob/master/ma... | https://github.com/pytorch/TensorRT/issues/326 | closed | [
"bug",
"question",
"bug: triaged [verified]"
] | 2021-02-05T08:50:21Z | 2021-02-26T02:18:13Z | null | SakodaShintaro |
pytorch/examples | 885 | DDP on GPUs invalid ordinal | There is a node with 8 GPUs, and I can't train my model on any 4 of the GPUs unless the GPU ids are 0,1,2,3.
How can I use any combination of the 8 GPUs? Thanks
`-- Process 2 terminated with the following error:
Traceback (most recent call last):
File "/home/lab-chen.qi/anaconda3/envs/torch17/lib/pyt... | https://github.com/pytorch/examples/issues/885 | open | [
"distributed"
] | 2021-02-05T02:40:06Z | 2023-03-31T08:30:25Z | 1 | ccijunk |
pytorch/serve | 965 | How to change loadedAtStartup to be true while registering a model? | ## 📚 Documentation
<!-- A clear and concise description of what content in https://pytorch.org/serve/ is an issue. If this has to do with the general https://pytorch.org website, please file an issue at https://github.com/pytorch/pytorch.github.io/issues/new/choose instead. If this has to do with https://pytorch.or... | https://github.com/pytorch/serve/issues/965 | closed | [
"triaged_wait"
] | 2021-02-05T01:58:37Z | 2021-05-13T17:41:39Z | null | wangs0007 |
pytorch/pytorch | 51,712 | UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` | ## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
Steps to reproduce the behavior:
1.
1.
1.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
<!-- A clear and concise description of what you e... | https://github.com/pytorch/pytorch/issues/51712 | closed | [] | 2021-02-04T08:03:59Z | 2021-02-04T15:52:25Z | null | vkl-git |
huggingface/transformers | 9,961 | What is the correct way to use Adafactor? | Hi, from the papers I've seen that Adafactor is typically used with no learning rate (as in Pegasus paper), however, when I try to execute run_seq2seq.py or seq2seq/finetune_trainer.py from your examples, and set --adafactor parameter, without specifying learning rate (for no learning rate), it uses the default 3e-05. ... | https://github.com/huggingface/transformers/issues/9961 | closed | [
"wontfix"
] | 2021-02-02T15:42:08Z | 2021-03-06T00:12:07Z | null | avacaondata |
huggingface/datasets | 1,808 | writing Datasets in a human readable format | Hi
I see there is a save_to_disk function to save data, but this is not a human-readable format. Is there a way I could save a Dataset object in a human-readable format, to a file like JSON? Thanks @lhoestq | https://github.com/huggingface/datasets/issues/1808 | closed | [
"enhancement",
"question"
] | 2021-02-02T02:55:40Z | 2022-06-01T15:38:13Z | null | ghost |
pytorch/pytorch | 51,431 | torch.where dtype inference is not smart | ## 🐛 Bug
If we call `torch.where(mask, float_py_scalar, int_py_scalar)`, the dtype inference will raise an error, but it should infer a floating-point type.
```py
In [198]: torch.__version__
Out[198]: '1.7.0'
In [199]: x = torch.randn(3)
In [200]: x
Out[200]: tensor([0.1649, 2.0497, 1.2026])
In [201]: torch.where(x >... | https://github.com/pytorch/pytorch/issues/51431 | closed | [
"triaged",
"module: sorting and selection",
"function request"
] | 2021-01-31T17:00:30Z | 2021-02-03T17:33:05Z | null | ssnl |
pytorch/examples | 880 | How to run | https://github.com/pytorch/examples/issues/880 | closed | [] | 2021-01-31T08:26:07Z | 2022-03-09T19:59:23Z | null | 1158481739 | |
pytorch/TensorRT | 305 | aten::view error | ## ❓ Question
During conversion, it seems like I found an incomplete support of the torch.view function:
Error as follows:
`at most one dimension may be inferred`
The function it is trying to convert is this:
`out.view(out.shape[0], -1, 4)`
| https://github.com/pytorch/TensorRT/issues/305 | closed | [
"question",
"No Activity"
] | 2021-01-29T21:31:49Z | 2021-05-11T00:06:59Z | null | rafale77 |
pytorch/pytorch | 51,345 | how to convert torch::conv2d return value(tensor) to cv::mat | I run the following program:
read a 3-channel picture and feed it to torch::nn::conv2d(3,3,3).pad(1).stride(1); then I got the results:

code:
```
cv::Mat img = cv::imread("babyx2.png", 1);
torch::Tensor i... | https://github.com/pytorch/pytorch/issues/51345 | closed | [] | 2021-01-29T08:58:10Z | 2021-01-29T16:20:09Z | null | yzqxmu |
pytorch/pytorch | 51,339 | gcc 4.8.5 -std=11 how to build pytorch1.7 | ## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)... | https://github.com/pytorch/pytorch/issues/51339 | closed | [] | 2021-01-29T07:55:01Z | 2021-01-30T03:39:58Z | null | joinhe |
pytorch/vision | 3,322 | a question about segmentation model loading | ## ❓ Questions and Help
Why are they different?
### Please note that this issue tracker is not a help form and this issue will be closed.

We have a set of [listed resources available on the website](htt... | https://github.com/pytorch/vision/issues/3322 | closed | [
"question",
"module: models",
"topic: semantic segmentation"
] | 2021-01-29T03:19:01Z | 2021-01-29T13:45:39Z | null | njzyxiong |
pytorch/pytorch | 51,320 | Pytorch not working properly (I don't know how to summarize it, see below) | When I have a pytorch model, I sometimes would like to extract the features before the final softmax layers or such. Here, I have a model trained and loaded from a pickle:
```
def build_model():
model = resnet18(pretrained=True)
n_features = model.fc.in_features
n_hidden = 100
model.fc = tor... | https://github.com/pytorch/pytorch/issues/51320 | open | [
"module: nn",
"triaged"
] | 2021-01-29T00:17:31Z | 2021-02-08T23:52:23Z | null | ghost |
huggingface/transformers | 9,867 | where is position_embedding_type used | When I was using the PyTorch Electra model, I read its source code but I didn't find where position_embedding_type is used.
So did I miss something? | https://github.com/huggingface/transformers/issues/9867 | closed | [] | 2021-01-28T08:29:08Z | 2021-01-29T02:00:07Z | null | awdrgyjilplij |
huggingface/datasets | 1,786 | How to use split dataset | 
Hey,
I want to split the lambada dataset into corpus, test, train, and valid txt files (like Penn Treebank), but I am not able to achieve this. What I am doing is executing the lambada.py file in my pro... | https://github.com/huggingface/datasets/issues/1786 | closed | [
"question"
] | 2021-01-27T21:37:47Z | 2021-04-23T15:17:39Z | null | kkhan188 |
pytorch/xla | 2,756 | How to sync XLA GPU Tensor between torch and torch_xla | I'm new to torch_xla and trying to enable torch_xla for distributed training in PyTorch with multi-node GPUs.
However, it seems torch_xla doesn't support this scenario well, for the following reasons:
1. torch_xla only support single-node multi-processing training by [xmp.spawn](https://pytorch.org/xla/release/1.7/in... | https://github.com/pytorch/xla/issues/2756 | closed | [
"stale"
] | 2021-01-27T02:21:37Z | 2021-06-26T02:22:41Z | null | tanyokwok |
pytorch/TensorRT | 294 | Python Library error after painful compilation. | ## ❓ Question
After very painfully building the repo from source, due to a lot of strangely hardcoded paths to libraries and includes that had me modify both setup.py and the WORKSPACE, I successfully completed the compilation using bazel. However, when I try to use the Python extension, I get the following e... | https://github.com/pytorch/TensorRT/issues/294 | closed | [
"question"
] | 2021-01-27T01:57:24Z | 2021-02-15T02:41:41Z | null | rafale77 |
pytorch/pytorch | 51,114 | How to find the module dependency? | ## ❓ There are many operations in a Model. If we run the code below: `import torch; import torchvision; model = torchvision.models.resnet18(); inp = torch.zeros([64, 3, 7, 7]); for temp in model.children(): print(temp)` we can get several modules: `Conv2d(3, 64, kernel_size=(7, 7), stride... | https://github.com/pytorch/pytorch/issues/51114 | closed | [] | 2021-01-26T17:28:57Z | 2021-01-26T21:51:08Z | null | Xuyuanjia2014 |
pytorch/vision | 3,294 | Using torchvision roi_align in libtorch c++ jit modules | ## 🐛 Bug Hi, I'm trying to use libtorch 1.7.1 to load a jit model that was created with pytorch 1.5.1 and torchvision 0.6.1. This model uses the torchvision::roi_align operator. When running the model I get this error: **Could not find any similar ops to torchvision::roi_align. This op may not exist or may not ... | https://github.com/pytorch/vision/issues/3294 | closed | ["question", "module: ops", "topic: object detection", "module: c++ frontend"] | 2021-01-26T07:23:02Z | 2022-11-28T05:56:59Z | null | natangold85 |
pytorch/vision | 3,293 | Affine Transform: why is translate a list[int] when the code suggests it could be floating point? | https://github.com/pytorch/vision/blob/f16322b596c7dc9e9d67d3b40907694f29e16357/torchvision/transforms/functional.py#L956 cc @vfdev-5 | https://github.com/pytorch/vision/issues/3293 | open | ["question", "module: transforms"] | 2021-01-26T07:14:08Z | 2021-01-26T15:41:51Z | null | varung |
pytorch/TensorRT | 291 | Questions about Value_Tensor_map and Evaluated_Value_map? (Not an issue, just trying to understand them...) | I have just gone through TRTorch's 2020 GTC talk/slides/documentation, focusing mainly on the graph conversion implementation part. There are some confusions of concepts and questions: 1. What's the relationship between `torch::jit::Values` and `torch::jit::IValue`? Are they the same thing? I noticed they are used in... | https://github.com/pytorch/TensorRT/issues/291 | closed | ["question"] | 2021-01-25T12:40:35Z | 2021-01-25T19:19:48Z | null | maxyanghu |
pytorch/elastic | 140 | Torch Elastic - How to make sure all nodes are in the same AZ? | ## ❓ Questions and Help ### Please note that this issue tracker is not a help form and this issue will be closed. Before submitting, please ensure you have gone through our documentation. Here are some links that may be helpful: * [What is torchelastic?](../../README.md) * [Quickstart on AWS](../../aws/REA... | https://github.com/pytorch/elastic/issues/140 | closed | [] | 2021-01-25T00:14:10Z | 2021-05-17T15:47:49Z | null | thecooltechguy |
pytorch/vision | 3,283 | How to install torchvision to use video_reader backend? | I simply installed torchvision from conda (as advertised on pytorch.org). But `torchvision.set_video_backend('video_reader')` prints `video_reader video backend is not available. Please compile torchvision from source and try again`. This should be mentioned in https://pytorch.org/docs/stable/torchvision/index.html#tor... | https://github.com/pytorch/vision/issues/3283 | closed | ["enhancement", "module: documentation", "module: video"] | 2021-01-24T03:09:56Z | 2022-08-16T10:58:31Z | null | vadimkantorov |
pytorch/vision | 3,281 | Can we use DeeplabV3 in Salient Object Detection ? | Recently I have started doing more Deep Learning work in Semantic Segmentation. I can't figure out whether DeepLabV3 can be applied to Salient Object Detection. | https://github.com/pytorch/vision/issues/3281 | closed | ["question"] | 2021-01-24T01:32:09Z | 2021-04-12T07:40:18Z | null | duynguyen51 |
pytorch/xla | 2,750 | How to change torch tpu v3 baseline into torch tpu pod v2? | I was trying to port this working torch TPU v3 baseline: https://www.kaggle.com/mobassir/faster-pytorch-tpu-baseline-for-cld-cv-0-9 to a torch TPU v2 pod. I changed the hardware accelerator from TPU v3-8 to TPU v2 pod in Kaggle, used batch size = 1, and `def _mp_fn(rank, flags): global acc_list ... | https://github.com/pytorch/xla/issues/2750 | closed | [] | 2021-01-23T07:48:39Z | 2021-01-25T21:29:29Z | null | mobassir94 |
pytorch/vision | 3,274 | Different ENODATA code on macOS | ## 🐛 Bug It seems the macOS ENODATA code (96) is different from the Linux one (61). The Linux code is currently hard-coded in `Video.cpp`, which results in an (unnecessary?) error being shown when using the video decoder on macOS: https://github.com/pytorch/vision/blob/7d831a2f9b3ebab9eb8e5c899cf70b103ad6908a/torchvis... | https://github.com/pytorch/vision/issues/3274 | closed | ["question", "module: video"] | 2021-01-22T12:05:45Z | 2021-01-22T17:29:52Z | null | stefanwayon |
pytorch/serve | 943 | how to return Chinese characters with UTF-8 code | 1. When I use TorchServe, I return a list in the **postprocess function** of the handler. Each element of the list is a python dictionary and the dictionary value is Chinese characters. TorchServe directly returns a json with unicode escapes like "\u59d3". Can I control the return to use UTF-8? 2. In addition... | https://github.com/pytorch/serve/issues/943 | open | ["triaged_wait", "language"] | 2021-01-22T09:19:00Z | 2021-05-27T04:36:56Z | null | aixuedegege |
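The UTF-8 question in the TorchServe record above has a standard answer at the Python level, independent of the serving framework: `json.dumps` escapes non-ASCII characters as `\uXXXX` by default, and `ensure_ascii=False` disables that. A minimal sketch (the handler integration itself is not shown, since it depends on the TorchServe version):

```python
import json

record = {"surname": "姓", "score": 0.9}

# Default behaviour: non-ASCII characters become \uXXXX escapes.
escaped = json.dumps(record)
print(escaped)   # {"surname": "\u59d3", "score": 0.9}

# ensure_ascii=False keeps the characters as literal UTF-8.
literal = json.dumps(record, ensure_ascii=False)
print(literal)   # {"surname": "姓", "score": 0.9}
```

Returning the `ensure_ascii=False` string from the handler's postprocess step should give readable Chinese characters in the response body, provided the client treats the bytes as UTF-8.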
pytorch/vision | 3,273 | What is expected Kinetics400 dataset directory structure? | Given that the dataset does not come with official downloader scripts and that most roll their own or hack some third-party scripts, it would be much clearer if https://pytorch.org/docs/stable/torchvision/datasets.html#kinetics-400 explained what directory structure is expected by `torchvision.datasets.Kinetics400` ... | https://github.com/pytorch/vision/issues/3273 | closed | ["enhancement", "module: datasets", "module: documentation"] | 2021-01-22T01:02:24Z | 2021-03-01T10:18:21Z | null | vadimkantorov |
pytorch/vision | 3,267 | get v0.8.1 branch compile out torchvision==0.9.0a0+7b9d30e | I cloned the v0.8.1 branch and compiled it with pytorch 1.7.0, but the compiled version reports 0.9.0. Is anything wrong? | https://github.com/pytorch/vision/issues/3267 | closed | ["question"] | 2021-01-20T09:52:05Z | 2021-01-20T10:29:08Z | null | helloyan |
pytorch/pytorch | 50,709 | conv3d in r3d_18: How to maintain the dimension? | ## How to maintain the dimension in conv3d (r3d_18)? ### convolution in conv3d about padding 1. the input is (1, 3, 5, 112, 112) 2. the model is `models.video.r3d_18(pretrained=True, progress=False)` 3. the model summary: `VideoResNet( (stem): BasicStem( (0): Conv3d(3, 64, kernel_size=(3, 7, 7), str... | https://github.com/pytorch/pytorch/issues/50709 | closed | [] | 2021-01-19T03:07:16Z | 2021-01-20T14:08:55Z | null | u0251077 |
pytorch/vision | 3,261 | ImportError: libcudart.so.10.1: cannot open shared object file: No such file or directory | ## 🐛 Bug ## To Reproduce Steps to reproduce the behavior: 1. `from torchvision import _C` >>> from torchvision import _C ... | https://github.com/pytorch/vision/issues/3261 | closed | ["question", "topic: binaries"] | 2021-01-17T17:08:27Z | 2021-06-16T15:08:15Z | null | IISCAditayTripathi |
pytorch/pytorch | 50,657 | How to maximize inference speed of models implemented with C++ API ? (not using torchscript or jit) | I'm currently implementing a seq2seq model with the LibTorch C++ API (built from torch::nn::Modules, not using jit). Are there any special techniques to optimize the inference speed? Thanks. cc @yf225 @glaringlee @VitalyFedyunin @ngimel @gmagogsfm | https://github.com/pytorch/pytorch/issues/50657 | closed | ["module: performance", "module: cpp", "triaged"] | 2021-01-17T02:55:51Z | 2024-06-27T07:58:38Z | null | w1d2s |
huggingface/sentence-transformers | 693 | What is 'Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels.' ? | In your paper, you mention in **section 4.1**: `we compute the Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels.` Here is my question: what do the `gold labels` mean, and can you provide an example to explain how to calculate the Spearman's rank correlati... | https://github.com/huggingface/sentence-transformers/issues/693 | closed | [] | 2021-01-15T08:46:57Z | 2021-01-15T09:55:00Z | null | Gpwner |
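For the STS benchmarks referenced in that record, the gold labels are the human-annotated similarity scores for each sentence pair (0 to 5 in STS), and Spearman correlation is just Pearson correlation computed on the ranks of both lists. A dependency-free sketch (no tie handling, which `scipy.stats.spearmanr` does properly):

```python
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def ranks(values):
    # rank 1 for the smallest value (ties not handled in this sketch)
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

def spearman(cosine_sims, gold_labels):
    return pearson(ranks(cosine_sims), ranks(gold_labels))

print(spearman([0.1, 0.4, 0.8], [0.0, 2.5, 5.0]))  # 1.0 (perfect monotonic agreement)
```

In the paper's setup, `cosine_sims` would be the cosine similarities of the two sentence embeddings per pair, and `gold_labels` the human scores.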
pytorch/xla | 2,733 | How to install Torch_XLA in my own laptop? | ## ❓ Questions and Help I want to build an environment for Torch_XLA on my own laptop with Anaconda3, but I can't find any information about this. Is it possible to install Torch_XLA with Anaconda3 or pip? | https://github.com/pytorch/xla/issues/2733 | closed | [] | 2021-01-15T02:38:39Z | 2021-04-09T04:54:46Z | null | TianshengSun |
pytorch/examples | 870 | Permissions to contribute | Hi there, I thought I could contribute a few notebooks with really low barrier to entry for concepts like regression using tensors and for loops, small and highly documented shallow nets to illustrate concepts etc. I tried to push a notebook today to a branch I checked out for a PR but don't have permissions. How I can... | https://github.com/pytorch/examples/issues/870 | closed | [] | 2021-01-13T13:26:02Z | 2022-03-09T20:16:51Z | 1 | rbownes |
huggingface/datasets | 1,733 | connection issue with glue, what is the data url for glue? | Hi, my code sometimes fails due to a connection issue with glue. Could you tell me the URL the datasets library is trying to read GLUE from, so I can test whether the issue is on my side or not? Thanks. | https://github.com/huggingface/datasets/issues/1733 | closed | [] | 2021-01-13T08:37:40Z | 2021-08-04T18:13:55Z | null | ghost |
pytorch/vision | 3,246 | assert error len(grid_sizes) == len(strides) == len(cell_anchors) | It looks like a bug. When I do not set the AnchorGenerator() in FasterRCNN, the default anchor_sizes in **detection/faster_rcnn.py** line **182** shows that 'anchor_sizes = ((32,), (64,), (128,), (512,))', which causes len(cell_anchors) == 5. And I found that in **detection/faster_rcnn.py** line **120** the anchor... | https://github.com/pytorch/vision/issues/3246 | closed | ["question"] | 2021-01-13T03:30:16Z | 2021-01-20T11:06:09Z | null | ghost |
huggingface/transformers | 9,556 | Where is convert_bert_original_tf_checkpoint_to_pytorch.py? | Hi: I am getting the following error when implementing entity extraction in BERT: OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index']. I am very new to using BERT, and noted that [issue 2110](https://github.com/huggingface/transformers/issues/2110) had a similar issue. Issue 2... | https://github.com/huggingface/transformers/issues/9556 | closed | ["wontfix", "Migration"] | 2021-01-13T02:49:48Z | 2021-03-06T00:13:15Z | null | sednaasil |
pytorch/pytorch | 50,426 | How to do gathering on a tensor with two-dim indexing | ### Question Hi, I want to add a symbolic function to a custom PyTorch op and export it to ONNX using existing ONNX ops. There is a two-dim indexing operation. I have tried `index_select`, but it does not work. Could anyone take a look and help me with this? ### Further information Sample code: `def my_custom_op(dat... | https://github.com/pytorch/pytorch/issues/50426 | closed | [] | 2021-01-12T10:21:14Z | 2021-01-12T22:15:39Z | null | RunningLeon |
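For the two-dim indexing question above, the operation usually wanted is `torch.gather` rather than `index_select` (which only takes a 1-D index). A pure-Python sketch of the dim=1 semantics; as far as I know, in ONNX export this maps to the GatherElements op from opset 11 on:

```python
def gather_dim1(data, index):
    # Mirrors torch.gather(data, 1, index): out[i][j] = data[i][index[i][j]]
    return [[row[j] for j in idx_row] for row, idx_row in zip(data, index)]

data  = [[10, 11, 12],
         [20, 21, 22]]
index = [[2, 0],
         [1, 1]]
print(gather_dim1(data, index))  # [[12, 10], [21, 21]]
```

The equivalent tensor call is `torch.gather(data, 1, index)` with `index` a LongTensor of the same number of rows as `data`.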
pytorch/pytorch | 50,346 | how to save weights when using RPC framework | Hi, I am using the RPC framework to split the model across different processes/ranks. However, I notice that calling torch.save will only save the weights of the part of the model on a single rank. I am wondering if there is a way to save the weights of all model parts into one file? cc @pietern @mrshenli @pritamdaman... | https://github.com/pytorch/pytorch/issues/50346 | open | ["oncall: distributed", "triaged", "module: rpc"] | 2021-01-10T08:26:37Z | 2024-11-18T17:04:45Z | null | FrankLeeeee |
pytorch/TensorRT | 267 | prim::ListUnpack unable to get schema | When I try to compile a model, I get this error: `DEBUG: Unable to get schema for Node %b.1 : int, %nframe.1 : int, %c : int, %h.1 : int, %w.1 : int = prim::ListUnpack(%15) (NodeConverterRegistry.Convertable) terminate called after throwing an instance of 'trtorch::Error' what(): [enforce fail at co... | https://github.com/pytorch/TensorRT/issues/267 | closed | ["question"] | 2021-01-08T09:28:32Z | 2021-01-22T19:51:16Z | null | inocsin |
pytorch/vision | 3,233 | Which paper is torchvision.ops.deform_conv2d from? | ## 📚 Documentation ... | https://github.com/pytorch/vision/issues/3233 | closed | ["question", "module: documentation"] | 2021-01-08T09:17:08Z | 2021-01-08T10:11:11Z | null | songyuc |
pytorch/pytorch | 50,139 | How to correctly nest datasets and dataloaders? | ## ❓ Questions and Help Hi, I am asking here because it seemed like the right place; if it isn't, please tell me where to ask. Consider a stream of tabular data: `import pandas as pd; import numpy as np; def data_stream(): for _ in range(1000): df = pd.DataFrame({ 'a': n... | https://github.com/pytorch/pytorch/issues/50139 | closed | [] | 2021-01-06T11:44:07Z | 2021-01-07T00:46:10Z | null | noamzilo |
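For the nested dataset/dataloader question above, the usual pattern is not to nest loaders but to flatten the chunk stream inside a single iterable dataset. A framework-free sketch of that flattening (in PyTorch this logic would live in an `IterableDataset.__iter__`; the chunk contents here are made up):

```python
def chunk_stream():
    # Stand-in for a stream of tabular chunks (e.g. DataFrames).
    for start in range(0, 6, 2):
        yield [{"a": i, "b": i * 10} for i in range(start, start + 2)]

def row_stream(chunks):
    # Flatten a stream of chunks into individual rows; this is the
    # pattern an IterableDataset.__iter__ would implement.
    for chunk in chunks:
        for row in chunk:
            yield row

rows = list(row_stream(chunk_stream()))
print(len(rows), rows[0])  # 6 {'a': 0, 'b': 0}
```

A single DataLoader can then batch directly from that iterable, instead of wrapping a DataLoader per chunk.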
pytorch/tutorials | 1,304 | NLP FROM SCRATCH: TRANSLATION WITH A SEQUENCE TO SEQUENCE NETWORK AND ATTENTION | Hi, I'm exhausted... how can I save and load the model in the future? | https://github.com/pytorch/tutorials/issues/1304 | closed | [] | 2021-01-06T10:45:46Z | 2021-06-02T19:39:35Z | 1 | aloska |
pytorch/TensorRT | 266 | How to convert model from double to float | When I try to compile a torchscript model, I get this log: `DEBUG: [TRTorch Conversion Context] - Found IValue containing object of type Double(requires_grad=0, device=cpu) terminate called after throwing an instance of 'trtorch::Error' what(): [enforce fail at core/util/trt_util.cpp:293] Expected aten_trt_type_... | https://github.com/pytorch/TensorRT/issues/266 | closed | ["question", "component: core"] | 2021-01-06T09:59:10Z | 2022-08-12T21:10:14Z | null | inocsin |
pytorch/pytorch | 50,118 | torch.where scalar/tensor documentation is unclear and not formatted | ## 📚 Documentation See: `Currently valid scalar and tensor combination are 1. Scalar of floating dtype and torch.double 2. Scalar of integral dtype and torch.long 3. Scalar of complex dtype and torch.complex128` I believe these are supposed to be on separate lines. Also, this message comes before the type i... | https://github.com/pytorch/pytorch/issues/50118 | open | ["module: docs", "triaged", "module: sorting and selection"] | 2021-01-05T22:52:49Z | 2021-01-07T17:14:35Z | null | gchanan |
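The formatting complaint above concerns how the docs list the dtype a Python scalar is promoted to when mixed with a tensor (float scalar with torch.double, int with torch.long, complex with torch.complex128). The selection semantics themselves are simple; a scalar-broadcasting sketch of the elementwise select:

```python
def where(cond, x, y):
    # Elementwise select: out[i] = x[i] if cond[i] else y[i].
    # A scalar x or y is broadcast against the condition.
    n = len(cond)
    xs = x if isinstance(x, list) else [x] * n
    ys = y if isinstance(y, list) else [y] * n
    return [a if c else b for c, a, b in zip(cond, xs, ys)]

print(where([True, False, True], [1, 2, 3], 0))  # [1, 0, 3]
```

In PyTorch the same call is `torch.where(cond, x, 0)`; the documented dtype rules only decide how that scalar `0` is typed.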
pytorch/pytorch | 50,112 | need a clear guide for when and how to use torch.cuda.set_device() | ## 🚀 Feature I find myself quite unclear about `torch.cuda.set_device()`. The current documentation is very unsatisfactory, ambiguous and confusing, e.g. the first 3 lines of the code sample: https://pytorch.org/docs/stable/notes/cuda.html#cuda-semantics ... | https://github.com/pytorch/pytorch/issues/50112 | open | ["module: docs", "module: cuda", "triaged", "needs design"] | 2021-01-05T22:11:26Z | 2025-12-26T12:57:46Z | null | stas00 |
pytorch/examples | 866 | Structure of train_loader | Hi and thanks in advice for your help! I would like to upload my own set of images and to train the variational autoencoder model with my training set. I don't understand what is the structure of your train_loader. I see you use torch.utils.data.DataLoader on datasets.MNIST to obtain train_loader, but I don't understan... | https://github.com/pytorch/examples/issues/866 | closed | [] | 2021-01-04T15:57:34Z | 2022-03-09T21:17:33Z | 1 | Silvia-Sciva |
pytorch/pytorch | 50,030 | How to realize Cross Validation using torchtext? | I want to implement cross validation using torchtext. Here is what I have done: 1. First, I use TabularDataset to define a dataset from the JSON file. 2. Then, I use train_exs_arr = np.array(train_data.examples), d_train = train_exs_arr[train_idx].tolist(). 3. Then, I use Dataset to define a sub-dataset from Examples d... | https://github.com/pytorch/pytorch/issues/50030 | closed | [] | 2021-01-04T03:08:29Z | 2021-01-04T07:09:40Z | null | yipliu |
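The torchtext cross-validation recipe above hinges on splitting example indices into folds. A dependency-free index splitter; the per-fold sub-datasets would then be built from `train_data.examples` as in step 2 of the question (the `Dataset(examples, fields)` constructor is assumed to accept such a list):

```python
def kfold_indices(n, k):
    # Yield (train_idx, val_idx) pairs for k-fold cross validation,
    # distributing any remainder across the first n % k folds.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, val
        start += size

folds = list(kfold_indices(10, 5))
print(folds[0])  # ([2, 3, 4, 5, 6, 7, 8, 9], [0, 1])
```

Shuffling the indices once before slicing (e.g. with `random.shuffle`) gives randomized folds while keeping train/validation disjoint.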
huggingface/transformers | 9,387 | Where is the impact when output_attentions=True? | Is there any impact on performance (training/fine-tuning time, GPU memory, batch size, etc.) when `output_attentions=True`? `self.bert_encoder = BertModel.from_pretrained(hparams.architecture, output_attentions=True)  # "bert-base-uncased"` | https://github.com/huggingface/transformers/issues/9387 | closed | ["wontfix"] | 2021-01-02T23:16:57Z | 2021-03-06T00:13:32Z | null | celsofranssa |
pytorch/xla | 2,707 | How to write pure Python function which can be ran on TPUs while using PyTorch-XLA? | I have existing code to train EfficientNet using PyTorch, which contains custom augmentations like CutMix, MixUp, etc. in my training loop. This runs perfectly on GPU. Now I want to change my code so that it can run on TPUs. I've made the required changes to run my code on 8 TPU cores using PyTorch XLA, but it runs very... | https://github.com/pytorch/xla/issues/2707 | closed | [] | 2020-12-31T14:25:56Z | 2021-01-08T17:34:16Z | null | Kaushal28 |
pytorch/examples | 862 | Why not move images onto gpu? | https://github.com/pytorch/examples/blob/792d336019a28a679e29cf174e10cee80ead8722/imagenet/main.py#L284 I'm trying to train vgg on imagenet with single-node DataParallel and no multiprocessing. But I find that 'images.device' before computation is 'cpu', while 'target.device=cuda:0'. I'm not sure why these four lines of co... | https://github.com/pytorch/examples/issues/862 | closed | ["good first issue"] | 2020-12-29T13:52:36Z | 2022-04-28T14:55:08Z | 3 | I-Doctor |
pytorch/pytorch | 49,888 | How to apply functions to nested modules? | ## ❓ Questions and Help Hi all, I understood that when we want to apply a certain function to layers in a model, we can call self.apply(_function), for instance to apply weight norm to all convolutional layers. I checked the documentation of module.apply(), where it says the function will be applied to all the childr... | https://github.com/pytorch/pytorch/issues/49888 | closed | ["module: nn", "triaged"] | 2020-12-28T12:34:25Z | 2020-12-28T17:34:15Z | null | 121898 |
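On the `module.apply()` question above: although the docs talk about "children", `nn.Module.apply` recurses through the whole subtree, calling the function on every descendant and then on the module itself. A minimal stand-in tree showing the traversal order, so a type check inside the function (e.g. `isinstance(m, nn.Conv2d)`) is enough to target only conv layers at any nesting depth:

```python
class Node:
    # Minimal stand-in for nn.Module's containment tree.
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def apply(self, fn):
        # nn.Module.apply works the same way: recurse into every
        # child first, then call fn on self.
        for child in self.children:
            child.apply(fn)
        fn(self)
        return self

tree = Node("root", [Node("conv1"), Node("block", [Node("conv2")])])
visited = []
tree.apply(lambda m: visited.append(m.name))
print(visited)  # ['conv1', 'conv2', 'block', 'root']
```

Note that `conv2`, nested inside `block`, is still visited, which is why a single top-level `model.apply(fn)` reaches every layer.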
pytorch/pytorch | 49,862 | How to transform the adjacency matrix into the incidence matrix? | ## ❓ Questions and Help How can I transform the adjacency matrix into the incidence matrix using the pytorch functions provided? It's easy to implement using for loops, but that is inefficient. | https://github.com/pytorch/pytorch/issues/49862 | closed | [] | 2020-12-26T02:34:08Z | 2020-12-26T03:31:19Z | null | zlpure |
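On the adjacency-to-incidence question above: once the edge list is extracted from the upper triangle, the incidence matrix is just a scatter of ones. A plain-Python sketch for an undirected, unweighted graph; in PyTorch the loop-free equivalent would extract edges with `adj.triu(1).nonzero()` and scatter ones into a zeros tensor:

```python
def incidence_from_adjacency(adj):
    # Build the (n_nodes x n_edges) incidence matrix of an
    # undirected graph from its adjacency matrix.
    n = len(adj)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if adj[i][j]]
    inc = [[0] * len(edges) for _ in range(n)]
    for e, (i, j) in enumerate(edges):
        inc[i][e] = 1
        inc[j][e] = 1
    return inc

# Triangle graph: 3 nodes, 3 edges
adj = [[0, 1, 1],
       [1, 0, 1],
       [1, 1, 0]]
print(incidence_from_adjacency(adj))
# [[1, 1, 0], [1, 0, 1], [0, 1, 1]]
```

Each column has exactly two ones, one per endpoint of the corresponding edge, which is the defining property to sanity-check the result against.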
pytorch/pytorch | 49,855 | NN.CTCloss may be something wrong? How to decode CTC results? | pytorch 1.7.0, windows, python 3.7.5. I tried to train the OCR recognition model with this code, where nn.CTCLoss was used: https://github.com/WenmuZhou/PytorchOCR/tree/master/tools/rec_train.py. Loss went down to 0.02 and acc up to 0.99. Then I try to run inference with the model using https://github.com/WenmuZhou/PytorchOCR/tree/master/to... | https://github.com/pytorch/pytorch/issues/49855 | closed | [] | 2020-12-25T15:18:50Z | 2020-12-29T20:43:02Z | null | williamlzw |
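A common cause of the symptom in the CTC record above (loss near zero but garbage at inference) is a missing decode step: the per-timestep argmax must have repeats collapsed and blanks removed. A greedy-decode sketch, with blank index 0 assumed (the PytorchOCR repo's own decoding may differ):

```python
def ctc_greedy_decode(argmax_path, blank=0):
    # Collapse a best-path label sequence: merge repeats, drop blanks.
    out = []
    prev = None
    for label in argmax_path:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out

# e.g. per-timestep argmax over the network output
path = [0, 3, 3, 0, 3, 5, 5, 0]
print(ctc_greedy_decode(path))  # [3, 3, 5]
```

Note the blank between the two 3s is what lets CTC emit the same character twice in a row; decoding without blank handling would wrongly merge them.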
pytorch/vision | 3,198 | Boxes with negative scores in NMS input? | Hi, I found that the use of NMS in `RegionProposalNetwork` can take boxes with negative scores as inputs. I found this when running MaskRCNN in the v0.8 release. https://github.com/pytorch/vision/blob/90645ccd0e774ad76200245e32222a23d09f2312/torchvision/models/detection/rpn.py#L261 In other uses of NMS in `ROIHea... | https://github.com/pytorch/vision/issues/3198 | closed | ["question", "topic: object detection"] | 2020-12-21T22:53:14Z | 2021-01-06T13:57:38Z | null | masahi |