| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/pytorch | 33,343 | How to convert the model to onnx in libtorch? | struct Net : torch::nn::Module {
Net()
: conv1(torch::nn::Conv2dOptions(1, 20, /*kernel_size=*/5).stride(1)),
conv2(torch::nn::Conv2dOptions(20, 40, /*kernel_size=*/5)),
fc1(640, 120),
fc2(120, 10) {
register_module("conv1", conv1);
register_module("conv2", conv2);
register_module("conv2_drop", c... | https://github.com/pytorch/pytorch/issues/33343 | closed | [
"module: onnx",
"module: cpp",
"triaged"
] | 2020-02-14T13:14:23Z | 2021-11-08T22:01:30Z | null | bjliuzp |
pytorch/pytorch | 33,341 | how-to-adjust-learning-rate using libtorch | ## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)... | https://github.com/pytorch/pytorch/issues/33341 | open | [
"triaged"
] | 2020-02-14T11:25:57Z | 2020-02-14T17:57:33Z | null | w1005444804 |
pytorch/examples | 715 | C++ tutorial on sentence classification | @soumith
Currently, all the examples in C++ are related to image classification/ GAN. There are not many examples on text/nlp. I would like to include a starter example on sentence classification in c++. Can I go ahead and work on this?? | https://github.com/pytorch/examples/issues/715 | open | [
"c++"
] | 2020-02-13T17:05:24Z | 2024-03-16T23:09:13Z | 4 | avinashsai |
pytorch/vision | 1,883 | Torchvision NMS description | I think here should be `boxes with IoU >= iou_threshold`. Is this only a documentation typo and the cuda function called here is actually correctly implemented?
https://github.com/pytorch/vision/blob/bf8595798eaccbaffb6c04db11406426eb1b3800/torchvision/ops/boxes.py#L22 | https://github.com/pytorch/vision/issues/1883 | closed | [
"question",
"module: documentation"
] | 2020-02-13T14:53:30Z | 2020-02-13T18:03:20Z | null | sharifza |
pytorch/vision | 1,882 | How to modify the loss function of models in torchvison? | Excuse me if this question is a little stupid, for I just recently got access to this extraordinary field and cannot find the answer after some researching.
I invoked the pretrained mrcnn model in torchvison however its output wasn't so ideal. So I wonder if I can modify the loss function to improve its performance w... | https://github.com/pytorch/vision/issues/1882 | closed | [
"question",
"module: models",
"topic: object detection"
] | 2020-02-13T13:23:31Z | 2023-06-28T15:01:18Z | null | Michael-J98 |
pytorch/tutorials | 850 | Why is the pytorch sphinx theme included as a submodule? | I'm not an expert in sphinx, but after a lot of testing and headache while trying to improve a tutorial I really wonder why the sphinx theme under `./src` is included at all (as a submodule on github).
If you clone the repo with `git clone ...` it doesn't get downloaded.
The theme gets downloaded with `pip install -e... | https://github.com/pytorch/tutorials/issues/850 | closed | [
"build issue"
] | 2020-02-13T13:02:16Z | 2024-09-06T21:25:48Z | 1 | wAuner |
pytorch/vision | 1,878 | So, what is the meaning for DeepLabHead in deeplabv3.py | Hi guys,
I am implementing the deeplabv3+, imitating the pattern of deeplabv3.py,
but I don't quite understand the meaning for DeepLabHead,
so do I need to put the upsampling operations in the DeepLabHead?
Any answer and idea will be appreciated! | https://github.com/pytorch/vision/issues/1878 | closed | [
"question",
"module: models",
"topic: semantic segmentation"
] | 2020-02-12T09:45:51Z | 2020-02-14T05:57:44Z | null | songyuc |
pytorch/vision | 1,875 | [Bug?] roialign operation returning incorrect numerics | torchvision.ops.roialign is returning incorrect results for a simple test case-
```
# x: tensor of size (1,1,3,3)
x= torch.tensor([[[[1,2,3],[4,5,6],[7,8,9]]]], dtype=torch.float)
boxes = torch.tensor(([[0, 0, 2, 2, 0]]), dtype=torch.float)
z = torchvision.ops.roi_align(x, boxes, (2,2),sampling_ratio=1)
```... | https://github.com/pytorch/vision/issues/1875 | closed | [
"question",
"module: ops"
] | 2020-02-11T21:06:04Z | 2020-02-14T13:24:48Z | null | coderAddy |
pytorch/vision | 1,872 | Shouldn't have a `+1` in the NMS implementation for the boxes width/height computation ? | The standard is to have a bounding box defined as quoted [here](https://github.com/facebookresearch/Detectron/blob/master/detectron/utils/boxes.py#L23).
But in the NMS [source code](https://github.com/pytorch/vision/blob/e2a8b4185e2b668b50039c91cdcf81eb4175d765/torchvision/csrc/cpu/nms_cpu.cpp), there is no `+1` whe... | https://github.com/pytorch/vision/issues/1872 | closed | [
"question",
"module: ops"
] | 2020-02-11T15:11:17Z | 2020-02-14T13:59:38Z | null | viniciusarruda |
pytorch/vision | 1,870 | Unexpected behavior of torchvision.ops.nms | Following the example below and looking the nms [source code](https://github.com/pytorch/vision/blob/e2a8b4185e2b668b50039c91cdcf81eb4175d765/torchvision/csrc/cpu/nms_cpu.cpp), I expected a `NaN` error, as the intersection and union will be zero.
import torchvision # torchvision==0.5.0+cpu
import torch ... | https://github.com/pytorch/vision/issues/1870 | closed | [
"question",
"module: ops"
] | 2020-02-11T12:09:02Z | 2020-02-27T19:57:35Z | null | viniciusarruda |
pytorch/vision | 1,869 | It seems there is no upsampling operations in the implementation of Deeplabv3? | Hi, guys,
I am learning about the the implementation of Deeplabv3 today,
and I find that it seems, there is no upsampling operations in deeplabv3.py,
so where is the upsampling operations of Deeplabv3 model?
Any answer or idea will be appreciated! | https://github.com/pytorch/vision/issues/1869 | closed | [
"question",
"module: models",
"topic: semantic segmentation"
] | 2020-02-11T11:12:13Z | 2020-02-13T18:23:26Z | null | songyuc |
pytorch/vision | 1,860 | Is there a backbone implementation of Xception? | Hi, guys,
I want to know if there is a backbone implementation of Xception?
Any answer or idea will be appreciated! | https://github.com/pytorch/vision/issues/1860 | closed | [
"question",
"module: models",
"topic: classification"
] | 2020-02-10T10:06:27Z | 2020-02-10T13:46:21Z | null | songyuc |
pytorch/vision | 1,859 | Is there an implementation of Deeplabv3+? | Hi, guys,
I want to know if there is an implementation of Deeplabv3+?
Any answer will be appreciated! | https://github.com/pytorch/vision/issues/1859 | closed | [
"question",
"module: models",
"topic: semantic segmentation"
] | 2020-02-10T07:24:51Z | 2020-02-10T14:10:28Z | null | songyuc |
pytorch/vision | 1,856 | FasterRCNN ground truth boxes reference system | Hi,
I'm trying to train a FasterRCNN on a custom dataset.
I have the ground truth bounding boxes in the [x1, y1, x2, y2] format, where:
- 0 <= x1 <= x2 <= H
- 0 <= y1 <= y2 <= W
- `H, W = img.shape` with img being loaded with cv2
With numpy, if I extract `img[x1:x2, y1:y2]`, it's the correct portion of the image.... | https://github.com/pytorch/vision/issues/1856 | closed | [
"question",
"module: models",
"topic: object detection"
] | 2020-02-07T12:55:44Z | 2020-02-11T07:54:48Z | null | Robylyon93 |
pytorch/vision | 1,854 | Clarify the quantization bits in the pretrained models? | Thanks for the great work, and quantized pretrained models had been added in torchvision 0.5.
https://github.com/pytorch/vision/releases
>Quantized models
torchvision now provides quantized models for ResNet, ResNext, MobileNetV2, GoogleNet, InceptionV3 and ShuffleNetV2, as well as reference scripts for quantizing... | https://github.com/pytorch/vision/issues/1854 | closed | [
"question",
"module: documentation",
"module: models.quantization"
] | 2020-02-07T04:50:29Z | 2020-03-10T10:39:08Z | null | kentaroy47 |
pytorch/pytorch | 33,022 | How do you convert Torch output iOS NSNumber to UIImage | I recently trained a model in PyTorch and created the .pt model file. I was able to use the model file in iOS with https://pytorch.org/mobile/ios/ to get an output.
But the output is an array of NSNumber.
How can I convert that to UIImage?
Here's how i'm loading the model:
```
private lazy var module: ... | https://github.com/pytorch/pytorch/issues/33022 | closed | [
"oncall: mobile",
"module: ios"
] | 2020-02-05T21:41:29Z | 2020-02-07T19:12:04Z | null | rooseveltrp |
pytorch/vision | 1,848 | training FCN and DeepLab for segmentation | does PyTorch provide steps on how to use the deeplab or fcn for training a segmentation task?
if it already exists, where I can find it? | https://github.com/pytorch/vision/issues/1848 | closed | [
"question",
"module: reference scripts",
"topic: semantic segmentation"
] | 2020-02-04T19:34:28Z | 2020-02-13T17:50:09Z | null | isalirezag |
huggingface/sentence-transformers | 120 | What is the expected number of epochs for training sentenceBERT | Hi,
Given a model in {BERT, XLM, .XLnet, ...}, do you have a dictionary of estimated best number of epochs for training your Siamese Network on NLI dataset?
Else, what would be your suggestion on this? (other than just keep trying with different epochs parameters since it takes a lot of computational time)
... | https://github.com/huggingface/sentence-transformers/issues/120 | open | [] | 2020-02-04T14:17:22Z | 2020-06-08T19:48:20Z | null | MastafaF |
pytorch/vision | 1,847 | Required range is confusing in torchvision.utils.save_image | https://discuss.pytorch.org/t/float-vs-int-in-torchvision-utils-save-image/68596 | https://github.com/pytorch/vision/issues/1847 | closed | [
"question",
"module: transforms"
] | 2020-02-04T07:47:28Z | 2025-01-23T10:55:55Z | null | chinglamchoi |
huggingface/transformers | 2,705 | What is the input for TFBertForSequenceClassification? | # ❓ Questions & Help
What is the input for TFBertForSequenceClassification?
## Details
I have a simple multiclass text data on which I want to train the BERT model.
From docs I have found the input format of data:
```a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: ... | https://github.com/huggingface/transformers/issues/2705 | closed | [] | 2020-02-01T10:20:29Z | 2020-03-12T08:41:25Z | null | sainimohit23 |
pytorch/pytorch | 32,690 | How to customize build torchscript model to be used in end devices codebase | ## 🚀 Feature
I want to compile my model to be executed in the Python/C script running on our customers computers/end devices, without the need to load the entire torch/libtorch package, but only what is needed based on the model operations.
## Motivation
Currently, the size of my ResNet model (for example) is ~10... | https://github.com/pytorch/pytorch/issues/32690 | open | [
"oncall: jit",
"triaged",
"oncall: mobile"
] | 2020-01-28T10:11:07Z | 2020-02-28T18:54:55Z | null | danmalowany-allegro |
pytorch/tutorials | 833 | Using encoder output in attention model | I study this [NLP from scratch](https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html) tutorial. Encoder's output shape is `(seq_len, batch, hidden_size)`
Why does the author only save `[0, 0]` part (later is needed for attention weights) but not `[0]`:
https://github.com/pytorch/tutorials/bl... | https://github.com/pytorch/tutorials/issues/833 | closed | [] | 2020-01-25T18:06:14Z | 2020-01-29T19:03:51Z | 0 | kenenbek |
pytorch/pytorch | 32,485 | How to specify pytroch as a package requirement on windows ? | ## ❓ Questions and Help
I have a python package which depends on pytorch and which I'd like windows users to be able to install via pip (the specific package is: https://github.com/mindsdb/lightwood, but I don't think this is very relevant to my question).
What are the best practices for going about this ?
Are... | https://github.com/pytorch/pytorch/issues/32485 | closed | [] | 2020-01-22T09:31:44Z | 2020-01-22T10:27:03Z | null | George3d6 |
huggingface/transformers | 2,591 | What is the f1 score of Squad v2.0 on bert-base? I only got f1 score 74.78. | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hello, I am doing some experiment of squad v2.0 on bert-base (NOT bert-large).
According to the BERT paper, bert-large achieves f1 score 81.9 with squad v2.0.
Since I couldn't find the official result for bert-base, I am not sure if I... | https://github.com/huggingface/transformers/issues/2591 | closed | [] | 2020-01-20T09:03:45Z | 2020-01-22T05:03:12Z | null | YJYJLee |
pytorch/tutorials | 828 | Multiple input tutorial | I am currently trying to build a model that takes two different inputs into account, trying to generalize the interaction between both from their properties.
However, I cannot find any resource on how to build a dataset that allows multiple inputs, while it seems to be quite simple to build the neural net itself. Yet... | https://github.com/pytorch/tutorials/issues/828 | closed | [] | 2020-01-20T08:21:57Z | 2021-06-09T21:14:17Z | 6 | THinnerichs |
pytorch/pytorch | 32,418 | how to install pytorch on AMD GPU | I find that the pytorch offer one version of downloading which not requires CUDA. And I follow the instruction.
I choose the pytorch 1.4.
My OS is Windows.
Pip is used to install.
My version of python is python 3.6
CUDA None
and I run the command pip3 install torch==1.4.0+cpu torchvision==0.5.0+cpu -f https://do... | https://github.com/pytorch/pytorch/issues/32418 | closed | [] | 2020-01-20T06:19:18Z | 2023-04-10T18:58:46Z | null | PIPIKAI-Sung |
pytorch/pytorch | 32,403 | How to accelerate the compiling of pytorch | ## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)... | https://github.com/pytorch/pytorch/issues/32403 | open | [
"module: build",
"triaged"
] | 2020-01-19T13:42:14Z | 2020-01-21T23:25:36Z | null | daydayfun |
pytorch/java-demo | 3 | how and where is it better to install the LIBTORCH library localy for the project? | how and where is it better to install the LIBTORCH library localy for the project in linux(Ubuntu)?
While make proj Intellij idea write Error: "A problem occurred evaluating root project 'java-demo'. > LIBTORCH_HOME not present in environment."
| https://github.com/pytorch/java-demo/issues/3 | closed | [] | 2020-01-18T18:03:04Z | 2020-04-29T02:53:34Z | null | vit1967 |
pytorch/pytorch | 32,282 | How to convert layer_norm layer to ONNX? | I'm trying to convert my model to ONNX format for further deployment in TensorRT. Here is a sample code to illustrate my problem in layer_norm here.
``` python
import torch
from torch import nn
class ExportModel(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
... | https://github.com/pytorch/pytorch/issues/32282 | closed | [
"module: onnx",
"triaged"
] | 2020-01-16T10:53:52Z | 2020-03-23T08:24:02Z | null | rtrobin |
pytorch/vision | 1,757 | Torchvision Resnet 50 accuracy | Hey, Pytorch's (torchvision) Resnet 50 accuracy is declared to be 76.15.
But when I'm using the training script from PyTorch's repo, which is mentioned in the official torchvision website(https://pytorch.org/docs/stable/torchvision/models.html#classification):
[https://github.com/pytorch/examples/blob/master/imagenet... | https://github.com/pytorch/vision/issues/1757 | closed | [
"question",
"module: models"
] | 2020-01-16T09:43:54Z | 2021-06-30T15:08:29Z | null | Esaada |
pytorch/vision | 1,751 | module 'torchvision' has no attribute 'ops' | torchvision. ops implements operators that are specific for Computer Vision. Those operators currently do not support TorchScript. Performs non-maximum suppression (NMS) on the boxes according to their intersection-over-union (IoU)
output[image_i] = pred[torchvision.ops.boxes.batched_nms(pred[:, :4], pred[:, 4], c, ... | https://github.com/pytorch/vision/issues/1751 | closed | [
"question",
"module: ops"
] | 2020-01-15T15:01:54Z | 2020-01-15T18:45:32Z | null | omizonly |
huggingface/tokenizers | 73 | Decoding to string | Hi, thanks for this awesome library!
I want to decode BPE back to *actual* text, so that I can calculate BLEU scores. When I use the tokenizer.decoder, I get a string without any whitespace. I understand I can use a `pre_tokenizer` to get whitespaces, but in that case the decoded output would be `i can feel the mag ... | https://github.com/huggingface/tokenizers/issues/73 | closed | [
"question",
"python"
] | 2020-01-15T12:58:44Z | 2020-01-20T15:38:29Z | null | davidstap |
pytorch/vision | 1,737 | Pyramid layer | I want to extract the third layer of feature pyramid from
features = self.backbone(images.tensors) in generalized_rcnn.py
any help please? | https://github.com/pytorch/vision/issues/1737 | open | [
"question",
"module: models",
"topic: object detection"
] | 2020-01-10T15:44:20Z | 2020-01-10T16:29:44Z | null | MitraTj |
pytorch/pytorch | 32,041 | How to export L2-normalization to onnx | ## 🚀 Feature
Support export for LpNormalization from PyTorch to ONNX, thus it could be used in TensorRT model.
cc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof | https://github.com/pytorch/pytorch/issues/32041 | closed | [
"module: onnx",
"triaged",
"enhancement",
"onnx-needs-info"
] | 2020-01-10T14:37:38Z | 2022-10-24T18:08:40Z | null | stoneyang |
pytorch/vision | 1,732 | How to use Resnet to deal with one channel input through pytorch.hub ? | I did this to load the Resnet model, and since my input contains only one channel, the model does not work.
`model = torch.hub.load('pytorch/vision:v0.4.2', 'resnet18', pretrained=True)`
I know how to modify the 'resnet.py' file to satisfy my demands, but that means I must include the modified 'resnet.py' file in... | https://github.com/pytorch/vision/issues/1732 | closed | [
"question",
"module: models",
"topic: classification"
] | 2020-01-09T09:22:50Z | 2020-01-09T20:22:18Z | null | PhilWallace |
pytorch/pytorch | 31,984 | Question about how to predict the derivation of the output? | I expect a neural network predict a value and the derivation of value.Is the following code the correct way?
```python
import torch
from torch import nn
from torch.autograd import grad
class net(nn.Module):
def __init__(self):
super(net, self).__init__()
self.lin1 = nn.Linear(3, 30)
... | https://github.com/pytorch/pytorch/issues/31984 | closed | [] | 2020-01-09T07:31:25Z | 2020-01-09T18:57:24Z | null | thu-wangz17 |
pytorch/vision | 1,723 | torchvision fail to use GPU. | While I am using [detectron2](https://github.com/facebookresearch/detectron2), I meet the problem that some function in torchvision can't use GPU.
The details are here: https://github.com/facebookresearch/detectron2/issues/469
It seems an install problem. Directly using conda to install torchvision should be ok f... | https://github.com/pytorch/vision/issues/1723 | closed | [
"question",
"topic: build"
] | 2020-01-07T09:23:49Z | 2020-05-11T12:18:51Z | null | dihuangdh |
huggingface/transformers | 2,411 | What is the difference between T5Model, T5WithLMHeadModel, T5PreTrainedModel? | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I notice that for T5 model, there are more choices(T5Model, T5WithLMHeadModel, T5PreTrainedModel) than BERT or GPT. What is the difference between these three? I think all three are pre-trained model. We do not use T5PreTrainedModel in ... | https://github.com/huggingface/transformers/issues/2411 | closed | [
"wontfix"
] | 2020-01-06T07:01:32Z | 2020-03-13T08:09:42Z | null | g-jing |
pytorch/vision | 1,720 | Enquiry on Implementation of RandomHorizontalFlip (in transforms.py from references folder) | I am a bit confused by the implementation RandomHorizontalFlip defined [here](https://github.com/pytorch/vision/blob/master/references/detection/transforms.py). Note the following snippet extracted:
```
class RandomHorizontalFlip(object):
def __init__(self, prob):
self.prob = prob
def __call__(se... | https://github.com/pytorch/vision/issues/1720 | closed | [
"question",
"module: transforms",
"module: reference scripts"
] | 2020-01-05T11:04:12Z | 2020-01-08T10:28:44Z | null | riven314 |
pytorch/pytorch | 31,869 | How to save int value in ctx.save_for_backward | I want to define a new memory op, and first impl a new memory function(torch.autograd.Function), but forward and backward are static method,
and inputs have some int value for some config(like stride in conv function), ctx.save_for_backward can't save int value, How to fix this problem?
First, i want to f... | https://github.com/pytorch/pytorch/issues/31869 | closed | [] | 2020-01-05T07:13:11Z | 2020-01-06T05:22:12Z | null | kuramawzw1 |
pytorch/pytorch | 31,865 | how to install pytorch 0.4.1 | For some reason I have to install 0.4.1, I tired many times including install from source, I tried to install 0.4.1 under cuda9.0 and cuda 9.2, but it failed. my card is 2080ti. please help and tell me if there is a way to solve the problem, thanks! | https://github.com/pytorch/pytorch/issues/31865 | closed | [] | 2020-01-05T03:25:46Z | 2020-01-06T05:24:02Z | null | lapetite123 |
pytorch/pytorch | 31,853 | How to modify the internal calculation process of LSTM in pytorch-v1.1.0? | ## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)... | https://github.com/pytorch/pytorch/issues/31853 | closed | [] | 2020-01-04T03:14:06Z | 2020-01-06T05:24:17Z | null | zwd2016 |
pytorch/pytorch | 31,823 | How to set quantization aware training scaling factors? | ## ❓ Questions and Help
when i use quantization aware training , The weight tensor scaling factors is a standard floating point number.
I want to convert my model as 8bit at FPGA, so the weight tensor scaling factor must be an integer power-of-two value exponent. Is there such an option? what should I do?
| https://github.com/pytorch/pytorch/issues/31823 | closed | [] | 2020-01-03T10:53:36Z | 2020-01-06T05:24:37Z | null | sunkr1995 |
pytorch/pytorch | 31,821 | How to convert model with a new QConv to onnx? | ## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)... | https://github.com/pytorch/pytorch/issues/31821 | closed | [
"module: onnx",
"oncall: quantization",
"triaged"
] | 2020-01-03T07:56:58Z | 2021-12-16T00:16:35Z | null | Wuqiman |
pytorch/pytorch | 31,818 | How to distinguish different layers in hook? | ## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
A way to distinguish different layers in each module itself
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to ... | https://github.com/pytorch/pytorch/issues/31818 | open | [
"module: nn",
"triaged"
] | 2020-01-03T03:48:13Z | 2022-09-22T22:55:48Z | null | I-Doctor |
pytorch/examples | 689 | DDP training multi nodes nccl error | pytroch:1.3.1
python:3.6
system:ubuntu 16
cuda:10.0
when i run imagenet main.py in multi-nodes ,there is a error likes,(single node can run ):
Use GPU: 1 for training
Use GPU: 0 for training
=> creating model 'resnet50'
=> creating model 'resnet50'
id-d3:714:714 [0] misc/ibvwrap.cu:63 NCCL WARN Failed to o... | https://github.com/pytorch/examples/issues/689 | open | [
"distributed"
] | 2020-01-02T03:56:27Z | 2024-09-27T05:43:31Z | 1 | ciel-zhang |
pytorch/vision | 1,710 | finetuning inception_v3 | finetuning resnet18 as
train: `models.resnet18(pretrained=True)`
val: `models.resnet18()`
But while finetuning inception_v3 as above, I got poor result. The valuation must be
val: `models.inception_v3(pretrained=True)`
I spent much time stucking here.. | https://github.com/pytorch/vision/issues/1710 | closed | [
"question",
"module: models"
] | 2020-01-01T14:45:26Z | 2020-01-08T10:54:17Z | null | stormchasingg |
huggingface/transformers | 2,372 | What is the "could not find answer" warning in squad.py | Hello,
I am trying to run run_squad.py for BERT (italian-cased) with an italian version of squad.
During the creation of features from dataset, I got some answer skipped like in the following:
<img width="478" alt="Screenshot 2019-12-30 at 23 30 19" src="https://user-images.githubusercontent.com/26765504/71603... | https://github.com/huggingface/transformers/issues/2372 | closed | [
"wontfix"
] | 2019-12-30T22:31:58Z | 2020-08-29T15:05:37Z | null | cppntn |
pytorch/vision | 1,707 | 'loss_dict' error from 'train_one_epoch' | Navigating through the code in 'train_one_epoch', running this line:
`loss_dict = model(image,targets)`
gives the error:
> 397 # RPN uses all feature maps that are available
--> 398 features = list(features.values())
399 objectness, pred_bbox_deltas = self.head(features)
400 ... | https://github.com/pytorch/vision/issues/1707 | closed | [
"question",
"module: reference scripts"
] | 2019-12-30T10:32:15Z | 2020-10-10T09:43:24Z | null | madiltalay |
pytorch/pytorch | 31,699 | How to implement multiple different kernel shapes in 2D convolution? | Hello. I'm currently working on spherical convolutional network topic. Right now I'm trying to develop a new kind of kernel used for the convolutional layer.
The usual kernel is 3x3 matrix. But for spherical images, after being projected onto a plane using equirectangular projection, there will be distortion. So I wan... | https://github.com/pytorch/pytorch/issues/31699 | closed | [
"feature",
"module: nn",
"triaged",
"needs research"
] | 2019-12-30T08:59:59Z | 2020-01-07T15:14:06Z | null | vhchuong |
pytorch/pytorch | 31,696 | how to set cuda stream by call Aten function | at::Tensor a = at::ones({16, 32}, opts);
at::Tensor b = at::randn({32, 64}, opts);
at::Tensor b1 = at::randn({32, 64}, opts);
auto c = at::matmul(a,b);
auto c1 = at::matmul(a,b1);
I want to call matmul by attach different cuda stream.
call at::matmul(a,b) by using stream1 , and call at::matmul(a,b1) by using... | https://github.com/pytorch/pytorch/issues/31696 | closed | [
"module: cuda",
"triaged"
] | 2019-12-30T05:44:55Z | 2019-12-31T06:48:42Z | null | kuramawzw1 |
pytorch/pytorch | 31,685 | What is the significance of torchvision._is_tracing()? | ## What is the significance of torchvision._is_tracing()? ❓
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion... | https://github.com/pytorch/pytorch/issues/31685 | open | [
"triaged",
"module: vision"
] | 2019-12-29T04:07:08Z | 2019-12-30T21:50:08Z | null | AyanKumarBhunia |
pytorch/tutorials | 799 | Should I rewrote the "dcgan_faces_tutorial notebook" for the student to able to run it on colab for that 1GB dataset? | OK, I see it sets " data root = "/home/ubuntu/facebook/datasets/celeba..."". This is definitely not for Colab, and there are some students' computer does not have a GPU. I have a solution. I have rewritten it, so we can just download the zip file from google drive and unzip it. However, this requires to upload the 1GB ... | https://github.com/pytorch/tutorials/issues/799 | closed | [] | 2019-12-27T14:44:39Z | 2019-12-29T12:07:31Z | 0 | AliceSum |
pytorch/vision | 1,701 | Errors with COCO targets | I am using the COCO dataset for training with annotations available at the COCO website.
I use this dataloader:
`train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=1, shuffle=True, num_workers=4, collate_fn=collate_fn)
`
Running one iteration:
`image, target = next(iter(train_dataloader))`
... | https://github.com/pytorch/vision/issues/1701 | closed | [
"question",
"module: reference scripts"
] | 2019-12-27T07:17:14Z | 2020-01-08T10:44:53Z | null | madiltalay |
pytorch/pytorch | 31,643 | how to know the input_shape of a pretrained model ? |
hi,dear,
Just wanna know the model's input_shape,
but got nothing,
So could you help me ?
thx
| https://github.com/pytorch/pytorch/issues/31643 | closed | [] | 2019-12-27T01:12:54Z | 2019-12-27T01:49:43Z | null | ucasiggcas |
pytorch/vision | 1,699 | 'train_one_epoch' gives error while using COCO annotations | I am using the COCO dataset for training with annotations available at the COCO website.
While using the code from: [https://github.com/pytorch/vision/blob/master/references/detection/engine.py](url), I get an error:
> AttributeError: 'list' object has no attribute 'items'
for the code snippet:
`targets = [{k: ... | https://github.com/pytorch/vision/issues/1699 | closed | [
"question",
"module: reference scripts"
] | 2019-12-25T10:15:51Z | 2022-10-07T16:13:55Z | null | madiltalay |
huggingface/transformers | 2,278 | where is the script of a second step of knwoledge distillation on SQuAD 1.0? | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
In Distil part, there is a paragraph description which is "distilbert-base-uncased-distilled-squad: A finetuned version of distilbert-base-uncased finetuned using (a second step of) knwoledge distillation on SQuAD 1.0. This model reache... | https://github.com/huggingface/transformers/issues/2278 | closed | [
"wontfix"
] | 2019-12-23T09:13:26Z | 2020-05-08T15:29:08Z | null | c0derm4n |
huggingface/pytorch-image-models | 63 | what is the value range of magnitude in auto-augment when the MAX_LEVEL is set as 10. | Dear @rwightman , I have read the code about auto-augmentation and random-augmentation, and I noticed that the MAX_LEVEL is set as 10, same as the google's implementation. Also in the google implementation, they say an optimal magnitude is often in [5, 30]. But in your implementation you clip the input magnitude to be ... | https://github.com/huggingface/pytorch-image-models/issues/63 | closed | [] | 2019-12-23T08:49:19Z | 2019-12-26T23:40:49Z | null | cddlyf |
pytorch/text | 669 | How to use datasets for distributed training? | ## ❓ Questions and Help
**Description**
<!-- Please send questions or ask for help here. -->
I built a dataset from my corpus, and use each line as an Example.
It works fine at first until I try to use it for distributed training.
It seems that torch.nn.parallel.DistributedParallel has to use DistributedSamp... | https://github.com/pytorch/text/issues/669 | open | [] | 2019-12-22T03:20:56Z | 2020-01-02T17:56:48Z | null | styxjedi |
pytorch/pytorch | 31,543 | how to install torch by python3.8? | ## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)... | https://github.com/pytorch/pytorch/issues/31543 | closed | [] | 2019-12-21T03:15:45Z | 2019-12-21T05:43:47Z | null | Fenghuixueha |
pytorch/android-demo-app | 46 | How to create custom model for the PyTorchDemoApplication?Thanks | Hi, I want to learn about how to apply pytorch model on andorid platform. And this android-demo-app is very useful to me.
The PyTorchDemoApp has already been deployed on my android mobile ,and it can be runned successfully.
But I want to know how to create a custom model with my own Image data.
When I copy the mod... | https://github.com/pytorch/android-demo-app/issues/46 | open | [] | 2019-12-20T08:55:31Z | 2021-06-27T18:52:02Z | null | btdan |
pytorch/xla | 1,490 | pytorch/xla vs TF | ## ❓ Questions and Help
Hi, is training a model with pytorch xla slower than training a model with tf? Are there any other limitations to using pytorch/xla compared to TF? | https://github.com/pytorch/xla/issues/1490 | closed | ["question"] | 2019-12-19T21:03:11Z | 2019-12-19T22:01:41Z | null | bilal2vec
huggingface/transformers | 2,230 | what is the most efficient way to store all hidden layers' weights? | Hi,
I am following this [post](https://mccormickml.com/2019/05/14/BERT-word-embeddings-tutorial/) for getting all 12 hidden layers' weights for every token in a sentence.
Consider I have a short text with 2 sentences: `He stole money today. He is fishing on the Mississippi riverbank.`
I want to store 5 + 8 = 1... | https://github.com/huggingface/transformers/issues/2230 | closed | ["wontfix"] | 2019-12-19T19:41:00Z | 2020-02-24T20:38:46Z | null | vr25
pytorch/pytorch | 31,466 | how to pass trained weight to neural network module | Suppose i used own data and trained a `conv1d`, how could we pass the weight to `conv1d` in c++ like what the `PyTorch` acts ?
Noticed that the implementation of `conv1d` in `PyTorch`, we could update the parameters like `in_channels`, `out_channels`, etc in the `__init__` function. If we want to update the `weight... | https://github.com/pytorch/pytorch/issues/31466 | closed | [] | 2019-12-19T10:18:14Z | 2019-12-19T14:53:57Z | null | OswaldoBornemann |
pytorch/examples | 682 | "EOFError: Ran out of input" occurred in example mnist_hogwild | Hi, when I ran example **mnist_hogwild** on cuda, errors occurred as below:
```
File "main.py", line 66, in <module>
p.start()
File "D:\Python3.7.3\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "D:\Python3.7.3\lib\multiprocessing\context.py", line 223, in _Po... | https://github.com/pytorch/examples/issues/682 | open | ["distributed", "pickle"] | 2019-12-19T05:06:30Z | 2023-10-11T06:19:14Z | 2 | audreycs
pytorch/examples | 681 | SNLI: The examples doesn't work |
help, I try to run the snli task in examples, and I got the errors as follow:
Traceback (most recent call last):
File "C:/Users/syk/Desktop/git/examples/snli/train.py", line 35, in <module>
inputs.vocab.load_vectors(wv_dir=args.data_cache, wv_type=args.word_vectors, wv_dim=args.d_embed)
TypeError: load_vec... | https://github.com/pytorch/examples/issues/681 | closed | [] | 2019-12-18T12:50:50Z | 2020-09-13T13:50:53Z | 0 | Youarerare |
huggingface/pytorch-image-models | 61 | where is your MixNet code? I can't find it. | https://github.com/huggingface/pytorch-image-models/issues/61 | closed | [] | 2019-12-17T02:49:04Z | 2019-12-17T05:30:46Z | null | xiebinghua | |
pytorch/tutorials | 793 | Explain how we can use same dataset for training an non-training | In the [Training a Classifer tutorial](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py), explain how can we use the same dataset for training and non-training? Is it cause we shuffle to randomize and use a subset? | https://github.com/pytorch/tutorials/issues/793 | closed | ["60_min_blitz"] | 2019-12-16T23:24:55Z | 2020-05-18T17:58:46Z | 1 | jlin27
pytorch/tutorials | 790 | Clarify why there are 6 output channels | In the [Define the network section of the Neural Network tutorial](https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py), clarify why is it 6 outputs? Is it bias?
, I have added the device command at the top to offload the work on GPU.
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.device("cuda:0")
```
However... | https://github.com/pytorch/tutorials/issues/771 | closed | [] | 2019-12-15T17:06:10Z | 2021-07-30T21:55:36Z | 1 | mahmoodn |
pytorch/vision | 1,665 | Automatic Background Removal technology | I am looking for a deep learning library/sdk which can be used to remove the background from any image automatically (with quality as good as www.remove.bg).
I tried some image segmentation SDKs with pre-trained models such as Tensorflow Lite & Fritz AI, but the accuracy of the cutout mask was very low, amongst othe... | https://github.com/pytorch/vision/issues/1665 | closed | ["question", "module: models"] | 2019-12-15T06:53:21Z | 2020-03-24T15:44:36Z | null | InternetMaster1
pytorch/pytorch | 31,246 | How to do independent random number generatation in multiprocessing dataloader. | When I use num_woker > 0 in DataLoader, and I generate a random number in __getitem__ function.
I found all threads will generate the same random number...
For example, I set num_worker=8, and I want to got a random number to define my scale augmentation.
I will get
0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9
eight s... | https://github.com/pytorch/pytorch/issues/31246 | closed | ["module: dataloader", "triaged"] | 2019-12-13T08:34:29Z | 2019-12-16T17:29:43Z | null | EricKani
pytorch/text | 666 | How to use torchtext for tasks involving image/tabular data like image captioning? | ## ❓ Questions and Help
**Description**
<!-- Please send questions or ask for help here. -->
Hi, thanks for the great library. I am wondering is there a way to use torchtext Dataset for multi-modal data? An example task will be image captioning, where we need to generate some text based on the input image. Or gene... | https://github.com/pytorch/text/issues/666 | open | [] | 2019-12-13T05:24:33Z | 2020-04-11T07:55:54Z | null | Hans0124SG |
pytorch/pytorch | 31,098 | How to install pytorch for CUDA 10.2? | Hello everyone. I have installed CUDA 10.2 and i tried to install pytorch on windows.
But I catched error like this:
FAILED: build.ninja
C:\Users\TensorFlow\.conda\envs\torch\Library\bin\cmake.exe -SF:\Git\pytorch -BF:\Git\pytorch\build
ninja: error: rebuilding 'build.ninja': subcommand failed
Traceback (most rece... | https://github.com/pytorch/pytorch/issues/31098 | closed | [] | 2019-12-11T06:56:22Z | 2019-12-11T17:01:39Z | null | tensor2flow |
pytorch/text | 665 | How to load downloaded dataset? | I download sougoNews and try to use it like this:
`train_dataset, test_dataset = datasets.SogouNews(root='data',ngrams=3)`
but it didn't work.still autodownload the datasets. | https://github.com/pytorch/text/issues/665 | closed | [] | 2019-12-11T01:03:17Z | 2022-06-24T00:20:48Z | null | LotusQing |
huggingface/transformers | 2,127 | Where is extract_features.py and run_classifier.py ? | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hello! I couldn't find the extract_features.py and run_classifier.py. Have they been renamed ? | https://github.com/huggingface/transformers/issues/2127 | closed | [] | 2019-12-10T17:14:27Z | 2019-12-13T15:09:01Z | null | JiangYanting |
pytorch/pytorch | 31,041 | How to load PyTorch model using C++ api | ## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)... | https://github.com/pytorch/pytorch/issues/31041 | closed | [] | 2019-12-10T09:57:21Z | 2019-12-10T10:30:15Z | null | henbucuoshanghai |
pytorch/pytorch | 30,962 | How can I add masks to parameters | Hi,
Can I use hook to add a parameter masking function to Conv2d. Specifically, Iโd like to add a binary mask buffer to each Conv2d module, during each training step, I need to update the mask buffer and then use it to mask the weight.
Or, is there any method to add masks and apply the masks to Conv2d in a given ... | https://github.com/pytorch/pytorch/issues/30962 | open | ["module: nn", "triaged"] | 2019-12-09T12:50:11Z | 2019-12-11T07:37:43Z | null | tzm1003306213
pytorch/tutorials | 761 | RuntimeError: CUDA error: out of memory | I'm trying to run the code below:
_if torch.cuda.is_available():
device = torch.device("cuda") # a CUDA device object
y = torch.ones_like(x, device=device) # directly create a tensor on GPU
x = x.to(device) # or just use strings ``.to("cuda")``
z = x + y
print... | https://github.com/pytorch/tutorials/issues/761 | closed | [] | 2019-12-09T10:03:49Z | 2021-07-30T22:15:11Z | 3 | Ala770 |
pytorch/examples | 676 | Reading my own dataset | Hi, I want to read/load my own dataset and build my models by using these datasets. But, I did not understand how can I read/load my own dataset. All examples are using PyTorch's datasets but do not help for me. Can you help me with this problem? | https://github.com/pytorch/examples/issues/676 | closed | [] | 2019-12-08T08:49:16Z | 2019-12-09T14:48:38Z | 2 | gozeloglu |
pytorch/vision | 1,646 | What is the meta.bin file used by the ImageNet dataset? | [Comment from @kanonjz in #1457](https://github.com/pytorch/vision/pull/1457#issuecomment-562807954)
> I downloaded imagenet myself and used `parse_val_archive` to prepare the folders, but got an error below. What is the `meta.bin`? I didn't find it in the imagenet.
>
> `The meta file meta.bin is not present in t... | https://github.com/pytorch/vision/issues/1646 | closed | ["module: datasets"] | 2019-12-07T12:30:20Z | 2019-12-10T13:07:42Z | null | pmeier
pytorch/pytorch | 30,929 | How to set not to build libtorch_cpu.so and libmkl_*.so dependencies? | ``` linux-vdso.so.1 (0x00007fffa4bfc000)
libtorch_cpu.so => /home/xxxxx/workfiles/work/pytorch/torch/lib/./libtorch_cpu.so (0x00007f63d4f6c000)
librt.so.1 => /lib64/librt.so.1 (0x00007f63d4d52000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f63d4b3c000)
        libdl.so.2 => /lib64/l... | https://github.com/pytorch/pytorch/issues/30929 | open | ["module: build", "triaged", "module: mkl"] | 2019-12-07T04:08:13Z | 2020-05-01T18:47:25Z | null | LinGeLin
pytorch/examples | 675 | what do parameters 'ndf' and 'ngf' mean? | Thanks for your code. However, I was wondering if you could tell me what 'ndf' and 'ngf' mean? I do know how these two parameters are used, but I do not know why they are called 'ndf' and 'ngf' , respectively. Looking forward to your reply. | https://github.com/pytorch/examples/issues/675 | closed | [] | 2019-12-06T21:29:40Z | 2022-03-09T21:52:39Z | 1 | jianzhuwang |
pytorch/pytorch | 30,869 | How to specify install path when build libtorch, no use cmake-gui | ## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)... | https://github.com/pytorch/pytorch/issues/30869 | closed | [] | 2019-12-06T12:28:59Z | 2019-12-06T13:39:39Z | null | LinGeLin |
pytorch/pytorch | 30,796 | How to Build pytorch with local protobuf rather than third_party/protobuf? | ## ❓ Questions and Help
I want to build pytorch with my own os built protobuf lib rather than third_part/protobuf, Which prefix to change, Can anyone help me?
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](h... | https://github.com/pytorch/pytorch/issues/30796 | closed | [] | 2019-12-05T06:11:52Z | 2019-12-06T17:31:11Z | null | Raneee |
pytorch/text | 660 | How to prefetch data? | Currently, the bottleneck of my model training is on the data loading part, is there any example about how to prefetch data? Like the `pin_memory` and `num_workers` arguments of `torch.utils.data.DataLoader` | https://github.com/pytorch/text/issues/660 | closed | [] | 2019-12-04T14:04:20Z | 2022-06-24T00:39:44Z | null | speedcell4 |
pytorch/vision | 1,633 | how can I use ROI align in torch version 1.0 | https://github.com/pytorch/vision/issues/1633 | closed | ["question", "module: ops"] | 2019-12-04T13:25:24Z | 2019-12-04T14:51:40Z | null | scut-salmon |
pytorch/pytorch | 30,720 | what is tensor's storage C++ pointer? | Recently I look into PyTorch source codes. tensor's impl object is created after a tensor is created. But I can't know where the tensor's storage is and its pointer.
Could anyone give me some help? | https://github.com/pytorch/pytorch/issues/30720 | closed | [] | 2019-12-04T08:38:09Z | 2019-12-04T16:22:54Z | null | alanzhai219
pytorch/xla | 1,448 | python on XLA for CPU/GPU? | IIUC, with the same HLO, XLA is able to run on GPU and TPU.
I wonder if this project allows running PyTorch on top of XLA for CPU/GPU and future AI chips (as soon as they support XLA)?
Thanks,
Tiezhen | https://github.com/pytorch/xla/issues/1448 | closed | ["question", "stale"] | 2019-12-04T06:51:32Z | 2020-01-26T17:08:48Z | null | wangtz
pytorch/examples | 672 | I faced on the build error of libtorch:mnist.cpp in Ubuntu18.04 | (1)Issue
I faced the build error of one of libtorch examples :mnist.cpp in Ubuntu18.04.
Please tell me the way to solve the build error.

(2)Enviroment
OS:Ubbuntu18.04LTS
libtorch... | https://github.com/pytorch/examples/issues/672 | closed | [] | 2019-12-04T02:13:45Z | 2019-12-04T07:35:11Z | 1 | yoshihingis |
pytorch/vision | 1,630 | GeneralizedRCNNTransform doesn't work with four-channel inputs | When I modify the input channel of FasterRCNN from 3 to 4, GeneralizedRCNNTransform doesn't work.
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs... | https://github.com/pytorch/vision/issues/1630 | closed | ["question", "module: models", "topic: object detection"] | 2019-12-04T00:53:20Z | 2019-12-04T12:58:30Z | null | ZhiangChen
pytorch/xla | 1,447 | How to use a specific commit of pytorch-xla in Colab? | ## ❓ Questions and Help
Hi,
I'm eager to use a specific commit (or the latest) in Colab. My current setup is this cell:
```bash
XRT_VERSION = "nightly"
DIST_BUCKET = "gs://tpu-pytorch/wheels"
TORCH_WHEEL = "torch-{}-cp36-cp36m-linux_x86_64.whl".format(XRT_VERSION)
TORCH_XLA_WHEEL = "torch_xla-{}-cp36-cp36... | https://github.com/pytorch/xla/issues/1447 | closed | ["question"] | 2019-12-03T20:04:55Z | 2020-02-12T17:36:30Z | null | hrbigelow
pytorch/vision | 1,629 | Reference detection script image sizes help | Hi @fmassa ,
Somehow the reference detection script does not handle big images of size > 3000.
Always throw me cuda out of memory error.
Any suggestions on that ? | https://github.com/pytorch/vision/issues/1629 | closed | ["question", "module: models", "module: reference scripts", "topic: object detection"] | 2019-12-03T12:11:21Z | 2019-12-03T12:30:55Z | null | gaussiangit
pytorch/pytorch | 30,655 | How to convert Tensor back to BitMap or any image format in Android? | I have converted a PyTorch model for Android mobile. The purpose of the model is to achieve Super Resolution. The problem I am facing is that the model gives output in the form of Tensor. Whereas I want to convert that tensor into some imaging format but I haven't been able to find a method to achieve this task.
I ... | https://github.com/pytorch/pytorch/issues/30655 | closed | ["module: android", "oncall: mobile"] | 2019-12-03T09:32:11Z | 2023-09-29T16:39:11Z | null | nauyan
pytorch/pytorch | 30,654 | What is the different between nn.Functional.conv2d and nn.Conv2d?It seems a bit redundant? | ## ❓ Questions and Help
Hi,I have just started learning pytorch recently. In the official website tutorials, I often see nn.Conv2d and nn.Functional.conv2d. I don't understand the difference between the two writing methods. It seems that one of these two is enough.
| https://github.com/pytorch/pytorch/issues/30654 | closed | [] | 2019-12-03T08:27:21Z | 2019-12-04T01:09:44Z | null | wulongjian |
pytorch/xla | 1,442 | Out of memory error? | Is the following an out-of-memory error from the TPU?:

The text just keeps scrolling with similar messages.
It's surprising I get this error, because all I wanted to do is have a batch of 512 for 224... | https://github.com/pytorch/xla/issues/1442 | closed | ["question"] | 2019-12-03T04:26:23Z | 2019-12-10T18:57:22Z | null | tmabraham
pytorch/examples | 671 | nn.Transformer tutorial uses nn.TransformerEncoder only | hello,
when I search for nn.Transformer use example, I find example which uses nn.TransformerEncoder, is there example use of nn.Transformer? | https://github.com/pytorch/examples/issues/671 | closed | ["question"] | 2019-12-02T12:49:33Z | 2022-03-10T04:46:18Z | null | vainaixr
huggingface/transformers | 2,013 | What is the real parameters to weight the triple loss (L_{ce}, L_{mlm}, L_{cos}) in DistilBert? | Hello! Thanks for your great work DistilBert. I want to ask what is the real parameters "alpha" you used in DistilBert to weight the triple loss (L_{ce}, L_{mlm}, L_{cos})?
You did not mention this detail in your NIPS workshop paper (http://arxiv.org/abs/1910.01108). In the [README](https://github.com/huggingface/tr... | https://github.com/huggingface/transformers/issues/2013 | closed | [] | 2019-12-01T16:49:05Z | 2019-12-02T15:37:37Z | null | voidism |