| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/vision | 1,625 | Why does the rpn use the L1_Loss? | https://github.com/pytorch/vision/blob/master/torchvision/models/detection/rpn.py#L426
The code in rpn.py at line 426 is as follows:
box_loss = F.l1_loss(
pred_bbox_deltas[sampled_pos_inds],
regression_targets[sampled_pos_inds],
reduction="sum",
) / (sampled_inds.... | https://github.com/pytorch/vision/issues/1625 | closed | [
"question",
"module: models",
"topic: object detection"
] | 2019-12-01T12:54:15Z | 2019-12-02T12:14:14Z | null | TeeyoHuang |
pytorch/vision | 1,618 | Is Faster R-CNN scriptable? I tried, but failed | https://github.com/pytorch/vision/issues/1618 | closed | [
"question",
"module: models",
"topic: object detection"
] | 2019-11-27T06:32:41Z | 2019-11-30T15:24:03Z | null | dao-kun | |
pytorch/vision | 1,617 | Question about converting custom dataset to coco api | https://github.com/pytorch/vision/blob/a44d55d87ba3628ac79292fdcaead7fb98fc130b/references/detection/coco_utils.py#L163
If the box is [3,10,6,20] (xyxy format), the converted box should be [3,10,4,11]. I think 1 should be added in this code, because there are 4 pixels from 3 to 6 and 11 pixels from 10 to 20. It actua... | https://github.com/pytorch/vision/issues/1617 | closed | [
"question",
"module: reference scripts"
] | 2019-11-27T03:22:53Z | 2019-12-02T12:26:12Z | null | kangkang59812 |
pytorch/tutorials | 735 | Dataloader with SAMPLER tutorial missing. | Original discussion thread: https://discuss.pytorch.org/t/feedback-on-pytorch-for-kaggle-competitions/2252
Previously closed issue: https://github.com/pytorch/tutorials/issues/78
Related PR Merged: https://github.com/pytorch/tutorials/pull/96
Again posting a new issue because the previous issue has been closed and... | https://github.com/pytorch/tutorials/issues/735 | closed | [] | 2019-11-27T00:28:36Z | 2021-07-30T22:19:49Z | 3 | crazysal |
pytorch/text | 652 | How to add special tokens in torchtext.data.Field()? | Hello,
I defined my text Field as below:
```python
TEXT_openbookQA = Field(tokenize = "spacy",
init_token = '<sos>',
eos_token = '<eos>',
unk_token = '<unk>',
pad_token = '<pad>',
tokenizer_language = 'en',
lower = True)
```
However,... | https://github.com/pytorch/text/issues/652 | closed | [] | 2019-11-26T12:50:00Z | 2019-11-26T13:40:24Z | null | h56cho |
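For readers unfamiliar with how special tokens end up in a vocabulary, here is a plain-Python sketch, independent of torchtext, of the common convention of reserving special tokens at the front of the index space (`build_vocab` is a hypothetical helper, not a torchtext API):

```python
# Sketch: special tokens get the first indices; corpus tokens follow.
def build_vocab(tokens, specials=("<unk>", "<pad>", "<sos>", "<eos>")):
    stoi = {tok: i for i, tok in enumerate(specials)}
    for tok in tokens:
        if tok not in stoi:
            stoi[tok] = len(stoi)
    return stoi

vocab = build_vocab(["the", "cat", "the"])
print(vocab["<unk>"], vocab["the"])  # → 0 4
```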
pytorch/pytorch | 30,408 | Where is the code for the synchronization of gradients during the backward pass in DDP? | ## ❓ Questions and Help
Hi, I know the synchronization of gradients happens during the backward pass for DDP, but I didn't find the corresponding code in the backward pass. Where can I find it?
| https://github.com/pytorch/pytorch/issues/30408 | closed | [] | 2019-11-25T17:15:45Z | 2019-11-26T00:49:21Z | null | meiluzhu |
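Conceptually, the synchronization asked about above is an allreduce that leaves every replica with the element-wise mean of all replicas' gradients; the real implementation lives in PyTorch's C++ reducer rather than in Python autograd code, but the arithmetic can be sketched in plain Python (`average_gradients` is a hypothetical name):

```python
# Conceptual sketch of what DDP's gradient allreduce achieves: each position
# in the gradient vector ends up holding the mean across all processes.
def average_gradients(per_process_grads):
    n = len(per_process_grads)
    return [sum(g) / n for g in zip(*per_process_grads)]

print(average_gradients([[1.0, 2.0], [3.0, 4.0]]))  # → [2.0, 3.0]
```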
huggingface/neuralcoref | 228 | Integration of different word embeddings for prediction | Hi,
I am using SciSpacy with neuralcoref (by adding `ENTITY` to `ACCEPTED_ENTS`) and would also like to use the SciSpacy word vectors if possible.
I already have switched the `self.static_vectors` and `self.tuned_vectors` to point to the `self.vocab.vectors` in the `NeuralCoref` constructor. I also changed `SIZE... | https://github.com/huggingface/neuralcoref/issues/228 | closed | [
"question",
"wontfix",
"usage"
] | 2019-11-25T17:01:15Z | 2022-01-09T04:06:41Z | null | masonedmison |
pytorch/vision | 1,610 | code for visualization in the object detection tutorial | At the end of the [object detection tutorial ](https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html#torchvision-object-detection-finetuning-tutorial) it visualizes the masks.
can you please provide the code for that task? or guide how to do it? | https://github.com/pytorch/vision/issues/1610 | closed | [
"question",
"module: models",
"topic: object detection"
] | 2019-11-25T15:41:21Z | 2020-07-07T21:21:26Z | null | isalirezag |
pytorch/vision | 1,608 | What's the input format of the fasterrcnn_resnet50_fpn? I mean RGB or BGR. | ### pytorch>=1.1
I notice that both the RGB and BGR input of `[n,c,h,w]` can get a good result (BGR is slightly higher).
```python
model = fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()
## RGB
img1 = Image.open('image1.jpg')
## BGR
img2 = np.array(img1)[:, :, [2, 1, 0]].copy()
x1= [transforms.ToTensor(... | https://github.com/pytorch/vision/issues/1608 | closed | [
"question",
"module: models",
"topic: object detection"
] | 2019-11-25T12:20:25Z | 2019-11-25T12:52:15Z | null | kangkang59812 |
huggingface/neuralcoref | 227 | What is the performance on CoNLL-2012 test set? | Hi,
Thank you for your excellent work. I am looking for an off-the-shelf tool to do some coref text processing. I am wondering about the model performance of this repo on the CoNLL-2012, such as the Avg. F1 score.
Would you please post it here or in the readme file? Thanks a lot. | https://github.com/huggingface/neuralcoref/issues/227 | closed | [
"question",
"perf / accuracy"
] | 2019-11-25T09:26:30Z | 2019-12-06T21:57:04Z | null | magic282 |
pytorch/text | 649 | How to perform common sense reasoning task with GPT-2? | Hello,
I am new to NLP so I have lots of questions.
I am interested in carrying out common sense reasoning task with GPT-2, for example, with Winograd Schema Challenge dataset.
Q1. How should I tokenize the Winograd Schema Challenge dataset to process it with GPT-2 (with the double heads model, for instance)? Ca... | https://github.com/pytorch/text/issues/649 | closed | [] | 2019-11-22T12:52:44Z | 2019-11-23T14:38:47Z | null | h56cho |
pytorch/xla | 1,399 | Why does printing progress every step slow things down? | ## ❓ Questions and Help
@dlibenzi You mentioned the ParallelLoader background sender and its ability somehow to overlap communication between TPU and CPU without interrupting the flow of TPU computations. But, you also mentioned that printing the values of summary statistics (which ultimately requires calling `los... | https://github.com/pytorch/xla/issues/1399 | closed | [
"question"
] | 2019-11-21T22:29:43Z | 2019-11-22T17:29:28Z | null | hrbigelow |
pytorch/xla | 1,398 | Should CPU constants be ported to tensors to prevent IR recompilation? | ## ❓ Questions and Help
I have various constructs in my code like:
```python
rec_loss = - log_pred_target.mean()
ze_norm = (self.bottleneck.ze ** 2).sum(dim=1).sqrt()
norm_loss = self.norm_gamma * torch.abs(ze_norm - 1.0).mean()
total_loss = rec_loss + norm_loss
```
Would moving the `2` and `1.0` constant... | https://github.com/pytorch/xla/issues/1398 | closed | [
"good first issue",
"question",
"stale"
] | 2019-11-21T22:18:27Z | 2019-12-28T23:23:21Z | null | hrbigelow |
pytorch/vision | 1,599 | ResNet identity (line 55) mustn't be mutable | The identity variable in line 55 is mutable:
```python
def forward(self, x):
    identity = x
```
It must be immutable, as follows:
```python
def forward(self, x):
    identity = 1 * x
```
| https://github.com/pytorch/vision/issues/1599 | closed | [
"question",
"module: models"
] | 2019-11-20T12:39:36Z | 2019-11-21T13:53:04Z | null | Abolfazl-Mehranian |
pytorch/vision | 1,598 | How to feed negative samples during Faster R-CNN training | Hi all,
I have lots of non-annotated images in my training set, where there is no object of interest but there are couple other objects that should be interpreted as part of background. Is there any way I can provide background (negative) samples explicitly in my dataloder?
I tried to set a single fake bounding box ... | https://github.com/pytorch/vision/issues/1598 | closed | [
"enhancement",
"help wanted",
"module: models",
"topic: object detection"
] | 2019-11-20T12:15:54Z | 2023-03-29T16:37:30Z | null | kkirtac |
huggingface/transformers | 1,866 | BertForTokenClassification for NER: what is the conclusion of this output? | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hi,
I'm trying to perform NER using BertForTokenClassification. I saw this sample code on the transformers GitHub page:
from transformers import BertForTokenClassification, BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
... | https://github.com/huggingface/transformers/issues/1866 | closed | [
"wontfix"
] | 2019-11-19T09:23:23Z | 2020-02-04T21:23:21Z | null | AjitAntony |
pytorch/xla | 1,385 | How does the original PyTorch call XLA's ops? | ## ❓ Questions and Help
Recently, I have been looking into the pytorch/xla code, but I am confused about a few things.
- How does the original PyTorch call XLA's ops?
Is there an internal pytorch-xla mechanism?
Any reply will be much appreciated. THX | https://github.com/pytorch/xla/issues/1385 | closed | [
"question",
"stale"
] | 2019-11-19T07:45:18Z | 2019-12-28T16:29:15Z | null | alanzhai219 |
pytorch/examples | 666 | Distributed training resnet50 using 4 nodes 32 TeslaV100 | I checked a lot of literature, but I didn't find the results. The questions are as follows:
How many hours can it converge?(Distributed training resnet50 using 4 nodes 32 TeslaV100 cards)
Do you have internal test results that can be displayed to better understand the performance of your distributed training. | https://github.com/pytorch/examples/issues/666 | open | [
"distributed"
] | 2019-11-19T06:01:31Z | 2022-03-09T20:52:45Z | 0 | gentelyang |
pytorch/FBGEMM | 199 | [Question] 8bit integers and negative numbers | Hey,
I have been reading the code for sparse 8bit gemm: https://github.com/pytorch/FBGEMM/blob/master/test/SpMMI8Test.cc and I have a few questions.
I noticed that `getRandomSparseVector` will only generate positive numbers. Is this because you rely on the `maddubs` instruction? Does it mean that the A matrix can... | https://github.com/pytorch/FBGEMM/issues/199 | closed | [
"question"
] | 2019-11-18T17:09:26Z | 2019-11-20T18:08:28Z | null | XapaJIaMnu |
pytorch/vision | 1,592 | unable to load inception model. Or any other architect other than alexnet | import torchvision.models.inception
# works fine
arch = torchMd.alexnet(pretrained=True)
# gives error, also tried vgg, densenet
arch = torchMd.inception(pretrained=True)
AttributeError Traceback (most recent call last)
<ipython-input-43-3882461a2f37> in <module>
----> 1 print(to... | https://github.com/pytorch/vision/issues/1592 | closed | [
"question",
"module: models"
] | 2019-11-18T07:30:01Z | 2019-11-19T10:44:02Z | null | richesh09 |
pytorch/xla | 1,379 | Successive frames growing, but why? | ## ❓ Questions and Help
In the attached report below, I see successive frames growing by ~30 lines at each. The relevant code is below. The approach I used was to load all of the training data (about 300 mb) into memory into two tensors (`data_source.snd_data` and `data_source.mel_data`) and then at each training ... | https://github.com/pytorch/xla/issues/1379 | closed | [
"question",
"stale"
] | 2019-11-17T18:04:30Z | 2019-12-29T18:57:28Z | null | hrbigelow |
pytorch/vision | 1,591 | Training dataset for pretrained resnet18 | Does anybody know what the training dataset of the pretrained resnet18 is?
I cannot find official information on the training datasets used for the pretrained models in torchvision.models. | https://github.com/pytorch/vision/issues/1591 | closed | [
"question",
"module: reference scripts",
"topic: classification"
] | 2019-11-17T09:01:12Z | 2019-11-18T14:37:55Z | null | pantheon5100 |
pytorch/vision | 1,588 | pretrained model | Does anybody know how to train a pretrained model (e.g. MobileNetV2 in PySOT)? | https://github.com/pytorch/vision/issues/1588 | closed | [
"question",
"module: reference scripts",
"topic: classification"
] | 2019-11-16T08:30:07Z | 2019-11-26T01:58:11Z | null | zhu2014yi |
pytorch/text | 643 | How to skip last batch that has a different batch size? | ## ❓ Questions and Help
**Description**
<!-- Please send questions or ask for help here. -->
Sorry if this is a newbie question.
In `torch.utils.data.DataLoader` we can drop the last batch by specifying `drop_last=True`.
Do we have something equivalent for our `Iterator`? Currently I continue the training l... | https://github.com/pytorch/text/issues/643 | closed | [] | 2019-11-16T04:08:41Z | 2019-11-18T15:54:07Z | null | Hans0124SG |
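A plain-Python sketch of the `drop_last` semantics the question refers to (the `batches` helper is a hypothetical stand-in, not a DataLoader replacement):

```python
# With drop_last=True, a trailing batch smaller than batch_size is discarded.
def batches(items, batch_size, drop_last=False):
    out = [items[i:i + batch_size] for i in range(0, len(items), batch_size)]
    if drop_last and out and len(out[-1]) < batch_size:
        out.pop()
    return out

print(len(batches(list(range(10)), 3, drop_last=True)))   # → 3
print(len(batches(list(range(10)), 3, drop_last=False)))  # → 4
```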
pytorch/tutorials | 725 | transfer_learning_tutorial get a warning under pytorch1.3 | >`/usr/local/lib/python3.6/dist-packages/torch/optim/lr_scheduler.py:100: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch ski... | https://github.com/pytorch/tutorials/issues/725 | closed | [] | 2019-11-15T08:21:45Z | 2019-11-15T08:32:13Z | 1 | neo0801 |
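The fix the warning above asks for is purely an ordering change: call `optimizer.step()` before `lr_scheduler.step()` on PyTorch >= 1.1. A sketch with dummy stand-in classes (not real PyTorch objects) that only record call order:

```python
# Dummy classes recording call order; the loop shows the recommended ordering.
calls = []

class DummyOptimizer:
    def step(self):
        calls.append("optimizer.step")

class DummyScheduler:
    def step(self):
        calls.append("scheduler.step")

optimizer, scheduler = DummyOptimizer(), DummyScheduler()
for _epoch in range(2):
    optimizer.step()   # update weights first...
    scheduler.step()   # ...then advance the learning-rate schedule

print(calls)  # → ['optimizer.step', 'scheduler.step', 'optimizer.step', 'scheduler.step']
```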
pytorch/xla | 1,368 | How to tell if a graph recompilation is happening? | ## 📚 Documentation
Thanks so much for the great library! I'm running my Pytorch model on Google Colab with TPU. Following the tips in TROUBLESHOOTING.md, I see the following in my XLA_METRICS_FILE:
```
Metric: CompileTime
TotalSamples: 12
Accumulator: 44s280ms699.409us
ValueRate: 952ms609.347us / secon... | https://github.com/pytorch/xla/issues/1368 | closed | [
"question"
] | 2019-11-15T03:13:04Z | 2019-12-03T02:37:56Z | null | hrbigelow |
pytorch/vision | 1,578 | PIL image converted to tensor, then converted back to PIL image, is not the same as the original | I convert a PIL image to a tensor and then convert it back to a PIL image. Saving the result and comparing it to the original PIL image I loaded, they are not the same.
Why is that? | https://github.com/pytorch/vision/issues/1578 | closed | [
"question",
"module: transforms"
] | 2019-11-14T21:37:00Z | 2019-11-26T12:43:44Z | null | Yumin-Sun-00 |
huggingface/transformers | 1,834 | Where is Model2Model PreTrainedEncoderDecoder in run_summerization_finetune | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
| https://github.com/huggingface/transformers/issues/1834 | closed | [
"wontfix"
] | 2019-11-14T18:09:24Z | 2020-03-09T03:39:51Z | null | yeliu918 |
pytorch/pytorch | 29,802 | How to release gpu memory of intermediate result tensor | In the example below, after calling torch.matmul, the gpu memory usage increases by 181796864 bytes, which is almost the sum of the sizes of c and b.transpose(2,3). So I guess the unreferenced intermediate result b.transpose(2,3) is stored in gpu memory. How could I release the gpu memory allocated to this intermedi... | https://github.com/pytorch/pytorch/issues/29802 | closed | [
"module: cuda",
"module: memory usage",
"triaged"
] | 2019-11-14T11:36:21Z | 2019-11-15T15:49:09Z | null | akikaaa |
pytorch/android-demo-app | 31 | How to add built AAR libraries to a project | Hi,
I've faced an issue. On the PyTorch website there's an intro on how to build and deploy pytorch-mobile from source (https://pytorch.org/mobile/android/#building-pytorch-android-from-source), but the part with Gradle won't work for me.
I've successfully built the AAR files, then edited `HelloWorldApp/app/gradle.build` as ... | https://github.com/pytorch/android-demo-app/issues/31 | closed | [] | 2019-11-14T10:41:10Z | 2022-08-13T17:06:38Z | null | zetyquickly |
pytorch/examples | 663 | how do we pass multiple indices as input to generate multiple outputs in word_language model | The current codebase of [`word_language_model/generate.py`](https://github.com/pytorch/examples/blob/master/word_language_model/generate.py) uses a single (randomly sampled) index as `input` and generates a text based on this.
Now, I'd like to extend this a bit and would like to pass a set of indices (i.e. > 1) as `... | https://github.com/pytorch/examples/issues/663 | open | [
"nlp"
] | 2019-11-14T03:55:33Z | 2022-03-09T23:42:32Z | null | kmario23 |
pytorch/pytorch | 29,745 | How to add PyTorch to requirements.txt | I'm trying to include PyTorch in a requirements.txt file to be installed in a Docker container, but can't seem to get it to work. I've tried adding the following with no luck:
```
torch==1.3.1
> ERROR: Could not find a version that satisfies the requirement torch==1.3.1 (from -r /requirements/./base.txt (line 28))... | https://github.com/pytorch/pytorch/issues/29745 | closed | [] | 2019-11-13T20:12:58Z | 2021-01-19T13:35:28Z | null | econti |
pytorch/xla | 1,348 | How to downgrade torch version? | Hey guys, I'm trying to train my image classification model on multi-cores. I'm using Pytorch-nightly version but the problem is that torch version is 1.4.0a0+be75795, which isn't compatible with my Torchvision version(0.3.0). It gives the following error-
`AttributeError: module 'torch' has no attribute 'gels'`
... | https://github.com/pytorch/xla/issues/1348 | closed | [
"bug"
] | 2019-11-13T06:17:52Z | 2019-11-14T00:21:42Z | null | ajay960singh |
pytorch/examples | 660 | how to run resnet on Single node, multiple GPUs | can i use "CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python main.py -a resnet50 ......." | https://github.com/pytorch/examples/issues/660 | closed | [] | 2019-11-12T03:42:46Z | 2019-11-12T03:43:53Z | null | gentelyang |
pytorch/examples | 659 | Do we need average_gradients when we do multiprocess distributed training? | In the tutorial, it is said that we need to write `average_gradients` to get the average gradient across different processes before we can do `optimizer.step()`. However, in the ImageNet example, `average_gradients` is not there. Does this mean we do not need this function in new versions of PyTorch for multiprocess distribu... | https://github.com/pytorch/examples/issues/659 | closed | [] | 2019-11-12T00:11:38Z | 2019-11-12T03:43:36Z | 1 | dzk9528 |
pytorch/pytorch | 29,521 | How to perform multi-task regression with pytorch? | ```
import torch
from torch import nn
import torch.nn.functional as F
class mynet(nn.Module):
    def __init__(self):
        super(mynet, self).__init__()
        self.lin1 = nn.Linear(5, 10)
        self.lin2 = nn.Linear(10, 3)
        self.lin3 = nn.Linear(10, 4)

    def forward(self, x):
        x = ...
```
| https://github.com/pytorch/pytorch/issues/29521 | closed | [] | 2019-11-10T11:35:47Z | 2019-11-11T03:50:40Z | null | thu-wangz17 |
pytorch/pytorch | 29,517 | Where is the source code for mathematical operations like specifically torch.mean()? | ## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)... | https://github.com/pytorch/pytorch/issues/29517 | closed | [] | 2019-11-10T07:08:09Z | 2019-11-10T08:56:45Z | null | C-Weed28 |
pytorch/pytorch | 29,441 | Error when exporting to ONNX: Auto nesting doesn't know how to process an input object of type maskrcnn_benchmark.structures.image_list.ImageList. Accepted types: Tensors, or lists/tuples of them | ## ❓ Questions and Help
pytorch:1.0.0
cuda:10.0
torchvision:0.2.1
ubuntu:16.04
I cloned the [facebook/maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark) repo and want to export the model to ONNX:
```
x = torch.ones(1, 3, 224, 224, requires_grad=True)
torch.onnx.export(model, x, "faster.onn... | https://github.com/pytorch/pytorch/issues/29441 | closed | [
"module: onnx",
"triaged"
] | 2019-11-08T06:14:52Z | 2021-12-23T01:43:59Z | null | zsk423200 |
pytorch/pytorch | 29,434 | How to know which whl version can be selected? | @svenstaro @eklitzke @jfsantos I want to use pip to install torch with CUDA 10. I know I can use a command like this:
pip3 install https://download.pytorch.org/whl/cu100/torch-1.0.1.post2-cp36-cp36m-linux_x86_64.whl
But when my Python version is cp37, it reports an error:
ERROR: torch-1.1.0-cp37-cp37m-linux_x86_64.whl is no... | https://github.com/pytorch/pytorch/issues/29434 | closed | [
"module: binaries",
"triaged"
] | 2019-11-08T02:52:04Z | 2019-11-09T05:54:40Z | null | moyans |
pytorch/pytorch | 29,422 | How to inference with nn.TransformerDecoder layer | I am using customized Transformer with nn.TransformerDecoder layer . It seem like nn.TransformerDecoder layer doesn't support inference process(generation/testing), like sending token id one by one with fixed memory generated from nn.TransformerEncoder layer. I am wondering is there a tutorial that I can refer to as I ... | https://github.com/pytorch/pytorch/issues/29422 | closed | [] | 2019-11-07T23:41:50Z | 2019-11-08T21:47:20Z | null | xdwang0726 |
pytorch/vision | 1,557 | Can KeypointRCNN also detect objects that do not need to be predicted with keypoints? | As far as I understand keypoints would be computed for all the box classes (apart from background) in Keypoint-RCNN. I need to do object detection and keypoint prediction at the same time, however keypoints should only be predicted for one class. Does current version support this?
If not, I would need to modify som... | https://github.com/pytorch/vision/issues/1557 | closed | [
"question",
"module: models",
"topic: object detection"
] | 2019-11-06T00:26:08Z | 2019-11-06T09:56:00Z | null | anuar12 |
pytorch/examples | 656 | About DCGAN datasets | May I know what is the dataset URL for fake input in DCGAN example? | https://github.com/pytorch/examples/issues/656 | closed | [] | 2019-11-05T18:51:11Z | 2022-03-09T23:28:56Z | 1 | mahmoodn |
pytorch/pytorch | 29,190 | How to run two different JIT models on two GPUs respectively in one script? | I have an encoder-decoder model. After converting the encoder and decoder into JIT models, I want to load the encoder on GPU:0, where it outputs **Keys** and **Values**. Then I move the **Keys** and **Values** to GPU:1, since the decoder is loaded on GPU:1.
encoder = torch.jit.load(feat_model).cuda(0)
gr... | https://github.com/pytorch/pytorch/issues/29190 | open | [
"oncall: jit",
"triaged"
] | 2019-11-05T10:09:34Z | 2020-03-19T06:14:43Z | null | lzj9072 |
pytorch/vision | 1,553 | Trained Mask RCNN without ground truth bounding boxes | Hi all,
Is it acceptable to train Mask R-CNN without bounding boxes? I want to generate only negative samples after the RPN model in order to lower false positive cases. | https://github.com/pytorch/vision/issues/1553 | closed | [
"question",
"module: models",
"topic: object detection"
] | 2019-11-05T03:08:39Z | 2019-11-05T10:48:14Z | null | ghost |
pytorch/vision | 1,552 | Best practice to run Mask R-CNN in parallel | What is the best practice to run Mask R-CNN in parallel?
@fmassa wrote in #1255
> The current code assumes that you are using 1 GPU per process, with DistributedDataParallel.
Is this information up-to-date? | https://github.com/pytorch/vision/issues/1552 | closed | [
"question",
"module: reference scripts",
"topic: object detection"
] | 2019-11-04T15:56:36Z | 2019-11-05T10:42:50Z | null | maxfrei750 |
pytorch/QNNPACK | 68 | How to build dependencies separately | I'm trying to add a package for QNNPACK to the [Spack package manager](https://spack.io). I see that QNNPACK downloads its own dependencies, and that this can be avoided by setting `*_SOURCE_DIR` via cmake. Is there a way to point to an existing external installation instead of a source directory so that Spack doesn't ... | https://github.com/pytorch/QNNPACK/issues/68 | open | [] | 2019-11-01T22:08:50Z | 2019-11-01T22:08:50Z | null | adamjstewart |
pytorch/examples | 653 | What is the meaning of transforms.Normalize((0.1307,), (0.3081,)) in mnist | In mnist/main.py, when reading the dataset using DataLoader, there is a line:
`transforms.Normalize((0.1307,), (0.3081,))`
can any one explain its meaning? I know that it tries to normalize the data, but why there are two parameters and where do those 0.1307 and 0.3081 come from? | https://github.com/pytorch/examples/issues/653 | closed | [] | 2019-11-01T15:41:13Z | 2024-07-30T12:09:26Z | null | copyrightly |
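The two parameters are per-channel mean and standard deviation tuples (MNIST has a single channel, hence the one-element tuples), and 0.1307 and 0.3081 are the mean and std of the MNIST training pixels after `ToTensor` scales them to [0, 1]. The transform itself is just this arithmetic (`normalize` below is an illustrative helper, not the torchvision function):

```python
# Per-channel standardization: output = (input - mean) / std.
def normalize(pixel, mean=0.1307, std=0.3081):
    return (pixel - mean) / std

print(normalize(0.1307))  # → 0.0 (a pixel at the dataset mean maps to 0)
```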
pytorch/examples | 652 | why not divide by batch size ? | https://github.com/pytorch/examples/blob/4e00723456160d910092aae567a0b8daf66c49ec/vae/main.py#L82
I think the final loss should be **(BCE+KLD) / batch_size**; is that right? | https://github.com/pytorch/examples/issues/652 | closed | [] | 2019-11-01T08:56:53Z | 2022-03-09T23:26:30Z | 2 | Johnson-yue |
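The relation behind this question is simple arithmetic: a summed loss divided by the batch size equals the per-sample mean, so `sum`-reduction followed by `/ batch_size` matches a per-sample `mean`:

```python
# Summed loss divided by batch size equals the per-sample mean loss.
per_sample = [2.0, 4.0, 6.0]   # toy per-sample losses
batch_size = len(per_sample)
summed_then_divided = sum(per_sample) / batch_size
mean_directly = sum(per_sample) / len(per_sample)
print(summed_then_divided, mean_directly)  # → 4.0 4.0
```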
pytorch/xla | 1,280 | machine translation validation fails with multi-process | ## ❓ Questions and Help
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
Steps to reproduce the behavior:
1. create an instance using the latest torch-xla
```bash
export PROJECT_NAME=xxx
gcloud config set project ${PROJECT_NAME}
gcloud compute --project=${PROJECT_NAME} instan... | https://github.com/pytorch/xla/issues/1280 | closed | [
"question"
] | 2019-10-31T20:24:20Z | 2021-05-22T04:59:05Z | null | sIncerass |
pytorch/vision | 1,538 | Problem training/finetuning segmentation (fcn_resnet101) on VOC data | Hi,
Thanks for a great API.
I am trying to train/finetune the fcn_resnet101 model pretrained on the COCO dataset, but after one epoch it seems way worse on the VOC data than before.
If I test the already-trained fcn_resnet101 on the VOC data, I get mean IoU: 73.3.
Then I train the fcn_resnet101 on... | https://github.com/pytorch/vision/issues/1538 | closed | [
"question",
"module: reference scripts",
"topic: semantic segmentation"
] | 2019-10-30T15:10:32Z | 2019-11-01T08:37:40Z | null | Denlar2 |
pytorch/examples | 650 | The fast-neural-style project takes too long to run; how can this be solved? | 1. 32346.619 ms
2. 12375.127ms | https://github.com/pytorch/examples/issues/650 | closed | [] | 2019-10-30T10:56:34Z | 2022-03-09T23:24:07Z | null | tcxia |
pytorch/xla | 1,262 | How to share weights memory while running big models | ## ❓ Questions and Help
Hello, I use the pytorch-xla multiprocessing approach to train my GPT-2 model from `huggingface-transformers`. When training from pretrained weights, however, the model is loaded multiple times, which increases the need for host memory. While for GPT2-small it's not a problem, GPT2-large can fill up... | https://github.com/pytorch/xla/issues/1262 | closed | [
"question"
] | 2019-10-30T09:59:28Z | 2020-02-27T18:44:54Z | null | Glorf |
pytorch/pytorch | 28,868 | How to build caffe2 with ONNX opset version greater than 9? | ## ❓ Questions and Help
Hello,
I'm currently working with the freshly merged feature pytorch/vision#1401 and wasn't able to find a way to make Caffe2 work with ONNX operator set 10.
Is there a way to build Caffe2 from source with this opset?
| https://github.com/pytorch/pytorch/issues/28868 | closed | [] | 2019-10-30T09:51:52Z | 2019-10-31T00:50:02Z | null | zetyquickly |
pytorch/vision | 1,534 | The output of features is 512*7*7; why do we still need AdaptiveAvgPool2d here to make the output dimension 7*7? | https://github.com/pytorch/vision/blob/13b35ffaa5167f3713ea7a53c43395d90b3a7cbc/torchvision/models/vgg.py#L44 | https://github.com/pytorch/vision/issues/1534 | closed | [
"question",
"module: models",
"topic: classification"
] | 2019-10-30T02:44:27Z | 2019-10-30T10:04:25Z | null | shenlinyao |
pytorch/xla | 1,260 | Is tensorboard visualization of computation graphs supported? | Hi. I would like to know is it possible to dump a tensorboard visualization of the structure of the computation graph and the TPU compatibility graph for debugging purposes.
[reference](https://cloud.google.com/tpu/docs/cloud-tpu-tools#profile_tab) This can be done in TF by setting the "model_dir" attribute of tf.esti... | https://github.com/pytorch/xla/issues/1260 | closed | [
"question",
"stale"
] | 2019-10-30T02:18:09Z | 2019-12-13T07:44:20Z | null | 20171130 |
pytorch/android-demo-app | 24 | I use Java to load my trained model, but the program gets stuck at the module.forward() step and never returns. What should I do? | https://github.com/pytorch/android-demo-app/issues/24 | closed | [] | 2019-10-28T10:23:04Z | 2019-11-20T23:35:59Z | null | niushaoda | |
pytorch/pytorch | 28,778 | Android quantized model (MobileNetV2): why is the first forward pass very slow while the second is faster, and how to fix it? | ## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)... | https://github.com/pytorch/pytorch/issues/28778 | closed | [
"oncall: quantization",
"triaged"
] | 2019-10-28T04:42:36Z | 2019-10-29T05:53:17Z | null | hexiangquan |
pytorch/pytorch | 28,776 | How to use torch.quantization.get_observer_dict(mod, target_dict, prefix='') | ## ❓ How to use torch.quantization.get_observer_dict(mod, target_dict, prefix='') to get the observer dict
Can you provide an example for this usage? Thanks a lot!
cc @jerryzh168 @jianyuh @dzhulgakov @raghuramank100 @jamesr66a | https://github.com/pytorch/pytorch/issues/28776 | closed | [
"oncall: quantization",
"triaged"
] | 2019-10-28T04:13:45Z | 2019-10-29T01:40:28Z | null | vippeterhou |
pytorch/pytorch | 28,771 | Why is the output dtype quint8 in static quantization? What does static quantization do under the hood? | Here is an example of static quantization. My Python is version 3.7 and torch is 1.3:
```python
import torch
import torch.nn as nn
m = nn.quantized.Linear(20, 30)
input = torch.randn(128, 20)
input = torch.quantize_per_tensor(input, 1.0, 0, torch.quint8)
output = m(input)
print(output.dtype)
```
I feel confused why the data-typ... | https://github.com/pytorch/pytorch/issues/28771 | closed | [
"oncall: quantization",
"triaged"
] | 2019-10-28T02:11:47Z | 2020-04-15T01:02:31Z | null | litaozijin |
pytorch/examples | 648 | C++ MNIST without CUDA | Hi
Following instructions for MNIST in C++ I get this after make:
```
-- The C compiler identification is GNU 4.8.5
-- The CXX compiler identification is GNU 4.8.5
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting... | https://github.com/pytorch/examples/issues/648 | open | [
"c++"
] | 2019-10-27T22:44:15Z | 2022-03-09T20:49:35Z | 0 | maziar840 |
pytorch/text | 629 | How to use custom-built Torchtext vocabulary with the HuggingFace TransfoXLLMHeadModel? | Hello,
I am trying to use my custom-built vocabulary, which I defined using Torchtext functions, with the HuggingFace TransfoXLLMHeadModel, and I am having some trouble with it.
I defined my text field as below:
```python
# Import packages
import torch
import torch.nn as nn
import torch.nn.functional as F
from ... | https://github.com/pytorch/text/issues/629 | closed | [] | 2019-10-27T09:02:13Z | 2019-11-01T15:21:23Z | null | h56cho |
pytorch/android-demo-app | 23 | Does a ".pth" model need to be converted to ".pt", and how do I convert it? | https://github.com/pytorch/android-demo-app/issues/23 | open | [] | 2019-10-25T09:52:27Z | 2020-08-25T03:50:08Z | null | niushaoda | |
pytorch/vision | 1,523 | Unable to pass `extensions` when creating custom `Kinetics400` Video Dataset | Thank you for the video support!
When imported using `from torchvision.datasets.kinetics import *`, the `Kinetics400` class doesn't accept an `extensions` argument:
```python
data = Kinetics400(root=data_path, frames_per_clip=32, extensions=('.mp4',))
--------------------------------------------------------... | https://github.com/pytorch/vision/issues/1523 | closed | [
"question",
"module: datasets",
"module: video"
] | 2019-10-25T07:28:37Z | 2019-10-25T09:48:03Z | null | rsomani95 |
huggingface/transformers | 1,626 | What is currently the best way to add a custom dictionary to a neural machine translator that uses the transformer architecture? | ## ❓ Questions & Help
It's common to add a custom dictionary to a machine translator to ensure that terminology from a specific domain is correctly translated. For example, the term server should be translated differently when the document is about data centers, vs when the document is about restaurants.
With a t... | https://github.com/huggingface/transformers/issues/1626 | closed | [
"wontfix"
] | 2019-10-24T17:48:10Z | 2020-01-04T09:41:58Z | null | moyid |
pytorch/examples | 645 | Add Siamese Network example | Hi, I want to add an example for Siamese network, since it is one of the popular use cases in ML. I am thinking of implementing it in a way similar to other examples viz. command line arguments to choose which dataset to train, hyperparameters etc.
Is there something I need to keep in mind specifically apart from the... | https://github.com/pytorch/examples/issues/645 | open | [
"good first issue"
] | 2019-10-24T11:08:50Z | 2022-05-13T18:17:30Z | 4 | piyush01123 |
pytorch/vision | 1,521 | per class mAP in coco_eval script? | Hi,
I was looking around the eval code and did not find a function to calculate **per-class mAP**. Is there an easy workaround to include that? Thanks. @fmassa | https://github.com/pytorch/vision/issues/1521 | closed | [
"question",
"module: reference scripts",
"topic: object detection"
] | 2019-10-23T15:58:28Z | 2019-10-25T14:43:13Z | null | manoja328 |
pytorch/vision | 1,520 | DeepLabV3: segment only person | How can I segment only the person class and skip the other classes using DeepLabV3? | https://github.com/pytorch/vision/issues/1520 | closed | [
"question",
"topic: semantic segmentation"
] | 2019-10-23T10:22:46Z | 2020-01-13T17:37:27Z | null | muna-cs |
pytorch/pytorch | 28,478 | How to train a torch::jit::script::Module? | Existing documentation / tutorials show only how to train a `torch::nn::Module` https://pytorch.org/cppdocs/frontend.html#end-to-end-example
I have attempted to make a training loop in the following manner
```
#include <torch/script.h>
#include <torch/torch.h>
#include <iostream>
#include <vector>
// custom lo... | https://github.com/pytorch/pytorch/issues/28478 | closed | [
"oncall: jit",
"module: cpp",
"triaged"
] | 2019-10-22T23:36:22Z | 2022-01-20T22:41:16Z | null | markisus |
pytorch/text | 622 | How to integrate HuggingFace transformers with Torchtext BPTTIterator? | ## ❓ Questions and Help
Hello,
I am trying to use the pretrained tokenizer from the HuggingFace Transformer-XL when training my custom Transformer-XL model on WikiText2, and I am having trouble making the BPTTIterator from Torchtext work.
Below is my code:
```python
# Import packages
import torch
... | https://github.com/pytorch/text/issues/622 | open | [] | 2019-10-21T17:27:46Z | 2020-07-18T19:13:42Z | null | h56cho |
pytorch/ios-demo-app | 3 | Add example of how to optimize model for mobile inference | This demo is great and works fine although it would be great to have an example of how to prepare model for mobile inference cause it's non trivial. For example you can add the receipt of how you've prepare the `mobilenet_quantized.pt`.
(Personally i've tried to convert my model to `float16` (it didn't work: model did... | https://github.com/pytorch/ios-demo-app/issues/3 | closed | [] | 2019-10-19T14:53:57Z | 2020-03-11T17:59:13Z | null | mirth |
pytorch/pytorch | 28,331 | How to save quantized model in PyTorch1.3 with quantization information | ## ❓ How to save the quantized model in PyTorch1.3 with quantization information
Is there any way to save a quantized model in PyTorch 1.3 that keeps the original quantization information?
I have known that I can save it after tracing it by:
```python
# Save
torch.jit.save(torch.jit.script(self.model_q), "quant... | https://github.com/pytorch/pytorch/issues/28331 | closed | [
"oncall: quantization",
"triaged"
] | 2019-10-19T07:55:01Z | 2019-10-23T17:08:14Z | null | vippeterhou |
pytorch/examples | 643 | How to run dcgan example? | I want to run `dcgan` example, however, the readme is not very clear.
I have downloaded classroom model from lsun as below
```
$ ls classroom_train_lmdb -lh
total 3.5G
-rw-r--r-- 1 mahmood mahmood 3.5G May 1 2015 data.mdb
-rw-r--r-- 1 mahmood mahmood 63K May 1 2015 lock.mdb
$ ls classroom_val_lmdb -lh
to... | https://github.com/pytorch/examples/issues/643 | closed | [] | 2019-10-18T07:24:45Z | 2022-03-09T23:35:07Z | null | mahmoodn |
pytorch/tutorials | 705 | Where is the demo dataset and model files in (EXPERIMENTAL) STATIC QUANTIZATION WITH EAGER MODE IN PYTORCH | I'm trying to run the codes in [(EXPERIMENTAL) STATIC QUANTIZATION WITH EAGER MODE IN PYTORCH](https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html#experimental-static-quantization-with-eager-mode-in-pytorch), but there are no dataset and model files available, such as **imagenet_1k, mobilenet_quant... | https://github.com/pytorch/tutorials/issues/705 | closed | [] | 2019-10-18T01:38:06Z | 2019-10-27T08:14:15Z | null | Aspirinkb |
huggingface/neuralcoref | 219 | Pre-trained english model | Hi,
Is the pre-trained english model shipped with coref a model trained on the CoNLL and Ontonotes datasets?
Thanks! | https://github.com/huggingface/neuralcoref/issues/219 | closed | [
"question",
"training"
] | 2019-10-17T18:49:51Z | 2019-10-17T20:06:00Z | null | masonedmison |
huggingface/neuralcoref | 218 | State-of-the-art benchmark | Hi,
You are claiming neuralCoref to be state-of-the-art for coreference resolution. Do you have any benchmark supporting the claim? I would like to include it in my paper. Also can it be cited yet? | https://github.com/huggingface/neuralcoref/issues/218 | closed | [
"question",
"perf / accuracy"
] | 2019-10-17T15:30:16Z | 2019-10-21T13:59:12Z | null | Masum06 |
huggingface/neuralcoref | 217 | train conll with BERT | Hi
I would like to train on the CoNLL-2012 data with BERT. The common approach is to first convert the data to NLI format and then use the NLI BERT on it. I was wondering if you could assist and add the BERT-based code to this repo. I really appreciate your help.
Thanks a lot.
Best,
Julia | https://github.com/huggingface/neuralcoref/issues/217 | closed | [
"question"
] | 2019-10-17T09:25:01Z | 2019-10-17T15:33:22Z | null | ghost |
huggingface/transformers | 1,543 | Where is pytorch-pretrained-BERT? | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
As the title shows, where is pytorch-pretrained-BERT? Please tell me the path, THX. | https://github.com/huggingface/transformers/issues/1543 | closed | [] | 2019-10-17T07:46:13Z | 2019-12-05T10:27:31Z | null | Foehnc |
pytorch/pytorch | 28,202 | How to quantize resnet in pytorch 1.3? | I tried to quantize resnet18 refer to https://pytorch.org/tutorials/advanced/dynamic_quantization_tutorial.html
but I got this error
```
>>> from torchvision.models import resnet18
>>> net= resnet18()
>>> from torch.quantization import quantize_dynamic
>>> qnet = quantize_dynamic(net,{nn.Conv2d,nn.Linear},dtype... | https://github.com/pytorch/pytorch/issues/28202 | closed | [] | 2019-10-17T04:03:02Z | 2020-06-23T14:10:10Z | null | Arctanxy |
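(Editor's note: the error above reflects that, in PyTorch 1.3, dynamic quantization covered modules such as `nn.Linear` and `nn.LSTM` but not `nn.Conv2d`, which needed static quantization. Independent of either API, the affine int8 mapping both flows rely on can be sketched in plain Python; the `scale` and `zero_point` values below are hypothetical.)

```python
# Plain-Python sketch of affine int8 quantization:
# q = round(x / scale) + zero_point, clamped to the int8 range.
def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

scale, zp = 0.25, 0          # hypothetical quantization parameters
q = quantize(1.0, scale, zp)   # 1.0 / 0.25 = 4
x = dequantize(q, scale, zp)   # recovers 1.0 exactly at this scale
print(q, x)  # 4 1.0
```

Values outside the representable range are clamped (e.g. `quantize(100.0, 0.25, 0)` saturates at 127), which is the usual source of quantization error for outliers.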
pytorch/pytorch | 28,066 | How to speed up installing pytorch1.3? | I am installing pytorch1.3 using pip. The command from the official site is `pip3 install torch===1.3.0 torchvision===0.4.1 -f https://download.pytorch.org/whl/torch_stable.html`.

My pip is using a mirror... | https://github.com/pytorch/pytorch/issues/28066 | closed | [
"triaged"
] | 2019-10-16T07:19:30Z | 2019-10-17T23:13:46Z | null | gaopinghai |
pytorch/pytorch.github.io | 287 | How to replace the website in the install command after -f ? | I am intalling pytorch on windows7 using pip. I get the command throgh the official website as the picture shows.

The command is `pip3 install torch===1.3.0 torchvision===0.4.1 -f https://download.pytorch.... | https://github.com/pytorch/pytorch.github.io/issues/287 | open | [] | 2019-10-16T06:12:43Z | 2019-10-16T06:13:18Z | null | gaopinghai |
pytorch/text | 619 | How to use torchtext for sequence labelling with wordpiece tokeniers | ## ❓ Questions and Help
**Description**
<!-- Please send questions or ask for help here. -->
Hi,
In a previous issue (#609), I asked how to use the tokenizer from the [Transformers](https://github.com/huggingface/transformers) library with torch text.
I now would like to be able to use this tokenizer and t... | https://github.com/pytorch/text/issues/619 | closed | [] | 2019-10-15T14:42:09Z | 2020-02-22T03:22:23Z | null | JohnGiorgi |
pytorch/pytorch | 27,958 | how to use libtorch library in cuda file with nvcc compiler(c++)? | ## ❓ Questions and Help
# Motivation
I want to implement NMS with parallel processing using the libtorch library.
I use this CUDA code (https://github.com/gdlg/pytorch_nms).
# Environment
PyTorch version : 1.2.0
CUDA (nvcc compiler ) : 10.0
libtorch version : 1.2.0
system : win10
# Operation
the command :`i use ... | https://github.com/pytorch/pytorch/issues/27958 | open | [
"module: cpp",
"triaged"
] | 2019-10-15T03:35:07Z | 2020-05-08T08:30:40Z | null | CasonTsai |
pytorch/pytorch | 27,827 | How to hide latency on libtorch by multithreads? A problem about double stream pipelines execution. | Hello, I want to hide the latency between the data loader and inference. I implemented it with OpenMP as a simple double-stream pipelined execution. However, the code "auto t = model->forward({Tensor.to(kCUDA)}).toTensor()" doesn't support multithreading (OpenMP).
Is there any solution?
My idea is just like Fig. 6 on this webs... | https://github.com/pytorch/pytorch/issues/27827 | closed | [] | 2019-10-14T02:50:44Z | 2019-10-14T08:20:37Z | null | xiaoLiuxiaoLiuxiaoLiu |
pytorch/examples | 640 | Do we still need to divide sample by ourselves when using a single GPU per process? | In https://github.com/pytorch/examples/blob/ee964a2eeb41e1712fe719b83645c79bcbd0ba1a/imagenet/main.py#L149, args.batch_size is manually divided by the number of processes.
However, when I checked https://pytorch.org/docs/stable/_modules/torch/utils/data/distributed.html#DistributedSampler, I found that DistributedSa... | https://github.com/pytorch/examples/issues/640 | closed | [] | 2019-10-14T01:58:19Z | 2020-02-14T10:24:15Z | 2 | taroxd |
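(Editor's note: the arithmetic behind that division can be shown in plain Python with hypothetical numbers. `DistributedSampler` gives each process a disjoint shard of the dataset, while `DataLoader`'s `batch_size` is *per process*, so dividing the desired global batch by the world size keeps the effective batch unchanged.)

```python
# Hypothetical numbers illustrating the per-process batch-size arithmetic.
dataset_len = 1000
world_size = 4        # number of processes (one GPU each)
global_batch = 256    # desired effective batch size across all processes

# The sampler shards the dataset (ceil division, since it pads to even shards):
samples_per_process = -(-dataset_len // world_size)

# DataLoader's batch_size is per process, hence the manual division:
per_process_batch = global_batch // world_size
effective_global_batch = per_process_batch * world_size

print(samples_per_process, per_process_batch, effective_global_batch)  # 250 64 256
```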
pytorch/examples | 638 | missing indent in def train(...) in `imagenet` | https://github.com/pytorch/examples/blob/ee964a2eeb41e1712fe719b83645c79bcbd0ba1a/imagenet/main.py#L284
It seems there is a missing indent in the imagenet train(...) function.
`/example/imagenet/main.py`, line 282 to 284.
```python
if args.gpu is not None:
images = images.cuda(args.gpu, non_blocking=T... | https://github.com/pytorch/examples/issues/638 | closed | [] | 2019-10-12T05:07:56Z | 2019-10-22T21:53:05Z | 1 | HearyShen |
huggingface/transformers | 1,503 | What is the best way to handle sequences > max_len for tasks like abstract summarization? | What is the best way to handle situations where a sequence in your dataset exceeds the max length defined for a model?
For example, if I'm working on an abstract summarization task with a Bert model having a `max_position_embeddings=512` and tokenizer with `max_len=512`, how should I handle documents where the token... | https://github.com/huggingface/transformers/issues/1503 | closed | [
"wontfix"
] | 2019-10-12T00:40:50Z | 2020-02-17T13:26:11Z | null | ohmeow |
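(Editor's note: one common answer to this question is a sliding-window split of the token sequence before feeding the model. A minimal plain-Python sketch follows; the function name, numbers, and integer "token ids" are illustrative, not a Transformers API.)

```python
# Split an over-long token sequence into overlapping chunks of at most
# max_len tokens, so each chunk fits the model's position-embedding limit.
def chunk_tokens(tokens, max_len=512, stride=128):
    assert 0 <= stride < max_len  # the window must advance each step
    if len(tokens) <= max_len:
        return [tokens]
    chunks = []
    step = max_len - stride  # how far the window advances each time
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
    return chunks

chunks = chunk_tokens(list(range(1000)), max_len=512, stride=128)
print(len(chunks), len(chunks[0]), chunks[1][0])  # 3 512 384
```

Per-chunk outputs then have to be combined downstream (e.g. summarize each chunk and merge), which is a modeling choice this sketch leaves open.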
pytorch/tutorials | 694 | net visualization image (https://pytorch.org/tutorials/_images/mnist.png) has the wrong dimensions | In the tutorial: beginner_source/blitz/neural_networks_tutorial.py,
The explanation for the first linear layer dimensions is unclear:
self.fc1 = nn.Linear(16 * 6 * 6, 120) # 6*6 from image dimension
The input image dimension expected is 32 x 32.
The visualization of the net shows a dimension of 5x5 after the l... | https://github.com/pytorch/tutorials/issues/694 | closed | [] | 2019-10-11T15:30:46Z | 2021-04-26T20:14:34Z | 1 | tinku99 |
pytorch/pytorch | 27,479 | [JIT] Figure out how to easily investigate memory usage issues | e.g. https://github.com/pytorch/pytorch/issues/25267
And other internal reports
cc @suo | https://github.com/pytorch/pytorch/issues/27479 | open | [
"oncall: jit",
"triaged"
] | 2019-10-07T18:24:08Z | 2020-02-28T18:54:51Z | null | jamesr66a |
pytorch/vision | 1,395 | How to Crop single image before calling torchvision.utils.save_image, If I am using PIL lib Image.crop(....) method then image quality degrade. |
vutils.save_image(fixed_fake.data,outputpath , normalize=True)
print("output path",outputpath)
img = Image.open(outputpath)
noOfRow = 5
noOfColumn = 8
x1 = 2
y1 = 2
x2 = 130
y2 = 130
folder = file_batch
for i in range(0, noOfColumn):
dest_dir = file_ba... | https://github.com/pytorch/vision/issues/1395 | open | [
"module: utils"
] | 2019-09-30T20:35:12Z | 2021-02-21T15:56:52Z | null | praveenkumarchandaliya |
pytorch/pytorch | 27,070 | How to share a submodule but not copying its parameters in the computing graph? | Hi,
I am trying to feed a list of input images to a model that incorporates a number of copies of the same submodule. The model is like the following:
```
class SubModule(nn.Module):
def __init__(self):
super(SubModule, self).__init__()
self.embedding = nn.Linear(1000,20)
def forward(self, input):
return self.e... | https://github.com/pytorch/pytorch/issues/27070 | closed | [] | 2019-09-30T16:02:24Z | 2020-03-19T06:06:45Z | null | ukaneverin |
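(Editor's note: the distinction this question turns on can be illustrated without PyTorch at all: assigning the *same* submodule object to several branches shares its parameters, while constructing a new instance per branch creates independent ones. `Embedding` below is a toy stand-in, not `nn.Linear`.)

```python
# Toy sketch: object identity is what makes parameters shared.
class Embedding:
    def __init__(self):
        self.weight = [0.0] * 4   # stands in for a layer's parameters

shared = Embedding()
branches = [shared, shared, shared]          # one parameter set, reused
separate = [Embedding() for _ in range(3)]   # three independent parameter sets

shared.weight[0] = 1.0  # mutating the shared object is visible in every branch
print(branches[1].weight[0], separate[1].weight[0])  # 1.0 0.0
```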
pytorch/pytorch | 27,033 | How to increase numerical accuracy of Pytorch model? | I wrote this line in my script:
`print(self.netG(self.real_A)-self.netG(self.real_A))`
I expected an all-zero tensor, but that is not what I get.
```
tensor([[ [[-0.0032, 0.0089, -0.0085, ..., -0.0027, 0.0004, -0.0022],
[-0.0019, -0.0022, 0.0775, ..., 0.0236, -0.0277, -0.0125],
[ 0.0049, 0.01... | https://github.com/pytorch/pytorch/issues/27033 | closed | [] | 2019-09-29T13:11:06Z | 2019-10-02T12:56:36Z | null | gentlezr |
pytorch/vision | 1,384 | How to test my trained model on my data set | https://github.com/pytorch/vision/issues/1384 | closed | [
"question"
] | 2019-09-29T09:52:21Z | 2019-09-30T12:35:10Z | null | PL-96 | |
pytorch/pytorch | 26,880 | in TracedModel how to get model parameter like convolution stride info. | ## ❓ Questions and Help
I use traced_model._modules[‘conv1’] to access the conv module.
But how can I find the ‘stride’ info in the tracedModel object?
Is there any document describing the tracedModel API and structure?
Thanks,
8086 | https://github.com/pytorch/pytorch/issues/26880 | closed | [] | 2019-09-26T08:31:45Z | 2019-09-26T20:34:53Z | null | joe8086 |
pytorch/pytorch | 26,803 | install pytorch1.2 where the environment is cuda9.0? | Can you tell me how to install PyTorch 1.2 in an environment with CUDA 9.0?
I don't have root access, so I can't upgrade CUDA. | https://github.com/pytorch/pytorch/issues/26803 | closed | [
"module: build",
"triaged"
] | 2019-09-25T14:29:19Z | 2019-09-25T22:17:47Z | null | zyxdSTU |
pytorch/pytorch | 26,717 | How to use RandomSampler? | ## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
class RandomSampler in torch/utils/data/sampler.py
def __iter__(self):
n = len(self.data_source)
if self.replacement:
return iter(torch.randint(high=n, size=(self.num_samples,), dtype=torch.int64).tolist())
... | https://github.com/pytorch/pytorch/issues/26717 | closed | [] | 2019-09-24T15:13:42Z | 2019-09-24T15:31:55Z | null | sp2823 |
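(Editor's note: what the two branches of `RandomSampler.__iter__` boil down to can be mirrored in plain Python, with the `random` module standing in for `torch.randint`/`torch.randperm`; the function name here is illustrative.)

```python
import random

# Sketch of RandomSampler's two modes (torch/utils/data/sampler.py):
def random_sampler_indices(n, replacement=False, num_samples=None):
    num_samples = num_samples if num_samples is not None else n
    if replacement:
        # sampling WITH replacement: indices may repeat
        return [random.randrange(n) for _ in range(num_samples)]
    # without replacement: a random permutation of all n indices
    perm = list(range(n))
    random.shuffle(perm)
    return perm

idx = random_sampler_indices(10)
print(sorted(idx))  # every index appears exactly once: [0, 1, ..., 9]
```

`num_samples` is only meaningful with `replacement=True`, which matches the constraint the sampler enforces.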
pytorch/pytorch | 26,707 | How to build pytorch for android | ## ❓ How to build pytorch for android
when i run this command,
```
export ANDROID_NDK=~/android-ndk-r20
set USE_NCCL=OFF
set USE_CUDA=OFF
bash scripts/build_android.sh
```
I got the following errors:
```
@ error/constitute.c/WriteImage/1028.
' @ error/constitute.c/WriteImage/1028.
: not foundL/ly/software/pyto... | https://github.com/pytorch/pytorch/issues/26707 | closed | [
"module: build",
"triaged",
"oncall: mobile"
] | 2019-09-24T03:02:44Z | 2019-09-24T15:09:23Z | null | blackxer |
huggingface/neuralcoref | 203 | training new language(French) | How can I get data in the same form as the English data (is there any tool to do that)? | https://github.com/huggingface/neuralcoref/issues/203 | closed | [
"question",
"training"
] | 2019-09-23T13:16:42Z | 2019-10-14T07:48:00Z | null | Berrougui |
pytorch/extension-cpp | 44 | How to write cuda code of the multilayer units | This tutorial helped me write a single-layer unit with CUDA code.
But how do I write CUDA code for multilayer units, like torch/nn/_functions/rnn.py line 281?
```
output, hy, cy, reserve, new_weight_buf = torch._cudnn_rnn(
input, weight_arr, weight_stride0,
flat_weight,
hx, cx... | https://github.com/pytorch/extension-cpp/issues/44 | open | [] | 2019-09-23T03:37:04Z | 2019-09-24T14:54:37Z | null | haoyz |
pytorch/pytorch | 26,630 | How to script a model using c++ extension? I met this error | ## ❓ How to script a model using c++ extension? I met this error
```
RuntimeError:
Could not export Python function call '_DCNv2'. Remove calls to Python functions before export. Did you forget add @script or @script_method annotation? If this is a nn.ModuleList
```
| https://github.com/pytorch/pytorch/issues/26630 | closed | [] | 2019-09-22T11:30:34Z | 2019-09-24T14:42:22Z | null | yinnhao |
huggingface/transformers | 1,299 | What is the best CPU inference acceleration solution for BERT now? | Thank you very much.
"wontfix"
] | 2019-09-20T02:50:55Z | 2019-11-20T01:42:25Z | null | guotong1988 |