| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/tutorials | 259 | Confirm if batch training in seq2seq tutorial? | for the tutorial _pytorch->tutorials/intermediate_source/seq2seq_translation_tutorial.py_: [https://github.com/pytorch/tutorials/blob/master/intermediate_source/seq2seq_translation_tutorial.py](url)
According to lines 636-646, It seems like it is training with one sentence at a time, instead of batch training. Am I ... | https://github.com/pytorch/tutorials/issues/259 | closed | [] | 2018-06-14T04:55:06Z | 2021-07-30T23:01:59Z | 3 | ecilay |
pytorch/examples | 373 | float16 mixed precision training on Titan V is slower than float32 | Since I cannot find a place to download imagenet dataset, I modified mnist example to support float16 training, please see the code in https://github.com/qigtang/examples.git, commit ed095d384529808f930161cbf005963ad482c22a
When running in my Titan V GPU
? | https://github.com/pytorch/examples/issues/372 | closed | [] | 2018-06-12T11:21:10Z | 2020-02-19T06:43:35Z | 0 | shaoxiongji |
pytorch/text | 335 | where is the documentation? | https://github.com/pytorch/text/issues/335 | closed | [] | 2018-06-05T08:24:53Z | 2018-06-06T11:05:01Z | null | udion | |
pytorch/examples | 368 | Is it a right implement for rnn model? | I found an implementation of an RNN model, but its `forward` is not in the usual format: the `forward` function takes three parameters. I wonder whether this is a correct implementation of an RNN model?
the link:https://github.com/zhangxu0307/time_series_forecasting_pytorch/blob/master/code/model.py | https://github.com/pytorch/examples/issues/368 | closed | [] | 2018-06-04T10:15:28Z | 2018-06-04T16:20:42Z | 1 | lxj0276 |
pytorch/examples | 367 | TransformerNet no longer works in pytorch 0.4 | Is there anything that can be done to fix this?
When I call it I receive:
Traceback (most recent call last):
File "neural_style.py", line 651, in <module>
main()
File "neural_style.py", line 645, in main
stylize(args)
File "neural_style.py", line 437, in stylize
style_model.load_state_dict(... | https://github.com/pytorch/examples/issues/367 | closed | [] | 2018-06-03T22:24:45Z | 2022-11-25T21:38:19Z | 2 | Zekodon |
pytorch/examples | 362 | InceptionV3 cannot work! | `python main.py -a inception_v3 ./imagenet/cat2dog --batch-size 16 --print-freq 1 --pretrained;`
=> using pre-trained model 'inception_v3'
Traceback (most recent call last):
File "main.py", line 314, in <module>
main()
File "main.py", line 157, in main
train(train_loader, model, criterion, optimizer,... | https://github.com/pytorch/examples/issues/362 | open | [
"help wanted",
"vision"
] | 2018-05-27T21:15:55Z | 2022-03-10T06:02:49Z | 8 | happsky |
pytorch/examples | 357 | language model generator question | In this file:
https://github.com/pytorch/examples/blob/master/word_language_model/generate.py
What does this input mean in the generation?
input = torch.randint(ntokens, (1, 1), dtype=torch.long).to(device)
As I understand it in a rnn-based language model, the last output of the rnn is fed into the curr... | https://github.com/pytorch/examples/issues/357 | open | [
"triaged"
] | 2018-05-18T22:37:47Z | 2022-03-10T00:29:50Z | 2 | evanthebouncy |
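A plausible reading of the line in question, sketched below: it draws a single random token id to seed generation, presumably because the word language model corpus has no start-of-sequence marker (`ntokens` here is a made-up vocabulary size).

```python
import torch

ntokens = 10000  # hypothetical vocabulary size

# torch.randint(ntokens, (1, 1)) draws one random token id in [0, ntokens),
# shaped (seq_len=1, batch=1). Generation is seeded with this random word;
# each sampled word is then fed back in as the next input.
inp = torch.randint(ntokens, (1, 1), dtype=torch.long)
print(inp.shape)  # torch.Size([1, 1])
```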
pytorch/examples | 355 | Imagenet training example - RandomResizedCrop | This is regarding
https://github.com/pytorch/examples/blob/master/imagenet/main.py#L122
The default scale argument for the transform RandomResizedCrop is defined as scale=(0.08, 1.0) - defined in pytorch/vision/transform
RandomResizedCrop is doing a crop first and then scale to the desired size. What could be... | https://github.com/pytorch/examples/issues/355 | closed | [] | 2018-05-16T21:40:38Z | 2018-06-05T13:24:36Z | 1 | mathmanu |
pytorch/examples | 347 | fast_neural_style using cuda | 0.4.0
Cuda 9.0
cudnn 7.1
python3.5
I am trying to train a new model using cuda.
I am getting a RuntimeError
```
Traceback (most recent call last):
File "neural_style/neural_style.py", line 239, in <module>
main()
File "neural_style/neural_style.py", line 233, in main
train(args)
File "n... | https://github.com/pytorch/examples/issues/347 | closed | [] | 2018-05-03T19:17:57Z | 2018-05-15T20:39:02Z | 2 | spencerbull |
pytorch/ELF | 6 | What is the winrate for the Leela Zero rematch and how is it coming along? | https://github.com/gcp/leela-zero/issues/1311#issuecomment-386156687 | https://github.com/pytorch/ELF/issues/6 | closed | [] | 2018-05-03T03:21:42Z | 2018-05-03T15:09:12Z | null | bochen2027 |
pytorch/vision | 484 | What is the relationship between the output label of pretrained model in model zoo and wordnet synset id? | we can easily access pytorch pre-trained model like VGG, AlexNet and SqueezeNet by
import torchvision
torchvision.models.vgg16(pretrained=True)
can anyone point out what's the relationship between the output label(index of maximum output value) and the actual category?
i downloaded ILSVRC2012_devkit_... | https://github.com/pytorch/vision/issues/484 | open | [
"enhancement"
] | 2018-05-02T07:23:46Z | 2019-06-10T10:06:57Z | null | imkzh |
pytorch/tutorials | 226 | how to get the tutorial for pytorch-0.3.1 | The site http://pytorch.org/tutorials/ now only covers pytorch-0.4.0.
How do I get the earlier versions of the tutorials? | https://github.com/pytorch/tutorials/issues/226 | closed | [] | 2018-04-25T04:18:33Z | 2018-04-27T11:06:26Z | 1 | HarryRuiTse |
pytorch/pytorch | 6,486 | Where is the Caffe2 website? | The gh-pages branch doesn't exist. | https://github.com/pytorch/pytorch/issues/6486 | closed | [] | 2018-04-10T21:54:41Z | 2018-04-10T21:58:08Z | null | louisabraham |
pytorch/examples | 330 | Use pretrained word embeddings | I want to use my pretrained word embeddings to train this model. How do I go about implementing it?
Thanks! | https://github.com/pytorch/examples/issues/330 | closed | [
"question"
] | 2018-04-10T18:07:59Z | 2022-03-10T03:43:27Z | 3 | BordiaS |
pytorch/pytorch | 6,468 | BatchNorm2d when batch size 1 works, what is it doing? | `BatchNorm2d` works even when batch size is 1, which puzzles me. So what is it doing when batch size is 1? The only related thread I could find is https://github.com/pytorch/pytorch/issues/1381 without much explanation.
minimal example:
```
x = Variable(torch.randn(1,2,3,3))
m = nn.BatchNorm2d(2)
y = m(x)
``` | https://github.com/pytorch/pytorch/issues/6468 | closed | [] | 2018-04-10T15:09:39Z | 2018-04-10T16:04:25Z | null | chanshing |
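A short sketch of what is happening in that minimal example: in training mode `BatchNorm2d` normalizes each channel over all N×H×W elements, so with batch size 1 it still has 1×3×3 = 9 values per channel to compute statistics from (it only fails when there is a single value per channel).

```python
import torch
import torch.nn as nn

x = torch.randn(1, 2, 3, 3)
m = nn.BatchNorm2d(2)
y = m(x)  # works: each channel is normalized over its 1*3*3 = 9 elements

# In training mode the per-channel output mean is (numerically) zero,
# since weight initializes to 1 and bias to 0:
per_channel_mean = y.mean(dim=(0, 2, 3))
print(per_channel_mean.abs().max().item() < 1e-5)  # True
```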
pytorch/examples | 327 | Absence of seed for result reproduction | Hello,
When running ImageNet with different resnet architectures (18, 152, ...), I'm not able to reproduce the results. There is a small variation in accuracy.
https://github.com/pytorch/examples/blob/master/imagenet/main.py
What is wrong ?
even by making in
```
main() :
seed=15
torch.manua... | https://github.com/pytorch/examples/issues/327 | closed | [] | 2018-04-09T13:40:23Z | 2022-03-10T03:40:23Z | 1 | pinkfloyd06 |
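For what it's worth, a commonly used seeding recipe is sketched below; this is an assumption about what the truncated snippet was attempting, and even with it, exact GPU determinism additionally depends on cuDNN settings and some ops remain non-deterministic.

```python
import random
import torch

def set_seed(seed: int) -> None:
    # Seed every RNG the training loop may touch.
    random.seed(seed)
    torch.manual_seed(seed)            # CPU (and, on recent versions, CUDA) RNG
    torch.cuda.manual_seed_all(seed)   # all GPUs; no-op without CUDA
    # cuDNN picks algorithms non-deterministically unless told otherwise:
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(15)
a = torch.randn(3)
set_seed(15)
b = torch.randn(3)
print(torch.equal(a, b))  # True
```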
pytorch/examples | 326 | [Super resolution] image Resizing &low psnr value result | https://github.com/pytorch/examples/blob/dcdabc22b305d2f2989c6f03570dfcd3919e8a5b/super_resolution/data.py#L41
I think resizing LANCZOS interpolation is better than default BILINEAR
`Resize(crop_size // upscale_factor,interpolation=Image.LANCZOS)`
__How does downsampling work in a normal SR?__
And In the Set5 dat... | https://github.com/pytorch/examples/issues/326 | open | [
"vision"
] | 2018-04-08T13:17:50Z | 2022-03-10T03:44:41Z | 8 | ryujaehun |
pytorch/tutorials | 221 | epub format support | Is it possible to officially provide an epub format of the tutorials?
I tried to build it with `make epub`,
but it took too much time and I never finished. | https://github.com/pytorch/tutorials/issues/221 | closed | [] | 2018-04-05T13:11:36Z | 2018-04-27T11:08:18Z | 3 | zmlcc |
pytorch/tutorials | 218 | Char-RNN tutorial giving Error. | I was running the code for Char level RNN in the PyTorch docs, found here: http://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html .
I got the error:
```
Traceback (most recent call last):
File "names.py", line 86, in <module>
rnn = RNN(n_letters, n_hidden, n_categories)
File "nam... | https://github.com/pytorch/tutorials/issues/218 | closed | [] | 2018-03-25T15:37:52Z | 2021-06-16T21:33:27Z | 1 | ayush1999 |
pytorch/tutorials | 216 | The code snippets in How to create custom C extension has something wrong IMHO. | In the official tutorial about [how to create custom C extension](http://pytorch.org/tutorials/advanced/c_extension.html) page, I think there are still minor problems. First, in the src/my_lib.c file, here is the code snippets,
```
int my_lib_add_backward(THFloatTensor *grad_output, THFloatTensor *grad_input)
{
... | https://github.com/pytorch/tutorials/issues/216 | closed | [] | 2018-03-24T14:53:42Z | 2018-05-19T18:00:54Z | 1 | sonack |
pytorch/examples | 317 | How to understand this way of declaring a class? | `class Linear(Bottle, nn.Linear): pass`
(in snli/model.py line 16)
I'm new user of torch. I get confused about this statement. Can someone help me?
| https://github.com/pytorch/examples/issues/317 | closed | [] | 2018-03-17T08:24:56Z | 2018-03-17T14:25:02Z | 1 | jueliangguke |
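The pattern relies on Python's method resolution order: `Linear` gains `Bottle`'s overridden methods, and `Bottle`'s cooperative `super()` calls fall through to `nn.Linear`. A torch-free sketch with stand-in classes (the bodies here are illustrative, not the snli code):

```python
class Base:
    # stands in for nn.Linear
    def forward(self, x):
        return x + 1

class Bottle:
    # stands in for the mixin: wraps the next class in the MRO via super()
    def forward(self, x):
        return super().forward(x) * 2

class Linear(Bottle, Base):
    pass  # no body needed: behaviour comes entirely from the MRO

print([c.__name__ for c in Linear.__mro__])  # ['Linear', 'Bottle', 'Base', 'object']
print(Linear().forward(3))  # (3 + 1) * 2 = 8
```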
pytorch/pytorch | 5,833 | [Doc Bug] where is classmethod torch.nn.Embedding.from_pretrained? | There is a method to initialize Embedding from pretrained data (torch.Tensor).
http://pytorch.org/docs/master/nn.html
However that method does not exist in pytorch 0.3.1 .
If it was deprecated, what should I do to load pretrained word vectors such as torchtext.vocab.GloVe?
```python
import torch as th
emb... | https://github.com/pytorch/pytorch/issues/5833 | closed | [] | 2018-03-16T13:03:21Z | 2018-03-16T13:19:43Z | null | cdluminate |
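On releases after 0.3.1 the classmethod does exist; on 0.3.1 itself the usual workaround was to copy the pretrained matrix into the layer's weight. A sketch (the `weights` tensor is made up):

```python
import torch
import torch.nn as nn

weights = torch.tensor([[0.1, 0.2, 0.3],
                        [0.4, 0.5, 0.6]])

# Newer PyTorch: classmethod constructor (freeze=True is the default).
emb = nn.Embedding.from_pretrained(weights)

# 0.3.1-era workaround: build the layer, then overwrite its weight.
emb_old = nn.Embedding(num_embeddings=2, embedding_dim=3)
emb_old.weight.data.copy_(weights)

idx = torch.tensor([1])
print(torch.equal(emb(idx), emb_old(idx)))  # True
```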
pytorch/examples | 316 | Imagenet datasets | How to get validation images of ImageNet dataset | https://github.com/pytorch/examples/issues/316 | closed | [] | 2018-03-16T08:31:52Z | 2018-11-07T17:33:11Z | 2 | 22wei22 |
pytorch/examples | 312 | Doc comment on `accuracy` method in imagenet example, incorrect? | I'm confused with the doc comment for the `accuracy` function in the imagenet example:
```python
def accuracy(output, target, topk=(1,)):
"""Computes the precision@k for the specified values of k"""
maxk = max(topk)
batch_size = target.size(0)
_, pred = output.topk(maxk, 1, True, True)
pr... | https://github.com/pytorch/examples/issues/312 | open | [
"good first issue"
] | 2018-02-27T12:02:28Z | 2022-03-10T03:09:32Z | 1 | willprice |
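For reference, a sketch of what that function computes on made-up logits: top-k *accuracy* (the docstring's "precision@k" wording is what the issue questions) — the fraction of samples whose target appears among the k highest-scoring classes.

```python
import torch

output = torch.tensor([[0.1, 0.7, 0.2],   # predicts class 1
                       [0.6, 0.3, 0.1]])  # predicts class 0
target = torch.tensor([1, 2])

maxk = 2
_, pred = output.topk(maxk, dim=1, largest=True, sorted=True)  # (batch, k) indices
correct = pred.t().eq(target.view(1, -1))                      # (k, batch) hit matrix

top1 = correct[:1].reshape(-1).float().sum().item() / target.size(0)
top2 = correct[:2].reshape(-1).float().sum().item() / target.size(0)
print(top1, top2)  # 0.5 0.5
```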
pytorch/examples | 308 | Clarification | https://github.com/pytorch/examples/blob/4ef2d4d0c8524372d0047e050065edcac665ce1a/vae/main.py#L61
Is there a particular reason why the method .exp_() is preferred to .exp() ? | https://github.com/pytorch/examples/issues/308 | closed | [] | 2018-02-23T12:07:39Z | 2018-12-13T06:45:41Z | 1 | ggbioing |
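A guess at the rationale, sketched below: `.exp_()` is the in-place variant, and if the preceding `.mul(0.5)` already allocated a fresh temporary, applying `exp_` in place on that temporary saves one allocation without touching `logvar`. Functionally the two spellings agree:

```python
import torch

logvar = torch.tensor([0.0, 1.0, -2.0])

std_inplace = logvar.mul(0.5).exp_()  # exp_ mutates only the temporary from mul()
std_plain = (0.5 * logvar).exp()      # out-of-place equivalent

print(torch.allclose(std_inplace, std_plain))  # True
print(logvar)  # unchanged: tensor([ 0.,  1., -2.])
```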
pytorch/examples | 304 | Is it possible to run snli: train.py on CPU (without CUDA)? | ```
$ conda list pytorch
# packages in environment at /Users/davidlaxer/anaconda:
#
pytorch 0.2.0 py27_4cu75 soumith
$ export NO_CUDA=0; python train.py
Traceback (most recent call last):
File "train.py", line 17, in <module>
torch.cuda.set_device(args.gpu)
File "... | https://github.com/pytorch/examples/issues/304 | closed | [] | 2018-02-10T19:38:49Z | 2022-04-07T18:19:14Z | 3 | dbl001 |
pytorch/examples | 298 | Reversed Sign? | https://github.com/pytorch/examples/blob/963f7d1777cd20af3be30df40633356ba82a6b0c/vae/main.py#L105
Aren't we trying to maximize that and hence there needs to be a negative sign here? | https://github.com/pytorch/examples/issues/298 | closed | [] | 2018-02-03T18:14:11Z | 2018-02-07T11:29:35Z | 2 | whamza15 |
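For context, a sketch of the standard formulation: the quantity being *minimized* is the negative ELBO, so the KL term appears as −0.5 Σ(1 + log σ² − μ² − σ²), which is always ≥ 0 and is added to the reconstruction loss (the minus sign is already folded in).

```python
import torch

mu = torch.randn(4, 8)
logvar = torch.randn(4, 8)

# KL(N(mu, sigma^2) || N(0, 1)), summed over elements; always non-negative,
# so the total objective (reconstruction + KLD) is minimized as-is.
kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
print(kld.item() >= 0)  # True
```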
pytorch/examples | 286 | Batching in Word Level Language Model | Hi,
It is not clear how the batching happens in the Language model.
It is not clear whether the input to the model in every iteration of the loop is [seq_length, batch_size, embed_size] or [batch_size, seq_length, embed_size].
Also, why does the rnn model return output and hidden separately? They are the same...... | https://github.com/pytorch/examples/issues/286 | closed | [] | 2018-01-15T16:44:11Z | 2018-01-17T03:32:51Z | 7 | mourinhoxyz |
pytorch/examples | 280 | Needs updating for PyTorch HEAD (no_grad) | volatile is no more in PyTorch HEAD, which means that you have to use the `no_grad` context manager now. Any examples using volatile need to be ported accordingly. However, we shouldn't do this until the next release, because examples should work for the current release. (If someone wants to get the jump, maybe a dev b... | https://github.com/pytorch/examples/issues/280 | open | [
"help wanted"
] | 2018-01-09T18:58:27Z | 2022-03-10T05:54:42Z | 2 | ezyang |
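The port is mechanical; a sketch of the before/after on a toy model:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
x = torch.randn(3, 4)

# Before 0.4: out = model(Variable(x, volatile=True))
# After 0.4: wrap evaluation in the no_grad context manager instead.
with torch.no_grad():
    out = model(x)

print(out.requires_grad)  # False: no autograd graph was built
```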
pytorch/examples | 278 | Is total variation loss necessary in fast_neural_style? | I notice that there is no total variation loss regularization implemented in the example of `fast_neural_style`. But the paper declared it and their torch version use it. I'm wondering if total variation loss is necessary or not in style transfer. | https://github.com/pytorch/examples/issues/278 | open | [
"question",
"good first issue"
] | 2018-01-03T08:26:09Z | 2022-03-10T05:55:02Z | 0 | ZhuFengdaaa |
pytorch/examples | 277 | ValueError: optimizer got an empty parameter list | Hi PyTorch Friends,
I'm trying to building customized layer by following the guide [Extending PyTorch Tutorial](http://pytorch.org/docs/master/notes/extending.html) and use the customized layers to replace the nn.Conv2d and nn.Linear layer in the official example of [mnist main.py](https://github.com/pytorch/exampl... | https://github.com/pytorch/examples/issues/277 | closed | [] | 2018-01-03T04:35:54Z | 2018-03-05T10:06:12Z | 1 | OpenBanboo |
pytorch/examples | 271 | Transfer Learning on DC-GAN | Are the models for the generator and discriminator trained on LSUN or imagenet dataset made public?. If they are made public, where can I download them from? | https://github.com/pytorch/examples/issues/271 | closed | [
"question"
] | 2017-12-19T06:26:09Z | 2022-03-10T02:41:26Z | 1 | brijml |
pytorch/tutorials | 189 | Tutorial about torch.distributions ? | https://github.com/pytorch/tutorials/issues/189 | closed | [] | 2017-12-18T15:57:51Z | 2021-06-16T21:41:33Z | 3 | zuoxingdong | |
huggingface/neuralcoref | 10 | what is the training data for this project? | is it the same to clark and manning paper? | https://github.com/huggingface/neuralcoref/issues/10 | closed | [] | 2017-12-04T22:16:52Z | 2017-12-19T01:40:18Z | null | xinyadu |
pytorch/tutorials | 176 | [Request] Tutorial on testing and improving data loading | Hi, I think pytorch is a great framework and I'm using it consistently in my work. As someone self-taught in machine learning, I sometimes have difficulty understanding how to solve bottlenecks in training, for example slow I/O. I get the idea, but I lack a general view of the topic.
I think it would be nice to have... | https://github.com/pytorch/tutorials/issues/176 | closed | [] | 2017-11-14T12:39:26Z | 2018-01-22T05:34:20Z | 1 | iacolippo |
pytorch/examples | 253 | Error For imagenet/main.py training with DistributedDataParallel(). | I got DistributedDataParallel() error.
I just fixed calling init_process_group() to pass rank like the below
dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url, rank = args.rank,
world_size=args.world_size)
$ CUDA_VISIBLE_DEVICES=0 python main.py /datase... | https://github.com/pytorch/examples/issues/253 | closed | [] | 2017-11-10T06:50:22Z | 2018-12-11T07:49:24Z | 2 | andrew-yang0722 |
pytorch/examples | 252 | mnist dataset(jpg format) load slow | I put different label of Mnist datasets in different folders, as is shown in attached figure.

.exp_()` be `std = logvar.exp_().pow(0.5)`?
sorry, I just realized...
| https://github.com/pytorch/examples/issues/240 | closed | [] | 2017-10-24T14:14:25Z | 2017-10-24T16:01:59Z | 0 | fedecarne |
pytorch/tutorials | 156 | Explain optimizer.zero_grad() | I think the call to [optimizer.zero_grad()](https://github.com/pytorch/tutorials/blob/master/beginner_source/examples_nn/two_layer_net_optim.py#L52) should be explained in the beginner tutorials. In particular:
* What is the point of this call?
* Why is not it made automatically?
Thanks! | https://github.com/pytorch/tutorials/issues/156 | closed | [] | 2017-10-11T12:32:06Z | 2018-01-22T08:22:20Z | 0 | Vayel |
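The short answer, sketched below: `.backward()` *accumulates* into `.grad` rather than overwriting it (which is what makes gradient accumulation across mini-batches possible), so the training loop must reset gradients explicitly each iteration.

```python
import torch
import torch.nn as nn

model = nn.Linear(2, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(8, 2)

model(x).sum().backward()
g1 = model.weight.grad.clone()
model(x).sum().backward()          # no zero_grad in between...
doubled = torch.allclose(model.weight.grad, 2 * g1)
print(doubled)  # True: gradients accumulated, not overwritten

opt.zero_grad()                    # hence the explicit reset each iteration
```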
pytorch/examples | 231 | As for the pretrained model in torchvision, what's the image channel RGB or BGR? | https://github.com/pytorch/examples/issues/231 | closed | [] | 2017-10-10T13:48:00Z | 2017-10-12T00:53:17Z | 2 | AlexHex7 | |
pytorch/tutorials | 147 | No module named 'torch.onnx' when following super_resolution_with_caffe2.html | I am following tutorial http://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html (Transfering a model from PyTorch to Caffe2 and Mobile using ONNX). At the beginning I get:
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-2-cabf174890ab> in <module>()
5 ... | https://github.com/pytorch/tutorials/issues/147 | closed | [] | 2017-09-28T20:17:10Z | 2017-11-08T12:58:48Z | 4 | liqunfu |
pytorch/text | 125 | what is the purpose of this project? | PyTorch already offers torch.utils.data.Dataset, so what is the purpose of torchtext?
What features does torchtext support? | https://github.com/pytorch/text/issues/125 | closed | [] | 2017-09-19T05:44:43Z | 2017-12-22T07:00:38Z | null | rabintang |
pytorch/pytorch | 2,557 | What is the Torch7 's nn.Add layer in PyTorch? | I find the torch.legacy.nn.Add layer, but it doesn't support autograd. Any other solutions? | https://github.com/pytorch/pytorch/issues/2557 | closed | [] | 2017-08-29T02:32:49Z | 2017-08-29T02:40:14Z | null | yytdfc |
pytorch/examples | 207 | how to finetune my own trained model on new datasets? | I have trained my own model. Now I want to use this trained model to initialize my new networks, or fine-tune this trained model on new datasets. Does anyone know how to do it? | https://github.com/pytorch/examples/issues/207 | closed | [] | 2017-08-24T09:01:19Z | 2017-08-24T09:53:38Z | 0 | visonpon |
pytorch/tutorials | 123 | Neural style transfer question | Hi, not sure if this is the right place to ask questions, but I'm working through the neural style transfer tutorial and am confused about something.
What is the purpose of the `backward` method in `ContentLoss` and `StyleLoss`?
If we remove the `backward` method, won't this work as well for the `closure` functio... | https://github.com/pytorch/tutorials/issues/123 | closed | [] | 2017-08-10T17:48:30Z | 2017-08-12T14:07:01Z | 2 | reiinakano |
pytorch/pytorch | 2,247 | what is exactly batch_size in pytorch? | Sorry, I'm new to this.
I am not sure if I understand right. In pytorch it says: batch_size (int, optional) - how many samples per batch to load (default: 1).
I know that batch size = the number of training examples in one forward/backward pass.
What does it mean that it says "how many **samples** per **batch** to l... | https://github.com/pytorch/pytorch/issues/2247 | closed | [] | 2017-07-30T04:38:06Z | 2017-07-31T07:38:59Z | null | isalirezag |
pytorch/pytorch | 2,227 | where is the torch.nn.NLLLoss ? | i want to find how NLLLoss calcuate the loss, but i can't find its code.
# loss
def nll_loss(input, target, weight=None, size_average=True, ignore_index=-100):
r"""The negative log likelihood loss.
See :class:`~torch.nn.NLLLoss` for details.
where is `~torch.nn.NLLLoss` ? | https://github.com/pytorch/pytorch/issues/2227 | closed | [] | 2017-07-28T08:35:34Z | 2022-07-26T18:28:32Z | null | susht3 |
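The arithmetic of `NLLLoss` lives in the C/C++ backend, which is why the Python source shows only the docstring. Behaviourally it just gathers −log_prob[target] (averaged by default), so `log_softmax` + `nll_loss` matches `cross_entropy`:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(5, 3)
target = torch.tensor([0, 2, 1, 1, 0])

# nll_loss picks out -log_prob[n, target[n]] and averages over the batch.
a = F.nll_loss(F.log_softmax(logits, dim=1), target)
b = F.cross_entropy(logits, target)  # fused equivalent

print(torch.allclose(a, b))  # True
```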
pytorch/examples | 187 | fast-neural-style uses mscoco but normalizes for imagenet mean | Documentation for `fast-neural-style` uses mscoco training dataset, but subtracts imagenet mean from image input data.
The effects are probably very minor, but anybody have the mean stats for mscoco? | https://github.com/pytorch/examples/issues/187 | closed | [] | 2017-07-21T09:18:40Z | 2017-07-24T01:20:02Z | 1 | twairball |
pytorch/tutorials | 116 | How to save the model in Classifying name tutorial? | I ran the tutorial 100% successfully and made some changes to the problem, fixing the sequence length to 10 with just 3 features. It is almost the same as the tutorial. I have successfully saved the model, but I have a problem when loading it.
```
import torch.nn as nn
from torch.autograd import Variable
class RNN(nn.Module):... | https://github.com/pytorch/tutorials/issues/116 | closed | [] | 2017-07-12T11:02:16Z | 2017-07-12T11:07:13Z | 1 | herleeyandi |
pytorch/examples | 178 | ImageNet Error | Hi,
I am trying to train the models on ImageNet following [this](https://github.com/pytorch/examples/tree/master/imagenet#training). However, I got no luck.
Does anyone know how to fix the following issue?
```shell
kwang@cdc-177:~/PyTorch/examples/imagenet$ CUDA_VISIBLE_DEVICES=1 python main.py -a resnet18 /i... | https://github.com/pytorch/examples/issues/178 | closed | [] | 2017-07-07T07:03:45Z | 2017-07-08T02:50:51Z | 1 | wk910930 |
pytorch/examples | 173 | imagenet example did not transfer input to gpu? | In the imagenet training code, `input` is not explicitly converted to cuda in these [lines](https://github.com/pytorch/examples/blob/master/imagenet/main.py#L163-L165).
I've noticed that the training loader has `pin_memory` flag as True. In fact, even if a tensor has called `pin_memory()`, it is still a `FloatTenso... | https://github.com/pytorch/examples/issues/173 | closed | [] | 2017-06-30T03:12:38Z | 2018-03-16T08:35:16Z | 2 | iammarvelous |
pytorch/tutorials | 101 | Regarding exercises in Character-Level RNN | I was wondering where I can find the dataset for the exercises given in Classifying Names with Character-Level RNN.
For example:
Any word -> language
First name -> gender
Character name -> writer
Page title -> blog or subreddit
To complete this task, do I have to create my own dataset or is there any repo where... | https://github.com/pytorch/tutorials/issues/101 | closed | [] | 2017-06-26T21:36:40Z | 2018-01-22T04:55:21Z | 1 | oya163 |
pytorch/examples | 170 | Potential speedup for DCGAN | In the dcgan example, while training the discriminator, why is backward called twice? First it's called on the real images, then the fake images.
Instead, shouldn't doing something like:
`totalError = real_loss + fake_loss ,
and then calling totalError.backward() `
save one whole backprop ?
Does doing it the wa... | https://github.com/pytorch/examples/issues/170 | closed | [] | 2017-06-16T05:47:31Z | 2017-10-04T15:02:47Z | 8 | harveyslash |
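Gradients are additive, so summing the two losses and calling backward once yields the same discriminator gradients; a sketch on a toy discriminator (whether it is actually faster is a separate question, since either way the real and fake batches flow through separate graphs):

```python
import torch
import torch.nn as nn

netD = nn.Linear(3, 1)
real, fake = torch.randn(4, 3), torch.randn(4, 3)

# Variant A: two backward calls (as in the dcgan example); grads accumulate.
netD(real).mean().backward()
netD(fake).mean().backward()
g_two = netD.weight.grad.clone()

netD.zero_grad()

# Variant B: one backward call on the summed loss.
(netD(real).mean() + netD(fake).mean()).backward()
print(torch.allclose(netD.weight.grad, g_two))  # True
```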
pytorch/tutorials | 98 | update beginner tutorial to most recent pytorch version? | This [beginner tutorial](http://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#sphx-glr-beginner-blitz-autograd-tutorial-py) uses `y.grad_fn` where, from googling around it seems like it should now use `y.creator`. The image is updated, but the text/code isn't.
Regardless, the tutorial should probably s... | https://github.com/pytorch/tutorials/issues/98 | closed | [] | 2017-06-15T02:50:43Z | 2017-06-15T19:48:14Z | 3 | erindb |
pytorch/examples | 168 | Regarding dimensions of mean and variance | Its a multivariate normal distribution in latent space and input space so mean(mu) and variance should be in multidimensional form(matrix) per distribution but your code is generating single value of mean and variance per distribution. So what is the math or implementation process behind it? | https://github.com/pytorch/examples/issues/168 | closed | [] | 2017-06-09T10:31:23Z | 2017-10-01T22:51:56Z | 1 | anindyasarkarIITH |
pytorch/examples | 166 | why input data is not copied to CUDA memory during training (only target) ? | In the ImageNet example, why is only the target copied to CUDA memory with target.cuda(async=True), with no input.cuda() in the training phase? | https://github.com/pytorch/examples/issues/166 | closed | [] | 2017-06-07T08:48:38Z | 2017-06-07T09:52:22Z | 1 | chahrazaddo |
pytorch/tutorials | 94 | blog tutorial and slides | Couldn't find you on twitter so raising this here.
I wrote a beginner's first steps blog and a presentation for the pydata london monthly meetup:
- [https://goo.gl/EmSfNk](https://goo.gl/EmSfNk)
- [http://makeyourownneuralnetwork.blogspot.co.uk/2017/05/learning-mnist-with-gpu-acceleration.html](http://makeyourow... | https://github.com/pytorch/tutorials/issues/94 | closed | [] | 2017-06-01T12:54:46Z | 2017-07-05T17:28:08Z | 1 | makeyourownneuralnetwork |
pytorch/examples | 163 | super_resolution model building question | class Net(nn.Module):
def __init__(self, upscale_factor):
super(Net, self).__init__()
self.relu = nn.ReLU()
self.conv1 = nn.Conv2d(1, 64, 5, 1, 2)
self.conv2 = nn.Conv2d(64, 64, 3, 1, 1)
self.conv3 = nn.Conv2d(64, 32, 3, 1, 1)
self.conv4 = nn.Conv2d(32, upsca... | https://github.com/pytorch/examples/issues/163 | closed | [
"question"
] | 2017-05-30T12:48:11Z | 2022-03-10T01:56:57Z | 1 | pageedward |
pytorch/examples | 162 | Request for examples on Recurrent Highway Networks (RHN) | Is it possible to use the existing torch.nn modules and implement RHNs? Would it make sense to have RHN as a separate module in torch.nn?
For reference, someone did raise this issue in pytorch/pytorch https://github.com/pytorch/pytorch/issues/516 | https://github.com/pytorch/examples/issues/162 | closed | [] | 2017-05-30T05:49:20Z | 2022-03-10T01:56:13Z | 2 | sanyam5 |
pytorch/tutorials | 89 | is the grad value wrong in beginner_source/blitz/autograd_tutorial.py line 92? | in line 92: `z_i = 3(x_i+2)^2` and `z_i\bigr\rvert_{x_i=1} = 27`.
I think `z_i\bigr\rvert_{x_i=1} = 6(x_i+2)\rvert_{x_i=1} = 6*(1+2) = 18`; please correct me if I am wrong, otherwise I will submit a pull request.
| https://github.com/pytorch/tutorials/issues/89 | closed | [] | 2017-05-25T23:59:47Z | 2017-05-29T17:02:53Z | 2 | ningzhou |
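The two numbers measure different things, which autograd can confirm: at x_i = 1 the *value* of z_i = 3(x_i+2)² is 27, while its *gradient* 6(x_i+2) is 18.

```python
import torch

x = torch.tensor(1.0, requires_grad=True)
z = 3 * (x + 2) ** 2
z.backward()

print(z.item())       # 27.0  (the value the tutorial reports)
print(x.grad.item())  # 18.0  (the derivative dz/dx at x = 1)
```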
pytorch/examples | 158 | Shapes in SNLI | Looking over the SNLI example, something seems off to me. I hope I'm just missing something. First, a batch is embedded and, from the docs, I understand that Embedding layers output the shape `(N, W, D)` where N is the batch size and W is the sequence length. This is passed to the Encoder where it extracts the batch_si... | https://github.com/pytorch/examples/issues/158 | closed | [
"question",
"nlp"
] | 2017-05-07T02:32:11Z | 2022-03-10T03:19:09Z | 2 | neverfox |
pytorch/examples | 157 | two lines of code in mnist/main.py | There are two arguments called batch_size and test_batch_size:
`parser.add_argument('--batch-size', type=int, default=64, metavar='N',
help='input batch size for training (default: 64)')`
`parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
help='inp... | https://github.com/pytorch/examples/issues/157 | closed | [] | 2017-05-04T07:40:05Z | 2020-10-10T02:22:56Z | 0 | iamabug |
pytorch/tutorials | 77 | Slowdown in DQN RL Tutorial | After about 5 episodes on latest master build of Pytorch, the time to execute each step t in the main loop slows way down. I tried a pip install of Pytorch as well to test if it was just my version and same thing. I am on OSX with no cuda. Is slowdown normal? I don't see anything in the optimization step that should re... | https://github.com/pytorch/tutorials/issues/77 | closed | [] | 2017-04-26T21:58:41Z | 2018-01-22T04:54:10Z | 1 | lbollar |
pytorch/pytorch | 1,344 | What the function is about element-wise product(Hadamard product) in pytorch? | https://github.com/pytorch/pytorch/issues/1344 | closed | [] | 2017-04-24T10:59:08Z | 2017-04-24T13:07:50Z | null | stevenhanjun | |
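For the record, the element-wise (Hadamard) product is `torch.mul` or the `*` operator, as opposed to `torch.mm`/`torch.matmul` for matrix products:

```python
import torch

a = torch.tensor([[1., 2.],
                  [3., 4.]])
b = torch.tensor([[10., 20.],
                  [30., 40.]])

hadamard = a * b          # same as torch.mul(a, b)
print(hadamard)           # [[10., 40.], [90., 160.]]
print(torch.mm(a, b))     # matrix product, for contrast: [[70., 100.], [150., 220.]]
```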
pytorch/examples | 147 | imagenet example training gets slower over time. | It seems that as I do training, the per batch time gets slower and slower.
For example, when I run `CUDA_VISIBLE_DEVICES=0 python main.py -a alexnet --lr 0.01 --workers 22 /ssd/cv_datasets/ILSVRC2015/Data/CLS-LOC`.
Initially I get an average per batch time of about 0.25s
After several batches, I get 0.5s.
... | https://github.com/pytorch/examples/issues/147 | closed | [] | 2017-04-20T19:27:35Z | 2019-05-03T09:09:49Z | 10 | zym1010 |
pytorch/examples | 144 | why treating Alexnet/VGG differently in ImageNet example? | in <https://github.com/pytorch/examples/blob/master/imagenet/main.py#L68-L72>, it seems that special care has to be taken when wrapping the module with `DataParallel`. Why is this the case? Also, I don't understand why for AlexNet and VGG, `features` is wrapped, yet `classifier` is not. | https://github.com/pytorch/examples/issues/144 | closed | [] | 2017-04-16T04:26:33Z | 2020-01-08T00:27:23Z | 6 | zym1010 |
pytorch/examples | 142 | action.reinforce(reward) | What does "action.reinforce(reward)" mean? Does it mean gradient descent?
[image]
| https://github.com/pytorch/examples/issues/142 | closed | [] | 2017-04-14T07:35:47Z | 2017-04-14T11:54:32Z | 1 | susht3 |
pytorch/examples | 137 | How To Correctly Kill MultiProcesses During Multi-GPU Training | During the training of using examples/imagenet/main.py, I used the following command:
CUDA_VISIBLE_DEVICES=0,1,2,3 nohup python main.py [options] path/to/imagenetdir 1>a.log 2>a.err &
Then it starts 5 processes in the system, 1 main process appears in nvidia-smi.
Most of the Time (90% of the time) after I ... | https://github.com/pytorch/examples/issues/137 | closed | [] | 2017-04-10T07:36:38Z | 2022-03-09T21:27:41Z | 1 | catalystfrank |
pytorch/examples | 126 | ImageNet example is falling apart in multiple ways | I am experimenting with Soumith's ImageNet example, but it is crashing or deadlocking in three different ways. I have added a bunch of "print" statements to it to figure out where it is crashing, and here is the GIST of full script: (as you can see, there are almost no significant modifications to the original code.) ... | https://github.com/pytorch/examples/issues/126 | closed | [] | 2017-03-28T01:07:36Z | 2017-03-28T01:08:39Z | 1 | FuriouslyCurious |
pytorch/examples | 116 | why is detach necessary | Hi, I am wondering why detach is necessary in this line:
https://github.com/pytorch/examples/blob/a60bd4e261afc091004ea3cf582d0ad3b2e01259/dcgan/main.py#L230
I understand that we want to update the gradients of netD without changin the ones of netG. But if the optimizer is only using the parameters of netD, then on... | https://github.com/pytorch/examples/issues/116 | closed | [] | 2017-03-20T22:12:36Z | 2022-04-16T07:20:21Z | 17 | rogertrullo |
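One concrete consequence, sketched on toy modules: the optimizer does only step netD's parameters, but without `.detach()` the backward pass still traverses the generator, spending compute and writing into netG's `.grad` buffers (which would then leak into the next generator update unless zeroed).

```python
import torch
import torch.nn as nn

netG, netD = nn.Linear(2, 3), nn.Linear(3, 1)
fake = netG(torch.randn(4, 2))

# D step with detach: the graph is cut, nothing reaches netG.
netD(fake.detach()).mean().backward()
no_grad_to_G = netG.weight.grad is None
print(no_grad_to_G)                  # True

# Without detach, the same backward also populates netG's gradients.
netD(fake).mean().backward()
print(netG.weight.grad is not None)  # True
```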
pytorch/tutorials | 47 | Web page for Tutorials | Hi,
I've been working on beautifying/integrating all the tutorials on pytorch into one. see https://github.com/pytorch/pytorch/pull/778. These tutorials are based on [sphinx-gallery](http://sphinx-gallery.readthedocs.io) and tutorials are executed during build time.
I've created a [separate repo](https://github.... | https://github.com/pytorch/tutorials/issues/47 | closed | [] | 2017-03-14T12:37:35Z | 2017-04-14T18:46:27Z | 13 | chsasank |
pytorch/tutorials | 44 | Where is Variable? | In `Reinforcement (Q-)Learning with PyTorch2`, the section `Training hyperparameters and utilities` claim the cell providing `Variable` which is "a simple wrapper around torch.autograd". But I can't found it in the cell. Then I encounter `NameError: name 'Variable' is not defined`, anyway I import Variable from `torc... | https://github.com/pytorch/tutorials/issues/44 | closed | [] | 2017-03-06T04:57:01Z | 2019-12-02T12:42:11Z | null | yiyuezhuo |
pytorch/tutorials | 41 | Numerically unstable initialized values for uninitialized tensors? | I was trying to follow the tutorial when I noticed that if I just create an "uninitialized matrix", its values are not numerically stable. I guess since we will have to initialize the matrix later, it doesn't really matter, but I'm just wondering if this is intentional.
I'm running PyTorch with Anaconda Python 3.6, ... | https://github.com/pytorch/tutorials/issues/41 | closed | [] | 2017-02-27T03:17:09Z | 2017-02-27T03:38:54Z | 1 | r-luo
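The behavior described above is expected rather than a bug: `torch.empty` deliberately skips initialization, so its contents are whatever bytes happen to be in memory (sometimes huge or tiny values). A short sketch contrasting it with explicit initializers:

```python
import torch

# torch.empty returns arbitrary, uninitialized memory -- by design.
garbage = torch.empty(2, 3)   # values are unpredictable
zeros = torch.zeros(2, 3)     # well-defined contents
randn = torch.randn(2, 3)     # a typical way to actually initialize

assert zeros.sum().item() == 0.0
assert garbage.shape == zeros.shape == randn.shape
```

Since the tensor will be overwritten before use anyway, the "unstable" values are harmless; use `torch.zeros`/`torch.randn` when the initial contents matter.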
pytorch/tutorials | 26 | Training on GPU in deep learning notebook - inputs/labels need cuda() | In working through the deep learning notebook, it's not obvious at first how to get the learning working once you put the net on the GPU.
After some trial and error, this worked
inputs, labels = Variable(inputs).cuda(), Variable(labels).cuda()
I could make a PR with this addition if desired | https://github.com/pytorch/tutorials/issues/26 | closed | [] | 2017-02-04T21:38:56Z | 2017-05-23T16:37:32Z | 3 | gojira |
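The fix quoted above predates the `Variable`/`Tensor` merge; the modern idiom is a single `torch.device` plus `.to(device)` for both the model and each batch. A minimal sketch (toy model and batch, assuming a current PyTorch):

```python
import torch
import torch.nn as nn

# Pick the GPU when available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

net = nn.Linear(8, 2).to(device)          # move the model once
inputs = torch.randn(4, 8).to(device)     # move every batch the same way
labels = torch.randint(0, 2, (4,)).to(device)

loss = nn.functional.cross_entropy(net(inputs), labels)
assert loss.device.type == device.type
```

The key point from the issue still holds: the model and the data must live on the same device, or the forward pass raises a device-mismatch error.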
pytorch/tutorials | 15 | Are there any fine-tuning tutorials? | Fine-tuning is very easy in Torch and Caffe, but I can't find how to fine-tune in PyTorch. Are there any fine-tuning examples or tutorials? | https://github.com/pytorch/tutorials/issues/15 | closed | [] | 2017-01-22T09:06:53Z | 2017-10-31T07:24:56Z | 9 | Teaonly
pytorch/tutorials | 14 | Potential improvement to 60 minute blitz for pasteability? | Hello! I'm very much a newbie to this:
https://github.com/pytorch/tutorials/blob/master/Deep%20Learning%20with%20PyTorch.ipynb
I followed this guide with Anaconda 3.5 and got to this point: `out = net(input)`
I got a NotImplementedError from the original nn module that the class was supposed to override.
Turn... | https://github.com/pytorch/tutorials/issues/14 | closed | [] | 2017-01-22T02:29:52Z | 2017-01-22T03:05:36Z | 1 | youanden |
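The `NotImplementedError` above is what `nn.Module` raises when a subclass does not define a method named exactly `forward`. A minimal sketch of the working pattern:

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(3, 1)

    def forward(self, x):          # must be spelled "forward"
        return self.fc(x)

# Calling the module object dispatches to forward() via __call__.
out = Net()(torch.randn(2, 3))
assert out.shape == (2, 1)
```

A typo in the method name (or pasting the class body without the `forward` definition) leaves the base class's stub in place, which is exactly the error reported.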
pytorch/tutorials | 7 | Feature Request: tutorial on loading datasets | A tutorial outlining how to make use of the `torch.utils.data.Dataset` and `torch.utils.data.DataLoader` on your own data (not just the `torchvision.datasets`) would be good. The documentation page is quite obscure, and it is not entirely clear how these can be made use of on your own data.
Also outlining what woul... | https://github.com/pytorch/tutorials/issues/7 | closed | [
"enhancement"
] | 2017-01-19T11:08:21Z | 2023-05-26T20:43:34Z | 8 | ronrest |
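A minimal custom `Dataset` along the lines requested above: implement `__len__` and `__getitem__` over your own data, then hand the dataset to `DataLoader` for batching and shuffling. The class and tensors here are illustrative, not from any tutorial:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class PairDataset(Dataset):
    """Wraps any indexable features/labels pair."""
    def __init__(self, xs, ys):
        self.xs, self.ys = xs, ys

    def __len__(self):
        return len(self.xs)

    def __getitem__(self, i):
        return self.xs[i], self.ys[i]

ds = PairDataset(torch.randn(10, 3), torch.arange(10))
loader = DataLoader(ds, batch_size=4, shuffle=False)

first_x, first_y = next(iter(loader))
assert first_x.shape == (4, 3) and first_y.tolist() == [0, 1, 2, 3]
```

The same two methods are all `DataLoader` needs, whether the items come from tensors in memory, files on disk, or a database.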
pytorch/tutorials | 5 | Initialize with t7 files? | If I trained a model with Torch and stored the weights using t7 format. Is it possible to use this as initialization in pytorch? Thank you. | https://github.com/pytorch/tutorials/issues/5 | closed | [] | 2017-01-18T18:50:08Z | 2017-01-18T19:20:07Z | 2 | Yuliang-Zou |
vllm-project/vllm | 31,787 | [Usage]: How to set different attention backend for prefill and decode phases? | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Alibaba Cloud Linux 3 (Soaring Falcon) (x86_64)
GCC version : (GCC) 10.2.1 20200825 (Alibaba 10.2.1-3.8 2.32)
Clan... | https://github.com/vllm-project/vllm/issues/31787 | open | [
"usage"
] | 2026-01-06T07:33:18Z | 2026-01-06T07:33:18Z | 0 | stormchasingg |
sgl-project/sglang | 16,546 | [RFC] SGLang-Omni Design | API Design: @shuaills
Proposal Draft: @FrankLeeeee @sleepcoo
## Motivation
Recent models, no matter open-source or proprietary, have the tendency to become more multi-modal than ever before. That is, models have the ability to process data in more than two modalities. For example, Gemini can have inputs of text, i... | https://github.com/sgl-project/sglang/issues/16546 | open | [] | 2026-01-06T06:23:37Z | 2026-01-06T07:14:36Z | 0 | FrankLeeeee |
vllm-project/vllm | 31,766 | [Docs] Feedback for `/en/latest/contributing/profiling/` | ### 📚 The doc issue
When I follow this doc and run the [OpenAI Server](https://docs.vllm.ai/en/latest/contributing/profiling/#openai-server) steps, I get:
> usage: vllm [-h] [-v] {chat,complete,serve,bench,collect-env,run-batch} ...
> vllm: error: unrecognized arguments: --profiler-config {"profiler": "torch", "torch_profil... | https://github.com/vllm-project/vllm/issues/31766 | open | [
"documentation"
] | 2026-01-06T03:15:37Z | 2026-01-06T03:15:37Z | 0 | cyk2018 |
huggingface/tokenizers | 1,926 | [bug] Why is development for Apple computers with Intel chips not supported in versions above 0.30.0? | Why is development for Apple computers with Intel chips not supported in versions above 0.30.0? | https://github.com/huggingface/tokenizers/issues/1926 | open | [] | 2026-01-06T03:11:35Z | 2026-01-06T03:18:03Z | 1 | sustly
sgl-project/sglang | 16,530 | [Bug] DecodingStage VRAM usage surges dramatically | ### Checklist
- [ ] I searched related issues but found no solution.
- [ ] The bug persists in the latest version.
- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [ ] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/16530 | open | [] | 2026-01-06T02:15:16Z | 2026-01-06T02:15:16Z | 0 | carloszhang999 |
huggingface/lerobot | 2,753 | Debugging poor eval with SmoVLA and two cameras. | ### Ticket Type
❓ Technical Question
### Environment & System Info
```Shell
- Lerobot running on a Jetson Orin nano Super
- Model trained on a 4090
- SO-ARM-101 model.
- two cameras setup (wrist and top view)
```
### Description
I just trained a 30K steps SmoVLA model from a 73 episodes dataset (which are a 2 merg... | https://github.com/huggingface/lerobot/issues/2753 | open | [
"question",
"policies",
"dataset",
"sensors",
"training",
"evaluation"
] | 2026-01-05T18:25:13Z | 2026-01-05T18:25:27Z | null | vettorazi |
vllm-project/vllm | 31,726 | [Usage]: Why does `vllm serve` keep filling up my system disk when loading a model from a network mount? |
### Your current environment
```
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could n... | https://github.com/vllm-project/vllm/issues/31726 | open | [
"usage"
] | 2026-01-05T14:50:19Z | 2026-01-05T15:30:39Z | 5 | tingjun-cs |
huggingface/diffusers | 12,913 | Is Lumina2Pipeline's mu calculation correct? | ### Describe the bug
Description
While reviewing the current main-branch implementation of pipeline_lumina2, I noticed a potential bug in the calculation of mu within the pipeline's __call__.
In the following section of the code:
https://github.com/huggingface/diffusers/blob/5ffb65803d0ddc5e3298c35df638ceed5e580922... | https://github.com/huggingface/diffusers/issues/12913 | open | [
"bug"
] | 2026-01-05T14:30:01Z | 2026-01-05T18:07:36Z | 1 | hwangdonghyun |
vllm-project/vllm | 31,689 | [Feature][Quantization][Help Wanted]: Clean up GPTQ + AWQ Quantization | ### 🚀 The feature, motivation and pitch
We are in the process of cleaning up the quantization integrations in vllm (see the FusedMoE refactor PRs I am working on).
In general, this means we are trying to separate concerns of the quantization INTEGRATION (on disk format --- responsible for weight loading) from the quantiz... | https://github.com/vllm-project/vllm/issues/31689 | open | [
"help wanted",
"feature request"
] | 2026-01-04T20:56:04Z | 2026-01-06T04:42:19Z | 7 | robertgshaw2-redhat |
vllm-project/vllm | 31,683 | [Feature]: Error Logging Redesign | ### 🚀 The feature, motivation and pitch
vLLM has a multiprocess architecture with:
- API Server --> EngineCore --> [N] Workers
As a result, clean error message logging is challenging, since the error in the API server that occurs will often not be the root cause error. An example of this is at startup time:
```
(vl... | https://github.com/vllm-project/vllm/issues/31683 | open | [
"help wanted",
"feature request"
] | 2026-01-04T14:53:38Z | 2026-01-04T14:53:43Z | 0 | robertgshaw2-redhat |
sgl-project/sglang | 16,362 | [Bug] DeepSeek-V3.2 detects EOS when reasoning | ### Checklist
- [x] I searched related issues but found no solution.
- [x] The bug persists in the latest version.
- [x] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [x] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/16362 | open | [] | 2026-01-04T02:43:14Z | 2026-01-04T02:43:14Z | 0 | duzeyan |
vllm-project/vllm | 31,646 | [Usage]: How can I use GPU12 as standalone KV LMCache? | ### Your current environment
```text
Collecting environment information...
uv is set
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version ... | https://github.com/vllm-project/vllm/issues/31646 | open | [
"usage"
] | 2026-01-03T13:25:41Z | 2026-01-03T13:25:41Z | 0 | joshuakoh1 |
vllm-project/vllm | 31,624 | [Bug]: ModelOpt Llama-4 Checkpoints Take 5+ minutes to load | ### 🚀 The feature, motivation and pitch
In working on some MoE refactors, I discovered that Llama-4 with ModelOpt takes 5+ minutes to load weights, even from the CPU page cache.
- https://huggingface.co/nvidia/Llama-4-Scout-17B-16E-Instruct-FP8
The root cause is basically this hack logic to load the state dict that ModelOpt us... | https://github.com/vllm-project/vllm/issues/31624 | open | [
"bug",
"help wanted",
"good first issue",
"feature request"
] | 2026-01-02T15:18:14Z | 2026-01-06T02:42:32Z | 6 | robertgshaw2-redhat |
huggingface/lerobot | 2,741 | XVLA: Clarification on provided lerobot/xvla-base model checkpoint and documentation | ### Ticket Type
❓ Technical Question
### Environment & System Info
```Shell
```
### Description
Dear lerobot-Team,
I hope you had a good start to 2026; thanks for the great work on making X-VLA natively available via lerobot.
I have a few questions regarding the _lerobot/xvla-base_ checkpoint and the inform... | https://github.com/huggingface/lerobot/issues/2741 | open | [
"documentation",
"question",
"policies",
"dataset",
"training"
] | 2026-01-02T08:38:03Z | 2026-01-04T15:54:55Z | null | gianlucageraci |
huggingface/datasets | 7,927 | Using Stateful Dataloader with Split Dataset By Node and DCP for DDP | ### Describe the bug
I am trying to determine how to save and load the Stateful Dataloader State with DCP and Split Dataset by Node for DDP.
Currently, I am running into the issue where I am receiving a slow resume.
```
Neither dataset nor iter(dataset) defines state_dict/load_state_dict so we are naively fast-forwar... | https://github.com/huggingface/datasets/issues/7927 | open | [] | 2026-01-01T22:27:07Z | 2026-01-02T02:48:21Z | 2 | conceptofmind |
vllm-project/vllm | 31,609 | [Bug][ModelOpt]: FlashInfer CUTLASS MoE Accuracy Degraded (Llama4) | ### Your current environment
H100, B200 ---> vllm 0.13.0
### 🐛 Describe the bug
- running the following:
```bash
# modelopt
MODEL_TENSOR := "nvidia/Llama-4-Scout-17B-16E-Instruct-FP8"
GPUS := "2"
PORT := "8001"
# sm90 / sm100
launch_cutlass_tensor:
VLLM_USE_DEEP_GEMM=0 VLLM_USE_FLASHINFER_MOE_FP8=1 VLLM_FLASH... | https://github.com/vllm-project/vllm/issues/31609 | closed | [
"bug",
"help wanted"
] | 2026-01-01T21:45:48Z | 2026-01-03T20:26:38Z | 2 | robertgshaw2-redhat |
huggingface/trl | 4,766 | Asynchronous generation and training for GRPO? | ### Feature request
GRPOTrainer should send requests for the next batch to the vLLM server while it is computing backpropagation, in order to reduce idle time for both the server's GPUs and the trainer's GPUs.
### Motivation
Under the current GRPO trainer, generation and backpropagation are sequential, meaning that lots of runtime a... | https://github.com/huggingface/trl/issues/4766 | open | [] | 2026-01-01T08:42:12Z | 2026-01-01T08:42:12Z | 0 | sxndqc |
vllm-project/vllm | 31,574 | [Usage]: Does vLLM support loading a LoRA adapter and DeepSeek-V3.1-Terminus at the same time? | ### Your current environment
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : ... | https://github.com/vllm-project/vllm/issues/31574 | open | [
"usage"
] | 2025-12-31T10:33:52Z | 2026-01-01T07:09:51Z | 1 | AIR-hl |
sgl-project/sglang | 16,220 | GLM pd disaggregation with mtp | did glm support pd disaggregation and mtp? i try to test,but the accept len in log is always 1(failed to predict everytime) and performance is bad.i use the start command below,is there something wrong?
args for prefill node :
SGLANG_ENABLE_SPEC_V2=1 SGLANG_DISAGGREGATION_QUEUE_SIZE=1 SGLANG_DISAGGREGATION_THREAD_POO... | https://github.com/sgl-project/sglang/issues/16220 | open | [] | 2025-12-31T10:19:04Z | 2026-01-04T01:52:56Z | 1 | dongliangwu |