| instruction | input | output | source |
|---|---|---|---|
How to run Pytorch model in normal non-parallel way? | I am going through this script, and there is a code block which takes 2 options into account, DataParallel and DistributedDataParallel here:
if not args.distributed:
if args.arch.startswith('alexnet') or args.arch.startswith('vgg'):
model.features = torch.nn.DataParallel(model.features)
model.cuda(... |
DataParallel is a wrapper object to parallelize the computation on multiple GPUs of the same machine, see here.
DistributedDataParallel is also a wrapper object that lets you distribute the data on multiple devices, see here.
If you don't want it, you can simply remove the wrapper and use the model as it is:
if n... | https://stackoverflow.com/questions/45858524/ |
Spyder just "kernel died,restarting" when I run net.forward in pytorch | I am using the Python IDE Spyder 3.2.1 in anaconda2, with python 2.7, on ubuntu 14.04
code is just simple as follows:
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
"""
input:[batch_size,in_channel,height,width]
kernel:[out_channel,in_channel,kh,kw]
""... | (Spyder developer here) I don't know why this is happening, but I opened an issue in our issue tracker so we don't forget to take a look at it in the future.
Update: This problem was solved by the OP by updating PyTorch.
| https://stackoverflow.com/questions/45875837/ |
Pytorch install with anaconda error | I get this error:
C:\Users>conda install pytorch torchvision -c soumith
Fetching package metadata .............
PackageNotFoundError: Package missing in current win-64 channels:
- pytorch
I got conda install pytorch torchvision -c soumith from Pytorch official website and I have OSX/conda/3.6/none for settings... | Use the following commands to install pytorch on windows
for Windows 10 and Windows Server 2016, CUDA 8
conda install -c peterjc123 pytorch cuda80
for Windows 10 and Windows Server 2016, CUDA 9
conda install -c peterjc123 pytorch cuda90
for Windows 7/8/8.1 and Windows Server 2008/2012, CUDA 8
conda install -c p... | https://stackoverflow.com/questions/45906706/ |
How should I save the model of PyTorch if I want it loadable by OpenCV dnn module | I trained a simple classification model with PyTorch and loaded it with opencv3.3, but it throws an exception saying
OpenCV Error: The function/feature is not implemented (Unsupported Lua type) in readObject, file
/home/ramsus/Qt/3rdLibs/opencv/modules/dnn/src/torch/torch_importer.cpp,
line 797
/home/ramsus/Qt/3rdLibs/o... | I found the answer: opencv3.3 does not support PyTorch (https://github.com/pytorch/pytorch) but rather pytorch (https://github.com/hughperkins/pytorch). It was a big surprise; I never knew another version of pytorch existed (it looks like a dead project that has not been updated in a long time). I hope they could mention which pytorch the... | https://stackoverflow.com/questions/45929573/ |
Why does autograd not produce gradient for intermediate variables? | trying to wrap my head around how gradients are represented and how autograd works:
import torch
from torch.autograd import Variable
x = Variable(torch.Tensor([2]), requires_grad=True)
y = x * x
z = y * y
z.backward()
print(x.grad)
#Variable containing:
#32
#[torch.FloatTensor of size 1]
print(y.grad)
#None
Why ... |
By default, gradients are only retained for leaf variables. Non-leaf variables' gradients are not retained for later inspection. This was
done by design, to save memory.
-soumith chintala
See: https://discuss.pytorch.org/t/why-cant-i-see-grad-of-an-intermediate-variable/94
Option 1:
Call y.retain_grad()
x =... | https://stackoverflow.com/questions/45988168/ |
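Option 1 can be sketched in modern PyTorch (plain tensors instead of Variable, an API change introduced in 0.4) as:

```python
import torch

x = torch.tensor([2.0], requires_grad=True)
y = x * x          # non-leaf: its gradient is discarded by default
y.retain_grad()    # opt in to keeping it
z = y * y
z.backward()

print(x.grad)  # tensor([32.])  since dz/dx = 4x^3
print(y.grad)  # tensor([8.])   since dz/dy = 2y
```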
pytorch inception model outputs the wrong label for every input image | For the pytorch models I found this tutorial explaining how to classify an image. I tried to apply the same procedure for an inception model. However, the model fails for every image I load in
Code:
# some people need these three lines to make it work
#from torchvision.models.inception import model_urls
#name = 'incept... | I found out that one needs to call model.eval() before applying the model. Because of the batch normalisations and also dropout layers, the model bahaves differently for training and testing.
| https://stackoverflow.com/questions/46005286/ |
How to do scatter and gather operations in numpy? | I want to implement scatter and gather operations of Tensorflow or PyTorch in Numpy. I have been scratching my head for a while. Any pointers are greatly appreciated!
PyTorch Scatter
PyTorch Gather
| The scatter method turned out to be way more work than I anticipated. I did not find any ready made function in NumPy for it. I'm sharing it here in the interest of anyone who may need to implement it with NumPy.
(p.s. self is the destination or output of the method.)
def scatter_numpy(self, dim, index, src):
"""... | https://stackoverflow.com/questions/46065873/ |
what is the equivalent of theano.tensor.clip in pytorch? | I want to clip my tensor (not gradient) values to some range. Is there any function in pytorch like there is a function theano.tensor.clip() in theano?
| The function you are searching for is called torch.clamp. You can find the documentation here
| https://stackoverflow.com/questions/46093577/ |
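For illustration (the values here are made up):

```python
import torch

t = torch.tensor([-2.0, 0.5, 3.0])
# clamp every element into the range [-1, 1]
clipped = torch.clamp(t, min=-1.0, max=1.0)
print(clipped)  # tensor([-1.0000, 0.5000, 1.0000])
```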
How do I write a PyTorch sequential model? | How do I write a sequential model in PyTorch, just like what we can do with Keras? I tried:
import torch
import torch.nn as nn
net = nn.Sequential()
net.add(nn.Linear(3, 4))
net.add(nn.Sigmoid())
net.add(nn.Linear(4, 1))
net.add(nn.Sigmoid())
net.float()
But I get the error:
AttributeError: 'Sequential' object has n... | As described by the correct answer, this is what it would look as a sequence of arguments:
device = torch.device('cpu')
if torch.cuda.is_available():
device = torch.device('cuda')
net = nn.Sequential(
nn.Linear(3, 4),
nn.Sigmoid(),
nn.Linear(4, 1),
nn.Sigmoid()
).to(device)
print(ne... | https://stackoverflow.com/questions/46141690/ |
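The original error comes from the method name: nn.Sequential has no add(), but it does have add_module(), so a Keras-style incremental construction also works, e.g.:

```python
import torch
import torch.nn as nn

net = nn.Sequential()
net.add_module('fc1', nn.Linear(3, 4))
net.add_module('sig1', nn.Sigmoid())
net.add_module('fc2', nn.Linear(4, 1))
net.add_module('sig2', nn.Sigmoid())

out = net(torch.randn(2, 3))   # two samples of 3 features -> shape (2, 1)
```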
Pytorch Pre-trained RESNET18 Model | I have trained a pre-trained RESNET18 model in pytorch and saved it. While testing, the model gives different accuracy for different mini-batch sizes. Does anyone know why?
| Yes, I think so.
RESNET contains batch normalisation layers. At evaluation time you need to fix these; otherwise the running means are continuously adjusted after processing each batch, hence giving you different accuracy.
Try setting:
model.eval()
before evaluation. Note before getting back into training, c... | https://stackoverflow.com/questions/46167566/ |
How to train pytorch model with numpy data and batch size? | I am learning the basics of pytorch and thought to create a simple 4-layer neural network with dropout to train on the IRIS dataset for classification. After referring to many tutorials I wrote this code.
import pandas as pd
from sklearn.datasets import load_iris
import torch
from torch.autograd import Variable
epochs=300
ba... |
I want to set a batch size of 20. How should I do this?
For data processing and loading, PyTorch provides two classes. One is Dataset, which is used to represent your dataset. Specifically, Dataset provides the interface to get one sample from the whole dataset using the sample index.
But Dataset is not enough, fo... | https://stackoverflow.com/questions/46170814/ |
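A minimal sketch of the Dataset/DataLoader pattern with batch_size=20, using random stand-in data of IRIS-like shape (150 samples, 4 features, 3 classes):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

X = torch.randn(150, 4)          # stand-in features
y = torch.randint(0, 3, (150,))  # stand-in class labels

loader = DataLoader(TensorDataset(X, y), batch_size=20, shuffle=True)

for xb, yb in loader:
    # each batch has up to 20 samples; the last one is smaller (150 = 7*20 + 10)
    pass
```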
How does pytorch compute the gradients for a simple linear regression model? | I am using pytorch and trying to understand how a simple linear regression model works.
I'm using a simple LinearRegressionModel class:
class LinearRegressionModel(nn.Module):
def __init__(self, input_dim, output_dim):
super(LinearRegressionModel, self).__init__()
self.linear = nn.Linear(input_di... | PyTorch has this concept of tensors and variables. When you use nn.Linear, the function creates two variables, namely W and b. In pytorch a variable is a wrapper that encapsulates a tensor, its gradient and information about the function that created it. You can directly access the gradients by
w.grad
When you try it before cal... | https://stackoverflow.com/questions/46278124/ |
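A small sketch of inspecting those gradients (the module and sizes here are illustrative, not from the question):

```python
import torch
import torch.nn as nn

lin = nn.Linear(2, 1)          # creates lin.weight (W) and lin.bias (b)
print(lin.weight.grad)         # None: no backward pass has run yet

loss = lin(torch.randn(3, 2)).sum()
loss.backward()
print(lin.weight.grad.shape)   # same shape as the weight itself
```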
Static graph is fast. Dynamic graph is slow. Is there any specific benchmark demonstrating this? | I've seen some benchmarks comparing tensorflow and pytorch. Tensorflow may be faster, but it seems not that much faster, and is sometimes even slower.
Is there any benchmark specifically testing static graphs against dynamic graphs, demonstrating that a static graph is much faster than a dynamic graph?
| To be more precise, the speed benefit comes from "deferred execution with graph rewriting."
It's typically associated with explicit graph frameworks (Theano/TF), but with enough engineering you could add it to execution models like numpy/PyTorch which don't have explicit graph. See Bohrium for an example of hacking n... | https://stackoverflow.com/questions/46312238/ |
When to use individual optimizers in PyTorch? | The example given here uses two optimizers for encoder and decoder individually. Why? And when to do like that?
If you have multiple networks (in the sense of multiple objects that inherit from nn.Module), you have to do this for a simple reason: when constructing a torch.optim.Optimizer object, it takes the parameters which should be optimized as an argument. In your case:
encoder_optimizer = optim.Adam(encoder.parameters()... | https://stackoverflow.com/questions/46377599/ |
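A sketch with two toy modules (the encoder/decoder here are hypothetical stand-ins for the tutorial's networks); note that if both networks should share hyperparameters, you can also chain their parameters into a single optimizer:

```python
import itertools
import torch.nn as nn
import torch.optim as optim

encoder = nn.Linear(4, 2)   # stand-in modules
decoder = nn.Linear(2, 4)

# separate optimizers, as in the tutorial (allows e.g. different learning rates):
encoder_optimizer = optim.Adam(encoder.parameters(), lr=1e-3)
decoder_optimizer = optim.Adam(decoder.parameters(), lr=1e-4)

# or one optimizer over both parameter sets:
joint_optimizer = optim.Adam(
    itertools.chain(encoder.parameters(), decoder.parameters()), lr=1e-3)
```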
n*m*3 input image to an nxm label in PyTorch | The input to my network is an RGB image with dimensions n*m*3; how can I get the output to have dimensions of n*m?
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 20, kernel_size = 5)
self.conv2 = nn.Conv2d(20, 50, kernel_size = 3)
self.co... | If you want to reshape a Tensor into a different size but with the same number of elements, generally you can use torch.view.
For your case, there is an even simpler solution: torch.squeeze returns a Tensor with all dimensions of size 1 removed.
| https://stackoverflow.com/questions/46382732/ |
CUDNN error using pretrained vgg16 model | I am trying to extract the activations of the last layer in a VGG16 model.
For that end I used a decorator over the model as shown below.
When I pass a cuda tensor to the model I get a CUDNN_STATUS_INTERNAL_ERROR with the following traceback.
Anyone knows where I went wrong?
traceback:
File "/media/data1/iftachg/... | Apparently cudnn errors are extremely unhelpful and there was no problem with the code itself - it is simply the GPUs I was trying to access were already in use.
| https://stackoverflow.com/questions/46391244/ |
Running a portion of Python code in parallel on two different GPUs | I have a PyTorch script similar to the following:
# Loading data
train_loader, test_loader = someDataLoaderFunction()
# Define the architecture
model = ResNet18()
model = model.cuda()
# Get method from program argument
method = args.method
# Training
train(method, model, train_loader, test_loader)
In order to r... | I guess the easiest way would be to fix the seeds as below.
myseed=args.seed
np.random.seed(myseed)
torch.manual_seed(myseed)
torch.cuda.manual_seed(myseed)
This should force the data loaders to get the same samples every time. The parallel way is to use multithreading but I hardly see it worth the hassle for the pr... | https://stackoverflow.com/questions/46438877/ |
NumPy/PyTorch extract subsets of images | In Numpy, given a stack of large images A of size (N,hl,wl), and coordinates x of size (N) and y of size (N), I want to get smaller images of size (N,16,16)
In a for loop it would look like this:
B=numpy.zeros((N,16,16))
for i in range(0,N):
B[i,:,:]=A[i,y[i]:y[i]+16,x[i]:x[i]+16]
But can I do this just with indexing... | Pretty simple really with view_as_windows from scikit-image, to get those sliding windowed views as a 6D array with the fourth axis being singleton. Then, use advanced-indexing to select the ones we want based off the y and x indices for indexing into the second and third axes of the windowed array to get our B.
Hence... | https://stackoverflow.com/questions/46450040/ |
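Without scikit-image, the same result can be obtained with plain NumPy broadcasting of index arrays (a sketch with made-up sizes and coordinates):

```python
import numpy as np

N, H, W, K = 4, 32, 32, 16
A = np.arange(N * H * W).reshape(N, H, W)
y = np.array([1, 2, 4, 8])          # top-left rows, one per image
x = np.array([0, 3, 5, 7])          # top-left cols, one per image

r = np.arange(K)
rows = y[:, None, None] + r[None, :, None]      # (N, K, 1)
cols = x[:, None, None] + r[None, None, :]      # (N, 1, K)
# broadcasting gives B[i, a, b] = A[i, y[i] + a, x[i] + b]
B = A[np.arange(N)[:, None, None], rows, cols]  # (N, K, K)
```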
error occurs when loading data in pytorch: 'Image' object has no attribute 'shape' | I am finetuning resnet152 using the code based on ImageNet training in PyTorch, and an error occurs when I load the data; it occurred only after handling several batches of images. How can I solve the problem?
Following code is the simplified code that produces the same error :
code
# Data loading code
normalize = t... | There's a bug in transforms.RandomSizedCrop.get_params(). In the last line of your error message, it should be img.size instead of img.shape.
The lines containing the bug will only be executed if the cropping failed for 10 consecutive times (where it falls back to central crop). That's why this error does not occur fo... | https://stackoverflow.com/questions/46482787/ |
Using Augmented Data Images in Testing | I am working on a Person Re-Identification problem and am showing the results using a CMC curve.
I used augmented data/Image along with the normal images (Currently training on CUHK01) in the training set. While testing if I don't use the augmented data along with my normal test images for Calculating Rank let's say Ra... | The reason behind using data augmentation is to reduce the chance of overfitting. This way you want to tell your model that the parameters (theta) are not correlated with the data that you are augmenting (alpha). That is achievable by augmenting each input by every possible alpha. But this is far from reality for a num... | https://stackoverflow.com/questions/46495978/ |
pytorch backprop through volatile variable error | I'm trying to minimize some input relative to some target by running it through several backward pass iterations and updating the input at each step. The first pass runs successfully but I get the following error on the second pass:
RuntimeError: element 0 of variables tuple is volatile
This code snippet demonstrate... | You must zero out the gradient before each update like this:
inp.grad.data.zero_()
But in your code every time you update the gradient you are creating another Variable object, so you must update entire history like this:
import torch
from torch.autograd import Variable
import torch.nn as nn
inp_hist = []
inp = Va... | https://stackoverflow.com/questions/46513276/ |
How to get output from a particular layer from pretrained CNN in pytorch | I have a pretrained CNN (ResNet18) on the imagenet dataset; now what I want is to get the output of my input image from a particular layer,
for example.
my input image is a FloatTensor(3, 224, 336) and I send a batch of size = 10 into my resnet model; now what I want is the output returned by model.layer4,
Now what I tried i...
resnet = torchvision.models.resnet18(pretrained=True)
f = torch.nn.Sequential(*list(resnet.children())[:6])
features = f(imgs)
| https://stackoverflow.com/questions/46513886/ |
How to use groups parameter in PyTorch conv2d function | I am trying to compute a per-channel gradient image in PyTorch. To do this, I want to perform a standard 2D convolution with a Sobel filter on each channel of an image. I am using the torch.nn.functional.conv2d function for this
In my minimum working example code below, I get an error:
import torch
import torch.nn.fu... | If you want to apply a per-channel convolution then your out-channel should be the same as your in-channel. This is expected, considering each of your input channels creates a separate output channel that it corresponds to.
In short, this will work
import torch
import torch.nn.functional as F
filters = torch.autogr... | https://stackoverflow.com/questions/46536971/ |
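A hedged sketch of the per-channel Sobel case with groups (the image is random and the sizes are illustrative):

```python
import torch
import torch.nn.functional as F

img = torch.randn(1, 3, 8, 8)            # (batch, channels, H, W)
sobel = torch.tensor([[-1., 0., 1.],
                      [-2., 0., 2.],
                      [-1., 0., 1.]])
# with groups=3, the weight shape is (out_channels, in_channels/groups, kH, kW)
weight = sobel.view(1, 1, 3, 3).repeat(3, 1, 1, 1)  # (3, 1, 3, 3)

grad_x = F.conv2d(img, weight, groups=3, padding=1)
print(grad_x.shape)  # torch.Size([1, 3, 8, 8]): one gradient map per input channel
```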
expected Double tensor (got Float tensor) in Pytorch | I want to create an nn.Module in Pytorch. I used the following code for a text-related problem (in fact I use GloVe 300d pre-trained embeddings and a weighted average of the words in a sentence to do classification).
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv1d(300, ... |
Error 1
RuntimeError: expected Double tensor (got Float tensor)
This happens when you pass a double tensor to the first conv1d function. Conv1d only works with float tensor.
Either do,
conv1.double() , or
model.double().
Which is what you have done, and that is correct.
Error 2
RuntimeError: Given inp... | https://stackoverflow.com/questions/46546217/ |
Jupyter Kernel crash/dies when use large Neural Network layer, any idea pls? | I am experimenting with an Autoencoder in Pytorch. It seems when I use a relatively large neural network layer, for instance nn.Linear(250*250, 40*40) as the first layer, the Jupyter kernel keeps crashing. When I use a smaller layer size, e.g. nn.Linear(250*250, 20*20), the Jupyter kernel is ok. Any idea how to fix this? So I can run la... | I have found the root cause. I am running a docker ubuntu image/package on windows. The memory setting was too low; when I increased the memory setting on docker, my ubuntu environment got more memory, and then I could run larger matrix operations.
| https://stackoverflow.com/questions/46624247/ |
PyTorch Broadcasting Failing. "Rules" Followed | I have a few PyTorch examples that confuse me and am hoping to clear it up.
Firstly, as per the PyTorch page, I would expect these example to work as do their numpy equivalents namely these. The first example is very intuitive. These are compatible for broadcasting:
Image (3d array): 256 x 256 x 3
Scale (1d array):... | Verify that you are using the correct version of pytorch. It must be 0.2.0, which is when broadcasting was introduced in pytorch.
In[2]: import torch
In[3]: torch.__version__
Out[3]: '0.2.0_4'
| https://stackoverflow.com/questions/46625967/ |
Porting PyTorch code from CPU to GPU | Following the tutorial from https://github.com/spro/practical-pytorch/blob/master/seq2seq-translation/seq2seq-translation.ipynb
There is a USE_CUDA flag that is used to control the variable and tensor types between CPU (when False) to GPU (when True) types.
Using the data from en-fr.tsv and converting the sentences t... | You can also try:
net = YouNetworkClass()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net.to(device)
After that, you have to send the word_inputs, encoder_hidden and decoder_context to the GPU too:
word_inputs, encoder_hidden, decoder_context = word_inputs.to(device), encoder_hidden.to(d... | https://stackoverflow.com/questions/46704352/ |
RNN is not training (PyTorch) | I can't figure out what I am doing wrong when training an RNN. I am trying to train an RNN for the AND operation on sequences (to learn how it works on a simple task).
But my network is not learning; the loss stays the same and it can't even overfit the model.
Can you please help me to find the problem?
Data I am using:
data = [
[1,... | The problem is this line:
out, hidden = net(Variable(tensor), Variable(hidden.data))
It should be simply
out, hidden = net(Variable(tensor), hidden)
By having Variable(hidden.data) here you are creating a new hidden_state variable (with all zeros) at every step, instead of passing the hidden state from the previous... | https://stackoverflow.com/questions/46705634/ |
What does the parameter retain_graph mean in the Variable's backward() method? | I'm going through the neural transfer pytorch tutorial and am confused about the use of retain_variable(deprecated, now referred to as retain_graph). The code example show:
class ContentLoss(nn.Module):
def __init__(self, target, weight):
super(ContentLoss, self).__init__()
self.target = target.de... | @cleros is pretty on the point about the use of retain_graph=True. In essence, it will retain any necessary information to calculate a certain variable, so that we can do backward pass on it.
An illustrative example
Suppose that we have the computation graph shown above. The variables d and e are the outputs, and a is the ... | https://stackoverflow.com/questions/46774641/ |
PyTorch: How to get the shape of a Tensor as a list of int | In numpy, V.shape gives a tuple of ints of dimensions of V.
In tensorflow V.get_shape().as_list() gives a list of integers of the dimensions of V.
In pytorch, V.size() gives a size object, but how do I convert it to ints?
| For PyTorch v1.0 and possibly above:
>>> import torch
>>> var = torch.tensor([[1,0], [0,1]])
# Using .size function, returns a torch.Size object.
>>> var.size()
torch.Size([2, 2])
>>> type(var.size())
<class 'torch.Size'>
# Similarly, using .shape
>>> var.shape
torc... | https://stackoverflow.com/questions/46826218/ |
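To finish the thought: wrapping either of those in list() gives the plain Python ints the question asks for:

```python
import torch

var = torch.tensor([[1, 0], [0, 1]])
dims = list(var.shape)   # [2, 2], a plain list of ints
assert all(isinstance(d, int) for d in dims)
```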
Pytorch to Keras code equivalence | Given a below code in PyTorch what would be the Keras equivalent?
class Network(nn.Module):
def __init__(self, state_size, action_size):
super(Network, self).__init__()
# Inputs = 5, Outputs = 3, Hidden = 30
self.fc1 = nn.Linear(5, 30)
self.fc2 = nn.Linear(30, 3)
def forward(... | None of them looks correct according to my knowledge. A correct Keras equivalent code would be:
model = Sequential()
model.add(Dense(30, input_shape=(5,), activation='relu'))
model.add(Dense(3))
model.add(Dense(30, input_shape=(5,), activation='relu'))
Model will take as input arrays of shape (*, 5) and outpu... | https://stackoverflow.com/questions/46866763/ |
Indexing one PyTorch tensor by another using index_select | I have a 3 x 3 PyTorch LongTensor that looks something like this:
A =
[0, 0, 0]
[1, 2, 2]
[1, 2, 3]
I want to us it to index a 4 x 2 FloatTensor like this one:
B =
[0.4, 0.5]
[1.2, 1.4]
[0.8, 1.9]
[2.4, 2.9]
My intended output is the 2 x 3 x 3 FloatTensor below:
C[0,:,:] =
[0.4... | Using index_select() requires that the indexing values are in a vector rather than a tensor. But as long as that is formatted correctly, the function handles the broadcasting for you. The last thing that must be done is reshaping the output, I believe due to the broadcasting.
The one-liner that will do this operation ... | https://stackoverflow.com/questions/46876131/ |
How to input a matrix to CNN in pytorch | I'm very new to pytorch and I want to figure out how to input a matrix rather than an image into a CNN.
I have tried it in the following way, but some errors occur.
I define my dataset as following:
class FrameDataSet(tud.Dataset):
def __init__(self, data):
targets = data['class'].values.tolist()
featur... | Your input probably missing one dimension. It should be:
(batch_size, channels, width, height)
If you only have one element in the batch, the tensor have to be in your case
e.g (1, 1, 28, 28)
because your first conv2d-layer expected a 1-channel input.
| https://stackoverflow.com/questions/46901973/ |
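For completeness, unsqueeze() is the usual way to add the missing batch and channel dimensions (the sizes here are illustrative):

```python
import torch

m = torch.randn(28, 28)          # a bare matrix
m = m.unsqueeze(0).unsqueeze(0)  # now (1, 1, 28, 28): batch and channel dims added
print(m.shape)
```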
Why detach needs to be called on variable in this example? | I was going through this example - https://github.com/pytorch/examples/blob/master/dcgan/main.py and I have a basic question.
fake = netG(noise)
label = Variable(label.fill_(fake_label))
output = netD(fake.detach()) # detach to avoid training G on these labels
errD_fake = criterion(output, label)
errD_fake.backward()
... | ORIGINAL ANSWER (WRONG / INCOMPLETE)
You're right, optimizerD only updates netD and the gradients on netG are not used before netG.zero_grad() is called, so detaching is not necessary, it just saves time, because you're not computing gradients for the generator.
You're basically also answering your other question you... | https://stackoverflow.com/questions/46944629/ |
sequence to sequence learning for language translation, what about unseen words | sequence to sequence learning is a powerful mechanism for language translation, especially using it locally in a context specific case.
I am following this pytorch tutorial for the task.
However, the tutorial did not split the data into training and testing.
You might think it's not a big deal, just split it up, use o... | Sequence to sequence model performance strongly depends on the number of unique words in the vocabulary. Each unique word has to be encountered a number of times in the training set, so that the model can learn its correct usage. Words that appear only a few times cannot be used by the model, as the model can't learn enough information about su... | https://stackoverflow.com/questions/46962582/ |
A proper way to adjust input size of CNN (e.g. VGG) | I want to train VGG on 128x128-sized images. I don't want to rescale them to 224x224 to save GPU-memory and training time. What would be the proper way to do so?
| The best way is to keep the convolutional part as it is and replace the fully connected layers. This way it is even possible to take pretrained weights for the convolutional part of the network. The fully connected layers must be randomly initialized. This way one can finetune a network with a smaller input size.
Here... | https://stackoverflow.com/questions/46963372/ |
deep reinforcement learning parameters and training time for a simple game | I want to learn how deep reinforcement algorithm works and what time it takes to train itself for any given environment.
I came up with a very simple example of environment:
There is a counter which holds an integer between 0 to 100.
counting to 100 is its goal.
there is one parameter direction whose value can be +1... | In Reinforcement Learning the underlining reward function is what defines the game. Different reward functions lead to different games with different optimal strategies.
In your case there are a few different possibilities:
Give +1 for reaching 100 and only then.
Give +1 for reaching 100 and -0.001 for every time s... | https://stackoverflow.com/questions/46979986/ |
pip to install Pytorch for usr/bin/python | I have set up Malmo on my mac and got into a trouble. I am using Malmo-0.17.0-Mac-64bit, python 2.7, macOS Sierra.
As explained in the Malmo installation instructions, mac users are required to run python scripts with the default mac python located at /usr/bin/python. I am aiming to implement a neural net with Py...
| https://stackoverflow.com/questions/46991226/ |
Neural network in pytorch predict two binary variables | Suppose I want to have the general neural network architecture:
---> BinaryOutput_A
/
Input --> nnLayer -
\
---> BinaryOutput_B
An input is put through a neural network layer which then goes to predict two binary variables (i... | It turns out that torch is actually really smart, and you can just calculate the total loss as:
loss = 0
loss += loss_function(pred_A, label_A)
loss += loss_function(pred_B, label_B)
loss.backward()
and the error will be properly backpropagated through the network. No torch.cat() needed or anything.
| https://stackoverflow.com/questions/47016302/ |
Cross entropy loss in pytorch nn.CrossEntropyLoss() | maybe someone is able to help me here. I am trying to compute the cross entropy loss of a given output of my network
print output
Variable containing:
1.00000e-02 *
-2.2739 2.9964 -7.8353 7.4667 4.6921 0.1391 0.6118 5.2227 6.2540
-7.3584
[torch.FloatTensor of size 1x10]
and the desired label, which is of... | Please check this code
import torch
import torch.nn as nn
from torch.autograd import Variable
output = Variable(torch.rand(1,10))
target = Variable(torch.LongTensor([1]))
criterion = nn.CrossEntropyLoss()
loss = criterion(output, target)
print(loss)
This will print out the loss nicely:
Variable containing:
2.4... | https://stackoverflow.com/questions/47065172/ |
Elegant way to compare to torch.FloatTensor on GPU | I try to compare two torch.FloatTensor (with only one entry) lying on GPU like this:
if (FloatTensor_A > FloatTensor_B): do something
The problem is, that (FloatTensor_A > FloatTensor_B) gives ByteTensor back. Is there a way to do boolean comparison between these two scalar FloatTensors, without loading the te... | The comparison operations in PyTorch return ByteTensors (see docs). In order to convert your result back to a float datatype, you can call .float() on your result. For example:
(t1 > t2).float()
(t1 > t2) will return a ByteTensor.
The inputs to the operation must be on the same memory (either CPU or GPU). The... | https://stackoverflow.com/questions/47096651/ |
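On newer PyTorch versions, another option is .item() on the one-element result: the comparison still runs on the GPU, and only the single resulting value is transferred. A sketch (CPU tensors shown so the snippet runs anywhere; in the question they would be .cuda() tensors):

```python
import torch

a = torch.tensor([3.0])
b = torch.tensor([1.0])

bigger = (a > b).item()   # a single Python value from the one-element result
if bigger:
    result = 'a is larger'
else:
    result = 'a is not larger'
```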
How do I use a ByteTensor in a contrastive cosine loss function? | I'm trying to implement the loss function in http://anthology.aclweb.org/W16-1617 in PyTorch. It is shown as follows:
I've implemented the loss as follows:
class CosineContrastiveLoss(nn.Module):
"""
Cosine contrastive loss function.
Based on: http://anthology.aclweb.org/W16-1617
Maintain 0 for matc... | Maybe you can try float() method on the variable directly?
Variable(torch.zeros(5)).float() - works for me, for instance
| https://stackoverflow.com/questions/47107589/ |
Upsampling in Semantic Segmentation | I am trying to implement a paper on Semantic Segmentation and I am confused about how to Upsample the prediction map produced by my segmentation network to match the input image size.
For example, I am using a variant of Resnet101 as the segmentation network (as used by the paper). With this network structure, an inp... | Maybe the simpliest thing you can try is:
upsample 8 times. Then your 41x41 input turns into 328x328
perform center cropping to get your desired shape 321x321 (for instance, something like this input[3:,3:,:-4,:-4])
| https://stackoverflow.com/questions/47117302/ |
how to calculate loss over a number of images and then back propagate the average loss and update network weight | I am doing a task where the batch size is 1, i.e, each batch contains only 1 image. So I have to do manual batching: when the number of accumulated losses reaches a number, average the loss and then do back propagation.
My original code is:
real_batchsize = 200
for epoch in range(1, 5):
net.train()
total_l... |
The problem you encounter here is related to how PyTorch accumulates gradients over different passes. (see here for another post on a similar question)
So let's have a look at what happens when you have code of the following form:
loss_total = Variable(torch.zeros(1).cuda(), requires_grad=True)
for l in (loss_func(x1... | https://stackoverflow.com/questions/47120126/ |
Calculating input and output size for Conv2d in PyTorch for image classification | I'm trying to run the PyTorch tutorial on CIFAR10 image classification here - http://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py
I've made a small change and I'm using a different dataset. I have images from the Wikiart dataset that I want to classify by artis... | You have to shape your input to this format (Batch, Number Channels, height, width).
Currently you have the format (B,H,W,C) (4, 32, 32, 3), so you need to swap the 4th and 2nd axes to shape your data as (B,C,H,W).
You can do it this way:
inputs, labels = Variable(inputs), Variable(labels)
inputs = inputs.transpose(1,3)
..... | https://stackoverflow.com/questions/47128044/ |
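A hedged note: permute makes the intent clearer than transpose(1,3), which maps (B,H,W,C) to (B,C,W,H); the two are indistinguishable here only because the images are square:

```python
import torch

batch = torch.randn(4, 32, 32, 3)   # (B, H, W, C)
nchw = batch.permute(0, 3, 1, 2)    # (B, C, H, W), the layout Conv2d expects
print(nchw.shape)                   # torch.Size([4, 3, 32, 32])
```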
Error reading images with pandas (+pyTorch,scikit) | I'm trying to read images to work with a CNN, but I'm getting a pandas error while trying to load the images.
This is some of the code (omitted imports and irrelevant nn class for clarity):
file_name = "annotation.csv"
image_files = pd.read_csv(file_name)
class SimpsonsDataset(Dataset):
def __init__(self, csv_fi... | Your variable image_files is a pandas DataFrame, since it holds the return value of pd.read_csv(), which returns a DataFrame. Try deleting the line
image_files = pd.read_csv(file_name)
and changing the last line to this:
simpsons = SimpsonsDataset(csv_file=file_name, root_dir="folder/")
| https://stackoverflow.com/questions/47145683/ |
pytorch: Convert a tuple of FloatTensors into a numpy array | I just got a torchvision.datasets object with MNIST data
train_dataset= dsets.MNIST(root='./data',train=True,transform=transforms.ToTensor(),download=True)
I want to convert this tuple into a set of numpy arrays of shape 60000x28x28 and labels of 60000
I know that the form that the data is provided, can be directly... | Maybe something like this will work for you:
train_dataset.train_data.numpy() #contains (60000, 28, 28) numpy array
train_dataset.train_labels.numpy() # contains labels
| https://stackoverflow.com/questions/47146768/ |
Pytorch-tutorial: Strange input argument in class definition | I'm reading through some pytorch tutorials. Below is the definition of a residual block. However, in the forward method each function handle only takes one argument (out), while in the __init__ function these functions have different numbers of input arguments:
# Residual Block
class ResidualBlock(nn.Module):
    def __ini... | You define the layers in the __init__ function, i.e. the parameters. In the forward function you only pass in the data that needs to be processed with the predefined settings from __init__. nn.whatever builds a function with the settings you pass to it. Then this function can be used in forward and this function only t...
Pytorch Tensor help in LongTensor | a=[1,2,3];
context_var = autograd.Variable(torch.LongTensor(a))
This is giving an error
RuntimeError: tried to construct a tensor from a int sequence, but found an item of type numpy.int32 at index
I am not able to figure out how to get over this.
| Your code works perfectly fine in recent versions of PyTorch. But for older versions, you can convert the numpy array to a list using the .tolist() method as follows to get rid of the error.
a=[1,2,3];
context_var = autograd.Variable(torch.LongTensor(a.tolist()))
| https://stackoverflow.com/questions/47194626/ |
How to do fully connected batch norm in PyTorch? | torch.nn has classes BatchNorm1d, BatchNorm2d, BatchNorm3d, but it doesn't have a fully connected BatchNorm class? What is the standard way of doing normal Batch Norm in PyTorch?
| Ok. I figured it out. BatchNorm1d can also handle Rank-2 tensors, thus it is possible to use BatchNorm1d for the normal fully-connected case.
So for example:
import torch.nn as nn
class Policy(nn.Module):
def __init__(self, num_inputs, action_space, hidden_size1=256, hidden_size2=128):
super(Policy, self).__ini... | https://stackoverflow.com/questions/47197885/ |
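A minimal runnable check of the point above: BatchNorm1d accepts a rank-2 (batch, features) input directly, so it covers the fully-connected case. The sizes here are arbitrary, and a modern PyTorch API (no Variable wrapper) is assumed:

```python
import torch
import torch.nn as nn

# BatchNorm1d normalizes each feature over the batch dimension of a
# (N, C) input, so it works after fully-connected layers out of the box.
bn = nn.BatchNorm1d(128)          # 128 features
x = torch.randn(16, 128)          # batch of 16
out = bn(x)
print(out.shape)                  # torch.Size([16, 128])
```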
Embedding 3D data in Pytorch | I want to implement character-level embedding.
This is usual word embedding.
Word Embedding
Input: [ [‘who’, ‘is’, ‘this’] ]
-> [ [3, 8, 2] ] # (batch_size, sentence_len)
-> // Embedding(Input)
# (batch_size, seq_len, embedding_dim)
This is what i want to do.
Character Embedding
Input: [ [ [‘w’, ‘h’... | I am assuming you have a 3d tensor of shape BxSxW where:
B = Batch size
S = Sentence length
W = Word length
And you have declared embedding layer as follows.
self.embedding = nn.Embedding(dict_size, emsize)
Where:
dict_size = No. of unique characters in the training corpus
emsize = Expected size of embeddings
... | https://stackoverflow.com/questions/47205762/ |
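A hedged sketch of the character-embedding idea from the answer above, assuming a recent PyTorch (nn.Embedding accepts indices of any shape, but the flatten/restore pattern shown also works on versions that only take 2D input); all sizes are illustrative:

```python
import torch
import torch.nn as nn

batch, sent_len, word_len, emsize = 2, 3, 5, 4
dict_size = 26  # e.g. number of unique characters

embedding = nn.Embedding(dict_size, emsize)
chars = torch.randint(0, dict_size, (batch, sent_len, word_len))  # LongTensor indices

# Flatten words out of the batch/sentence dims, embed, then restore:
flat = chars.view(-1, word_len)                      # (B*S, W)
emb = embedding(flat)                                # (B*S, W, emsize)
emb = emb.view(batch, sent_len, word_len, emsize)    # (B, S, W, emsize)
print(emb.shape)
```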
How to get the value of a feature in a layer that matches a key in the state dict in PyTorch? | I have a CNN, and I want to fetch the value of an intermediate layer corresponding to some key from the state dict.
How could this be done?
Thanks.
| I think you need to create a new class that redefines the forward pass through a given model. However, most probably you will need to adapt the code to the architecture of your model. You can find an example here:
class extract_layers():
def __init__(self, model, target_layer):
self.model = model
... | https://stackoverflow.com/questions/47260715/ |
Pytorch FloatClassNLLCriterion_updateOutput error | I am getting the following error message while computing the loss of my neural net: TypeError: FloatClassNLLCriterion_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, !torch.FloatTensor!, torch.FloatTensor, bool, NoneType, torch.FloatTensor, int), but expected (int state, torch.F... | The target tensor you are passing to your loss function is of type Float Tensor. That's happening here:
one_hot_target = autograd.Variable(torch.Tensor(one_hot_target))
That is, because the default Tensor type in PyTorch is FloatTensor.
However, the NLLLoss() function is expecting a LongTensor as target. You can doub... | https://stackoverflow.com/questions/47267293/ |
How to generate a new Tensor with different vectors in PyTorch? | I want to generate a new a○b vector from a and b (○ means element-wise multiply). My code is below, but the performance looks bad because of the for loop. Is there any efficient way?
a = torch.rand(batch_size, a_len, hid_dim)
b = torch.rand(batch_size, b_len, hid_dim)
# a_elmwise_mul_b = torch.zeros(batch_size, a_len, b_len, hi... | From your problem definition, it looks like you want to multiply two tensors, say A and B of shape AxE and BxE, and want to get a tensor of shape AxBxE. That means you want to multiply each row of tensor A with the whole tensor B. If that is correct, then we don't call it element-wise multiplication.
You can accomplish yo... | https://stackoverflow.com/questions/47267433/ |
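One way to get the AxBxE result described above is broadcasting via unsqueeze; this is a sketch with arbitrary sizes, assuming a PyTorch version with broadcasting support:

```python
import torch

A, B, E = 4, 6, 8
a = torch.randn(A, E)
b = torch.randn(B, E)

# Broadcast a against b: (A, 1, E) * (1, B, E) -> (A, B, E),
# i.e. every row of `a` is multiplied element-wise with every row of `b`.
out = a.unsqueeze(1) * b.unsqueeze(0)
print(out.shape)  # torch.Size([4, 6, 8])
```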
Assign torch.cuda.FloatTensor | I'd like to know how I can do the following code, but now using PyTorch,
where dtype = torch.cuda.FloatTensor. Here's the straight Python code (using numpy):
import numpy as np
import random as rand
xmax, xmin = 5, -5
pop = 30
x = (xmax-xmin)*rand.random(pop,1)
y = x**2
[minz, indexmin] = np.amin(y), np.argmin(y) ... | The above code works fine in pytorch 0.2. Let me analyze your code so that you can identify the problem.
x= (xmax-xmin)*torch.rand(pop, 1).type(dtype)+xmin
y = fit(x)
Here, x and y is a 2d tensor of shape 30x1. In the next line:
[miny, indexmin] = torch.min(y,0)
The returned tensor miny is a 2d tensor of shape 3... | https://stackoverflow.com/questions/47267612/ |
Assign variable in pytorch | I'd like to know if it is possible to do the following code, but now using PyTorch, where dtype = torch.cuda.FloatTensor. Here's the straight Python code (using numpy). Basically I want to get the value of x that produces the min value of fitness.
import numpy as np
import random as rand
xmax, xmin = 5, -5
pop ... | You can simply do as follows.
import torch
dtype = torch.cuda.FloatTensor
def main():
pop, xmax, xmin = 30, 5, -5
x = (xmax-xmin)*torch.rand(pop, 1).type(dtype)+xmin
y = torch.pow(x, 2)
[miny, indexmin] = y.min(0)
best = x.squeeze()[indexmin] # squeeze x ... | https://stackoverflow.com/questions/47271631/ |
Add a constant variable to a cuda.FloatTensor | I have two questions:
1) I'd like to know how I can add/subtract a constant torch.FloatTensor of size 1 to/from all of the elements of a torch.FloatTensor of size 30.
2) How can I multiply each element of a torch.FloatTensor of size 30 by a random value (different or not for each)?
My code:
import torch
dtype = t... |
1) How can I add/subtract a constant torch.FloatTensor of size 1 to all of the elements of a torch.FloatTensor of size 30?
You can do it directly in pytorch 0.2.
import torch
a = torch.randn(30)
b = torch.randn(1)
print(a-b)
In case if you get any error due to size mismatch, you can make a small change as foll... | https://stackoverflow.com/questions/47273868/ |
How to `dot` weights to batch data in PyTorch? | I have batch data and want to apply dot() to the data. W is a trainable parameter.
How to dot between batch data and weights?
hid_dim = 32
data = torch.randn(10, 2, 3, hid_dim)
data = data.view(10, 2*3, hid_dim)
W = torch.randn(hid_dim) # assume trainable parameters via nn.Parameter
result = torch.bmm(data, W).squeeze() # er... | Expand W tensor to match the shape of data tensor. The following should work.
hid_dim = 32
data = torch.randn(10, 2, 3, hid_dim)
data = data.view(10, 2*3, hid_dim)
W = torch.randn(hid_dim)
W = W.unsqueeze(0).unsqueeze(0).expand(*data.size())
result = torch.sum(data * W, 2)
result = result.view(10, 2, 3)
Edit: Your... | https://stackoverflow.com/questions/47274655/ |
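In recent PyTorch versions the same per-position dot product can be written with matmul, which broadcasts a trailing vector; a hedged sketch using the shapes from the question:

```python
import torch

hid_dim = 32
data = torch.randn(10, 2, 3, hid_dim).view(10, 2 * 3, hid_dim)  # (10, 6, 32)
W = torch.randn(hid_dim)

# matmul broadcasts a (B, N, H) tensor against an (H,) vector,
# giving the dot product at every position directly.
result = torch.matmul(data, W)        # (10, 6)
result = result.view(10, 2, 3)
print(result.shape)
```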
Generating new images with PyTorch | I am studying GANs. I've completed one course, which gave me an example of a program that generates images based on examples inputted.
The example can be found here:
https://github.com/davidsonmizael/gan
So I decided to use that to generate new images based on a dataset of frontal photos of faces, but I am not hav... | The code from your example (https://github.com/davidsonmizael/gan) gave me the same noise as you show. The loss of the generator decreased way too quickly.
There were a few things buggy, I'm not even sure anymore what - but I guess it's easy to figure out the differences yourself. For a comparison, also have a look a... | https://stackoverflow.com/questions/47275395/ |
PyTorch loss function evaluation | I am trying to optimise a function (fit) using the acceleration of a GPU in PyTorch. This is the straight Python code, where I'm doing the evaluation of fit:
import numpy as np
...
for j in range(P):
e[:,j] = z - h[:,j];
fit[j] = 1/(sqrt(2*pi)*sigma*N)*np.sum(exp(-(e[:,j]**2)/(2*sigma**2)));
The dimension o... | I guess you are getting a size mismatch error at the following line.
e = z - h
In your example, z is a vector (2d tensor of shape Nx1) and h is a 2d tensor of shape NxP. So, you can't directly subtract h from z.
You can do the following to avoid size mismatch error.
e = z.expand(*h.size()) - h
Here, z.expand(*h.si... | https://stackoverflow.com/questions/47275847/ |
How to update a Tensor? | I've had some troubles updating a tensor using a previous one.
My problem: let's suppose that I have a tensor x1 [Nx1] and a new one calculated through the previous, x2 [Nx1]. Now I want to update the elements of x2 that are less than x1. I'm using dtype=torch.cuda.FloatTensor.
This is the straight code in Python:... | The code looks really similar to numpy:
idx = (x1 > x2)
x2[idx] = x1[idx]
Using some predefined arrays and printing x2:
x1 = torch.from_numpy(np.array([1, 2, 3, 4, 5])).float().cuda()
x2 = torch.from_numpy(np.array([3, 3, 3, 3, 3])).float().cuda()
3 3 3 4 5
[torch.cuda.FloatTensor of size 5 (GPU 0)]
... | https://stackoverflow.com/questions/47287858/ |
How to use if statement PyTorch using torch.FloatTensor | I am trying to use the if statement in my PyTorch code, using torch.FloatTensor as the data type, to speed it up on the GPU.
This is my code:
import torch
import time
def fitness(x):
return torch.pow(x, 2)
def velocity(v, gxbest, pxbest, pybest, x, pop):
return torch.rand(pop).type(dtype)*v + \
to... | If you look into the following simple example:
import torch
a = torch.LongTensor([1])
b = torch.LongTensor([5])
print(a > b)
Output:
0
[torch.ByteTensor of size 1]
Comparing tensors a and b results in a torch.ByteTensor which is obviously not equivalent to boolean. So, you can do the following.
print(a[0] ... | https://stackoverflow.com/questions/47290053/ |
Separate optimizer for discriminator and the rest of the model in adversarial setting | I have a model with the following components.
embedding layer
encoder
generator
discriminator
feed-forward neural network
I want to define two optimizers. One for the discriminator only and one for the rest. I am doing the following.
optimizers = []
model_params = chain(model.embedding.parameters(), model.encoder.... | No, because model.parameters() returns a generator. If you want to modify a generator like that, you have to convert it to a list anyway.
| https://stackoverflow.com/questions/47299549/ |
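The generator point above is easy to check directly; this sketch uses two small nn.Linear modules as stand-ins for the real sub-modules (the names are illustrative, not from the original model):

```python
import torch.nn as nn
from itertools import chain

embedding = nn.Linear(10, 5)   # stand-in for model.embedding
encoder = nn.Linear(5, 5)      # stand-in for model.encoder

# chain() returns a one-shot generator; wrap it in list() if you need to
# inspect or reuse the parameters, otherwise pass it straight to the
# optimizer, which consumes it once.
params = list(chain(embedding.parameters(), encoder.parameters()))
print(len(params))  # 4: two weight tensors and two bias tensors
```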
Fine Tuning dimensionality error | I am trying to use Resnet50 for an image classification problem. However, it shows an error that I could not fix.
RuntimeError: inconsistent tensor size, expected tensor [120 x 2048] and src [1000 x 2048] to have the same number of elements, but got 245760 and 2048000 elements respectively at /Users/soumith/code/builder/wh... | When you create your models.resnet50 with num_classes=num_breeds, the last layer is a fully connected layer from 2048 to num_classes (which is 120 in your case).
Having pretrained='imagenet' asks PyTorch to load all the corresponding weights into your network, but its last layer has 1000 classes, not 120. This ... | https://stackoverflow.com/questions/47311570/
Pytorch AssertionError: Torch not compiled with CUDA enabled | I am trying to run code from this repo. I have disabled cuda by changing lines 39/40 in main.py from
parser.add_argument('--type', default='torch.cuda.FloatTensor', help='type of tensor - e.g torch.cuda.HalfTensor')
to
parser.add_argument('--type', default='torch.FloatTensor', help='type of tensor - e.g torch.Half... | If you look into the data.py file, you can see the function:
def get_iterator(data, batch_size=32, max_length=30, shuffle=True, num_workers=4, pin_memory=True):
cap, vocab = data
return torch.utils.data.DataLoader(
cap,
batch_size=batch_size, shuffle=shuffle,
collate_fn=create_batches(v... | https://stackoverflow.com/questions/47312396/ |
Tensorflow: Hierarchical Softmax Implementation | I currently have text inputs represented by vectors, and I want to classify their categories. Because they are multi-level categories, I mean to use Hierarchical Softmax.
Example:
- Computer Science
- Machine Learning
- NLP
- Economics
- Maths
- Algebra
- Geometry
I don't know how to impl... | Finally, I changed to using PyTorch. It's easier and more straightforward than TensorFlow.
For anyone further interested in implementing HS, you can have a look at my sample instructions: https://gist.github.com/paduvi/588bc95c13e73c1e5110d4308e6291ab
For anyone who still wants a TensorFlow implementation, this one is f... | https://stackoverflow.com/questions/47313656/
ValueError: Floating point image RGB values must be in the 0..1 range. while using matplotlib | I want to visualize the weights of a layer of a neural network. I'm using pytorch.
import torch
import torchvision.models as models
from matplotlib import pyplot as plt
def plot_kernels(tensor, num_cols=6):
if not tensor.ndim==4:
raise Exception("assumes a 4D tensor")
if not tensor.shape[-1]==3:
... | It sounds as if you already know your values are not in that range. Yes, you must re-scale them to the range 0.0 - 1.0. I suggest that you want to retain visibility of negative vs positive, but that you let 0.5 be your new "neutral" point. Scale such that current 0.0 values map to 0.5, and your most extreme value (l... | https://stackoverflow.com/questions/47318871/ |
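A minimal sketch of that rescaling, assuming you want the symmetric mapping the answer suggests, where 0 lands exactly at the new "neutral" midpoint 0.5 (the function name is made up for illustration):

```python
import numpy as np

def rescale_for_imshow(t):
    # map [-m, m] to [0, 1] so negative weights fall below 0.5,
    # positive weights above it, and 0 lands exactly at 0.5
    m = np.abs(t).max()
    return t / (2 * m) + 0.5

weights = np.array([-2.0, -0.5, 0.0, 1.0, 2.0])
print(rescale_for_imshow(weights))  # values: 0.0, 0.375, 0.5, 0.75, 1.0
```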
Is it possible to check if elements of a tensor are out of boundaries? | Is it possible to check if elements of a tensor are out of boundaries using torch.cuda.FloatTensor in PyTorch (a GPU approach)?
Example (check limits):
for i in range(pop):
if (x[i]>xmax):
x[i]=xmax
elif (x[i]<xmin):
x[i]=xmin
I tried the following, but did not speed up:
idxmax = (x... | You can get a CPU copy of tensor x, do your operations and then push the tensor to GPU memory again.
x = x.cpu() # get the CPU copy
# do your operations
x = x.cuda() # move the object back to cuda memory
| https://stackoverflow.com/questions/47319704/ |
how pytorch nn.module save submodule | I have some questions about how PyTorch nn.Module works
import torch
import torch.nn as nn
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.sub_module = nn.Linear(10, 5)
self.value = 3
net = Net()
print(net.__dict__)
output
{'_modules': OrderedDict([('sub_mo... | I will try to keep it simple.
Every time you create a new attribute in the class Net, for instance self.sub_module = nn.Linear(10, 5), it calls the __setattr__ method of its parent class, in this case nn.Module. Then, inside the __setattr__ method, the parameters are stored in the dict they belong to. In this case, since nn.Linear...
torch.nn.LSTM runtime error | I'm trying to implement the structure from "Livelinet: A multimodal Deep Recurrent Neural Network to Predict Liveliness in Educational Videos".
For a brief explanation, I split a 10-second audio clip into ten 1-second audio clips and get a spectrogram (a picture) from each 1-second audio clip. Then I use a CNN to get a rep...
RuntimeError: save_for_backward can only save input or output tensors, but argument 0 doesn't satisfy this condition
generally indicates that you are passing a tensor or something else that cannot store history as an input into a module. In your case, your problem is that you return tensors in i... | https://stackoverflow.com/questions/47347098/ |
Rosenbrock function with D dimension using PyTorch | How can I implement the Rosenbrock function with D dimension, using PyTorch?
Create the variables, where D is the number of dimensions and N is the number of elements.
x = (xmax - xmin)*torch.rand(N,D).type(dtype) + xmin
Function :
Using straight Python I'd do something like this:
fit = 0
for i... | Numpy has the function roll which I think can be really helpful.
Unfortunately, I am not aware of any function similar to numpy.roll for pytorch.
In my attempt, x is a numpy array in the form DxN. First we use roll to move the items in the first dimension (axis=0) one position to the left. Like this, every time we com... | https://stackoverflow.com/questions/47350058/
How to construct a 3D Tensor where every 2D sub tensor is a diagonal matrix in PyTorch? | Consider I have 2D Tensor, index_in_batch * diag_ele.
How can I get a 3D Tensor index_in_batch * Matrix (who is a diagonal matrix, construct by drag_ele)?
torch.diag() constructs a diagonal matrix only when the input is 1D, and returns the diagonal elements when the input is 2D.
| import torch
a = torch.rand(2, 3)
print(a)
b = torch.eye(a.size(1))
c = a.unsqueeze(2).expand(*a.size(), a.size(1))
d = c * b
print(d)
Output
0.5938 0.5769 0.0555
0.9629 0.5343 0.2576
[torch.FloatTensor of size 2x3]
(0 ,.,.) =
0.5938 0.0000 0.0000
0.0000 0.5769 0.0000
0.0000 0.0000 0.0555
(1 ,... | https://stackoverflow.com/questions/47372508/ |
How to select indices over two dimensions in PyTorch? | Given a = torch.randn(3, 2, 4, 5), how can I select sub-tensors like (2, :, 0, :), (1, :, 1, :), (2, :, 2, :), (0, :, 3, :) (a resulting tensor of size (2, 4, 5) or (4, 2, 5))?
While a[2, :, 0, :] gives
0.5580 -0.0337 1.0048 -0.5044 0.6784
-1.6117 1.0084 1.1886 0.1278 0.3739
[torch.FloatTensor of size 2x5]
h... | Does it work for you?
import torch
a = torch.randn(3, 2, 4, 5)
print(a.size())
b = [a[2, :, 0, :], a[1, :, 1, :], a[2, :, 2, :], a[0, :, 3, :]]
b = torch.stack(b, 0)
print(b.size()) # torch.Size([4, 2, 5])
| https://stackoverflow.com/questions/47374172/ |
transfer learning [resnet18] using PyTorch. Dataset: Dog-Breed-Identification | I am trying to implement a transfer learning approach in PyTorch. This is the dataset that I am using: Dog-Breed
Here are the steps that I am following.
1. Load the data and read csv using pandas.
2. Resize the train images to (60, 60) and store them as a numpy array.
3. Apply stratification and split the train data into 7:1... | Your network is too deep for the size of images you are using (60x60). As you know, the CNN layers do produce smaller and smaller feature maps as the input image propagate through the layers. This is because you are not using padding.
The error you have simply says that the next layer is expecting 512 feature maps wi... | https://stackoverflow.com/questions/47403634/ |
`for` loop to a multi dimensional array in PyTorch | I want to implement Q&A systems with attention mechanism. I have two inputs; context and query which shapes are (batch_size, context_seq_len, embd_size) and (batch_size, query_seq_len, embd_size).
I am following the below paper.
Machine Comprehension Using Match-LSTM and Answer Pointer. https://arxiv.org/abs/160... | I think your code is fine. You can't avoid the loop: for i in range(T): because in equation (2) in the paper (https://openreview.net/pdf?id=B1-q5Pqxl), there is a hidden state coming from Match-LSTM cell which is involved in computing G_i and alpha_i vector and they are used to compute the input for next timestep of th... | https://stackoverflow.com/questions/47417159/ |
Taking subsets of a pytorch dataset | I have a network which I want to train on some dataset (as an example, say CIFAR10). I can create data loader object via
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, bat... | You can define a custom sampler for the dataset loader avoiding recreating the dataset (just creating a new loader for each different sampling).
class YourSampler(Sampler):
def __init__(self, mask):
self.mask = mask
def __iter__(self):
return (self.indices[i] for i in torch.nonzero(self.mask))... | https://stackoverflow.com/questions/47432168/ |
AttributeError on variable input of custom loss function in PyTorch | I've made a custom loss function to compute cross-entropy (CE) for a multi-output multi-label problem. Within the class, I want to set the target variable I'm feeding to not require a gradient. I do this within the forward function using a pre-defined function (taken from pytorch source code) outside the class.
de... | The problem is in the "target.float()" line, which converts your t Variable to a Tensor. You can directly use target without any problems in your CE calculation.
Also, I guess you do not really need "self.save_for_backward(ce_out)", as I guess you are defining an nn.Module class, which will take care of the backward pass internal... | https://stackoverflow.com/questions/47432905/
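To illustrate the target-type point from the answer above, here is a minimal check (assuming a recent PyTorch) that CrossEntropyLoss expects class indices in a LongTensor rather than floats or one-hot vectors:

```python
import torch
import torch.nn as nn

logits = torch.randn(4, 3)                 # batch of 4 samples, 3 classes
target = torch.LongTensor([0, 2, 1, 2])    # class indices, one per sample

# CrossEntropyLoss = LogSoftmax + NLLLoss; the target must be a
# LongTensor of class indices, not a FloatTensor.
loss = nn.CrossEntropyLoss()(logits, target)
print(loss)
```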
How can I feed a batch into LSTM without reordering them by length in PyTorch? | I am new to Pytorch and I have got into some trouble.
I want to build a rank-model to judge the similarity of question and its answers (including right answers and wrong answers). And I use LSTM as an encoder.
There are two LSTMs in my model and they share the weights. So my model’s input is two sequences (question a... | Maybe a starting point could be to use similar RNN wrapper as here https://github.com/facebookresearch/DrQA/blob/master/drqa/reader/layers.py#L20
You can encode question and asnwer separately (this module will pack and unpack internally and will take care of sorting) and then you can work on the encoded representation... | https://stackoverflow.com/questions/47433199/ |
torch.addmm received an invalid combination of arguments | On the official PyTorch webpage I saw the following code and outputs:
>> a = torch.randn(4, 4)
>> a
0.0692 0.3142 1.2513 -0.5428
0.9288 0.8552 -0.2073 0.6409
1.0695 -0.0101 -2.4507 -1.2230
0.7426 -0.7666 0.4862 -0.6628
torch.FloatTensor of size 4x4]
>>> torch.max(a, 1)
(
1.2513
0.928... | It just tells the index of the max element in your original tensor along the queried dimension.
E.g.
0.9477 1.0090 0.8348 -1.3513
-0.4861 1.2581 0.3972 1.5751
-1.2277 -0.6201 -1.0553 0.6069
0.1688 0.1373 0.6544 -0.7784
[torch.FloatTensor of size 4x4]
# torch.max(a, 1)
(
1.0090
1.5751
0.6069
0.6544
[torc... | https://stackoverflow.com/questions/47461625/ |
torch.pow does not work | I'm trying to create a custom loss function using PyTorch, and am running into a simple error.
When I try to use torch.pow to take the exponent of a PyTorch Variable, I get the following error message:
AttributeError: 'torch.LongTensor' object has no attribute 'pow'
In the python terminal, I created a simple Variab... | You need to initialize the tensor as floats, because pow always returns a Float.
import torch
from torch.autograd import Variable
import numpy as np
v = Variable(torch.from_numpy(np.array([1, 2, 3, 4], dtype="float32")))
torch.pow(v, 2)
You can cast it back to integers afterwards
torch.pow(v, 2).type(torch.LongTe... | https://stackoverflow.com/questions/47470152/ |
Print exact value of PyTorch tensor (floating point precision) | I'm trying to print torch.FloatTensor like:
a = torch.FloatTensor(3,3)
print(a)
This way I can get a value like:
0.0000e+00 0.0000e+00 3.2286e-41
1.2412e-40 1.2313e+00 1.6751e-37
2.6801e-36 3.5873e-41 9.4463e+21
But I want to get a more accurate value, like 10 decimal points:
0.1234567891e+01
With other python num... | You can set the precision options:
torch.set_printoptions(precision=10)
There are more formatting options on the documentation page, it is very similar to numpy's.
| https://stackoverflow.com/questions/47483733/ |
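A small runnable sketch of the precision option from the answer above (assuming a recent PyTorch; float32 only carries about 7 significant digits, which the longer repr makes visible):

```python
import torch

torch.set_printoptions(precision=10)
t = torch.FloatTensor([1.0 / 3.0])
shown = str(t)      # the repr now carries 10 digits after the decimal point
print(shown)

torch.set_printoptions(precision=4)   # restore the default when done
```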
AttributeError: 'CrossEntropyLoss' object has no attribute 'backward' | I am trying to train a very basic CNN on the CIFAR10 dataset and am getting the following error:
AttributeError: 'CrossEntropyLoss' object has no attribute 'backward'
criterion =nn.CrossEntropyLoss
optimizer=optim.SGD(net.parameters(),lr=0.001,momentum=0.9)
for epoch in range(2): # loop over the dataset multiple times
... | Issue resolved. My mistake, I was missing the parenthesis
criterion = nn.CrossEntropyLoss()
| https://stackoverflow.com/questions/47488598/ |
binarize input for pytorch | May I ask how to make data loaded in PyTorch become binarized once it is loaded?
For example, TensorFlow can do this through:
train_data = mnist.input_data.read_data_sets(data_directory, one_hot=True)
How can pytorch achieve the one_hot=True effect.
The data_loader I have now is:
torch.set_default_tensor_type('torch.... | The one-hot encoding idea is used for classification. It sounds like you are trying to create an autoencoder perhaps.
If you are creating an autoencoder then there is no need to round as BCELoss can handle values between 0 and 1. Note when training it is better not to apply the sigmoid and instead to use BCELossWithLo... | https://stackoverflow.com/questions/47493135/ |
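If you do want explicit one-hot targets (e.g. to mimic one_hot=True), a common sketch uses scatter_; the label values and class count here are illustrative:

```python
import torch

labels = torch.LongTensor([2, 0, 4])
num_classes = 5

# scatter_(dim=1, index, 1) writes a 1 into column `label` of each row
one_hot = torch.zeros(labels.size(0), num_classes)
one_hot.scatter_(1, labels.unsqueeze(1), 1)
print(one_hot)
```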
Implementing a tutorial for Tensorflow and SaaK Transform: Handwritten Digits Recognition | I am learning to work with Tensorflow. I have installed most of the libraries.
# load libs
import torch
import argparse
from torchvision import datasets, transforms
import matplotlib.pyplot as plt
import numpy as np
from data.datasets import MNIST
import torch.utils.data as data_utils
from sklearn.decomposition impor... | The dataset directory is placed next to main.py, so probably when you tried sudo -H pip install dataset you actually installed an irrelevant package.
So uninstall the wrong package and it will work:
sudo -H pip uninstall dataset
| https://stackoverflow.com/questions/47493423/ |
Load pytorch model from different working directory | I want to load and access a pretrained model from the directory outside where all files are saved. Here is how directory structure is:
-MyProject
----Model_checkpoint_and_scripts
------access_model.py
--run_model.py
--other files
When run_model.py calls access_model.py, it looks for model.py in the current working directory... | I was able to load it by adding the following lines:
here = os.path.dirname(os.path.abspath(__file__))
sys.path.append(here)
| https://stackoverflow.com/questions/47494314/ |
Why should the training label for the Generator in a GAN always be True? | I am currently learning deep learning, especially GANs.
I found simple GAN code on the website below.
https://medium.com/@devnag/generative-adversarial-networks-gans-in-50-lines-of-code-pytorch-e81b79659e3f
However, in the code, I don't understand why we always need to give the true label to the Generator, as below.
... | In this part of the code you are training G to fool D: G generates fake data and asks D whether it thinks it's real (true labels). D's gradients are then propagated all the way to G (this is possible as D's input was G's output) so that it will learn to better fool D in the next iteration.
The inputs of G are not t... | https://stackoverflow.com/questions/47499404/ |
Best way in Pytorch to upsample a Tensor and transform it to rgb? | For a nice output in Tensorboard I want to show a batch of input images, corresponding target masks and output masks in a grid.
Input images have a different size than the masks. Furthermore, the images are obviously RGB.
From a batch of e.g. 32 or 64 I only want to show the first 4 images.
After some fiddling around I c... | Maybe you can just convert your Tensors to numpy arrays (.data.cpu().numpy()) and use OpenCV to do the upsampling? The OpenCV implementation should be quite fast.
| https://stackoverflow.com/questions/47521393/ |
How to find and understand the autograd source code in PyTorch | I have a good understanding of the autograd algorithm, and I think I should learn about the source code in PyTorch. However, when I look at the project on GitHub, I am confused by the structure, because so many files include autograd. So which part is the most important core code of autograd?
|
Trying to understand the autograd Variable is probably the first thing you can do. From my understanding, autograd is only a name for the modules, which contain classes that enhance Variables with gradient and backward functions.
Be aware that a lot of the algorithm, e.g. back-prop through the graph, is hidden in compiled ... | https://stackoverflow.com/questions/47525820/
How can I give a name to each module in ModuleList? | I have the following component in my model:
feedfnn = []
for task_name, num_class in self.tasks:
if self.config.nonlinear_fc:
ffnn = nn.Sequential(OrderedDict([
('dropout1', nn.Dropout(self.config.dropout_fc)),
('dense1', nn.Linear(self.config.nhid * self.num_directions * 8, self.co... | That's straightforward.
Simply start with an empty ModuleList and then use add_module on it. For example,
import torch.nn as nn
from collections import OrderedDict
final_module_list = nn.ModuleList()
a_sequential_module_with_names = nn.Sequential(OrderedDict([
('dropout1', nn.Dropout(0.1)),
('dense... | https://stackoverflow.com/questions/47544009/ |
When should I use nn.ModuleList and when should I use nn.Sequential? | I am new to Pytorch and one thing that I don't quite understand is the usage of nn.ModuleList and nn.Sequential. Can I know when I should use one over the other? Thanks.
| nn.ModuleList does not have a forward method, but nn.Sequential does have one. So you can wrap several modules in nn.Sequential and run it on the input.
nn.ModuleList is just a Python list (though it's useful since the parameters can be discovered and trained via an optimizer). While nn.Sequential is a module that sequ... | https://stackoverflow.com/questions/47544051/ |
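A side-by-side sketch of the difference described above, assuming a recent PyTorch; the layer sizes are arbitrary:

```python
import torch
import torch.nn as nn

# Sequential defines forward for you: input flows through each module in order.
seq = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))
x = torch.randn(3, 8)
print(seq(x).shape)            # torch.Size([3, 2])

# ModuleList only registers modules (so their parameters are visible to an
# optimizer); you write the forward loop yourself, which helps when the
# control flow between layers is not a plain chain.
layers = nn.ModuleList([nn.Linear(8, 8) for _ in range(3)])
out = x
for layer in layers:
    out = torch.relu(layer(out))
print(out.shape)               # torch.Size([3, 8])
```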
Calculating Distance in Parameter Space for PyTorch Network | I'm new to PyTorch. I want to keep track of the distance in parameter-space my model travels through its optimization. This is the code I'm using.
class ParameterDiffer(object):
def __init__(self, network):
network_params = []
for p in network.parameters():
network_params.append(p.data.... | The cause
This is because the way PyTorch treat conversion between numpy array and torch Tensor. If the underlying data type between numpy array and torch Tensor are the same, they will share the memory. Change the value of one will also change the value of the other. I will show a concrete example here,
x = Variable... | https://stackoverflow.com/questions/47544151/ |
Pytorch: Can’t load images using ImageFolder | I’m trying to load images using “ImageFolder”.
data_dir = './train_dog' # directory structure is
train_dog/image
dset = datasets.ImageFolder(data_dir, transform)
train_loader = torch.utils.data.DataLoader(dset, batch_size=128, shuffle=True)
However, it doesn't seem to work. So I checked the stored data as ... | You should try this:
print len(dset)
which represents the size of the dataset, aka the number of image files.
dset[0] means the (shuffled) first index of the dataset, where dset[0][0] contains the input image tensor and dset[0][1] contains the corresponding label or target.
| https://stackoverflow.com/questions/47553103/ |
When will the computation graph be freed if I only do forward for some samples? | I have a use case where I do forward for each sample in a batch, and only accumulate loss for some of the samples based on some condition on the model output of the sample. Here is an illustrative code snippet:
for batch_idx, (data, target) in enumerate(train_loader):
optimizer.zero_grad()
total_loss = 0
loss_coun... | Let's start with a general discussion of how PyTorch frees memory:
First, we should emphasize that PyTorch uses an implicitly declared graph that is stored in Python object attributes. (Remember, it's Python, so everything is an object). More specifically, torch.autograd.Variables have a .grad_fn attribute. This attri... | https://stackoverflow.com/questions/47587122/ |
How to cast a 1-d IntTensor to int in Pytorch | How do I convert a 1-D IntTensor to an integer? This:
IntTensor.int()
Gives an error:
KeyError: Variable containing:
423
[torch.IntTensor of size 1]
| You can use:
print(dictionary[IntTensor.data[0]])
The key you're using is an object of type autograd.Variable.
.data gives the tensor and the index 0 can be used to access the element.
| https://stackoverflow.com/questions/47588682/ |
torch.nn has no attribute named upsample | Following this tutorial: https://www.digitalocean.com/community/tutorials/how-to-perform-neural-style-transfer-with-python-3-and-pytorch#step-2-%E2%80%94-running-your-first-style-transfer-experiment
When I run the example in Jupyter notebook, I get the following:
So, I've tried troubleshooting, which eventually got... | I think the reason maybe that you have an older version of PyTorch on your system. On my system, the pytorch version is 0.2.0, torch.nn has a module called Upsample.
You can uninstall your current version of pytorch and reinstall it.
| https://stackoverflow.com/questions/47635918/ |
Reshaping 2 numpy dimensions into one with zipping? | I tried searching for this question but I couldn't find anything relevant.
The quickest way to describe the problem is with a simple example:
let's say I have a 2D numpy array like this:
[[0, 1, 2, 3],
[10, 11, 12, 13],
[20, 21, 22, 23]]
So it has the shape [3, 4]
I want to reshape it into a 1D array that looks like... | We need to swap the last two axes with np.swapaxes or np.transpose and then reshape.
For 2D input case, it would be -
a.swapaxes(-2,-1).ravel()
For 3D input case, only the reshape part changes -
a.swapaxes(-2,-1).reshape(a.shape[0],-1)
Generic way : To make it generic that would cover all n-dim array cases -
a.... | https://stackoverflow.com/questions/47642014/ |
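Applied to the array from the question, the 2D recipe gives the interleaved ("zipped") order:

```python
import numpy as np

a = np.array([[0, 1, 2, 3],
              [10, 11, 12, 13],
              [20, 21, 22, 23]])

out = a.swapaxes(-2, -1).ravel()
print(out)  # -> [ 0 10 20  1 11 21  2 12 22  3 13 23]
```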
Automatically set cuda variables in pytorch | I'm trying to automatically set Variables to use cuda if a flag use_gpu is activated.
Usually I would do:
import torch
from torch import autograd
use_gpu = torch.cuda.is_available()
foo = torch.randn(5)
if use_gpu:
fooV = autograd.Variable(foo.cuda())
else:
fooV = autograd.Variable(foo)
However, to do th... | An alternative is to use type method on CPU Tensor to convert it to GPU Tensor,
if use_cuda:
dtype = torch.cuda.FloatTensor
else:
dtype = torch.FloatTensor
x = torch.rand(2,2).type(dtype)
Also, you can find a discussion here.
| https://stackoverflow.com/questions/47645271/ |
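As an aside, PyTorch 0.4 and later replace both the dtype trick and Variable with a device-agnostic idiom; a sketch of that pattern (sizes are illustrative):

```python
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

x = torch.rand(2, 2, device=device)      # created directly on the right device
model = torch.nn.Linear(2, 2).to(device)  # move parameters once
y = model(x)
```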
tf.concat operation in PyTorch | torch.stack is not what I am searching.
I am looking for the Tensorflow's concat operation for Pytorch.
I have searched the doc http://pytorch.org/docs/0.3.0/
I believe torch.cat is what you are looking for.
http://pytorch.org/docs/master/torch.html#torch.cat
| https://stackoverflow.com/questions/47645428/ |
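A quick sketch of `torch.cat` playing the role of `tf.concat` (dim is the axis to join along; the tensors are illustrative):

```python
import torch

a = torch.zeros(2, 3)
b = torch.ones(2, 3)

rows = torch.cat([a, b], dim=0)  # stack along rows    -> shape (4, 3)
cols = torch.cat([a, b], dim=1)  # stack along columns -> shape (2, 6)
print(rows.shape, cols.shape)
```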
Create an nn.Module that's just the identity
I'm trying to debug a pretty complex interaction between different nnModules. It would be very helpful for me to be able to replace one of them with just an identity network for debugging purposes. For example:
net_a = NetworkA()
net_b = NetworkB()
net_c = NetworkC()
input = Autograd.Variable(torch.rand(10,2))
out ... |
The problem you encounter here is a logical one. Look at what it means when you do:
error.backward()
opt.step()
.backward() will recursively compute the gradients from your output to any input you pass into the network. In terms of the computation graph, there are two noteworthy kinds of inputs: the input you pass ... | https://stackoverflow.com/questions/47646130/ |
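A minimal identity module for this kind of debugging (recent PyTorch versions also ship `torch.nn.Identity`, which does the same thing; `net_b` here stands in for the hypothetical NetworkB):

```python
import torch

class Identity(torch.nn.Module):
    def forward(self, x):
        return x

net_b = Identity()               # drop-in replacement while debugging
x = torch.rand(10, 2)
assert torch.equal(net_b(x), x)  # output is exactly the input
```

Keep the gradient caveat above in mind: an identity still passes gradients straight through, so any upstream modules keep receiving updates.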
Error when upgrading PyTorch to 0.3.0 | I tried to update PyTorch to the recently released 0.3.0,
conda install pytorch=0.3.0 torchvision -c pytorch
But I'm getting the following error:
NoPackagesFoundError: Dependency missing in current linux-64 channels:
- pytorch 0.3.0* -> mkl >=2018
You can also update conda itself and then try again:
conda update conda
conda update pytorch -c pytorch
It will use the updated channels and download the latest mkl version.
| https://stackoverflow.com/questions/47734702/ |