| instruction | input | output | source |
|---|---|---|---|
Pytorch LSTM grad only on last output | I'm working with sequences of different lengths, but I only want to compute gradients based on the output at the end of each sequence.
The samples are ordered so that they are decreasing in length and they are zero-padded. For 5 1D samples it looks like this (omitting width dimension for visibility):
array([[5,... | As Neaabfi answered, hidden[-1] is correct. To be more specific to your question, as the docs state:
output, (h_n, c_n) = self.lstm(x_pack) # batch_first = True
# h_n is a vector of shape (num_layers * num_directions, batch, hidden_size)
In your case, you have a stack of 2 LSTM layers with only forward direction, th... | https://stackoverflow.com/questions/55907234/ |
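A hedged sketch of the above (layer sizes here are illustrative, not the asker's): with a packed, length-sorted batch, h_n[-1] is the top layer's hidden state taken at each sequence's true last step, so a loss computed on it ignores the zero padding:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence

# Illustrative sizes: 5 zero-padded 1D sequences, sorted by decreasing length
lengths = [7, 5, 4, 3, 2]
x = torch.zeros(5, 7, 1)
for i, n in enumerate(lengths):
    x[i, :n, 0] = torch.randn(n)

lstm = nn.LSTM(input_size=1, hidden_size=3, num_layers=2, batch_first=True)
x_pack = pack_padded_sequence(x, lengths, batch_first=True)
output, (h_n, c_n) = lstm(x_pack)

# h_n has shape (num_layers * num_directions, batch, hidden_size);
# h_n[-1] is the last layer's state at each sequence's real end
last = h_n[-1]
print(last.shape)  # torch.Size([5, 3])
```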
Register_hook with respect to a subtensor of a tensor | Assuming that we would like to modify gradients only of a part of the variable values, is it possible to register_hook in pytorch only to a subtensor (of a tensor that is a pytorch variable)?
| We can make a function where we modify the part of the tensor as follows:
def gradi(module):
module[1]=0
h = net.param_name.register_hook(gradi)
| https://stackoverflow.com/questions/55911016/ |
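Note that a hook receives the gradient tensor, and mutating it in place (as module[1]=0 above does) is discouraged; a self-contained variant that returns a modified copy instead (the tensor here is made up for illustration):

```python
import torch

w = torch.randn(3, 2, requires_grad=True)

def gradi(grad):
    grad = grad.clone()  # don't modify the incoming gradient in place
    grad[1] = 0          # zero out the gradient of row 1 only
    return grad

h = w.register_hook(gradi)
(w * 2).sum().backward()
print(w.grad)   # row 1 is all zeros, the other rows are all 2
h.remove()      # detach the hook when no longer needed
```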
The train loss and test loss are stuck at the same high value in CNN Pytorch (FashionMNIST) | The problem is that the training loss and test loss are the same, and the loss and accuracy aren't changing. What's wrong with my CNN structure and training process?
Training results:
Epoch: 1/30.. Training Loss: 2.306.. Test Loss: 2.306.. Test Accuracy: 0.100
Epoch: 2/30.. Training Loss: 2.306.. Test Loss: 2.306.. T... |
Be careful not to confuse nn.CrossEntropyLoss and nn.NLLLoss.
I don't think your code has a problem; I tried to run it exactly the way you defined it. Maybe you didn't give us the other lines of code that initialize the other parts, and the problem might be there.
log_ps is supposed to be log_soft... | https://stackoverflow.com/questions/55912817/ |
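To make the distinction concrete, here is a small sketch (shapes are arbitrary) showing that nn.CrossEntropyLoss on raw logits equals nn.NLLLoss applied to log-softmax outputs:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(4, 10)           # raw scores from the network
target = torch.tensor([1, 0, 3, 9])

# nn.CrossEntropyLoss applies log_softmax internally, so it takes raw logits;
# nn.NLLLoss expects log-probabilities, so log_softmax must be applied first
ce = nn.CrossEntropyLoss()(logits, target)
nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), target)
print(torch.allclose(ce, nll))  # True
```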
PyTorch does not converge when approximating square function with linear model | I'm trying to learn some PyTorch and am referencing this discussion here
The author provides a minimum working piece of code that illustrates how you can use PyTorch to solve for an unknown linear function that has been polluted with random noise.
This code runs fine for me.
However, when I change the function such ... | You cannot fit a 2nd degree polynomial with a linear function. You cannot expect more than random (since you have random samples from the polynomial).
What you can do is try and have two inputs, x and x^2 and fit from them:
model = nn.Linear(2, 1) # you have 2 inputs now
X_input = torch.cat((X, X**2), dim=1) # have ... | https://stackoverflow.com/questions/55912952/ |
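Completing the idea as a hedged sketch (the optimizer, learning rate, and data range are my own choices, not from the thread):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.linspace(-3, 3, 100).unsqueeze(1)
y = X**2 + 2*X - 3                    # the polynomial to recover

model = nn.Linear(2, 1)               # you have 2 inputs now: x and x^2
X_input = torch.cat((X, X**2), dim=1)

opt = torch.optim.Adam(model.parameters(), lr=0.1)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X_input), y)
    loss.backward()
    opt.step()

print(loss.item())    # close to 0: the problem is now linear in [x, x^2]
```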
PyTorch: _thnn_nll_loss_forward is not implemented for type torch.LongTensor | When trying to create a model using PyTorch, when I am trying to implement the loss function nll_loss, it is throwing the following error
RuntimeError: _thnn_nll_loss_forward is not implemented for type torch.LongTensor
The fit function I have created is:
for epoch in tqdm_notebook(range(1, epochs+1)):
for bat... | Look at the description of F.nll_loss. It expects to get as input not the argmax of the prediction (type torch.long), but rather the full 64x50x43 prediction vectors (of type torch.float). Note that indeed the prediction you provide to F.nll_loss has an extra dimension more than the ground truth targets you provide.
I... | https://stackoverflow.com/questions/55914172/ |
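A scaled-down sketch of the same contrast (4 samples and 43 classes instead of the question's 64x50x43): F.nll_loss wants float log-probabilities of shape (N, C), not the long argmax indices:

```python
import torch
import torch.nn.functional as F

pred = torch.randn(4, 43)              # float scores, shape (N, C)
target = torch.randint(0, 43, (4,))    # long class indices, shape (N,)

loss = F.nll_loss(F.log_softmax(pred, dim=1), target)  # works

failed = False
try:
    # passing the argmax (a LongTensor) instead reproduces the error class
    F.nll_loss(pred.argmax(dim=1), target)
except Exception:
    failed = True
print(loss.dim(), failed)   # 0 True
```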
Code Optimization: Computation in Torch.Tensor | I am currently implementing a function to compute Custom Cross Entropy Loss.
The definition of the function is a following image.
my codes are as following,
output = output.permute(0, 2, 3, 1)
target = target.permute(0, 2, 3, 1)
batch, height, width, channel = output.size()
total_loss = 0.
for b in range(batch): ... | I'm not sure whether sid_t = t[h][w][0] is the same for every pixel or not. If so, you can get rid of all the for loops, which boosts the speed of computing the loss.
Don't use .item() because it will return a Python value which loses the grad_fn track. Then you can't use loss.backward() to compute the gradients.
If sid_t = t[h][w]... | https://stackoverflow.com/questions/55917472/ |
Convert integer to pytorch tensor of binary bits | Given an number and an encoding length, how can I convert the number to its binary representation as a tensor?
Eg, given the number 6 and width 8, how can I obtain the tensor:
(0, 0, 0, 0, 0, 1, 1, 0)
|
def binary(x, bits):
mask = 2**torch.arange(bits).to(x.device, x.dtype)
return x.unsqueeze(-1).bitwise_and(mask).ne(0).byte()
If you want to reverse the order of bits, use torch.arange(bits-1,-1,-1) instead.
| https://stackoverflow.com/questions/55918468/ |
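For the MSB-first layout asked for in the question, the reversed-arange variant looks like this:

```python
import torch

def binary(x, bits):
    # torch.arange(bits-1, -1, -1) puts the most significant bit first
    mask = 2 ** torch.arange(bits - 1, -1, -1).to(x.device, x.dtype)
    return x.unsqueeze(-1).bitwise_and(mask).ne(0).byte()

print(binary(torch.tensor(6), 8))  # tensor([0, 0, 0, 0, 0, 1, 1, 0], dtype=torch.uint8)
```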
CUDA for pytorch: CUDA C++ stream and state | I am trying to follow this tutorial and make a simple c++ extension with CUDA backend.
My CPU implementation seems to work fine.
I am having trouble finding examples and documentation (it seems like things are constantly changing).
Specifically,
I see pytorch cuda functions getting THCState *state argument - where ... | It's deprecated (without documentation!)
See here:
https://github.com/pytorch/pytorch/pull/14500
In short: use at::cuda::getCurrentCUDAStream()
| https://stackoverflow.com/questions/55919123/ |
How to realize a polynomial regression in Pytorch / Python | I want my neural network to solve a polynomial regression problem like y=(x*x) + 2x -3.
So right now I created a network with 1 input node, 100 hidden nodes and 1 output node and gave it a lot of epochs to train with a high test data size. The problem is that the prediction after like 20000 epochs is okayish, but much... | For this problem, it might be much easier if you treat the Net() with 1 Linear layer as linear regression with input features [x^2, x].
Generate your data
import torch
from torch import Tensor
from torch.nn import Linear, MSELoss, functional as F
from torch.optim import SGD, Adam, RMSprop
from torch.aut... | https://stackoverflow.com/questions/55920015/ |
Spacy similarity warning : "Evaluating Doc.similarity based on empty vectors." | I'm trying to do data augmentation with a FAQ dataset. I replace words, specifically nouns, with their most similar words from WordNet, checking the similarity with spaCy. I use multiple for loops to go through my dataset.
import spacy
import nltk
from nltk.corpus import wordnet as wn
import pandas as pd
nlp = spacy.load('en_cor... | You get that error message when similar_word is not a valid spacy document. E.g. this is a minimal reproducible example:
import spacy
nlp = spacy.load('en_core_web_md') # make sure to use larger model!
tokens = nlp(u'dog cat')
#similar_word = nlp(u'rabbit')
similar_word = nlp(u'')
for token in tokens:
print(token... | https://stackoverflow.com/questions/55921104/ |
Pytorch's model can't feed forward a DataLoader dataset, NotImplementedError | So I am experimenting with the PyTorch library to train a CNN. There is nothing wrong with the model (I can feed-forward data with no error) and I prepared a custom dataset with the DataLoader function.
This is my code for data prep (I've omitted some irrelevant variable declaration, etc.):
# Initiliaze model
class neural_n... | Your network class implemented a net_forward method. However, nn.Module expects its derived classes to implement forward method (without net_ prefix).
Simply rename net_forward to just forward and your code should be okay.
You can learn more about inheritance and overloaded methods here.
Old Answer:
The code you a... | https://stackoverflow.com/questions/55932767/ |
Difference between logloss in sklearn and BCEloss in Pytorch? | Looking at the documentation for logloss in Sklearn and BCEloss in Pytorch, these should be the same, i.e. just the normal log loss with weights applied. However, they behave differently - both with and without weights applied. Can anyone explain it to me? I could not find the source code for BCEloss (which refers to b... | Actually, I found out. It turns out that BCELoss and log_loss behave differently when the weights sum up to more than the dimension of the input array. Interesting.
| https://stackoverflow.com/questions/55933305/ |
How to normalize convolutional weights in pytorch? | I have a CNN in pytorch and I need to normalize the convolution weights (filters) with L2 norm in each iteration. What is the most efficient way to do this?
Basically, in my particular experiment I need to replace the filters with their normalized value in the model (during both training and test).
| I am not sure if I have understood your question correctly. But if I were asked to normalize the weights of an NN layer at each iteration, I would do something like the following.
for ite in range(100): # training iteration
# write your code to train your model
# update the parameters using optimizer.step() and the... | https://stackoverflow.com/questions/55941503/ |
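One possible sketch of the in-place replacement (assuming "L2 norm per filter" is what's intended — adjust the reshape if you want a single norm over the whole weight tensor):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 6, 5)   # illustrative layer, not the asker's model

with torch.no_grad():       # keep the renormalization out of autograd
    w = conv.weight                              # shape (6, 3, 5, 5)
    norms = w.view(w.size(0), -1).norm(dim=1)    # one L2 norm per filter
    conv.weight.div_(norms.view(-1, 1, 1, 1))    # rescale each filter in place

per_filter = conv.weight.view(6, -1).norm(dim=1)
print(per_filter)            # all ones after normalization
```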
Pytorch: backpropagating from sum of matrix elements to leaf variable | I'm trying to understand backpropagation in pytorch a bit better. I have a code snippet that successfully does backpropagation from the output d to the leaf variable a, but then if I add in a reshape step, the backpropagation no longer gives the input a gradient.
I know reshape is out-of-place, but I'm still not sure ... | Edit:
Here is a detailed explanation of what's going on ("this isn't a bug per se, but it is definitely a source of confusion"): https://github.com/pytorch/pytorch/issues/19778
So one solution is to specifically ask to retain grad for now non-leaf a:
a = torch.tensor([1.])
a.requires_grad = True
a = a.reshape(a.shap... | https://stackoverflow.com/questions/55942423/ |
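Completing that snippet as a runnable sketch (the downstream computation is made up to give something to backpropagate through):

```python
import torch

a = torch.tensor([1.])
a.requires_grad = True
a = a.reshape(a.shape)   # a is now a non-leaf tensor
a.retain_grad()          # ask autograd to keep its gradient anyway

d = (3 * a).sum()
d.backward()
print(a.grad)            # tensor([3.])
```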
PyTorch transformations change dimensions | I do not understand why pytorch transformations turn a 100x100 pic into a 3x100 pic.
print("Original shape ", x.shape)
x = transforms.Compose([
transforms.ToPILImage(),
transforms.ToTensor()
])(x)
print("After transformation shape ", x.shape)
outputs
Original shape torch.Size([100, 100, 3])
After transform... | According to the docs https://pytorch.org/docs/stable/torchvision/transforms.html#torchvision.transforms.ToPILImage, if the input is a Torch tensor, the shape is C x H x W. So 100 is considered to be number of channels. Because there is no mode that corresponds to 100 channels, it interpreted as RGB (3 channels).
So y... | https://stackoverflow.com/questions/55958330/ |
In PyTorch, how to I make certain module `Parameters` static during training? | Context:
In pytorch, any Parameter is a special kind of Tensor. A Parameter is automatically registered with a module's parameters() method when it is assigned as an attribute.
During training, I will pass m.parameters() to the Optimizer instance so they can be updated.
Question: For a built-in pytorch module, how... | Parameters can be made static by setting their attribute requires_grad=False.
In my example case:
params = list(s.parameters()) # .parameters() returns a generator
# Each linear layer has 2 parameters (.weight and .bias),
# Skipping first layer's parameters (indices 0, 1):
params[2].requires_grad = False
params[3].r... | https://stackoverflow.com/questions/55959918/ |
Pytorch: How to plot the prediction output from a segmentation task when batch size is larger than 1? | I have a question about plotting the output of different batches in a segmentation task.
The snippet below plots the probability of each class and the prediction output.
I am sure the prob plots show one batch, but I'm not sure about the prediction when I take torch.argmax(outputs, 1). Am I plotting the argmax ... | Not sure what you are asking. Assuming you are using the NCHW data layout, your output is 10 samples per batch, 4 channels (one channel per class), and 256x256 resolution, so the first 4 graphs are plotting the class scores of the four classes.
For the 5th plot, your torch.argmax(outputs, 1).deta... | https://stackoverflow.com/questions/55967347/ |
translate from pytorch to tensorflow | What does this line of code in PyTorch do?
normA = A.mul(A).sum(dim=1).sum(dim=1).sqrt()
Y = A.div(normA.view(batchSize, 1, 1).expand_as(A))
Normally it should be a second term like this:
torch.div(input, value, out=None) → Tensor
Your question is a little unclear because you didn't mention the shape of tensor A or what normA is. But I guess the following:
A is a tensor of shape (batchSize, X, Y)
normA is a tensor of the norms of all batch elements of A, and its shape is (batchSize).
So, you normalize the tensor A with the following sta... | https://stackoverflow.com/questions/55972397/ |
PyTorch - RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight' | I load my previously trained model and want to classify a single (test) image from the disk through this model. All the operations in my model are carried out on my GPU. Hence, I move the numpy array of the test image to GPU by calling cuda() function. When I call the forward() function of my model with the numpy array... | Convert to FloatTensor before sending it over the GPU.
So, the order of operations will be:
test_tensor = torch.from_numpy(test_img)
# Convert to FloatTensor first
test_tensor = test_tensor.type(torch.FloatTensor)
# Then call cuda() on test_tensor
test_tensor = test_tensor.cuda()
log_results = model.forward(test_t... | https://stackoverflow.com/questions/55983122/ |
I am having trouble calculating the accuracy, recall, precision and f1-score for my model | I have got my confusion matrix working correctly, just having some trouble producing the scores. A little help would go a long way. I am currently getting the error. "Tensor object is not callable".
def get_confused(model_ft):
nb_classes = 120
from sklearn.metrics import precision_recall_fscore_support as sco... | The problem is with this line.
cm = confusion_matrix(classes, preds)
confusion_matrix is a tensor and you can't call it like a function; hence "Tensor is not callable". I am also not sure why you need this line. Instead, I think you would want to write cm = confusion_matrix.cpu().data.numpy() to make it a numpy array...
Pytorch sum tensors doing an operation within each set of numbers | I have the following Pytorch tensor:
V1 = torch.tensor([[2, 4], [6, 4], [5, 3]])
I want to do the sum of the difference of each pair of numbers (applying absolute value), something like the code below;
result.sum(abs(2-4), abs(6-4), abs(5-3))
I can do this using a for statement:
total = 0
for i in range(0,vector... | You can do
torch.abs(V1[:, 1]- V1[:, 0])
and to sum over it
torch.sum(torch.abs(V1[:, 1]- V1[:, 0]))
| https://stackoverflow.com/questions/55985336/ |
Sample each image from dataset N times in single batch | I'm currently working on the task of learning representations (deep embeddings). The dataset I use has only one example image per object. I also use augmentation.
During training, each batch must contain N different augmented versions of single image in dataset (dataset[index] always returns new random transformation).
I... | As far as I know, this is not a standard way of doing things — even if you have only one sample per object, one would still sample different images from different object per batch, and in different epochs the sampled images would be transformed differently.
That said, if you truly want to do what you are doing, why no... | https://stackoverflow.com/questions/55986607/ |
TypeError: argument 0 is not a Variable | I am trying to learn CNN with my own data. The shape of the data is (1224, 15, 23): 1224 is the number of samples, and each sample is (15, 23). The CNN is built with PyTorch.
I think there is no logical error because conv2D needs 4-D tensor, and I feed (batch, channel, x, y).
when I build an instance of the Net class I got th... | Your code actually works for PyTorch >= 0.4.1. I guess your PyTorch version is < 0.4 and so you need to pass a Variable in the following line.
o = conv(torch.autograd.Variable(torch.zeros(1, *x.shape)))
In PyTorch >= 0.4.1, the concept of Variable no longer exists. So, torch.FloatTensor can be directly pass... | https://stackoverflow.com/questions/55990271/ |
pytorch model loading and prediction, AttributeError: 'dict' object has no attribute 'predict' | model = torch.load('/home/ofsdms/san_mrc/checkpoint/best_v1_checkpoint.pt', map_location='cpu')
results, labels = predict_function(model, dev_data, version)
> /home/ofsdms/san_mrc/my_utils/data_utils.py(34)predict_squad()
-> phrase, spans, scores = model.predict(batch)
(Pdb) n
AttributeError: 'dict' object has n... | the checkpoint you save is usually a state_dict: a dictionary containing the values of the trained weights - but not the actual architecture of the net. The actual computational graph/architecture of the net is described as a python class (derived from nn.Module).
To use a trained model you need:
Instantiate a model... | https://stackoverflow.com/questions/56002682/ |
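A minimal sketch of those steps (the Net class here is a stand-in for the actual SAN architecture; an in-memory buffer replaces the checkpoint file):

```python
import io
import torch
import torch.nn as nn

class Net(nn.Module):            # stand-in for the real model class
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)
    def forward(self, x):
        return self.fc(x)

model = Net()
buf = io.BytesIO()
torch.save(model.state_dict(), buf)        # what checkpoints usually contain

buf.seek(0)
restored = Net()                           # 1. instantiate the architecture
restored.load_state_dict(torch.load(buf))  # 2. load the trained weights
restored.eval()                            # 3. switch to inference mode
```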
PyTorch: inconsistent pretrained VGG output | When loading a pretrained VGG network with the torchvision.models module and using it to classify an arbitrary RGB image, the network's output differs noticeably from invocation to invocation. Why does this happen? From my understanding no part of the VGG forward pass should be non-deterministic.
Here's an MCVE:
impo... | vgg16 has a nn.Dropout layer that, during training, randomly drops 50% of its inputs. During test time you should "switch off" this behavior by setting the mode of the net to "eval" mode:
vgg.eval()
torch.all(torch.eq(vgg(img), vgg(img)))
Out[73]: tensor(1, dtype=torch.uint8)
Note that there are other layers wit... | https://stackoverflow.com/questions/56003198/ |
How can I calculate the network gradients w.r.t weights for all inputs in PyTorch? | I'm trying to figure out how I can calculate the gradient of the network for each input. And I'm a bit lost. Essentially, what I would want, is to calculate d self.output/d weight1 and d self.output/d weight2 for all values of input x. So, I would have a matrix of size (1000, 5) for example. Where the 1000 is for the s... | I am not sure what exactly you are trying to achieve because normally you only work with the sum of gradients of (d output)/(d parameter) and not with any other gradients in between as autograd takes care that, but let me try to answer.
Question 1
The example I've included below returns weights as size (1,5). What... | https://stackoverflow.com/questions/56004624/ |
pytorch embedding index out of range | I'm following this tutorial here https://cs230-stanford.github.io/pytorch-nlp.html. In there a neural model is created, using nn.Module, with an embedding layer, which is initialized here
self.embedding = nn.Embedding(params['vocab_size'], params['embedding_dim'])
vocab_size is the total number of training samples, ... | Found the answer here https://discuss.pytorch.org/t/embeddings-index-out-of-range-error/12582
I'm converting words to indexes, but I had the indexes based off the total number of words, not vocab_size which is a smaller set of the most frequent words.
| https://stackoverflow.com/questions/56010551/ |
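A tiny sketch of the failure mode: indices fed to nn.Embedding must be strictly less than the vocab_size it was constructed with:

```python
import torch
import torch.nn as nn

emb = nn.Embedding(5, 3)            # vocab_size=5: valid indices are 0..4
ok = emb(torch.LongTensor([0, 4]))  # fine, shape (2, 3)

raised = False
try:
    emb(torch.LongTensor([5]))      # 5 >= vocab_size -> index out of range
except (IndexError, RuntimeError):  # the exact type varies across versions
    raised = True
print(raised)   # True
```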
Numpy/Pytorch dtype conversion / compatibility | I'm trying to find some documentation understand how dtypes are combined. For example:
x : np.int32 = ...
y : np.float64 = ...
What is going to be the type of x + y ?
does it depend on the operator (here +) ?
does it depend where it is stored (z = x + y vs z[...] = x + y) ?
I'm looking for part of the documentati... | If data types don't match, then NumPy will upcast the data to the higher precision data types if possible. And it doesn't depend on the type of (arithmetic) operation that we do or to the variables that we assign to, unless that variable already has some other dtype. Here is a small illustration:
In [14]: x = np.arang... | https://stackoverflow.com/questions/56022497/ |
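A short check of those rules:

```python
import numpy as np

x = np.int32(2)
y = np.float64(0.5)

print((x + y).dtype)   # float64: upcast to the higher-precision type
print((x * y).dtype)   # float64: the operator doesn't matter

z = np.zeros(3, dtype=np.int32)
z[...] = x + y         # assigning into an existing int32 array casts back down
print(z.dtype, z[0])   # int32 2
```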
Setting device for pytorch version 0.3.1.post2 | In newer versions of pytorch, you can set the device by using torch.device. I need to use torch version 0.3.1.post2 for some legacy code. How do I set the device for this version?
| As far as I know, you can use the set_device function. But this is not encouraged. Please see the reference.
The suggested method is, you just set the CUDA_VISIBLE_DEVICES environmental variable. You can run your script as follows.
CUDA_VISIBLE_DEVICES=GPU_ID python script_name.py
In your program, you can just simp... | https://stackoverflow.com/questions/56026555/ |
How to define specific number of convolutional kernels/filters in pytorch? | On pytorch website they have the following model in their tutorial
class BasicCNN(nn.Module):
def __init__(self):
super(BasicCNN, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5,... | Your question is a little ambiguous but let me try to answer it.
Usually, in a convolutional layer, we set the number of filters as the number of out_channels. But this is not straightforward. Let's discuss based on the example that you provided.
What are the convolutional layer parameters?
model = BasicCNN()
f... | https://stackoverflow.com/questions/56030884/ |
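Concretely, the filter count shows up in the weight shape — a quick check on the first layer from the tutorial:

```python
import torch.nn as nn

conv1 = nn.Conv2d(3, 6, 5)   # in_channels=3, out_channels=6, 5x5 kernel
print(conv1.weight.shape)    # torch.Size([6, 3, 5, 5]): 6 filters, each 3x5x5
print(conv1.bias.shape)      # torch.Size([6]): one bias per filter
```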
PyTorch and Chainer implementations of the Linear layer- are they equivalent? | I want to use a Linear, Fully-Connected Layer as one of the input layers in my network. The input has shape (batch_size, in_channels, num_samples). It is based on the Tacotron paper: https://arxiv.org/pdf/1703.10135.pdf, the Enocder prenet part.
It feels to me as if Chainer and PyTorch have different implementations of... | Chainer Linear layer (a bit frustratingly) does not apply the transformation to the last axis. Chainer flattens the rest of the axes. Instead you need to provide how many batch axes there are, documentation which is 2 in your case:
# data.shape == (1, 4, 3)
results_chainer = linear_layer_chainer(data, n_batch_axes=2)
... | https://stackoverflow.com/questions/56033418/ |
Slicing uneven columns from tensor array | I have an array like so:
([[[ 0, 1, 2],
[ 3, 4, 5]],
[[ 6, 7, 8],
[ 9, 10, 11]],
[[12, 13, 14],
[15, 16, 17]]])
If I want to slice the numbers 12 to 17, I would use:
arr[2, 0:2, 0:3]
but how would I go about slicing the array to get 12 to 16?
| You'll need to "flatten" the last two dimensions first. Only then will you be able to extract the elements you want:
xf = x.view(x.size(0), -1) # flatten the last dimensions
xf[2, 0:5]
Out[87]: tensor([12, 13, 14, 15, 16])
| https://stackoverflow.com/questions/56034978/ |
Considerations of model definitions when moving from Tensorflow to PyTorch | I've just recently switched to PyTorch after getting frustrated in debugging tf and understand that it is equivalent to coding in numpy almost completely. My question is what are the permitted python aspects we can use in a PyTorch model (to be put completely on GPU) eg. if-else has to be implemented as follows in tens... | In pytorch, the code can be written like the way normal python code is written.
CPU
import torch
a = torch.FloatTensor([1,2,3,4,5])
b = torch.FloatTensor([6,7,8,9,10])
cond = torch.randn(5)
for ci in cond:
if ci > 0:
print(torch.add(a, 1))
else:
print(torch.sub(b, 1))
GPU
Move the tenso... | https://stackoverflow.com/questions/56063686/ |
How to add a 2d tensor to every 2d tensor from a 3d tensor | I'm trying to add a 2d tensor to every 2d tensor from a 3d tensor.
Let's say I have a tensor a with shape (2,3,2) and a tensor b with shape (2,2).
a = [[[1,2],
[1,2],
[1,2]],
[[3,4],
[3,4],
[3,4]]]
b = [[1,2], [3,4]]
#the result i want to get
a[:, 0, :] + b
a[:, 1, :] + b
a[:, 2, :] + b... | The most efficient way of doing this would be to add an extra second dimension to b and use broadcasting to add:
a = torch.Tensor([[[1,2],[1,2],[1,2]],[[3,4],[3,4],[3,4]]])
b = torch.Tensor([[1,2],[3,4]])
a += b.unsqueeze(1)
| https://stackoverflow.com/questions/56066305/ |
Pytorch: How can I find indices of first nonzero element in each row of a 2D tensor? | I have a 2D tensor with some nonzero element in each row like this:
import torch
tmp = torch.tensor([[0, 0, 1, 0, 1, 0, 0],
[0, 0, 0, 1, 1, 0, 0]], dtype=torch.float)
I want a tensor containing the index of first nonzero element in each row:
indices = tensor([2],
[3])
How can ... | I could find a tricky answer for my question:
tmp = torch.tensor([[0, 0, 1, 0, 1, 0, 0],
[0, 0, 0, 1, 1, 0, 0]], dtype=torch.float)
idx = reversed(torch.Tensor(range(1,8)))
print(idx)
tmp2= torch.einsum("ab,b->ab", (tmp, idx))
print(tmp2)
indices = torch.argmax(tmp2, 1, keepdim=T... | https://stackoverflow.com/questions/56088189/ |
Get Euclidian and infinite distance in Pytorch | I'm trying to get the Euclidian Distance in Pytorch, using torch.dist, as shown below:
torch.dist(vector1, vector2, 1)
If I use "1" as the third Parameter, I'm getting the Manhattan distance, and the result is correct, but I'm trying to get the Euclidian and Infinite distances and the result is not right. I tried wi... | You should use the .norm() instead of .dist().
vector1 = torch.FloatTensor([3, 4, 5])
vector2 = torch.FloatTensor([1, 1, 1])
dist = torch.norm(vector1 - vector2, 1)
print(dist) # tensor(9.)
dist = torch.norm(vector1 - vector2, 2)
print(dist) # tensor(5.3852)
dist = torch.norm(vector1 - vector2, float("inf"))
print(di... | https://stackoverflow.com/questions/56093749/ |
IndexError - Implementing the test of CBOW with pytorch | I am not very used to Python and machine-learning code. I am testing the PyTorch CBOW example, but it raises an IndexError. Can anyone help?
# model class
class CBOW(nn.Module):
...
def get_word_embedding(self, word):
word = torch.cuda.LongTensor([word_to_ix[word]])
return self.embeddings(wo... | In your code, word_similarity is not an array, so you can't access it's 0th element. Just modify your code as:
word_similarity = (word_1_vec.dot(word_2_vec) / (torch.norm(word_1_vec) * torch.norm(word_2_vec))).data.numpy()
You can also use:
word_similarity = (word_1_vec.dot(word_2_vec) / (torch.norm(word_1_vec) * t... | https://stackoverflow.com/questions/56094478/ |
How to create embeddings for a column that is a list of categorical values | I am having some trouble deciding how to create embeddings for a categorical feature for my DNN model. The feature consists of a non-fixed set of tags.
The feature is like:
column = [['Adventure','Animation','Comedy'],
['Adventure','Comedy'],
['Adventure','Children','Comedy']
I would like to do ... | First you need to pad your features to the same length.
import itertools
import numpy as np
column = np.array(list(itertools.zip_longest(*column, fillvalue='UNK'))).T
print(column)
[['Adventure' 'Animation' 'Comedy']
['Adventure' 'Comedy' 'UNK']
['Adventure' 'Children' 'Comedy']]
Then you can use tf.feature_... | https://stackoverflow.com/questions/56099266/ |
How to convert binary classifier output/loss tensor in LSTM to multiclass | I am trying to train an LSTM model to predict what year a song was written given its lyrics using word-level association in Pytorch. There are 51 potential classes/labels (1965-2015) - however I was working off of a template that used a binary classifier for a different problem. I have been trying to figure out how to ... | You must provide input as (N, C) and target as (N) to CrossEntropyLoss. I suspect that in your model's forward() method, the following code segment is wrong.
sig_out = self.sig(out) # shape: batch_size*seq_len x output_size
# reshape to be batch_size first
sig_out = sig_out.view(batch_size, -1) # shape: batch_size x seq_l... | https://stackoverflow.com/questions/56101779/ |
Gradient calculation not disabled in no_grad() PyTorch | Why is the gradient calculation of y not disabled in the following piece of code?
x = torch.randn(3, requires_grad=True)
print(x.requires_grad)
print((x ** 2).requires_grad)
y = x**2
print(y.requires_grad)
with torch.no_grad():
print((x ** 2).requires_grad)
print(y.requires_grad)
Which gives the following ou... | Going through the official documentation says that the results would have require_grad=False even though the inputs have required_grad=True
Disabling gradient calculation is useful for inference, when you are sure
that you will not call :meth:Tensor.backward(). It will reduce memory
consumption for comp... | https://stackoverflow.com/questions/56111964/ |
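The quoted behavior in one runnable snippet — no_grad only affects tensors created inside the block:

```python
import torch

x = torch.randn(3, requires_grad=True)
y = x ** 2                    # created before no_grad: tracked
with torch.no_grad():
    z = x ** 2                # created inside no_grad: not tracked
    inside = y.requires_grad  # existing tensors are unaffected

print(inside, y.requires_grad, z.requires_grad)  # True True False
```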
KeyError: 'predictions' when using SimpleSeq2SeqPredictor to predict a string |
System (please complete the following information):
- OS: Ubuntu 18.04
- Python version: 3.6.7
- AllenNLP version: v0.8.3
- PyTorch version: 1.1.0
Question
When I Try to predic... | solved
I forgot add model.eval() after load model.
Sorry
| https://stackoverflow.com/questions/56113646/ |
How to use Adam optim considering of its adaptive learning rate? | In the Adam optimization algorithm, the learning speed is adjusted according to the number of iterations. I don't quite understand Adam's design, especially when using batch training. When using batch training, if there are 19,200 pictures, each time 64 pictures are trained, it is equivalent to 300 iterations. If our e... | You can read the official paper here https://arxiv.org/pdf/1412.6980.pdf
Your update looks somewhat like this (for brevity's sake, I have omitted the warm-up phase):
new_theta = old_theta - learning_rate * momentum / (velocity + eps)
The intuition here is that if momentum>velocity, then the optimizer is in a plateau, so the t... | https://stackoverflow.com/questions/56114706/ |
What are the differences between type of layer and its activation function in PyTorch? | I am trying to write a simple neural network using pytorch. I am new to this library. I came across two ways of implementing the same idea: a layer with some fixed activation function (e.g. tanh).
The first way to implement it:
l1 = nn.Tanh(n_in, n_out)
The second way:
l2 = nn.Linear(n_in, n_out) # linear layer,... | An activation function is just a non-linear function and it doesn't have any parameters. So, your first approach doesn't make any sense!
However, you can use a sequential wrapper to combine a linear layer with tanh activation.
model = nn.Sequential(
nn.Linear(n_in, n_out),
nn.Tanh()
)
output = model(input)
| https://stackoverflow.com/questions/56117566/ |
Deep learning : How to build character level embedding? | I am trying to use character-level embedding in my model but I have a few doubts regarding character-level embedding.
So for word level embedding :
Sentence = 'this is a example sentence'
create the vocab :
vocab = {'this' : 0 , 'is' :1 , 'a': 2 'example' : 3, 'sentence' : 4 }
encode the sentence :
encoded_senten... | You can concatenate the character level features with a fixed length.
For example:
[[0, 1, 2, 3, 0, 0, 0, 0],
 [2, 3, 0, 0, 0, 0, 0, 0],
 [5, 0, 0, 0, 0, 0, 0, 0],
 [6, 7, 5, 8, 9, 10, 6, 0],
 [3, 6, 11, 0, 6, 11, 12, 6]]
can be changed to:
[[0, 1, 2, 3, 0, 0, 0,0,2, 3, 0, 0, 0, 0, 0, 0,5, 0, 0, 0, 0, 0,... | https://stackoverflow.com/questions/56131509/ |
Pytorch how to multiply tensors of variable size except the first dimension | I have one tensor, A, with shape 40x1.
I need to multiply it with 3 other tensors: B = 40x100x384, C = 40x10, D = 40x10.
For example, in tensor B we have 40 matrices of shape 100x384, and I need each of these matrices to be multiplied by its corresponding element from A.
What is the best way to do this in pytorch? Suppo... | If I understand correctly, you want to multiply every i-th matrix K x L by the corresponding i-th scalar in A.
One possible way is:
(A * B.view(len(A), -1)).view(B.shape)
Or you can use the power of broadcasting:
A = A.reshape(len(A), 1, 1)
# now A is (40, 1, 1) and you can do
A*B
A*C
A*D
essentially each traili... | https://stackoverflow.com/questions/56133218/ |
RuntimeError: Expected object of backend CUDA but got backend CPU for argument #4 'mat1' | I'm failing to run my GAN on GPU. I call to(device) for both models and all tensors but still keep getting the following error:
Traceback (most recent call last):
File "code/a3_gan_template.py", line 185, in <module>
main(args)
File "code/a3_gan_template.py", line 162, in main
train(dataloader, d... | Found the solution. Turns out .to(device) does not work in place for tensors.
# wrong
imgs.to(device)
# correct
imgs = imgs.to(device)
| https://stackoverflow.com/questions/56133521/ |
groupby aggregate mean in pytorch | I have a 2D tensor:
samples = torch.Tensor([
[0.1, 0.1], #-> group / class 1
[0.2, 0.2], #-> group / class 2
[0.4, 0.4], #-> group / class 2
[0.0, 0.0] #-> group / class 0
])
and a label for each sample corresponding to a class:
labels = torch.LongTensor([1, 2, 2, 0])
so len(s... | All you need to do is form an mxn matrix (m=num classes, n=num samples) which will select the appropriate weights, and scale the mean appropriately. Then you can perform a matrix multiplication between your newly formed matrix and the samples matrix.
Given your labels, your matrix should be (each row is a class numbe... | https://stackoverflow.com/questions/56154604/ |
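The selection-matrix approach described above can be sketched end to end; the class count and the scaling are taken from the question's example, so treat this as an illustrative sketch rather than the answer's exact code:

```python
import torch

samples = torch.Tensor([
    [0.1, 0.1],  # -> class 1
    [0.2, 0.2],  # -> class 2
    [0.4, 0.4],  # -> class 2
    [0.0, 0.0],  # -> class 0
])
labels = torch.LongTensor([1, 2, 2, 0])

# m x n selection matrix (m = num classes, n = num samples):
# row c has a 1 in column i when sample i belongs to class c.
num_classes = int(labels.max()) + 1
M = torch.zeros(num_classes, len(samples))
M[labels, torch.arange(len(samples))] = 1
# Scale each row by the class count so the matmul yields means, not sums.
M = M / M.sum(dim=1, keepdim=True)

means = M @ samples  # shape: (num_classes, num_features)
print(means)  # class 0 -> [0.0, 0.0], class 1 -> [0.1, 0.1], class 2 -> [0.3, 0.3]
```

A single matrix multiplication replaces the per-class loop, which is what makes this formulation GPU-friendly.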
Why do I get RuntimeError: CUDA error: invalid argument in pytorch? | Recently I've frequently been getting RuntimeError: CUDA error: invalid argument when calling functions like torch.cholesky e.g.:
import torch
a = torch.randn(3, 3, device="cuda:0")
a = torch.mm(a, a.t()) # make symmetric positive-definite
torch.cholesky(a)
This works fine if I use device="cpu" instead. This error i... | I discovered that this error was because the machine I'm running things on has CUDA 10 installed now, but I just installed pytorch as pip install torch. From their website, the proper way to install with pip and CUDA 10 is pip install https://download.pytorch.org/whl/cu100/torch-1.1.0-cp37-cp37m-linux_x86_64.whl.
| https://stackoverflow.com/questions/56156032/ |
Pytorch: How to assign a default value for look-up table using torch tensor | Say that I have two tensors as follows:
a = torch.tensor([[1, 2, 3], [1, 2, 3]])
b = torch.tensor([0, 2, 3, 4])
where b is the lookup value for a such as:
b[a]
will return the value of:
tensor([[2, 3, 4], [2, 3, 4]])
My problem is, what if I only have a look-up table of:
c = torch.tensor([0, 2, 3])
In which... |
Code
# replace values greater than a certain number
def custom_replace(tensor, value, on_value):
# we create a copy of the original tensor,
# because of the way we are replacing them.
res = tensor.clone()
res[tensor>=value] = on_value
return res
a = torch.tensor([[1, 2, 3], [1, 2, 3]])
c =... | https://stackoverflow.com/questions/56164949/ |
Transform List to Tensor more accurately | I want to return a list in the Dataloader.
But to return it, it needs to be a tensor, right?
So I transform it, but in this process information is lost. Is there another way to do this?
pt_tensor_from_list = torch.tensor(pose_transform)
pt_tensor_from_list = torch.FloatTensor(pose_transform)
I expect the output:
... |
You are not losing any information during such conversion. The reason why it looks like that, more compactly, is as when you are printing a tensor, it invokes __str__() or __repr__() methods and it makes your tensor looks more pretty. As you can find here torch.Tensor uses a kind of internal tensor formatter called _t... | https://stackoverflow.com/questions/56166364/ |
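To see that the tensor still holds the full-precision values, you can raise the print precision; this is a small illustration of the formatting point above:

```python
import torch

vals = [0.123456789, 0.000000123]
t = torch.tensor(vals)

print(t)  # default formatting rounds the *display*, not the data
torch.set_printoptions(precision=10)
print(t)  # more digits appear; nothing was lost in the conversion
```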
Pytorch argsort ordered, with duplicate elements in the tensor | I have a vector A = [0,1,2,3,0,0,1,1,2,2,3,3]. I need to sort it in an increasing manner such that it is listed in an ordered fashion and from that extract the argsort. To better explain this I need to sort A such that it returns B = [0,4,5,1,6,7,2,8,9,3,10,11]. However, when I use PyTorch's torch.argsort(A) it retu... | Here is a pure PyTorch based solution leveraging broadcasting, torch.unique(), and torch.nonzero(). This would be of great boost particularly for a GPU based implementation/run, which is not possible if we have to switch back to NumPy, argsort it and then transfer back to PyTorch (as suggested in other approaches).
# ... | https://stackoverflow.com/questions/56176439/ |
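As an aside, recent PyTorch releases (1.9+, after this question was asked) expose a stable flag on torch.sort that keeps equal elements in their original order, which gives the requested result directly:

```python
import torch

A = torch.tensor([0, 1, 2, 3, 0, 0, 1, 1, 2, 2, 3, 3])

# stable=True preserves the relative order of duplicate values
_, B = torch.sort(A, stable=True)
print(B)  # tensor([ 0,  4,  5,  1,  6,  7,  2,  8,  9,  3, 10, 11])
```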
How to reproduce RNN results on several runs? | I call same model on same input twice in a row and I don't get the same result, this model have nn.GRU layers so I suspect that it have some internal state that should be release before second run?
How to reset RNN hidden state to make it the same as if model was initially loaded?
UPDATE:
Some context:
I'm trying t... | I believe this may be highly related to Random Seeding. To ensure reproducible results (as stated by them) you have to seed torch as in this:
import torch
torch.manual_seed(0)
And also, the CuDNN module.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
If you're using numpy, you co... | https://stackoverflow.com/questions/56190274/ |
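The snippets above are often bundled into a single helper; here is a sketch (the function name is my own, and bit-for-bit reproducibility across different hardware is still not guaranteed):

```python
import random

import numpy as np
import torch

def seed_everything(seed: int = 0) -> None:
    random.seed(seed)                  # python RNG
    np.random.seed(seed)               # numpy RNG
    torch.manual_seed(seed)            # CPU (and current GPU) RNG
    torch.cuda.manual_seed_all(seed)   # all GPUs; no-op without CUDA
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

seed_everything(42)
a = torch.randn(3)
seed_everything(42)
b = torch.randn(3)
print(torch.equal(a, b))  # True: same seed, same draws
```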
How to access the predictions of pytorch classification model? (BERT) | I'm running this file:
https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py
This is the prediction code for one input batch:
input_ids = input_ids.to(device)
input_mask = input_mask.to(device)
segment_ids = segment_ids.to(device)
label_ids = label_ids.to(device)
wit... | After looking at this part of the run_classifier.py code:
# copied from the run_classifier.py code
eval_loss = eval_loss / nb_eval_steps
preds = preds[0]
if output_mode == "classification":
preds = np.argmax(preds, axis=1)
elif output_mode == "regression":
preds = np.squeeze(preds)... | https://stackoverflow.com/questions/56201147/ |
Will installing Nvidia CUDA ver10 break CUDA ver9 on the same machine? | I am currently using pytorch and tensorflow with cuda ver9.0. I am pondering whether to install the latest cuda version 10. Will installing cuda v10 break cuda v9? Can both co-exist on the same desktop PC? Is it advisable to uninstall cuda v9 after installing cuda v10 or is it better to leave both versions installed?
... | I will answer my own question. Installing CUDA v10 will not break CUDA v9 on the same machine. Both can co-exist.
I installed CUDA v10 successfully. Pytorch has been tested to work successfully with CUDA v10.
| https://stackoverflow.com/questions/56204127/ |
Error while converting pytorch model to core-ml | C = torch.cat((A,B),1)
shape of tensors:
A is (1, 128, 128, 256)
B is (1, 1, 128, 256)
Expected C value is (1, 129, 128, 256)
This code is working on pytorch, but while converting to core-ml it gives me below error:
"Error while converting op of type: {}. Error message: {}\n".format(node.op_type, err_message, )... | It was coremltools version related issue. Tried with latest beta coremltools 3.0b2.
Following works without any error with latest beta.
import torch
class cat_model(torch.nn.Module):
def __init__(self):
super(cat_model, self).__init__()
def forward(self, a, b):
c = torch.cat((a, b), 1)
... | https://stackoverflow.com/questions/56217454/ |
How to properly convert pytorch LSTM to keras CuDNNLSTM? | I am trying to hand-convert a Pytorch model to Tensorflow for deployment. ONNX doesn't seem to natively go from Pytorch LSTMs to Tensorflow CuDNNLSTMs so that's why I'm writing it by hand.
I've tried the code below:
This is running in an Anaconda environment running Python 2.7, Pytorch 1.0, tensorflow 1.12, cuda9. I'm... | Update: the solution was to convert the torch model into a keras base LSTM model, then call
base_lstm_model.save_weights('1.h5')
cudnn_lstm_model.load_weights('1.h5')
| https://stackoverflow.com/questions/56230511/ |
how to increase number of images with data augmentation | I'm trying to apply data augmentation with pytorch. In particular, I have a dataset of 150 images and I want to apply 5 transformations (horizontal flip, 3 random rotations and vertical flip) to every single image to have 750 images, but with my code I always have 150 images.
'train': transforms.Compose([
transforms... | You're misunderstanding the API. When you add some transform to your dataset, it is essentially a function which is being applied to every sample from that dataset and then returned. transforms.Compose applies sub-transforms sequentially, rather than returning multiple results (with each translation either being applie... | https://stackoverflow.com/questions/56235136/ |
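If the goal really is a 5x larger dataset, one common workaround (a sketch; a TensorDataset stands in for the 150-image folder here) is to concatenate several copies of the dataset, where each copy would carry one of the transforms:

```python
import torch
from torch.utils.data import ConcatDataset, TensorDataset

# Hypothetical stand-in for the 150-image folder: 150 fake 3x32x32 images.
base = TensorDataset(torch.randn(150, 3, 32, 32))

# A Compose pipeline still yields exactly ONE sample per index. To get
# 5 variants per image, build 5 dataset copies (each would use its own
# transform: horizontal flip, the rotations, vertical flip) and join them.
augmented = ConcatDataset([base] * 5)
print(len(augmented))  # 750
```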
Is there a `tensor` operation or function in Pytorch that works like cv2.dilate in OpenCV? | I built several masks through a network. These masks are stored in a torch.tensor variable. I would like to do a cv2.dilate like operation on every channel of the tensor.
I know there is a way that convert the tensor to numpy.ndarray and then apply cv2.dilate to every channel using a for loop. But since there are abou... | I think dilate is essentially conv2d operation in torch. See the code below
import cv2
import numpy as np
import torch
im = np.array([ [0, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 1, 1, 0, 0],
[0, 0, 0, 1, 0],
[0, 0, 0, 0, 0] ], dtype=np.float32)
kernel = np.ar... | https://stackoverflow.com/questions/56235733/ |
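A complete version of that idea, written directly in torch (a sketch: a 3x3 all-ones structuring element, with clamping to keep the output binary):

```python
import torch
import torch.nn.functional as F

im = torch.tensor([[0, 0, 0, 0, 0],
                   [0, 1, 0, 0, 0],
                   [0, 1, 1, 0, 0],
                   [0, 0, 0, 1, 0],
                   [0, 0, 0, 0, 0]], dtype=torch.float32)

kernel = torch.ones(1, 1, 3, 3)  # 3x3 structuring element, all ones

# conv2d sums each 3x3 neighbourhood; clamping to [0, 1] turns the
# counts back into a binary mask, i.e. a dilation.
out = F.conv2d(im.view(1, 1, 5, 5), kernel, padding=1)
dilated = torch.clamp(out, 0, 1).view(5, 5)
print(dilated)
```

Because conv2d is batched, this handles hundreds of channels in one call instead of a per-channel Python loop.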
How to compute gradient of the error with respect to the model input? | Given a simple 2 layer neural network, the traditional idea is to compute the gradient w.r.t. the weights/model parameters. For an experiment, I want to compute the gradient of the error w.r.t the input. Are there existing Pytorch methods that can allow me to do this?
More concretely, consider the following neural net... | It is possible, just set input.requires_grad = True for each input batch you're feeding in, and then after loss.backward() you should see that input.grad holds the expected gradient. In other words, if your input to the model (which you call features in your code) is some M x N x ... tensor, features.grad will be a ten... | https://stackoverflow.com/questions/56238599/ |
Loading custom dataset in pytorch | Normally, when we are loading data in pytorch, we do the following
for x, y in dataloaders:
# Do something
However, in this dataset called MusicNet, they declare their own dataset and dataloader like this
train_set = musicnet.MusicNet(root=root, train=True, download=True, window=window)#, pitch_shift=5, jitter... |
Question 1
I don't understand why the code doesn't work without the line with train_set, test_set.
For you to be able to use the torch.utils.data.DataLoader with a custom dataset design, you must create a class of your dataset which subclasses torch.utils.data.Dataset (and implementing specific functions) an... | https://stackoverflow.com/questions/56238732/ |
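A minimal Dataset subclass showing the two required methods (the data here is random, just to make the sketch runnable):

```python
import torch
from torch.utils.data import DataLoader, Dataset

class MyDataset(Dataset):
    def __init__(self, n=8):
        self.x = torch.randn(n, 3)          # fake features
        self.y = torch.randint(0, 2, (n,))  # fake labels

    def __len__(self):
        return len(self.x)                  # required

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]     # required

loader = DataLoader(MyDataset(), batch_size=4, shuffle=True)
for xb, yb in loader:
    print(xb.shape, yb.shape)  # torch.Size([4, 3]) torch.Size([4])
```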
BucketIterator throws 'Field' object has no attribute 'vocab' | It's not a new question, references I found without any solution working for me first and second.
I'm a newbie to PyTorch, facing AttributeError: 'Field' object has no attribute 'vocab' while creating batches of the text data in PyTorch using torchtext.
Following up the book Deep Learning with PyTorch I wrote the same... | You haven't built the vocab for the LABEL field.
After TEXT.build_vocab(train, ...), run LABEL.build_vocab(train), and the rest will run.
| https://stackoverflow.com/questions/56251267/ |
Pytorch - Inferring linear layer in_features | I am building a toy model to take in some images and give me a classification. My model looks like:
conv2d -> pool -> conv2d -> linear -> linear.
My issue is that when we create the model, we have to calculate the size of the first linear layer in_features based on the size of the input image. If we get ... | As of 1.8, PyTorch now has LazyLinear which infers the input dimension:
A torch.nn.Linear module where in_features is inferred.
| https://stackoverflow.com/questions/56262712/ |
Is it possible to use Keras module in Pytorch script? | I'm creating a network similar to lpcnet introduced by mozilla but in PyTorch. At the one point I need the module src/mdense.py in my script but it is written with keras. Is it possible to import keras module to torch framework? What could be the easiest way? I don't want to re-write the module with PyTorch.
There ... | No, not really, you can import it (as in python import), but Keras code won't work with PyTorch as they use different differentiation methods and they are completely different libraries. The only way is to rewrite the module using PyTorch's API.
| https://stackoverflow.com/questions/56270834/ |
Pytorch RuntimeError: [enforce fail at CPUAllocator.cpp:56] posix_memalign(&data, gAlignment, nbytes) == 0. 12 vs 0 | I'm building a simple content based recommendations system. In order to compute the cosine similarity in a GPU accelerated way, i'm using Pytorch.
At the time of creating the tfidf vocabulary tensor from a csr_matrix, it prompts the following RuntimeError
RuntimeError: [enforce fail at CPUAllocator.cpp:56] posix_mema... | I tried the same piece of code that was throwing this error with the older version of PyTorch. It said that I need to have more RAM. Therefore, it's not a PyTorch bug. The only solution is to reduce matrix size somehow.
| https://stackoverflow.com/questions/56272981/ |
How can I get predictions from these pretrained models? | I've been trying to generate human pose estimations. I came across many pretrained models (e.g. Pose2Seg, deep-high-resolution-net); however, these models only include scripts for training and testing. This seems to be the norm in code written to implement models from research papers. In deep-high-resolution-net I have ... | I'm not doing skeleton detection research, but your problem seems to be general.
(1) I don't think other people should be teaching you from the beginning how to load data and run their code.
(2) For running other people's code, just modify their test script which is provided, e.g.
https://github.com/leoxiaob... | https://stackoverflow.com/questions/56284107/ |
Unexpected data types when trying to train a pytorch model | I'm putting together a basic neural network to learn pytorch. Attempting to train it always fails with the message "Expected object of scalar type Float but got scalar type Double for argument #4 'mat1'". I suspect I'm doing something wrong with putting the data together, but I don't know what.
The data in question i... | PyTorch uses single-precision floats by default.
In the lines:
self.xs = np.array(xs, dtype=np.double)
self.ys = np.array(ys, dtype=np.double)
Replace np.double with np.float32.
| https://stackoverflow.com/questions/56297496/ |
How can I convert numpy.ndarray having type object to torch.tensor? | I'm trying to work on lstm in pytorch. It takes only tensors as the input. The data that I have is in the form of a numpy.object_ and if I convert this to a numpy.float, then it can be converted to tensor.
I checked the data type using print(type(array)) it gives class 'numpy.ndarray' as output and print(arr.dtype.typ... | The pytorch LSTM returns a tuple. So you get this error as your second LSTM layer self.seq2 can not handle this tuple. So,
change
prefix1=self.seq1(input1)
suffix1=self.seq1(input2)
to something like this:
prefix1_out, prefix1_states = self.seq1(input1)
suffix1_out, suffix1_states = self.seq1(input2)
and the... | https://stackoverflow.com/questions/56325104/ |
Pyinstaller executable fails importing torchvision | This is my main.py:
import torchvision
input("Press key")
It runs correctly in the command line: python main.py
I need an executable for windows. So I did : pyinstaller main.py
But when I launched the main.exe, inside /dist/main I got this error:
Traceback (most recent call last):
File "main.py", line 1, in <... | Downgrade torchvision to the previous version fix the error.
pip uninstall torchvision
pip install torchvision==0.2.2.post3
| https://stackoverflow.com/questions/56325181/ |
How does python's map work with torch.tensor? | I am new to python, so I am trying to understand this line from a pytorch tutorial.
x_train, y_train, x_valid, y_valid = map(
torch.tensor, (x_train, y_train, x_valid, y_valid)
)
I understand how map works on a single element
def sqr(a):
return a * a
a = [1, 2, 3, 4]
a = map(sqr, a)
print(list(a))
And ... | map works on a single the same way it works on list/tuple of lists, it fetches an element of the given input regardless what is it.
The reason why torch.tensor works, is that it accepts a list as input.
If you unfold the following line you provided:
x_train, y_train, x_valid, y_valid = map(
torch.tensor, (x_trai... | https://stackoverflow.com/questions/56327213/ |
pytorch masked_fill: why can't I mask all zeros? | I want to mask all the zeros in the score matrix with -np.inf, but I can only get part of the zeros masked. It looks like
you see in the upper right corner there are still zeros that didn't get masked with -np.inf
Here's my codes:
q = torch.Tensor([np.random.random(10),np.random.random(10),np.random.random(10), np... | Your mask is wrong. Try
scores = scores.masked_fill(scores == 0, -np.inf)
scores now looks like
tensor([[1.4796, 1.2361, 1.2137, 0.9487, -inf, -inf],
[0.6889, 0.4428, 0.6302, 0.4388, -inf, -inf],
[0.8842, 0.7614, 0.8311, 0.6431, -inf, -inf],
[0.9884, 0.8430, 0.7982, 0.7323, -in... | https://stackoverflow.com/questions/56328630/ |
Why does PyTorch gather function require index argument to be of type LongTensor? | I'm writing some code in PyTorch and I came across the gather function. Checking the documentation I saw that the index argument takes in a LongTensor, why is that? Why does it need to take in a LongTensor instead of another type such as IntTensor? What are the benefits?
| By default, all indices in pytorch are represented as long (int64) tensors, allowing for indexing of very large tensors with more than 2^31 elements (the maximal value of a "regular" 32-bit int).
| https://stackoverflow.com/questions/56335215/ |
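For illustration, note that indices created with torch.tensor from integer literals already default to int64, while an int32 index must be cast with .long() before gather will accept it:

```python
import torch

src = torch.tensor([[1., 2.], [3., 4.]])
idx = torch.tensor([[0, 0], [1, 0]])  # integer literals -> torch.int64
print(idx.dtype)                      # torch.int64

out = torch.gather(src, 1, idx)       # out[i][j] = src[i][idx[i][j]]
print(out)                            # tensor([[1., 1.], [4., 3.]])

idx32 = idx.to(torch.int32)
out2 = torch.gather(src, 1, idx32.long())  # cast, otherwise gather errors
```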
How to properly display Pytorch's math notation inside docstring? | When looking at the docstring of Pytorch functions, the math notations are not properly displayed, e.g.:
https://pytorch.org/docs/stable/_modules/torch/nn/modules/loss.html
.. math::
\text{loss}(x, class) = weight[class] \left(-x[class] + \log\left(\sum_j \exp(x[j])\right)\right)
Only if I use my IDE to display... | The docs in source code won't render unless you try some script injection via Greasemonkey or so.
But first have a look at the standard docs where you can find the rendered formula.
| https://stackoverflow.com/questions/56337956/ |
Matrix multiplication (element-wise) from numpy to Pytorch | I got two numpy arrays (image and environment map),
MatA
MatB
Both with shapes (256, 512, 3)
When I did the multiplication (element-wise) with numpy:
prod = np.multiply(MatA,MatB)
I got the wanted result (visualize via Pillow when turning back to Image)
But when I did it using pytorch, I got a really stran... | If you read the documentation of transforms.ToTensor() you'll see this transformation does not only convert a numpy array to torch.FloatTensor, but also transpose its dimensions from HxWx3 to 3xHxW.
To "undo" this you'll need to
prodasNp = (prodTensor.permute(2, 0, 1) * 255).to(torch.uint8).numpy()
See permute for ... | https://stackoverflow.com/questions/56342193/ |
free(): invalid pointer Aborted (core dumped) | I-m trying to run my python program it seems that it should run smoothly however I encounter an error that I haven't seen before it says:
free(): invalid pointer
Aborted (core dumped)
However I'm not sure how to try and fix error since it doesn't give me too much information about the problem itself.
At first I th... | Edit: The cause is actually known. The recommended solution is to build both packages from source.
There is a known issue with importing both open3d and PyTorch. The cause is unknown. https://github.com/pytorch/pytorch/issues/19739
A few possible workarounds exist:
(1) Some people have found that changing the orde... | https://stackoverflow.com/questions/56346569/ |
What is the difference between model.training = False and model.param.require_grad = False | What is the difference between these two:
model.training = False
and
for param in model.parameters():
param.require_grad = False
| model.training = False sets the module in evaluation mode, i.e.,
if model.training == True:
# Train mode
if model.training == False:
# Evaluation mode
So, effectively, layers like dropout, batchnorm etc., which behave differently during the train and test procedures, know what is going on and hence can behave accordi... | https://stackoverflow.com/questions/56353381/ |
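A small sketch contrasting the two (note the attribute is spelled requires_grad; the require_grad in the question silently creates a new, unused attribute):

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))

# eval() flips model.training: layer *behaviour* changes
# (dropout disabled, batchnorm uses running stats), grads still flow.
model.eval()
print(model.training)  # False

# Freezing parameters stops gradient tracking but does NOT
# change dropout/batchnorm behaviour.
model.train()
for p in model.parameters():
    p.requires_grad = False
print(all(not p.requires_grad for p in model.parameters()))  # True
```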
Reproducibility and performance in PyTorch | The documentation states:
Deterministic mode can have a performance impact, depending on your model.
My question is, what is meant by performance here. Processing speed or model quality (i.e. minimal loss)? In other words, when setting manual seeds and making the model perform in a deterministic way, does that ca... | Performance refers to the run time; CuDNN has several ways of implementations, when cudnn.deterministic is set to true, you're telling CuDNN that you only need the deterministic implementations (or what we believe they are). In a nutshell, when you are doing this, you should expect the same results on the CPU or the G... | https://stackoverflow.com/questions/56354461/ |
PyTorch: Sigmoid of weights? | I'm new to neural networks/PyTorch. I'm trying to make a net that takes in a vector x, first layer is h_j = w_j^T * x + b_j, output is max_j{h_j}. The only thing is that I want the w_j to be restricted between 0 and 1, by having w_j = S(k*a_j), where S is the sigmoid function, k is some constant, and a_j are the actual... | I am really not sure why would you do that but you can declare a custom layer as below to apply sigmoid to weights.
class NewLayer(nn.Module):
def __init__ (self, input_size, output_size):
super().__init__()
self.W = nn.Parameter(torch.zeros(input_size, output_size))
# kaiming initiali... | https://stackoverflow.com/questions/56364712/ |
Pytorch nn Module generalization | Let us take a look at the simple class:
class Temp1(nn.Module):
def __init__(self, stateSize, actionSize, layers=[10, 5], activations=[F.tanh, F.tanh] ):
super(Temp1, self).__init__()
self.layer1 = nn.Linear(stateSize, layers[0])
self.layer2 = nn.Linear(layers[0], layers[1])
self.... | The problem is that most of the nn.Linear layers in the "generalized" version are stored in a regular pythonic list (self.fcLayers). pytorch does not know to look for nn.Parameters inside regular pythonic members of nn.Module.
Solution:
If you wish to store nn.Modules in a way that pytorch can manage them, you need to... | https://stackoverflow.com/questions/56370283/ |
Can torchtext's BucketIterator pad all batches to the same length? | I recently started using torchtext to replace my glue code and I'm running into an issue where I'd like to use an attention layer in my architecture. In order to do this, I need to know the maximum sequence length of my training data.
The problem is that torchtext.data.BucketIterator does padding on a per-batch basis... | When instantiating a torchtext.data.Field, there's an optional keyword argument called fix_length which, when set, defines the length to which all samples will be padded; by default it is not set which implies flexible padding.
| https://stackoverflow.com/questions/56370964/ |
Pytorch : Results of vector multiplications are different for same input | I did not understand why these multiplication outputs are different.
print(features*weights)
print('------------')
print(features*weights.view(5,1))
print('------------')
print(torch.mm(features,weights.view(5,1)))
outputs:
tensor([[ 0.1314, -0.2796, 1.1668, -0.1540, -2.8442]])
------------
tensor([[ 0.1314, -0.70... | If I'm not wrong, what you are trying to understand is:
features = torch.rand(1, 5)
weights = torch.Tensor([1, 2, 3, 4, 5])
print(features)
print(weights)
# Element-wise multiplication of shape (1 x 5)
# out = [f1*w1, f2*w2, f3*w3, f4*w4, f5*w5]
print(features*weights)
# weights has been reshaped to (5, 1)
# Elemen... | https://stackoverflow.com/questions/56388586/ |
Conv2d not accepting tensor as input, saying it's not a tensor | I want to pass a tensor through a 2D convolutional layer.
I tried using tf.convert_to_tensor() to solve this problem. Didn't work
import numpy as np
import tensorflow as tf
class Generator():
def __in... | You are trying to pass a TensorFlow tensor to a PyTorch function. TensorFlow and PyTorch are separate projects with different data structures which, in general, cannot be used interchangeably in this way.
To convert a NumPy array to a PyTorch tensor, you can use:
import torch
arr = torch.from_numpy(np.random.random((... | https://stackoverflow.com/questions/56390348/ |
Pytorch expected type Long but got type int | I received an error
Expected object of scalar type Long but got scalar type Int for argument #3 'index'
This is from this line.
targets = torch.zeros(log_probs.size()).scatter_(1, targets.unsqueeze(1).data.cpu(), 1)
I am not sure what to do as I tried to convert this to a long using several places. I tried puttin... | The dtype of your index argument (i.e., targets.unsqueeze(1).data.cpu()) needs to be torch.int64.
(The error message can be confusing: "Long" in PyTorch means int64, and torch.long is simply an alias for torch.int64.)
| https://stackoverflow.com/questions/56391202/ |
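A minimal reproduction of the fix (shapes are invented; the point is the .long() cast on the index):

```python
import torch

log_probs = torch.randn(3, 5)
targets = torch.tensor([1, 0, 4], dtype=torch.int32)  # int32 would fail

# scatter_ demands an int64 index, so cast with .long()
onehot = torch.zeros(log_probs.size()).scatter_(
    1, targets.unsqueeze(1).long(), 1)
print(onehot)
```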
Max pooling layers have weights. interesting work | In the pytorch tutorial step of " Deep Learning with PyTorch: A 60 Minute Blitz > Neural Networks "
I have a question: what does params[1] mean in the network?
The reason why I think this is that max pooling does not have any weight values.
for example.
If you write some codes like that
'
def ini... | You are wrong about that. It has nothing to do with max_pooling.
As you can read in the linked tutorial, an nn.Parameter tensor is automatically registered as a parameter when it gets assigned to a Module.
Which, in your case, basically means that everything listed within __init__ is a module, and its parameters can ...
Pytorch neural network (probably) does not learn | my homework is to train a network on a given data set of 3000 frogs, cats and dogs. The network I built doesn't seem to improve at all. Why is that?
The training data x_train is a numpy ndarray of shape (3000,32,32,3).
class Netz(nn.Module):
def __init__(self):
super(Netz, self).__init__()
self.co... | Your loss is fluctuating means your network is not powerful enough to extract meaningful embeddings. I can recommend trying one of these few things.
Add more layers.
Use a smaller learning rate.
Use a larger dataset or use a pre-trained model if you only have a small dataset.
Normalize your dataset.
Shuffle training ... | https://stackoverflow.com/questions/56398768/ |
How to properly Forward the dropout layer | I created the following deep network with dropout layers like below:
class QNet_dropout(nn.Module):
"""
A MLP with 2 hidden layer and dropout
observation_dim (int): number of observation features
action_dim (int): Dimension of each action
seed (int): Random seed
"""
def _... | The F.linear() function used incorrectly. You should use your stated linear function instead of torch.nn.functional. The dropout layer should be after Relu. You can call Relu function from torch.nn.functional.
import torch
import torch.nn.functional as F
class QNet_dropout(nn.Module):
"""
A MLP with 2 h... | https://stackoverflow.com/questions/56401266/ |
Autoencoder model either oscillates or doesn't converge on MNIST dataset | Already ran the code 3 months ago with intended results. Changed nothing. Tried troubleshooting by using codes from (several) earlier versions, including among the earliest (which definitely worked). The problem persists.
# 4 - Constructing the undercomplete architecture
class autoenc(nn.Module):
def __init__(self... | You should do optimizer.zero_grad() before you do loss.backward() since the gradients accumulate. This is most likely causing the issue.
The general order to be followed during training phase :
optimizer.zero_grad()
output = model(input)
loss = criterion(output, label)
loss.backward()
optimizer.step()
Also, the val... | https://stackoverflow.com/questions/56402753/ |
Why do we pass nn.Module as an argument to class definition for neural nets? | I want to understand why we pass torch.nn.Module as a argument when we define the class for a neural network like GAN's
import torch
import torch.nn as nn
class Generator(nn.Module):
def __init__(self, input_size, hidden_size, output_size, f):
super(Generator, self).__init__()
self.map1 = nn.Lin... | This line
class Generator(nn.Module):
simple means the Generator class will inherit the nn.Module class, it is not an argument.
However, the dunder init method:
def __init__(self, input_size, hidden_size, output_size, f):
Has self which is why you may consider this as an argument.
Well this is Python class ins... | https://stackoverflow.com/questions/56405652/ |
Implementing FFT with Pytorch | I am trying to implement FFT by using the conv1d function provided in Pytorch.
Generating artifical signal
import numpy as np
import torch
from torch.autograd import Variable
from torch.nn.functional import conv1d
from scipy import fft, fftpack
import matplotlib.pyplot as plt
%matplotlib inline
# Creating filters... | You calculated the power rather than the amplitude.
You simply need to add the line zx = zx.pow(0.5) to take the square root to get the amplitude.
| https://stackoverflow.com/questions/56408603/ |
In PyTorch/Numpy, how to multiply rows of a matrix with "matrices" in a 3-D tensor? | For example, a = torch.Tensor([[1,2],[3,4]]) (for numpy it is just a = np.array([[1,2],[3,4]])), and b = torch.ones((2,2,2)),
I would like to multiply every row of a with the two 2x2 matrices, and get a new matrix [[3,3],[7,7]] (i.e. [1,2]*[[1,1],[1,1]]=[3,3], [3,4]*[[1,1],[1,1]]=[7,7]). Is it possible to achieve this?... | I consider this an ugly solution, but perhaps this is what you want to achieve:
a = torch.Tensor([[1,2],[3,4]])
b = torch.ones((2,2,2))
A = torch.mm(a[0].view(-1, 2), b[0])
B = torch.mm(a[1].view(-1, 2), b[1])
res = torch.cat([A, B], dim=0)
print(res)
output:
tensor([[3., 3.],
[7., 7.]])
| https://stackoverflow.com/questions/56411257/ |
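For what it's worth, the same result can be had without manual concatenation via batched matrix multiplication (torch.bmm), treating each row of a as a 1x2 matrix:

```python
import torch

a = torch.Tensor([[1, 2], [3, 4]])
b = torch.ones((2, 2, 2))

# (2, 1, 2) @ (2, 2, 2) -> (2, 1, 2), then drop the middle dim
res = torch.bmm(a.unsqueeze(1), b).squeeze(1)
print(res)
# tensor([[3., 3.],
#         [7., 7.]])
```

Unlike the per-row version, this scales to any batch size without extra code.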
Pytorch's packed_sequence/pad_sequence pads tensors vertically for list of tensors | I am trying to pad sequence of tensors for LSTM mini-batching, where each timestep in the sequence contains a sub-list of tensors (representing multiple features in a single timestep).
For example, sequence 1 would have 3 timesteps and within each timestep there are 2 features. An example below would be:
Sequence 1 =... | Simply change
result = rnn_utils.pad_sequence([a, b, c])
to
result = rnn_utils.pad_sequence([a, b, c], batch_first=True)
seq1 = result[0]
seq2 = result[1]
seq3 = result[2]
By default, batch_first is False. Output will be in B x T x * if True, or in T x B x * otherwise, where
B is batch size. It is equal to the... | https://stackoverflow.com/questions/56412810/ |
RuntimeError: input must have 2 dimensions, got 3 | I am trying to run a sequence of bigrams into an LSTM. This involves:
2.Padding the sequences using pad_sequence
3.Inputting the padded sequences in the embedding layer
4.Packing the output of the embedding layer
5.Inserting the pack into the LSTM.
class LSTMClassifier(nn.Module):
# LSTM initialization
def... | Finally found solution after 5 hours of browsing...
def forward(self, input):
input = input.unsqueeze(0).unsqueeze(0)
# Initializing hidden state for first input with zeros
h0 = torch.zeros(1, input.size(0), 128)
# Initializing cell state for first input with zeros
c0 = torch.zeros(1, input.si... | https://stackoverflow.com/questions/56418580/ |
what is the right usage of _extra_files arg in torch.jit.save | one option I tried is pickling vocab and saving with extrafiles arg
import torch
import pickle
class Vocab(object):
pass
vocab = Vocab()
pickle.dump(vocab, open('path/to/vocab.pkl', 'wb'))
m = torch.jit.ScriptModule()
## I am not sure about the usage of this arg, the docs didn't help me
extra_files = torch._C.ExtraFil... | I believe that the documentation for torch.jit.load is incorrect. You need to create an ExtraFilesmap() object to load the saved files.
The following is an example of how I got things to work:
Step 1: Save model
extra_files = torch._C.ExtraFilesMap()
extra_files['foo.txt'] = 'bar'
traced_script_module.save(serialize... | https://stackoverflow.com/questions/56418938/ |
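A hedged round-trip sketch: on recent PyTorch versions a plain dict also works for `_extra_files`, in both `torch.jit.save` and `torch.jit.load` (the module and file name below are made up; the loaded value may come back as `str` or `bytes` depending on version):

```python
import io
import torch

class Plus1(torch.nn.Module):
    def forward(self, x):
        return x + 1

m = torch.jit.script(Plus1())
buf = io.BytesIO()
torch.jit.save(m, buf, _extra_files={'vocab.txt': 'hello'})

buf.seek(0)
extra = {'vocab.txt': ''}
loaded = torch.jit.load(buf, _extra_files=extra)
# extra['vocab.txt'] now holds the stored payload
```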
When using Conv2D in PyTorch, does padding or dilation happen first? | Consider the following bit of code:
torch.nn.Conv2d(1, 1, 2, padding = 1, dilation = 2)
Which of the following two cases is a correct interpretation?
| If you look at the bottom of the nn.Conv2d documentation you'll see the formula used to compute the output size of the conv layer:
Notice how padding is not affected by the value of dilation. I suppose this indicates a "pad first" approach.
| https://stackoverflow.com/questions/56420160/ |
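A quick check of the docs formula with the question's layer (input size 5x5 is made up): H_out = floor((H_in + 2*padding - dilation*(kernel-1) - 1)/stride + 1), so padding enters independently of dilation:

```python
import torch

conv = torch.nn.Conv2d(1, 1, kernel_size=2, padding=1, dilation=2)
x = torch.randn(1, 1, 5, 5)
y = conv(x)
# (5 + 2*1 - 2*(2-1) - 1) // 1 + 1 = 5
print(y.shape)  # torch.Size([1, 1, 5, 5])
```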
Import from Github : How to fix ImportError | I want to use the open source person re-identification library in Python
on Ubuntu 19.04
with Anaconda
no CUDA
in the PyCharm terminal (or not)
Python version 3.7.3
PyTorch version 1.1.0
For that I have to follow the instructions from their git repository:
git clone https://github.com/Cysu/open-reid.git
cd open-reid
... | Since the directory structure is as below:
/(root)-->|
|
|-->reid |--> (contents inside reid)
|
|
|-->examples | -->softmax_loss.py
|
|-->(Other contents in root directory)
It can be observed that reid is not in the same directo... | https://stackoverflow.com/questions/56429621/ |
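A self-contained sketch of the usual fix: put the repo root on sys.path so `reid` becomes importable from `examples/`. Here a minimal `reid` package is recreated in a temp dir for the demo (the `VERSION` attribute is made up, not part of open-reid):

```python
import os
import sys
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'reid'))
with open(os.path.join(root, 'reid', '__init__.py'), 'w') as f:
    f.write("VERSION = '0.1'\n")

# Adding the repo root to sys.path makes the package importable from anywhere,
# e.g. sys.path.insert(0, <path to open-reid>) at the top of softmax_loss.py.
sys.path.insert(0, root)
import reid
print(reid.VERSION)  # 0.1
```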
Assigning values to torch tensors | I'm trying to assign some values to a torch tensor. In the sample code below, I initialized a tensor U and try to assign a tensor b to its last 2 dimensions. In reality, this is a loop over i and j that solves some relation for a number of training data (here 10) and assigns it to its corresponding location.
import to... | Your b tensor has too much dimensions.
U[:, :, i, j] has a [10, 1] shape (try U[:, :, i, j].shape)
Use b = torch.rand([10, 1]) instead.
| https://stackoverflow.com/questions/56435110/ |
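A small sketch of the shape rule behind the answer: `U[:, :, i, j]` selects a (10, 1) slice, so the right-hand side must be (10, 1) or broadcastable to it (tensor sizes follow the question):

```python
import torch

U = torch.zeros(10, 1, 4, 4)
b = torch.rand(10, 1)

U[:, :, 0, 0] = b  # shapes match: (10, 1) = (10, 1)
print(torch.equal(U[:, :, 0, 0], b))  # True
```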
How to access the network weights while using PyTorch 'nn.Sequential'? | I'm building a neural network and I don't know how to access the model weights for each layer.
I've tried
model.input_size.weight
Code:
input_size = 784
hidden_sizes = [128, 64]
output_size = 10
# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
nn... | I've tried many ways, and it seems that the only way is by naming each layer by passing OrderedDict
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
('fc1', nn.Linear(input_size, hidden_sizes[0])),
('relu1', nn.ReLU()),
('fc2', nn.Linear(hid... | https://stackoverflow.com/questions/56435961/ |
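For completeness, a plain `nn.Sequential` can also be indexed by position, and `named_parameters()` exposes every weight under auto-generated names ('0.weight', '0.bias', ...); a sketch with the question's layer sizes:

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(),
                      nn.Linear(128, 64), nn.ReLU(),
                      nn.Linear(64, 10))

w0 = model[0].weight  # first Linear layer, indexed by position
names = [name for name, _ in model.named_parameters()]
print(names)  # ['0.weight', '0.bias', '2.weight', '2.bias', '4.weight', '4.bias']
```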
Getting alignment/attention during translation in OpenNMT-py | Does anyone know how to get the alignment weights when translating in OpenNMT-py? Usually the only output is the resulting sentences and I have tried to find a debugging flag or similar for the attention weights. So far, I have been unsuccessful.
| You can get the attention matrices. Note that it is not the same as alignment which is a term from statistical (not neural) machine translation.
There is a thread on github discussing it. Here is a snippet from the discussion. When you get the translations from the model, the attentions are in the attn field.
import o... | https://stackoverflow.com/questions/56440732/ |
Getting the gradients of a model trained in OpenNMT-py | When training a model using OpenNMT-py, we get a dict as output, containing the weights and biases of the network. However, these tensors have requires_grad = False, and so, do not have a gradient. For example. with one layer, we might have the following tensors, denoting embeddings as well as weights and biases in the... | The gradients are accessible only inside the training loop, where optim.step() is called. If you want to log the gradients (or norm of gradients or whatever) to TensorBoard, you can probably best get them before the optimizer step is called. It happens in the _gradient_accumulation method of the Trainer object.
Be awa... | https://stackoverflow.com/questions/56447123/ |
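A generic sketch of the idea (not OpenNMT-py's actual Trainer code): gradients are populated by `backward()` and can be inspected, e.g. for TensorBoard logging, right before `optimizer.step()` is called:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(8, 4), torch.randn(8, 2)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()

# gradients exist here, before the step consumes them
grad_norm = model.weight.grad.norm().item()
opt.step()
```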
How to calculate output sizes after a convolution layer in a configuration file? | I'm new to convolutional neural networks and wanted to know how to calculate or figure out the output sizes between layers of a model given a configuration file for pytorch similar to those following instructions in this link.
Most of the stuff I've already looked at hasn't been very clear and concise. How am I suppo... | In short, there is a common formula for output dims calculation:
You can find an explanation in A guide to receptive field arithmetic for Convolutional Neural Networks.
In addition, I'd like to recommend amazing article A guide to convolution arithmetic for deep learning.
And this repo conv_arithmetic with convolutio... | https://stackoverflow.com/questions/56450969/ |
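The common formula mentioned above can be written as a small helper (the function name and the example sizes are made up):

```python
def conv_out(n, k, s=1, p=0, d=1):
    """Output size of one spatial dim: floor((n + 2p - d(k-1) - 1)/s + 1)."""
    return (n + 2 * p - d * (k - 1) - 1) // s + 1

print(conv_out(28, 3))            # 26: 3x3 conv, no padding
print(conv_out(28, 3, s=2, p=1))  # 14: strided conv halves the size
```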
TypeError: can't assign a str to a torch.LongTensor pytorch | I'm trying to convert a list of strings into tensors but I get this error
lengths = [len(cap) for cap in captions]
targets = torch.zeros(len(captions), max(lengths)).long()
for i, cap in enumerate(captions):
end = lengths[i]
targets[i, :end] = cap[:end]
| You can use python's ord to convert characters to their unicode:
targets[i, :end] = torch.from_numpy(np.array(list(map(ord, cap[:end])))).to(torch.long)
| https://stackoverflow.com/questions/56452900/ |
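The same fix works without numpy: convert each caption's characters to their unicode code points with `ord` before assigning into the LongTensor (the example captions are made up):

```python
import torch

captions = ["cat", "horse"]
lengths = [len(cap) for cap in captions]
targets = torch.zeros(len(captions), max(lengths), dtype=torch.long)
for i, cap in enumerate(captions):
    targets[i, :lengths[i]] = torch.tensor([ord(ch) for ch in cap])

print(targets[0])  # tensor([ 99,  97, 116,   0,   0])
```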