| instruction | input | output | source |
|---|---|---|---|
After PyTorch upgrade, my model is giving almost random output | I trained, tested and am still using a model in "Pytorch 0.4.1". It was, and still is, working fine (output is what it should be) if I use PyTorch 0.4.1.
But as I upgrade to version 1.0.1, every time I try to evaluate the same input image, I get a different output (it's a regression model).
I tried to see what has been changed in those... | It was because of the dropout layer. model.eval() disables the dropout layer. Pretty simple.
But now, after the PyTorch upgrade, if Dropout is not defined explicitly in the model's __init__ function, it will not get disabled during eval.
At least this was the reason in my case.
| https://stackoverflow.com/questions/54984345/ |
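A minimal sketch of the fix described above (class and names are illustrative): when Dropout is registered as a module attribute in __init__, model.eval() switches it off and inference becomes deterministic.

```python
import torch
import torch.nn as nn

# Dropout must be registered as a module attribute so that
# model.eval() can switch it off.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)
        self.drop = nn.Dropout(p=0.5)  # registered -> toggled by eval()

    def forward(self, x):
        return self.drop(self.fc(x))

net = Net()
net.eval()  # disables the registered dropout layer
x = torch.ones(1, 4)
with torch.no_grad():
    out1 = net(x)
    out2 = net(x)
# In eval mode the output is deterministic for a fixed input.
```

If dropout is instead applied via torch.nn.functional.dropout without wiring up the training flag, eval() has no effect on it, which matches the behavior described in the answer.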
How does pytorch's nn.Module register submodule? |
When I read the source code (Python) of torch.nn.Module, I found the
attribute self._modules has been used in many functions like
self.modules(), self.children(), etc. However, I didn't find any functions
updating it. So, where will the self._modules be updated?
Furthermore, how does pytorch's nn.Module regi... | The modules and parameters are usually registered by setting an attribute on an instance of nn.Module.
Particularly, this kind of behavior is implemented by customizing the __setattr__ method:
def __setattr__(self, name, value):
def remove_from(*dicts):
for d in dicts:
if name in d:... | https://stackoverflow.com/questions/54994658/ |
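A small sketch of this registration behavior (the class name is made up): assigning an nn.Module instance as an attribute routes through nn.Module.__setattr__, which files it under self._modules.

```python
import torch.nn as nn

# Assigning an nn.Module as an attribute triggers nn.Module.__setattr__,
# which registers it in self._modules; plain attributes are left alone.
class Wrapper(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(3, 2)   # registered automatically
        self.note = "not a module"     # plain attribute, not registered

w = Wrapper()
registered = dict(w.named_children())  # reads from self._modules
```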
pdb cannot debug into backward hooks | Here is my code.
import torch
v = torch.tensor([0., 0., 0.], requires_grad=True)
x = 1
def f(grad):
global x
x = 2
return grad * 2
h = v.register_hook(f) # double the gradient
v.backward(torch.tensor([1., 2., 3.]))
h.remove()
print(v.grad)
When I debug with pdb, I find that I cannot break in function f ... | You can try ipdb https://pypi.org/project/ipdb/ instead of pdb.
| https://stackoverflow.com/questions/54998448/ |
Pytorch TypeError: eq() received an invalid combination of arguments | num_samples = 10
def predict(x):
sampled_models = [guide(None, None) for _ in range(num_samples)]
yhats = [model(x).data for model in sampled_models]
mean = torch.mean(torch.stack(yhats), 0)
return np.argmax(mean.numpy(), axis=1)
print('Prediction when network is forced to predict')
correct = 0
total =... | You are trying to compare predicted and labels. However, your predicted is an np.array while labels is a torch.tensor, so eq() (the == operator) cannot compare them.
Replace the np.argmax with torch.argmax:
return torch.argmax(mean, dim=1)
And you should be okay.
| https://stackoverflow.com/questions/54999926/ |
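A hedged sketch of the suggested fix, with made-up values standing in for the stacked model predictions:

```python
import torch

# 'mean' stands in for the averaged model outputs from the question.
mean = torch.tensor([[0.1, 0.9],
                     [0.8, 0.2],
                     [0.3, 0.7]])
labels = torch.tensor([1, 0, 1])

predicted = torch.argmax(mean, dim=1)  # stays a tensor, unlike np.argmax
correct = (predicted == labels).sum().item()  # == now works tensor-to-tensor
```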
Why does creating a single tensor on the GPU take 2.5 seconds in PyTorch? | I'm just going through the beginner tutorial on PyTorch and noticed that one of the many different ways to put a tensor (basically the same as a numpy array) on the GPU takes a suspiciously long amount of time compared to the other methods:
import time
import torch
if torch.cuda.is_available():
print('time =', time.time(... | When I re-order the commands, whatever command is on top takes 2.5 seconds. So this leads me to believe there is a delayed one-time setup of the device happening here, and future on-GPU allocations will be faster.
| https://stackoverflow.com/questions/55009297/ |
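A rough sketch of how to confirm this (it falls back to CPU when no GPU is present): time the same allocation twice, synchronizing before reading the clock, since CUDA kernel launches are asynchronous.

```python
import time
import torch

def time_tensor_creation(device):
    """Time creating a tensor on `device`, synchronizing on CUDA so the
    asynchronous kernel actually finishes before the clock is read."""
    start = time.time()
    t = torch.ones(1000, device=device)
    if device.type == "cuda":
        torch.cuda.synchronize()
    return t, time.time() - start

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
t1, first = time_tensor_creation(device)   # includes one-time CUDA setup on a GPU
t2, second = time_tensor_creation(device)  # typically much faster on a GPU
```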
Trying to print class names for dog breed but it keeps saying list index out of range | I am using a resnet model to classify dog breeds but when I try to print out an image with the label of dog breed it says list index out of range.
Here is my code:
import torchvision.models as models
import torch.nn as nn
model_transfer = models.resnet18(pretrained=True)
if use_cuda:
model_transfer = model_tra... | I suppose you have several issues that can be fixed using 13 chars.
First, I suggest what @Alekhya Vemavarapu suggested - run your code with a debugger to isolate each line and inspect the output. This is one of the greatest benefits of dynamic graphs with pytorch.
Secondly, the most probable cause for your issue is ... | https://stackoverflow.com/questions/55012880/ |
Why doesn't the learning rate (LR) go below 1e-08 in pytorch? | I am training a model. To overcome overfitting I have done optimization, data augmentation, etc. I have an updated LR (I tried both SGD and Adam), and when there is a plateau (I also tried step), the learning rate is decreased by a factor until it reaches LR 1e-08 but won't go below that, and my model's validat... | Personally I'm not aware of a lower limit on the learning rate (other than 0.0). But you can achieve the effect of a lower learning rate by reducing the loss before computing the backwards pass:
outputs = model(batch)
loss = criterion(outputs, targets)
# Equivalent to lowering the learning rate by a factor of 100
los... | https://stackoverflow.com/questions/55026544/ |
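A minimal sketch checking this equivalence on a toy linear model (all names are illustrative): scaling the loss by 0.01 scales every gradient by 0.01, which for plain SGD matches dividing the learning rate by 100.

```python
import torch
import torch.nn as nn

model = nn.Linear(2, 1)
criterion = nn.MSELoss()
batch = torch.randn(4, 2)
targets = torch.randn(4, 1)

# Backward pass on the scaled loss.
loss = criterion(model(batch), targets)
(loss * 0.01).backward()
grad_scaled = model.weight.grad.clone()

# Backward pass on the unscaled loss, same weights and data.
model.zero_grad()
loss = criterion(model(batch), targets)
loss.backward()
grad_full = model.weight.grad.clone()
# grad_scaled is exactly 0.01 * grad_full.
```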
Pre-Trained Models in Keras,TorchVision | I Have The Following Code which use pre-trained ResNet50 Model in Keras with imagenet DataSet:
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np
model = ResNet50(weights='imagenet')
pri... | I've also been exploring Tensorflow's pretrained model landscape and (as of 1/14/2020), solutions don't exist for 1) a mnist-pretrained lenet or 2) a cifar10-pretrained 32-layer resnet.
Honestly, I strongly doubt that most frameworks release a pretrained model for LeNet-5. It's extremely small and usually takes O(minu... | https://stackoverflow.com/questions/55030766/ |
How to do 2-layer nested FOR loop in PYTORCH? | I am learning to implement the Factorization Machine in Pytorch.
And there should be some feature crossing operations.
For example, I've got three features [A,B,C]; after embedding, they are [vA,vB,vC], so the feature crossing is "[vA·vB], [vA·vC], [vB·vC]".
I know this operation can be simplified by the following:
... | You can
cross_vec = (feature_emb[:, None, ...] * feature_emb[..., None, :]).sum(dim=-1)
This should give you cross_vec of shape (batch_size, feature_len, feature_len).
Alternatively, you can use torch.bmm
cross_vec = torch.bmm(feature_emb, feature_emb.transpose(1, 2))
| https://stackoverflow.com/questions/55038022/ |
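A quick sketch verifying that the two formulations above agree on random data (shape names follow the answer):

```python
import torch

batch_size, feature_len, emb_dim = 2, 3, 4
feature_emb = torch.randn(batch_size, feature_len, emb_dim)

# Broadcasting version from the answer.
cross_a = (feature_emb[:, None, ...] * feature_emb[..., None, :]).sum(dim=-1)

# Batched matrix multiply version.
cross_b = torch.bmm(feature_emb, feature_emb.transpose(1, 2))
```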
PyTorch conversion between tensor and numpy array: the addition operation | I am following the 60-minute blitz on PyTorch but have a question about conversion of a numpy array to a tensor. Tutorial example here.
This piece of code:
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)
yields
[2. 2. 2. 2. 2.]
tensor([2., 2., 2., 2., 2.], dtype=torch.f... | This actually has little to do with PyTorch. Compare
import numpy as np
a = np.ones(5)
b = a
followed by either
np.add(a, 1, out=a)
print(b)
or
a = a + 1
print(b)
There is a difference between np.add(a, 1, out=a) and a = a + 1. In the former you retain the same object (array) a with different values (2 instead... | https://stackoverflow.com/questions/55040217/ |
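A runnable sketch of the distinction (values follow the tutorial example):

```python
import numpy as np
import torch

# torch.from_numpy shares memory with the array, so in-place numpy ops
# are visible through the tensor; `a = a + 1` rebinds the name to a
# brand-new array and leaves the tensor's buffer untouched.
a = np.ones(5)
b = torch.from_numpy(a)

np.add(a, 1, out=a)          # in place: b sees the change
shared_value = b[0].item()   # 2.0

a = a + 1                    # rebinding: b still points at the old buffer
after_rebind = b[0].item()   # still 2.0
```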
Problems passing tensor to linear layer - Pytorch | I'm trying to build a neural net however I can't figure out where I'm going wrong with the max pooling layer.
self.embed1 = nn.Embedding(256, 8)
self.conv_1 = nn.Conv2d(1, 64, (7,8), padding = (0,0))
self.fc1 = nn.Linear(64, 2)
def forward(self,x):
import pdb; pdb.set_trace()
x = self.embed1(x) #... | You use of torch.max returns two outputs: the max value along dim=0 and the argmax along that dimension. Thus, you need to pick only the first output. (you might want to consider using adaptive max pooling for this task).
Your linear layer expects its input to have dim 64 (that is batch_size-by-64 shaped tensor). Howe... | https://stackoverflow.com/questions/55040412/ |
How does Pytorch Dataloader handle variable size data? | I have a dataset that looks like below. That is the first item is the user id followed by the set of items which is clicked by the user.
0 24104 27359 6684
0 24104 27359
1 16742 31529 31485
1 16742 31529
2 6579 19316 13091 7181 6579 19316 13091
2 6579 19316 13091 7181 ... | So how do you handle the fact that your samples are of different length? torch.utils.data.DataLoader has a collate_fn parameter which is used to transform a list of samples into a batch. By default it does this to lists. You can write your own collate_fn, which for instance 0-pads the input, truncates it to some predef... | https://stackoverflow.com/questions/55041080/ |
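A hedged sketch of such a collate_fn, using a made-up toy dataset shaped like the click sequences in the question:

```python
import torch
from torch.utils.data import DataLoader

# Variable-length click sequences (illustrative values).
sequences = [torch.tensor([24104, 27359, 6684]),
             torch.tensor([24104, 27359]),
             torch.tensor([16742, 31529, 31485])]

def pad_collate(batch):
    """Zero-pad a list of 1-D tensors to the longest one in the batch."""
    max_len = max(len(s) for s in batch)
    padded = torch.zeros(len(batch), max_len, dtype=torch.long)
    for i, s in enumerate(batch):
        padded[i, :len(s)] = s
    return padded

loader = DataLoader(sequences, batch_size=3, collate_fn=pad_collate)
batch = next(iter(loader))   # one rectangular (3, 3) batch
```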
PyTorch model validation: The size of tensor a (32) must match the size of tensor b (13) | I am a very beginner in case of machine learning. So for learning purpose I am trying to develop a simple CNN to classify chess pieces. The net already works and I can train it but I have a problem with my validation function.
I can't compare my prediction with my target_data because my prediction is only a tensor of... | Simply run "argmax" on the target as well:
_, target = torch.max(target.data, 1)
Or better yet, just keep the target around as [example_1_class, example_2_class, ...], instead of 1-hot encoding.
| https://stackoverflow.com/questions/55046831/ |
Unexpected key(s) in state_dict: "model", "opt" | I'm currently using fast.ai to train an image classifier model.
data = ImageDataBunch.single_from_classes(path, classes, ds_tfms=get_transforms(), size=224).normalize(imagenet_stats)
learner = cnn_learner(data, models.resnet34)
learner.model.load_state_dict(
torch.load('stage-2.pth', map_location="cpu")
)
which... | My strong guess is that stage-2.pth contains two top-level items: the model itself (its weights) and the final state of the optimizer which was used to train it. To load just the model, you need only the former. Assuming things were done in the idiomatic PyTorch way, I would try
learner.model.load_state_dict(
torc... | https://stackoverflow.com/questions/55047065/ |
converting list of tensors to tensors pytorch | I have a list of tensors where each tensor has a different size. How can I convert this list of tensors into a single tensor using PyTorch?
For instance,
x[0].size() == torch.Size([4, 8])
x[1].size() == torch.Size([4, 7]) # different shapes!
This:
torch.tensor(x)
Gives the error:
ValueError: only one element tensors can be c... | You might be looking for cat.
However, tensors cannot hold variable length data.
For example, here we have a list with two tensors that have different sizes (in their last dim (dim=2)) and we want to create a larger tensor consisting of both of them, so we can use cat and create a larger tensor containing both of their d... | https://stackoverflow.com/questions/55050717/ |
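For completeness, a sketch of a common workaround when you do want one rectangular tensor: zero-pad each tensor to the largest last dimension and then stack (the padding step is an assumption, not something the answer prescribes):

```python
import torch
import torch.nn.functional as F

# Tensors with mismatched last dims cannot be stacked directly.
x = [torch.ones(4, 8), torch.ones(4, 7)]

max_cols = max(t.size(1) for t in x)
# F.pad with (left, right) pads the last dimension.
padded = [F.pad(t, (0, max_cols - t.size(1))) for t in x]
stacked = torch.stack(padded)   # shape (2, 4, 8)
```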
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 3 and 1 in dimension 1 | While training the resnet50 model through pytorch I got this error:
RuntimeError: invalid argument 0: Sizes of tensors must match except
in dimension 0. Got 3 and 1 in dimension 1 at
/pytorch/aten/src/TH/generic/THTensorMoreMath.cpp:1333
I'm using this: http://github.com/Helias/Car-Model-Recognition/
with th... | I solved this; the problem was the images' different color channels. Not all the images were RGB, so I added a conversion in dataset.py. I changed this:
im = Image.open(image_path)
into this:
im = Image.open(image_path).convert('RGB')
| https://stackoverflow.com/questions/55054009/ |
Pytorch batch matrix vector outer product | I am trying to generate a vector-matrix outer product (tensor) using PyTorch. Assuming the vector v has size p and the matrix M has size qXr, the result of the product should be pXqXr.
Example:
#size: 2
v = [0, 1]
#size: 2X3
M = [[0, 1, 2],
[3, 4, 5]]
#size: 2X2X3
v*M = [[[0, 0, 0],
[0, 0, 0]],
[... | You can use torch.einsum operator:
torch.einsum('bp,bqr->bpqr', v, M) # batch-wise operation v.shape=(b,p) M.shape=(b,q,r)
torch.einsum('p,qr->pqr', v, M) # cross-batch operation
| https://stackoverflow.com/questions/55054127/ |
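A small check of the cross-batch form against the worked example in the question:

```python
import torch

# v of size 2, M of size 2x3 -> result of size 2x2x3.
v = torch.tensor([0., 1.])
M = torch.tensor([[0., 1., 2.],
                  [3., 4., 5.]])

out = torch.einsum('p,qr->pqr', v, M)
# out[0] = 0 * M (all zeros), out[1] = 1 * M (a copy of M).
```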
PyTorch PermissionError: [Errno 13] Permission denied: '/.torch' | I'm running a PyTorch based ML program for image classification using Resnet50 model for transfer learning. I am getting below error regarding permission.
Traceback (most recent call last):
File "imgc_pytorch.py", line 67, in
model = models.resnet50(pretrained=True)
    File "/opt/conda/lib/python3.6/sit... | You can change model_zoo.load_url(model_urls['resnet50']) to model_zoo.load_url(model_urls['resnet50'], model_dir='~/.torch/') like this
| https://stackoverflow.com/questions/55073757/ |
Extract features from last hidden layer Pytorch Resnet18 | I am implementing an image classifier using the Oxford Pet dataset with the pre-trained Resnet18 CNN.
The dataset consists of 37 categories with ~200 images in each of them.
Rather than using the final fc layer of the CNN as output to make predictions I want to use the CNN as a feature extractor to classify the pets.
F... | This is probably not the best idea, but you can do something like this:
#assuming model_ft is trained now
model_ft.fc_backup = model_ft.fc
model_ft.fc = nn.Sequential() #empty sequential layer does nothing (pass-through)
# or model_ft.fc = nn.Identity()
# now you use your network as a feature extractor
I also checked ... | https://stackoverflow.com/questions/55083642/ |
How to save and load random number generator state in Pytorch? | I am training a DL model in Pytorch, and want to train my model in a deterministic way.
As written in this official guide, I set random seeds like this:
np.random.seed(0)
torch.manual_seed(0)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
Now, my training is long and i want to save,... | You can use torch.get_rng_state and torch.set_rng_state
When calling torch.get_rng_state you will get your random number generator state as a torch.ByteTensor.
You can then save this tensor somewhere in a file and later you can load and use torch.set_rng_state to set the random number generator state.
When using... | https://stackoverflow.com/questions/55097671/ |
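A minimal sketch of the save/restore round trip (here the state stays in memory; torch.save/torch.load would persist it to disk):

```python
import torch

# Snapshot the CPU RNG state, draw numbers, restore, and the
# same numbers come back -- which is what checkpointing needs.
torch.manual_seed(0)
state = torch.get_rng_state()   # a torch.ByteTensor snapshot
first = torch.rand(3)

torch.set_rng_state(state)      # rewind the generator
second = torch.rand(3)          # identical draw
```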
Build Pytorch from source | I'm trying to install Pytorch from source on my MacOS (version 10.14.3) to use GPU. I have follow the documentation from this link. When I launch in my terminal the MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install I'm getting the following error in my terminal:
[ 69%] Built target caffe2_obs... | I was encountering this problem because Nvidia is incompatible with OSX Mojave 10.14+
| https://stackoverflow.com/questions/55107466/ |
Finding non-intersection of two pytorch tensors | Thanks everyone in advance for your help! What I'm trying to do in PyTorch is something like numpy's setdiff1d. For example given the below two tensors:
t1 = torch.tensor([1, 9, 12, 5, 24]).to('cuda:0')
t2 = torch.tensor([1, 24]).to('cuda:0')
The expected output should be (sorted or unsorted):
torch.tensor([9, 12, ... | If you don't want to leave CUDA, a workaround could be:
t1 = torch.tensor([1, 9, 12, 5, 24], device = 'cuda')
t2 = torch.tensor([1, 24], device = 'cuda')
indices = torch.ones_like(t1, dtype = torch.uint8, device = 'cuda')
for elem in t2:
indices = indices & (t1 != elem)
intersection = t1[indices]
| https://stackoverflow.com/questions/55110047/ |
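A CPU sketch of the masking loop from the answer (torch.bool is used instead of torch.uint8, which newer PyTorch versions prefer for masks):

```python
import torch

t1 = torch.tensor([1, 9, 12, 5, 24])
t2 = torch.tensor([1, 24])

# Start with everything kept, then knock out each element of t2.
mask = torch.ones_like(t1, dtype=torch.bool)
for elem in t2:
    mask = mask & (t1 != elem)
difference = t1[mask]   # elements of t1 not in t2, original order kept
```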
Fastest way to read an image from huge uncompressed tar file in __getitem__ of PyTorch custom dataset | I have a huge dataset (2 million) of jpg images in one uncompressed TAR file. I also have a txt file each line is the name of the image in TAR file in order.
img_0000001.jpg
img_0000002.jpg
img_0000003.jpg
...
and images in tar file are exactly the same.
I searched a lot and found out the tarfile module is the best one, b... | tarfile seems to have caching for getmember; it reuses getmembers() results.
But if you use the provided snippet in __getitem__, then for each item from the dataset the tar file is opened and read fully, one image file is extracted, and then the tar file is closed and the associated info is lost.
The simplest way to resolve this... | https://stackoverflow.com/questions/55116639/ |
Convolutional auto-encoder error - 'RuntimeError: Input type (torch.cuda.ByteTensor) and weight type (torch.FloatTensor) should be the same' | For below model I received error 'Expected stride to be a single value integer or list'. I used suggested answer from https://discuss.pytorch.org/t/expected-stride-to-be-a-single-integer-value-or-a-list/17612/2
and added
img.unsqueeze_(0)
I now receive error :
RuntimeError: Input type (torch.cuda.ByteTensor) and... | This is because your image tensor resides in GPU (that happens here img = Variable(img).cuda()), while your model is still in RAM. Please remember that you need to explicitly call cuda() to send a tensor (or an instance of nn.Module) to GPU.
Just change this line:
model = autoencoder()
To this:
model = autoencoder... | https://stackoverflow.com/questions/55120789/ |
Groups in Convolutional Neural Network / CNN | I came across this PyTorch example for depthwise separable convolutions using the groups parameter:
class depthwise_separable_conv(nn.Module):
def __init__(self, nin, nout):
super(depthwise_separable_conv, self).__init__()
self.depthwise = nn.Conv2d(nin, nin, kernel_size=3, padding=1, groups=nin)
... | Perhaps you're looking up an older version of the docs. 1.0.1 documentation for nn.Conv2d expands on this.
Groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,
At groups=1, all inputs are convolved to all outputs.
At groups=2, the opera... | https://stackoverflow.com/questions/55123161/ |
Output and Broadcast shape mismatch in MNIST, torchvision | I am getting following error when using MNIST dataset in Torchvision
RuntimeError: output with shape [1, 28, 28] doesn't match the broadcast shape [3, 28, 28]
Here is my code:
import torch
from torchvision import datasets, transforms
transform = transforms.Compose([transforms.ToTensor(),
                               ... | The error is due to color vs. grayscale in the dataset: the dataset is grayscale.
I fixed it by changing transform to
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))
])
| https://stackoverflow.com/questions/55124407/ |
Implementing fast dense feature extraction in PyTorch | I am trying to implement this paper in PyTorch Fast Dense Feature Extractor but I am having trouble converting the Torch implementation example they provide into PyTorch.
My attempt thus far has the issue that when adding an additional dimension to the feature map then the convolutional weights don't match the featur... | It is your lucky day as I have recently uploaded a PyTorch and TF implementation of the paper Fast Dense Feature Extraction with CNNs with Pooling Layers.
An approach to compute patch-based local feature descriptors efficiently in presence of pooling and striding layers for whole images at once.
See https://github.co... | https://stackoverflow.com/questions/55126493/ |
How can I make a neural network that has multiple outputs using pytorch? | Is my question even right? I looked everywhere but couldn't find a single thing. I'm pretty sure this was addressed when I learned keras, but how do I implement it in pytorch?
| Multiple outputs can be trivially achieved with pytorch.
Here is one such network.
import torch.nn as nn
class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
self.linear1 = nn.Linear(in_features = 3, out_features = 1)
self.linear2 = nn.Linear(in_features = 3,out_fe... | https://stackoverflow.com/questions/55128814/ |
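A hedged sketch of one common pattern (the class and sizes are made up): give the network several heads and return a tuple from forward.

```python
import torch
import torch.nn as nn

class TwoHeads(nn.Module):
    """Shared trunk with two output heads; sizes are illustrative."""
    def __init__(self):
        super().__init__()
        self.shared = nn.Linear(3, 8)
        self.head_a = nn.Linear(8, 1)   # e.g. a regression output
        self.head_b = nn.Linear(8, 5)   # e.g. a 5-class logit output

    def forward(self, x):
        h = torch.relu(self.shared(x))
        return self.head_a(h), self.head_b(h)

out_a, out_b = TwoHeads()(torch.randn(2, 3))
```

Each head gets its own loss, and the losses can simply be summed before calling backward().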
PyTorch grid_sample returns zero array | I want to sample rectangular patch from my image by affine_grid/grid_sample
I created array which contains only 255 values
canvas1 = np.zeros((128, 128), dtype=np.uint8)
canvas1[:] = 255
Also i created grid
theta = torch.FloatTensor([[
[11/2, 0, 63],
[0, 11/2, 63],
]])
grid = F.affine_grid(theta, (1, 1, 11,... | Your grid values are outside [-1, 1].
According to https://pytorch.org/docs/stable/nn.html#torch.nn.functional.grid_sample, such values are handled as defined by padding_mode.
Default padding_mode is 'zeros', what you probably want is "border": F.grid_sample(canvas1_torch, grid, mode="bilinear", padding_mode="border"... | https://stackoverflow.com/questions/55129589/ |
How to transfer the following tensorflow code into pytorch | I want to re-implement the word embedding here
here is the original tensorflow code (version: 0.12.1)
import tensorflow as tf
class Network(object):
def __init__(
self, user_length,item_length, num_classes, user_vocab_size,item_vocab_size,fm_k,n_latent,user_num,item_num,
        embedding_size, filter_... | The PyTorch equivalent of the TensorFlow part of the code is below, explained with comments in the code itself; you have to import truncnorm from scipy.
from scipy.stats import truncnorm #extra import equivalent to tf.trunc initialise
pooled_outputs_u = []
for i, filter_size in enumerate(filter_sizes):
filter_s... | https://stackoverflow.com/questions/55133700/ |
How to transfer the follow Embedding code in tensorflow to pytorch? | I have an Embedding code in Tensorflow as follow
self.input_u = tf.placeholder(tf.int32, [None, user_length], name="input_u")
with tf.name_scope("user_embedding"):
self.W1 = tf.Variable(
tf.random_uniform([user_vocab_size, embedding_size], -1.0, 1.0),
name="W")
    self.embedded_use... | Method 1: Use an Embedding layer and freeze the weight to act as a lookup table
import numpy as np
import torch
# user_vocab_size = 10
# embedding_size = 5
W1 = torch.FloatTensor(np.random.uniform(-1,1,size=(user_vocab_size,embedding_size)))
embedded_user = torch.nn.Embedding(user_vocab_size,embedding_size, _weight=W1)
emb... | https://stackoverflow.com/questions/55133931/ |
Output from LSTM not changing for different inputs | I have an LSTM implemented in PyTorch as below.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
class LSTM(nn.Module):
"""
Defines an LSTM.
"""
def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
s... | Initial weights for LSTM are small numbers close to 0, and by adding more layers the initial weighs and biases are getting smaller: all the weights and biases are initialized from -sqrt(k) to -sqrt(k), where k = 1/hidden_size (https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM)
By adding more layers you effectively... | https://stackoverflow.com/questions/55134920/ |
index selection in case of conflict in pytorch Argmax | I have been trying to learn tensor operations and this one has thrown me for a loop.
Let us say I have one tensor t:
t = torch.tensor([
[1,0,0,2],
[0,3,3,0],
[4,0,0,5]
], dtype = torch.float32)
Now this is a rank 2 tensor and we can apply argmax for each rank/dimension.
let us say we... |
That is a good question I stumbled over a couple of times myself. The simplest answer is that there are no guarantees whatsoever that torch.argmax (or torch.max(x, dim=k), which also returns indices when dim is specified) will return the same index consistently. Instead, it will return any valid index to the argmax va... | https://stackoverflow.com/questions/55139801/ |
Convolutional encoder error - 'RuntimeError: input and target shapes do not match' | In below code three images are created, saved and a convolutional auto-encoder attempts to encode them to a lower dimensional representation.
%reset -f
import torch.utils.data as data_utils
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from ma... | There are a couple of ways,
Here is the one solution:
class autoencoder(nn.Module):
def __init__(self):
super(autoencoder, self).__init__()
# torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
self.encoder = nn.Sequential(
... | https://stackoverflow.com/questions/55140554/ |
How to do backprop in Pytorch (autograd.backward(loss) vs loss.backward()) and where to set requires_grad=True? | I have been using Pytorch for a while now. One question I had regarding backprop is as follows:
let's say we have a loss function for a neural network. For doing backprop, I have seen two different versions. One like:
optimizer.zero_grad()
autograd.backward(loss)
optimizer.step()
and the other one like:
optimizer.... | so just a quick answer: both autograd.backward(loss) and loss.backward() are actually the same. Just look at the implementation of tensor.backward() (as your loss is just a tensor), where tensor.loss just calls autograd.backward(loss).
As to your second question: whenever you use a prefabricated layer such as nn.Linea... | https://stackoverflow.com/questions/55144904/ |
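A quick sketch confirming the two spellings produce identical gradients (the toy layer is illustrative):

```python
import torch
import torch.nn as nn
from torch import autograd

def grads(use_autograd):
    """Build the same tiny graph and backprop one of two ways."""
    torch.manual_seed(0)              # identical init for both runs
    layer = nn.Linear(3, 1)          # nn layers create requires_grad params
    loss = layer(torch.ones(1, 3)).sum()
    if use_autograd:
        autograd.backward(loss)
    else:
        loss.backward()
    return layer.weight.grad.clone()
```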
Implementation of VGG16 on Pytorch giving size mismatch error | Snippet of my code implementation on PyTorch is:
model = models.vgg16(pretrained = False)
classifier = nn.Sequential(
nn.Linear(25088, 128),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(128, 128),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(128, 20)
)
model.classifier = cla... | It's probably the other way around. Things run perfectly on torchvision 0.2.2 and fails on torch vision 0.2.1.
This change of using AdaptiveAvgPool2d that went into 0.2.2 is why you don't see the error. https://github.com/pytorch/vision/commit/83b2dfb2ebcd1b0694d46e3006ca96183c303706
>>> import torch
>&g... | https://stackoverflow.com/questions/55145561/ |
Caffe2: Load ONNX model, and inference single threaded on multi-core host / docker | I'm having trouble running inference on a model in docker when the host has several cores. The model is exported via PyTorch 1.0 ONNX exporter:
torch.onnx.export(pytorch_net, dummyseq, ONNX_MODEL_PATH)
Starting the model server (wrapped in Flask) with a single core yields acceptable performance (cpuset pins the proc... | This is not a direct answer to the question, but if your goal is to serve PyTorch models (and only PyTorch models, as mine is now) in production, simply using PyTorch Tracing seems to be the better choice.
You can then load it directly into a C++ frontend similarly to what you would do through Caffe2, but PyTorch trac... | https://stackoverflow.com/questions/55147193/ |
Convert a torch t7 model to keras h5 | How can we convert a t7 model to keras' h5 ?
I am trying to do so for c3d-sports1m-kinetics.t7 that you can find here https://github.com/kenshohara/3D-ResNets/releases
The least I can ask for is a way to load the t7 model to python (pytorch) and then extract its weights, but I couldn't do it with the load_lua() functi... | As mentioned in this link,
https://github.com/pytorch/pytorch/issues/15307#issuecomment-448086741
with torchfile package, the load was successful. You can dump the contents of model to a file and then understand the contents. Each layer information is stored as a dictionary. Knowing the model architecture would make i... | https://stackoverflow.com/questions/55147282/ |
Invalid combination of arguments - eq() | I'm using a code shared here to test a CNN image classifier. When I call the test function, I got this error on line 155:
test_acc += torch.sum(prediction == labels.data)
TypeError: eq() received an invalid combination of arguments - got (numpy.ndarray), but expected one of:
* (Tensor other)
didn't match becaus... | Why do you have .numpy() here prediction = prediction.cpu().numpy()?
That way you convert the PyTorch tensor to a NumPy array, making it an incompatible type to compare with labels.data.
Removing .numpy() part should fix the issue.
| https://stackoverflow.com/questions/55147511/ |
How to pass parameters to forward function of my torch nn.module from skorch.NeuralNetClassifier.fit() | I have extended nn.Module to implement my network whose forward function is like this ...
def forward(self, X, **kwargs):
batch_size, seq_len = X.size()
length = kwargs['length']
embedded = self.embedding(X) # [batch_size, seq_len, embedding_dim]
if self.use_padding:
if length is None:
... | The fit_params parameter is intended for passing information that is relevant to data splits and the model alike, like split groups.
In your case, you are passing additional data to the module via fit_params which is not what it is intended for. In fact, you could easily run into trouble doing this if you, for example... | https://stackoverflow.com/questions/55156877/ |
U-net with pre-trained backbones: where to make skip connections? | I'm trying to implement U-Net with PyTorch, using pre-trained networks in the encoder path.
The original U-Net paper trained a network from scratch. Are there any resources or principles on where skip connections should be placed, when a pre-trained backbone is used instead?
I already found some examples (e.g. this r... | In the original U-Net paper features right before Max-Pool layers are used for the skip-connections.
The logic is exactly the same with pre-trained backbones: at each spatial resolution, the deepest feature layer is selected. Thanks to qubvel on GitHub for pointing this out in an issue.
| https://stackoverflow.com/questions/55165091/ |
Display examples of augmented images in PyTorch | I want to display some samples of augmented training images.
My transform includes the standard ImageNet transforms.Normalize like this:
train_transforms = transforms.Compose([transforms.RandomRotation(30),
transforms.RandomResizedCrop(224),
... | To answer my own question, I came up with the following:
# Undo transforms.Normalize
def denormalise(image):
image = image.numpy().transpose(1, 2, 0) # PIL images have channel last
mean = [0.485, 0.456, 0.406]
stdd = [0.229, 0.224, 0.225]
image = (image * stdd + mean).clip(0, 1)
return image
e... | https://stackoverflow.com/questions/55179282/ |
LSTMCell parameters is not shown Pytorch | I have the following code:
class myLSTM(nn.Module):
def __init__(self, input_size, output_size, hidden_size, num_layers):
super(myLSTM, self).__init__()
self.input_size = input_size + 1
self.output_size = output_size
self.hidden_size = hidden_size
self.num_layers = num_layers
self.layers = []
... | This is to be expected - storing modules in list, dict, set or other python containers does not register them with the module owning said list, etc. To make your code work, use nn.ModuleList instead. It's as simple as modifying your __init__ code to use
layers = []
new_input_size = self.input_size
for i in xrange(num_... | https://stackoverflow.com/questions/55186290/ |
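A sketch of the difference this makes (class names are made up): parameters held in a plain Python list are invisible to the parent module, while nn.ModuleList registers them.

```python
import torch.nn as nn

class PlainList(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = [nn.Linear(4, 4)]          # invisible to .parameters()

class RegisteredList(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(4, 4)])  # registered

hidden = sum(p.numel() for p in PlainList().parameters())       # 0
visible = sum(p.numel() for p in RegisteredList().parameters()) # 4*4 + 4 = 20
```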
Understanding PyTorch training batches | Reading https://stanford.edu/~shervine/blog/pytorch-how-to-generate-data-parallel & https://discuss.pytorch.org/t/how-does-enumerate-trainloader-0-work/14410 I'm trying to understand how training epochs behave in PyTorch.
Take this outer and inner loop :
for epoch in range(num_epochs):
for i1,i2 in enumerate(... | torch.utils.data.DataLoader returns an iterable that iterates over the dataset.
Therefore, the following -
training_loader = torch.utils.data.DataLoader(*args)
for i1,i2 in enumerate(training_loader):
#process
runs one over the dataset completely in batches.
| https://stackoverflow.com/questions/55188624/ |
Understanding when to use python list in Pytorch | Basically as this thread discusses here, you cannot use python list to wrap your sub-modules (for example your layers); otherwise, Pytorch is not going to update the parameters of the sub-modules inside the list. Instead you should use nn.ModuleList to wrap your sub-modules to make sure their parameters are going to be... | You are misunderstanding what Modules are. A Module stores parameters and defines an implementation of the forward pass.
You're allowed to perform arbitrary computation with tensors and parameters resulting in other new tensors. Modules need not be aware of those tensors. You're also allowed to store lists of tensors ... | https://stackoverflow.com/questions/55188698/ |
Pytorch select values from the last tensor dimension with indices from another tenor with a smaller dimension | I have a tensor a with three dimensions. The first dimension corresponds to minibatch size, the second to the sequence length, and the third to the feature dimension. E.g.,
>>> a = torch.arange(1, 13, dtype=torch.float).view(2,2,3) # Consider the values of a to be random
>>> a
tensor([[[ 1., 2., 3... | Let's declare the necessary variables first: (notice requires_grad in a's initialization, we will use it to ensure differentiability)
a = torch.arange(1,13,dtype=torch.float32,requires_grad=True).reshape(2,2,3)
b = torch.LongTensor([[0, 2],[1,0]])
Let's reshape a and squash the minibatch and sequence dimensions:
temp = a.res... | https://stackoverflow.com/questions/55196295/ |
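The reshape-based approach above works; for reference, `torch.gather` gives the same differentiable selection in one call (using the same example tensors as the question):

```python
import torch

a = torch.arange(1, 13, dtype=torch.float).view(2, 2, 3)
b = torch.tensor([[0, 2], [1, 0]])

# pick one feature per (batch, step) position along the last dimension
picked = a.gather(2, b.unsqueeze(-1)).squeeze(-1)
print(picked)  # tensor([[ 1.,  6.], [ 8., 10.]])
```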
Inconsistency when comparing scipy, torch and fourier periodic convolution | I'm implementing a 2d periodic convolution on a synthetic image in three different ways: using scipy, using torch and using the Fourier transform (also under torch framework).
However, I've got different results. Performing the operation by hand I can see that scipy's convolution yields the correct results. torch's s... |
The FFT result is wrong because the padding is wrong. When padding, you need to put the origin (center of the kernel) at the top-left corner of the image. See this other answer for details.
The difference between the other two is the difference between a convolution and a correlation. It looks like the “numpy“ result ... | https://stackoverflow.com/questions/55199256/ |
Unable to Normalize Tensor in PyTorch | I am trying to normalize the tensor outputted by my network but am getting an error in doing so. The code is as follows:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model_load_path = r'path\to\saved\model\file'
model.load_state_dict(torch.load(model_load_path))
model.eval... | In order to apply transforms.Normalize you have to convert the input to a tensor. For this you can use transforms.ToTensor.
inv_normalize = transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize(mean=[-0.5/0.5], std=[1/0.5])
]
)
This tensor must consist of three dimensions (channels... | https://stackoverflow.com/questions/55205750/ |
Load a plain text file into PyTorch | I have two separate files, one is a text file, with each line being a single text. The other file contains the class label of that corresponding line. How do I load this into PyTorch and carry out further tokenization, embedding, etc?
| What have you tried already? What you described is still not very PyTorch related, you can make a pre-processing script that loads all the sentences into single data structured, e.g.: a list of (text, label) tuple.You can also already split your data into training and hold-out set in this step. You can then dump all th... | https://stackoverflow.com/questions/55216339/ |
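A minimal pre-processing helper along the lines the answer describes, pairing each text line with its label line (the function name and file layout are illustrative):

```python
def load_dataset(text_path, label_path):
    # pair each text line with the label on the corresponding line
    with open(text_path) as f_text, open(label_path) as f_label:
        texts = [line.rstrip("\n") for line in f_text]
        labels = [int(line.strip()) for line in f_label]
    assert len(texts) == len(labels)
    return list(zip(texts, labels))
```

The resulting list of (text, label) tuples can then be tokenized and wrapped in a Dataset.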
Common class for Linear, Conv1d, Conv2d,..., LSTM, | Is there any class that all torch::nn::Linear, torch::nn::Conv1d, torch::nn::Conv2d, ... torch::nn::GRU, .... all inherit from that? torch::nn::Module seems be a good option, though there is a middle class, called torch::nn::Cloneable, so that torch::nn::Module does not work. Also, torch::nn::Cloneable itself is a temp... | I found that nn::sequential can be used for a this purpose, and it does not need a forward implementation, which can be a positive point and at a same time a negative point. nn::sequential already requires each module to have a forward implementation, and calls the forward functions in a sequence that they have added i... | https://stackoverflow.com/questions/55223728/ |
Encountering Import Error DLL load failed constantly | I have been trying to intall scikit-learn and pytorch using their respective commands given in the docs:
The commands for installing PyTorch are:
1) pip3 install https://download.pytorch.org/whl/cpu/torch-1.0.1-cp37-cp37m-win_amd64.whl
2) pip3 install torchvision
The command for installing scikit-learn is:
pip install... | Please check your python build number with the following command.
conda list python
Python 3.7.2 with build number h8c8aaf0_2 has a solved issue.
If this is the case, an update will do.
conda update python
| https://stackoverflow.com/questions/55225211/ |
How to change the axis on which 1 dimensional convolution is performed on embedding layer in PyTorch? | I've been playing around with text classification in PyTorch and I've encountered a problem with 1 dimensional convolutions.
I've set an embedding layer of dimesions (x, y, z) where:
x - denotes the batch size
y - denotes the length of a sentence (fixed with padding, so 40 words)
z - the dimensionality of pre-trained... | conv1d expects the input's size to be (batch_size, num_channels, length) and there is no way to change that, so you have two possible ways ahead of you, you can either permute the output of embedding or you can use a conv1d instead of you embedding layer(in_channels = num_words, out_channels=word_embedding_size, and ke... | https://stackoverflow.com/questions/55227010/ |
What's the difference between sum and torch.sum for a torch Tensor? | I get the same results when using either the python sum or torch.sum so why did torch implement a sum function? Is there a difference between them?
| Nothing: torch.sum calls tensor.sum, and Python's sum calls __add__ (or __radd__ when needed), which calls tensor.sum again,
so the only difference is in the number of function calls, and tensor.sum() should be the fastest (when you have small tensors and the function call overhead is considerable)
| https://stackoverflow.com/questions/55228863/ |
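A quick check that the two agree (Python's sum iterates over the first dimension, starting from the int 0 and adding each row):

```python
import torch

t = torch.arange(6.).view(2, 3)
a = sum(t)              # 0 + t[0] + t[1], via __radd__/__add__
b = torch.sum(t, dim=0)
assert torch.equal(a, b)  # same values; only the call overhead differs
```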
Import LSTM from Tensorflow to PyTorch by hand | I am trying to import a pretrained Model from tensorflow to PyTorch. It takes a single input and maps it onto a single output.
Confusion comes up, when I try to import the LSTM weights
I read the weights and their variables from the file with the following function:
def load_tf_model_weights():
modelpa... | PyTorch uses CuDNN's LSTM implementation under the hood (even when you don't have CUDA, it still uses something compatible), so it has one extra bias term.
So you can pick two numbers with their sum equal to 1 (0 and 1, 1/2 and 1/2, or anything else) and set your PyTorch biases to those numbers times TF's bias.
pytorch_bias_1 = torch.fr... | https://stackoverflow.com/questions/55229636/ |
pytorch: get number of classes given an ImageFolder dataset | If I have a dataset like:
image_datasets['train'] = datasets.ImageFolder(train_dir, transform=train_transforms)
How do I determine programatically the number of classes or unique labels in the dataset?
| Use:
len(image_datasets['train'].classes)
.classes returns a list.
| https://stackoverflow.com/questions/55235594/ |
How to open pretrained models in python | Hi I'm trying to load some pretrained models from .sav files and so far nothing is working. The models were originally made in pytorch and when I open the raw file in vs-code I can see that all the appropiate information was stored correctly.
I've tried the following libraries:
sklearn.externals.joblib
pickle
scipy... | You need to use PyTorch to load the models. On top of this, you also need the original model definition, so you need to need the clone the authors repository. In your example this repo:
git clone https://github.com/tbepler/protein-sequence-embedding-iclr2019.git
Then you can open the model with torch.load(). Note th... | https://stackoverflow.com/questions/55237899/ |
How to add parameters from self.some_dictionary_of_modules to self.parameters? | Consider this simple example:
import torch
class MyModule(torch.nn.Module):
def __init__(self):
super(MyModule, self).__init__()
self.conv_0=torch.nn.Conv2d(3,32,3,stride=1,padding=0)
self.blocks=torch.nn.ModuleList([
torch.nn.Conv2d(3,32,3,stride=1,padding=0),
torc... | Found the answer. Posting it in case someone runs into the same problem:
Looking into <torch_install>/torch/nn/modules/container.py it looks like there is a class torch.nn.ModuleDict that just does that. So in the example I gave in the question, the solution would be:
self.dict_block=torch.nn.ModuleDict({"key_1... | https://stackoverflow.com/questions/55237944/ |
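A small standalone check that nn.ModuleDict registers the parameters of its values (the key names and layer shapes are illustrative):

```python
import torch.nn as nn

block = nn.ModuleDict({
    "conv": nn.Conv2d(3, 32, 3),
    "bn": nn.BatchNorm2d(32),
})

# parameters of both sub-modules are visible to block.parameters()
n_params = sum(p.numel() for p in block.parameters())
print(n_params)  # 960: conv (864 + 32) + bn (32 + 32)
```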
how to get torch.Size([1, 3, 16, 112, 112]) instead of torch.Size([1, 3, 16, 64, 64]) after ConvTranspose3d | I have a torch.Size([1, 64, 8, 32, 32]) which I want after my transpose 3d convolution to become torch.Size([1, 3, 16, 112, 112]).
Using this: nn.ConvTranspose3d(64, 3, kernel_size=4, stride=2, bias=False, padding=(1, 1, 1)) I get correct the output channels and the number of frames, but not the frame sizes:torch.Size... | you should use different strides and paddings for different dims.
ConvTranspose3d(64, 3, kernel_size=4, stride=(2, 4, 4), bias=False, padding=(1, 8, 8))
| https://stackoverflow.com/questions/55245081/ |
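The suggested layer can be verified against the transposed-convolution output formula, per dimension: out = (in - 1) * stride - 2 * padding + kernel_size.

```python
import torch
import torch.nn as nn

up = nn.ConvTranspose3d(64, 3, kernel_size=4, stride=(2, 4, 4),
                        bias=False, padding=(1, 8, 8))
x = torch.randn(1, 64, 8, 32, 32)
# depth: (8-1)*2 - 2 + 4 = 16; height/width: (32-1)*4 - 16 + 4 = 112
print(up(x).shape)  # torch.Size([1, 3, 16, 112, 112])
```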
Is Feature Scaling recommended for AutoEncoder? | Problem:
The Stacked Auto Encoder is being applied to a dataset with 25K rows and 18 columns, all float values.
SAE is used for feature extraction with encoding & decoding.
When I train the model without feature scaling, the loss is around 50K, even after 200 epochs. But, when scaling is applied the loss is aroun... |
With a few exceptions, you should always apply feature scaling in machine learning, especially when working with gradient descent as in your SAE. Scaling your features will ensure a much smoother cost function and thus faster convergence to global (hopefully) minima.
Also worth noting that your much smaller loss af... | https://stackoverflow.com/questions/55253587/ |
PyTorch did not compute gradient and update parameters for 'masking' tensors? | I am coding a LeNet in PyTorch for MNIST dataset; I add a tensor self.mask_fc1\2\3 to mask some certain connections for the full connection layers. The code is like this:
import torch
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
import numpy as np
import torch.nn as n... | For training, you need to register mask_fc1/2/3 as module parameters:
self.mask_fc1 = nn.Parameter(torch.ones(16 * 5 * 5, 120))
You can print net.parameters() after that to confirm.
| https://stackoverflow.com/questions/55255402/ |
Need help understanding the label input in a CGAN | I am trying to implement a CGAN. I understand that in convolutional generator and discriminator models, you add volume to the inputs by adding depth that represents the label. So if you have 10 classes in your data, your generator and discriminator would both have the base depth + 10 as its input volume.
However, I a... | y_fill_ is based on y_vec_ which is based on y_, so they are reading label info from mini batches which is correct. you might be confused by the scatter operation, basically what the code's doing is transferring the label into one-hot encoding
| https://stackoverflow.com/questions/55257936/ |
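The scatter-based one-hot trick the answer refers to, in isolation (variable names and the class count are illustrative, not taken from the original CGAN code):

```python
import torch

y = torch.tensor([3, 0, 1])  # class labels, 10 classes assumed
y_vec = torch.zeros(3, 10).scatter_(1, y.unsqueeze(1), 1.0)
print(y_vec[0])  # 1.0 at index 3, zeros elsewhere
```

In a CGAN this one-hot vector is then tiled spatially so it can be concatenated with the feature maps.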
PyTorch preferred way to copy a tensor | There seems to be several ways to create a copy of a tensor in PyTorch, including
y = tensor.new_tensor(x) #a
y = x.clone().detach() #b
y = torch.empty_like(x).copy_(x) #c
y = torch.tensor(x) #d
b is explicitly preferred over a and d according to a UserWarning I get if I execute either a or d. Why is it preferred? ... | TL;DR
Use .clone().detach() (or preferrably .detach().clone())
If you first detach the tensor and then clone it, the computation path is not copied, the other way around it is copied and then abandoned. Thus, .detach().clone() is very slightly more efficient.-- pytorch forums
as it's slightly fast and explicit in wha... | https://stackoverflow.com/questions/55266154/ |
Pytorch - why does preallocating memory cause "trying to backward through the graph a second time" | Suppose I have a simple one-hidden-layer network that I'm training in the typical way:
for x,y in trainData:
optimizer.zero_grad()
out = self(x)
loss = self.lossfn(out, y)
loss.backward()
optimizer.step()
This works as expected, but if I instead pre-allocate and update th... | The error implies that the program is trying to backpropagate through a set of operations a second time. The first time you backpropagate through a set of operations, pytorch deletes the computational graph to free memory. Therefore, the second time you try to backpropagate it fails as the graph has already been delete... | https://stackoverflow.com/questions/55268726/ |
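The usual fixes are to rebuild the graph each iteration (compute the forward pass inside the loop) or, if you genuinely need to backpropagate twice through the same graph, to pass retain_graph=True. A minimal illustration of the mechanism (not the asker's network):

```python
import torch

w = torch.randn(5, requires_grad=True)
out = (w * 2).sum()
out.backward(retain_graph=True)  # keep the graph alive for another pass
out.backward()                   # would raise without retain_graph above
print(w.grad)                    # gradients accumulate: 2.0 + 2.0 = 4.0 each
```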
pytorch linear regression given wrong results | I implemented a simple linear regression and I’m getting some poor results. Just wondering if these results are normal or I’m making some mistake.
I tried different optimizers and learning rates, I always get bad/poor results
Here is my code:
import torch
import torch.nn as nn
import numpy as np
import matplotlib.py... | Problem Solution
Actually your batch_size is problematic. If you have it set as one, your target needs the same shape as outputs (which you are, correctly, reshaping with view(-1, 1)).
Your loss should be defined like this:
loss = criterium(pred_y, target.view(-1, 1))
This network is correct
Results
Your results ... | https://stackoverflow.com/questions/55270266/ |
PyTorch - GPU is not used by tensors despite CUDA support is detected | As the title of the question clearly describes, even though torch.cuda.is_available() returns True, CPU is used instead of GPU by tensors. I have set the device of the tensor to GPU through the images.to(device) function call after defining the device. When I debug my code, I am able to see that the device is set to cu... | tensor.to() does not modify the tensor inplace. It returns a new tensor that's stored
in the specified device.
Use the following instead.
images = images.to(device)
labels = labels.to(device)
| https://stackoverflow.com/questions/55274076/ |
Determining the result of a convolution operation | Following guide from https://medium.com/mlreview/a-guide-to-receptive-field-arithmetic-for-convolutional-neural-networks-e0f514068807 I'm attempting to calculate the number of output features using code below :
The output of :
%reset -f
import torch
import torch.nn as nn
my_tensor = torch.randn((1, 16, 12, 12), r... | I see where you confusion is coming from. The formula computes the linear number of outputs, whereas you assume that it operates on the whole tensor.
So the correct code is:
from math import floor
stride = 2
padding = 1
kernel_size = 3
n_out = floor((12 + (2 * padding) - kernel_size) / stride) + 1
print(n_out)
T... | https://stackoverflow.com/questions/55275826/ |
Different methods for initializing embedding layer weights in Pytorch | There seem to be two ways of initializing embedding layers in Pytorch 1.0 using an uniform distribution.
For example you have an embedding layer:
self.in_embed = nn.Embedding(n_vocab, n_embed)
And you want to initialize its weights with an uniform distribution. The first way you can get this done is:
self.in_embed... | Both are same
torch.manual_seed(3)
emb1 = nn.Embedding(5,5)
emb1.weight.data.uniform_(-1, 1)
torch.manual_seed(3)
emb2 = nn.Embedding(5,5)
nn.init.uniform_(emb2.weight, -1.0, 1.0)
assert torch.sum(torch.abs(emb1.weight.data - emb2.weight.data)).numpy() == 0
Every tensor has a uniform_ method which initializes it w... | https://stackoverflow.com/questions/55276504/ |
ValueError: operands could not be broadcast together with shapes (50,50,512) (3,) (50,50,512) while converting tensor to image in pytorch | I'm doing a neural style transfer. I'm trying to reconstruct the output of the convolutional layer conv4_2 of the VGG19 network.
def get_features(image, model):
layers = {'0': 'conv1_1', '5': 'conv2_1', '10': 'conv3_1',
'19': 'conv4_1', '21': 'conv4_2', '28': 'conv5_1'}
x = image
features... | Your tensor_to_image method only works for 3 channel images. Your input to the network is 3 channels, so is the final output, therefore it works fine there. But you cannot do the same at an internal high dimensional activation.
Essentially the problem is that you try to apply a channel-wise normalization, but you have... | https://stackoverflow.com/questions/55277192/ |
RuntimeError: Expected object of backend CUDA but got backend CPU for argument: ret = torch.addmm(torch.jit._unwrap_optional(bias), input, weight.t()) | When the forward function of my neural network (after the training phase is completed) is being executed, I'm experiencing RuntimeError: Expected object of backend CUDA but got backend CPU for argument #4 'mat1'. The error trace indicates the error happens due to the call of output = self.layer1(x) command. I have trie... | The error only happens at the testing step, when you try calculating the accuracy; this might already give you a hint. The training loop runs without a problem.
The error is simply that you don't send the images and labels to the GPU at this step. This is your corrected evaluation loop:
with torch.no_grad():
... | https://stackoverflow.com/questions/55278566/ |
Pytorch RNN HTML Generation | I’ m stuck for a couple of days trying to make and RNN network to learn a basic HTML template.
I tried different approaches and I even overfit on the following data:
<!DOCTYPE html>
<html>
<head>
<title>Page Title</title>
</head>
<body>
<h1>This is a Heading</h1>
... | First of all for a GRU (RNN) to be efficient, you may need more data to train.
Second, it seems that you have a problem with the embedding. It looks like the mapping vocabulary['id2letter'] does not work; otherwise you would obtain
sequences of tags like <head><title><title><title>, instead ... | https://stackoverflow.com/questions/55281499/ |
not able to predict using pytorch [MNIST] | pytorch noob here,trying to learn.
link to my notebook:
https://gist.github.com/jagadeesh-kotra/412f371632278a4d9f6cb31a33dfcfeb
I get validation accuracy of 95%.
i use the following to predict:
m.eval()
testset_predictions = []
for batch_id,image in enumerate(test_dataloader):
image = torch.autograd.Variable(... | Most probably it is due to a typo; while you want to use the newly created predictated outcomes, you actually use predicted:
_, predictated = torch.max(output.data,1)
for prediction in predicted:
which predicted comes from earlier in your linked code, and it contains predictions from the validation set instead... | https://stackoverflow.com/questions/55282224/ |
int8 data type in Pytorch | What is the best way to run a qauntized model using int8 data types in Pytorch? I know in pytorch I can define tensors as int8, however, when I actually want to use int8, I get:
RuntimeError: _thnn_conv2d_forward is not implemented for type torch.CharTensor
So I am confused, how to run quantized model in pytorch tha... | Depends what your goals are.
If you want to simulate your quantized model:
You may stick to existing float data type and only introduce truncation as needed, i.e.:
x = torch.floor(x * 2**8) / 2**8
assuming x is a float tensor.
If you want to simulate your quantized model efficiently:
Then, I am afraid PyTor... | https://stackoverflow.com/questions/55289703/ |
Loss over pixels | During backpropagation, will these cases have different effects:
sum up loss over all pixels then backpropagate.
average loss over all pixels then backpropagate.
backpropagate individually over all pixels.
My main doubt is regarding the numerical value and the effect each of these would have.
| The difference between no. 1 and no. 2 is basically this: since the sum will be bigger than the mean, the magnitude of the gradients from the sum operation will be bigger, but the direction will be the same.
Here's a little demonstration; let's first declare the necessary variables:
x = torch.tensor([4,1,3,7],dtype=torch.float32,requires_grad=True... | https://stackoverflow.com/questions/55299261/ |
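A quick numeric check of the sum-vs-mean claim, using the same example values as the answer:

```python
import torch

x = torch.tensor([4., 1., 3., 7.], requires_grad=True)
(x * 2).sum().backward()
g_sum = x.grad.clone()

x.grad.zero_()
(x * 2).mean().backward()
g_mean = x.grad.clone()

# same direction, magnitude scaled by 1/N (here N = 4)
assert torch.allclose(g_sum, 4 * g_mean)
```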
Pytorch nn embeddings dimension size? | What is the correct dimension size for nn embeddings in Pytorch? I'm doing batch training.
I'm just a little confused with what the dimensions of "self.embeddings" in the code below are supposed to be when I get "shape"?
self.embeddings = nn.Embedding(vocab_size, embedding_dim)
| The shape of the self.embedding will be [sentence_length, batch_size, embedding_dim]
where sentence_length is the length of inputs in each batch.
| https://stackoverflow.com/questions/55299435/ |
pip install horovod fails on conda + OSX 10.14 | Running pip install horovod in a conda environment with pytorch installed resulted in
error: None of TensorFlow, PyTorch, or MXNet plugins were built. See errors above.
where the root problem near the top of stdout is
ld: library not found for -lstdc++
clang: error: linker command failed with exit code 1 (use -v ... | CFLAGS=-mmacosx-version-min=10.9 pip install horovod, inspired from this seemingly unrelated Horovod issue.
This issue thread from pandas has a nice explanation:
The compiler standard library defaults to either libstdc++ or libc++, depending on the targetted macOS version - libstdc++ for 10.8 and below, and libc++ for... | https://stackoverflow.com/questions/55305018/ |
Creating a torch tensor from a generator | I attempt to construct a tensor from a generator as follows:
>>> torch.tensor(i**2 for i in range(10))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Could not infer dtype of generator
Currently I just do:
>>> torch.tensor([i**2 for i in range(10... | As @blue-phoenox already points out, it is preferred to use the built-in PyTorch functions to create the tensor directly. But if you have to deal with generator, it can be advisable to use numpy as a intermediate stage. Since PyTorch avoid to copy the numpy array, it should be quite performat (compared to the simple li... | https://stackoverflow.com/questions/55307368/ |
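The numpy-as-intermediate route the answer suggests, applied to the question's generator:

```python
import numpy as np
import torch

gen = (i ** 2 for i in range(10))
# np.fromiter consumes the generator; from_numpy shares its memory (no copy)
t = torch.from_numpy(np.fromiter(gen, dtype=np.int64))
print(t)  # tensor([ 0,  1,  4,  9, 16, 25, 36, 49, 64, 81])
```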
Loss not decreasing - Pytorch | I am using dice loss for my implementation of a Fully Convolutional Network(FCN) which involves hypernetworks. The model has two inputs and one output which is a binary segmentation map. The model is updating weights but loss is constant.
It is not even overfitting on only three training examples
I have used other los... | @Muhammad Hamza Mughal
You got to add code of at least your forward and train functions for us to pinpoint the issue, @Jatentaki is right there could be so many things that could mess up a ML / DL code. Even I moved recently to pytorch from Keras, took some time to get used to it. But, here are the things I'd do:
1) ... | https://stackoverflow.com/questions/55311932/ |
PyTorch 1.0 loading VGGFace2 weights in Python3.7 | I am using Python3.7 and PyTorch 1.0 to develop a face recognition system. I want to use VGGFace2 Resnet50 pretrained model as described here as a feature extractor. I have downloaded the model and weights.
I run the following codes as project readme says:
MainModel = imp.load_source('MainModel', 'resnet50_128_pytorc... | I found a solution which currently looks like it's working. It basically changes the pickle load with latin1 encoding.
from functools import partial
import pickle
pickle.load = partial(pickle.load, encoding="latin1")
pickle.Unpickler = partial(pickle.Unpickler, encoding="latin1")
MainModel = imp.load_source('MainModel... | https://stackoverflow.com/questions/55312396/ |
BatchNorm1d needs 2d input? | I want to fix problem in PyTorch.
I wrote the following code that is learning sine functions as tutorial.
import torch
from torch import nn
from torch import optim
from torch.autograd import Variable as V
from torch.utils.data import TensorDataset, DataLoader
import numpy as np
# y=sin(x1)
numTrain = 512
numTest = 12... | When working with 1D signals, PyTorch actually expects a 2D tensor: the first dimension is the "mini-batch" dimension. Therefore, you should evaluate your net on a batch with one 1D signal:
output = net(V(torch.Tensor(x[None, ...])))
Make sure you set your net to "eval" mode before evaluating it:
net.eval()
| https://stackoverflow.com/questions/55320883/ |
How to clear CUDA memory in PyTorch | I am trying to get the output of a neural network which I have already trained. The input is an image of the size 300x300. I am using a batch size of 1, but I still get a CUDA error: out of memory error after I have successfully got the output for 25 images.
I tried torch.cuda.empty_cache(), but this still doesn't seem... | I figured out where I was going wrong. I am posting the solution as an answer for others who might be struggling with the same problem.
Basically, what PyTorch does is that it creates a computational graph whenever I pass the data through my network and stores the computations on the GPU memory, in case I want to calc... | https://stackoverflow.com/questions/55322434/ |
Unity ML-Agents Running Very Slowly | Using the Python API, whether I run using a build or in the Editor the simulation is much slower than when using the provided mlagents-learn methods. I'm running something similar to this, using a PyTorch implementation of DDPG and CUDA 9.0. Is this expected behaviour?
| Got to the Academy's proprieties and make sure that the value of Time Scale in the Inference Configuration is equals to the Training Configuration's one.
For other info check the official documentation of ML-Agents at Learning-Environment-Design-Academy
| https://stackoverflow.com/questions/55324790/ |
Visualizing the output of intermediate layers of cnn in pytorch | I'm trying to visualize the output of the intermediate layers of the VGG19 network, from the torchvision module, specifically the layer, conv4_2.
I've extracted the output in a tensor of shape [1, 512, 50, 50].
But how do I visualize an image with 512 channels?
| Feature visualization is a very complex subject.
If you want to have a visual idea what each filter (of the 512) of the trained net is responding to, you can use methods like these: propagating gradients from conv4_2's output to the input image, and change the image to maximize the feature response. You will have to wo... | https://stackoverflow.com/questions/55335583/ |
Why there are different output between model.forward(input) and model(input) | I'm using pytorch to build a simple model like VGG16,and I have overloaded the function forward in my model.
I found everyone tends to use model(input) to get the output rather than model.forward(input), and I am interested in the difference between them. I try to input the same data, but the result is different. I'm ... | model.forward just calls the forward operations as you mention but __call__ does a little extra.
If you dig into the code of nn.Module class you will see __call__ ultimately calls forward but internally handles the forward or backward hooks and manages some states that pytorch allows. When calling a simple model like... | https://stackoverflow.com/questions/55338756/ |
How to do parallel processing in pytorch | I am working on a deep learning problem. I am solving it using pytorch. I have two GPU's which are on the same machine (16273MiB,12193MiB). I want to use both the GPU's for my training (video dataset).
I get a warning:
There is an imbalance between your GPUs. You may want to exclude GPU 1 which
has less than 75% ... | As mentioned in this link, you have to do model.cuda() before passing it to nn.DataParallel.
net = nn.DataParallel(model.cuda(), device_ids=[0,1])
https://github.com/pytorch/pytorch/issues/17065
| https://stackoverflow.com/questions/55343893/ |
How to make a dataset from video datasets(tensorflow first) | everyone.
Now I have an object classification task, and I have a dataset containing a large number of videos. In every video, some frames(not every frame, about 160 thousand frames) have its labels, since a frame may have multiple objects.
I have some confusion about creating the dataset. My idea is to convert videos... | I am no TensorFlow guy, so my answer won't cover that, sorry.
Video formats generally gain compression at the cost of longer random-access times thanks to exploiting temporal correlations in the data. It makes sense because one usually accesses video frames sequentially, but if your access is entirely random I suggest... | https://stackoverflow.com/questions/55353187/ |
Pooling over channels in pytorch | In tensorflow, I can pool over the depth dimension which would reduce the channels and leave the spatial dimensions unchanged. I'm trying to do the same in pytorch but the documentation seems to say pooling can only be done over the height and width dimensions. Is there a way I can pool over channels in pytorch?
I've a... | The easiest way to reduce the number of channels is using a 1x1 kernel:
import torch
x = torch.rand(1, 512, 50, 50) ... | https://stackoverflow.com/questions/55355669/ |
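Continuing the truncated example: a 1x1 conv is a learned channel reduction; for a fixed (non-learned) channel pooling you can reshape and reduce over groups of channels. Both sketches below assume the tensor from the answer:

```python
import torch
import torch.nn as nn

x = torch.rand(1, 512, 50, 50)

# learned mixing over channels via a 1x1 kernel
reduce_channels = nn.Conv2d(512, 64, kernel_size=1)
print(reduce_channels(x).shape)  # torch.Size([1, 64, 50, 50])

# fixed alternative: max over groups of 8 consecutive channels
pooled = x.view(1, 64, 8, 50, 50).max(dim=2)[0]
print(pooled.shape)  # torch.Size([1, 64, 50, 50])
```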
Character level CNN - 1D or 2D | I want to implement a character-level CNN in Pytorch.
My input has 4 dimensions:
(batch_size, seq_length, padded_character_length, embedding_dim)
I'm wondering whether I should merge two dimensions and use a Conv1D-layer or instead use a Conv2D-layer on the existing dimensions.
Given the dimensions of the inpu... | I agree with Venkatesh that 1D might make more sense for your implementation. Instead of merging, I typically use the TimeDistributed layers that are found in Keras. This takes one layer and applies it across time dimensions. The advantage is that you keep the features from each dimension separate until you want to mer... | https://stackoverflow.com/questions/55357600/ |
installed pytorch1.0.1 for OS X with pip3 but cannot import, what can I do? | I have already installed pytorch for MacOS 10.14 with pip3, but I can not import it in the python script. What should I do?
System: MacOS 10.14
Python3: v3.7
➜ ~ pip3 list
Package Version
----------- -----------
numpy 1.16.2
Pillow 5.4.1
pip 18.1
pycairo 1.17.1
... | To expand upon my comment:
There's no strict guarantee that a pip3 wrapper script somewhere on your system is related to the pip package manager/module for your python3 binary. That wrapper may be created by a different installation of Python – maybe your system's own, maybe something else. (You can see where the scri... | https://stackoverflow.com/questions/55359707/ |
Why the pytorch implementation is so inefficient? | I have implemented a paper about a CNN architecture in both Keras and PyTorch, but the Keras implementation is much more efficient: it takes 4 GB of GPU memory for training with 50000 samples and 10000 validation samples, but the PyTorch one takes all 12 GB of GPU memory and I can't even use a validation set!
Optimizer for both of them is ... | Edit: on a closer look, acc doesn't seem to require gradient, so this paragraph probably doesn't apply.
It looks like the most significant issue is that total_train_acc accumulates history across the training loop (see https://pytorch.org/docs/stable/notes/faq.html for details).
Changing total_train_acc += acc to total... | https://stackoverflow.com/questions/55362722/ |
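One common way to avoid accumulating graph history is to accumulate plain Python floats via .item(); a small illustration of the pattern (the loop and loss here are illustrative, not the asker's code):

```python
import torch

total = 0.0
for _ in range(3):
    loss = (torch.randn(4, requires_grad=True) ** 2).sum()
    # .item() returns a float; `total += loss` would instead keep
    # every iteration's computational graph alive in memory
    total += loss.item()
print(type(total))  # <class 'float'>
```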
How to save grayscale image in Pytorch? | I want to save a grayscale image in PyTorch; each image has four gray values, 0, 60, 120 and 180. I try the following way to save images, but the saved image is not what I expected.
for i, (inputs) in enumerate(test_generator):
pred = modelPl(inputs.float()).detach()
fig,ax = plt.subplots(1,1,figsize = (5,5))
ax.... | It might be that torchvision.utils.save_image requires values to be in range 0 to 1. Your images have values which are greater than 1 and hence the problem.
You can check this by dividing the tensor by 255 (or some appropriate number). You can also try to set normalize=True and see if it can automatically normalize t... | https://stackoverflow.com/questions/55368465/ |
PyTorch doesn't seem to be optimizing correctly | I have posted this question on Data Science StackExchange site since StackOverflow does not support LaTeX. Linking it here because this site is probably more appropriate.
The question with correctly rendered LaTeX is here: https://datascience.stackexchange.com/questions/48062/pytorch-does-not-seem-to-be-optimizing-cor... | You have to move computing T inside the loop, or it will always have the same constant value, thus constant loss.
Another thing is to initialize theta to different values at indices, otherwise because of the symmetric nature of the problem the gradient is the same for every index.
Another thing is that you need to ze... | https://stackoverflow.com/questions/55369652/ |
Converting Keras (Tensorflow) convolutional neural networks to PyTorch convolutional networks? | Keras and PyTorch use different arguments for padding: Keras requires a string to be input, while PyTorch works with numbers. What is the difference, and how can one be translated into another (what code gets the equivalent result in either framework)?
PyTorch also takes the args in_channels, out_channels while keras o... | Regarding padding,
Keras => 'valid' - no padding;
'same' - input is padded so that the output shape is same as input shape
Pytorch => you explicitly specify the padding
Valid padding
>>> model = keras.Sequential()
>>> model.add(keras.layers.Conv2D(filters=10, kernel_size=3, padding='va... | https://stackoverflow.com/questions/55381052/ |
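For comparison, a sketch of the PyTorch equivalents (the 3-channel 28×28 input and the in_channels/out_channels values are illustrative, not from the question):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 28, 28)  # NCHW, the PyTorch channel ordering

# Keras padding='valid' corresponds to padding=0 in PyTorch
valid = nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3, padding=0)
# Keras padding='same' (stride 1, odd kernel) corresponds to
# padding = (kernel_size - 1) // 2, i.e. padding=1 for kernel_size=3
same = nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3, padding=1)

print(valid(x).shape)  # torch.Size([1, 10, 26, 26])
print(same(x).shape)   # torch.Size([1, 10, 28, 28])
```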
Can anyone tell me how to check if a PyTorch model exists, and if it does, delete it and replace it with a new one? | So I save a lot of torch models during training, with different batch sizes and epochs, and the models are saved with strings of epoch and batch size. Basically I sometimes change some layers' hyperparameters and some augmentation to check the prediction results, but if the torch model is there, I want to delete it and repl... | The simplest solution is simply saving a model with the same name, essentially overwriting the existing one. This is equivalent to checking if it exists, deleting and then saving.
If you want to explicitly check if it exists, you can do that easily with os.
import os
if os.path.exists('path/to/model.pth'): # checkin... | https://stackoverflow.com/questions/55388781/ |
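A self-contained sketch of the check-delete-save pattern. A plain file stands in for the checkpoint here; with a real model you would call torch.save(model.state_dict(), path) instead, and the filename pattern is hypothetical:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "model_bs32_ep10.pth")

with open(path, "wb") as f:      # first "save"
    f.write(b"old checkpoint")

if os.path.exists(path):         # explicit check-and-delete...
    os.remove(path)

with open(path, "wb") as f:      # ...then save the replacement
    f.write(b"new checkpoint")

with open(path, "rb") as f:
    print(f.read())  # b'new checkpoint'
```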
How to convert ndarray to autograd variable in GPU format? | I am trying to do something like this,
data = torch.autograd.Variable(torch.from_numpy(nd_array))
It comes out with type Variable[torch.FloatTensor], but I need Variable[torch.cuda.FloatTensor]. Also, I want to do this in PyTorch version 0.3.0, which lacks a few methods like to(device) or set_default_device
| You can use the cuda() method of your tensor.
If you'd like to use a specific device, you could go with a context manager, e.g.
with torch.cuda.device(device_index):
t = torch.FloatTensor(1.).cuda()
For more specific information check documentation for version 0.3.0.
| https://stackoverflow.com/questions/55397711/ |
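A sketch of the same move on a modern PyTorch (the Variable wrapper from 0.3.0 is no longer needed, and the cuda() call is guarded so the snippet also runs on CPU-only machines):

```python
import numpy as np
import torch

nd_array = np.array([1.0, 2.0, 3.0], dtype=np.float32)
t = torch.from_numpy(nd_array)  # on 0.3.0: torch.autograd.Variable(torch.from_numpy(nd_array))

if torch.cuda.is_available():
    with torch.cuda.device(0):  # pick a specific card via the context manager
        t = t.cuda()

print(t)
```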
Get each sequence's last item from packed sequence | I am trying to put a packed and padded sequence through a GRU, and retrieve the output of the last item of each sequence. Of course I don't mean the -1 item, but the actual last, not-padded item. We know the lengths of the sequences in advance, so it should be as easy as extracting, for each sequence, the item at index length-1. ... | Instead of the last two operations, last_seq_idxs and last_seq_items, you could just do last_seq_items=output[torch.arange(4), input_sizes-1].
I don't think index_select is doing the right thing. It will select the whole batch at the index you passed and therefore your output size is [4,4,12].
| https://stackoverflow.com/questions/55399115/ |
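A sketch of the suggested indexing with toy data (batch 4, max length 5, hidden size 3; a batch_first layout is assumed, and the values are illustrative rather than real GRU output):

```python
import torch

# Toy "GRU output": 4 sequences, max length 5, hidden size 3
output = torch.arange(4 * 5 * 3, dtype=torch.float32).reshape(4, 5, 3)
input_sizes = torch.tensor([5, 3, 4, 2])  # true (unpadded) lengths

# Pick, for each sequence, the hidden state at its last real timestep
last_seq_items = output[torch.arange(4), input_sizes - 1]
print(last_seq_items.shape)  # torch.Size([4, 3])
```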
What is the difference between view and view_as in PyTorch? | I am building neural networks in PyTorch. I see view and view_as used interchangeably in various implementations; what is the difference between them?
| view and view_as are very similar with a slight difference. In view() the shape of the desired output tensor is to be passed in as the parameter, whereas in view_as() a tensor whose shape is to be mimicked is passed.
tensor.view_as(other) is equivalent to tensor.view(other.size())
| https://stackoverflow.com/questions/55403843/ |
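A quick sketch demonstrating the equivalence:

```python
import torch

a = torch.arange(12)
b = torch.empty(3, 4)

v1 = a.view(3, 4)        # shape given explicitly
v2 = a.view_as(b)        # shape borrowed from another tensor
v3 = a.view(b.size())    # what view_as does under the hood

print(torch.equal(v1, v2), torch.equal(v2, v3))  # True True
```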
Is there a pytorch method to check the number of cpus? | I can use this torch.cuda.device_count() to check the number of GPUs. I was wondering if there was something equivalent to check the number of CPUs.
| Just use this (from the standard library's os module):
os.cpu_count()
| https://stackoverflow.com/questions/55411921/ |
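A sketch showing os.cpu_count() alongside the multiprocessing equivalent (torch.get_num_threads() is related but reports the intra-op thread count, not the number of CPUs):

```python
import os
import multiprocessing

# Two stdlib ways to get the logical CPU count
n1 = os.cpu_count()
n2 = multiprocessing.cpu_count()

# torch itself exposes how many threads it will use for intra-op
# parallelism, which is related but not identical:
#   torch.get_num_threads()
print(n1, n2)
```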
In a Kaggle kernel, while having selected the GPU option, checking torch.cuda.is_available() says it is not available | I created a kernel for a finished Kaggle competition in which I used PyTorch. When checking if CUDA is available, it returns False.
I checked the GPU option from settings and it says it is on in the bottom bar with resources info. I tried to restart the session without any changes.
What could be the problem? (cpu only... | The problem was with the selected docker configuration from settings. Selecting the "Latest Available" fixed the problem.
| https://stackoverflow.com/questions/55426042/ |
How to implement low-dimensional embedding layer in pytorch | I recently read a paper about embedding.
In Eq. (3), f is a 4096x1 vector. The author tries to compress the vector into theta (a 20x1 vector) by using an embedding matrix E.
The equation is simple theta = E*f
I was wondering if PyTorch can be used to achieve this goal, so that during training the E can be learn... | Assuming your input vectors are one-hot (which is where "embedding layers" are used), you can directly use an embedding layer from torch, which does the above as well as some more things. nn.Embedding takes the non-zero index of the one-hot vector as input, as a long tensor. For example, if the feature vector is
f = [[0,0,1], [1,0,0]]
then inp... | https://stackoverflow.com/questions/55427386/ |
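A sketch of the nn.Embedding equivalent of theta = E*f (the 4096 and 20 sizes come from the question; the concrete indices are illustrative):

```python
import torch
import torch.nn as nn

# Row i of the learnable weight matrix is the embedding of index i,
# which is exactly E*f when f is the one-hot vector for index i.
embed = nn.Embedding(num_embeddings=4096, embedding_dim=20)

# Indices of the non-zero entries of the one-hot vectors, as a LongTensor
idx = torch.tensor([2, 0], dtype=torch.long)  # e.g. f = [[0,0,1,...],[1,0,0,...]]
theta = embed(idx)
print(theta.shape)  # torch.Size([2, 20])
```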
Training model in eval() mode gives better result in PyTorch? | I have a model with Dropout layers (with p=0.6). I ended up training the model in .eval() mode, and again trained the model in .train() mode. I found that training in .eval() mode gave me better accuracy and quicker loss reduction on the training data:
train(): Train loss : 0.832, Validation Loss : 0.821
eval(): Train lo... | It seems the model architecture is simple and, when in train mode, it is not able to capture the features in the data, hence it underfits.
eval() disables dropouts and Batch normalization, among other modules.
This means that the model trains better without dropout helping the model learn better w... | https://stackoverflow.com/questions/55448941/ |
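A sketch showing how dropout behaves differently in the two modes (p=0.6 as in the question; the input is a toy tensor of ones):

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.6)
x = torch.ones(1000)

drop.train()                 # training mode: zeroes ~60% and rescales the rest
train_out = drop(x)
drop.eval()                  # eval mode: dropout becomes the identity
eval_out = drop(x)

print((train_out == 0).float().mean())  # roughly 0.6
print(torch.equal(eval_out, x))         # True
```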