diff --git "a/data.json" "b/data.json" --- "a/data.json" +++ "b/data.json" @@ -1654,5 +1654,1655 @@ "input": "", "output": "

image has a shape of (64,64,3).

\n

Your input placeholder _x has a shape of (?,64,64,3).

\n

The problem is that you're feeding the placeholder with a value of a different shape.

\n

You have to feed it with a value of (1,64,64,3) = a batch of 1 image.

\n

Just reshape your image value to a batch with size one.

\n
image = array(img).reshape(1,64,64,3)\n
\n
\n

P.S: The fact that the input placeholder accepts a batch of images means that you can run predictions for a batch of images in parallel.\nYou can try to read more than 1 image (N images) and then build a batch of N images, using a tensor with shape (N,64,64,3)
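As a sketch of that P.S. (using NumPy, with a hypothetical list of images that have already been resized to 64x64 RGB):

```python
import numpy as np

# Hypothetical: N images, each already resized to 64x64 RGB.
images = [np.zeros((64, 64, 3), dtype=np.float32) for _ in range(8)]

# Stack along a new leading axis to get a batch of shape (N, 64, 64, 3),
# which matches the placeholder shape (?, 64, 64, 3).
batch = np.stack(images, axis=0)
```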

\n", "system": "" + }, + { + "instruction": "Get CUDA_HOME environment path PYTORCH", + "input": "", + "output": "

I had a similar problem when I needed NVCC for flash attention, but it seems that torch ships a reduced set of the CUDA libraries. Installing the toolkit from conda-forge resolved the issue for me:

\n
conda install -c conda-forge cudatoolkit-dev -y\n
\n", + "system": "" + }, + { + "instruction": "AttributeError: 'collections.OrderedDict' object has no attribute 'eval'", + "input": "", + "output": "

It is not a model file; it is a state file. A model file stores the complete model, whereas a state file stores only the parameters.
\nSo, your OrderedDict is just the parameter values for your model. You will need to create the model and then load these values into it. So, the process will be something of the form

\n\n
import torch\nimport torch.nn as nn\n\nclass TempModel(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.conv1 = nn.Conv2d(3, 5, (3, 3))\n    def forward(self, inp):\n        return self.conv1(inp)\n\nmodel = TempModel()\nmodel.load_state_dict(torch.load(file_path))\nmodel.eval()\n
\n\n

You'll need to define your model properly. The one given in the example above is just a dummy. If you construct your model yourself, you might need to update the keys of the saved dict file as mentioned here. The best course of action is to define your model exactly as it was defined when the state_dict was saved; then calling model.load_state_dict directly will work.

\n", + "system": "" + }, + { + "instruction": "KL Divergence for two probability distributions in PyTorch", + "input": "", + "output": "

Yes, PyTorch has a method named kl_div under torch.nn.functional to directly compute the KL divergence between tensors. Suppose you have tensors a and b of the same shape. Note that F.kl_div expects its first argument in log-space and the target as probabilities (by default), so you can use the following code:

\n\n
import torch.nn.functional as F\nout = F.kl_div(a.log(), b)  # first argument must be log-probabilities\n
\n\n

For more details, see the above method documentation.
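A minimal worked sketch, assuming a and b hold discrete probability distributions, checked against the textbook definition:

```python
import torch
import torch.nn.functional as F

# Two discrete probability distributions over four outcomes.
p = torch.tensor([0.1, 0.2, 0.3, 0.4])
q = torch.tensor([0.25, 0.25, 0.25, 0.25])

# F.kl_div takes the input in log-space and the target as probabilities;
# with reduction='sum' this computes KL(p || q).
kl = F.kl_div(q.log(), p, reduction='sum')

# Manual check against the definition: KL(p||q) = sum p * log(p/q).
manual = (p * (p / q).log()).sum()
```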

\n", + "system": "" + }, + { + "instruction": "What is volatile variable in Pytorch", + "input": "", + "output": "

Basically, set the input to a network to volatile if you are only doing inference and won't be running backpropagation, in order to conserve memory.

\n\n

From the docs:

\n\n
\n

Volatile is recommended for purely inference mode, when you\u2019re sure\n you won\u2019t be even calling .backward(). It\u2019s more efficient than any\n other autograd setting - it will use the absolute minimal amount of\n memory to evaluate the model. volatile also determines that\n requires_grad is False.

\n
\n\n

Edit: the volatile keyword has been deprecated as of PyTorch version 0.4.0.
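For reference, a minimal sketch of the modern replacement, the torch.no_grad() context manager:

```python
import torch

model = torch.nn.Linear(4, 2)

# The modern replacement for volatile=True: a no_grad context disables
# autograd tracking, saving memory during pure inference.
with torch.no_grad():
    out = model(torch.randn(1, 4))
```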

\n", + "system": "" + }, + { + "instruction": "How do you alter the size of a Pytorch Dataset?", + "input": "", + "output": "

You can use torch.utils.data.Subset() e.g. for the first 10,000 elements:

\n
import torch\nimport torch.utils.data as data_utils\n\nindices = torch.arange(10000)\ntr_10k = data_utils.Subset(tr, indices)\n
\n", + "system": "" + }, + { + "instruction": "How to solve the famous `unhandled cuda error, NCCL version 2.7.8` error?", + "input": "", + "output": "

I had the right cuda installed meaning:

\n
python -c "import torch;print(torch.version.cuda)"\n\n#was equal to \n\nnvcc -V\n
\n

and

\n
ldconfig -v | grep "libnccl.so" | tail -n1 | sed -r 's/^.*\\.so\\.//' \n
\n

was giving out some version of nccl (e.g., 2.10.3)

\n

The fix was to remove nccl:

\n
sudo apt remove libnccl2 libnccl-dev\n
\n

then the libnccl version check no longer reported any version, but DDP training worked fine!

\n", + "system": "" + }, + { + "instruction": "PyTorch: What is the difference between tensor.cuda() and tensor.to(torch.device("cuda:0"))?", + "input": "", + "output": "

There is no difference between the two.
\nEarly versions of pytorch had .cuda() and .cpu() methods to move tensors and models from cpu to gpu and back. However, this made code writing a bit cumbersome:

\n
if cuda_available:\n  x = x.cuda()\n  model.cuda()\nelse:\n  x = x.cpu()\n  model.cpu()\n
\n

Later versions introduced .to() that basically takes care of everything in an elegant way:

\n
device = torch.device('cuda') if cuda_available else torch.device('cpu')\nx = x.to(device)\nmodel = model.to(device)\n
\n", + "system": "" + }, + { + "instruction": "Pytorch: IndexError: index out of range in self. How to solve?", + "input": "", + "output": "

I think you have mixed up the input dimension declared in torch.nn.Embedding with your input. torch.nn.Embedding is a simple lookup table that stores embeddings of a fixed dictionary and size.

\n

Any input less than zero, or greater than or equal to the declared input dimension, raises this error (in the given example with torch.tensor([10]), 10 is equal to input_dim).\nCompare your input with the dimension mentioned in torch.nn.Embedding.

\n

Attached is a code snippet to simulate the issue.

\n
import torch\nfrom torch import nn\ninput_dim = 10\nembedding_dim = 2\nembedding = nn.Embedding(input_dim, embedding_dim)\nerr = True\nif err:\n    #Any input more than input_dim - 1, here input_dim = 10\n    #Any input less than zero\n    input_to_embed = torch.tensor([10])\nelse:\n    input_to_embed = torch.tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\nembed = embedding(input_to_embed)\nprint(embed)\n
\n

Hope this will solve your issue.

\n", + "system": "" + }, + { + "instruction": "How to modify path where Torch Hub models are downloaded", + "input": "", + "output": "

From the official documentation, there are several ways to modify this path.
\nIn priority order:

\n
    \n
  1. Calling hub.set_dir(<PATH_TO_HUB_DIR>)
    \n
  2. $TORCH_HOME/hub, if environment variable TORCH_HOME is set.
    \n
  3. $XDG_CACHE_HOME/torch/hub, if environment variable XDG_CACHE_HOME is set.
    \n
  4. ~/.cache/torch/hub
\n

So I just had to do:

\n

export TORCH_HOME=/my/path/

\n
\n

Edit

\n

TORCH_HUB appears to be deprecated; use TORCH_HOME instead.
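The highest-priority option can also be set from Python; here is a minimal sketch (the path is just a hypothetical example):

```python
import torch

# Highest-priority option: set the hub directory programmatically.
# '/tmp/my_hub_cache' is a hypothetical path; use whatever suits you.
torch.hub.set_dir('/tmp/my_hub_cache')
```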

\n", + "system": "" + }, + { + "instruction": "How to implement dropout in Pytorch, and where to apply it", + "input": "", + "output": "

The two examples you provided are exactly the same. self.drop_layer = nn.Dropout(p=p) and self.dropout = nn.Dropout(p) only differ because the authors assigned the layers to different variable names. The dropout layer is typically defined in the .__init__() method, and called in .forward(). Like this:

\n
import torch.nn as nn\nimport torch.nn.functional as F\n\nclass NeuralNet(nn.Module):\n  def __init__(self, input_size, hidden_size, num_classes, p=0.5):\n      super(NeuralNet, self).__init__()\n      self.fc1 = nn.Linear(input_size, hidden_size)\n      self.fc2 = nn.Linear(hidden_size, hidden_size)\n      self.fc3 = nn.Linear(hidden_size, num_classes)\n      self.dropout = nn.Dropout(p) \n\n  def forward(self, x):\n      out = F.relu(self.fc1(x))\n      out = F.relu(self.fc2(out))\n      out = self.dropout(self.fc3(out))\n      return out\n
\n

You can do the test:

\n
import torch\nimport torch.nn  as nn\n\nm = nn.Dropout(p=0.5)\ninput = torch.randn(20, 16)\nprint(torch.sum(torch.nonzero(input)))\nprint(torch.sum(torch.nonzero(m(input))))\n
\n
tensor(5440) # sum of the indices of nonzero entries\ntensor(2656) # same sum after dropout zeroes roughly half the entries\n
\n

Let's visualize it:

\n
import torch\nimport torch.nn as nn\ninput = torch.randn(5, 5)\nprint(input)\n
\n
tensor([[ 1.1404,  0.2102, -0.1237,  0.4240,  0.0174],\n        [-2.0872,  1.2790,  0.7804, -0.0962, -0.9730],\n        [ 0.4788, -1.3408,  0.0483,  2.4125, -1.2463],\n        [ 1.5761,  0.3592,  0.2302,  1.3980,  0.0154],\n        [-0.4308,  0.2484,  0.8584,  0.1689, -1.3607]])\n
\n

Now, let's apply the dropout:

\n
m = nn.Dropout(p=0.5)\noutput = m(input)\nprint(output)\n
\n
tensor([[ 0.0000,  0.0000, -0.0000,  0.8481,  0.0000],\n        [-0.0000,  0.0000,  1.5608, -0.0000, -1.9459],\n        [ 0.0000, -0.0000,  0.0000,  0.0000, -0.0000],\n        [ 0.0000,  0.7184,  0.4604,  2.7959,  0.0308],\n        [-0.0000,  0.0000,  0.0000,  0.0000, -0.0000]])\n
\n

Approximately half the neurons have been turned to zero, because we had probability p=0.5 that a neuron is set to zero!

\n", + "system": "" + }, + { + "instruction": "Filter data in pytorch tensor", + "input": "", + "output": "

Take a look at torch.nonzero, which is roughly equivalent to np.where. It translates a binary mask into indices:

\n
>>> X = torch.tensor([0.1, 0.5, -1.0, 0, 1.2, 0])\n>>> mask = X >= 0\n>>> mask\ntensor([1, 1, 0, 1, 1, 1], dtype=torch.uint8)\n\n>>> indices = torch.nonzero(mask)\n>>> indices\ntensor([[0],\n        [1],\n        [3],\n        [4],\n        [5]])\n\n>>> X[indices]\ntensor([[0.1000],\n        [0.5000],\n        [0.0000],\n        [1.2000],\n        [0.0000]])\n
\n

A solution would then be to write:

\n
mask = X >= 0\nnew_tensor = X[mask]\nindices = torch.nonzero(mask)\n
\n", + "system": "" + }, + { + "instruction": "What are the difference between .bin and .pt pytorch saved model types?", + "input": "", + "output": "

There is no difference; it's just an extension. On UNIX-like OSes one can open a file no matter its extension (see here); Windows, on the other hand, is built with extensions in mind (here).

\n

torch can read either .bin or .pt or .anything, so it's probably a convention employed by the creators of that repository.

\n

The standard approach is to use .pt or .pth, though the .pth extension collides with Python's path-configuration files read by the interpreter, so .pt seems to be the best idea for now (see this github issue).
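A quick sketch showing that the extension really is just a naming convention for torch.save/torch.load (the temporary path is illustrative):

```python
import os
import tempfile
import torch

model = torch.nn.Linear(2, 2)

# The extension is only a convention; torch.save works the same either way.
path = os.path.join(tempfile.mkdtemp(), 'model.pt')
torch.save(model.state_dict(), path)

state = torch.load(path)
```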

\n", + "system": "" + }, + { + "instruction": "How to convert a pytorch tensor of ints to a tensor of booleans?", + "input": "", + "output": "

What you're looking for is to generate a boolean mask for the given integer tensor. For this, you can simply check the condition "whether the values in the tensor are greater than 0" using the simple comparison operator (>) or using torch.gt(), which would then give us the desired result.

\n
# input tensor\nIn [76]: t   \nOut[76]: tensor([ 0, 10,  0, 16])\n\n# generate the needed boolean mask\nIn [78]: t > 0      \nOut[78]: tensor([0, 1, 0, 1], dtype=torch.uint8)\n
\n
\n
# sanity check\nIn [93]: mask = t > 0      \n\nIn [94]: mask.type()      \nOut[94]: 'torch.ByteTensor'\n
\n
\n

Note: In PyTorch version 1.4+, the above operation would return 'torch.BoolTensor'

\n
In [9]: t > 0  \nOut[9]: tensor([False,  True, False,  True])\n\n# alternatively, use `torch.gt()` API\nIn [11]: torch.gt(t, 0)\nOut[11]: tensor([False,  True, False,  True])\n
\n

If you indeed want single bits (either 0s or 1s), cast it using:

\n
In [14]: (t > 0).type(torch.uint8)   \nOut[14]: tensor([0, 1, 0, 1], dtype=torch.uint8)\n\n# alternatively, use `torch.gt()` API\nIn [15]: torch.gt(t, 0).int()\nOut[15]: tensor([0, 1, 0, 1], dtype=torch.int32)\n
\n

The reason for this change has been discussed in this feature-request issue: issues/4764 - Introduce torch.BoolTensor ...

\n
\n

TL;DR: Simple one liner

\n
t.bool().int()\n
\n", + "system": "" + }, + { + "instruction": "How do you convert a .onnx to tflite?", + "input": "", + "output": "

I think the ONNX file, i.e. model.onnx, that you have given is corrupted. I don't know what the issue is, but it is not doing any inference on ONNX Runtime.

\n
\n

Now you can run PyTorch Models directly on mobile phones. check out PyTorch Mobile's documentation here

\n
\n
\n

This answer is for TensorFlow version 1.
\nFor TensorFlow version 2 or higher, see the linked answer.

\n
\n

The best way to convert the model from a protobuf FreezeGraph to TFLite is to use the official TensorFlow Lite converter documentation

\n

According to TensorFlow Docs, TocoConverter has been deprecated

\n
\n

This class (tf.compat.v1.lite.TocoConverter) has been deprecated. Please use lite.TFLiteConverter instead.

\n
\n

Convert from PyTorch to ONNX model

\n

The best practice for converting the model from PyTorch to ONNX is to add the following parameters to specify the names of the input and output layers of your model in the torch.onnx.export() function

\n
\n# Export the model from PyTorch to ONNX\ntorch.onnx.export(model,             # model being run\n                  x,          # model input (or a tuple for multiple inputs)\n                  EXPORT_PATH + "mnist.onnx",      # where to save the model (can be a file or file-like object)\n                  export_params=True,       # store the trained parameter weights inside the model file\n                  input_names=['main_input'],     # specify the name of input layer in onnx model\n                  output_names=['main_output'])     # specify the name of output layer in onnx model\n\n
\n

So in your case:\nNow export this model to TensorFlow protobuf FreezeGraph using onnx-tf

\n
\n

Please note that this method only works when tensorflow_version < 2

\n
\n

Convert from ONNX to TensorFlow FreezeGraph

\n

To convert the model please install onnx-tf version 1.5.0 from the below command

\n
pip install  onnx-tf==1.5.0\n
\n

Now to convert .onnx model to TensorFlow freeze graph run this below command in shell

\n
onnx-tf convert -i "mnist.onnx" -o  "mnist.pb"\n
\n

Convert from TensorFlow FreezeGraph .pb to TFLite

\n

Now, to convert this model from a .pb file to a tflite model, use this code

\n
import tensorflow as tf\n# make a converter object from the saved tensorflow file\nconverter = tf.lite.TFLiteConverter.from_frozen_graph('mnist.pb', #TensorFlow freezegraph .pb model file\n                                                      input_arrays=['main_input'], # name of input arrays as defined in torch.onnx.export function before.\n                                                      output_arrays=['main_output']  # name of output arrays defined in torch.onnx.export function before.\n                                                      )\n# tell converter which type of optimization techniques to use\nconverter.optimizations = [tf.lite.Optimize.DEFAULT]\n# to view the best option for optimization read documentation of tflite about optimization\n# go to this link https://www.tensorflow.org/lite/guide/get_started#4_optimize_your_model_optional\n\n# convert the model \ntf_lite_model = converter.convert()\n# save the converted model \nopen('mnist.tflite', 'wb').write(tf_lite_model)\n
\n

To choose which option is best for optimization for your model use case see this official guide about TensorFlow lite optimization

\n

https://www.tensorflow.org/lite/guide/get_started#4_optimize_your_model_optional

\n
\n

Note: You can try my Jupyter Notebook Convert ONNX model to Tensorflow Lite on Google Colaboratory link

\n
\n", + "system": "" + }, + { + "instruction": "How do you load MNIST images into Pytorch DataLoader?", + "input": "", + "output": "

Here's what I did for pytorch 0.4.1 (should still work in 1.3)

\n\n
def load_dataset():\n    data_path = 'data/train/'\n    train_dataset = torchvision.datasets.ImageFolder(\n        root=data_path,\n        transform=torchvision.transforms.ToTensor()\n    )\n    train_loader = torch.utils.data.DataLoader(\n        train_dataset,\n        batch_size=64,\n        num_workers=0,\n        shuffle=True\n    )\n    return train_loader\n\nfor batch_idx, (data, target) in enumerate(load_dataset()):\n    #train network\n
\n", + "system": "" + }, + { + "instruction": "Pytorch beginner : tensor.new method", + "input": "", + "output": "

As the documentation of tensor.new() says:

\n\n
\n

Constructs a new tensor of the same data type as self tensor.

\n
\n\n

Also note:

\n\n
\n

For CUDA tensors, this method will create new tensor on the same device as this tensor.

\n
\n", + "system": "" + }, + { + "instruction": "What's the reason of the error ValueError: Expected more than 1 value per channel?", + "input": "", + "output": "

It will fail on batches of size 1 if we use feature-wise batch normalization.

\n\n

As Batch normalization computes:

\n\n
y = (x - mean(x)) / (std(x) + eps)\n
\n\n

If we have one sample per batch then mean(x) = x, and the output will be entirely zero (ignoring the bias). We can't use that for learning...
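A minimal sketch reproducing the error with a batch of size 1, and showing that eval mode (which uses running statistics instead of batch statistics) does not have this problem:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(4)   # feature-wise batch norm over 4 channels
bn.train()

# A batch of a single sample: the per-channel batch std is undefined.
try:
    bn(torch.randn(1, 4))
    raised = False
except ValueError:
    raised = True

# In eval mode the stored running statistics are used instead,
# so a single sample is fine.
bn.eval()
out = bn(torch.randn(1, 4))
```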

\n", + "system": "" + }, + { + "instruction": "PyTorch equivalence for softmax_cross_entropy_with_logits", + "input": "", + "output": "
\n

is there an equivalent PyTorch loss function for TensorFlow's softmax_cross_entropy_with_logits?

\n
\n

torch.nn.functional.cross_entropy

\n

This takes logits as inputs (performing log_softmax internally). Here "logits" are just some values that are not probabilities (i.e. not necessarily in the interval [0,1]).

\n

But logits are also the values that will be converted to probabilities.\nIf you consider the name of the TensorFlow function, you will understand it is a pleonasm (since the with_logits part assumes softmax will be called).

\n

The PyTorch implementation looks like this:

\n
loss = F.cross_entropy(x, target)\n
\n

Which is equivalent to:

\n
lp = F.log_softmax(x, dim=-1)\nloss = F.nll_loss(lp, target)\n
\n

It is not F.binary_cross_entropy_with_logits, because that function assumes multi-label classification:

\n
F.sigmoid + F.binary_cross_entropy = F.binary_cross_entropy_with_logits\n
\n

It is not torch.nn.functional.nll_loss either, because that function takes log-probabilities (after log_softmax()), not logits.
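The claimed equivalence can be checked numerically with a small sketch:

```python
import torch
import torch.nn.functional as F

x = torch.randn(5, 3)                      # logits: 5 samples, 3 classes
target = torch.tensor([0, 1, 2, 1, 0])

loss_ce = F.cross_entropy(x, target)       # log_softmax + nll_loss in one call
loss_nll = F.nll_loss(F.log_softmax(x, dim=-1), target)
```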

\n", + "system": "" + }, + { + "instruction": "Why does autograd not produce gradient for intermediate variables?", + "input": "", + "output": "
\n

By default, gradients are only retained for leaf variables. non-leaf variables' gradients are not retained to be inspected later. This was\n done by design, to save memory.

\n
\n\n

-soumith chintala

\n\n

See: https://discuss.pytorch.org/t/why-cant-i-see-grad-of-an-intermediate-variable/94

\n\n

Option 1:

\n\n

Call y.retain_grad()

\n\n
x = Variable(torch.Tensor([2]), requires_grad=True)\ny = x * x\nz = y * y\n\ny.retain_grad()\n\nz.backward()\n\nprint(y.grad)\n#Variable containing:\n# 8\n#[torch.FloatTensor of size 1]\n
\n\n

Source: https://discuss.pytorch.org/t/why-cant-i-see-grad-of-an-intermediate-variable/94/16

\n\n

Option 2:

\n\n

Register a hook, which is basically a function called when that gradient is calculated. Then you can save it, assign it, print it, whatever...

\n\n
from __future__ import print_function\nimport torch\nfrom torch.autograd import Variable\n\nx = Variable(torch.Tensor([2]), requires_grad=True)\ny = x * x\nz = y * y\n\ny.register_hook(print) ## this can be anything you need it to be\n\nz.backward()\n
\n\n

output:

\n\n
Variable containing:  8 [torch.FloatTensor of size 1]\n
\n\n

Source: https://discuss.pytorch.org/t/why-cant-i-see-grad-of-an-intermediate-variable/94/2

\n\n

Also see: https://discuss.pytorch.org/t/why-cant-i-see-grad-of-an-intermediate-variable/94/7

\n", + "system": "" + }, + { + "instruction": "In PyTorch, what exactly does the grad_fn attribute store and how is it used?", + "input": "", + "output": "

grad_fn is a function "handle", giving access to the applicable gradient function. The gradient at the given point is a coefficient for adjusting weights during back-propagation.

\n

"Handle" is a general term for an object descriptor, designed to give appropriate access to the object. For instance, when you open a file, open returns a file handle. When you instantiate a class, the __init__ function returns a handle to the created instance. The handle contains references (usually memory addresses) to the data and functions for the item in question.

\n

It appears as the generic object class because it comes from an underlying implementation in another language, such that it does not map exactly to the Python function type. PyTorch handles the inter-language call and return. This hand-off is part of the pre-compiled (shared-object) run-time system.

\n

Is that enough to clarify what you see?
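A minimal illustration of the handle in action (leaf tensors have no grad_fn; tensors produced by recorded operations do):

```python
import torch

x = torch.tensor([2.0], requires_grad=True)   # a leaf tensor: no grad_fn
y = x * x                                      # autograd records the op

# y.grad_fn is the handle to the backward function of the multiplication;
# its next_functions attribute links it into the rest of the graph.
name = type(y.grad_fn).__name__
```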

\n", + "system": "" + }, + { + "instruction": "ImportError: libc10.so: cannot open shared object file: No such file or directory", + "input": "", + "output": "

libc10.so is made available by pytorch. Hence first

\n
import torch\n
\n

and then import packages that depend on pytorch.

\n", + "system": "" + }, + { + "instruction": "What is running loss in PyTorch and how is it calculated", + "input": "", + "output": "

It's because the loss given by CrossEntropyLoss or other loss functions is divided by the number of elements, i.e. the reduction parameter is 'mean' by default.

\n
torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')\n
\n

Hence, loss.item() contains the loss of the entire mini-batch divided by the batch size. That's why loss.item() is multiplied by the batch size, given by inputs.size(0), when calculating running_loss.
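A small sketch showing why the multiplication recovers the batch total:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()            # reduction='mean' by default
inputs = torch.randn(4, 3)                   # mini-batch of 4 samples, 3 classes
targets = torch.tensor([0, 2, 1, 0])

loss = criterion(inputs, targets)            # mean loss over the 4 samples

# Multiply by the batch size to undo the mean, as done for running_loss.
running_loss = loss.item() * inputs.size(0)

# Equivalent to summing the per-sample losses directly.
batch_total = nn.CrossEntropyLoss(reduction='sum')(inputs, targets).item()
```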

\n", + "system": "" + }, + { + "instruction": "AttributeError: module 'torch' has no attribute '_six'. Bert model in Pytorch", + "input": "", + "output": "

I was getting this error when trying to load the CIFAR10 dataset. A simple restart didn't solve it! I found that in some (conda) environments it was working fine.

\n

Updating torch and torchvision helped:

\n
pip install --upgrade torch torchvision\n
\n", + "system": "" + }, + { + "instruction": "Autograd.grad() for Tensor in pytorch", + "input": "", + "output": "

\nLet's start from a simple working example with a plain loss function and regular backward. We will build a short computational graph and do some grad computations on it.

\n\n

Code:

\n\n
import torch\nfrom torch.autograd import grad\nimport torch.nn as nn\n\n\n# Create some dummy data.\nx = torch.ones(2, 2, requires_grad=True)\ngt = torch.ones_like(x) * 16 - 0.5  # \"ground-truths\" \n\n# We will use MSELoss as an example.\nloss_fn = nn.MSELoss()\n\n# Do some computations.\nv = x + 2\ny = v ** 2\n\n# Compute loss.\nloss = loss_fn(y, gt)\n\nprint(f'Loss: {loss}')\n\n# Now compute gradients:\nd_loss_dx = grad(outputs=loss, inputs=x)\nprint(f'dloss/dx:\\n {d_loss_dx}')\n
\n\n

Output:

\n\n
Loss: 42.25\ndloss/dx:\n(tensor([[-19.5000, -19.5000], [-19.5000, -19.5000]]),)\n
\n\n

Ok, this works! Now let's try to reproduce the error \"grad can be implicitly created only for scalar outputs\". As you can notice, loss in the previous example is a scalar. backward() and grad() by default deal with a single scalar value: loss.backward(torch.tensor(1.)). If you try to pass a tensor with more values, you will get an error.

\n\n

Code:

\n\n
v = x + 2\ny = v ** 2\n\ntry:\n    dy_hat_dx = grad(outputs=y, inputs=x)\nexcept RuntimeError as err:\n    print(err)\n
\n\n

Output:

\n\n

grad can be implicitly created only for scalar outputs

\n\n

Therefore, when using grad() you need to specify grad_outputs parameter as follows:

\n\n

Code:

\n\n
v = x + 2\ny = v ** 2\n\ndy_dx = grad(outputs=y, inputs=x, grad_outputs=torch.ones_like(y))\nprint(f'dy/dx:\\n {dy_dx}')\n\ndv_dx = grad(outputs=v, inputs=x, grad_outputs=torch.ones_like(v))\nprint(f'dv/dx:\\n {dv_dx}')\n
\n\n

Output:

\n\n
dy/dx:\n(tensor([[6., 6.],[6., 6.]]),)\n\ndv/dx:\n(tensor([[1., 1.], [1., 1.]]),)\n
\n\n

NOTE: If you are using backward() instead, simply do y.backward(torch.ones_like(y)).

\n", + "system": "" + }, + { + "instruction": "pytorch delete model from gpu", + "input": "", + "output": "

Freeing memory in PyTorch works as it does with the normal Python garbage collector. This means once all references to a Python object are gone, it will be deleted.

\n\n

You can delete references by using the del operator:

\n\n
del model\n
\n\n

You have to make sure, though, that there is no reference to the respective object left; otherwise the memory won't be freed.

\n\n

So once you've deleted all references of your model, it should be deleted and the memory freed.
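A minimal sketch of the full cleanup sequence (the empty_cache call additionally returns PyTorch's cached GPU blocks to the driver; it is a no-op when CUDA is not in use):

```python
import gc
import torch

model = torch.nn.Linear(4, 2)
if torch.cuda.is_available():
    model = model.cuda()

del model                     # drop the (only) reference to the model
gc.collect()                  # make sure Python has collected the object
torch.cuda.empty_cache()      # return cached blocks to the driver (no-op on CPU)

freed = True
```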

\n\n

If you want to learn more about memory management you can take a look here:\nhttps://pytorch.org/docs/stable/notes/cuda.html#cuda-memory-management

\n", + "system": "" + }, + { + "instruction": "AttributeError: module 'torch' has no attribute '_six'. Bert model in Pytorch", + "input": "", + "output": "

I was getting this error when trying to load the CIFAR10 dataset. A simple restart didn't solve it! I found that in some (conda) environments it was working fine.

\n

Updating torch and torchvision helped:

\n
pip install --upgrade torch torchvision\n
\n", + "system": "" + }, + { + "instruction": "Pytorch version for cuda 12.2", + "input": "", + "output": "

You can install the nightly build. Note you should have cuDNN installed already; I am using cuDNN v8.9.3. The 12.1 PyTorch version works fine with CUDA v12.2.2:

\n

conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch-nightly -c nvidia

\n", + "system": "" + }, + { + "instruction": "PyTorch Error loading "\\lib\\site-packages\\torch\\lib\\shm.dll" or one of its dependencies", + "input": "", + "output": "

pip install torch==2.2.1 solved my problem. I additionally tried conda install cudatoolkit, but that did not help.

\n", + "system": "" + }, + { + "instruction": "what the difference between att_mask and key_padding_mask in MultiHeadAttnetion", + "input": "", + "output": "

The key_padding_mask is used to mask out positions that are padding, i.e., after the end of the input sequence. It is always specific to the input batch and depends on how long the sequences in the batch are compared to the longest one. It is a 2D tensor of shape batch size \u00d7 input length.

\n

On the other hand, attn_mask says what key-value pairs are valid. In a Transformer decoder, a triangular mask is used to simulate inference time and prevent attending to the "future" positions. This is what attn_mask is usually used for. If it is a 2D tensor, the shape is input length \u00d7 input length. You can also have a mask that is specific to every item in a batch. In that case, you can use a 3D tensor of shape (batch size \u00d7 num heads) \u00d7 input length \u00d7 input length. (So, in theory, you can simulate key_padding_mask with a 3D attn_mask.)
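A minimal sketch with nn.MultiheadAttention combining both masks (the dimensions and the "first sequence has real length 3" setup are illustrative assumptions):

```python
import torch
import torch.nn as nn

B, S, E, H = 2, 5, 8, 2
mha = nn.MultiheadAttention(embed_dim=E, num_heads=H, batch_first=True)
x = torch.randn(B, S, E)

# key_padding_mask: (B, S) bool, True marks padding positions to ignore.
key_padding_mask = torch.zeros(B, S, dtype=torch.bool)
key_padding_mask[0, 3:] = True     # first sequence has real length 3

# attn_mask: (S, S) causal triangle, True blocks "future" key positions.
attn_mask = torch.triu(torch.ones(S, S, dtype=torch.bool), diagonal=1)

out, weights = mha(x, x, x,
                   key_padding_mask=key_padding_mask,
                   attn_mask=attn_mask)
```

The attention weights for the padded key positions of the first sequence come out as exactly zero.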

\n", + "system": "" + }, + { + "instruction": "How to add parameters in module class in pytorch custom model?", + "input": "", + "output": "

You need to register your parameters:

\n
self.register_parameter(name='bias', param=torch.nn.Parameter(torch.randn(3)))\n
\n
\n

Update:
\nIn more recent versions of PyTorch, you no longer need to explicitly call register_parameter; it's enough to set a member of your nn.Module to an nn.Parameter to "notify" pytorch that this variable should be treated as a trainable parameter:

\n
self.bias = torch.nn.Parameter(torch.randn(3))\n
\n

Please note that if you want to have more complex data structures of parameters (e.g., lists), you should use dedicated containers like torch.nn.ParameterList or torch.nn.ParameterDict.
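A small sketch combining both points (the module and its members are illustrative):

```python
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        # Assigning an nn.Parameter to an attribute registers it automatically.
        self.bias = nn.Parameter(torch.randn(3))
        # A plain Python list of Parameters would NOT be registered;
        # use nn.ParameterList (or nn.ParameterDict) for collections.
        self.weights = nn.ParameterList(
            [nn.Parameter(torch.randn(3, 3)) for _ in range(2)]
        )

m = MyModule()
n_params = sum(1 for _ in m.parameters())   # bias + 2 weight matrices
```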

\n", + "system": "" + }, + { + "instruction": "RuntimeError: output with shape [1, 224, 224] doesn't match the broadcast shape [3, 224, 224]", + "input": "", + "output": "

The first dimension of the tensor is the channel dimension, so what your error means is that you are giving a grayscale picture (1 channel), while the data loader expects an RGB image (3 channels). You defined a pil_loader function that returns an image in RGB, but you are never using it.

\n\n

So you have two options:

\n\n
    \n
  1. Work with the image in grayscale instead of RGB, which is computationally cheaper. \nSolution: In both the train and test transforms, change transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) to transforms.Normalize((0.5,), (0.5,)).
    \n
  2. Make sure your image is in RGB. I don't know how your images are stored, but I guess you downloaded the dataset in grayscale. One thing you could try is using the pil_loader function you defined. Try changing img = Image.open(os.path.join(self.root, self.elements[index].rstrip())) to img = pil_loader(os.path.join(self.root, self.elements[index].rstrip())) in your __getitem__ function.
\n\n
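As a third, tensor-level workaround (a sketch, not part of the original options): replicate the single channel so the grayscale tensor matches the expected RGB shape.

```python
import torch

gray = torch.randn(1, 224, 224)    # single-channel image tensor

# Replicate the channel so the (1, H, W) grayscale tensor looks like a
# (3, H, W) RGB tensor; expand creates a view without copying data.
rgb = gray.expand(3, -1, -1)
```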

Let me know how it goes!

\n", + "system": "" + }, + { + "instruction": "Where is the source code of pytorch conv2d?", + "input": "", + "output": "

The functional code is implemented in C++. As of version 1.13.1 the entry point into the C++ code for conv2d is at aten/src/ATen/native/Convolution.cpp:804.

\n

If you are interested more generally in how functions are registered to the API then you can take a look at aten/src/ATen/native/README.md. A deeper dive will benefit from understanding some of the design decisions in PyTorch. For example, the dispatcher mechanism (see here). More general information can be found in the PyTorch developer's wiki, though keep in mind this wiki is primarily a tool for contributors and is not as polished as the Python API documentation. IMO a good starting point is the Core Frontend Onboarding page which gives links to most everything needed to get your head around the PyTorch source code.

\n", + "system": "" + }, + { + "instruction": "How does the __getitem__'s idx work within PyTorch's DataLoader?", + "input": "", + "output": "

What defines the idx is the sampler or batch_sampler, as you can see here (open-source projects are your friend). In this code (and comment/docstring) you can see the difference between sampler and batch_sampler. If you look here you'll see how the index is chosen:

\n\n
def __next__(self):\n    index = self._next_index()\n\n# and _next_index is implemented on the base class (_BaseDataLoaderIter)\ndef _next_index(self):\n    return next(self._sampler_iter)\n\n# self._sampler_iter is defined in the __init__ like this:\nself._sampler_iter = iter(self._index_sampler)\n\n# and self._index_sampler is a property implemented like this (modified to one-liner for simplicity):\nself._index_sampler = self.batch_sampler if self._auto_collation else self.sampler\n
\n\n

Pay attention that this is the _SingleProcessDataLoaderIter implementation; you can find the _MultiProcessingDataLoaderIter here (ofc, which one is used depends on the num_workers value, as you can see here). Going back to the samplers, assuming your Dataset is not _DatasetKind.Iterable and that you are not providing a custom sampler, it means you are either using (dataloader.py#L212-L215):

\n\n
if shuffle:\n    sampler = RandomSampler(dataset)\nelse:\n    sampler = SequentialSampler(dataset)\n\nif batch_size is not None and batch_sampler is None:\n    # auto_collation without custom batch_sampler\n    batch_sampler = BatchSampler(sampler, batch_size, drop_last)\n
\n\n

Let's take a look at how the default BatchSampler builds a batch:

\n\n
def __iter__(self):\n    batch = []\n    for idx in self.sampler:\n        batch.append(idx)\n        if len(batch) == self.batch_size:\n            yield batch\n            batch = []\n    if len(batch) > 0 and not self.drop_last:\n        yield batch\n
\n\n

Very simple: it gets indices from the sampler until the desired batch_size is reached.

\n\n

Now the question \"How does the __getitem__'s idx work within PyTorch's DataLoader?\" can be answered by seeing how each default sampler works.

\n\n\n\n
class SequentialSampler(Sampler):\n    def __init__(self, data_source):\n        self.data_source = data_source\n\n    def __iter__(self):\n        return iter(range(len(self.data_source)))\n\n    def __len__(self):\n        return len(self.data_source)\n
\n\n<div>RandomSampler, instead, draws the indices randomly (with or without replacement):\n\n<div>
def __iter__(self):\n    n = len(self.data_source)\n    if self.replacement:\n        return iter(torch.randint(high=n, size=(self.num_samples,), dtype=torch.int64).tolist())\n    return iter(torch.randperm(n).tolist())\n
\n\n

Therefore, as you did not provide any code, we can only assume:

\n\n
    \n
  1. You are using shuffle=True in your DataLoader or
  2. \n
  3. You are using a custom sampler or
  4. \n
  5. Your Dataset is _DatasetKind.Iterable
  6. \n
\n", + "system": "" + }, + { + "instruction": "PyTorch - Getting the 'TypeError: pic should be PIL Image or ndarray. Got <class 'numpy.ndarray'>' error", + "input": "", + "output": "

This happens because of the transformation you use:

\n\n
self.transform = transforms.Compose([transforms.ToTensor()])\n
\n\n

As you can see in the documentation, torchvision.transforms.ToTensor converts a PIL Image or numpy.ndarray to tensor. So if you want to use this transformation, your data has to be of one of the above types.

\n", + "system": "" + }, + { + "instruction": "TypeError: can't convert np.ndarray of type numpy.object_", + "input": "", + "output": "

It is difficult to answer properly since you do not show us how you try to do it. From your error message I can see that you try to convert a numpy array containing objects to a torch tensor. This does not work, you will need a numeric data type:

\n
import torch\nimport numpy as np\n\n# Your test array without 'dtype=object'\na = np.array([\n    np.array([0.5, 1.0, 2.0], dtype=np.float16),\n    np.array([4.0, 6.0, 8.0], dtype=np.float16),\n])\n\nb = torch.from_numpy(a)\n\nprint(a.dtype) # This should not be 'object'\nprint(b)\n
\n

Output

\n
float16\ntensor([[0.5000, 1.0000, 2.0000],\n        [4.0000, 6.0000, 8.0000]], dtype=torch.float16)\n
\n", + "system": "" + }, + { + "instruction": "Indexing a multi-dimensional tensor with a tensor in PyTorch", + "input": "", + "output": "

A more elegant (and simpler) solution might be to simply cast b to a tuple:

\n\n
a[tuple(b)]\nOut[10]: tensor(5.)\n
\n\n

I was curious to see how this works with \"regular\" numpy, and found a related article explaining this quite well here.

\n", + "system": "" + }, + { + "instruction": "Parsing CSV into Pytorch tensors", + "input": "", + "output": "

Try converting it to an array first:

\n\n
test_tensor = torch.Tensor(test.values)\n
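For instance, a small self-contained sketch using an in-memory CSV (the column names and values here are made up):

```python
import io

import pandas as pd
import torch

csv_data = io.StringIO("a,b\n1.0,2.0\n3.0,4.0")  # stand-in for your CSV file
test = pd.read_csv(csv_data)
test_tensor = torch.Tensor(test.values)  # DataFrame -> ndarray -> float32 tensor
print(test_tensor.shape)  # torch.Size([2, 2])
```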
\n", + "system": "" + }, + { + "instruction": "How to multiply a matrix by a vector in PyTorch", + "input": "", + "output": "

You're looking for

\n\n
torch.mv(a,b)\n
\n\n

Note that for the future, you may also find torch.matmul() useful. torch.matmul() infers the dimensionality of your arguments and accordingly performs either dot products between vectors, matrix-vector or vector-matrix multiplication, matrix multiplication or batch matrix multiplication for higher order tensors.
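For example, a quick sketch showing both calls on the same inputs:

```python
import torch

a = torch.arange(6.).reshape(2, 3)   # 2x3 matrix [[0,1,2],[3,4,5]]
b = torch.tensor([1., 0., 1.])       # vector of length 3

print(torch.mv(a, b))        # tensor([2., 8.])
print(torch.matmul(a, b))    # same result, dimensionality inferred
print(a @ b)                 # the @ operator also dispatches to matmul
```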

\n", + "system": "" + }, + { + "instruction": "Cannot install PyTorch with Python 3.11 (Windows)", + "input": "", + "output": "

As of now, PyTorch only provides Linux builds for Python 3.11.\nYou can check all available wheels here.

\n

https://download.pytorch.org/whl/torch/

\n", + "system": "" + }, + { + "instruction": "GUnicorn + CUDA: Cannot re-initialize CUDA in forked subprocess", + "input": "", + "output": "

Reason for the Error

\n

As correctly stated in the comments by @Newbie, the issue isn't the model itself, but the CUDA context. When new child processes are forked, the parent's memory is shared read-only with the child, but the CUDA context doesn't support this sharing, it must be copied to the child. Hence, it reports above-mentioned error.

\n

Spawn instead of Fork

\n

To resolve this issue, we have to change the start method for the child processes from fork to spawn with multiprocessing.set_start_method. The following simple example works fine:

\n
import torch\nimport torch.multiprocessing as mp\n\n\ndef f(y):\n    y[0] = 1000\n\n\nif __name__ == '__main__':\n    x = torch.zeros(1).cuda()\n    x.share_memory_()\n\n    mp.set_start_method('spawn')\n    p = mp.Process(target=f, args=(x,), daemon=True)\n    p.start()\n    p.join()\n    print("x =", x.item())\n
\n

When running this code, a second CUDA context is initialized (this can be observed via watch -n 1 nvidia-smi in a second window), and f is executed after the context was initialized completely. After this, x = 1000.0 is printed on the console, thus, we confirmed that the tensor x was successfully shared between the processes.

\n

However, Gunicorn internally uses os.fork to start the worker processes, so multiprocessing.set_start_method has no influence on Gunicorn's behavior. Consequently, initializing the CUDA context in the root process must be avoided.

\n

Solution for Gunicorn

\n

In order to share the model among the worker processes, we thus must load the model in one single process and share it with the workers. Luckily, sending a CUDA tensor via a torch.multiprocessing.Queue to another process doesn't copy the parameters on the GPU, so we can use those queues for this problem.

\n
import time\n\nimport torch\nimport torch.multiprocessing as mp\n\n\ndef f(q):\n    y = q.get()\n    y[0] = 1000\n\n\ndef g(q):\n    x = torch.zeros(1).cuda()\n    x.share_memory_()\n    q.put(x)\n    q.put(x)\n    while True:\n        time.sleep(1)  # this process must live as long as x is in use\n\n\nif __name__ == '__main__':\n    queue = mp.Queue()\n    pf = mp.Process(target=f, args=(queue,), daemon=True)\n    pf.start()\n    pg = mp.Process(target=g, args=(queue,), daemon=True)\n    pg.start()\n    pf.join()\n    x = queue.get()\n    print("x =", x.item())  # Prints x = 1000.0\n
\n

For the Gunicorn server, we can use the same strategy: A model server process loads the model and serves it to each new worker process after its fork. In the post_fork hook the worker requests and receives the model from the model server. A Gunicorn configuration could look like this:

\n
import logging\n\nfrom client import request_model\nfrom app import app\n\nlogging.basicConfig(level=logging.INFO)\n\nbind = "localhost:8080"\nworkers = 1\nzmq_url = "tcp://127.0.0.1:5555"\n\n\ndef post_fork(server, worker):\n    app.config['MODEL'], app.config['COUNTER'] = request_model(zmq_url)\n
\n

In the post_fork hook, we call request_model to get a model from the model server and store the model in the configuration of the Flask application. The method request_model is defined in my example in the file client.py and defined as follows:

\n
import logging\nimport os\n\nfrom torch.multiprocessing.reductions import ForkingPickler\nimport zmq\n\n\ndef request_model(zmq_url: str):\n    logging.info("Connecting")\n    context = zmq.Context()\n    with context.socket(zmq.REQ) as socket:\n        socket.connect(zmq_url)\n        logging.info("Sending request")\n        socket.send(ForkingPickler.dumps(os.getpid()))\n        logging.info("Waiting for a response")\n        model = ForkingPickler.loads(socket.recv())\n    logging.info("Got response from object server")\n    return model\n
\n

We make use of ZeroMQ for inter-process communication here because it allows us to reference servers by name/address and to outsource the server code into its own application. multiprocessing.Queue and multiprocessing.Process apparently don't work well with Gunicorn. multiprocessing.Queue uses the ForkingPickler internally to serialize the objects, and the module torch.multiprocessing alters it in a way that Torch data structures can be serialized appropriately and reliably. So, we use this class to serialize our model to send it to the worker processes.

\n

The model is loaded and served in an application that is completely separate from Gunicorn and defined in server.py:

\n
from argparse import ArgumentParser\nimport logging\n\nimport torch\nfrom torch.multiprocessing.reductions import ForkingPickler\nimport zmq\n\n\ndef load_model():\n    model = torch.nn.Linear(10000, 50000)\n    model.cuda()\n    model.share_memory()\n\n    counter = torch.zeros(1).cuda()\n    counter.share_memory_()\n    return model, counter\n\n\ndef share_object(obj, url):\n    context = zmq.Context()\n    socket = context.socket(zmq.REP)\n    socket.bind(url)\n    while True:\n        logging.info("Waiting for requests on %s", url)\n        message = socket.recv()\n        logging.info("Got a message from %d", ForkingPickler.loads(message))\n        socket.send(ForkingPickler.dumps(obj))\n\n\nif __name__ == '__main__':\n    parser = ArgumentParser(description="Serve model")\n    parser.add_argument("--listen-address", default="tcp://127.0.0.1:5555")\n    args = parser.parse_args()\n\n    logging.basicConfig(level=logging.INFO)\n    logging.info("Loading model")\n    model = load_model()\n    share_object(model, args.listen_address)\n
\n

For this test, we use a model of about 2GB in size to see an effect on the GPU memory allocation in nvidia-smi and a small tensor to verify that the data is actually shared among the processes.

\n

Our sample flask application runs the model with a random input, counts the number of requests and returns both results:

\n
from flask import Flask\nimport torch\n\napp = Flask(__name__)\n\n\n@app.route("/", methods=["POST"])\ndef infer():\n    model: torch.nn.Linear = app.config['MODEL']\n    counter: torch.Tensor = app.config['COUNTER']\n    counter[0] += 1  # not thread-safe\n    input_features = torch.rand(model.in_features).cuda()\n    return {\n        "result": model(input_features).sum().item(),\n        "counter": counter.item()\n    }\n
\n

Test

\n

The example can be run as follows:

\n
$ python server.py &\nINFO:root:Waiting for requests on tcp://127.0.0.1:5555 \n$ gunicorn -c config.py app:app\n[2023-02-01 16:45:34 +0800] [24113] [INFO] Starting gunicorn 20.1.0\n[2023-02-01 16:45:34 +0800] [24113] [INFO] Listening at: http://127.0.0.1:8080 (24113)\n[2023-02-01 16:45:34 +0800] [24113] [INFO] Using worker: sync\n[2023-02-01 16:45:34 +0800] [24186] [INFO] Booting worker with pid: 24186\nINFO:root:Connecting\nINFO:root:Sending request\nINFO:root:Waiting for a response\nINFO:root:Got response from object server\n
\n

Using nvidia-smi, we can observe that now, two processes are using the GPU, and one of them allocates 2GB more VRAM than the other. Querying the flask application also works as expected:

\n
$ curl -X POST localhost:8080\n{"counter":1.0,"result":-23.956459045410156} \n$ curl -X POST localhost:8080\n{"counter":2.0,"result":-8.161510467529297}\n$ curl -X POST localhost:8080\n{"counter":3.0,"result":-37.823692321777344}\n
\n

Let's introduce some chaos and terminate our only Gunicorn worker:

\n
$ kill 24186\n[2023-02-01 18:02:09 +0800] [24186] [INFO] Worker exiting (pid: 24186)\n[2023-02-01 18:02:09 +0800] [4196] [INFO] Booting worker with pid: 4196\nINFO:root:Connecting\nINFO:root:Sending request\nINFO:root:Waiting for a response\nINFO:root:Got response from object server\n
\n

It's restarting properly and ready to answer our requests.

\n

Benefit

\n

Initially, the amount of required VRAM for our service was (SizeOf(Model) + SizeOf(CUDA context)) * Num(Workers). By sharing the weights of the model, we can reduce this by SizeOf(Model) * (Num(Workers) - 1) to SizeOf(Model) + SizeOf(CUDA context) * Num(Workers).
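Plugging in hypothetical numbers (a 2 GB model, a 0.5 GB CUDA context, 4 workers) makes the saving concrete:

```python
model_gb, ctx_gb, workers = 2.0, 0.5, 4  # hypothetical sizes

naive = (model_gb + ctx_gb) * workers    # 10.0 GB without sharing
shared = model_gb + ctx_gb * workers     # 4.0 GB with a shared model
saving = model_gb * (workers - 1)        # 6.0 GB saved

assert naive - saving == shared
print(naive, shared, saving)  # 10.0 4.0 6.0
```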

\n

Caveats

\n

The reliability of this approach relies on the single model server process. If that process terminates, not only will newly started workers get stuck, but the models in the existing workers will become unavailable and all workers crash at once. The shared tensors/models are only available as long as the server process is running. Even if the model server and Gunicorn workers are restarted, a short outage is certainly unavoidable. In a production environment, you thus should make sure this server process is kept alive.

\n

Additionally, sharing data among different processes can have side effects. When sharing changeable data, proper locks must be used to avoid race conditions.

\n", + "system": "" + }, + { + "instruction": "difference between Dataset and TensorDataset in pyTorch", + "input": "", + "output": "

The Dataset class is an abstract class that is used to define new types of (custom) datasets. The TensorDataset, instead, is a ready-to-use class that represents your data as a list of tensors.

\n

You can define your custom dataset in the following way:

\n
class CustomDataset(torch.utils.data.Dataset):\n\n  def __init__(self, *args, **kwargs):\n    super().__init__(*args, **kwargs)\n    # Your code\n\n    self.instances = your_data\n\n  def __getitem__(self, idx):\n    return self.instances[idx] # In case you stored your data on a list called instances\n\n  def __len__(self):\n    return len(self.instances)\n
\n

If you just want to create a dataset that contains tensors for input features and labels, then use the TensorDataset directly:

\n
dataset = TensorDataset(input_features, labels)\n
\n

Note that input_features and labels must match on the length of the first dimension.
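A short sketch of that usage:

```python
import torch
from torch.utils.data import TensorDataset

input_features = torch.randn(100, 8)   # 100 samples, 8 features
labels = torch.randint(0, 2, (100,))   # 100 matching labels

dataset = TensorDataset(input_features, labels)
x0, y0 = dataset[0]                    # indexing returns a tuple of tensors
print(len(dataset), x0.shape)          # 100 torch.Size([8])
```

If the first dimensions do not match, TensorDataset refuses to construct at all.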

\n", + "system": "" + }, + { + "instruction": "PyTorch: RuntimeError: Input, output and indices must be on the current device", + "input": "", + "output": "

You should put your model on the device, which is probably cuda:

\n
device = "cuda:0"\nmodel = model.to(device)\n \n
\n

Then make sure the inputs of the model(input) are on the same device as well:

\n
input = input.to(device)\n
\n

It should work!

\n", + "system": "" + }, + { + "instruction": "What is tape-based autograd in Pytorch?", + "input": "", + "output": "

There are different types of automatic differentiation e.g. forward-mode, reverse-mode, hybrids; (more explanation). The tape-based autograd in Pytorch simply refers to the uses of reverse-mode automatic differentiation, source. The reverse-mode auto diff is simply a technique used to compute gradients efficiently and it happens to be used by backpropagation, source.

\n
\n

Now, in PyTorch, Autograd is the core torch package for automatic differentiation. It uses a tape-based system for automatic differentiation. In the forward phase, the autograd tape will remember all the operations it executed, and in the backward phase, it will replay the operations.

\n

Same in TensorFlow, to differentiate automatically, It also needs to remember what operations happen in what order during the forward pass. Then, during the backward pass, TensorFlow traverses this list of operations in reverse order to compute gradients. Now, TensorFlow provides the tf.GradientTape API for automatic differentiation; that is, computing the gradient of computation with respect to some inputs, usually tf.Variables. TensorFlow records relevant operations executed inside the context of a tf.GradientTape onto a tape. TensorFlow then uses that tape to compute the gradients of a recorded computation using reverse mode differentiation.

\n

So, from a high-level viewpoint, both are doing the same thing. However, in a custom training loop, the forward pass and the loss calculation are more explicit in TensorFlow, as they happen inside the tf.GradientTape API scope, whereas in PyTorch these operations are recorded implicitly, but gradient tracking must be disabled temporarily while updating the training parameters (weights and biases). For that, PyTorch uses the torch.no_grad API explicitly. In other words, TensorFlow's tf.GradientTape() is roughly analogous to PyTorch's loss.backward(). Below is a simplistic code form of the above statements.

\n
# TensorFlow \n[w, b] = tf_model.trainable_variables\nfor epoch in range(epochs):\n  with tf.GradientTape() as tape:\n    # forward passing and loss calculations \n    # within explicit tape scope \n    predictions = tf_model(x)\n    loss = squared_error(predictions, y)\n\n  # compute gradients (grad)\n  w_grad, b_grad = tape.gradient(loss, tf_model.trainable_variables)\n\n  # update training variables \n  w.assign(w - w_grad * learning_rate)\n  b.assign(b - b_grad * learning_rate)\n\n\n# PyTorch \n[w, b] = torch_model.parameters()\nfor epoch in range(epochs):\n  # forward pass and loss calculation \n  # implicit tape-based AD \n  y_pred = torch_model(inputs)\n  loss = squared_error(y_pred, labels)\n\n  # compute gradients (grad)\n  loss.backward()\n  \n  # update training variables / parameters  \n  with torch.no_grad():\n    w -= w.grad * learning_rate\n    b -= b.grad * learning_rate\n    w.grad.zero_()\n    b.grad.zero_()\n
\n

FYI, in the above, the trainable variables (w, b) are manually updated in both frameworks but we generally use an optimizer (e.g. adam) to do the job.

\n
# TensorFlow \n# ....\n# update training variables \noptimizer.apply_gradients(zip([w_grad, b_grad], model.trainable_weights))\n\n# PyTorch\n# ....\n# update training variables / parameters\noptimizer.step()\noptimizer.zero_grad()\n
\n", + "system": "" + }, + { + "instruction": "Using GPU inside docker container - CUDA Version: N/A and torch.cuda.is_available returns False", + "input": "", + "output": "

For anybody arriving here looking for how to do it with Docker Compose, add this to your service:

\n
deploy:\n  resources:\n    reservations:\n      devices:\n      - driver: nvidia\n        capabilities:\n          - gpu\n          - utility # nvidia-smi\n          - compute # CUDA\n          - video   # NVDEC/NVENC/NVCUVID. For instance to use a hardware accelerated ffmpeg. Skip it if you don't need it\n
\n

Note that, if the environment variable NVIDIA_DRIVER_CAPABILITIES is empty or unset, the container will use the default driver capabilities, which are utility and compute. If it's set to ALL, the container will use all the driver capabilities, but docker compose will still require you to set the capabilities in the docker-compose.yml, such as:

\n
deploy:\n  resources:\n    reservations:\n      devices:\n      - driver: nvidia\n        capabilities: # always required, whatever the value of NVIDIA_DRIVER_CAPABILITIES\n          - gpu\n
\n

You also need to use an nvidia/cuda image.

\n

Doc: https://docs.docker.com/compose/gpu-support

\n

You can find a list of the driver capabilities here: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/docker-specialized.html#driver-capabilities

\n", + "system": "" + }, + { + "instruction": "Using pytorch Cuda on MacBook Pro", + "input": "", + "output": "

PyTorch now supports training using Metal.

\n

Announcement: https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/

\n

To get started, install the latest nightly build of PyTorch: https://pytorch.org/get-started/locally/

\n
\n

Answer pre May 2022

\n

Unfortunately, no GPU acceleration is available when using PyTorch on macOS. CUDA has not been available on macOS for a while, and it only runs on NVIDIA GPUs. AMD's equivalent library, ROCm, requires Linux.

\n

If you are working with macOS 12.0 or later and would be willing to use TensorFlow instead, you can use the Mac optimized build of TensorFlow, which supports GPU training using Apple's own GPU acceleration library Metal.

\n

Currently, you need Python 3.8 (<=3.7 and >=3.9 don't work) to run it. To install, run:

\n
pip3 install tensorflow-macos\npip3 install tensorflow-metal\n
\n

You may need to uninstall existing tensorflow distributions first or work in a virtual environment.

\n

Then you can just

\n
import tensorflow as tf\n\ntf.test.is_gpu_available()  # should return True\n
\n", + "system": "" + }, + { + "instruction": "k-fold cross validation using DataLoaders in PyTorch", + "input": "", + "output": "

I just wrote a cross-validation function that works with DataLoader and Dataset.\nHere is my code; I hope it is helpful.

\n
# define a cross validation function\ndef crossvalid(model=None,criterion=None,optimizer=None,dataset=None,k_fold=5):\n    \n    train_score = pd.Series()\n    val_score = pd.Series()\n    \n    total_size = len(dataset)\n    fraction = 1/k_fold\n    seg = int(total_size * fraction)\n    # tr:train,val:valid; r:right,l:left;  eg: trrr: right index of right side train subset \n    # index: [trll,trlr],[vall,valr],[trrl,trrr]\n    for i in range(k_fold):\n        trll = 0\n        trlr = i * seg\n        vall = trlr\n        valr = i * seg + seg\n        trrl = valr\n        trrr = total_size\n        # msg\n#         print("train indices: [%d,%d),[%d,%d), test indices: [%d,%d)" \n#               % (trll,trlr,trrl,trrr,vall,valr))\n        \n        train_left_indices = list(range(trll,trlr))\n        train_right_indices = list(range(trrl,trrr))\n        \n        train_indices = train_left_indices + train_right_indices\n        val_indices = list(range(vall,valr))\n        \n        train_set = torch.utils.data.dataset.Subset(dataset,train_indices)\n        val_set = torch.utils.data.dataset.Subset(dataset,val_indices)\n        \n#         print(len(train_set),len(val_set))\n#         print()\n        \n        train_loader = torch.utils.data.DataLoader(train_set, batch_size=50,\n                                          shuffle=True, num_workers=4)\n        val_loader = torch.utils.data.DataLoader(val_set, batch_size=50,\n                                          shuffle=True, num_workers=4)\n        train_acc = train(res_model,criterion,optimizer,train_loader,epoch=1)\n        train_score.at[i] = train_acc\n        val_acc = valid(res_model,criterion,optimizer,val_loader)\n        val_score.at[i] = val_acc\n    \n    return train_score,val_score\n        \n\ntrain_score,val_score = crossvalid(res_model,criterion,optimizer,dataset=tiny_dataset)\n\n
\n
\n

In order to give an intuition of correctness for what we are doing, see the output below:

\n
train indices: [0,0),[3600,18000), test indices: [0,3600)\n14400 3600\n\ntrain indices: [0,3600),[7200,18000), test indices: [3600,7200)\n14400 3600\n\ntrain indices: [0,7200),[10800,18000), test indices: [7200,10800)\n14400 3600\n\ntrain indices: [0,10800),[14400,18000), test indices: [10800,14400)\n14400 3600\n\ntrain indices: [0,14400),[18000,18000), test indices: [14400,18000)\n14400 3600\n
\n", + "system": "" + }, + { + "instruction": "pytorch RuntimeError: Expected object of scalar type Double but got scalar type Float", + "input": "", + "output": "

In short: your data has type double but your model has type float; this is not allowed in PyTorch because only data whose dtype matches the model's can be fed into the model.

\n

In detail:\nThis issue is related to the default dtypes of PyTorch and NumPy. I will first explain why this error happens and then suggest some solutions (though I think you will not need them once you understand the principle).

\n\nPyTorch has two main float dtypes, and torch.float32 is its default:\n\n
    \n
  1. torch.float32(aka torch.float)
  2. \n
  3. torch.float64(aka torch.double)
  4. \n
\n\n

Then we need to talk about NumPy: its default floating point dtype is float64 (aka double), so data loaded with NumPy (for example via np.loadtxt) becomes torch.float64 when converted to a tensor.

\n\n

I think now the issue is pretty clear: you have a model whose parameters are torch.float32 (aka torch.float), but you try to run it on data of torch.float64 (aka torch.double). This is also what the error message tries to say: Expected object of scalar type Double but got scalar type Float for argument.

\n
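A minimal sketch reproducing the mismatch and the fix:

```python
import numpy as np
import torch
import torch.nn as nn

model = nn.Linear(3, 1)                 # parameters are torch.float32
x = torch.from_numpy(np.ones((2, 3)))   # NumPy default float64 -> torch.float64

try:
    model(x)                            # dtype mismatch
except RuntimeError as e:
    print("as expected:", e)            # message mentions the mismatched dtypes

y = model(x.float())                    # casting to float32 fixes it
print(y.dtype)                          # torch.float32
```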

Solutions:

\n
    \n
  1. You have already found one: convert your data to torch.float32 by calling tensor.float()
  2. \n
  3. You can also specify the dtype when loading the data: np.loadtxt(file_name,delimiter = ',',dtype="float32")
  4. \n
\n", + "system": "" + }, + { + "instruction": "How does torch.distributed.barrier() work", + "input": "", + "output": "

First you need to understand the ranks. To be brief: in a multiprocessing context we typically assume that rank 0 is the first process or base process. The other processes are then ranked differently, e.g. 1, 2, 3, making four processes in total.

\n

Some operations don't need to be done in parallel; sometimes you just need one process to do some preprocessing or caching so that the other processes can use that data.

\n

In your example, if the first if statement is entered by the non-base processes (rank 1, 2, 3), they will block (or "wait") because they run into the barrier. They wait there, because barrier() blocks until all processes have reached a barrier, but the base process has not reached a barrier yet.

\n

So at this point the non-base processes (1, 2, 3) are blocked, but the base process (0) continues. The base process will do some operations (preprocess and cache data, in this case) until it reaches the second if-statement. There, the base process will run into a barrier. At this point, all processes have stopped at a barrier, meaning that all current barriers can be lifted and all processes can continue. Because the base process prepared the data, the other processes can now use that data.

\n
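The pattern described above can be sketched with Python's standard multiprocessing.Barrier standing in for dist.barrier() (a simulation only; the "preparation" step is just writing a shared value here):

```python
import multiprocessing as mp

def worker(rank, barrier, prepared, results):
    if rank != 0:
        barrier.wait()           # non-base ranks block here ...
    if rank == 0:
        prepared.value = 1       # stand-in for preprocessing/caching
        barrier.wait()           # ... until the base rank also arrives
    # once the barrier lifts, every rank sees the prepared data
    results.put((rank, prepared.value))

if __name__ == "__main__":
    barrier = mp.Barrier(4)
    prepared = mp.Value("i", 0)
    results = mp.Queue()
    procs = [mp.Process(target=worker, args=(r, barrier, prepared, results)) for r in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(sorted(results.get() for _ in range(4)))
    # [(0, 1), (1, 1), (2, 1), (3, 1)] -> all ranks saw the prepared value
```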

Perhaps the most important thing to understand is:

\n\n", + "system": "" + }, + { + "instruction": "RuntimeError: module must have its parameters and buffers on device cuda:1 (device_ids[0]) but found one of them on device: cuda:2", + "input": "", + "output": "

DataParallel requires every input tensor be provided on the first device in its device_ids list.

\n\n

It basically uses that device as a staging area before scattering to the other GPUs and it's the device where final outputs are gathered before returning from forward. If you want device 2 to be the primary device then you just need to put it at the front of the list as follows

\n\n
model = nn.DataParallel(model, device_ids = [2, 0, 1, 3])\nmodel.to(f'cuda:{model.device_ids[0]}')\n
\n\n

After which all tensors provided to model should be on the first device as well.

\n\n
x = ... # input tensor\nx = x.to(f'cuda:{model.device_ids[0]}')\ny = model(x)\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow 2.0 dataset and dataloader", + "input": "", + "output": "

When using the tf.data API, you will usually also make use of the map function.

\n\n

In PyTorch, your __getitem__ call basically fetches an element from the data structure given in __init__ and transforms it if necessary.

\n\n

In TF2.0, you do the same by initializing a Dataset using one of the Dataset.from_... functions (see from_generator, from_tensor_slices, from_tensors); this is essentially the __init__ part of a PyTorch Dataset. Then, you can call map to do the element-wise manipulations you would have in __getitem__.

\n\n
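A small sketch of that correspondence (assuming TensorFlow 2.x is installed):

```python
import tensorflow as tf

# from_tensor_slices plays the role of a PyTorch Dataset's __init__
ds = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4])

# map plays the role of the element-wise work done in __getitem__
ds = ds.map(lambda x: x * 2)

# no indexing: you traverse the dataset instead
print(list(ds.as_numpy_iterator()))  # [2, 4, 6, 8]
```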

Tensorflow datasets are pretty much fancy iterators, so by design you don't access their elements using indices, but rather by traversing them.

\n\n

The guide on tf.data is very useful and provides a wide variety of examples.

\n", + "system": "" + }, + { + "instruction": "PyTorch get all layers of model", + "input": "", + "output": "

You can iterate over all modules of a model (including those inside each Sequential) with the modules() method. Here's a simple example:

\n
>>> model = nn.Sequential(nn.Linear(2, 2), \n                          nn.ReLU(),\n                          nn.Sequential(nn.Linear(2, 1),\n                          nn.Sigmoid()))\n\n>>> l = [module for module in model.modules() if not isinstance(module, nn.Sequential)]\n\n>>> l\n\n[Linear(in_features=2, out_features=2, bias=True),\n ReLU(),\n Linear(in_features=2, out_features=1, bias=True),\n Sigmoid()]\n
\n", + "system": "" + }, + { + "instruction": "Impact of using data shuffling in Pytorch dataloader", + "input": "", + "output": "

Yes, it totally can affect the result! Shuffling the order of the data we use to fit the classifier is important because it ensures that the batches between epochs do not look alike.

\n

Checking the DataLoader documentation, it says:\n\"shuffle (bool, optional) \u2013 set to True to have the data reshuffled at every epoch\"

\n

In any case, it will make the model more robust and avoid over/underfitting.

\n

In your case, this heavy increase in accuracy probably comes from how the dataset is "organised" (I can only guess without seeing it). For example, if each category goes to a different batch, then in every epoch a batch contains a single category, which leads to very bad accuracy when you are testing.
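To see the effect, compare the batches a DataLoader produces with and without shuffling (a small sketch):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

ds = TensorDataset(torch.arange(8))

ordered = [batch[0].tolist() for batch in DataLoader(ds, batch_size=4, shuffle=False)]
print(ordered)  # [[0, 1, 2, 3], [4, 5, 6, 7]] -- identical every epoch

g = torch.Generator().manual_seed(0)
shuffled = [batch[0].tolist() for batch in DataLoader(ds, batch_size=4, shuffle=True, generator=g)]
print(shuffled)  # same elements overall, but batch composition changes between epochs
```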

\n", + "system": "" + }, + { + "instruction": "Implementing dropout from scratch", + "input": "", + "output": "
\n

It seems I've implemented the dropout function incorrectly?

\n
np.random.binomial([np.ones((len(input),np.array(list(input.shape))))],1-dropout_percent)[0] * (1.0/(1-self.p))\n
\n
\n

In fact, the above implementation is known as Inverted Dropout. Inverted Dropout is how Dropout is implemented in practice in the various deep learning frameworks.

\n

What is inverted dropout?

\n

Before jump into the inverted dropout, it can be helpful to see how Dropout works for a single neuron:

\n

\"\"

\n

Since during train phase a neuron is kept on with probability q (=1-p), during the testing phase we have to emulate the behavior of the ensemble of networks used in the training phase. To this end, the authors suggest scaling the activation function by a factor of q during the test phase in order to use the expected output produced in the training phase as the single output required in the test phase (Section 10, Multiplicative Gaussian Noise). Thus:

\n

\"\"

\n

Inverted dropout is a bit different. This approach consists of scaling the activations during the training phase, leaving the test phase untouched. The scale factor is the inverse of the keep probability: 1/(1-p) = 1/q, thus:

\n

\"enter

\n

Inverted dropout lets you define the model once and just change a parameter (the keep/drop probability) to run train and test on the same model. Direct dropout, instead, forces you to modify the network during the test phase, because if you don\u2019t multiply the output by q, the neuron will produce values that are higher than the ones expected by the successive neurons (thus the following neurons can saturate or explode): that\u2019s why inverted dropout is the more common implementation.

\n

References:

\n\n
\n

How implement inverted dropout Pytorch?

\n
class MyDropout(nn.Module):\n    def __init__(self, p: float = 0.5):\n        super(MyDropout, self).__init__()\n        if p < 0 or p > 1:\n            raise ValueError("dropout probability has to be between 0 and 1, " "but got {}".format(p))\n        self.p = p\n\n    def forward(self, X):\n        if self.training:\n            binomial = torch.distributions.binomial.Binomial(probs=1-self.p)\n            return X * binomial.sample(X.size()) * (1.0/(1-self.p))\n        return X\n
\n

How to implement in Numpy?

\n
import numpy as np\n\npKeep = 0.8\nweights = np.ones([1, 5])\nbinary_value = np.random.rand(weights.shape[0], weights.shape[1]) < pKeep\nres = np.multiply(weights, binary_value)\nres /= pKeep  # this line is called inverted dropout technique\nprint(res)\n
\n

How to implement in Tensorflow?

\n
import tensorflow as tf\ntf.enable_eager_execution()\n\nweights = tf.ones(shape=[1, 5])\nkeep_prob = 0.8\nrandom_tensor = keep_prob\nrandom_tensor += tf.random_uniform(weights.shape)\n# 0. if [keep_prob, 1.0) and 1. if [1.0, 1.0 + keep_prob)\nbinary_tensor = tf.floor(random_tensor)\nret = tf.div(weights, keep_prob) * binary_tensor\nprint(ret)\n
\n", + "system": "" + }, + { + "instruction": "Multi label classification in pytorch", + "input": "", + "output": "

You are looking for torch.nn.BCELoss. Here's example code:

\n
import torch\n\nbatch_size = 2\nnum_classes = 11\n\nloss_fn = torch.nn.BCELoss()\n\noutputs_before_sigmoid = torch.randn(batch_size, num_classes)\nsigmoid_outputs = torch.sigmoid(outputs_before_sigmoid)\n# BCELoss expects float targets, so cast the random 0/1 integers\ntarget_classes = torch.randint(0, 2, (batch_size, num_classes)).float()  # randints in [0, 2).\n\nloss = loss_fn(sigmoid_outputs, target_classes)\n\n# alternatively, use BCE with logits, on outputs before sigmoid.\nloss_fn_2 = torch.nn.BCEWithLogitsLoss()\nloss2 = loss_fn_2(outputs_before_sigmoid, target_classes)\nassert torch.allclose(loss, loss2)\n
\n", + "system": "" + }, + { + "instruction": "Generating new images with PyTorch", + "input": "", + "output": "

The code from your example (https://github.com/davidsonmizael/gan) gave me the same noise as you show. The loss of the generator decreased way too quickly.

\n\n

 A few things were buggy; I'm not even sure anymore what exactly, but I guess it's easy to figure out the differences yourself. For a comparison, also have a look at this tutorial:\nGANs in 50 lines of PyTorch 

\n\n
.... same as your code\nprint(\"# Starting generator and descriminator...\")\nnetG = G()\nnetG.apply(weights_init)\n\nnetD = D()\nnetD.apply(weights_init)\n\nif torch.cuda.is_available():\n    netG.cuda()\n    netD.cuda()\n\n#training the DCGANs\ncriterion = nn.BCELoss()\noptimizerD = optim.Adam(netD.parameters(), lr = 0.0002, betas = (0.5, 0.999))\noptimizerG = optim.Adam(netG.parameters(), lr = 0.0002, betas = (0.5, 0.999))\n\nepochs = 25\n\ntimeElapsed = []\nfor epoch in range(epochs):\n    print(\"# Starting epoch [%d/%d]...\" % (epoch, epochs))\n    for i, data in enumerate(dataloader, 0):\n        start = time.time()\n        time.clock()  \n\n        #updates the weights of the discriminator nn\n        netD.zero_grad()\n\n        #trains the discriminator with a real image\n        real, _ = data\n\n        if torch.cuda.is_available():\n            inputs = Variable(real.cuda()).cuda()\n            target = Variable(torch.ones(inputs.size()[0]).cuda()).cuda()\n        else:\n            inputs = Variable(real)\n            target = Variable(torch.ones(inputs.size()[0]))\n\n        output = netD(inputs)\n        errD_real = criterion(output, target)\n        errD_real.backward() #retain_graph=True\n\n        #trains the discriminator with a fake image\n        if torch.cuda.is_available():\n            D_noise = Variable(torch.randn(inputs.size()[0], 100, 1, 1).cuda()).cuda()\n            target = Variable(torch.zeros(inputs.size()[0]).cuda()).cuda()\n        else:\n            D_noise = Variable(torch.randn(inputs.size()[0], 100, 1, 1))\n            target = Variable(torch.zeros(inputs.size()[0]))\n        D_fake = netG(D_noise).detach()\n        D_fake_ouput = netD(D_fake)\n        errD_fake = criterion(D_fake_ouput, target)\n        errD_fake.backward()\n\n        # NOT:backpropagating the total error\n        # errD = errD_real + errD_fake\n\n        optimizerD.step()\n\n    #for i, data in enumerate(dataloader, 0):\n\n        #updates the weights of 
the generator nn\n        netG.zero_grad()\n\n        if torch.cuda.is_available():\n            G_noise = Variable(torch.randn(inputs.size()[0], 100, 1, 1).cuda()).cuda()\n            target = Variable(torch.ones(inputs.size()[0]).cuda()).cuda()\n        else:\n            G_noise = Variable(torch.randn(inputs.size()[0], 100, 1, 1))\n            target = Variable(torch.ones(inputs.size()[0]))\n\n        fake = netG(G_noise)\n        G_output = netD(fake)\n        errG  = criterion(G_output, target)\n\n        #backpropagating the error\n        errG.backward()\n        optimizerG.step()\n\n\n        if i % 50 == 0:\n            #prints the losses and save the real images and the generated images\n            print(\"# Progress: \")\n            print(\"[%d/%d][%d/%d] Loss_D: %.4f Loss_G: %.4f\" % (epoch, epochs, i, len(dataloader), errD_real.data[0], errG.data[0]))\n\n            #calculates the remaining time by taking the avg seconds that every loop\n            #and multiplying by the loops that still need to run\n            timeElapsed.append(time.time() - start)\n            avg_time = (sum(timeElapsed) / float(len(timeElapsed)))\n            all_dtl = (epoch * len(dataloader)) + i\n            rem_dtl = (len(dataloader) - i) + ((epochs - epoch) * len(dataloader))\n            remaining =  (all_dtl - rem_dtl) * avg_time\n            print(\"# Estimated remaining time: %s\" % (time.strftime(\"%H:%M:%S\", time.gmtime(remaining))))\n\n        if i % 100 == 0:\n            vutils.save_image(real, \"%s/real_samples.png\" % \"./results\", normalize = True)\n            vutils.save_image(fake.data, \"%s/fake_samples_epoch_%03d.png\" % (\"./results\", epoch), normalize = True)\n\nprint (\"# Finished.\")\n
\n\n

Result after 25 epochs (batchsize 256) on CIFAR-10:\n\"enter

\n", + "system": "" + }, + { + "instruction": "Torch: How to shuffle a tensor by its rows?", + "input": "", + "output": "

 I also faced a similar issue. In the documentation, there is no shuffle function for tensors (there is one for dataset loaders). I found a workaround to the problem using torch.randperm. 

\n\n
>>> a=torch.rand(3,5)\n>>> print(a)\ntensor([[0.4896, 0.3708, 0.2183, 0.8157, 0.7861],\n        [0.0845, 0.7596, 0.5231, 0.4861, 0.9237],\n        [0.4496, 0.5980, 0.7473, 0.2005, 0.8990]])\n>>> # Row shuffling\n... \n>>> a=a[torch.randperm(a.size()[0])]\n>>> print(a)\ntensor([[0.4496, 0.5980, 0.7473, 0.2005, 0.8990],\n        [0.0845, 0.7596, 0.5231, 0.4861, 0.9237],\n        [0.4896, 0.3708, 0.2183, 0.8157, 0.7861]])\n>>> # column shuffling\n... \n>>> a=a[:,torch.randperm(a.size()[1])]\n>>> print(a)\ntensor([[0.2005, 0.7473, 0.5980, 0.8990, 0.4496],\n        [0.4861, 0.5231, 0.7596, 0.9237, 0.0845],\n        [0.8157, 0.2183, 0.3708, 0.7861, 0.4896]])\n
\n\n

I hope it answers the question!

\n", + "system": "" + }, + { + "instruction": "LSTM time sequence generation using PyTorch", + "input": "", + "output": "

 This is normal behaviour and happens because your network is too confident in the quality of the input and doesn't learn to rely on the past (on its internal state) enough, relying solely on the input. When you apply the network to its own output in the generation setting, the input to the network is not as reliable as it was in the training or validation case, where it got the true input. 

\n\n

I have two possible solutions for you:

\n\n\n", + "system": "" + }, + { + "instruction": "TRANSFORMERS: Asking to pad but the tokenizer does not have a padding token", + "input": "", + "output": "

 kkgarg's idea was right, but you also need to update your model's token embedding size.\nSo, the code will be: 

\n
tokenizer = AutoTokenizer.from_pretrained(pretrained_weights)\nmodel = TFAutoModel.from_pretrained(pretrained_weights)\nif tokenizer.pad_token is None:\n    tokenizer.add_special_tokens({'pad_token': '[PAD]'})\n    model.resize_token_embeddings(len(tokenizer))\n
\n

Check this related issue.

\n", + "system": "" + }, + { + "instruction": "How to make VScode launch.json for a Python module", + "input": "", + "output": "

Specify the module you want to run with "module": "torch.distributed.launch"

\n

You can ignore the -m flag. Put everything else under the args key.

\n

Note: Make sure to include --nproc_per_node and the name of file (main_swav.py) in the list of arguments

\n
{\n    "version": "0.2.0",\n    "configurations": [\n        {\n            "name": "Python: Current File",\n            "type": "debugpy",\n            "module": "torch.distributed.launch",\n            "request": "launch",\n            "console": "integratedTerminal",\n            "args": [\n                "--nproc_per_node", "1", \n                "main_swav.py",\n                "--data_path", "/dataset/imagenet/train",\n            ]\n        }\n    ]\n}\n
\n

Read more here: https://code.visualstudio.com/docs/python/debugging#_module

\n", + "system": "" + }, + { + "instruction": "RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces)", + "input": "", + "output": "

SOLUTION: Just replace the view() function with reshape() function as suggested in the error and it works.

\n

 This has to do with how the tensor is stored in memory: view() requires the underlying data to be contiguous, while reshape() will make a copy when it is not. 

\n", + "system": "" + }, + { + "instruction": "The size of tensor a (707) must match the size of tensor b (512) at non-singleton dimension 1", + "input": "", + "output": "

 This is because BERT uses word-piece tokenization. So, when some of the words are not in the vocabulary, it splits the words into word pieces. For example: if the word playing is not in the vocabulary, it can split down to play, ##ing. This increases the number of tokens in a given sentence after tokenization.\nYou can specify certain parameters to get fixed-length tokenization: 

\n

tokenized_sentence = tokenizer.encode(test_sentence, padding=True, truncation=True,max_length=50, add_special_tokens = True)

\n", + "system": "" + }, + { + "instruction": "version `GLIBC_2.28' not found", + "input": "", + "output": "
\n

So is it possible to install GLIBC_2.28 on my machine?

\n
\n\n

It is possible, but the chances of you making a mistake and rendering your system un-bootable are quite high. It is also very likely that doing so will break something else on your system (this is the reason distributions do not usually update the version of GLIBC from the one they originally shipped with).

\n\n

 A much better solution is to build PyTorch targeting your system (i.e. using your \"normal\" toolchain). 

\n\n

P.S. GLIBCXX has nothing to do with your problem, and just adds noise to your question.

\n", + "system": "" + }, + { + "instruction": "PyTorch Model Training: RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR", + "input": "", + "output": "

 The error RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR is notoriously difficult to debug, but surprisingly often it's an out of memory problem. Usually, you would get the out of memory error, but depending on where it occurs, PyTorch cannot intercept the error and therefore cannot provide a meaningful error message. 

\n\n

 A memory issue seems likely in your case, because you are using a while loop until the agent is done, which might run long enough that you run out of memory; it's just a matter of time. That can also occur rather late, once the model's parameters in combination with a certain input make the loop unable to finish in time. 

\n\n

You can avoid that scenario by limiting the number of allowed actions instead of hoping that the actor will be done in a reasonable time.

\n\n

What you also need to be careful about, is that you don't occupy unnecessary memory. A common mistake is to keep computing gradients of the past states in future iterations. The state from the last iteration should be considered constant, since the current action should not affect past actions, therefore no gradients are required. This is usually achieved by detaching the state from the computational graph for the next iteration, e.g. state = state_.detach(). Maybe you are already doing that, but without the code it's impossible to tell.

\n\n

Similarly, if you keep a history of the states, you should detach them and even more importantly put them on the CPU, i.e. history.append(state.detach().cpu()).

\n", + "system": "" + }, + { + "instruction": "what does dim=-1 or -2 mean in torch.sum()?", + "input": "", + "output": "

A tensor has multiple dimensions, ordered as in the following figure.\nThere is a forward and backward indexing. Forward indexing uses positive integers, backward indexing uses negative integers.

\n

Example:

\n

-1 will be the last one, in our case it will be dim=2

\n

-2 will be dim=1

\n

-3 will be dim=0

\n
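 <p>As a quick check (a minimal sketch; the 3-dimensional shape is an arbitrary example), the negative and positive dims address the same axes:</p> \n <pre><code>import torch\n\nx = torch.arange(24).reshape(2, 3, 4)\n\n# for a 3-D tensor: dim=-1 is dim=2, dim=-2 is dim=1, dim=-3 is dim=0\nassert torch.equal(x.sum(dim=-1), x.sum(dim=2))\nassert torch.equal(x.sum(dim=-2), x.sum(dim=1))\nassert torch.equal(x.sum(dim=-3), x.sum(dim=0))\n</code></pre> \n   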

\"enter

\n", + "system": "" + }, + { + "instruction": "How to get entire dataset from dataloader in PyTorch", + "input": "", + "output": "

You can set batch_size = len(dataset). Beware, this might require a lot of memory depending upon your dataset.

\n", + "system": "" + }, + { + "instruction": "How to install nvidia apex on Google Colab", + "input": "", + "output": "

(wanted to just add a comment but I don't have enough reputation...)

\n\n

 It works for me, but the cd is actually not required. Also, I needed the two global options as suggested here: https://github.com/NVIDIA/apex/issues/86 

\n\n
%%writefile setup.sh\n\ngit clone https://github.com/NVIDIA/apex\npip install -v --no-cache-dir --global-option=\"--cpp_ext\" --global-option=\"--cuda_ext\" ./apex\n
\n\n

then

\n\n
!sh setup.sh\n
\n", + "system": "" + }, + { + "instruction": "img should be PIL Image. Got <class 'torch.Tensor'>", + "input": "", + "output": "

transforms.RandomHorizontalFlip() works on PIL.Images, not torch.Tensor. In your code above, you are applying transforms.ToTensor() prior to transforms.RandomHorizontalFlip(), which results in tensor.

\n\n

But, as per the official pytorch documentation here,

\n\n
\n

transforms.RandomHorizontalFlip() horizontally flip the given PIL\n Image randomly with a given probability.

\n
\n\n

So, just change the order of your transformation in above code, like below:

\n\n
train_transforms = transforms.Compose([transforms.Resize(255), \n                                       transforms.CenterCrop(224),  \n                                       transforms.RandomHorizontalFlip(),\n                                       transforms.ToTensor(), \n                                       transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) \n
\n", + "system": "" + }, + { + "instruction": "Using Pytorch's Multiprocessing along with Distributed", + "input": "", + "output": "

 Sometimes our questions become too restrictive due to premature optimization, like the choice of the MPI backend in this case... it may actually be impossible, given that the popular distributed training framework Ray, which supports the other two backends, NCCL and Gloo, does not support MPI; see its code: 

\n

RuntimeError for Backend.MPI

\n

An example of using Ray for distributed training of PyTorch models with backends other than MPI (source):

\n
import pytorch_lightning as pl\nfrom ray_lightning import RayPlugin\n\n# Create your PyTorch Lightning model here.\nptl_model = MNISTClassifier(...)\nplugin = RayPlugin(num_workers=4, num_cpus_per_worker=1, use_gpu=True)\n\n# If using GPUs, set the ``gpus`` arg to a value > 0.\n# The actual number of GPUs is determined by ``num_workers``.\ntrainer = pl.Trainer(..., gpus=1, plugins=[plugin])\ntrainer.fit(ptl_model)\n
\n", + "system": "" + }, + { + "instruction": "Torch.cuda.is_available() keeps switching to False", + "input": "", + "output": "

 The reason torch.cuda.is_available() returns False is an incompatibility between the versions of pytorch and cudatoolkit. 

\n

 As of Jun-2022, the current version of pytorch is compatible with cudatoolkit=11.3, whereas the current cuda toolkit version is 11.7. Source 

\n

Solution:

\n
    \n
 1. Uninstall PyTorch for a fresh installation. You cannot install an old version on top of a new version without force installation (using pip install --upgrade --force-reinstall <package_name>). 

    \n
  2. \n
  3. Run conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch to install pytorch.

    \n
  4. \n
  5. Install CUDA 11.3 version from https://developer.nvidia.com/cuda-11.3.0-download-archive.

    \n
  6. \n
\n

You are good to go.\n\"torch.cuda.is_available()

\n", + "system": "" + }, + { + "instruction": "pytorch grad is None after .backward()", + "input": "", + "output": "

This is the expected result.

\n\n

 .backward accumulates gradients only in the leaf nodes. out is not a leaf node, hence its grad is None. 

\n\n

autograd.backward also does the same thing

\n\n

 autograd.grad can be used to find the gradient of any tensor w.r.t. any tensor. So if you do autograd.grad(out, out) you get (tensor(1.),) as output, which is as expected. 

\n\n
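 <p>A minimal sketch illustrating the difference between leaf and non-leaf tensors:</p> \n <pre><code>import torch\n\nx = torch.tensor([1.0, 2.0], requires_grad=True) # leaf\nout = (x * 2).sum() # non-leaf\nout.backward()\n\nprint(x.grad) # tensor([2., 2.])\nprint(out.grad) # None (with a warning): out is not a leaf\n</code></pre> \n   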

Ref:

\n\n\n", + "system": "" + }, + { + "instruction": "How to multiply a tensor row-wise by a vector in PyTorch?", + "input": "", + "output": "

You need to add a corresponding singleton dimension:

\n
m * s[:, None]\n
\n

 s[:, None] has a size of (12, 1). When multiplying a (12, 10) tensor by a (12, 1) tensor, pytorch knows to broadcast s along the second singleton dimension and perform the &quot;element-wise&quot; product correctly. 

\n", + "system": "" + }, + { + "instruction": "Pytorch: Image label", + "input": "", + "output": "

The class ImageFolder has an attribute class_to_idx which is a dictionary mapping the name of the class to the index (label). So, you can access the classes with data.classes and for each class get the label with data.class_to_idx.

\n\n
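 <p>A minimal sketch that builds a tiny dummy dataset on disk (the class names are illustrative):</p> \n <pre><code>import os\nimport tempfile\nfrom PIL import Image\nfrom torchvision import datasets\n\n# build a tiny dummy dataset: one folder per class\nroot = tempfile.mkdtemp()\nfor cls in ('cat', 'dog'):\n os.makedirs(os.path.join(root, cls))\n Image.new('RGB', (8, 8)).save(os.path.join(root, cls, 'img0.png'))\n\ndata = datasets.ImageFolder(root)\nprint(data.classes) # ['cat', 'dog']\nprint(data.class_to_idx) # {'cat': 0, 'dog': 1}\n</code></pre> \n   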

For reference: https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py

\n", + "system": "" + }, + { + "instruction": "How to include batch size in pytorch basic example?", + "input": "", + "output": "

To include batch size in PyTorch basic examples, the easiest and cleanest way is to use PyTorch torch.utils.data.DataLoader and torch.utils.data.TensorDataset.

\n
\n

Dataset stores the samples and their corresponding labels, and DataLoader wraps an iterable around the Dataset to enable easy access to the samples.

\n
\n

DataLoader will take care of creating batches for you.

\n

Building on your question, there is a complete code snippet, where we iterate over a dataset of 10000 examples for 2 epochs with a batch size of 64:

\n
import torch\nfrom torch.utils.data import DataLoader, TensorDataset\n\n\n# Create the dataset with N_SAMPLES samples\nN_SAMPLES, D_in, H, D_out = 10000, 1000, 100, 10\n\nx = torch.randn(N_SAMPLES, D_in)\ny = torch.randn(N_SAMPLES, D_out)\n\n# Define the batch size and the number of epochs\nBATCH_SIZE = 64\nN_EPOCHS = 2\n\n# Use torch.utils.data to create a DataLoader \n# that will take care of creating batches \ndataset = TensorDataset(x, y)\ndataloader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True)\n\n# Define model, loss and optimizer\nmodel = torch.nn.Sequential(\n    torch.nn.Linear(D_in, H),\n    torch.nn.ReLU(),\n    torch.nn.Linear(H, D_out),\n)\n\nloss_fn = torch.nn.MSELoss(reduction='sum')\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-4)\n\n# Get the dataset size for printing (it is equal to N_SAMPLES)\ndataset_size = len(dataloader.dataset)\n\n# Loop over epochs\nfor epoch in range(N_EPOCHS):\n    print(f"Epoch {epoch + 1}\\n-------------------------------")\n\n    # Loop over batches in an epoch using DataLoader\n    for id_batch, (x_batch, y_batch) in enumerate(dataloader):\n\n        y_batch_pred = model(x_batch)\n\n        loss = loss_fn(y_batch_pred, y_batch)\n\n        optimizer.zero_grad()\n        loss.backward()\n        optimizer.step()\n\n        # Every 100 batches, print the loss for this batch\n        # as well as the number of examples processed so far \n        if id_batch % 100 == 0:\n            loss, current = loss.item(), (id_batch + 1)* len(x_batch)\n            print(f"loss: {loss:>7f}  [{current:>5d}/{dataset_size:>5d}]")\n
\n

The output should be something like:

\n
Epoch 1\n-------------------------------\nloss: 643.433716  [   64/10000]\nloss: 648.195435  [ 6464/10000]\nEpoch 2\n-------------------------------\nloss: 613.619873  [   64/10000]\nloss: 625.018555  [ 6464/10000]\n
\n", + "system": "" + }, + { + "instruction": "How does pytorch broadcasting work?", + "input": "", + "output": "

PyTorch broadcasting is based on numpy broadcasting semantics which can be understood by reading numpy broadcasting rules or PyTorch broadcasting guide. Expounding the concept with an example would be intuitive to understand it better. So, please see the example below:

\n
In [27]: t_rand\nOut[27]: tensor([ 0.23451,  0.34562,  0.45673])\n\nIn [28]: t_ones\nOut[28]: \ntensor([[ 1.],\n        [ 1.],\n        [ 1.],\n        [ 1.]])\n
\n

Now for torch.add(t_rand, t_ones), visualize it like:

\n
               # shape of (3,)\n               tensor([ 0.23451,      0.34562,       0.45673])\n      # (4, 1)          | | | |       | | | |        | | | |\n      tensor([[ 1.],____+ | | |   ____+ | | |    ____+ | | |\n              [ 1.],______+ | |   ______+ | |    ______+ | |\n              [ 1.],________+ |   ________+ |    ________+ |\n              [ 1.]])_________+   __________+    __________+\n
\n

which should give the output with tensor of shape (4,3) as:

\n
# shape of (4,3)\nIn [33]: torch.add(t_rand, t_ones)\nOut[33]: \ntensor([[ 1.23451,  1.34562,  1.45673],\n        [ 1.23451,  1.34562,  1.45673],\n        [ 1.23451,  1.34562,  1.45673],\n        [ 1.23451,  1.34562,  1.45673]])\n
\n
\n

Also, note that we get exactly the same result even if we pass the arguments in a reverse order as compared to the previous one:

\n
# shape of (4, 3)\nIn [34]: torch.add(t_ones, t_rand)\nOut[34]: \ntensor([[ 1.23451,  1.34562,  1.45673],\n        [ 1.23451,  1.34562,  1.45673],\n        [ 1.23451,  1.34562,  1.45673],\n        [ 1.23451,  1.34562,  1.45673]])\n
\n

Anyway, I prefer the former way of understanding for more straightforward intuitiveness.

\n
\n

For pictorial understanding, I culled out more examples which are enumerated below:

\n

Example-1:

\n

\"broadcasting-1\"

\n
\n

Example-2::

\n

\"theano

\n

T and F stand for True and False respectively and indicate along which dimensions we allow broadcasting (source: Theano).

\n
\n

Example-3:

\n

Here are some shapes where the array b is broadcasted appropriately to attempt to match the shape of the array a.

\n

\"broadcastable

\n

As shown above, the broadcasted b may still not match the shape of a, and so the operation a + b will fail whenever the final broadcasted shapes do not match.

\n", + "system": "" + }, + { + "instruction": "PyTorch: Testing with torchvision.datasets.ImageFolder and DataLoader", + "input": "", + "output": "

Looking at the data from Kaggle and your code, it seems that there are problems in your data loading, both train and test set. First of all, the data should be in a different folder per label for the default PyTorch ImageFolder to load it correctly. In your case, since all the training data is in the same folder, PyTorch is loading it as one class and hence learning seems to be working. You can correct this by using a folder structure like - train/dog, - train/cat, - test/dog, - test/cat and then passing the train and the test folder to the train and test ImageFolder respectively. The training code seems fine, just change the folder structure and you should be good. Take a look at the official documentation of ImageFolder which has a similar example.

\n", + "system": "" + }, + { + "instruction": "Creating one hot vector from indices given as a tensor", + "input": "", + "output": "

NEW ANSWER\nAs of PyTorch 1.1, there is a one_hot function in torch.nn.functional. Given any tensor of indices indices and a maximal index n, you can create a one_hot version as follows:

\n\n
n = 5\nindices = torch.randint(0,n, size=(4,7))\none_hot = torch.nn.functional.one_hot(indices, n) # size=(4,7,n)\n
\n\n

Very old Answer

\n\n

At the moment, slicing and indexing can be a bit of a pain in PyTorch from my experience. I assume you don't want to convert your tensors to numpy arrays. The most elegant way I can think of at the moment is to use sparse tensors and then convert to a dense tensor. That would work as follows:

\n\n
from torch.sparse import FloatTensor as STensor\n\nbatch_size = 4\nseq_length = 6\nfeat_dim = 16\n\nbatch_idx = torch.LongTensor([i for i in range(batch_size) for s in range(seq_length)])\nseq_idx = torch.LongTensor(list(range(seq_length))*batch_size)\nfeat_idx = torch.LongTensor([[5, 3, 2, 11, 15, 15], [1, 4, 6, 7, 3, 3],                            \n                             [2, 4, 7, 8, 9, 10], [11, 12, 15, 2, 5, 7]]).view(24,)\n\nmy_stack = torch.stack([batch_idx, seq_idx, feat_idx]) # indices must be nDim * nEntries\nmy_final_array = STensor(my_stack, torch.ones(batch_size * seq_length), \n                         torch.Size([batch_size, seq_length, feat_dim])).to_dense()    \n\nprint(my_final_array)\n
\n\n

 Note: PyTorch is currently undergoing some work that will add numpy-style broadcasting and other functionalities within the next two or three weeks. So it's possible there'll be better solutions available in the near future. 

\n\n

Hope this helps you a bit.

\n", + "system": "" + }, + { + "instruction": "Huggingface: How do I find the max length of a model?", + "input": "", + "output": "

Perhaps late, but if you haven't found a solution, I think you can use the tokenizer of that model.\nFor instance:

\n
>>> MODEL = "google/flan-t5-xl"\n>>> tokenizer = AutoTokenizer.from_pretrained(MODEL)\n>>> tokenizer.model_max_length\n512\n\n>>> MODEL = "facebook/bart-base"\n>>> tokenizer = AutoTokenizer.from_pretrained(MODEL)\n>>> tokenizer.model_max_length\n1024\n
\n", + "system": "" + }, + { + "instruction": "PyTorch `torch.no_grad` vs `torch.inference_mode`", + "input": "", + "output": "

Yes, torch.inference_mode is indeed preferable to torch.no_grad in all situations where inference mode does not throw a runtime error. Check here.

\n", + "system": "" + }, + { + "instruction": "How to parallelize a training loop ever samples of a batch when CPU is only available in pytorch?", + "input": "", + "output": "

 Torch will use multiple CPUs to parallelize operations, so your serial version may already be using multi-core vectorization. 

\n

Take this simple example

\n
import torch\nc = 0;\nfor i in range(10000):\n    A = torch.randn(1000, 1000, device='cpu');\n    B = torch.randn(1000, 1000, device='cpu');\n    c += torch.sum(A @ B)\n
\n

 No code is used to parallelize, yet with the default configuration it uses about 80% of 12 CPUs. 

\n

\"enter

\n

 You can use torch.set_num_threads to set intraop parallelism on the CPU. In particular, if you are running multiple processes and you want each process to use a single CPU, you may want to set the intraop parallelism to 1 in each process. 

\n

 However, parallelizing the operations has a cost. I am unable to go into the implementation details, but we can run a quick experiment that shows the overhead of using multiple threads. 

\n
import matplotlib.pyplot as plt\nimport numpy as np\nimport torch;\nimport time;\nA = torch.randn(1000, 1000, device='cpu');\nB = torch.randn(1000, 1000, device='cpu');\nfuncs = {\n    'sin': lambda a,b: torch.sin(A),\n    'tanh': lambda a,b: torch.tanh(A),\n    'log': lambda a,b: torch.log(A),\n    'matmul': lambda a,b: A @ B.T\n}\nt = np.zeros(20)\nfor k,f in funcs.items():\n    for i in range(1, len(t) + 1):\n        torch.set_num_threads(i)\n        c = 0;\n        t0 = time.time();\n        for _ in range(100):\n            f(A,B)\n        tf = time.time()\n        t[i-1] = (tf - t0)*i;\n    plt.plot(np.arange(1, len(t)+1), t, '-o', label=k)\nplt.xlabel('Number of threads')\nplt.legend()\nplt.ylabel('Core x time')\n
\n

 The operations tend to run faster with parallelism\n\"enter 

\n

But if we take the total CPU time, by multiplying by the number of threads, we see that the single thread version is more efficient.

\n

\"enter

\n

If you are able to parallelize your experiment at a higher level, by running independent processes, you should try that with a single core for each process, otherwise each process will try to use all the CPUs and all of them will run very slowly because your system is overloaded.

\n

Tweaking DDP example

\n

 I modified the hyperparameters of your example script intentionally in a way that weighs in favor of the multi-process setup. 

\n\n
"""\nBased on: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html\n\nNote: as opposed to the multiprocessing (torch.multiprocessing) package, processes can use\ndifferent communication backends and are not restricted to being executed on the same machine.\n"""\nimport torch\nfrom torch import nn, optim\nimport torch.distributed as dist\nimport torch.multiprocessing as mp\nfrom torch.nn.parallel import DistributedDataParallel as DDP\nimport argparse\nimport os\n\n# More than one epoch so that the initialization is less significant\n# than compared to the model processing time\nnum_epochs = 10\n# for the experiment select a number that has a lot of divisors\n# as I want to test with equal number of batches\nnum_batches = 16*9*5\n# Uses a larger batch so that more work is done in each process\n# between two gradient synchronizations\n# apparently the intraop optimization is not helping \n# (at least not too much) in the batch dimension\nbatch_size = 10000\n# Use smaller dimensions, so that the intraop parallelization becomes less \n# helpful\nDin, Dout = 3, 5\ndata_x = torch.randn(batch_size, Din)\ndata_y = torch.randn(batch_size, Dout)\ndata = [(i*data_x, i*data_y) for i in range(num_batches)]\n\nclass OneDeviceModel(nn.Module):\n    """\n    Toy example for a model ran in parallel but not distributed accross gpus\n    (only processes with their own gpu or hardware)\n    """\n    def __init__(self):\n        super().__init__()\n        # -- Use more layers\n        self.net = [nn.Linear(Din, Din) for _ in range(10)]\n        # -- Bob: use more complex activation  \n        self.tanh = nn.Tanh()\n        self.sigmoid = nn.Sigmoid()\n        self.relu = nn.ReLU()\n        self.net2 = nn.Linear(Din, Dout)\n\n    def forward(self, x):\n      # apply the 10 layers sequentially\n      for i in range(10):\n        x = self.net[i](x)\n        x = self.sigmoid(x)\n        x = self.tanh(x)\n        x = self.relu(x)\n      return self.net2(x)\n\ndef 
setup_process(rank, world_size, backend='gloo'):\n    """\n    Initialize the distributed environment (for each process).\n\n    gloo: is a collective communications library (https://github.com/facebookincubator/gloo). My understanding is that\n    it's a library/API for process to communicate/coordinate with each other/master. It's a backend library.\n    """\n    # set up the master's ip address so this child process can coordinate\n    # os.environ['MASTER_ADDR'] = '127.0.0.1'\n    os.environ['MASTER_ADDR'] = 'localhost'\n    os.environ['MASTER_PORT'] = '12355'\n\n    # - use NCCL if you are using gpus: https://pytorch.org/tutorials/intermediate/dist_tuto.html#communication-backends\n    if torch.cuda.is_available():\n        backend = 'nccl'\n    # Initializes the default distributed process group, and this will also initialize the distributed package.\n    dist.init_process_group(backend, rank=rank, world_size=world_size)\n\ndef cleanup():\n    """ Destroy a given process group, and deinitialize the distributed package """\n    dist.destroy_process_group()\n\ndef run_parallel_training_loop(rank, world_size):\n    """\n    Distributed function to be implemented later.\n\n    This is the function that is actually ran in each distributed process.\n\n    Note: as DDP broadcasts model states from rank 0 process to all other processes in the DDP constructor,\n    you don\u2019t need to worry about different DDP processes start from different model parameter initial values.\n    """\n    print()\n    print(f"Start running DDP with model parallel example on rank: {rank}.")\n    print(f'current process: {mp.current_process()}')\n    print(f'pid: {os.getpid()}')\n    setup_process(rank, world_size)\n    torch.set_num_threads(mp.cpu_count() // world_size)\n    # create model and move it to GPU with id rank\n    model = OneDeviceModel().to(rank) if torch.cuda.is_available() else OneDeviceModel().share_memory()\n    # ddp_model = DDP(model, device_ids=[rank])\n    
ddp_model = DDP(model)\n    for _ in range(num_epochs):\n      for batch_idx, batch in enumerate(data[rank::world_size]):\n          x, y = batch\n          loss_fn = nn.MSELoss()\n          optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)\n\n          optimizer.zero_grad()\n          outputs = ddp_model(x)\n          labels = y.to(rank) if torch.cuda.is_available() else y\n          # Gradient synchronization communications take place during the backward pass and overlap with the backward computation.\n          loss_fn(outputs, labels).backward()  # When the backward() returns, param.grad already contains the synchronized gradient tensor.\n          optimizer.step()  # TODO how does the optimizer know to do the gradient step only once?\n\n    print()\n    print(f"Start running DDP with model parallel example on rank: {rank}.")\n    print(f'current process: {mp.current_process()}')\n    print(f'pid: {os.getpid()}')\n    # Destroy a given process group, and deinitialize the distributed package\n    cleanup()\n\ndef main():\n    print()\n    print('running main()')\n    print(f'current process: {mp.current_process()}')\n    print(f'pid: {os.getpid()}')\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--world-size', default=1, type=int)\n    args = parser.parse_args()\n    assert num_batches % args.world_size == 0\n    mp.spawn(run_parallel_training_loop, args=(args.world_size,), nprocs=args.world_size)\n\nif __name__ == "__main__":\n    print('starting __main__')\n    main()\n    print('Done!\\a\\n')\n
\n
$ time python3 ddp.py --world-size 1 > /dev/null\n\nreal    0m59.092s\nuser    8m46.589s\nsys     0m7.320s\n\n$ time python3 ddp.py --world-size 1 > /dev/null\n\nreal    1m11.124s\nuser    10m54.209s\nsys     0m9.595s\n\n$ time python3 ddp.py --world-size 6 > /dev/null\n\nreal    0m18.348s\nuser    2m28.799s\nsys     0m18.068s\n$ time python3 ddp.py --world-size 12 > /dev/null\n\nreal    0m26.352s\nuser    4m3.074s\nsys     0m39.179s\n$ time python3 ddp.py --world-size 3 > /dev/null\n\nreal    0m23.047s\nuser    3m51.172s\nsys     0m11.483s\n$ time python3 ddp.py --world-size 4 > /dev/null\n\nreal    0m18.195s\nuser    2m55.241s\nsys     0m12.841s\n$ time python3 ddp.py --world-size 2 > /dev/null\n\nreal    0m26.955s\nuser    4m15.837s\nsys     0m7.127s\n
\n

If I remove the line

\n
torch.set_num_threads(mp.cpu_count() // world_size)\n
\n
$ time python3 ddp.py --world-size 4 > /dev/null\n\nreal    0m40.574s\nuser    6m39.176s\nsys     0m19.025s\n\n$ time python3 ddp.py --world-size 2 > /dev/null\n\nreal    0m28.066s\nuser    3m17.775s\nsys     0m8.410s\n\n$ time python3 ddp.py --world-size 1 > /dev/null\n\nreal    0m37.114s\nuser    2m19.743s\nsys     0m4.866s\n
\n

Using

\n
torch.set_num_threads(mp.cpu_count() // world_size // 2)\n
\n
$ time python3 ddp.py --world-size 6 > /dev/null\n\nreal    0m16.399s\nuser    1m38.915s\nsys     0m20.780s\n\n$ time python3 ddp.py --world-size 4 > /dev/null\n\nreal    0m15.649s\nuser    1m1.821s\nsys     0m13.589s\n\n$ time python3 ddp.py --world-size 3 > /dev/null\n\nreal    0m16.947s\nuser    1m29.696s\nsys     0m10.069s\n\n$ time python3 ddp.py --world-size 2 > /dev/null\n\nreal    0m21.851s\nuser    2m4.564s\nsys     0m7.486s\n
\n

My Opinion

\n

DDP on a single node does not seem particularly advantageous unless you have a model that does a lot of work that is not well handled by PyTorch's intra-op parallelism, have large batches, and preferably a model with fewer parameters and more operations, meaning fewer gradients to synchronize, e.g. a convolutional model on a very large input.

\n

Another scenario where DDP might be helpful is if your model uses too much Python code instead of vectorized operations.

\n", + "system": "" + }, + { + "instruction": "Correct way of normalizing and scaling the MNIST dataset", + "input": "", + "output": "

Euler_Salter

\n

I may have stumbled upon this a little too late, but hopefully I can help a little bit.

\n

Assuming that you are using torchvision.transforms, the following code can be used to normalize the MNIST dataset.

\n
        train_loader = torch.utils.data.DataLoader(\n        datasets.MNIST('./data', train=True,\n        transform=transforms.Compose([\n        transforms.ToTensor(),\n        transforms.Normalize((0.1307,), (0.3081,))\n        ])),\n        batch_size=64, shuffle=True)\n
\n

Usually, 'transforms.ToTensor()' is used to turn input data in the range [0, 255] into a tensor. This function automatically scales the input data to the range [0, 1]. (This is equivalent to dividing every pixel value by 255.)

\n

Therefore, it makes sense that the mean and std passed to 'transforms.Normalize(...)' are 0.1307 and 0.3081, respectively: the mean and standard deviation of the MNIST pixels after scaling to [0, 1]. (This normalizes the data to zero mean and unit standard deviation.)

\n
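To make the arithmetic concrete, here is a tiny sketch in plain Python (reusing the 0.1307 and 0.3081 constants from above) of what ToTensor followed by Normalize does to a single pixel value:

```python
# ToTensor: scale a raw pixel value from [0, 255] down to [0, 1]
def to_tensor_scale(pixel):
    return pixel / 255.0

# Normalize((0.1307,), (0.3081,)): subtract the dataset mean, divide by the std
def normalize(value, mean=0.1307, std=0.3081):
    return (value - mean) / std

raw_pixel = 255                       # a fully white MNIST pixel
scaled = to_tensor_scale(raw_pixel)   # 1.0
normalized = normalize(scaled)        # (1.0 - 0.1307) / 0.3081, approx. 2.82
```

A pixel exactly at the dataset mean (0.1307 after scaling) maps to 0, which is the whole point of the normalization.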

Please refer to the link below for better explanation.

\n

https://pytorch.org/vision/stable/transforms.html

\n", + "system": "" + }, + { + "instruction": "Is One-Hot Encoding required for using PyTorch's Cross Entropy Loss Function?", + "input": "", + "output": "

nn.CrossEntropyLoss expects integer labels. What it does internally is that it doesn't one-hot encode the class label at all; instead, it uses the label to index into the output vector when calculating the loss. This small but important detail makes computing the loss easier and is equivalent to one-hot encoding: since every value in a one-hot target is zero except at the target class, measuring the loss per output neuron reduces to picking out that single indexed entry. Therefore, there's no need to one-hot encode your data if you have the labels already provided.

\n

The documentation has some more insight on this: https://pytorch.org/docs/master/generated/torch.nn.CrossEntropyLoss.html. In the documentation you'll see targets which serves as part of the input parameters. These are your labels and they are described as:

\n

\"Targets\"

\n

This clearly shows how the input should be shaped and what is expected. If you in fact wanted to one-hot encode your data, you would need to use torch.nn.functional.one_hot. To best replicate what the cross entropy loss is doing under the hood, you'd also need nn.functional.log_softmax as the final output and you'd have to additionally write your own loss layer since none of the PyTorch layers use log softmax inputs and one-hot encoded targets. However, nn.CrossEntropyLoss combines both of these operations together and is preferred if your outputs are simply class labels so there is no need to do the conversion.

\n", + "system": "" + }, + { + "instruction": "Parallelization strategies for deep learning", + "input": "", + "output": "
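The indexing-versus-one-hot equivalence described above can be verified with a small NumPy sketch (a plain reimplementation of the math, not the PyTorch internals):

```python
import numpy as np

def log_softmax(logits):
    # numerically stable log-softmax
    shifted = logits - logits.max()
    return shifted - np.log(np.exp(shifted).sum())

logits = np.array([2.0, 1.0, 0.1])
target = 0  # integer class label, as nn.CrossEntropyLoss expects

# what indexing does: pick the log-probability of the target class
loss_indexed = -log_softmax(logits)[target]

# the equivalent one-hot formulation: dot product with a one-hot target
one_hot = np.eye(len(logits))[target]
loss_one_hot = -(one_hot * log_softmax(logits)).sum()
```

Both formulations yield the same loss; the indexed version simply skips building the mostly-zero target vector.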

Training

\n

In general, there are two strategies of parallelizing model training: data parallelism and model parallelism.

\n

1. Data parallelism

\n

This strategy splits training data into N partitions, each of which will be trained on different \u201cdevices\u201d (different CPU cores, GPUs, or even machines). In contrast to training without data parallelism which produces one gradient per minibatch, we now have N gradients for each minibatch step. The next question is how we should combine these N gradients.

\n

One way to do it is by averaging all the N gradients and then updating the model parameters once based on the average. This technique is called synchronous distributed SGD. By doing the average, we get a more accurate gradient, but at the cost of waiting for all the devices to finish computing their own local gradients.

\n

Another way is by not combining the gradients: each gradient will instead be used to update the model parameters independently. So, there will be N parameter updates for each minibatch step, in contrast to only one for the previous technique. This technique is called asynchronous distributed SGD. Because it doesn't have to wait for other devices to finish, the async approach takes less time to complete a minibatch step than the sync approach does. However, the async approach produces a noisier gradient, so it might need to complete more minibatch steps to catch up with the performance (in terms of loss) of the sync approach.

\n

There are many papers proposing some improvements and optimizations on either approach, but the main idea is generally the same as described above.

\n
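The synchronous combine step is just an average over the N worker gradients followed by a single update; a minimal NumPy sketch with made-up gradients:

```python
import numpy as np

params = np.array([1.0, -2.0])
lr = 0.1

# gradients computed independently by N = 3 workers on their data partitions
worker_grads = [np.array([0.3, 0.1]),
                np.array([0.5, -0.1]),
                np.array([0.1, 0.3])]

# synchronous SGD: average the N gradients, then update the parameters once
avg_grad = sum(worker_grads) / len(worker_grads)
params = params - lr * avg_grad
```

In the asynchronous variant, each of the three gradients would instead be applied in its own update as soon as it arrives.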

In the literature there's been some disagreement on which technique is better in practice. At the end most people now settle on the synchronous approach.

\n

Data Parallelism in PyTorch

\n

To do synchronous SGD, we can wrap our model with torch.nn.parallel.DistributedDataParallel:

\n
from torch.nn.parallel import DistributedDataParallel as DDP\n\n# `model` is the model we previously initialized\nmodel = ...\n\n# `rank` is a device number starting from 0\nmodel = model.to(rank)\nddp_model = DDP(model, device_ids=[rank])\n
\n

Then we can train it similarly. For more details, you can refer to the official tutorial.

\n

For doing asynchronous SGD in PyTorch, we need to implement it more manually since there is no wrapper similar to DistributedDataParallel for it.

\n

Data Parallelism in TensorFlow/Keras

\n

For synchronous SGD, we can use tf.distribute.MirroredStrategy to wrap the model initialization:

\n
import tensorflow as tf\n\nstrategy = tf.distribute.MirroredStrategy()\nwith strategy.scope():\n    model = Model(...)\n    model.compile(...)\n
\n

Then we can train it as usual. For more details, you can refer to the official guides on Keras website and TensorFlow website.

\n

For asynchronous SGD, we can use tf.distribute.experimental.ParameterServerStrategy similarly.

\n

2. Model Parallelism

\n

This strategy splits the model into N parts, each of which will be computed on different devices. A common way to split the model is based on layers: different sets of layers are placed on different devices. But we can also split it more intricately depending on the model architecture.

\n

Model Parallelism in TensorFlow and PyTorch

\n

To implement model parallelism in either TensorFlow or PyTorch, the idea is the same: to move some model parameters into a different device.

\n

In PyTorch we can use torch.nn.Module.to method to move a module into a different device. For example, suppose we want to create two linear layers each of which is placed on a different GPU:

\n
import torch.nn as nn\n\nlinear1 = nn.Linear(16, 8).to('cuda:0')\nlinear2 = nn.Linear(8, 4).to('cuda:1')\n
\n

In TensorFlow we can use tf.device to place an operation into a specific device. To implement the PyTorch example above in TensorFlow:

\n
import tensorflow as tf\nfrom tensorflow.keras import layers\n\nwith tf.device('/GPU:0'):\n    linear1 = layers.Dense(8, input_dim=16)\nwith tf.device('/GPU:1'):\n    linear2 = layers.Dense(4, input_dim=8)\n
\n

For more details you can refer to the official PyTorch tutorial; or if you use TensorFlow you can even use a more high-level library like mesh.

\n

3. Hybrid: Data and Model Parallelism

\n

Recall that data parallelism only splits the training data, whereas model parallelism only splits the model structures. If we have a model so large that even after using either parallelism strategy it still doesn't fit in the memory, we can always do both.

\n

In practice most people prefer data parallelism to model parallelism since the former is more decoupled (in fact, independent) from the model architecture than the latter. That is, by using data parallelism they can change the model architecture as they like, without worrying which part of the model should be parallelized.

\n

Model Inference / Serving

\n

Parallelizing model serving is easier than parallelizing model training since the model parameters are already fixed and each request can be processed independently. Similar to scaling a regular Python web service, we can scale model serving by spawning more processes (to work around Python's GIL) in a single machine, or even spawning more machine instances.

\n

When we use a GPU to serve the model, though, we need to do more work to scale it. Because of how concurrency is handled differently by a GPU compared to a CPU, in order to maximize the performance, we need to do inference request batching. The idea is when a request comes, instead of immediately processing it, we wait some timeout duration for other requests to come. When the timeout is up, even if the number of requests is only one, we batch them all to be processed on the GPU.

\n

In order to minimize the average request latency, we need to find the optimal timeout duration. To find it, we need to observe that there is a trade-off between minimizing the timeout duration and maximizing the batch size. If the timeout is too low, the batch size will be small, so the GPU will be underutilized. But if the timeout is too high, the requests that come early will wait too long before they get processed. So, the optimal timeout duration depends on the model complexity (hence, the inference duration) and the average number of requests received per second.

\n
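The timeout trade-off can be illustrated with a toy grouping function (hypothetical arrival times in milliseconds; a real scheduler is considerably more involved):

```python
def batch_requests(arrival_times_ms, timeout_ms):
    """Group requests: a batch opens at its first request's arrival and
    closes timeout_ms later; everything arriving in that window joins it."""
    batches = []
    current = []
    window_end = None
    for t in sorted(arrival_times_ms):
        if window_end is None or t > window_end:
            if current:
                batches.append(current)
            current = [t]
            window_end = t + timeout_ms
        else:
            current.append(t)
    if current:
        batches.append(current)
    return batches

arrivals = [0, 3, 5, 40, 41, 100]
# a short timeout gives many small batches; a longer one gives fewer, bigger batches
small = batch_requests(arrivals, timeout_ms=2)
large = batch_requests(arrivals, timeout_ms=10)
```

With a 2 ms timeout the six requests split into four batches; with 10 ms they form three, so the GPU is better utilized at the cost of extra waiting for the earliest request in each window.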

Implementing a scheduler to do request batching is not a trivial task, so instead of doing it manually, we'd better use TensorFlow Serving or PyTorch Serve which already supports it.

\n
\n

To learn more about parallel and distributed learning, you can read this review paper.

\n", + "system": "" + }, + { + "instruction": "net.zero_grad() vs optim.zero_grad() pytorch", + "input": "", + "output": "

net.zero_grad() sets the gradients of all its parameters (including parameters of submodules) to zero. If you call optim.zero_grad() that will do the same, but for all parameters that have been specified to be optimised. If you are using only net.parameters() in your optimiser, e.g. optim = Adam(net.parameters(), lr=1e-3), then both are equivalent, since they contain the exact same parameters.

\n\n

You could have other parameters that are being optimised by the same optimiser, which are not part of net, in which case you would either have to manually set their gradients to zero and therefore keep track of all the parameters, or you can simply call optim.zero_grad() to ensure that all parameters that are being optimised, had their gradients set to zero.

\n\n
\n

Moreover, what happens if I do both?

\n
\n\n

Nothing, the gradients would just be set to zero again, but since they were already zero, it makes absolutely no difference.

\n\n
\n

If I do none, then the gradients get accumulated, but what does that exactly mean? do they get added?

\n
\n\n

Yes, they are being added to the existing gradients. In the backward pass the gradients in respect to every parameter are calculated, and then the gradient is added to the parameters' gradient (param.grad). That allows you to have multiple backward passes, that affect the same parameters, which would not be possible if the gradients were overwritten instead of being added.

\n\n

For example, you could accumulate the gradients over multiple batches, if you need bigger batches for training stability but don't have enough memory to increase the batch size. This is trivial to achieve in PyTorch, which is essentially leaving off optim.zero_grad() and delaying optim.step() until you have gathered enough steps, as shown in HuggingFace - Training Neural Nets on Larger Batches: Practical Tips for 1-GPU, Multi-GPU & Distributed setups.

\n\n
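The accumulation pattern can be sketched abstractly (plain Python numbers standing in for param.grad; the comments mark where the real backward/step/zero_grad calls would go):

```python
# Simulate PyTorch's additive .grad buffer: backward() adds into it,
# optimizer.step() applies it, optimizer.zero_grad() clears it.
grad_buffer = 0.0
param = 1.0
lr = 0.1
accumulation_steps = 4

# hypothetical per-micro-batch gradients
micro_batch_grads = [0.2, 0.4, 0.1, 0.3, 0.5, 0.1, 0.2, 0.2]

for i, g in enumerate(micro_batch_grads, start=1):
    grad_buffer += g  # backward(): gradients accumulate, not overwrite
    if i % accumulation_steps == 0:
        # step() on the averaged "big batch" gradient
        param -= lr * (grad_buffer / accumulation_steps)
        grad_buffer = 0.0  # zero_grad(): reset for the next big batch
```

Every four micro-batches behave like one batch four times the size, without ever holding that batch in memory at once.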

That flexibility comes at the cost of having to manually set the gradients to zero. Frankly, one line is a very small cost to pay, even though many users won't make use of it and especially beginners might find it confusing.

\n", + "system": "" + }, + { + "instruction": "PyTorch RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn", + "input": "", + "output": "

Following worked for me:

\n
loss.requires_grad = True\nloss.backward()\n
\n", + "system": "" + }, + { + "instruction": "BertForSequenceClassification vs. BertForMultipleChoice for sentence multi-class classification", + "input": "", + "output": "

The answer to this lies in the (admittedly very brief) description of what the tasks are about:

\n\n
\n

[BertForMultipleChoice] [...], e.g. for RocStories/SWAG tasks.

\n
\n\n

When looking at the paper for SWAG, it seems that the task is actually learning to choose from varying options. This is in contrast to your \"classical\" classification task, in which the \"choices\" (i.e., classes) do not vary across your samples, which is exactly what BertForSequenceClassification is for.

\n\n

Both variants can in fact be used for an arbitrary number of classes (in the case of BertForSequenceClassification), respectively choices (for BertForMultipleChoice), by changing the labels parameter in the config. But since it seems like you are dealing with a case of \"classical classification\", I suggest using the BertForSequenceClassification model.

\n\n

Shortly addressing the missing Softmax in BertForSequenceClassification: since classification tasks can compute the loss across classes independent of the sample (unlike multiple choice, where your distribution is changing), this allows you to use Cross-Entropy Loss, which factors in Softmax in the backpropagation step for increased numerical stability.

\n", + "system": "" + }, + { + "instruction": "How to use autograd.gradcheck in PyTorch?", + "input": "", + "output": "

There's an example use case provided in the documentation here:

\n\n

https://pytorch.org/docs/master/notes/extending.html

\n\n
\n

You probably want to check if the backward method you implemented\n actually computes the derivatives of your function. It is possible by\n comparing with numerical approximations using small finite\n differences:

\n \n
from torch.autograd import gradcheck\n\n# gradcheck takes a tuple of tensors as input, check if your gradient\n# evaluated with these tensors are close enough to numerical\n# approximations and returns True if they all verify this condition.\ninput = (torch.randn(20,20,dtype=torch.double,requires_grad=True), torch.randn(30,20,dtype=torch.double,requires_grad=True))\ntest = gradcheck(linear, input, eps=1e-6, atol=1e-4)\nprint(test)\n
\n
\n\n

As the quote above suggests, the purpose of the gradcheck function is to verify that a custom backward function agrees with a numerical approximation of the gradient. The primary use case is when you're implementing a custom backward operation. In very few cases should you be implementing your own backward function in PyTorch. This is because PyTorch's autograd functionality takes care of computing gradients for the vast majority of operations.

\n\n

The most obvious exceptions are

\n\n
    \n
  1. You have a function which can't be expressed as a finite combination of other differentiable functions (for example, if you needed the incomplete gamma function, you might want to write your own forward and backward which used numpy and/or lookup tables).\n
  2. You're looking to speed up the computation of a particularly complicated expression for which the gradient could be drastically simplified after applying the chain rule.\n
\n", + "system": "" + }, + { + "instruction": "I can't load my model because I can't put a PosixPath", + "input": "", + "output": "
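The principle behind gradcheck, comparing an analytic gradient against a central finite-difference approximation, can be reproduced for a toy function in plain NumPy (an illustration of the idea, not PyTorch's implementation):

```python
import numpy as np

def f(x):
    return (x ** 2).sum()

def analytic_grad(x):
    # hand-derived gradient of sum(x**2)
    return 2 * x

def numerical_grad(func, x, eps=1e-6):
    # central finite differences, one coordinate at a time
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step.flat[i] = eps
        grad.flat[i] = (func(x + step) - func(x - step)) / (2 * eps)
    return grad

x = np.array([1.5, -0.5, 2.0])
ok = np.allclose(analytic_grad(x), numerical_grad(f, x), atol=1e-4)
```

If your hand-written backward disagreed with the numerical estimate beyond the tolerance, the check would fail, which is exactly what gradcheck reports.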

Just redirect PosixPath to WindowsPath.

\n
import pathlib\ntemp = pathlib.PosixPath\npathlib.PosixPath = pathlib.WindowsPath\n
\n

I am also loading fastai models and this trick works.

\n

IMPORTANT: Since this might cause issues later, remember to set pathlib.PosixPath = temp when done.

\n", + "system": "" + }, + { + "instruction": "How to solve the run time error "Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment"", + "input": "", + "output": "

I met a similar problem once. To be brief, the mistake is caused by deepcopy, which is not suitable for non-leaf nodes. You can print Rs, R1 and C1 to check whether they are leaf nodes.

\n\n

If they are leaf nodes, they show \"requires_grad=True\" but no \"grad_fn=SliceBackward\" or \"grad_fn=CopySlices\". Non-leaf nodes have a grad_fn, which is used to propagate gradients.

\n\n
#---------------------------------------------------------------------------------\n>>>import torch\n>>>q = torch.nn.Parameter(torch.Tensor(3,3))\n>>>q\nParameter containing:\ntensor([[8.7551e-37, 0.0000e+00, 0.0000e+00],\n        [0.0000e+00, 0.0000e+00, 0.0000e+00],\n        [0.0000e+00, 0.0000e+00, 0.0000e+00]], requires_grad=True)\n#q is leaf node\n>>>p = q[0,:]\n>>>p\ntensor([8.7551e-37, 0.0000e+00, 0.0000e+00], grad_fn=<SliceBackward>)\n#p is non-leaf node\n>>>q[0,0] = 0\n>>>q\nParameter containing:\ntensor([[0., 0., 0.],\n        [0., 0., 0.],\n        [0., 0., 0.]], grad_fn=<CopySlices>)\n#if slice operation is made on q, q becomes non-leaf node. The deepcopy is not suitable for q any more.\n#-----------------------------------------------------------------------------\n
\n", + "system": "" + }, + { + "instruction": "Reproducibility and performance in PyTorch", + "input": "", + "output": "

Performance refers to the run time. CuDNN has several implementations for its operations; when cudnn.deterministic is set to true, you're telling CuDNN that you only need the deterministic implementations (or what we believe they are). In a nutshell, when you are doing this, you should expect the same results on the CPU or the GPU on the same system when feeding the same inputs. Why would it affect the performance? CuDNN uses heuristics for the choice of the implementation, so how it behaves actually depends on your model; choosing the deterministic implementations may affect the runtime because there could have been a faster way of choosing an implementation at the same point of the run.

\n\n
\n\n

Concerning your snippet, I do exactly this seeding, and it has been working well (in terms of reproducibility) for 100+ DL experiments.

\n", + "system": "" + }, + { + "instruction": "How does one dynamically add new parameters to optimizers in Pytorch?", + "input": "", + "output": "
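For reference, the idea behind such seeding can be illustrated without PyTorch at all (the torch equivalents would be torch.manual_seed and torch.cuda.manual_seed_all): fixing the seeds makes the generators reproduce identical draws.

```python
import random
import numpy as np

def seeded_draws(seed):
    # seed every source of randomness you use
    random.seed(seed)
    np.random.seed(seed)
    return random.random(), np.random.rand(3)

r1, a1 = seeded_draws(42)
r2, a2 = seeded_draws(42)
# same seed on the same system: identical sequences
```
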

Just to add an answer to the title of your question: \"How does one dynamically add new parameters to optimizers in Pytorch?\"

\n\n

You can append params at any time to the optimizer:

\n\n
import torch\nimport torch.optim as optim\n\nmodel = torch.nn.Linear(2, 2)\n\n# Initialize optimizer (note: Adam takes no momentum argument)\noptimizer = optim.Adam(model.parameters(), lr=0.001)\n\n# new parameters must require grad to be optimized\nextra_params = torch.randn(2, 2, requires_grad=True)\n# add_param_group validates the group and fills in defaults such as lr\noptimizer.add_param_group({'params': extra_params})\n\n# then you can print your \`extra_params\`\nprint(\"extra params\", extra_params)\nprint(\"optimizer params\", optimizer.param_groups)\n
\n", + "system": "" + }, + { + "instruction": "Why there are different output between model.forward(input) and model(input)", + "input": "", + "output": "

model.forward just calls the forward operations as you mention but __call__ does a little extra.

\n\n

If you dig into the code of the nn.Module class you will see that __call__ ultimately calls forward but internally handles the forward or backward hooks and manages some states that pytorch allows. When calling a simple model like just an MLP, it may not be really needed, but more complex models, such as ones containing spectral normalization layers, have hooks, and therefore you should use the model(.) signature as much as possible unless you explicitly just want to call model.forward

\n\n

Also see Calling forward function without .forward()

\n\n

In this case, however, the difference may be due to some dropout layer; you should call vgg.eval() to make sure all the stochasticity in the network is turned off before comparing the outputs.

\n", + "system": "" + }, + { + "instruction": "Output and Broadcast shape mismatch in MNIST, torchvision", + "input": "", + "output": "

The error is due to color vs grayscale: the MNIST dataset is grayscale, so the normalization should use a single channel instead of three.

\n\n

I fixed it by changing transform to

\n\n
transform = transforms.Compose([transforms.ToTensor(),\n  transforms.Normalize((0.5,), (0.5,))\n])\n
\n", + "system": "" + }, + { + "instruction": "How to batch convert sentence lengths to masks in PyTorch?", + "input": "", + "output": "

One way that I found is:

\n\n
torch.arange(max_len).expand(len(lens), max_len) < lens.unsqueeze(1)\n
\n\n

Please share if there are better ways!

\n", + "system": "" + }, + { + "instruction": "What does the underscore suffix in PyTorch functions mean?", + "input": "", + "output": "
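The same construction can be checked in NumPy, where the broadcasting is identical (a [max_len] range row compared against a [batch, 1] column of lengths):

```python
import numpy as np

lens = np.array([3, 1, 4])
max_len = int(lens.max())

# arange row broadcast against a column of lengths -> boolean mask
mask = np.arange(max_len) < lens[:, None]
```

Row i holds True for the first lens[i] positions and False for the padding, exactly what the torch expression above produces.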

You have already answered your own question that the underscore indicates in-place operations in PyTorch. However I want to point out briefly why in-place operations can be problematic:

\n\n\n\n

Here is a short and slightly modified example taken from the answer you've posted:

\n\n

First the in-place version:

\n\n
import torch\na = torch.tensor([2, 4, 6], requires_grad=True, dtype=torch.float)\nadding_tensor = torch.rand(3)\nb = a.add_(adding_tensor)\nc = torch.sum(b)\nc.backward()\nprint(c.grad_fn)\n
\n\n

Which leads to this error:

\n\n
---------------------------------------------------------------------------\nRuntimeError                              Traceback (most recent call last)\n<ipython-input-27-c38b252ffe5f> in <module>\n      2 a = torch.tensor([2, 4, 6], requires_grad=True, dtype=torch.float)\n      3 adding_tensor = torch.rand(3)\n----> 4 b = a.add_(adding_tensor)\n      5 c = torch.sum(b)\n      6 c.backward()\n\nRuntimeError: a leaf Variable that requires grad has been used in an in-place operation.\n
\n\n
\n\n

Secondly the non in-place version:

\n\n
import torch\na = torch.tensor([2, 4, 6], requires_grad=True, dtype=torch.float)\nadding_tensor = torch.rand(3)\nb = a.add(adding_tensor)\nc = torch.sum(b)\nc.backward()\nprint(c.grad_fn)\n
\n\n

Which works just fine - output:

\n\n
<SumBackward0 object at 0x7f06b27a1da0>\n
\n\n
\n\n

So as a take-away I just wanted to point out to carefully use in-place operations in PyTorch.

\n", + "system": "" + }, + { + "instruction": "PyTorch CUDA vs Numpy for arithmetic operations? Fastest?", + "input": "", + "output": "

GPU operations have to additionally transfer data to and from GPU memory

\n

The problem is that your GPU operation always has to put the input on the GPU memory, and\nthen retrieve the results from there, which is a quite costly operation.

\n

NumPy, on the other hand, directly processes the data from the CPU/main memory, so there is almost no delay here. Additionally, your matrices are extremely small, so even in the best-case scenario, there should only be a minute difference.

\n
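As a back-of-the-envelope illustration of that transfer cost (the bandwidth figure and matrix size here are hypothetical, purely for the arithmetic):

```python
matrix_bytes = 100 * 100 * 4   # a 100x100 float32 matrix, 40 KB
pcie_bandwidth = 12e9          # bytes/second, illustrative PCIe figure
transfers = 3                  # two inputs copied in, one result copied out

transfer_time_s = transfers * matrix_bytes / pcie_bandwidth
transfer_time_us = transfer_time_s * 1e6
# roughly 10 microseconds of pure transfer, which can easily dwarf the
# compute time of such a tiny multiplication on either device
```
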

This is also partially the reason why you use mini-batches when training on a GPU in neural networks: Instead of having several extremely small operations, you now have "one big bulk" of numbers that you can process in parallel.
\nAlso note that GPU clock speeds are generally way lower than CPU clocks, so the GPU only really shines because it has way more cores. If your matrix does not utilize all of them fully, you are also likely to see a faster result on your CPU.

\n

TL;DR: If your matrix is big enough, you will eventually see CUDA outperform NumPy, even with the additional cost of the GPU transfer.

\n", + "system": "" + }, + { + "instruction": "PyTorch NotImplementedError in forward", + "input": "", + "output": "

please look carefully at the indentation of your __init__ function: your forward is part of __init__ not part of your module.

\n", + "system": "" + }, + { + "instruction": "How to free up all memory pytorch is taken from gpu memory", + "input": "", + "output": "

Try deleting the object with del and then apply torch.cuda.empty_cache(). The reusable memory will be freed after this operation.

\n", + "system": "" + }, + { + "instruction": "In-place operations with PyTorch", + "input": "", + "output": "

I am not sure how much in-place operations affect performance, but I can address the second query. You can use a mask instead of in-place ops.

\n\n
import torch\n\na = torch.rand(2, requires_grad=True)\nprint('a ', a)\nb = torch.rand(2)\n\n# calculation\nc = a + b\n\n# performing the masking out-of-place instead of an in-place op\nmask = torch.zeros(2)\nmask[1] = 1\nc = c * mask\n...\n
\n", + "system": "" + }, + { + "instruction": "Higher order gradients in pytorch", + "input": "", + "output": "

You can just iterate calling the grad function:

\n\n
import torch\nfrom torch.autograd import grad\n\ndef nth_derivative(f, wrt, n):\n\n    for i in range(n):\n\n        grads = grad(f, wrt, create_graph=True)[0]\n        f = grads.sum()\n\n    return grads\n\nx = torch.arange(4, requires_grad=True).reshape(2, 2)\nloss = (x ** 4).sum()\n\nprint(nth_derivative(f=loss, wrt=x, n=3))\n
\n\n

outputs

\n\n
tensor([[  0.,  24.],\n        [ 48.,  72.]])\n
\n", + "system": "" + }, + { + "instruction": "pytorch, AttributeError: module 'torch' has no attribute 'Tensor'", + "input": "", + "output": "
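As a quick sanity check on that output: for loss = sum(x**4) the third derivative with respect to each element is 24*x, which for the input entries 0, 1, 2, 3 gives exactly the printed values (a one-liner in plain Python):

```python
def third_derivative_x4(x):
    # d^3/dx^3 of x**4 is 24*x
    return 24 * x

# the entries of torch.arange(4).reshape(2, 2)
values = [0.0, 1.0, 2.0, 3.0]
expected = [third_derivative_x4(v) for v in values]
```
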

The Python binary that you are running does not have torch installed. It does have a directory named torch on the module search path, and it is treated as a namespace package:

\n\n
$ pwd\n/some/path\n$ python3 -c 'import torch; print(torch); print(torch.__path__)'\nTraceback (most recent call last):\n  File \"<string>\", line 1, in <module>\nModuleNotFoundError: No module named 'torch'\n$ mkdir torch\n$ python3 -c 'import torch; print(torch); print(torch.__path__)'\n<module 'torch' (namespace)>\n_NamespacePath(['/some/path/torch'])\n
\n\n

Any directory without a __init__.py file present in it, located on your module search path, will be treated as a namespace, provided no other Python modules or packages by that name are found anywhere else along the search path.

\n\n

This means that if torch was installed for your Python binary, it doesn't matter if there is a local torch directory:

\n\n
$ ls -ld torch/\ndrwxr-xr-x  2 mjpieters users  68 Nov 23 13:57 torch/\n$ mkdir -p additional_path/torch/\n$ touch additional_path/torch/__init__.py\n$ PYTHONPATH=\"./additional_path\" python3 -c 'import os.path as p, sys; print(*(t for t in (p.join(e, \"torch\") for e in sys.path) if p.exists(t)), sep=\"\\n\")'\ntorch\n/some/path/additional_path/torch\n$ PYTHONPATH=\"./additional_path\" python3 -c 'import torch; print(torch); print(torch.__path__)'\n<module 'torch' from '/some/path/additional_path/torch/__init__.py'>\n['/some/path/additional_path/torch']\n
\n\n

The above shows that sys.path lists the torch directory first, followed by additional_path/torch, but the latter is loaded as the torch module when you try to import it. That's because Python gives priority to top-level modules and packages before loading a namespace package.

\n\n

You need to install torch correctly for your current Python binary, see the project homepage; when using pip you may want to use the Python binary with the -m switch instead:

\n\n
python3.5 -m pip install http://download.pytorch.org/whl/cu80/torch-0.2.0.post3-cp35-cp35m-manylinux1_x86_64.whl \npython3.5 -m pip install torchvision\n
\n\n

So replace the pip3 the homepage instructions use with python3.5 -m pip; python3.5 can also be the full path to your Python binary.

\n\n

Do use the correct download.pytorch.org URL for the latest version.

\n\n

You don't have to move the directory aside, but if you do want to and don't know where it is located, use print(torch.__path__) as I've shown above.

\n\n

Again, note that if you do have an __init__.py file in a local torch directory, it becomes a regular package and it'll mask packages installed by pip into the normal site-packages location. If you have such a package, or a local torch.py single-file module, you need to rename those. The diagnostic information looks different in that case:

\n\n
$ pwd\n/some/path\n$ python3 -c 'import torch; print(torch); print(torch.__path__)'\nTraceback (most recent call last):\n  File \"<string>\", line 1, in <module>\nModuleNotFoundError: No module named 'torch'\n$ mkdir torch\n$ touch torch/__init__.py  # make it a package\n$ python3 -c 'import torch; print(torch); print(torch.__path__)'\n<module 'torch' from '/some/path/torch/__init__.py'>\n['/some/path/torch']\n$ rm -rf torch/\n$ touch torch.py           # make it a module\n$ python3 -c 'import torch; print(torch); print(torch.__file__)'\n<module 'torch' from '/some/path/torch.py'>\n/some/path/torch.py\n
\n\n

Note the differences; a namespace package, above, shows as <module 'name' (namespace)>, while a regular package shows as <module 'name' from '/path/name/__init__.py'>, and a plain module as <module 'name' from '/path/name.py'>.

\n\n

Such packages and modules (not namespace packages) are found first and stop the search. If the found package or module is not the one you wanted, you need to move them aside or rename them.

\n", + "system": "" + }, + { + "instruction": "Is there a function to extract image patches in PyTorch?", + "input": "", + "output": "

Unfortunately, there might not be a direct way to achieve your goal.
\nBut the Tensor.unfold function might be a solution.
\nhttps://discuss.pytorch.org/t/how-to-extract-smaller-image-patches-3d/16837/2
\nThis website might help you.

\n", + "system": "" + }, + { + "instruction": "Include .whl installation in requirements.txt", + "input": "", + "output": "
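For intuition about what patch extraction does, here is a NumPy sketch using sliding_window_view (available in NumPy >= 1.20); Tensor.unfold behaves analogously, one dimension at a time:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

image = np.arange(16).reshape(4, 4)

# all 2x2 patches with stride 1: result has shape (3, 3, 2, 2)
patches = sliding_window_view(image, (2, 2))

# stride 2 (non-overlapping patches) by subsampling the window grid
strided = patches[::2, ::2]
```
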

You can use environment markers:

\n\n
http://download.pytorch.org/whl/cu75/torch-0.1.12.post2-cp27-none-linux_x86_64.whl ; sys_platform == \"linux\"\nhttp://download.pytorch.org/whl/cu75/torch-0.1.12.post2-cp27-none-linux_x86_64.whl ; sys_platform == \"linux2\"\nhttp://download.pytorch.org/whl/torch-0.1.12.post2-cp27-none-macosx_10_7_x86_64.whl ; sys_platform == \"darwin\"\ntorchvision\n
\n\n

(Double Linux entries: linux2 for Python 2, linux for Python 3.)

\n", + "system": "" + }, + { + "instruction": "Unique values in PyTorch tensor", + "input": "", + "output": "

There is a torch.unique() method in 0.4.0

\n\n
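For example (a minimal sketch, valid on 0.4.0 and later):

```python
import torch

x = torch.tensor([1, 2, 2, 3, 1])
# torch.unique returns the distinct values, sorted by default
print(torch.unique(x))  # tensor([1, 2, 3])
```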

In torch <= 0.3.1 you can try:

\n\n
import torch\nimport numpy as np\n\nx = torch.rand((3,3)) * 10\nnp.unique(x.round().numpy())\n
\n", + "system": "" + }, + { + "instruction": "Pytorch: Convert FloatTensor into DoubleTensor", + "input": "", + "output": "

Your numpy arrays are 64-bit floating point and will be converted to torch.DoubleTensor by default. If you use them with your model, you'll need to make sure that your model parameters are also double. Alternatively, cast your numpy arrays to float, because model parameters are float by default.

\n\n

Hence, do either of the following:

\n\n
data_utils.TensorDataset(torch.from_numpy(X).float(), torch.from_numpy(Y).float())\n
\n\n

or do:

\n\n
model.double()\n
\n\n

Depending on whether you want to cast your model parameters, inputs, and targets as float or as double.

\n", + "system": "" + }, + { + "instruction": "PyTorch on M1 Mac: RuntimeError: Placeholder storage has not been allocated on MPS device", + "input": "", + "output": "

A possible issue with your code may be that you are not sending the inputs to the device inside your training loop. You should send both the model and the inputs to the device, as you can read about in this blog post.

\n

An example code would be the following:

\n
def train(model, train_loader, device, *args):\n    model.train()\n\n    for it, batch in tqdm(enumerate(train_loader), desc="Epoch %s: " % (epoch), total=train_loader.__len__()):\n        batch = {'data': batch['data'].to(device), 'labels': batch['labels'].to(device)}\n\n        # perform training\n        ...\n\n# set model and device\nmodel = MyWonderfulModel(*args)\ndevice = torch.device("mps" if torch.backends.mps.is_available() else "cpu")\nmodel.to(device)\n\n# call training function\ntrain(model, train_loader, device, *args)\n
\n

Running such training function on my M1 Mac works using MPS.

\n", + "system": "" + }, + { + "instruction": "How can I handle this datasets to create a datasetDict?", + "input": "", + "output": "

To get the validation dataset, you can do it like this:

\n
train_dataset, validation_dataset= train_dataset.train_test_split(test_size=0.1).values()\n
\n

This function will split off 10% of the train dataset into the validation dataset.

\n

And to obtain a "DatasetDict", you can do it like this:

\n
import datasets\ndd = datasets.DatasetDict({"train":train_dataset,"test":test_dataset})\n
\n", + "system": "" + }, + { + "instruction": "Calling the forward method in PyTorch vs. calling the model instance", + "input": "", + "output": "

You should avoid calling Module.forward.\nThe difference is that all the hooks are dispatched in the __call__ function see this, so if you call .forward and have hooks in your model, the hooks won\u2019t have any effect.

\n

In short, when you call Module.forward directly, PyTorch hooks won't have any effect

\n

Detailed answer can be found in this post

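A small sketch illustrating the difference (the Linear layer and the counting hook are illustrative, not from your model):

```python
import torch
import torch.nn as nn

calls = []
model = nn.Linear(2, 2)
# forward hooks are dispatched from __call__, not from forward itself
model.register_forward_hook(lambda module, inputs, output: calls.append(1))

x = torch.randn(1, 2)
model(x)          # goes through __call__, so the hook fires
model.forward(x)  # bypasses __call__, so the hook does NOT fire

print(len(calls))  # 1
```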
\n", + "system": "" + }, + { + "instruction": "Difference between torch.flatten() and nn.Flatten()", + "input": "", + "output": "

Flattening is available in three forms in PyTorch

\n\n

All three are identical and share the same implementation, the only difference being that nn.Flatten has start_dim set to 1 by default to avoid flattening the first axis (usually the batch axis), while the other two flatten from axis=0 to axis=-1, i.e. the entire tensor, if no arguments are given.

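A quick sketch of the default behaviours (the shapes are chosen purely for illustration):

```python
import torch
import torch.nn as nn

x = torch.randn(4, 3, 8, 8)  # e.g. a batch of 4 images

print(torch.flatten(x).shape)               # torch.Size([768])  -- flattens everything
print(x.flatten().shape)                    # torch.Size([768])  -- same as above
print(nn.Flatten()(x).shape)                # torch.Size([4, 192]) -- batch axis kept
print(torch.flatten(x, start_dim=1).shape)  # torch.Size([4, 192])
```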
\n", + "system": "" + }, + { + "instruction": "PyTorch - RuntimeError: [enforce fail at inline_container.cc:209] . file not found: archive/data.pkl", + "input": "", + "output": "

In my case, my disk drive was full. Clear some space and try again, deleting any partial files if necessary.

\n", + "system": "" + }, + { + "instruction": "Proper Usage of PyTorch's non_blocking=True for Data Prefetching", + "input": "", + "output": "

I think where you are off is that output = model(images) is a synchronization point. It seems the computation is handled by a different part of the GPU. Quote from official PyTorch docs:

\n
\n

Also, once you pin a tensor or storage, you can use asynchronous GPU\ncopies. Just pass an additional non_blocking=True argument to a\nto() or a cuda() call. This can be used to overlap data\ntransfers with computation.

\n
\n", + "system": "" + }, + { + "instruction": "Pytorch: Weight in cross entropy loss", + "input": "", + "output": "

To compute the weights of your classes, use sklearn.utils.class_weight.compute_class_weight(class_weight, *, classes, y), read about it here
\nThis will return an array of weights.
\nFor example:

\n
x = torch.randn(20, 5) \ny = torch.randint(0, 5, (20,)) # classes\nclass_weights=class_weight.compute_class_weight('balanced',np.unique(y),y.numpy())\nclass_weights=torch.tensor(class_weights,dtype=torch.float)\n \nprint(class_weights) #([1.0000, 1.0000, 4.0000, 1.0000, 0.5714])\n
\n

Then pass it to nn.CrossEntropyLoss's weight variable

\n
criterion = nn.CrossEntropyLoss(weight=class_weights,reduction='mean')\n\nloss = criterion(...)\n
\n", + "system": "" + }, + { + "instruction": "Difficulty in understanding the tokenizer used in Roberta model", + "input": "", + "output": "

This question is extremely broad, so I'm trying to give an answer that focuses on the main problem at hand. If you feel the need to have other questions answered, please open another question focusing on one question at a time, see the [help/on-topic] rules for Stackoverflow.

\n\n

Essentially, as you've correctly identified, BPE is central to any tokenization in modern deep networks. I highly recommend you to read the original BPE paper by Sennrich et al., in which they also highlight a bit more of the history of BPEs.
\nIn any case, the tokenizers for any of the huggingface models are pretrained, meaning that they are usually generated from the training set of the algorithm beforehand. Common implementations such as SentencePiece also give a bit better understanding of it, but essentially the task is framed as a constrained optimization problem, where you specify a maximum number of k allowed vocabulary words (the constraint), and the algorithm tries to then keep as many words intact without exceeding k.

\n\n

If there are not enough words to cover the whole vocabulary, smaller units are used to approximate it, which results in the splits observed in the example you gave.\nRoBERTa uses a variant called \"byte-level BPE\"; the best explanation is probably given in this study by Wang et al. The main benefit is that it results in a smaller vocabulary while maintaining the quality of splits, from what I understand.

\n\n

The second part of your question is easier to explain; while BERT highlights the merging of two subsequent tokens (with ##), RoBERTa's tokenizer instead highlights the start of a new token with a specific unicode character (in this case, \\u0120, the G with a dot). The best reason I could find for this was this thread, which argues that it basically avoids the use of whitespaces in training.

\n", + "system": "" + }, + { + "instruction": "How to calculate unbalanced weights for BCEWithLogitsLoss in pytorch", + "input": "", + "output": "

The PyTorch documentation for BCEWithLogitsLoss recommends the pos_weight to be a ratio between the negative counts and the positive counts for each class.

\n

So, if len(dataset) is 1000 and element 0 of your multihot encoding has 100 positive counts, then element 0 of the pos_weights_vector should be 900/100 = 9. That means that the binary cross-entropy loss will behave as if the dataset contains 900 positive examples instead of 100.

\n

Here is my implementation:

\n

(new, based on this post)

\n

pos_weight = (y==0.).sum()/y.sum()

\n

(original)

\n
def calculate_pos_weights(class_counts, data):\n    # use a float array so the ratios are not truncated to integers\n    pos_weights = np.ones_like(class_counts, dtype=np.float64)\n    neg_counts = [len(data) - pos_count for pos_count in class_counts]\n    for cdx, (pos_count, neg_count) in enumerate(zip(class_counts, neg_counts)):\n        pos_weights[cdx] = neg_count / (pos_count + 1e-5)\n\n    return torch.as_tensor(pos_weights, dtype=torch.float)\n
\n

Where class_counts is just a column-wise sum of the positive samples. I posted it on the PyTorch forum and one of the PyTorch devs gave it his blessing.

\n", + "system": "" + }, + { + "instruction": "What does "RuntimeError: CUDA error: device-side assert triggered" in PyTorch mean?", + "input": "", + "output": "

When I shifted my code to work on CPU instead of GPU, I got the following error:

\n

IndexError: index 128 is out of bounds for dimension 0 with size 128

\n

So, perhaps there might be a mistake in the code which for some strange reason comes out as a CUDA error.

\n", + "system": "" + }, + { + "instruction": "ModuleNotFoundError: No module named 'torch._C'", + "input": "", + "output": "

In my case, I installed the dependency using pip install and it didn't work. I reinstalled torch using conda install instead and it seemed to work!

\n", + "system": "" + }, + { + "instruction": "Why does PyTorch not find my NVDIA drivers for CUDA support?", + "input": "", + "output": "

Since you have two graphics cards, selecting a card ID via CUDA_VISIBLE_DEVICES=GPU_ID should fix the problem, as per this explanation.

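A minimal sketch (the GPU ID 0 is just an example; use the ID of the card whose drivers work):

```python
import os

# Restrict CUDA to one card; this must be set before torch initializes CUDA,
# i.e. before the first CUDA call (or before importing torch, to be safe).
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

print(os.environ["CUDA_VISIBLE_DEVICES"])  # 0
```

After this, the selected card is reported to PyTorch as cuda:0.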
\n", + "system": "" + }, + { + "instruction": "Calling cuda() with async results in SyntaxError", + "input": "", + "output": "

Your code does not work because async is a reserved keyword in Python 3.7+ and can no longer be used as an argument name. The current signature is:

\n\n
\n

cuda(device=None, non_blocking=False) \u2192 Tensor

\n
\n\n

Use non_blocking instead:

\n

The argument non_blocking has the same effect as async previously had:

\n
\n\n
\n
\n
\n

As an add-on: If you are interested in what async is actually used for you can take a look here:\nhttps://www.python.org/dev/peps/pep-0492/#new-syntax

\n", + "system": "" + }, + { + "instruction": "Bidirectional LSTM output question in PyTorch", + "input": "", + "output": "

Yes, when using a BiLSTM the hidden states of the directions are just concatenated (the second part after the middle is the hidden state for feeding in the reversed sequence).
So splitting up in the middle works just fine.

\n\n

As reshaping works from the rightmost to the leftmost dimensions, you won't have any problems separating the two directions.

\n\n
\n\n

Here is a small example:

\n\n
# so these are your original hidden states for each direction\n# in this case hidden size is 5, but this works for any size\ndirection_one_out = torch.tensor(range(5))\ndirection_two_out = torch.tensor(list(reversed(range(5))))\nprint('Direction one:')\nprint(direction_one_out)\nprint('Direction two:')\nprint(direction_two_out)\n\n# before outputting they will be concatinated \n# I'm adding here batch dimension and sequence length, in this case seq length is 1\nhidden = torch.cat((direction_one_out, direction_two_out), dim=0).view(1, 1, -1)\nprint('\\nYour hidden output:')\nprint(hidden, hidden.shape)\n\n# trivial case, reshaping for one hidden state\nhidden_reshaped = hidden.view(1, 1, 2, -1)\nprint('\\nReshaped:')\nprint(hidden_reshaped, hidden_reshaped.shape)\n\n# This works as well for abitrary sequence lengths as you can see here\n# I've set sequence length here to 5, but this will work for any other value as well\nprint('\\nThis also works for more multiple hidden states in a tensor:')\nmulti_hidden = hidden.expand(5, 1, 10)\nprint(multi_hidden, multi_hidden.shape)\nprint('Directions can be split up just like this:')\nmulti_hidden = multi_hidden.view(5, 1, 2, 5)\nprint(multi_hidden, multi_hidden.shape)\n
\n\n

Output:

\n\n
Direction one:\ntensor([0, 1, 2, 3, 4])\nDirection two:\ntensor([4, 3, 2, 1, 0])\n\nYour hidden output:\ntensor([[[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]]]) torch.Size([1, 1, 10])\n\nReshaped:\ntensor([[[[0, 1, 2, 3, 4],\n          [4, 3, 2, 1, 0]]]]) torch.Size([1, 1, 2, 5])\n\nThis also works for more multiple hidden states in a tensor:\ntensor([[[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]],\n\n        [[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]],\n\n        [[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]],\n\n        [[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]],\n\n        [[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]]]) torch.Size([5, 1, 10])\nDirections can be split up just like this:\ntensor([[[[0, 1, 2, 3, 4],\n          [4, 3, 2, 1, 0]]],\n\n\n        [[[0, 1, 2, 3, 4],\n          [4, 3, 2, 1, 0]]],\n\n\n        [[[0, 1, 2, 3, 4],\n          [4, 3, 2, 1, 0]]],\n\n\n        [[[0, 1, 2, 3, 4],\n          [4, 3, 2, 1, 0]]],\n\n\n        [[[0, 1, 2, 3, 4],\n          [4, 3, 2, 1, 0]]]]) torch.Size([5, 1, 2, 5])\n
\n\n

Hope this helps! :)

\n", + "system": "" + }, + { + "instruction": "pytorch skip connection in a sequential model", + "input": "", + "output": "

Your observations are correct, but you may have missed the definition of UnetSkipConnectionBlock.forward() (UnetSkipConnectionBlock being the Module defining the U-Net block you shared), which may clarify this implementation:

\n\n

(from pytorch-CycleGAN-and-pix2pix/models/networks.py#L259)\n

\n\n
# Defines the submodule with skip connection.\n# X -------------------identity---------------------- X\n#   |-- downsampling -- |submodule| -- upsampling --|\nclass UnetSkipConnectionBlock(nn.Module):\n\n    # ...\n\n    def forward(self, x):\n        if self.outermost:\n            return self.model(x)\n        else:\n            return torch.cat([x, self.model(x)], 1)\n
\n\n

The last line is the key (applied for all inner blocks). The skip layer is simply done by concatenating the input x and the (recursive) block output self.model(x), with self.model the list of operations you mentioned -- so not so differently from the Functional code you wrote.

\n", + "system": "" + }, + { + "instruction": "How to iterate over two dataloaders simultaneously using pytorch?", + "input": "", + "output": "

If you want to iterate over two datasets simultaneously, there is no need to define your own dataset class; just use TensorDataset, like below:

\n\n
dataset = torch.utils.data.TensorDataset(dataset1, dataset2)\ndataloader = DataLoader(dataset, batch_size=128, shuffle=True)\nfor index, (xb1, xb2) in enumerate(dataloader):\n    ....\n
\n\n

If you want labels, or to iterate over more than two datasets, just feed them as additional arguments to the TensorDataset after dataset2.

\n", + "system": "" + }, + { + "instruction": "Why can GPU do matrix multiplication faster than CPU?", + "input": "", + "output": "

How do you parallelize the computations?

\n\n

GPUs are able to do a lot of parallel computations, a lot more than a CPU can.\nLook at this example of vector addition of, let's say, 1M elements.

\n\n

Using a CPU, let's say you have a maximum of 100 threads you can run:\n(100 is more than typical, but let's assume it for a while)

\n\n

In a typical multi-threading example let's say you parallelized additions on all threads.

\n\n

Here is what I mean by it :

\n\n
c[0] = a[0] + b[0] # let's do it on thread 0\nc[1] = a[1] + b[1] # let's do it on thread 1\nc[101] = a[101] + b[101] # let's do it on thread 1\n
\n\n

We are able to do this because the value of c[0] doesn't depend on any values other than a[0] and b[0]. Each addition is therefore independent of the others, which is why we can easily parallelize the task.

\n\n

As you see in the above example, the additions of 100 different elements take place simultaneously, saving you time. In this way it takes 1M/100 = 10,000 steps to add all the elements.

\n\n
\n\n

How Efficient does GPU Parallelizes?

\n\n

Now consider today's GPUs with about 2048 threads: all threads can independently do 2048 different operations in constant time, giving a huge speed-up.

\n\n

The same holds in your case of matrix multiplication: you can parallelize the computations because the GPU has many more threads, organized into blocks. A lot of computations are parallelized, resulting in quick computations.

\n\n
\n\n
\n

But I didn't write any parallel processing for my GTX1080! Does it do it by itself?

\n
\n\n

Almost all machine learning frameworks use parallelized implementations of all the possible operations. This is achieved with CUDA programming, NVIDIA's API for doing parallel computations on NVIDIA GPUs. You don't write it explicitly; it's all done at a low level, and you don't even get to know.

\n\n

This doesn't mean that a C++ program you wrote will automatically be parallelized just because you have a GPU.\nYou need to write it using CUDA; only then will it be parallelized. Most machine learning frameworks already do this, so it is not required from your end.

\n", + "system": "" + }, + { + "instruction": "what's the difference between torch.Tensor() vs torch.empty() in pytorch?", + "input": "", + "output": "

torch.Tensor() is just an alias to torch.FloatTensor() which is the default type of tensor, when no dtype is specified during tensor construction.

\n\n

From the torch for numpy users notes, it seems that torch.Tensor() is a drop-in replacement of numpy.empty()

\n\n

So, in essence, torch.FloatTensor() and torch.empty() do the same job of returning a tensor filled with garbage values of dtype torch.float32. Below is a small run:

\n\n
In [87]: torch.FloatTensor(2, 3)\nOut[87]: \ntensor([[-1.0049e+08,  4.5688e-41, -8.9389e-38],\n        [ 3.0638e-41,  4.4842e-44,  0.0000e+00]])\n\nIn [88]: torch.FloatTensor(2, 3)\nOut[88]: \ntensor([[-1.0049e+08,  4.5688e-41, -1.6512e-38],\n        [ 3.0638e-41,  4.4842e-44,  0.0000e+00]])\n
\n\n
\n\n
In [89]: torch.empty(2, 3)\nOut[89]: \ntensor([[-1.0049e+08,  4.5688e-41, -9.0400e-38],\n        [ 3.0638e-41,  4.4842e-44,  0.0000e+00]])\n\nIn [90]: torch.empty(2, 3)\nOut[90]: \ntensor([[-1.0049e+08,  4.5688e-41, -9.2852e-38],\n        [ 3.0638e-41,  4.4842e-44,  0.0000e+00]])\n
\n", + "system": "" + }, + { + "instruction": "How can I make a FloatTensor with requires_grad=True from a numpy array using PyTorch 0.4.0?", + "input": "", + "output": "

How can I make a FloatTensor with requires_grad=True from a numpy array using PyTorch 0.4.0, preferably in a single line?

\n\n

If x is your numpy array this line should do the trick:

\n\n
torch.tensor(x, requires_grad=True)\n
\n\n

Here is a full example tested with PyTorch 0.4.0:

\n\n
import numpy as np\nimport torch\n\nx = np.array([1.3, 0.5, 1.9, 2.45])\nprint('np.array:', x)\nt = torch.tensor(x, requires_grad=True)\nprint('tensor:', t)\nprint('requires_grad:', t.requires_grad)\n
\n\n

This gives the following output:

\n\n
np.array: [1.3  0.5  1.9  2.45]\ntensor: tensor([ 1.3000,  0.5000,  1.9000,  2.4500], dtype=torch.float64)\nrequires_grad: True\n
\n\n

Edit: dtype should be determined by the given dtype of your numpy array x.

\n\n

I hope this helps.

\n", + "system": "" + }, + { + "instruction": "How to correctly implement a batch-input LSTM network in PyTorch?", + "input": "", + "output": "

Question 1 - Last Timestep

\n\n

This is the code that I use to get the output of the last timestep. I don't know if there is a simpler solution; if there is, I'd like to know it. I followed this discussion and grabbed the relevant code snippet for my last_timestep method. This is my forward:

\n\n
class BaselineRNN(nn.Module):\n    def __init__(self, **kwargs):\n        ...\n\n    def last_timestep(self, unpacked, lengths):\n        # Index of the last output for each sequence.\n        idx = (lengths - 1).view(-1, 1).expand(unpacked.size(0),\n                                               unpacked.size(2)).unsqueeze(1)\n        return unpacked.gather(1, idx).squeeze()\n\n    def forward(self, x, lengths):\n        embs = self.embedding(x)\n\n        # pack the batch\n        packed = pack_padded_sequence(embs, list(lengths.data),\n                                      batch_first=True)\n\n        out_packed, (h, c) = self.rnn(packed)\n\n        out_unpacked, _ = pad_packed_sequence(out_packed, batch_first=True)\n\n        # get the outputs from the last *non-masked* timestep for each sentence\n        last_outputs = self.last_timestep(out_unpacked, lengths)\n\n        # project to the classes using a linear layer\n        logits = self.linear(last_outputs)\n\n        return logits\n
\n\n

Question 2 - Masked Cross Entropy Loss

\n\n

Yes, by default the zero padded timesteps (targets) matter. However, it is very easy to mask them. You have two options, depending on the version of PyTorch that you use.

\n\n
    \n
  1. PyTorch 0.2.0: PyTorch now supports masking directly in CrossEntropyLoss, with the ignore_index argument. For example, in language modeling or seq2seq, where I add zero padding, I mask the zero-padded words (targets) simply like this:

    \n\n

    loss_function = nn.CrossEntropyLoss(ignore_index=0)

  2. \n
  3. PyTorch 0.1.12 and older: In the older versions of PyTorch, masking was not supported, so you had to implement your own workaround. The solution that I used was masked_cross_entropy.py, by jihunchoi. You may also be interested in this discussion.

  4. \n
\n", + "system": "" + }, + { + "instruction": "How to convert Pytorch autograd.Variable to Numpy?", + "input": "", + "output": "

There are two possible cases:

\n\n\n", + "system": "" + }, + { + "instruction": "ModuleNotFoundError: No module named 'torch._C'", + "input": "", + "output": "

In my case, I installed the dependency using pip install and it didn't work. I reinstalled torch using conda install instead and it seemed to work!

\n", + "system": "" + }, + { + "instruction": "How to move PyTorch model to GPU on Apple M1 chips?", + "input": "", + "output": "

This is what I used:

\n
if torch.backends.mps.is_available():\n    mps_device = torch.device("mps")\n    G.to(mps_device)\n    D.to(mps_device)\n
\n

Similarly for all tensors that I want to move to M1 GPU, I used:

\n
tensor_ = tensor_.to(mps_device)\n
\n

Some operations are not yet implemented using MPS, and we might need to set an environment variable to use the CPU fallback instead.\nOne error that I faced while executing the script was:

\n
# NotImplementedError: The operator 'aten::_slow_conv2d_forward' is not current implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.\n
\n

To solve it I set the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1

\n
conda env config vars set PYTORCH_ENABLE_MPS_FALLBACK=1\nconda activate <test-env>\n
\n

References:

\n
    \n
  1. https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/
  2. \n
  3. https://pytorch.org/docs/master/notes/mps.html
  4. \n
  5. https://sebastianraschka.com/blog/2022/pytorch-m1-gpu.html
  6. \n
  9. https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#setting-environment-variables
  10. \n
\n", + "system": "" + }, + { + "instruction": "Indexing a tensor with None in PyTorch", + "input": "", + "output": "

Similar to NumPy you can insert a singleton dimension ("unsqueeze" a dimension) by indexing this dimension with None. In turn n[:, None] will have the effect of inserting a new dimension on dim=1. This is equivalent to n.unsqueeze(dim=1):

\n
>>> n = torch.rand(3, 100, 100)\n\n>>> n[:, None].shape\n(3, 1, 100, 100)\n\n>>> n.unsqueeze(1).shape\n(3, 1, 100, 100)\n
\n
\n

Here are some other types of None indexings.

\n

In the example above, : was used as a placeholder to designate the first dimension dim=0. If you want to insert a dimension on dim=2, you can add a second :, as in n[:, :, None].

\n

You can also place None with respect to the last dimension instead. To do so you can use the ellipsis syntax ...:

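For example (a short sketch of these variants):

```python
import torch

n = torch.rand(3, 100, 100)

print(n[:, :, None].shape)  # torch.Size([3, 100, 1, 100])
print(n[..., None].shape)   # torch.Size([3, 100, 100, 1]) -- after the last dim
print(n[None, ...].shape)   # torch.Size([1, 3, 100, 100]) -- before the first dim
```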
\n\n", + "system": "" + }, + { + "instruction": "Difference between autograd.grad and autograd.backward?", + "input": "", + "output": "

TLDR; Both are different interfaces for gradient computation: torch.autograd.grad is non-mutable while torch.autograd.backward is.

\n
\n

Descriptions

\n

The torch.autograd module is the automatic differentiation package for PyTorch. As described in the documentation it only requires minimal change to the code base to be used:

\n
\n

you only need to declare Tensors for which gradients should be computed with the requires_grad=True keyword.

\n
\n

The two main functions torch.autograd provides for gradient computation are torch.autograd.backward and torch.autograd.grad:

\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
torch.autograd.backward (source)torch.autograd.grad (source)
DescriptionComputes the sum of gradients of given tensors with respect to graph leaves.Computes and returns the sum of gradients of outputs with respect to the inputs.
Headertorch.autograd.backward(
tensors,
grad_tensors=None,
retain_graph=None,
create_graph=False,
grad_variables=None,
inputs=None)
torch.autograd.grad(
outputs,
inputs,
grad_outputs=None,
retain_graph=None,
create_graph=False,
only_inputs=True,
allow_unused=False)
Parameters- tensors \u2013 Tensors of which the derivative will be computed.
- grad_tensors \u2013 The "vector" in the Jacobian-vector product, usually gradients w.r.t. each element of corresponding tensors.
- retain_graph \u2013 If False, the graph used to compute the grad will be freed. [...]
- inputs \u2013 Inputs w.r.t. which the gradient be will be accumulated into .grad. All other Tensors will be ignored. If not provided, the gradient is accumulated into all the leaf Tensors that were used [...].
- outputs \u2013 outputs of the differentiated function.
- inputs \u2013 Inputs w.r.t. which the gradient will be returned (and not accumulated into .grad).
- grad_outputs \u2013 The "vector" in the Jacobian-vector product, usually gradients w.r.t. each element of corresponding tensors.
- retain_graph \u2013 If False, the graph used to compute the grad will be freed. [...].
\n
\n

Usage examples

\n

In terms of high-level usage, you can look at torch.autograd.grad as a non-mutable function. As mentioned in the documentation table above, it will not accumulate the gradients on the grad attribute but instead return the computed partial derivatives. In contrast torch.autograd.backward will be able to mutate the tensors by updating the grad attribute of leaf nodes, the function won't return any value. In other words, the latter is more suitable when computing gradients for a large number of parameters.

\n

In the following, we will take two inputs (x1 and, x2), calculate a tensor y with them, and then compute the partial derivatives of the result w.r.t both inputs, i.e. dL/dx1 and dL/dx2:

\n
>>> x1 = torch.rand(1, requires_grad=True)\n>>> x2 = torch.rand(1, requires_grad=True)\n>>> x1, x2\n(tensor(0.3939, grad_fn=<UnbindBackward>),\n tensor(0.7965, grad_fn=<UnbindBackward>))\n
\n

Inference:

\n
>>> y = x1**2 + 5*x2\n>>> y\ntensor(4.1377, grad_fn=<AddBackward0>)\n
\n

Since y was computed using tensors requiring gradients (i.e. with requires_grad=True), outside of a torch.no_grad context, it will have a grad_fn function attached. This callback is used to backpropagate onto the computation graph to compute the gradients of preceding tensor nodes.

\n\n
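A sketch comparing the two interfaces, reusing the x1, x2, y setup from above:

```python
import torch

x1 = torch.rand(1, requires_grad=True)
x2 = torch.rand(1, requires_grad=True)
y = x1**2 + 5*x2

# torch.autograd.grad: returns the partial derivatives, .grad stays untouched
dx1, dx2 = torch.autograd.grad(y, (x1, x2), retain_graph=True)
print(dx1, dx2)   # 2*x1 and 5
print(x1.grad)    # None -- nothing was accumulated

# torch.autograd.backward: returns nothing, accumulates into .grad instead
torch.autograd.backward(y)
print(x1.grad, x2.grad)  # 2*x1 and 5
```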
\n

In conclusion: both perform the same operation. They are two different interfaces to interact with the autograd library and perform gradient computations. The latter, torch.autograd.backward (equivalent to torch.Tensor.backward), is generally used in neural networks training loops to compute the partial derivative of the loss w.r.t each one of the model's parameters.

\n

You can read more about how torch.autograd.grad works by reading through this other answer I made on: Meaning of grad_outputs in PyTorch's torch.autograd.grad.

\n", + "system": "" + }, + { + "instruction": "PyTorch CUDA error: an illegal memory access was encountered", + "input": "", + "output": "

It was partially said in the OP's own answer, but a common underlying cause of an illegal memory access is that the GPU runs out of memory.

\n

In my case, when I run a script on Windows I get the error message:

\n
RuntimeError: CUDA out of memory. Tried to allocate 1.64 GiB (GPU 0; 4.00 GiB total capacity; 1.10 GiB already allocated; 1.27 GiB free; 1.12 GiB reserved in total by PyTorch)\n
\n

but when run on Linux I get:

\n
RuntimeError: CUDA error: an illegal memory access was encountered\nCUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\n
\n

Perhaps the message in Windows is more understandable :)

\n

References:\nhttps://forums.fast.ai/t/runtimeerror-cuda-error-an-illegal-memory-access-was-encountered/93899

\n", + "system": "" + }, + { + "instruction": "Unable to import pytorch_lightning on google colab", + "input": "", + "output": "

As said in Issue #6415 on GitHub, try installing it from GitHub directly.
\nIt worked for me.

\n
!pip install git+https://github.com/PyTorchLightning/pytorch-lightning\nimport pytorch_lightning as pl\nprint(pl.__version__)\n
\n

Output:

\n
1.3.0dev\n
\n

It seems that the error is coming from Issue #6210 and they say it was fixed. I guess it wasn't uploaded to PyPi.

\n", + "system": "" + }, + { + "instruction": "How to convert one-hot vector to label index and back in Pytorch?", + "input": "", + "output": "

From the Pytorch forums

\n
import torch\nimport numpy as np\n\n\nlabels = torch.randint(0, 10, (10,))\n\n# labels --> one-hot \none_hot = torch.nn.functional.one_hot(labels)\n# one-hot --> labels\nlabels_again = torch.argmax(one_hot, dim=1)\n\nnp.testing.assert_equals(labels.numpy(), labels_again.numpy())\n
\n", + "system": "" + }, + { + "instruction": "How to make a Trainer pad inputs in a batch with huggingface-transformers?", + "input": "", + "output": "

Look at the columns your tokenizer is returning. You might want to limit it to only the required columns.

\n

For Example

\n
def preprocess_function(examples):\n   #function to tokenize the dataset.\n   if sentence2_key is None:\n       return tokenizer(examples[sentence1_key], truncation=True, padding=True)\n   return tokenizer(examples[sentence1_key], examples[sentence2_key], truncation=True, padding=True)\n\n\nencoded_dataset = dataset.map(preprocess_function, batched=True, load_from_cache_file=False)\n\n\n#Thing you should do is \n\ncolumns_to_return = ['input_ids', 'label', 'attention_mask']\nencoded_dataset.set_format(type='torch', columns=columns_to_return)\n
\n", + "system": "" + }, + { + "instruction": "pytorch dataset map-style vs iterable-style", + "input": "", + "output": "

I wrote a short post on how to use PyTorch datasets and the difference between map-style and iterable-style datasets.

\n

In essence, you should use map-style datasets when possible. Map-style datasets give you their size ahead of time, are easier to shuffle, and allow for easy parallel loading.

\n

It\u2019s a common misconception that if your data doesn\u2019t fit in memory, you have to use an iterable-style dataset. That is not true. You can implement a map-style dataset such that it retrieves data as needed.

\n
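A minimal sketch of such a map-style dataset (the on-demand loading is stubbed out; in practice __getitem__ would read sample idx from disk):

```python
import torch
from torch.utils.data import Dataset

class LazyDataset(Dataset):
    """Map-style dataset: defines __len__ and __getitem__, loads items on demand."""
    def __init__(self, n):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, idx):
        # in practice: read sample `idx` from disk here instead of building it
        return torch.tensor([float(idx)])

ds = LazyDataset(1000)
print(len(ds))  # 1000 -- size known ahead of time
print(ds[42])   # tensor([42.])
```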

Check out the full post here.

\n", + "system": "" + }, + { + "instruction": "Save only best weights with huggingface transformers", + "input": "", + "output": "

You may try the following parameters for the Trainer in the Hugging Face library:

\n
training_args = TrainingArguments(\n    output_dir='/content/drive/results',  # output directory\n    do_predict=True,\n    num_train_epochs=3,              # total number of training epochs\n    per_device_train_batch_size=4,   # batch size per device during training\n    per_device_eval_batch_size=2,    # batch size for evaluation\n    warmup_steps=1000,               # number of warmup steps for the learning rate schedule\n    save_steps=1000,\n    save_total_limit=10,\n    load_best_model_at_end=True,\n    weight_decay=0.01,               # strength of weight decay\n    logging_dir='./logs',            # directory for storing logs\n    logging_steps=0,\n    evaluate_during_training=True)\n
\n

There may be better ways to avoid too many checkpoints and to select the best model.\nSo far you cannot save only the best model, but you can check whether an evaluation yields better results than the previous one.

\n", + "system": "" + }, + { + "instruction": "How to understand hidden_states of the returns in BertModel?(huggingface-transformers)", + "input": "", + "output": "
\n

hidden_states (tuple(torch.FloatTensor), optional, returned when config.output_hidden_states=True): Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

\n
\n
\n

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

\n
\n

For a given token, its input representation is constructed by summing the corresponding token embedding, segment embedding, and position embedding. This input representation is called the initial embedding output which can be found at index 0 of the tuple hidden_states.\nThis figure explains how the embeddings are calculated.\n\"enter

\n

The remaining 12 elements in the tuple contain the output of the corresponding hidden layer. E.g: the last hidden layer can be found at index 12, which is the 13th item in the tuple. The dimension of both the initial embedding output and the hidden states are [batch_size, sequence_length, hidden_size]. It would be useful to compare the indexing of hidden_states bottom-up with this image from the BERT paper.

\n

\"enter

\n", + "system": "" + }, + { + "instruction": "what does padding_idx do in nn.embeddings()", + "input": "", + "output": "

padding_idx is indeed quite badly described in the documentation.

\n\n

Basically, it specifies which index passed during call will mean \"zero vector\" (which is quite often used in NLP in case some token is missing). By default no index will mean \"zero vector\", as you can see in the example below:

\n\n
import torch\n\nembedding = torch.nn.Embedding(10, 3)\ninput = torch.LongTensor([[0, 1, 0, 5]])\nprint(embedding(input))\n
\n\n

Will give you:

\n\n
tensor([[[ 0.1280, -1.1390, -2.5007],\n         [ 0.3617, -0.9280,  1.2894],\n         [ 0.1280, -1.1390, -2.5007],\n         [-1.3135, -0.0229,  0.2451]]], grad_fn=<EmbeddingBackward>)\n
\n\n

If you specify padding_idx=0 every input where the value is equal to 0 (so zero-th and second row) will be zero-ed out like this (code: embedding = torch.nn.Embedding(10, 3, padding_idx=0)):

\n\n
tensor([[[ 0.0000,  0.0000,  0.0000],\n         [-0.4448, -0.2076,  1.1575],\n         [ 0.0000,  0.0000,  0.0000],\n         [ 1.3602, -0.6299, -0.5809]]], grad_fn=<EmbeddingBackward>)\n
\n\n

If you were to specify padding_idx=5, the last row would be full of zeros, etc.

\n", + "system": "" + }, + { + "instruction": "AttributeError: 'torch.return_types.max' object has no attribute 'dim' - Maxpooling Channel", + "input": "", + "output": "

The torch.max function, when called with the dim argument, returns a namedtuple, so:

\n
class ChannelPool(nn.Module):\n    def forward(self, input):\n        input_max, max_indices = torch.max(input, dim=1)\n        return input_max\n
\n

From the documentation of torch.max:

\n
\n

Returns a namedtuple (values, indices) where values is the maximum value of each row of the input tensor in the given dimension dim. And indices is the index location of each maximum value found (argmax).

\n
\n", + "system": "" + }, + { + "instruction": "Pytorch: RuntimeError: reduce failed to synchronize: cudaErrorAssert: device-side assert triggered", + "input": "", + "output": "

This kind of error generally occurs when using NLLLoss or CrossEntropyLoss, and when your dataset has negative labels (or labels greater than the number of classes). That is also the exact error you are getting Assertion t >= 0 && t < n_classes failed.

\n\n

This won't occur for MSELoss, but OP mentions that there is a CrossEntropyLoss somewhere and thus the error occurs (the program crashes asynchronously on some other line). The solution is to clean the dataset and ensure that t >= 0 && t < n_classes is satisfied (where t represents the label).

\n\n
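A quick way to catch this early, sketched below (helper name hypothetical), is to validate the label range on the CPU before training, so the failure is a clear Python error instead of an asynchronous device-side assert:

```python
# Hypothetical helper: verify 0 <= t < n_classes for every label
# before training starts.
def check_labels(labels, n_classes):
    bad = [t for t in labels if not (0 <= t < n_classes)]
    if bad:
        raise ValueError(f"labels out of range [0, {n_classes}): {bad}")
    return True
```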

Also, ensure that your network output is in the range 0 to 1 in case you use NLLLoss or BCELoss (then you require softmax or sigmoid activation respectively). Note that this is not required for CrossEntropyLoss or BCEWithLogitsLoss because they implement the activation function inside the loss function. (Thanks to @PouyaB for pointing out).

\n", + "system": "" + }, + { + "instruction": "Make GPU available again after numba.cuda.close()?", + "input": "", + "output": "

Could you provide a more complete snippet? I am running

\n
from numba import cuda\nimport torch \ndevice = cuda.get_current_device()\ndevice.reset()\ncuda.close()\ntorch.cuda.is_available()\n
\n

which prints True; I'm not sure what your issue is.

\n", + "system": "" + }, + { + "instruction": "How to disable progress bar in Pytorch Lightning", + "input": "", + "output": "

F.Y.I. show_progress_bar=False has been deprecated since version 0.7.2; use progress_bar_refresh_rate=0 instead.

\n
\n

update:

\n

progress_bar_refresh_rate has been deprecated in v1.5 and will be removed in v1.7. To disable the progress bar, set enable_progress_bar to false

\n
progress_bar_refresh_rate: How often to refresh progress bar (in steps). Value ``0`` disables progress bar.\n    Ignored when a custom progress bar is passed to :paramref:`~Trainer.callbacks`. Default: None, means\n    a suitable value will be chosen based on the environment (terminal, Google COLAB, etc.).\n\n    .. deprecated:: v1.5\n        ``progress_bar_refresh_rate`` has been deprecated in v1.5 and will be removed in v1.7.\n        Please pass :class:`~pytorch_lightning.callbacks.progress.TQDMProgressBar` with ``refresh_rate``\n        directly to the Trainer's ``callbacks`` argument instead. To disable the progress bar,\n        pass ``enable_progress_bar = False`` to the Trainer.\n\nenable_progress_bar: Whether to enable to progress bar by default.\n
\n", + "system": "" + }, + { + "instruction": "Pinning memory is actually slower in PyTorch?", + "input": "", + "output": "

TL;DR

\n

Your code is slower because you allocate a new block of pinned memory each time you call the generator. Allocating new memory requires synchronization each time, making it much slower than non-pinned memory. Likely, you are measuring this overhead.

\n

Your code example in the edit fails in THCCachingHostAllocator.cpp. It's not the GPU running out of memory, but your host denying the allocation of 68 GB of pinned physical memory.

\n
\n
\n

Pinning memory is actually slower in PyTorch?

\n
\n

Creating or releasing pinned memory (cudaHostAlloc()/cudaFreeHost() via the CUDA Runtime) is much slower than malloc/free because it involves synchronization between the devices (GPU and host). Likely, what you are measuring is - to a large extent - this overhead, as you are incrementally allocating pinned memory.

\n
\n

Shouldn't pinning memory make data transfer asynchronous and therefore be faster? If that's not the case, why would we do pin memory?

\n
\n

It can, but not if you halt/join to synchronize before each transfer in order to allocate the memory.

\n

What pinning memory ultimately does is that it prevents the memory block from being swapped out by the OS; it is guaranteed to remain in RAM. This guarantee enables the GPU's DMA to operate on that block without going through the CPU (which has to check, among other things, if the data needs to be swapped back in). Thus, the CPU is free to do other stuff in the meantime.

\n

It is not a perfect analogy, but you could think about pinned memory as shared memory between the GPU and the host. Both parties can operate on it without informing the other party; a bit like multiple threads in a process. This can be much faster if you implement non-blocking code. However, it can also be much slower if parties end up joining all the time.

\n

Contrast this to the non-pinned approach, where the CPU loads the data from RAM (swapped in if necessary) and then sends it to the GPU. Not only is it slower (needs to go through the northbridge twice), but it also keeps the thread (and hence one CPU core) busy. Python also has the infamous GIL, so it could be that your entire application is waiting for that synchronous I/O.

\n

If you want to use pinned memory to shuffle batches of data into the GPU, then one way to do it is to use pinned memory as a (circular) buffer. The CPU can load the data from disk, apply preprocessing, and place the batch into the buffer. The GPU can then fetch batches from the buffer in its own time and do the inference. If the implementation is done well, then the GPU will not idle more than necessary, and there is no more need for synchronization between the host and the GPU.

\n
\n

And if I do want to pin a small tensor, why don't I directly move the whole tensor into GPU memory in advance?

\n
\n

If you don't need to access the tensor from the CPU and it fits onto the GPU, then there is indeed no need to put it into pinned memory.

\n

In your example, you open a memory-mapped numpy array (memmap) and then ask to transfer it to pinned memory. A memory-mapped file works very similarly to paged memory: data that no longer fits in RAM is flushed to disk, and loaded back in when it is accessed again.

\n

This "swapping" cannot happen for pinned memory, because we need to guarantee that the entire block resides in RAM at all times. Hence, we first need to load the entire array into host memory - a contiguous block of 68 GB - likely creating a copy of the array in the process so as not to destroy the memmap object, and then we need to pin that memory block, telling the host to forfeit 68 GB of managed physical memory to our application. Either of these two steps can be denied by the OS and raise an OutOfMemory error.

\n

This is pretty much what you are seeing, as you fail in the THCCachingHostAllocator.cpp.

\n", + "system": "" + }, + { + "instruction": "Does pytorch apply softmax automatically in nn.Linear", + "input": "", + "output": "

Latching on to what @jodag was already saying in his comment, and extending it a bit to form a full answer:

\n\n

No, PyTorch does not automatically apply softmax, and you can at any point apply torch.nn.Softmax() as you want. But, softmax has some issues with numerical stability, which we want to avoid as much as we can. One solution is to use log-softmax, but this tends to be slower than a direct computation.

\n\n

Especially when we are using negative log likelihood as a loss function (in PyTorch, this is torch.nn.NLLLoss), we can utilize the fact that the derivative of (log-)softmax + NLL is actually mathematically quite nice and simple, which is why it makes sense to combine both into a single function/element. The result is then torch.nn.CrossEntropyLoss. Again, note that this only applies directly to the last layer of your network; any other computation is not affected by any of this.

\n", + "system": "" + }, + { + "instruction": "Is there a pytorch method to check the number of cpus?", + "input": "", + "output": "
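To make the combination concrete, here is a small numeric sketch (pure Python, single sample; function names are illustrative) showing that cross-entropy is just the negative log-softmax value at the target index, with the usual max-subtraction trick for numerical stability:

```python
import math

def log_softmax(logits):
    m = max(logits)                          # subtract max for stability
    log_sum = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - log_sum for x in logits]

def cross_entropy(logits, target):
    # negative log likelihood of the log-probabilities at the target index
    return -log_softmax(logits)[target]
```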

Just use this:

\n\n
import os\n\nos.cpu_count()\n
\n", + "system": "" + }, + { + "instruction": "In Colaboratory, CUDA cannot be used for Pytorch", + "input": "", + "output": "

Click on Runtime and select Change runtime type.

\n

Now in Hardware Acceleration, select GPU and hit Save.

\n", + "system": "" + }, + { + "instruction": "Calling super's forward() method", + "input": "", + "output": "

TLDR;

\n

You can use super().forward(...) freely even with hooks and even with hooks registered in super() instance.

\n

Explanation

\n

As stated by this answer __call__ is here so the registered hooks (e.g. register_forward_hook) will be run.

\n

If you inherit and want to reuse base class's forward, e.g. this:

\n
import torch\n\n\nclass Parent(torch.nn.Module):\n    def forward(self, tensor):\n        return tensor + 1\n\n\nclass Child(Parent):\n    def forward(self, tensor):\n        return super(Child, self).forward(tensor) + 1\n\n\nmodule = Child()\n# Increment output by 1 so we should get `4`\nmodule.register_forward_hook(lambda module, input, output: output + 1)\nprint(module(torch.tensor(1))) # and it is 4 indeed\nprint(module.forward(torch.tensor(1))) # here it is 3 still\n
\n

You are perfectly fine if you call the __call__ method; calling forward directly won't run the hook (so you get 3, as above).

\n

It is unlikely you would want to call register_forward_hook on the super() instance, but let's consider such an example:

\n
def increment_by_one(module, input, output):\n    return output + 1\n\n\nclass Parent(torch.nn.Module):\n    def forward(self, tensor):\n        return tensor + 1\n\n\nclass Child(Parent):\n    def forward(self, tensor):\n        # Increment by `1` from Parent\n        super().register_forward_hook(increment_by_one)\n        return super().forward(tensor) + 1\n\n\nmodule = Child()\n# Increment output by 1 so we should get `5` in total\nmodule.register_forward_hook(increment_by_one)\nprint(module(torch.tensor(1)))  # and it is 5 indeed\nprint(module.forward(torch.tensor(1)))  # here is 3\n
\n

You are perfectly fine using super().forward(...) and even hooks will work correctly (and that is the main idea of using __call__ instead of forward).

\n

BTW, calling super().__call__(...) would cause an infinite recursion error.

\n", + "system": "" + }, + { + "instruction": "Running LSTM with multiple GPUs gets "Input and hidden tensors are not at the same device"", + "input": "", + "output": "

When you call .cuda() on a tensor, PyTorch moves it to the current GPU device by default (GPU-0). So, due to data parallelism, your data lives on one GPU while your model goes to another, which results in the runtime error you are facing.

\n\n

The correct way to implement data parallelism for recurrent neural networks is as follows:

\n\n
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence\n\nclass MyModule(nn.Module):\n    # ... __init__, other methods, etc.\n\n    # padded_input is of shape [B x T x *] (batch_first mode) and contains\n    # the sequences sorted by lengths\n    #   B is the batch size\n    #   T is max sequence length\n    def forward(self, padded_input, input_lengths):\n        total_length = padded_input.size(1)  # get the max sequence length\n        packed_input = pack_padded_sequence(padded_input, input_lengths,\n                                            batch_first=True)\n        packed_output, _ = self.my_lstm(packed_input)\n        output, _ = pad_packed_sequence(packed_output, batch_first=True,\n                                        total_length=total_length)\n        return output\n\nm = MyModule().cuda()\ndp_m = nn.DataParallel(m)\n
\n\n

You also need to set the CUDA_VISIBLE_DEVICES environment variable accordingly for a multi-GPU setup.

\n\n
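For instance (a shell sketch; the GPU IDs here are examples):

```shell
# Expose only GPUs 0 and 1 to the process; inside the process they
# are renumbered as cuda:0 and cuda:1.
export CUDA_VISIBLE_DEVICES=0,1
echo "$CUDA_VISIBLE_DEVICES"
```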

References:

\n\n\n", + "system": "" + }, + { + "instruction": "Convert PyTorch CUDA tensor to NumPy array", + "input": "", + "output": "

Use .detach() to convert from GPU / CUDA Tensor to numpy array:

\n
tensor.detach().cpu().numpy()\n
\n", + "system": "" + }, + { + "instruction": "Check if PyTorch tensors are equal within epsilon", + "input": "", + "output": "

At the time of writing, this is an undocumented function in the latest stable release (0.4.1), but the documentation is in the master (unstable) branch.

\n\n

torch.allclose() will return a boolean indicating whether all element-wise differences fall within a margin of error.

\n\n

Additionally, there's the undocumented isclose():

\n\n
>>> torch.isclose(torch.Tensor([1]), torch.Tensor([1.00000001]))\ntensor([1], dtype=torch.uint8)\n
\n", + "system": "" + }, + { + "instruction": "Pytorch 0.4.0: There are three ways to create tensors on CUDA device. Is there some difference between them?", + "input": "", + "output": "
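For completeness, a small sketch of how the tolerances interact (assuming a reasonably recent PyTorch where both functions exist); element-wise, allclose checks |a - b| <= atol + rtol * |b|:

```python
import torch

a = torch.tensor([1.0, 2.0])
b = torch.tensor([1.0 + 1e-9, 2.0])

assert torch.allclose(a, b)                  # within default rtol=1e-5, atol=1e-8
assert not torch.allclose(a, a + 1.0)        # clearly outside the tolerance
assert torch.allclose(a, a + 0.5, atol=1.0)  # loosened absolute tolerance
```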

All three methods worked for me.

\n\n

In 1 and 2, you create a tensor on CPU and then move it to GPU when you use .to(device) or .cuda(). They are the same here.

\n\n

However, when you use the .to(device) method, you can explicitly tell torch to move to a specific GPU by setting device=torch.device(\"cuda:<id>\"). With .cuda() you have to do .cuda(<id>) to move to a particular GPU.

\n\n
\n\n

Why do these two methods exist then?

\n\n

.to(device) was introduced in 0.4 because it is easier to declare a device variable at the top of the code as

\n\n

device = torch.device(\"cuda\") if torch.cuda.is_available() else torch.device(\"cpu\")

\n\n

and use .to(device) everywhere. This makes it quite easy to switch from CPU to GPU and vice-versa

\n\n

Before this, we had to use .cuda(), and your code would need an if check for cuda.is_available() everywhere, which made it cumbersome to switch between GPU and CPU.

\n\n
\n\n

The third method doesn't create a tensor on the CPU first; it copies the data directly to the GPU, which is more efficient.

\n", + "system": "" + }, + { + "instruction": "What does flatten_parameters() do?", + "input": "", + "output": "
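A minimal sketch of both patterns; the device variable makes the same code run on CPU-only machines as well:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# methods 1/2: create on CPU, then move
x = torch.ones(2, 3).to(device)

# method 3: allocate directly on the target device (no CPU staging copy)
y = torch.ones(2, 3, device=device)

assert x.device == y.device
```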

This may not be a full answer to your question, but if you take a look at flatten_parameters's source code, you will notice that it calls _cudnn_rnn_flatten_weight in

\n\n
...\nNoGradGuard no_grad;\ntorch::_cudnn_rnn_flatten_weight(...)\n...\n
\n\n

is the function that does the job. You will find that what it actually does is copying the model's weights into a vector<Tensor> (check the params_arr declaration) in:

\n\n
  // Slice off views into weight_buf\n  std::vector<Tensor> params_arr;\n  size_t params_stride0;\n  std::tie(params_arr, params_stride0) = get_parameters(handle, rnn, rnn_desc, x_desc, w_desc, weight_buf);\n\n  MatrixRef<Tensor> weight{weight_arr, static_cast<size_t>(weight_stride0)},\n                    params{params_arr, params_stride0};\n
\n\n

And the weights copying in

\n\n
  // Copy weights\n  _copyParams(weight, params);\n
\n\n

Also note that they update (or Reset as they explicitly say in docs) the original pointers of weights with the new pointers of params by doing an in-place operation .set_ (_ is their notation for the in-place operations) in orig_param.set_(new_param.view_as(orig_param));

\n\n
  // Update the storage\n  for (size_t i = 0; i < weight.size(0); i++) {\n    for (auto orig_param_it = weight[i].begin(), new_param_it = params[i].begin();\n         orig_param_it != weight[i].end() && new_param_it != params[i].end();\n         orig_param_it++, new_param_it++) {\n      auto orig_param = *orig_param_it, new_param = *new_param_it;\n      orig_param.set_(new_param.view_as(orig_param));\n    }\n  }\n
\n\n

And according to n2798 (draft of C++0x)

\n\n

\u00a9 ISO/IEC N3092

\n\n
\n

23.3.6 Class template vector

\n \n

A vector is a sequence container that supports random access iterators. In addition, it supports (amortized)constant time insert and erase operations at the end; insert and erase in the middle take linear time. Storage management is handled automatically, though hints can be given to improve efficiency. The elements of a vector are stored contiguously, meaning that if v is a vector <T, Allocator> where T is some type other than bool, then it obeys the identity&v[n] == &v[0] + n for all 0 <= n < v.size().

\n
\n\n
\n\n

In some situations

\n\n
\n

UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().

\n
\n\n

They explicitly advise people in code warnings to have a contiguous chunk of memory.

\n", + "system": "" + }, + { + "instruction": "Understanding Bilinear Layers", + "input": "", + "output": "

The operation done by nn.Bilinear is B(x1, x2) = x1*A*x2 + b (cf. the docs) with:

\n\n\n\n

If you take into account the (optional) bias, you should obtain the expected results.

\n\n
\n\n
import torch\nimport torch.nn as nn\n\ndef manual_bilinear(x1, x2, A, b):\n    return torch.mm(x1, torch.mm(A, x2)) + b\n\nx_ones = torch.ones(2)\nx_zeros = torch.zeros(2)\n\n# ---------------------------\n# With Bias:\n\nB = nn.Bilinear(2, 2, 1)\nA = B.weight\nprint(B.bias)\n# > tensor([-0.6748], requires_grad=True)\nb = B.bias\n\nprint(B(x_ones, x_zeros))\n# > tensor([-0.6748], grad_fn=<ThAddBackward>)\nprint(manual_bilinear(x_ones.view(1, 2), x_zeros.view(2, 1), A.squeeze(), b))\n# > tensor([[-0.6748]], grad_fn=<ThAddBackward>)\n\nprint(B(x_ones, x_ones))\n# > tensor([-1.7684], grad_fn=<ThAddBackward>)\nprint(manual_bilinear(x_ones.view(1, 2), x_ones.view(2, 1), A.squeeze(), b))\n# > tensor([[-1.7684]], grad_fn=<ThAddBackward>)\n\n# ---------------------------\n# Without Bias:\n\nB = nn.Bilinear(2, 2, 1, bias=False)\nA = B.weight\nprint(B.bias)\n# None\nb = torch.zeros(1)\n\nprint(B(x_ones, x_zeros))\n# > tensor([0.], grad_fn=<ThAddBackward>)\nprint(manual_bilinear(x_ones.view(1, 2), x_zeros.view(2, 1), A.squeeze(), b))\n# > tensor([0.], grad_fn=<ThAddBackward>)\n\nprint(B(x_ones, x_ones))\n# > tensor([-0.7897], grad_fn=<ThAddBackward>)\nprint(manual_bilinear(x_ones.view(1, 2), x_ones.view(2, 1), A.squeeze(), b))\n# > tensor([[-0.7897]], grad_fn=<ThAddBackward>)\n
\n", + "system": "" + }, + { + "instruction": "Is there any pytorch function can combine the specific continuous dimensions of tensor into one?", + "input": "", + "output": "

There is a variant of flatten that takes start_dim and end_dim parameters. You can call it in the same way as your magic_combine (except that end_dim is inclusive).

\n
a = torch.zeros(1, 2, 3, 4, 5, 6)  \nb = a.flatten(2, 4) # combine dimension 2, 3, 4 \nprint(b.size()) # should be (1, 2, 60, 6)\n
\n

https://pytorch.org/docs/stable/generated/torch.flatten.html

\n

There is also a corresponding unflatten, in which you can specify a dimension to unflatten and a shape to unflatten it to.

\n", + "system": "" + }, + { + "instruction": "How are PyTorch's tensors implemented?", + "input": "", + "output": "
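Continuing the example above, a sketch of the round trip (assuming a PyTorch version that provides Tensor.unflatten):

```python
import torch

a = torch.zeros(1, 2, 3, 4, 5, 6)
b = a.flatten(2, 4)                 # combine dims 2, 3, 4 -> size 60
c = b.unflatten(2, (3, 4, 5))       # split dim 2 back into (3, 4, 5)

assert b.shape == (1, 2, 60, 6)
assert c.shape == a.shape
```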

Contiguous array

\n\n

The commonly used way to store such data is in a single array that is laid out as a single, contiguous block within memory. More concretely, a 3x3x3 tensor would be stored simply as a single array of 27 values, one after the other.

\n\n

The only place where the dimensions are used is to calculate the mapping between the (many) coordinates and the offset within that array. For example, to fetch the item [3, 1, 1] you would need to know if it is a 3x3x3 matrix, a 9x3x1 matrix, or a 27x1x1 matrix - in all cases the \"storage\" would be 27 items long, but the interpretation of \"coordinates\" would be different. If you use zero-based indexing, the calculation is trivial, but you need to know the length of each dimension.

\n\n
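The coordinate-to-offset mapping described above can be sketched as follows (row-major / C order, zero-based indexing; function name hypothetical):

```python
# Map an N-dimensional coordinate to an offset in the flat storage,
# assuming row-major layout: the last dimension varies fastest.
def flat_offset(coords, shape):
    offset, stride = 0, 1
    for c, s in zip(reversed(coords), reversed(shape)):
        offset += c * stride
        stride *= s
    return offset
```

Note how the same 27-item storage yields different offsets for the same coordinate depending on the declared shape, which is exactly the "interpretation" point made above.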

This does mean that resizing and similar operations may require copying the whole array, but that's ok, you trade off the performance of those (rare) operations to gain performance for the much more common operations, e.g. sequential reads.

\n", + "system": "" + }, + { + "instruction": "PyTorch's dataloader "too many open files" error when no files should be open", + "input": "", + "output": "

I know how to fix the error, but I don't have a complete explanation for why it happens.

\n\n

First, the solution: you need to make sure that the image data is stored as numpy.arrays; when you call json.loads, it loads them as Python lists of floats. This causes the torch.utils.data.DataLoader to individually transform each float in the list into a torch.DoubleTensor.

\n\n

Have a look at default_collate in torch.utils.data.DataLoader - your __getitem__ returns a dict which is a mapping, so default_collate gets called again on each element of the dict. The first couple are ints, but then you get to the image data which is a list, i.e. a collections.Sequence - this is where things get funky as default_collate is called on each element of the list. This is clearly not what you intended. I don't know what the assumption in torch is about the contents of a list versus a numpy.array, but given the error it would appear that that assumption is being violated.

\n\n

The fix is pretty trivial, just make sure the two image bands are numpy.arrays, for instance in __init__

\n\n
def __init__(self,data,transform=None):\n    self.data=[]\n    for d in data:\n        d[self.BAND1] = np.asarray(d[self.BAND1])\n        d[self.BAND2] = np.asarray(d[self.BAND2])\n        self.data.append(d)\n    self.transform=transform\n
\n\n

or after you load the json, wherever - it doesn't really matter where you do it, as long as you do it.

\n\n
\n\n

Why does the above results in too many open files?

\n\n

I don't know for sure, but as the comments pointed out, it likely has to do with interprocess communication and lock files on the two queues data is taken from and added to.

\n\n

Footnote: the train.json was not available for download from Kaggle due to the competition still being open (??). I made a dummy json file that should have the same structure and tested the fix on that dummy file.

\n", + "system": "" + }, + { + "instruction": "Custom loss function in PyTorch", + "input": "", + "output": "

Let me have a go.

\n\n
    \n
  1. This depends on what you mean by \"non-differentiable\". The first definition that makes sense here is that PyTorch doesn't know how to compute gradients. If you try to compute gradients nevertheless, this will raise an error. The two possible scenarios are:

    \n\n

    a) You're using a custom PyTorch operation for which gradients have not been implemented, e.g. torch.svd(). In that case you will get a TypeError:

    \n\n
    import torch\nfrom torch.autograd import Function\nfrom torch.autograd import Variable\n\nA = Variable(torch.randn(10,10), requires_grad=True)\nu, s, v = torch.svd(A) # raises TypeError\n
    \n\n

    b) You have implemented your own operation, but did not define backward(). In this case, you will get a NotImplementedError:

    \n\n
    class my_function(Function): # forgot to define backward()\n\n    def forward(self, x):\n        return 2 * x\n\nA = Variable(torch.randn(10,10))\nB = my_function()(A)\nC = torch.sum(B)\nC.backward() # will raise NotImplementedError\n
    \n\n

The second definition that makes sense is \"mathematically non-differentiable\". Clearly, an operation which is mathematically not differentiable should either not have a backward() method implemented, or should return a sensible sub-gradient. Consider for example torch.abs() whose backward() method returns the subgradient 0 at 0:

    \n\n
    A = Variable(torch.Tensor([-1,0,1]),requires_grad=True)\nB = torch.abs(A)\nB.backward(torch.Tensor([1,1,1]))\nA.grad.data\n
    \n\n

For these cases, you should refer to the PyTorch documentation and dig out the backward() method of the respective operation.

  2. It doesn't matter. The use of requires_grad is to avoid unnecessary computations of gradients for subgraphs. If there\u2019s a single input to an operation that requires gradient, its output will also require gradient. Conversely, only if all inputs don\u2019t require gradient will the output not require it either. Backward computation is never performed in subgraphs where no Variable requires gradients.

    \n\n

    Since there are most likely some Variables (for example, parameters of a subclass of nn.Module()), your loss Variable will also require gradients automatically. However, note that because of how requires_grad works (see above), you can only change requires_grad for leaf variables of your graph anyway.

  3. All the custom PyTorch loss functions are subclasses of _Loss, which is a subclass of nn.Module. See here. If you'd like to stick to this convention, you should subclass _Loss when defining your custom loss function. Apart from consistency, one advantage is that your subclass will raise an AssertionError if you haven't marked your target variables as volatile or requires_grad = False. Another advantage is that you can nest your loss function in nn.Sequential(), because it's an nn.Module. I would recommend this approach for these reasons.
\n", + "system": "" + }, + { + "instruction": "What is recommended number of threads for pytorch based on available CPU cores?", + "input": "", + "output": "

PyTorch typically uses the number of physical CPU cores as the default number of threads. This means:\ntorch.get_num_threads() and torch.get_num_interop_threads() typically return the number of physical CPU cores.

\n\n

For example:

\n
import torch\n\n# Get current number of threads\nnum_threads = torch.get_num_threads()\nprint(f"Current number of threads: {num_threads}")\n\n# Set custom number of threads (e.g., equal to physical cores)\ntorch.set_num_threads(num_threads)\ntorch.set_num_interop_threads(num_threads)\n\n# Check new settings\nprint(f"New number of threads: {torch.get_num_threads()}")\nprint(f"New number of inter-op threads: {torch.get_num_interop_threads()}")\n
\n", + "system": "" + }, + { + "instruction": "Python: BERT Error - Some weights of the model checkpoint at were not used when initializing BertModel", + "input": "", + "output": "

As R. Marolahy suggests, if you don't want to see this every time (I know I don't), add the following:

\n
from transformers import logging\nlogging.set_verbosity_error()\n
\n", + "system": "" + }, + { + "instruction": "Why embed dimemsion must be divisible by num of heads in MultiheadAttention?", + "input": "", + "output": "

From what I understood, it is a simplification added to keep the implementation simple. Theoretically, we could implement the model as you proposed (similar to the original paper).\nIn the PyTorch documentation, it is briefly mentioned:

\n
Note that `embed_dim` will be split across `num_heads` (i.e. each head will have dimension `embed_dim` // `num_heads`)\n
\n

Also, if you look at the PyTorch implementation, you can see it is a bit different (optimised, in my view) compared to the originally proposed model. For example, they use MatMul instead of Linear, and the Concat layer is ignored. Refer to the diagram below, which shows the first encoder (with batch size 32, 10 words, 512 features).

\n

\"enter

\n

P.S.:\nIf you need to see the model params (as in the image above), this is the code I used.

\n
import torch\ntransformer_model = torch.nn.Transformer(d_model=512, nhead=8, num_encoder_layers=1,num_decoder_layers=1,dim_feedforward=11)  # change params as necessary\ntgt = torch.rand((20, 32, 512))\nsrc = torch.rand((11, 32, 512))\ntorch.onnx.export(transformer_model, (src, tgt), "transformer_model.onnx")\n
\n", + "system": "" + }, + { + "instruction": "With BERT Text Classification, ValueError: too many dimensions 'str' error occuring", + "input": "", + "output": "

REASON

\n

The issue is that you are passing a list of strings (str) to torch.tensor(), which only accepts lists of numerical values (integer, float, etc.).

\n

SOLUTION

\n

So I suggest you convert your string labels into integer values before passing them to torch.tensor().

\n

IMPLEMENTATION

\n

The following code might help:

\n
# a temporary list to store the string labels\ntemp_list = train_labels.tolist()\n\n# dictionary that maps integer to its string value \nlabel_dict = {}\n\n# list to store integer labels \nint_labels = []\n\nfor i in range(len(temp_list)):\n    label_dict[i] = temp_list[i]\n    int_labels.append(i)\n
\n

Now pass this int_labels to the torch.tensor and use it as label.

\n
train_y = torch.tensor(int_labels)\n
\n

Whenever you want to see the string label for a given integer, just use the label_dict dictionary.

\n", + "system": "" + }, + { + "instruction": "load pytorch dataloader into GPU", + "input": "", + "output": "

You can move your dataset's data to the GPU in advance:

\n
train_dataset.train_data = train_dataset.train_data.to(CTX)  # train_dataset.train_data is a Tensor (input data); .to() is not in-place, so assign the result\ntrain_dataset.train_labels = train_dataset.train_labels.to(CTX)\n
\n

for example of minst

\n
import torch\nfrom torch.utils.data import DataLoader\nfrom torchvision import datasets\nfrom torchvision import transforms\nbatch_size = 64\ntransform = transforms.Compose([\n    transforms.ToTensor(),\n    transforms.Normalize((0.1307,), (0.3081,))\n])\ntrain_data = datasets.MNIST(\n    root='./dataset/minst/',\n    train=True,\n    download=False,\n    transform=transform\n)\ntrain_loader = DataLoader(\n    dataset=train_data,\n    shuffle=True,\n    batch_size=batch_size\n)\ntrain_data.train_data = train_data.train_data.to(torch.device("cuda:0"))  # put data into GPU entirely\ntrain_data.train_labels = train_data.train_labels.to(torch.device("cuda:0"))\n# edit note for newer versions: use train_data.data and train_data.targets instead\n
\n

I found this solution by using the debugger...

\n", + "system": "" + }, + { + "instruction": "pytorch DataLoader extremely slow first epoch", + "input": "", + "output": "

Slavka,

\n

TLDR: This is a caching effect.

\n

I did not download the whole GLR2020 dataset but I was able to observe this effect on the image dataset that I had locally (80000 jpg images of approx 400x400 size).

\n

To find the reasons for the difference in performance I tried the following:

\n
    \n
  1. reducing the augmentation to just resizing
  2. \n
  3. testing just ImgDataset.__getitem__() function
  4. \n
  5. ImgDataset.__getitem__() without augmentation
  6. \n
  7. just loading the raw jpg image and passing it from the dataset without even numpy conversion.
  8. \n
\n

It turns out that the difference comes from the image loading time. Python (or the OS itself) implements some kind of caching, which is observed when loading an image multiple times in the following test.

\n
for i in range(5):    \n    t0 = time.time()\n    data = cv2.imread(filename)\n    print (time.time() - t0)\n    \n0.03395271301269531\n0.0010004043579101562\n0.0010004043579101562\n0.0010008811950683594\n0.001001119613647461\n
\n

the same is observed when just reading from the file into a variable

\n
for i in range(5):    \n    t0 = time.time()\n    with open(filename, mode='rb') as file: \n        data = file.read()\n    print (time.time() - t0)\n\n0.036234378814697266\n0.0028831958770751953\n0.0020024776458740234\n0.0031833648681640625\n0.0028734207153320312\n
\n

One way to reduce the loading time is to keep the data on a very fast local SSD. If size allows, try loading part of the dataset into RAM and writing a custom dataloader to feed from there...

\n

BTW, based on my findings this effect should be reproducible with any dataset - check whether different drives or caching explain the timings you observed.

\n", + "system": "" + }, + { + "instruction": "How to convert a tensor of booleans to ints in PyTorch?", + "input": "", + "output": "

The solution is just a single line of code.

\n

To convert a tensor t with values [True, False, True, False] to an integer tensor, just do the following.

\n
t = torch.tensor([True, False, True, False])\nt_integer = t.long()\nprint(t_integer)\n# tensor([1, 0, 1, 0])\n
\n", + "system": "" + }, + { + "instruction": "Pytorch: AttributeError: 'function' object has no attribute 'copy'", + "input": "", + "output": "
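Besides .long(), a few equivalent casts are available depending on the integer width you need downstream; a small sketch:

```python
import torch

t = torch.tensor([True, False, True, False])

# equivalent conversions; pick the dtype you need downstream
as_long = t.long()             # int64, what most indexing/loss APIs expect
as_int = t.int()               # int32
as_uint8 = t.to(torch.uint8)   # compact storage

print(as_long)  # tensor([1, 0, 1, 0])
```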

I am guessing this is what you did by mistake.\nYou saved the function

\n\n

torch.save(model.state_dict, 'model_state.pth')

\n\n

instead of the state_dict()

\n\n

torch.save(model.state_dict(), 'model_state.pth')

\n\n

Otherwise, everything should work as expected. (I tested the following code on Colab)

\n\n

Replace model.state_dict() with model.state_dict to reproduce error

\n\n
import copy\nmodel = TheModelClass()\ntorch.save(model.state_dict(), 'model_state.pth')\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\nmodel.load_state_dict(copy.deepcopy(torch.load(\"model_state.pth\",device)))\n
\n", + "system": "" + }, + { + "instruction": "How to index/slice the last dimension of a PyTorch tensor/numpy array of unknown dimensions", + "input": "", + "output": "

PyTorch supports NumPy-like indexing, so you can use an Ellipsis (...)

\n
>>> z[..., -1:]\n
\n

Example:

\n
>>> x                     # (2,2) tensor\ntensor([[0.5385, 0.9280],\n        [0.8937, 0.0423]])\n>>> x[..., -1:]\ntensor([[0.9280],\n        [0.0423]])\n
\n
>>> y                     # (2,2,2) tensor\ntensor([[[0.5610, 0.8542],\n         [0.2902, 0.2388]],\n\n        [[0.2440, 0.1063],\n         [0.7201, 0.1010]]])\n>>> y[..., -1:]\ntensor([[[0.8542],\n         [0.2388]],\n\n        [[0.1063],\n         [0.1010]]])\n
\n\n", + "system": "" + }, + { + "instruction": "Converting python list to pytorch tensor", + "input": "", + "output": "

You can directly convert a Python list to a PyTorch tensor and cast it to the dtype you need. For example,

\n
import torch\n\na_list = [3,23,53,32,53] \na_tensor = torch.Tensor(a_list)\nprint(a_tensor.int())\n\n>>> tensor([3,23,53,32,53])\n
\n", + "system": "" + }, + { + "instruction": "CNN Pytorch Error : Input type (torch.cuda.ByteTensor) and weight type (torch.cuda.FloatTensor) should be the same", + "input": "", + "output": "

Cast your input x_batch to float. Use x_batch = x_batch.float() before you pass it through your model.

\n", + "system": "" + }, + { + "instruction": "Multi dimensional inputs in pytorch Linear method?", + "input": "", + "output": "
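As a sketch of the fix (the tensor shapes and names here are illustrative, not from the question):

```python
import torch

# e.g. raw image bytes arrive as uint8 (a ByteTensor)
x_batch = torch.randint(0, 256, (4, 3, 32, 32), dtype=torch.uint8)

# cast to float32 so the input dtype matches the model's weights
x_batch = x_batch.float() / 255.0   # scaling to [0, 1] is typical for images

conv = torch.nn.Conv2d(3, 8, kernel_size=3)
out = conv(x_batch)
print(out.dtype)  # torch.float32
```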

Newer versions of PyTorch allow nn.Linear to accept an N-D input tensor; the only constraint is that the last dimension of the input tensor must equal in_features of the linear layer. The linear transformation is then applied to the last dimension of the tensor.
\nFor instance, if in_features=5 and out_features=10 and the input tensor x has dimensions 2-3-5, then the output tensor will have dimensions 2-3-10.

\n", + "system": "" + }, + { + "instruction": "PyTorch Binary Classification - same network structure, 'simpler' data, but worse performance?", + "input": "", + "output": "
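A quick sketch of the shape behaviour described above:

```python
import torch
import torch.nn as nn

# the layer is applied independently to every vector along the last dimension
linear = nn.Linear(in_features=5, out_features=10)

x = torch.randn(2, 3, 5)   # e.g. (batch, sequence, features)
y = linear(x)
print(y.shape)  # torch.Size([2, 3, 10])
```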

TL;DR

\n

Your input data is not normalized.

\n
    \n
  1. use x_data = (x_data - x_data.mean()) / x_data.std()
  2. \n
  3. increase the learning rate optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
  4. \n
\n

You'll get
\n\"enter

\n

convergence in only 1000 iterations.

\n

More details

\n

The key difference between the two examples you have is that the data x in the first example is centered around (0, 0) and has very low variance.
\nOn the other hand, the data in the second example is centered around 92 and has relatively large variance.

\n

This initial bias in the data is not taken into account when you randomly initialize the weights which is done based on the assumption that the inputs are roughly normally distributed around zero.
\nIt is almost impossible for the optimization process to compensate for this gross deviation - thus the model gets stuck in a sub-optimal solution.

\n

Once you normalize the inputs, by subtracting the mean and dividing by the std, the optimization process becomes stable again and rapidly converges to a good solution.

\n

For more details about input normalization and weights initialization, you can read section 2.2 in He et al Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (ICCV 2015).

\n

What if I cannot normalize the data?

\n

If, for some reason, you cannot compute the mean and std of the data in advance, you can still use nn.BatchNorm1d to estimate and normalize the data as part of the training process. For example

\n
class Model(nn.Module):\n    def __init__(self, input_size, H1, output_size):\n        super().__init__()\n        self.bn = nn.BatchNorm1d(input_size)  # adding batchnorm\n        self.linear = nn.Linear(input_size, H1)\n        self.linear2 = nn.Linear(H1, output_size)\n    \n    def forward(self, x):\n        x = torch.sigmoid(self.linear(self.bn(x)))  # batchnorm the input x\n        x = torch.sigmoid(self.linear2(x))\n        return x\n
\n

This modification, without any change to the input data, yields similar convergence after only 1000 epochs:
\n\"enter

\n

A minor comment

\n

For numerical stability, it is better to use nn.BCEWithLogitsLoss instead of nn.BCELoss. To this end, you need to remove the torch.sigmoid from the forward() output; the sigmoid will be computed inside the loss.
\nSee, for example, this thread regarding the related sigmoid + cross entropy loss for binary predictions.

\n", + "system": "" + }, + { + "instruction": "Multivariate input LSTM in pytorch", + "input": "", + "output": "
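A minimal sketch of that change, assuming the same two-layer model from the question (sizes here are illustrative):

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self, input_size, H1, output_size):
        super().__init__()
        self.linear = nn.Linear(input_size, H1)
        self.linear2 = nn.Linear(H1, output_size)

    def forward(self, x):
        x = torch.sigmoid(self.linear(x))
        return self.linear2(x)   # raw logits: no sigmoid on the output

model = Model(2, 5, 1)
criterion = nn.BCEWithLogitsLoss()  # applies the sigmoid internally, numerically stable

x = torch.randn(8, 2)
y = torch.randint(0, 2, (8, 1)).float()
loss = criterion(model(x), y)

# at inference time, recover probabilities explicitly
probs = torch.sigmoid(model(x))
```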

I hope the problematic parts are commented enough to make sense:

\n

Data preparation

\n
import random\nimport numpy as np\nimport torch\n\n# multivariate data preparation\nfrom numpy import array\nfrom numpy import hstack\n \n# split a multivariate sequence into samples\ndef split_sequences(sequences, n_steps):\n    X, y = list(), list()\n    for i in range(len(sequences)):\n        # find the end of this pattern\n        end_ix = i + n_steps\n        # check if we are beyond the dataset\n        if end_ix > len(sequences):\n            break\n        # gather input and output parts of the pattern\n        seq_x, seq_y = sequences[i:end_ix, :-1], sequences[end_ix-1, -1]\n        X.append(seq_x)\n        y.append(seq_y)\n    return array(X), array(y)\n \n# define input sequence\nin_seq1 = array([x for x in range(0,100,10)])\nin_seq2 = array([x for x in range(5,105,10)])\nout_seq = array([in_seq1[i]+in_seq2[i] for i in range(len(in_seq1))])\n# convert to [rows, columns] structure\nin_seq1 = in_seq1.reshape((len(in_seq1), 1))\nin_seq2 = in_seq2.reshape((len(in_seq2), 1))\nout_seq = out_seq.reshape((len(out_seq), 1))\n# horizontally stack columns\ndataset = hstack((in_seq1, in_seq2, out_seq))\n
\n

Multivariate LSTM Network

\n
class MV_LSTM(torch.nn.Module):\n    def __init__(self,n_features,seq_length):\n        super(MV_LSTM, self).__init__()\n        self.n_features = n_features\n        self.seq_len = seq_length\n        self.n_hidden = 20 # number of hidden states\n        self.n_layers = 1 # number of LSTM layers (stacked)\n    \n        self.l_lstm = torch.nn.LSTM(input_size = n_features, \n                                 hidden_size = self.n_hidden,\n                                 num_layers = self.n_layers, \n                                 batch_first = True)\n        # according to pytorch docs LSTM output is \n        # (batch_size,seq_len, num_directions * hidden_size)\n        # when considering batch_first = True\n        self.l_linear = torch.nn.Linear(self.n_hidden*self.seq_len, 1)\n        \n    \n    def init_hidden(self, batch_size):\n        # even with batch_first = True this remains same as docs\n        hidden_state = torch.zeros(self.n_layers,batch_size,self.n_hidden)\n        cell_state = torch.zeros(self.n_layers,batch_size,self.n_hidden)\n        self.hidden = (hidden_state, cell_state)\n    \n    \n    def forward(self, x):        \n        batch_size, seq_len, _ = x.size()\n        \n        lstm_out, self.hidden = self.l_lstm(x,self.hidden)\n        # lstm_out(with batch_first = True) is \n        # (batch_size,seq_len,num_directions * hidden_size)\n        # for following linear layer we want to keep batch_size dimension and merge rest       \n        # .contiguous() -> solves tensor compatibility error\n        x = lstm_out.contiguous().view(batch_size,-1)\n        return self.l_linear(x)\n
\n

Initialization

\n
n_features = 2 # this is number of parallel inputs\nn_timesteps = 3 # this is number of timesteps\n\n# convert dataset into input/output\nX, y = split_sequences(dataset, n_timesteps)\nprint(X.shape, y.shape)\n\n# create NN\nmv_net = MV_LSTM(n_features,n_timesteps)\ncriterion = torch.nn.MSELoss() # reduction='sum' created huge loss value\noptimizer = torch.optim.Adam(mv_net.parameters(), lr=1e-1)\n\ntrain_episodes = 500\nbatch_size = 16\n
\n

Training

\n
mv_net.train()\nfor t in range(train_episodes):\n    for b in range(0,len(X),batch_size):\n        inpt = X[b:b+batch_size,:,:]\n        target = y[b:b+batch_size]    \n        \n        x_batch = torch.tensor(inpt,dtype=torch.float32)    \n        y_batch = torch.tensor(target,dtype=torch.float32)\n    \n        mv_net.init_hidden(x_batch.size(0))\n    #    lstm_out, _ = mv_net.l_lstm(x_batch,nnet.hidden)    \n    #    lstm_out.contiguous().view(x_batch.size(0),-1)\n        output = mv_net(x_batch) \n        loss = criterion(output.view(-1), y_batch)  \n        \n        loss.backward()\n        optimizer.step()        \n        optimizer.zero_grad() \n    print('step : ' , t , 'loss : ' , loss.item())\n
\n

Results

\n
step :  499 loss :  0.0010267728939652443 # probably overfitted due to 500 training episodes\n
\n", + "system": "" + }, + { + "instruction": "What does layout = torch.strided mean?", + "input": "", + "output": "

A stride is the number of steps (or jumps) needed to go from one element to the next, in a given dimension. In computer memory, the data is stored linearly in a contiguous block of memory. What we view is just a (re)presentation.

\n

Let's take an example tensor for understanding this:

\n
# a 2D tensor\nIn [62]: tensor = torch.arange(1, 16).reshape(3, 5) \n\nIn [63]: tensor  \nOut[63]: \ntensor([[ 1,  2,  3,  4,  5],\n        [ 6,  7,  8,  9, 10],\n        [11, 12, 13, 14, 15]])\n
\n

With this tensor in place, the strides are:

\n
# get the strides\nIn [64]: tensor.stride() \nOut[64]: (5, 1)\n
\n

What this resultant tuple (5, 1) says is: to move one step along dimension 0 (from one row to the next), we jump 5 elements in memory, and to move one step along dimension 1 (to the next column), we jump 1 element.

\n\n

The order (or index) of 5 & 1 in the tuple represents the dimension/axis. You can also pass the dimension, for which you want the stride, as an argument:

\n
# get stride for axis 0\nIn [65]: tensor.stride(0) \nOut[65]: 5\n\n# get stride for axis 1\nIn [66]: tensor.stride(1) \nOut[66]: 1\n
\n
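One way to see that strides are just a view over the same memory is to transpose the tensor: only the strides change, no data is copied. A small sketch:

```python
import torch

t = torch.arange(1, 16).reshape(3, 5)
print(t.stride())           # (5, 1)

# transposing reuses the same storage; only the strides are swapped
tt = t.t()
print(tt.stride())          # (1, 5)
print(tt.is_contiguous())   # False; .contiguous() would copy into row-major order

# both views share the same underlying memory
assert t.data_ptr() == tt.data_ptr()
```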
\n

With that understanding, we might have to ask why this extra parameter is needed when we create the tensors. The answer to that is efficiency. (How can we store/read/access the elements in the (sparse) tensor most efficiently?)

\n

With sparse tensors (tensors where most of the elements are just zeroes), we don't want to store all those values. We store only the non-zero values and their indices. With a desired shape, the rest of the values can then be filled with zeroes, yielding the desired sparse tensor.

\n
\n

For further reading on this, the following articles might be of help:

\n\n
\n

P.S: I guess there's a typo in the torch.layout documentation which says

\n
\n

Strides are a list of integers ...

\n
\n

The composite data type returned by tensor.stride() is a tuple, not a list.

\n", + "system": "" + }, + { + "instruction": "IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number", + "input": "", + "output": "

I found the solution in the github issues here

\n\n

Try to change

\n\n
i = order[0] # works for PyTorch 0.4.1.\n
\n\n

to

\n\n
i = order # works for PyTorch>=0.5.\n
\n", + "system": "" + }, + { + "instruction": "Conv1D with kernel_size=1 vs Linear layer", + "input": "", + "output": "

nn.Conv1d with a kernel size of 1 and nn.Linear give essentially the same results. The only differences are the initialization procedure and how the operations are applied (which has some effect on the speed). Note that using a linear layer should be faster as it is implemented as a simple matrix multiplication (+ adding a broadcasted bias vector)

\n

@RobinFrcd your answers are either different due to MaxPool1d or due to the different initialization procedure.

\n

Here are a few experiments to prove my claims:

\n
def count_parameters(model):\n    """Count the number of parameters in a model."""\n    return sum([p.numel() for p in model.parameters()])\n\nconv = torch.nn.Conv1d(8,32,1)\nprint(count_parameters(conv))\n# 288\n\nlinear = torch.nn.Linear(8,32)\nprint(count_parameters(linear))\n# 288\n\nprint(conv.weight.shape)\n# torch.Size([32, 8, 1])\nprint(linear.weight.shape)\n# torch.Size([32, 8])\n\n# use same initialization\nlinear.weight = torch.nn.Parameter(conv.weight.squeeze(2))\nlinear.bias = torch.nn.Parameter(conv.bias)\n\ntensor = torch.randn(128,256,8)\npermuted_tensor = tensor.permute(0,2,1).clone().contiguous()\n\nout_linear = linear(tensor)\nprint(out_linear.mean())\n# tensor(0.0067, grad_fn=<MeanBackward0>)\n\nout_conv = conv(permuted_tensor)\nprint(out_conv.mean())\n# tensor(0.0067, grad_fn=<MeanBackward0>)\n
\n

Speed test:

\n
%%timeit\n_ = linear(tensor)\n# 151 \u00b5s \u00b1 297 ns per loop\n\n%%timeit\n_ = conv(permuted_tensor)\n# 1.43 ms \u00b1 6.33 \u00b5s per loop\n
\n

As Hanchen's answer shows, the results can differ very slightly due to numerical precision.

\n", + "system": "" + }, + { + "instruction": "Finding non-intersection of two pytorch tensors", + "input": "", + "output": "

I came across the same problem but the proposed solutions were far too slow when using larger arrays. The following simple solution works on CPU and GPU and is significantly faster than the other proposed solutions:

\n\n
combined = torch.cat((t1, t2))\nuniques, counts = combined.unique(return_counts=True)\ndifference = uniques[counts == 1]\nintersection = uniques[counts > 1]\n
\n", + "system": "" + }, + { + "instruction": "Is it possible to freeze only certain embedding weights in the embedding layer in pytorch?", + "input": "", + "output": "
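Note that this counting trick assumes each input tensor has no internal duplicates; if it might, call .unique() on each tensor first. A quick illustration (difference here is the symmetric difference):

```python
import torch

t1 = torch.tensor([1, 2, 3, 4])
t2 = torch.tensor([3, 4, 5])

combined = torch.cat((t1, t2))
uniques, counts = combined.unique(return_counts=True)
difference = uniques[counts == 1]    # elements in exactly one tensor
intersection = uniques[counts > 1]   # elements in both

print(difference)    # tensor([1, 2, 5])
print(intersection)  # tensor([3, 4])
```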

1. Divide embeddings into two separate objects

\n

One approach would be to use two separate embeddings: one for the pretrained tokens, another for the tokens to be trained.

\n

The GloVe one should be frozen, while the one for which there is no pretrained representation would be taken from the trainable layer.

\n

If you arrange your data so that the tokens with pretrained representations use a smaller index range than the tokens without a GloVe representation, it can be done. Let's say your pretrained indices are in the range [0, 300], while those without representation are [301, 500]. I would go with something along those lines:

\n
import numpy as np\nimport torch\n\n\nclass YourNetwork(torch.nn.Module):\n    def __init__(self, glove_embeddings: np.array, how_many_tokens_not_present: int):\n        self.pretrained_embedding = torch.nn.Embedding.from_pretrained(glove_embeddings)\n        self.trainable_embedding = torch.nn.Embedding(\n            how_many_tokens_not_present, glove_embeddings.shape[1]\n        )\n        # Rest of your network setup\n\n    def forward(self, batch):\n        # Which tokens in batch do not have representation, should have indices BIGGER\n        # than the pretrained ones, adjust your data creating function accordingly\n        mask = batch > self.pretrained_embedding.num_embeddings\n\n        # You may want to optimize it, you could probably get away without copy, though\n        # I'm not currently sure how\n        pretrained_batch = batch.copy()\n        pretrained_batch[mask] = 0\n\n        embedded_batch = self.pretrained_embedding(pretrained_batch)\n\n        # Every token without representation has to be brought into appropriate range\n        batch -= self.pretrained_embedding.num_embeddings\n        # Zero out the ones which already have pretrained embedding\n        batch[~mask] = 0\n        non_pretrained_embedded_batch = self.trainable_embedding(batch)\n\n        # And finally change appropriate tokens from placeholder embedding created by\n        # pretrained into trainable embeddings.\n        embedded_batch[mask] = non_pretrained_embedded_batch[mask]\n\n        # Rest of your code\n        ...\n
\n

2. Zero gradients for specified tokens.

\n

This one is a bit tricky, but I think it's pretty concise and easy to implement. So, if you obtain the indices of tokens which got no GloVe representation, you can explicitly zero their gradient after backprop, so those rows will not get updated.

\n
import torch\n\nembedding = torch.nn.Embedding(10, 3)\nX = torch.LongTensor([[1, 2, 4, 5], [4, 3, 2, 9]])\n\nvalues = embedding(X)\nloss = values.mean()\n\n# Use whatever loss you want\nloss.backward()\n\n# Let's say those indices in your embedding are pretrained (have GloVe representation)\nindices = torch.LongTensor([2, 4, 5])\n\nprint("Before zeroing out gradient")\nprint(embedding.weight.grad)\n\nprint("After zeroing out gradient")\nembedding.weight.grad[indices] = 0\nprint(embedding.weight.grad)\n
\n

And the output of the second approach:

\n
Before zeroing out gradient\ntensor([[0.0000, 0.0000, 0.0000],\n        [0.0417, 0.0417, 0.0417],\n        [0.0833, 0.0833, 0.0833],\n        [0.0417, 0.0417, 0.0417],\n        [0.0833, 0.0833, 0.0833],\n        [0.0417, 0.0417, 0.0417],\n        [0.0000, 0.0000, 0.0000],\n        [0.0000, 0.0000, 0.0000],\n        [0.0000, 0.0000, 0.0000],\n        [0.0417, 0.0417, 0.0417]])\nAfter zeroing out gradient\ntensor([[0.0000, 0.0000, 0.0000],\n        [0.0417, 0.0417, 0.0417],\n        [0.0000, 0.0000, 0.0000],\n        [0.0417, 0.0417, 0.0417],\n        [0.0000, 0.0000, 0.0000],\n        [0.0000, 0.0000, 0.0000],\n        [0.0000, 0.0000, 0.0000],\n        [0.0000, 0.0000, 0.0000],\n        [0.0000, 0.0000, 0.0000],\n        [0.0417, 0.0417, 0.0417]])\n
\n", + "system": "" + }, + { + "instruction": "RuntimeError: Error(s) in loading state_dict for ResNet:", + "input": "", + "output": "

I was using PyTorch 0.4.1, but the Jupyter Notebook which I loaded used 0.4.0. So I added the strict=False argument to load_state_dict().

\n
model.load_state_dict(checkpoint['state_dict'], strict=False)\n
\n", + "system": "" + }, + { + "instruction": "Problem with missing and unexpected keys while loading my model in Pytorch", + "input": "", + "output": "
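Note that strict=False silently skips mismatched keys; load_state_dict returns a named tuple listing what was skipped, which is worth checking. A small sketch with a toy model:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 2))

# a checkpoint with one key the model does not have
state = {"0.weight": torch.zeros(2, 4),
         "0.bias": torch.zeros(2),
         "extra.weight": torch.ones(1)}

result = model.load_state_dict(state, strict=False)
print(result.missing_keys)     # []
print(result.unexpected_keys)  # ['extra.weight']
```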

So your Network is essentially the classifier part of AlexNet and you're looking to load pretrained AlexNet weights into it. The problem is that the keys in state_dict are \"fully qualified\", which means that if you look at your network as a tree of nested modules, a key is just a list of modules in each branch, joined with dots like grandparent.parent.child. You want to

\n\n
    \n
  1. Keep only the tensors with name starting with \"classifier.\"
  2. \n
  3. Remove the \"classifier.\" part of keys
  4. \n
\n\n

so try

\n\n
model = Network()\nloaded_dict = checkpoint['model_state_dict']\nprefix = 'classifier.'\nn_clip = len(prefix)\nadapted_dict = {k[n_clip:]: v for k, v in loaded_dict.items()\n                if k.startswith(prefix)}\nmodel.load_state_dict(adapted_dict)\n
\n", + "system": "" + }, + { + "instruction": "Size mismatch for fc.bias and fc.weight in PyTorch", + "input": "", + "output": "

Cause:

\n\n

You trained a model derived from resnet18 in this way:

\n\n
model_ft = models.resnet18(pretrained=True)\nnum_ftrs = model_ft.fc.in_features\nmodel_ft.fc = nn.Linear(num_ftrs, 4)\n
\n\n

That is, you changed the last nn.Linear layer to output a 4-dim prediction instead of the default 1000.
\nWhen you try to load the model for prediction, your code is:

\n\n
model = resnet18(pretrained=True)    \nmodel.load_state_dict(checkpoint)\n
\n\n

You did not apply the same change of the last nn.Linear layer to model therefore the checkpoint you are trying to load does not fit.

\n\n

Fix:

\n\n

(1) Apply the same change before loading the checkpoint:

\n\n
model = resnet18(pretrained=True)    \nnum_ftrs = model_ft.fc.in_features\nmodel_ft.fc = nn.Linear(num_ftrs, 4)  # make the change\nmodel.load_state_dict(checkpoint)  # load\n
\n\n

(2) Even better, use num_classes argument to construct resnet with the desired number of outputs to begin with:

\n\n
model = resnet18(num_classes=4)  # pretrained weights are unnecessary here; your checkpoint is loaded next\nmodel.load_state_dict(checkpoint)  # load\n
\n", + "system": "" + }, + { + "instruction": "RuntimeError: "exp" not implemented for 'torch.LongTensor'", + "input": "", + "output": "

I happened to follow this tutorial too.

\n\n

For me I just got the torch.arange to generate float type tensor

\n\n

from

\n\n
position = torch.arange(0, max_len).unsqueeze(1)\ndiv_term = torch.exp(torch.arange(0, d_model, 2) * -(math.log(10000.0) / d_model))\n
\n\n

to

\n\n
position = torch.arange(0., max_len).unsqueeze(1)\ndiv_term = torch.exp(torch.arange(0., d_model, 2) * -(math.log(10000.0) / d_model))\n
\n\n

Just a simple fix, but now it works for me. It is possible that torch's exp and sin previously supported LongTensor but no longer do (I am not very sure about it).

\n", + "system": "" + }, + { + "instruction": "Is there any way I can download the pre-trained models available in PyTorch to a specific path?", + "input": "", + "output": "

As, @dennlinger mentioned in his answer : torch.utils.model_zoo, is being internally called when you load a pre-trained model.

\n\n

More specifically, the method: torch.utils.model_zoo.load_url() is being called every time a pre-trained model is loaded. The documentation for the same, mentions:

\n\n
\n

The default value of model_dir is $TORCH_HOME/models where\n $TORCH_HOME defaults to ~/.torch.

\n \n

The default directory can be overridden with the $TORCH_HOME\n environment variable.

\n
\n\n

This can be done as follows:

\n\n
import torch \nimport torchvision\nimport os\n\n# Suppose you are trying to load pre-trained resnet model in directory- models\\resnet\n\nos.environ['TORCH_HOME'] = 'models\\\\resnet' #setting the environment variable\nresnet = torchvision.models.resnet18(pretrained=True)\n
\n\n

I came across the above solution by raising an issue in the PyTorch's GitHub repository:\nhttps://github.com/pytorch/vision/issues/616

\n\n

This led to an improvement in the documentation i.e. the solution mentioned above.

\n", + "system": "" + }, + { + "instruction": "PyTorch autograd -- grad can be implicitly created only for scalar outputs", + "input": "", + "output": "

I changed my basic_fun to the following, which resolved my problem:

\n\n
def basic_fun(x_cloned):\n    res = torch.FloatTensor([0])\n    for i in range(len(x)):\n        res += x_cloned[i] * x_cloned[i]\n    return res\n
\n\n

This version returns a scalar value.

\n", + "system": "" + }, + { + "instruction": "TypeError: tensor is not a torch image", + "input": "", + "output": "

The problem is with the order of the transforms. The ToTensor transform should come before the Normalize transform, since the latter expects a tensor, but the Resize transform returns an image. Correct code with the faulty lines changed:

\n\n
train_transforms = transforms.Compose([\n    transforms.Resize((224,224)), \n    transforms.ToTensor(), \n    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])\ntest_transforms = transforms.Compose([\n    transforms.Resize((224,224)), \n    transforms.ToTensor(), \n    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])\n
\n", + "system": "" + }, + { + "instruction": "If I'm not specifying to use CPU/GPU, which one is my script using?", + "input": "", + "output": "

PyTorch defaults to the CPU, unless you use the .cuda() methods on your models and the torch.cuda.XTensor variants of PyTorch's tensors.

\n", + "system": "" + }, + { + "instruction": "Pytorch What's the difference between define layer in __init__() and directly use in forward()?", + "input": "", + "output": "
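You can verify this directly; a small sketch of the usual device-agnostic pattern:

```python
import torch

# tensors and model parameters live on the CPU unless moved explicitly
x = torch.randn(2, 3)
model = torch.nn.Linear(3, 1)
print(x.device)                          # cpu
print(next(model.parameters()).device)   # cpu

# move both to the GPU only when one is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
x = x.to(device)
out = model(x)   # model and input are now on the same device
```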

Everything which contains weights which you want to be trained during the training process should be defined in your __init__ method.

\n\n

You don't need do define activation functions like softmax, ReLU or sigmoid in your __init__, you can just call them in forward.

\n\n

Dropout layers, for example, also don't need to be defined in __init__; they can just be called in forward too. However, defining them in your __init__ has the advantage that they can be switched off more easily during evaluation (by calling eval() on your model). You can see an example of both versions here.

\n\n

Hope this is clear. Just ask if you have any further questions.

\n", + "system": "" + }, + { + "instruction": "what is the default weight initializer for conv in pytorch?", + "input": "", + "output": "

Each pytorch layer implements the method reset_parameters which is called at the end of the layer initialization to initialize the weights.\nYou can find the implementation of the layers here.

\n\n

For the dense layer which in pytorch is called linear for example, weights are initialized uniformly

\n\n
stdv = 1. / math.sqrt(self.weight.size(1))\nself.weight.data.uniform_(-stdv, stdv)\n
\n\n

where self.weight.size(1) is the number of inputs. This is done to keep the variance of the distributions of each layer relatively similar at the beginning of training by normalizing it to one. You can read a more detailed explanation here.

\n\n

For the convolutional layer the initialization is basically the same. You just compute the number of inputs by multiplying the number of channels with the kernel size.

\n", + "system": "" + }, + { + "instruction": "pytorch error: multi-target not supported in CrossEntropyLoss()", + "input": "", + "output": "
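As a sanity check, the same bound can be verified for a convolutional layer. This is a sketch: the fan-in is channels × kernel_height × kernel_width, and while the exact default initializer has changed between PyTorch versions, it still keeps the weights within ±1/sqrt(fan_in):

```python
import math
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3)

# number of inputs feeding each output unit
fan_in = conv.in_channels * conv.kernel_size[0] * conv.kernel_size[1]  # 3*3*3 = 27
bound = 1.0 / math.sqrt(fan_in)

# the default uniform init keeps all weights within [-bound, bound]
assert conv.weight.abs().max().item() <= bound
print(fan_in, round(bound, 4))
```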

Ok. So I reproduced your problem and after some search and reading the API of CrossEntropyLoss(), I have found it's because you have a wrong label dimension.

\n\n

Offical docs of CrossEntropyLoss here. And you can see

\n\n
\n

Input: (N,C) where C = number of classes
\n Target: (N) where each value is 0\u2264targets[i]\u2264C\u22121

\n
\n\n

While here, in your criterion() function, you have a batchSize x 7 input and a batchSize x 1 label. The confusing point is that, say your batchSize is 10, a 10x1 tensor cannot be regarded as a size-10 tensor, which is what the loss function expects. You must explicitly do the size conversion.

\n\n

Solution:
\nAdd labels = labels.squeeze_() before you call loss = criterion(y_pred, labels) and do the same thing in your test code. The squeeze_() function removes size-1 dimensions in-place. So you now have a label of size batchSize.

\n", + "system": "" + }, + { + "instruction": "How to convert caffe prototxt to pytorch model?", + "input": "", + "output": "
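A quick sketch of the shapes involved (the numbers here are illustrative):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

y_pred = torch.randn(10, 7)              # batchSize x numClasses
labels = torch.randint(0, 7, (10, 1))    # batchSize x 1 -- wrong shape for the loss

labels = labels.squeeze_()               # in-place: now shape (10,)
loss = criterion(y_pred, labels)
print(labels.shape, loss.item())
```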

You can use one of the following libraries:

\n\n
\n

Usage

\n

Conversion

\n
python caffe2pth_convertor.py \\\n--prototxt=YOUT_PROTOTXT_PATH \\\n--caffemodel=YOUT_CAFFEMODEL_PATH \\\n--pthmodel=OUTPUT_PTHMODEL_PATH\n
\n

Use the model in Pytorch

\n
from caffe2pth.caffenet import *\n\nnet = CaffeNet(YOUT_PROTOTXT_PATH)\nnet.load_state_dict(torch.load(OUTPUT_PTHMODEL_PATH))\n
\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow: Hierarchical Softmax Implementation", + "input": "", + "output": "

Practically if your total number of categories is in the range of hundreds to thousands (less than 50K), you don't need to consider using hierarchical softmax, which is designed to run training faster for classifying into millions of categories (for example, the number of words in a vocabulary).

\n\n

In my experience (with Naive Bayesian and neural networks), utilizing the hierarchical structure at training time does not necessarily improve your classification quality.

\n\n

However, if you are interested to implement Hierarchical Softmax anyway, that's another story.

\n", + "system": "" + }, + { + "instruction": "FBGEMM load error trying to use PyTorch on Windows", + "input": "", + "output": "

Uninstalled all:

\n
pip uninstall torch torchvision torchaudio\n
\n

Then installed the one-step older version:

\n
pip install torch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0\n
\n

Now, it's working fine.

\n

Initially, I got this problem after the installation of torchvision, torch was working fine before that.

\n", + "system": "" + }, + { + "instruction": "CUDA Out of memory when there is plenty available", + "input": "", + "output": "

You need to empty the torch cache at some point in your code (before the error occurs):

\n
torch.cuda.empty_cache()\n
\n", + "system": "" + }, + { + "instruction": "How to install pytorch with CUDA support with pip in Visual Studio", + "input": "", + "output": "

I followed the main instructions for installing on pytorch's site but still failed to have success. For my setup this resulted in pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117.

\n

I had to add --upgrade --force-reinstall which finally fixed the problem.

\n", + "system": "" + }, + { + "instruction": "What is the difference between cuda.amp and model.half()?", + "input": "", + "output": "

If you convert the entire model to fp16, there is a chance that some of the activation functions and batchnorm layers will cause the fp16 weights to underflow, i.e., become zero. So it is always recommended to use autocast, which internally keeps the computation in fp32 for problematic layers.

\n

model.half() will in the end save the weights in fp16, whereas with autocast the weights are still stored in fp32. Training in pure fp16 will be faster than autocast, but with a higher chance of instability if you are not careful.\nWhile using autocast you also need to scale up the gradients during backpropagation (this is what a gradient scaler does)

\n
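The scaling mentioned above is what torch.cuda.amp.GradScaler does. A minimal sketch of one training step; the model, data, and hyperparameters here are made up for illustration, and AMP is simply disabled when no GPU is present:

```python
import torch
import torch.nn.functional as F

use_cuda = torch.cuda.is_available()
device = "cuda" if use_cuda else "cpu"

model = torch.nn.Linear(10, 2).to(device)          # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

inputs = torch.randn(4, 10, device=device)
targets = torch.randint(0, 2, (4,), device=device)

optimizer.zero_grad()
with torch.cuda.amp.autocast(enabled=use_cuda):    # fp16 where safe, fp32 elsewhere
    loss = F.cross_entropy(model(inputs), targets)
scaler.scale(loss).backward()   # scale the loss up so fp16 grads don't underflow
scaler.step(optimizer)          # unscales grads, skips the step on inf/nan
scaler.update()
```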

If the fp16 requirement is on the inference side, I recommend training with autocast and then converting the model to fp16 using ONNX and TensorRT.

\n", + "system": "" + }, + { + "instruction": "GPU is not available for Pytorch", + "input": "", + "output": "\n", + "system": "" + }, + { + "instruction": "How to select half precision (BFLOAT16 vs FLOAT16) for your trained model?", + "input": "", + "output": "

bfloat16 is generally easier to use, because it works as a drop-in replacement for float32. If your code doesn't create nan/inf numbers or turn a non-0 into a 0 with float32, then it shouldn't do it with bfloat16 either, roughly speaking. So, if your hardware supports it, I'd pick that.

\n

Check out AMP if you choose float16.

\n", + "system": "" + }, + { + "instruction": "Using weights in CrossEntropyLoss and BCELoss (PyTorch)", + "input": "", + "output": "

Another way you could accomplish your goal is to use reduction='none' when initializing the loss and then multiply the resulting tensor by your weights before computing the mean.\ne.g.

\n
loss = torch.nn.BCELoss(reduction='none')\nmodel = torch.sigmoid\n\nweights = torch.rand(10,1)\ninputs = torch.rand(10,1)\ntargets = torch.rand(10,1)\n\nintermediate_losses = loss(model(inputs), targets)\nfinal_loss = torch.mean(weights*intermediate_losses)\n
\n
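The same reduction='none' trick works for CrossEntropyLoss if you need per-sample (rather than per-class) weights; the sizes below are arbitrary:

```python
import torch

loss_fn = torch.nn.CrossEntropyLoss(reduction='none')

logits = torch.randn(10, 3)            # 10 samples, 3 classes
targets = torch.randint(0, 3, (10,))
weights = torch.rand(10)               # one weight per sample

per_sample = loss_fn(logits, targets)  # shape (10,), one loss per sample
final_loss = (weights * per_sample).mean()
```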

Of course for your scenario you still would need to calculate the weights tensor. But hopefully this helps!

\n", + "system": "" + }, + { + "instruction": "How to conda install CUDA enabled PyTorch in a Docker container?", + "input": "", + "output": "

I got it working after many, many tries. Posting the answer here in case it helps anyone.

\n

Basically, I installed pytorch and torchvision through pip (from within the conda environment) and the rest of the dependencies through conda as usual.

\n

This is how the final Dockerfile looks:

\n
# Use nvidia/cuda image\nFROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04\n\n# set bash as current shell\nRUN chsh -s /bin/bash\nSHELL ["/bin/bash", "-c"]\n\n# install anaconda\nRUN apt-get update\nRUN apt-get install -y wget bzip2 ca-certificates libglib2.0-0 libxext6 libsm6 libxrender1 git mercurial subversion && \\\n        apt-get clean\nRUN wget --quiet https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh -O ~/anaconda.sh && \\\n        /bin/bash ~/anaconda.sh -b -p /opt/conda && \\\n        rm ~/anaconda.sh && \\\n        ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \\\n        echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && \\\n        find /opt/conda/ -follow -type f -name '*.a' -delete && \\\n        find /opt/conda/ -follow -type f -name '*.js.map' -delete && \\\n        /opt/conda/bin/conda clean -afy\n\n# set path to conda\nENV PATH /opt/conda/bin:$PATH\n\n\n# setup conda virtual environment\nCOPY ./requirements.yaml /tmp/requirements.yaml\nRUN conda update conda \\\n    && conda env create --name camera-seg -f /tmp/requirements.yaml\n\nRUN echo "conda activate camera-seg" >> ~/.bashrc\nENV PATH /opt/conda/envs/camera-seg/bin:$PATH\nENV CONDA_DEFAULT_ENV $camera-seg\n
\n

And this is what the requirements.yaml looks like:

\n
name: camera-seg\nchannels:\n  - defaults\n  - conda-forge\ndependencies:\n  - python=3.6\n  - pip\n  - numpy\n  - pillow\n  - yaml\n  - pyyaml\n  - matplotlib\n  - jupyter\n  - notebook\n  - tensorboardx\n  - tensorboard\n  - protobuf\n  - tqdm\n  - pip:\n    - torch\n    - torchvision\n
\n

Then I build the container using the command docker build -t camera-seg . and PyTorch is now being able to recognize CUDA.

\n", + "system": "" + }, + { + "instruction": "What is the difference between an Embedding Layer with a bias immediately afterwards and a Linear Layer in PyTorch", + "input": "", + "output": "

Embedding

\n
\n

[...] what Embedding does differently than Linear without a bias.

\n
\n

Essentially everything. torch.nn.Embedding is a lookup table; it works the same as torch.Tensor but with a few twists (like possibility to use sparse embedding or default value at specified index).

\n

For example:

\n
import torch\n\nembedding = torch.nn.Embedding(3, 4)\n\nprint(embedding.weight)\n\nprint(embedding(torch.tensor([1])))\n
\n

Would output:

\n
Parameter containing:\ntensor([[ 0.1420, -0.1886,  0.6524,  0.3079],\n        [ 0.2620,  0.4661,  0.7936, -1.6946],\n        [ 0.0931,  0.3512,  0.3210, -0.5828]], requires_grad=True)\ntensor([[ 0.2620,  0.4661,  0.7936, -1.6946]], grad_fn=<EmbeddingBackward>)\n
\n

So we took the first row of the embedding. It does nothing more than that.

\n

Where is it used?

\n

Usually when we want to encode some meaning (like word2vec) for each row (e.g. words that are semantically close are close in Euclidean space) and possibly train them.

\n

Linear

\n

torch.nn.Linear (without bias) is also a torch.Tensor (the weight), but it performs an operation on it (and on the input), which is essentially:

\n
output = input.matmul(weight.t())\n
\n

every time you call the layer (see source code and functional definition of this layer).

\n
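A quick check that nn.Linear without bias really is just this matmul:

```python
import torch

lin = torch.nn.Linear(4, 3, bias=False)
x = torch.randn(2, 4)

# identical up to floating point: Linear(x) == x @ weight.T
print(torch.allclose(lin(x), x.matmul(lin.weight.t())))  # True
```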

Code snippet

\n

The layer in your code snippet does this:

\n\n

This is probably used to train both representations (of users and movies) for some recommender-like system.

\n

Other stuff

\n
\n

I know it does some faster computational version of a dot product\nwhere one of the matrices is a one-hot encoded matrix and the other is\nthe embedding matrix.

\n
\n

No, it doesn't. torch.nn.Embedding can be one hot encoded and might also be sparse, but depending on the algorithms (and whether those support sparsity) there might be performance boost or not.

\n", + "system": "" + }, + { + "instruction": "How can I invert a MelSpectrogram with torchaudio and get an audio waveform?", + "input": "", + "output": "

Just for history, full code:

\n
import torch\nimport torchaudio\nimport IPython\n\nwaveform, sample_rate = torchaudio.load("wavs/LJ030-0196.wav", normalize=True)\n\nn_fft = 256\n\nn_stft = int((n_fft//2) + 1)\ntransform = torchaudio.transforms.MelSpectrogram(sample_rate, n_fft=n_fft)\ninverse_transform = torchaudio.transforms.InverseMelScale(sample_rate=sample_rate, n_stft=n_stft)\ngrifflim_transform = torchaudio.transforms.GriffinLim(n_fft=n_fft)\n\nmel_specgram = transform(waveform)\ninverse_waveform = inverse_transform(mel_specgram)\npseudo_waveform = grifflim_transform(inverse_waveform)\n
\n

And

\n
IPython.display.Audio(waveform.numpy(), rate=sample_rate)\n
\n
IPython.display.Audio(pseudo_waveform.numpy(), rate=sample_rate)\n
\n", + "system": "" + }, + { + "instruction": "Pytorch RuntimeError: expected scalar type Float but found Byte", + "input": "", + "output": "

This line is the cause of your error:

\n
images = self.data.iloc[idx, 1:-1].values.astype(np.uint8).reshape((1, 16, 16))\n
\n

images are uint8 (byte) while the neural network needs inputs as floating point in order to calculate gradients (you can't calculate gradients for backprop using integers as those are not continuous and non-differentiable).

\n

You can use torchvision.transforms.functional.to_tensor to convert the image into float and into [0, 1] like this:

\n
import torchvision\n\nimages = torchvision.transforms.functional.to_tensor(\n    self.data.iloc[idx, 1:-1].values.astype(np.uint8).reshape((1, 16, 16))\n)\n
\n

or simply divide by 255 to get values into [0, 1].

\n", + "system": "" + }, + { + "instruction": "PyTorch multiprocessing error with Hogwild", + "input": "", + "output": "

If you modify your code to create new processes like this:

\n
processes = []\nctx = mp.get_context('spawn')\nfor rank in range(num_processes):\n    p = ctx.Process(target=train, args=(model,))\n
\n

it seems to run fine (rest of code same as yours, tested on pytorch 1.5.0 / python 3.6 / NVIDIA T4 GPU).

\n

I'm not completely sure what is carried over from the non-parallel run to the parallel run; I tried creating a completely new model for the two runs (with its own class), and/or deleting anything from the original, and/or making sure to delete any tensors and free up memory, and none of that made any difference.

\n

What did make a difference was making sure that .backward() never got called outside of mp.Process() before it was called by a function within mp.Process(). I think what may be carried over is an autograd thread; if the thread exists before multiprocessing with the default fork method it fails, if the thread is created after fork it seems to work okay, and if using spawn it also works okay.

\n

Btw: That's a really interesting question - thank you especially for digesting it to a minimal example!

\n", + "system": "" + }, + { + "instruction": "What is the param `last_epoch` on Pytorch Optimizer's Schedulers is for?", + "input": "", + "output": "

The last_epoch parameter is used when resuming training and you want to start the scheduler where it left off earlier. Its value is increased every time you call .step() of the scheduler. The default value of -1 indicates that the scheduler is started from the beginning.

\n

From the docs:

\n
\n

Since step() should be invoked after each batch instead of after each epoch, this number represents the total number of batches computed, not the total number of epochs computed. When last_epoch=-1, the schedule is started from the beginning.

\n
\n

For example,

\n
>>> import torch\n>>> cc = torch.nn.Conv2d(10,10,3)\n>>> myoptimizer = torch.optim.Adam(cc.parameters(), lr=0.1)\n>>> myscheduler = torch.optim.lr_scheduler.StepLR(myoptimizer,step_size=1, gamma=0.1)\n>>> myscheduler.last_epoch, myscheduler.get_lr()\n(0, [0.1])\n>>> myscheduler.step()\n>>> myscheduler.last_epoch, myscheduler.get_lr()\n(1, [0.001])\n>>> myscheduler.step()\n>>> myscheduler.last_epoch, myscheduler.get_lr()\n(2, [0.0001])\n
\n

Now, if you decide to stop the training in the middle, then resume it, you can provide the last_epoch parameter to the scheduler so that it starts from where it was left off, not from the beginning again.

\n
>>> mynewscheduler = torch.optim.lr_scheduler.StepLR(myoptimizer,step_size=1, gamma=0.1, last_epoch=myscheduler.last_epoch)\n>>> mynewscheduler.last_epoch, mynewscheduler.get_lr()\n(3, [1.0000000000000004e-05])\n
\n", + "system": "" + }, + { + "instruction": "Which PyTorch version is CUDA compute capability 3.0 compatible?", + "input": "", + "output": "

PyTorch 0.3.0 was the last version to support CUDA compute capability 3.0.

\n

Alternatively, you could compile PyTorch from source (which may still support this, I haven't verified).

\n", + "system": "" + }, + { + "instruction": "Understanding input shape to PyTorch conv1D?", + "input": "", + "output": "

In pytorch your input shape of [6, 512, 768] should actually be [6, 768, 512] where the feature length is represented by the channel dimension and sequence length is the length dimension. Then you can define your conv1d with in/out channels of 768 and 100 respectively to get an output of [6, 100, 511].

\n\n

Given an input of shape [6, 512, 768] you can convert it to the correct shape with Tensor.transpose.

\n\n
input = input.transpose(1, 2).contiguous()\n
\n\n

The .contiguous() ensures the memory of the tensor is stored contiguously which helps avoid potential issues during processing.

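Putting it together (kernel_size=2 is an assumption here, chosen so the output length comes out to 511 = 512 - 2 + 1):

```python
import torch
import torch.nn as nn

x = torch.randn(6, 512, 768)            # (batch, seq_len, features)
x = x.transpose(1, 2).contiguous()      # -> (6, 768, 512): channels = features
conv = nn.Conv1d(in_channels=768, out_channels=100, kernel_size=2)
out = conv(x)
print(out.shape)  # torch.Size([6, 100, 511])
```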
\n", + "system": "" + }, + { + "instruction": "What are saved in optimizer's state_dict? what "state","param_groups" stands for?", + "input": "", + "output": "

In contrast to model's state_dict, which saves learnable parameters, the optimizer's state_dict contains information about the optimizer\u2019s state (parameters to be optimized), as well as the hyperparameters used.

\n\n
\n\n

All optimizers in PyTorch need to inherit from the base class torch.optim.Optimizer. It requires two entries:

\n
- params (iterable): an iterable of torch.Tensors or dicts, specifying which tensors should be optimized.
- defaults (dict): a dict of default values for the optimization options (used when a parameter group doesn't override them).
\n

In addition to that, optimizers also support specifying per-parameter options.

\n\n
\n

To do this, instead of passing an iterable of Tensors, pass in an iterable of dicts. Each of them will define a separate parameter group, and should contain a params key, containing a list of parameters belonging to it.

\n
\n\n

Consider an example,

\n\n
optim.SGD([\n                {'params': model.base.parameters()},\n                {'params': model.classifier.parameters(), 'lr': 1e-3}\n            ], lr=1e-2, momentum=0.9)\n
\n\n

Here, we have provided the a) params, b) default hyperparameters: lr, momentum, and c) a parameter group. In this case, the model.base\u2019s parameters will use the default learning rate of 1e-2, model.classifier\u2019s parameters will use a learning rate of 1e-3, and a momentum of 0.9 will be used for all parameters.

\n\n
\n\n

The step (optimizer.step()) performs a single optimization step (parameter update), which changes the state of the optimizer.

\n\n
\n\n

Now, coming to optimizer's state_dict, it returns the state of the optimizer as a dict. It contains two entries:

\n
- state: a dict holding the optimizer's current state (e.g. momentum buffers); its content differs between optimizer classes.
- param_groups: a list containing all parameter groups, where each group is a dict with its hyperparameters (lr, momentum, etc.) and the ids of its params.
\n
\n\n
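A small example inspecting these entries (SGD with momentum, so the state gets populated after the first step):

```python
import torch

model = torch.nn.Linear(2, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

model(torch.randn(1, 2)).sum().backward()
opt.step()                       # creates momentum buffers in the state

sd = opt.state_dict()
print(list(sd.keys()))           # ['state', 'param_groups']
print(sd['param_groups'][0]['lr'], sd['param_groups'][0]['momentum'])  # 0.1 0.9
```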

Some of the hyperparameters are specific to the optimizer used, e.g. betas, eps, and amsgrad in Adam.

\n\n\n", + "system": "" + }, + { + "instruction": "Implementation of the Dense Synthesizer", + "input": "", + "output": "

Is the implementation and understanding of the dense synthesizer correct?

\n

Not exactly, linear1 = nn.Linear(d,d) according to the paper and not (d,l).\nOf course this does not work if X.shape = (l,d) according to matrix multiplication rules.

\n

This is because :

\n

\"enter\n\"enter

\n

So F is applied to each Xi in X for i in [1,l]

\n

The resulting matrix B is then passed to the softmax function and multiplied by G(x).\nSo you'd have to modify your code to sequentially process the input then use the returned matrix to compute Y.

\n

how is that different from a multi-layered perceptron that takes in two different inputs and makes uses of it at different point in the forward propagation?

\n

To understand, we need to put things into context, the idea of introducing attention mechanism was first described here in the context of Encoder - Decoder : https://arxiv.org/pdf/1409.0473.pdf

\n

The core idea is to allow the model to have control over how the context vector from the encoder is retrieved using a neural network instead of relying solely on the last encoded state :

\n

\"enter

\n

see this post for more detail.

\n

The Transformers introduced the idea of using "Multi-Head Attention" (see graph below) to reduce the computational burden and focus solely on the attention mechanism itself. post

\n

https://arxiv.org/pdf/1706.03762.pdf

\n

\"enter

\n

\"enter

\n

So where does the Dense synthesizer fits into all of that ?

\n

It simply replaces the Dot product (as illustrated in the first pictures in your post) by F(.). If you replace what's inside the softmax by F you get the equation for Y

\n

\"enter

\n

Conclusion

\n

This is an MLP but applied step wise to the input in the context of sequence processing.

\n
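As an illustration (not the paper's code), a single-head Dense Synthesizer sketch where B_i = W2 ReLU(W1 x_i) per position and G is a value projection; the names w1, w2, g and the fixed maximum length l are assumptions of this sketch:

```python
import torch
import torch.nn as nn

class DenseSynthesizerHead(nn.Module):
    """Y = softmax(B) G(X), with B computed position-wise from X alone
    (no query-key dot product). Single head, fixed max length l."""
    def __init__(self, d, l):
        super().__init__()
        self.w1 = nn.Linear(d, d)   # F's first layer is (d -> d), per the paper
        self.w2 = nn.Linear(d, l)   # maps each position to l attention logits
        self.g = nn.Linear(d, d)    # value projection G(X)

    def forward(self, x):                        # x: (batch, l, d)
        b = self.w2(torch.relu(self.w1(x)))      # (batch, l, l)
        attn = torch.softmax(b, dim=-1)
        return attn @ self.g(x)                  # (batch, l, d)
```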

Thank you

\n", + "system": "" + }, + { + "instruction": "PyTorch slice matrix with vector", + "input": "", + "output": "

You can specify the corresponding row index as:

\n\n
import torch\nx = torch.tensor([[1, 2, 3],\n                  [4, 5, 6],\n                  [7, 8, 9]])\n\ny = torch.tensor([0, 2, 1])\n\nx[range(x.shape[0]), y]  # tensor([1, 6, 8])\n
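An equivalent that avoids building the index range, using torch.gather:

```python
import torch

x = torch.tensor([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])
y = torch.tensor([0, 2, 1])

# gather along dim=1: picks x[i, y[i]] for each row i
out = x.gather(1, y.unsqueeze(1)).squeeze(1)
print(out)  # tensor([1, 6, 8])
```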
\n", + "system": "" + }, + { + "instruction": "Pytorch "NCCL error": unhandled system error, NCCL version 2.4.8"", + "input": "", + "output": "

A number of things can cause this issue, see for example 1, 2. Adding the line

\n
import os\nos.environ["NCCL_DEBUG"] = "INFO"\n
\n

to your script will log more specific debug info leading up to the error, giving you a more helpful error message to google.

\n", + "system": "" + }, + { + "instruction": "Pytorch - Concatenating Datasets before using Dataloader", + "input": "", + "output": "

If I got your question right, you have train and dev sets (and their corresponding loaders) as follows:

\n
train_set = CustomDataset(...)\ntrain_loader = DataLoader(dataset=train_set, ...)\ndev_set = CustomDataset(...)\ndev_loader = DataLoader(dataset=dev_set, ...)\n
\n

And you want to concatenate them in order to use train+dev as the training data, right? If so, you just simply call:

\n
train_dev_sets = torch.utils.data.ConcatDataset([train_set, dev_set])\ntrain_dev_loader = DataLoader(dataset=train_dev_sets, ...)\n
\n

The train_dev_loader is the loader containing data from both sets.

\n

Now, be sure your data has the same shapes and the same types, that is, the same number of features, or the same categories/numbers, etc.

\n", + "system": "" + }, + { + "instruction": "How do I add some Gaussian noise to a tensor in PyTorch?", + "input": "", + "output": "

The function torch.randn produces a tensor with elements drawn from a Gaussian distribution of zero mean and unit variance. Multiply by sqrt(0.1) to have the desired variance.

\n\n
x = torch.zeros(5, 10, 20, dtype=torch.float64)\nx = x + (0.1**0.5)*torch.randn(5, 10, 20)\n
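A quick sanity check of the scaling (this is a sample variance, so it is only approximately 0.1):

```python
import torch

noise = (0.1 ** 0.5) * torch.randn(1_000_000)
print(noise.mean())  # close to 0
print(noise.var())   # close to 0.1
```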
\n", + "system": "" + }, + { + "instruction": "How to resize a PyTorch tensor?", + "input": "", + "output": "

It seems like you are looking for interpolate (a function in nn.functional):

\n
import torch.nn.functional as nnf\n\nx = torch.rand(5, 1, 44, 44)\nout = nnf.interpolate(x, size=(224, 224), mode='bicubic', align_corners=False)\n
\n

If you really care about the accuracy of the interpolation, you should have a look at ResizeRight: a pytorch/numpy package that accurately deals with all sorts of "edge cases" when resizing images. This can have an effect when directly merging features of different scales: inaccurate interpolation may result in misalignments.

\n", + "system": "" + }, + { + "instruction": "PyTorch: Dataloader for time series task", + "input": "", + "output": "

I ended up writing custom dataset as well, though it's a bit different from the answer above:

\n
class TimeseriesDataset(torch.utils.data.Dataset):   \n    def __init__(self, X, y, seq_len=1):\n        self.X = X\n        self.y = y\n        self.seq_len = seq_len\n\n    def __len__(self):\n        return self.X.__len__() - (self.seq_len-1)\n\n    def __getitem__(self, index):\n        return (self.X[index:index+self.seq_len], self.y[index+self.seq_len-1])\n
\n

And the usage looks like that:

\n
train_dataset = TimeseriesDataset(X_lstm, y_lstm, seq_len=4)\ntrain_loader = torch.utils.data.DataLoader(train_dataset, batch_size = 3, shuffle = False)\n\nfor i, d in enumerate(train_loader):\n    print(i, d[0].shape, d[1].shape)\n\n>>>\n# shape: tuple((batch_size, seq_len, n_features), (batch_size))\n0 torch.Size([3, 4, 2]) torch.Size([3])\n
\n", + "system": "" + }, + { + "instruction": "How to get the filename of a sample from a DataLoader?", + "input": "", + "output": "

Well, it depends on how your Dataset is implemented. For instance, in the torchvision.datasets.MNIST(...) case, you cannot retrieve the filename simply because there is no such thing as the filename of a single sample (MNIST samples are loaded in a different way).

\n\n

As you did not show your Dataset implementation, I'll tell you how this could be done with the torchvision.datasets.ImageFolder(...) (or any torchvision.datasets.DatasetFolder(...)):

\n\n
f = open(\"test_y\", \"w\")\nwith torch.no_grad():\n    for i, (images, labels) in enumerate(test_loader, 0):\n        outputs = model(images)\n        _, predicted = torch.max(outputs.data, 1)\n        sample_fname, _ = test_loader.dataset.samples[i]\n        f.write(\"{}, {}\\n\".format(sample_fname, predicted.item()))\nf.close()\n
\n\n

You can see that the path of the file is retrieved during the __getitem__(self, index), specifically here.

\n\n

If you implemented your own Dataset (and perhaps would like to support shuffle and batch_size > 1), then I would return the sample_fname on the __getitem__(...) call and do something like this:

\n\n
for i, (images, labels, sample_fname) in enumerate(test_loader, 0):\n    # [...]\n
\n\n

This way you wouldn't need to care about shuffle. And if the batch_size is greater than 1, you would need to change the content of the loop for something more generic, e.g.:

\n\n
f = open(\"test_y\", \"w\")\nfor i, (images, labels, samples_fname) in enumerate(test_loader, 0):\n    outputs = model(images)\n    pred = torch.max(outputs, 1)[1]\n    f.write(\"\\n\".join([\n        \", \".join(x)\n        for x in zip(map(str, pred.cpu().tolist()), samples_fname)\n    ]) + \"\\n\")\nf.close()\n
\n", + "system": "" + }, + { + "instruction": "Pytorch custom activation functions?", + "input": "", + "output": "

There are four possibilities depending on what you are looking for. You will need to ask yourself two questions:

\n

Q1) Will your activation function have learnable parameters?

\n

If yes, you have no choice but to create your activation function as an nn.Module class because you need to store those weights.

\n

If no, you are free to simply create a normal function, or a class, depending on what is convenient for you.

\n

Q2) Can your activation function be expressed as a combination of existing PyTorch functions?

\n

If yes, you can simply write it as a combination of existing PyTorch function and won't need to create a backward function which defines the gradient.

\n

If no you will need to write the gradient by hand.

\n

Example 1: SiLU function

\n

The SiLU function f(x) = x * sigmoid(x) does not have any learned weights and can be written entirely with existing PyTorch functions, thus you can simply define it as a function:

\n
def silu(x):\n    return x * torch.sigmoid(x)\n
\n

and then simply use it as you would have torch.relu or any other activation function.

\n

Example 2: SiLU with learned slope

\n

In this case you have one learned parameter, the slope, thus you need to make a class of it.

\n
class LearnedSiLU(nn.Module):\n    def __init__(self, slope = 1):\n        super().__init__()\n        # wrap in Parameter first so the slope is registered and trained;\n        # slope * Parameter(...) would produce a plain (untracked) tensor\n        self.slope = torch.nn.Parameter(slope * torch.ones(1))\n\n    def forward(self, x):\n        return self.slope * x * torch.sigmoid(x)\n
\n

Example 3: with backward

\n

If you have something for which you need to create your own gradient function, you can look at this example: Pytorch: define custom function

\n", + "system": "" + }, + { + "instruction": "Why does dim=1 return row indices in torch.argmax?", + "input": "", + "output": "

It's time to correctly understand how the axis or dim argument works in PyTorch:

\n\n

\"tensor

\n\n
\n\n

The following example should make sense once you comprehend the above picture:

\n\n
\n
    |\n    v\n  dim-0  ---> -----> dim-1 ------> -----> --------> dim-1\n    |   [[-1.7739,  0.8073,  0.0472, -0.4084],\n    v    [ 0.6378,  0.6575, -1.2970, -0.0625],\n    |    [ 1.7970, -1.3463,  0.9011, -0.8704],\n    v    [ 1.5639,  0.7123,  0.0385,  1.8410]]\n    |\n    v\n
\n
\n\n
# argmax (indices where max values are present) along dimension-1\nIn [215]: torch.argmax(a, dim=1)\nOut[215]: tensor([1, 1, 0, 3])\n
\n\n
\n\n

Note: dim (short for 'dimension') is the torch equivalent of 'axis' in NumPy.

\n", + "system": "" + }, + { + "instruction": "torch.nn.sequential vs. combination of multiple torch.nn.linear", + "input": "", + "output": "

Yes, these two pieces of code create the same network.\nOne way to convince yourself that this is true is to save both models to ONNX.

\n\n
import torch.nn as nn\n\nclass TestModel(nn.Module):\n    def __init__(self, input_dim, hidden_dim, output_dim):\n        super(TestModel, self).__init__()\n        self.fc1 = nn.Linear(input_dim,hidden_dim)\n        self.fc2 = nn.Linear(hidden_dim,output_dim)\n\n    def forward(self, x):\n        x = nn.functional.relu(self.fc1(x))\n        x = nn.functional.softmax(self.fc2(x))\n        return x   \n\n\nclass TestModel2(nn.Module):\n    def __init__(self, input_dim, hidden_dim, output_dim):\n        super(TestModel2, self).__init__()\n        self.seq = nn.Sequential(\n                nn.Linear(input_dim, hidden_dim),\n                nn.ReLU(),\n                nn.Linear(hidden_dim, output_dim),\n                nn.Softmax()\n                )\n\n    def forward(self, x):\n        return self.seq(x)\n\nm = TestModel(1, 2, 3)\nm2 = TestModel2(1, 2, 3)\n\ntorch.onnx.export(m, torch.Tensor([0]), \"test.onnx\", verbose=True)\n/opt/anaconda3/envs/py36/bin/ipython:9: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.\ngraph(%0 : Float(1)\n      %1 : Float(2, 1)\n      %2 : Float(2)\n      %3 : Float(3, 2)\n      %4 : Float(3)) {\n  %5 : Float(1!, 2) = onnx::Transpose[perm=[1, 0]](%1), scope: TestModel/Linear[fc1]\n  %6 : Float(2) = onnx::MatMul(%0, %5), scope: TestModel/Linear[fc1]\n  %7 : Float(2) = onnx::Add(%6, %2), scope: TestModel/Linear[fc1]\n  %8 : Float(2) = onnx::Relu(%7), scope: TestModel\n  %9 : Float(2!, 3!) 
= onnx::Transpose[perm=[1, 0]](%3), scope: TestModel/Linear[fc2]\n  %10 : Float(3) = onnx::MatMul(%8, %9), scope: TestModel/Linear[fc2]\n  %11 : Float(3) = onnx::Add(%10, %4), scope: TestModel/Linear[fc2]\n  %12 : Float(3) = onnx::Softmax[axis=0](%11), scope: TestModel\n  return (%12);\n}\n\ntorch.onnx.export(m2, torch.Tensor([0]), \"test.onnx\", verbose=True)\n/opt/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py:475: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.\n  result = self._slow_forward(*input, **kwargs)\ngraph(%0 : Float(1)\n      %1 : Float(2, 1)\n      %2 : Float(2)\n      %3 : Float(3, 2)\n      %4 : Float(3)) {\n  %5 : Float(1!, 2) = onnx::Transpose[perm=[1, 0]](%1), scope: TestModel2/Sequential[seq]/Linear[0]\n  %6 : Float(2) = onnx::MatMul(%0, %5), scope: TestModel2/Sequential[seq]/Linear[0]\n  %7 : Float(2) = onnx::Add(%6, %2), scope: TestModel2/Sequential[seq]/Linear[0]\n  %8 : Float(2) = onnx::Relu(%7), scope: TestModel2/Sequential[seq]/ReLU[1]\n  %9 : Float(2!, 3!) = onnx::Transpose[perm=[1, 0]](%3), scope: TestModel2/Sequential[seq]/Linear[2]\n  %10 : Float(3) = onnx::MatMul(%8, %9), scope: TestModel2/Sequential[seq]/Linear[2]\n  %11 : Float(3) = onnx::Add(%10, %4), scope: TestModel2/Sequential[seq]/Linear[2]\n  %12 : Float(3) = onnx::Softmax[axis=0](%11), scope: TestModel2/Sequential[seq]/Softmax[3]\n  return (%12);\n}\n
\n\n

So both models result in the same ONNX graph with the same operations.

\n", + "system": "" + }, + { + "instruction": "How can I make a neural network that has multiple outputs using pytorch?", + "input": "", + "output": "

Multiple outputs can be trivially achieved with PyTorch.

\n\n

Here is one such network.

\n\n
import torch.nn as nn\n\nclass NeuralNetwork(nn.Module):\n  def __init__(self):\n    super(NeuralNetwork, self).__init__()\n    self.linear1 = nn.Linear(in_features = 3, out_features = 1)\n    self.linear2 = nn.Linear(in_features = 3,out_features = 2)\n\n  def forward(self, x):\n    output1 = self.linear1(x)\n    output2 = self.linear2(x)\n    return output1, output2\n
\n", + "system": "" + }, + { + "instruction": "Accessing PyTorch GPU matrix from TensorFlow directly", + "input": "", + "output": "

I am not familiar with TensorFlow, but you may use PyTorch to expose the \"internals\" of a tensor.
\nYou can access the underlying storage of a tensor

\n\n
a.storage()\n
\n\n

Once you have the storage, you can get a pointer to the memory (either CPU or GPU):

\n\n
a.storage().data_ptr()\n
\n\n

You can check if it is pinned or not

\n\n
a.storage().is_pinned()\n
\n\n

And you can get a pinned copy of it (note that pin_memory() returns a new pinned storage rather than pinning in place)

\n\n
a.storage().pin_memory()\n
\n\n

I am not familiar with interfaces between pyTorch and tensorflow, but I came across an example of a package (FAISS) directly accessing pytorch tensors in GPU.

\n", + "system": "" + }, + { + "instruction": "AssertionError: Torch not compiled with CUDA enabled", + "input": "", + "output": "

To summarize and expand on the comments:

\n\n\n\n

This PyTorch github issue mentions that very few Macs have Nvidia processors: https://github.com/pytorch/pytorch/issues/30664

\n\n

IF your Mac does have a CUDA-capable GPU, then to use CUDA commands on MacOS you'll need to recompile pytorch from source with correct command line options.

\n", + "system": "" + }, + { + "instruction": "Pytorch speed comparison - GPU slower than CPU", + "input": "", + "output": "

GPU acceleration works by heavy parallelization of computation. On a GPU you have a huge number of cores; each of them is not very powerful, but the sheer number of cores matters here.

\n\n

Frameworks like PyTorch do their best to make it possible to compute as much as possible in parallel. In general, matrix operations are very well suited for parallelization, but still it isn't always possible to parallelize computation!

\n\n

In your example you have a loop:

\n\n
b = torch.ones(4,4).cuda()\nfor _ in range(1000000):\n    b += b\n
\n\n

You have 1000000 operations, but due to the structure of the code it is impossible to parallelize much of these computations. If you think about it, to compute the next b you need to know the value of the previous (or current) b.

\n\n

So you have 1000000 operations, but each of these has to be computed one after another. Possible parallelization is limited to the size of your tensor. This size though is not very large in your example:

\n\n
torch.ones(4,4)\n
\n\n

So you can only parallelize 16 operations (additions) per iteration.\nAs the CPU has fewer, but much more powerful, cores, it is just much faster for the given example!

\n\n

But things change if you change the size of the tensor, then PyTorch is able to parallelize much more of the overall computation. I changed the iterations to 1000 (because I did not want to wait so long :), but you can put in any value you like, the relation between CPU and GPU should stay the same.

\n\n

Here are the results for different tensor sizes:

\n\n
#torch.ones(4,4)       - the size you used\nCPU time =  0.00926661491394043\nGPU time =  0.0431208610534668\n\n#torch.ones(40,40)     - CPU gets slower, but still faster than GPU\nCPU time =  0.014729976654052734\nGPU time =  0.04474186897277832\n\n#torch.ones(400,400)   - CPU now much slower than GPU\nCPU time =  0.9702610969543457\nGPU time =  0.04415607452392578\n\n#torch.ones(4000,4000) - GPU much faster then CPU \nCPU time =  38.088677167892456\nGPU time =  0.044649362564086914\n
\n\n

So as you see, where it is possible to parallelize stuff (here the addition of the tensor elements), GPU becomes very powerful.
GPU time is not changing at all for the given calculations, the GPU can handle much more!
(as long as it doesn't run out of memory :)

\n", + "system": "" + }, + { + "instruction": "How do I load custom image based datasets into Pytorch for use with a CNN?", + "input": "", + "output": "

Looking at the data from Kaggle and your code, there are problems in your data loading.

\n\n

The data should be in a different folder per class label for PyTorch ImageFolder to load it correctly. In your case, since all the training data is in the same folder, PyTorch is loading it as one train set. You can correct this by using a folder structure like - train/daisy, train/dandelion, test/daisy, test/dandelion and then passing the train and the test folder to the train and test ImageFolder respectively. Just change the folder structure and you should be good. Take a look at the official documentation of torchvision.datasets.Imagefolder which has a similar example.

\n\n
\n\n

As you said, these images are already divided by folders in /images. PyTorch ImageFolder assumes that images are organized in the following way. But this folder structure is only correct if you are using all the images for the train set:

\n\n
```\n/images/daisy/100080576_f52e8ee070_n.jpg\n/images/daisy/10140303196_b88d3d6cec.jpg\n.\n.\n.\n/images/dandelion/10043234166_e6dd915111_n.jpg\n/images/dandelion/10200780773_c6051a7d71_n.jpg\n```\n
\n\n

where 'daisy', 'dandelion' etc. are class labels.

\n\n

The correct folder structure, if you want to split the dataset into train and test set in your case (note that I know you want to split the dataset into train, validation, and test set, but it doesn't matter as this is just an example to get the idea out):

\n\n
```\n/images/train/daisy/100080576_f52e8ee070_n.jpg\n/images/train/daisy/10140303196_b88d3d6cec.jpg\n.\n.\n/images/train/dandelion/10043234166_e6dd915111_n.jpg\n/images/train/dandelion/10200780773_c6051a7d71_n.jpg\n.\n.\n/images/test/daisy/300080576_f52e8ee070_n.jpg\n/images/test/daisy/95140303196_b88d3d6cec.jpg\n.\n.\n/images/test/dandelion/32143234166_e6dd915111_n.jpg\n/images/test/dandelion/65200780773_c6051a7d71_n.jpg\n```\n
\n\n

Then, you can refer to the following full code example on how to write a dataloader:

\n\n
import os\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nimport torch.utils.data as data\nimport torchvision\nfrom torchvision import transforms\n\nEPOCHS = 2\nBATCH_SIZE = 10\nLEARNING_RATE = 0.003\nTRAIN_DATA_PATH = \"./images/train/\"\nTEST_DATA_PATH = \"./images/test/\"\nTRANSFORM_IMG = transforms.Compose([\n    transforms.Resize(256),\n    transforms.CenterCrop(256),\n    transforms.ToTensor(),\n    transforms.Normalize(mean=[0.485, 0.456, 0.406],\n                         std=[0.229, 0.224, 0.225] )\n    ])\n\ntrain_data = torchvision.datasets.ImageFolder(root=TRAIN_DATA_PATH, transform=TRANSFORM_IMG)\ntrain_data_loader = data.DataLoader(train_data, batch_size=BATCH_SIZE, shuffle=True,  num_workers=4)\ntest_data = torchvision.datasets.ImageFolder(root=TEST_DATA_PATH, transform=TRANSFORM_IMG)\ntest_data_loader  = data.DataLoader(test_data, batch_size=BATCH_SIZE, shuffle=True, num_workers=4) \n\nclass CNN(nn.Module):\n    # omitted...\n\nif __name__ == '__main__':\n\n    print(\"Number of train samples: \", len(train_data))\n    print(\"Number of test samples: \", len(test_data))\n    print(\"Detected Classes are: \", train_data.class_to_idx) # classes are detected by folder structure\n\n    model = CNN()    \n    optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)\n    loss_func = nn.CrossEntropyLoss()    \n\n    # Training and Testing\n    for epoch in range(EPOCHS):        \n        for step, (x, y) in enumerate(train_data_loader):\n            b_x = Variable(x)   # batch x (image)\n            b_y = Variable(y)   # batch y (target)\n            output = model(b_x)[0]          \n            loss = loss_func(output, b_y)   \n            optimizer.zero_grad()           \n            loss.backward()                 \n            optimizer.step()\n\n            if step % 50 == 0:\n                test_x, test_y = next(iter(test_data_loader))  # fetch one test batch\n                test_output, last_layer = model(test_x)\n                pred_y = torch.max(test_output, 1)[1].data.squeeze()\n                accuracy = (pred_y == test_y).sum().item() / float(test_y.size(0))\n                print('Epoch: ', epoch, '| train loss: %.4f' % loss.item(), '| test accuracy: %.2f' % accuracy)\n
\n", + "system": "" + }, + { + "instruction": "Concatenating two tensors with different dimensions in Pytorch", + "input": "", + "output": "

You could do the broadcasting manually (using Tensor.expand()) before the concatenation (using torch.cat()):

\n
import torch\n\na = torch.randn(15, 200, 2048)\nb = torch.randn(1, 200, 2048)\n\nrepeat_vals = [a.shape[0] // b.shape[0]] + [-1] * (len(b.shape) - 1)\n# or directly repeat_vals = (15, -1, -1) or (15, 200, 2048) if shapes are known and fixed...\nres = torch.cat((a, b.expand(*repeat_vals)), dim=-1)\nprint(res.shape)\n# torch.Size([15, 200, 4096])\n
\n", + "system": "" + }, + { + "instruction": "Given input size: (128x1x1). Calculated output size: (128x0x0). Output size is too small", + "input": "", + "output": "

Your problem is that before Pool4 your image has already been reduced to a 1x1 pixel feature map. So you need to either feed a much larger image, of size at least around double that (~134x134), or remove a pooling layer from your network.
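A minimal sketch of what is going on (hypothetical shapes, assuming 2x2 max-pools): each pooling layer halves the spatial size, so a 1x1 map cannot be pooled again, while a ~134x134 input survives four pools:

```python
import torch
import torch.nn as nn

# A 1x1 feature map cannot be pooled further; this would raise
# "Output size is too small":
#   nn.MaxPool2d(2)(torch.randn(1, 128, 1, 1))

# With a ~134x134 input, four successive 2x2 pools still leave room:
x = torch.randn(1, 3, 134, 134)
for _ in range(4):  # Pool1 .. Pool4
    x = nn.MaxPool2d(2)(x)  # halves height and width each time
print(x.shape)  # torch.Size([1, 3, 8, 8])
```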

\n", + "system": "" + }, + { + "instruction": "Why detach needs to be called on variable in this example?", + "input": "", + "output": "

ORIGINAL ANSWER (WRONG / INCOMPLETE)

\n\n

You're right, optimizerD only updates netD and the gradients on netG are not used before netG.zero_grad() is called, so detaching is not necessary, it just saves time, because you're not computing gradients for the generator.

\n\n

You're basically also answering your other question yourself, you don't detach fake in the second block because you specifically want to compute gradients on netG to be able to update its parameters.

\n\n

Note how in the second block real_label is used as the corresponding label for fake, so if the discriminator finds the fake input to be real, the final loss is small, and vice versa, which is precisely what you want for the generator. Not sure if that's what confused you, but it's really the only difference compared to training the discriminator on fake inputs.

\n\n

EDIT

\n\n

Please see FatPanda's comment! My original answer is in fact incorrect. Pytorch destroys (parts of) the compute graph when .backward() is called. Without detaching before errD_fake.backward() the errG.backward() call later would not be able to backprop into the generator because the required graph is no longer available (unless you specify retain_graph=True). I'm relieved Soumith made the same mistake :D
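A toy illustration of that behaviour, with simple linear layers standing in for the real generator and discriminator (not the actual DCGAN code):

```python
import torch
import torch.nn as nn

netG, netD = nn.Linear(2, 2), nn.Linear(2, 1)  # toy stand-ins
z = torch.randn(4, 2)
fake = netG(z)

# Discriminator step: detach, so backward() only builds (and frees) a graph
# over netD, leaving the graph through netG intact for later.
errD_fake = netD(fake.detach()).mean()
errD_fake.backward()

# Generator step: a fresh pass through netD, reusing netG's untouched graph.
errG = netD(fake).mean()
errG.backward()  # works; without the detach above this would raise
```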

\n", + "system": "" + }, + { + "instruction": "PyTorch: Extract learned weights correctly", + "input": "", + "output": "

Use model.parameters() to get the trainable weights for any model or layer. Remember to put it inside list(), or you cannot print it out.

\n\n

The following code snip worked

\n\n
>>> import torch\n>>> import torch.nn as nn\n>>> l = nn.Linear(3,5)\n>>> w = list(l.parameters())\n>>> w\n
\n", + "system": "" + }, + { + "instruction": "Accuracy score in pyTorch LSTM", + "input": "", + "output": "

I would use numpy in order to avoid iterating the list in pure Python.

\n\n

The results are the same, but it runs much faster.

\n\n
def accuracy_score(y_true, y_pred):\n    y_pred = np.concatenate(tuple(y_pred))\n    y_true = np.concatenate(tuple([[t for t in y] for y in y_true])).reshape(y_pred.shape)\n    return (y_true == y_pred).sum() / float(len(y_true))\n
\n\n

And this is how to use it:

\n\n
#original code:\ny_pred = list(predict([s for s, t in training_data]))\ny_true = [t for s, t in training_data]\n#numpy accuracy score\nprint(accuracy_score(y_true, y_pred))\n
\n", + "system": "" + }, + { + "instruction": "PyTorch: Dataloader for time series task", + "input": "", + "output": "

I ended up writing custom dataset as well, though it's a bit different from the answer above:

\n
class TimeseriesDataset(torch.utils.data.Dataset):   \n    def __init__(self, X, y, seq_len=1):\n        self.X = X\n        self.y = y\n        self.seq_len = seq_len\n\n    def __len__(self):\n        return self.X.__len__() - (self.seq_len-1)\n\n    def __getitem__(self, index):\n        return (self.X[index:index+self.seq_len], self.y[index+self.seq_len-1])\n
\n

And the usage looks like that:

\n
train_dataset = TimeseriesDataset(X_lstm, y_lstm, seq_len=4)\ntrain_loader = torch.utils.data.DataLoader(train_dataset, batch_size = 3, shuffle = False)\n\nfor i, d in enumerate(train_loader):\n    print(i, d[0].shape, d[1].shape)\n\n>>>\n# shape: tuple((batch_size, seq_len, n_features), (batch_size))\n0 torch.Size([3, 4, 2]) torch.Size([3])\n
\n", + "system": "" + }, + { + "instruction": "What's the workaround for "ragged/jagged tensors" in PyTorch?", + "input": "", + "output": "

PyTorch is implementing something called NestedTensors which seems to have pretty much the same purpose as RaggedTensors in Tensorflow. You can follow the RFC and progress here.

\n", + "system": "" + }, + { + "instruction": "Suboptimal convergence in PyTorch compared to TensorFlow when using Adam optimizer", + "input": "", + "output": "

The default epsilon in TF is 1e-7 not 1e-8. See here and here.
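To match TF's behaviour you can therefore pass the epsilon explicitly (toy model, just to show the parameter):

```python
import torch

model = torch.nn.Linear(4, 2)
# PyTorch defaults to eps=1e-8; match TensorFlow's default by setting 1e-7:
opt = torch.optim.Adam(model.parameters(), lr=1e-3, eps=1e-7)
print(opt.defaults['eps'])  # 1e-07
```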

\n", + "system": "" + }, + { + "instruction": "_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe28TypeMeta21_typeMetaDataInstanceIdEEPKNS_6detail12TypeMetaDataEv", + "input": "", + "output": "

This error usually shows up when there's a compatibility issue between the installed pytorch version and the detector library version (Detectron2 or mmdet).

\n

Both the detector library and pytorch have to be built with the same CUDA version, otherwise some packages will conflict when training your model.

\n

There's a possibility that the Pytorch (1.8.1) + CUDA (11.1) combination you have is incompatible with detectron2 v0.4.

\n

From the repo here, detectron2 v0.4 is built with torch 1.8 + cuda 11.1. It might help if you use torch 1.8.0 instead of 1.8.1

\n", + "system": "" + }, + { + "instruction": "What output_padding does in nn.ConvTranspose2d?", + "input": "", + "output": "

According to the documentation here: https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html, when applying a Conv2d operation with stride > 1 you can get the same output dimensions from different input sizes. For example, 7x7 and 8x8 inputs would both return a 3x3 output with stride=2:

\n
import torch\n\nconv_inp1 = torch.rand(1,1,7,7)\nconv_inp2 = torch.rand(1,1,8,8)\n\nconv1 = torch.nn.Conv2d(1, 1, kernel_size = 3, stride = 2)\n\nout1 = conv1(conv_inp1)     \nout2 = conv1(conv_inp2)\nprint(out1.shape)         # torch.Size([1, 1, 3, 3])\nprint(out2.shape)         # torch.Size([1, 1, 3, 3])\n
\n

And when applying the transpose convolution with stride=2, it is ambiguous which output shape to return, 7x7 or 8x8. The output_padding parameter helps pytorch determine whether the output is 7x7 or 8x8. Note that it doesn't pad zeros or anything to the output, it is just a way to determine the output shape and apply the transpose convolution accordingly.

\n
conv_t1 = torch.nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2)\nconv_t2 = torch.nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, output_padding=1)\ntransposed1 = conv_t1(out1)\ntransposed2 = conv_t2(out2)\n\nprint(transposed1.shape)      # torch.Size([1, 1, 7, 7])\nprint(transposed2.shape)      # torch.Size([1, 1, 8, 8])\n
\n", + "system": "" + }, + { + "instruction": "Transformer Model Output Nan Values in Pytorch", + "input": "", + "output": "

For what it's worth, I had this problem and it turned out that I had forgotten to initialize an embedding vector, so it was just whatever torch.empty() happened to come upon (likely a lot of zeros).
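For illustration (a sketch, not the original code): torch.empty() returns uninitialized memory, so any parameter built from it needs an explicit init, which nn.Embedding normally does for you:

```python
import torch
import torch.nn as nn

vocab_size, d_model = 100, 16

# Risky: uninitialized memory; whatever happens to be there flows downstream.
w = nn.Parameter(torch.empty(vocab_size, d_model))
nn.init.normal_(w, mean=0.0, std=0.02)  # an explicit init fixes it

# Or simply let nn.Embedding initialize its weight (N(0, 1) by default):
emb = nn.Embedding(vocab_size, d_model)
```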

\n", + "system": "" + }, + { + "instruction": "How can I decrease Dedicated GPU memory usage and use Shared GPU memory for CUDA and Pytorch", + "input": "", + "output": "
\n

How can I decrease Dedicated GPU memory usage and use Shared GPU memory for CUDA and Pytorch

\n
\n

Short answer: you can not.

\n

Details: I believe this answer covers all the information that you need.

\n

You can reduce the amount of memory used by lowering the batch size as @John Stud commented, or by using automatic mixed precision as @Dwight Foster suggested.

\n

While training, you can implement gradient accumulation: run smaller batches and accumulate their gradients over several steps, which reduces memory usage without affecting the effective batch size or the performance.
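A minimal gradient-accumulation sketch (toy model and sizes, just to show the pattern): gradients from several small backward passes are summed in .grad before a single optimizer step:

```python
import torch

model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()

accum_steps = 4  # effective batch = accum_steps * micro-batch size
opt.zero_grad()
for step in range(8):
    x = torch.randn(8, 10)                     # small micro-batch fits in memory
    y = torch.randint(0, 2, (8,))
    loss = loss_fn(model(x), y) / accum_steps  # scale so the sum averages out
    loss.backward()                            # gradients accumulate in .grad
    if (step + 1) % accum_steps == 0:
        opt.step()
        opt.zero_grad()
```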

\n", + "system": "" + }, + { + "instruction": "PyTorch: while loading batched data using Dataloader, how to transfer the data to GPU automatically", + "input": "", + "output": "

You can modify the collate_fn to handle several items at once:

\n
from torch.utils.data.dataloader import default_collate\n\ndevice = torch.device('cuda:0')  # or whatever device/cpu you like\n\n# the new collate function is quite generic\nloader = DataLoader(demo, batch_size=50, shuffle=True, \n                    collate_fn=lambda x: tuple(x_.to(device) for x_ in default_collate(x)))\n
\n

Note that if you want to have multiple workers for the dataloader, you'll need to add

\n
torch.multiprocessing.set_start_method('spawn')\n
\n

after your if __name__ == '__main__' (see this issue).

\n

Having said that, it seems like using pin_memory=True in your DataLoader would be much more efficient. Have you tried this option?
\nSee memory pinning for more information.

\n
\n

Update (Feb 8th, 2021)
\nThis post made me look at my "data-to-model" time spent during training.\nI compared three alternatives:

\n
    \n
  1. DataLoader works on CPU and only after the batch is retrieved data is moved to GPU.
  2. \n
  3. Same as (1) but with pin_memory=True in DataLoader.
  4. \n
  5. The proposed method of using collate_fn to move data to GPU.
  6. \n
\n

From my limited experimentation it seems like the second option performs best (but not by a big margin).
\nThe third option required fussing about the start_method of the data loader processes, and it seems to incur an overhead at the beginning of each epoch.
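For reference, option (2) is just a flag change (toy dataset for illustration; non_blocking only has an effect when copying from pinned memory to a GPU):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
dataset = TensorDataset(torch.randn(200, 3), torch.randint(0, 2, (200,)))

# pin_memory keeps host batches in page-locked memory for faster H2D copies
loader = DataLoader(dataset, batch_size=50, shuffle=True,
                    pin_memory=torch.cuda.is_available())
for x, y in loader:
    x = x.to(device, non_blocking=True)  # async copy when source is pinned
    y = y.to(device, non_blocking=True)
```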

\n", + "system": "" + }, + { + "instruction": "Pytorch CUDA error: no kernel image is available for execution on the device on RTX 3090 with cuda 11.1", + "input": "", + "output": "

Found a fix for my problem here: https://github.com/pytorch/pytorch/issues/31285#issuecomment-739139454

\n
pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U\n
\n

Then my code snippet gives:

\n
A 3.7.5 (default, Nov  7 2019, 10:50:52) \n[GCC 8.3.0]\nB 1.8.0.dev20210115+cu110\nC True\nD True\nE _CudaDeviceProperties(name='GeForce RTX 3090', major=8, minor=6, total_memory=24267MB, multi_processor_count=82)\nF tensor([1., 2.], device='cuda:0')\n
\n", + "system": "" + }, + { + "instruction": "PyTorch : How to apply the same random transformation to multiple image?", + "input": "", + "output": "

Usually a workaround is to apply the transform on the first image, retrieve the parameters of that transform, then apply a deterministic transform with those parameters on the remaining images. However, here RandomChoice does not provide an API to get the parameters of the applied transform since it involves a variable number of transforms.\nIn those cases, I usually implement an override of the original function.

\n

Looking at the torchvision implementation, it's as simple as:

\n
class RandomChoice(RandomTransforms):\n    def __call__(self, img):\n        t = random.choice(self.transforms)\n        return t(img)\n
\n

Here are two possible solutions.

\n
    \n
  1. You can either sample from the transform list on __init__ instead of on __call__:

    \n
import random\nimport torch\nimport torchvision.transforms as T\n\nclass RandomChoice(torch.nn.Module):\n    def __init__(self, transforms):\n        super().__init__()\n        self.transforms = transforms\n        self.t = random.choice(self.transforms)  # sampled once, on __init__\n\n    def __call__(self, img):\n        return self.t(img)\n
    \n

    So you can do:

    \n
    transform = RandomChoice([\n     T.RandomHorizontalFlip(), \n     T.RandomVerticalFlip()\n])\ndisplay(transform(img_a)) # both img_a and img_b will\ndisplay(transform(img_b)) # have the same transform\n\ntransform = RandomChoice([\n    T.RandomHorizontalFlip(), \n    T.RandomVerticalFlip()\n])\ndisplay(transform(img_c)) # both img_c and img_d will\ndisplay(transform(img_d)) # have the same transform\n
    \n
  2. \n
\n
\n
    \n
  1. Or better yet, transform the images in batch:

    \n
import random\nimport torch\nimport torchvision.transforms as T\n\nclass RandomChoice(torch.nn.Module):\n    def __init__(self, transforms):\n        super().__init__()\n        self.transforms = transforms\n\n    def __call__(self, imgs):\n        t = random.choice(self.transforms)\n        return [t(img) for img in imgs]\n
    \n

Which allows you to do:

    \n
    transform = RandomChoice([\n     T.RandomHorizontalFlip(), \n     T.RandomVerticalFlip()\n])\n\nimg_at, img_bt = transform([img_a, img_b])\ndisplay(img_at) # both img_a and img_b will\ndisplay(img_bt) # have the same transform\n\nimg_ct, img_dt = transform([img_c, img_d])\ndisplay(img_ct) # both img_c and img_d will\ndisplay(img_dt) # have the same transform\n
    \n
  2. \n
\n", + "system": "" + }, + { + "instruction": "How to Use Class Weights with Focal Loss in PyTorch for Imbalanced dataset for MultiClass Classification", + "input": "", + "output": "

I think OP would've gotten his answer by now. I am writing this for other people who might come across this.

\n

There is one problem in OP's implementation of Focal Loss:

\n
    \n
  1. F_loss = self.alpha * (1-pt)**self.gamma * BCE_loss
  2. \n
\n

In this line, the same alpha value is multiplied with every class output probability, i.e. (pt). Additionally, the code doesn't show how we get pt. A very good implementation of Focal Loss can be found in What is Focal Loss and when should you use it. But this implementation is only for binary classification, as it has alpha and 1-alpha for the two classes in the self.alpha tensor.

\n

In case of multi-class or multi-label classification, the self.alpha tensor should contain a number of elements equal to the total number of labels. The values could be the inverse label frequency or the inverse normalized label frequency (just be cautious with labels which have a frequency of 0).
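A sketch of how such a per-class alpha can enter a multi-class focal loss (the weights below are hypothetical; in practice derive them from your label frequencies):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha, gamma=2.0):
    # alpha: [num_classes] tensor, e.g. inverse (normalized) label frequencies
    log_pt = F.log_softmax(logits, dim=-1)
    log_pt = log_pt.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p_t
    pt = log_pt.exp()
    at = alpha[targets]                    # weight for each sample's true class
    return (-at * (1 - pt) ** gamma * log_pt).mean()

logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
alpha = torch.tensor([0.2, 0.3, 0.5])      # hypothetical class weights
loss = focal_loss(logits, targets, alpha)
```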

\n", + "system": "" + }, + { + "instruction": "Error while installing PyTorch using pip - cannot build wheel", + "input": "", + "output": "

From your error:

\n
\n

Exception: You tried to install "pytorch". The package named for PyTorch is "torch"

\n
\n

which tells you what you need to know, instead of

\n
pip install pytorch\n
\n

it should be

\n
pip install torch\n
\n
\n

I downloaded the matching wheel from here, but couldn't figure out what to do with it

\n
\n

Installing .whl files is as easy as

\n
pip install <path to .whl file>\n
\n
\n

My Python installation is using anaconda3

\n
\n

That is very relevant. You should generally avoid using pip in your conda environment as much as possible. Instead, you can find the correct conda install command for your setup (cuda version etc.) from pytorch.org, e.g. for cuda 11 it would be

\n
conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch\n
\n", + "system": "" + }, + { + "instruction": "PyTorch torch.no_grad() versus requires_grad=False", + "input": "", + "output": "

This is an older discussion, which has changed slightly over the years (mainly due to the purpose of with torch.no_grad() as a pattern). An excellent answer that kind of answers your question as well can be found on Stackoverflow already.
\nHowever, since the original question is vastly different, I'll refrain from marking as duplicate, especially due to the second part about the memory.

\n

An initial explanation of no_grad is given here:

\n
\n

with torch.no_grad() is a context manager and is used to prevent calculating gradients [...].

\n
\n

requires_grad on the other hand is used

\n
\n

to freeze part of your model and train the rest [...].

\n
\n

Source: again, the SO post.

\n

Essentially, with requires_grad you are just disabling parts of a network, whereas no_grad will not store any gradients at all, since you're likely using it for inference and not training.
\nTo analyze the behavior of your combinations of parameters, let us investigate what is happening:
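As a minimal sketch of that distinction:

```python
import torch

w = torch.randn(3, requires_grad=True)
frozen = torch.randn(3, requires_grad=False)  # a "frozen" part of the model

out = (w * frozen).sum()
out.backward()
# only w gets gradients; the frozen tensor is skipped
print(w.grad, frozen.grad)

with torch.no_grad():  # no graph is recorded at all
    y = (w * 2).sum()
print(y.requires_grad)  # False
```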

\n\n", + "system": "" + }, + { + "instruction": "PyTorch Lightning move tensor to correct device in validation_epoch_end", + "input": "", + "output": "

Did you check part 3.4 (page 34) in the doc you linked?

\n
\n

LightningModules know what device they are on! construct tensors on the device directly to avoid CPU->Device transfer

\n
\n
# bad\nt = torch.rand(2, 2).cuda()\n\n# good (self is a LightningModule)\nt = torch.rand(2, 2, device=self.device)\n
\n

I had a similar issue when creating tensors, and this helped me. I hope it will help you too.

\n", + "system": "" + }, + { + "instruction": "RuntimeError: Given groups=1, weight of size [32, 3, 16, 16, 16], expected input[100, 16, 16, 16, 3] to have 3 channels, but got 16 channels instead", + "input": "", + "output": "

nn.Conv3d expects the input to have size [batch_size, channels, depth, height, width]. The first convolution expects 3 channels, but with your input having size [100, 16, 16, 16, 3], that would be 16 channels.

\n\n

Assuming that your data is given as [batch_size, depth, height, width, channels], you need to swap the dimensions around, which can be done with torch.Tensor.permute:

\n\n
# From: [batch_size, depth, height, width, channels]\n# To: [batch_size, channels, depth, height, width]\ninput = input.permute(0, 4, 1, 2, 3)\n
\n", + "system": "" + }, + { + "instruction": "Downloading transformers models to use offline", + "input": "", + "output": "

One relatively easy way to deal with this issue is to simply "rename" the pretrained models, as is detailed in this thread.

\n

Essentially, all you have to do is something like this for whatever model you're trying to work with:

\n
from transformers import BertModel\n\nmodel = BertModel.from_pretrained("bert-base-uncased")\nmodel.save_pretrained("./my_named_bert")\n
\n

The thread also details how the local model folders are named, see LysandreJik's post:

\n
\n

Hi, they are named as such because that's a clean way to make sure the model on the S3 is the same as the model in the cache. The name is created from the etag of the file hosted on the S3. [...]

\n
\n", + "system": "" + }, + { + "instruction": "How to fine tune BERT on unlabeled data?", + "input": "", + "output": "

The important distinction to make here is whether you want to fine-tune your model, or whether you want to expose it to additional pretraining.

\n

The former is simply a way to train BERT to adapt to a specific supervised task, for which you generally need on the order of 1000 or more labeled samples.

\n

Pretraining, on the other hand, is basically trying to help BERT better "understand" data from a certain domain, by basically continuing its unsupervised training objective ([MASK]ing specific words and trying to predict what word should be there), for which you do not need labeled data.
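For illustration, the masked-LM objective can be sketched as follows (toy token IDs; the 15% rate and the -100 ignore-index follow common BERT conventions):

```python
import torch

vocab_size, mask_id = 1000, 4            # hypothetical IDs
tokens = torch.randint(5, vocab_size, (2, 12))
labels = tokens.clone()

mask = torch.rand(tokens.shape) < 0.15   # choose ~15% of the positions
tokens[mask] = mask_id                   # the model sees [MASK] here
labels[~mask] = -100                     # loss is computed on masked positions only
```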

\n

If your ultimate objective is sentence embeddings, however, I would strongly suggest you have a look at Sentence Transformers, which is based on a slightly outdated version of Huggingface's transformers library, but primarily tries to generate high-quality embeddings. Note that there are ways to train with surrogate losses, where you try to emulate some form of loss that is relevant for embeddings.

\n

Edit: The author of Sentence-Transformers recently joined Huggingface, so I expect support to greatly improve over the upcoming months!

\n", + "system": "" + }, + { + "instruction": "How does one reset the dataloader in pytorch?", + "input": "", + "output": "

To reset a DataLoader, just enumerate the loader again. Each call to enumerate(loader) starts from the beginning.

\n\n

To avoid breaking transforms that use random values, reset the random seed each time the DataLoader is initialized.

\n\n
def seed_init_fn(x):\n   seed = args.seed + x\n   np.random.seed(seed)\n   random.seed(seed)\n   torch.manual_seed(seed)\n   return\n\nloader = torch.utils.data.DataLoader(...., worker_init_fn = seed_init_fn)\n\nwhile True:\n   for i,data in enumerate(loader):\n      # will always yield same data\n
\n\n

See worker_init_fn in the documentation:

\n\n

https://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader

\n\n

Here is a better example:

\n\n

https://github.com/pytorch/pytorch/issues/5059#issuecomment-404232359

\n", + "system": "" + }, + { + "instruction": "What does the copy_initial_weights documentation mean in the higher library for Pytorch?", + "input": "", + "output": "

Short version

\n\n

Call to higher.innerloop_ctx with model as argument create temporary patched model and unrolled optimizer for that model: (fmodel, diffopt). It is expected that in the inner loop fmodel will iteratively receive some input, compute output and loss and then diffopt.step(loss) will be called. Each time diffopt.step is called fmodel will create next version of parameters fmodel.parameters(time=T) which is a new tensor computed using previous ones (with the full graph allowing to compute gradients through the process). If at any point user calls backward on any tensor, regular pytorch gradient computation/accumulation will start in a way allowing gradients to propagate to e.g. optimizer's parameters (such as lr, momentum - if they were passed as tensors requiring gradients to higher.innerloop_ctx using override).

\n\n

The creation-time version of fmodel's parameters, fmodel.parameters(time=0), is a copy of the original model parameters. If copy_initial_weights=True is provided (the default) then fmodel.parameters(time=0) will be a clone+detach'ed version of model's parameters (i.e. it will preserve values, but sever all connections to the original model). If copy_initial_weights=False is provided, then fmodel.parameters(time=0) will be a clone'd version of model's parameters and thus will allow gradients to propagate to the original model's parameters (see the pytorch doc on clone).
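The clone vs clone+detach difference can be seen in isolation (a sketch, not higher's actual code):

```python
import torch

w = torch.randn(2, requires_grad=True)  # stands in for an original parameter

detached = w.clone().detach()  # like copy_initial_weights=True: values only
cloned = w.clone()             # like copy_initial_weights=False: keeps the graph

loss = (cloned ** 2).sum()
loss.backward()
print(w.grad)                  # gradient reached the original parameter
print(detached.requires_grad)  # False: the detached copy is cut off
```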

\n\n

Terminology clarifications

\n\n\n\n

Long version

\n\n

Let's start from the beginning. Main functionality (only functionality, really) of higher library is unrolling of a model's parameter optimization in a differentiable manner. It can come either in the form of directly using differentiable optimizer through e.g. higher.get_diff_optim as in this example or in the form of higher.innerloop_ctx as in this example.

\n\n

The higher.innerloop_ctx option wraps the creation of a \"stateless\" model fmodel from the existing model for you, and gives you an \"optimizer\" diffopt for this fmodel. As summarized in the README.md of higher, it allows you to switch from:

\n\n
model = MyModel()\nopt = torch.optim.Adam(model.parameters())\n\nfor xs, ys in data:\n    opt.zero_grad()\n    logits = model(xs)\n    loss = loss_function(logits, ys)\n    loss.backward()\n    opt.step()\n
\n\n

to

\n\n
model = MyModel()\nopt = torch.optim.Adam(model.parameters())\n\nwith higher.innerloop_ctx(model, opt) as (fmodel, diffopt):\n    for xs, ys in data:\n        logits = fmodel(xs)  # modified `params` can also be passed as a kwarg\n        loss = loss_function(logits, ys)  # no need to call loss.backwards()\n        diffopt.step(loss)  # note that `step` must take `loss` as an argument!\n\n    # At the end of your inner loop you can obtain these e.g. ...\n    grad_of_grads = torch.autograd.grad(\n        meta_loss_fn(fmodel.parameters()), fmodel.parameters(time=0))\n
\n\n

The difference between training the model normally and calling diffopt.step to update fmodel is that fmodel does not update the parameters in-place, as opt.step() in the original version would. Instead, each time diffopt.step is called, new versions of the parameters are created in such a way that fmodel uses the new ones for the next step, while all previous ones are still preserved.

\n\n

I.e. fmodel starts with only fmodel.parameters(time=0) available, but after you called diffopt.step N times you can ask fmodel to give you fmodel.parameters(time=i) for any i up to N inclusive. Notice that fmodel.parameters(time=0) doesn't change in this process at all, just every time fmodel is applied to some input it will use the latest version of parameters it currently has.

\n\n

Now, what exactly is fmodel.parameters(time=0)? It is created here and depends on copy_initial_weights. If copy_initial_weights==True then fmodel.parameters(time=0) are clone'd and detach'ed parameters of model. Otherwise they are only clone'd, but not detach'ed!

\n\n

That means that when we do the meta-optimization step, the original model's parameters will actually accumulate gradients if and only if copy_initial_weights==False. And in MAML we want to optimize the model's starting weights, so we actually do need to get gradients from the meta-optimization step.

\n\n

I think one of the issues here is that higher lacks simpler toy examples to demonstrate what is going on, instead rushing to show more serious things in its examples. So let me try to fill that gap here and demonstrate what is going on using the simplest toy example I could come up with (a model with 1 weight which multiplies the input by that weight):

\n\n
import torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport higher\nimport numpy as np\n\nnp.random.seed(1)\ntorch.manual_seed(3)\nN = 100\nactual_multiplier = 3.5\nmeta_lr = 0.00001\nloops = 5 # how many iterations in the inner loop we want to do\n\nx = torch.tensor(np.random.random((N,1)), dtype=torch.float64) # features for inner training loop\ny = x * actual_multiplier # target for inner training loop\nmodel = nn.Linear(1, 1, bias=False).double() # simplest possible model - multiple input x by weight w without bias\nmeta_opt = optim.SGD(model.parameters(), lr=meta_lr, momentum=0.)\n\n\ndef run_inner_loop_once(model, verbose, copy_initial_weights):\n    lr_tensor = torch.tensor([0.3], requires_grad=True)\n    momentum_tensor = torch.tensor([0.5], requires_grad=True)\n    opt = optim.SGD(model.parameters(), lr=0.3, momentum=0.5)\n    with higher.innerloop_ctx(model, opt, copy_initial_weights=copy_initial_weights, override={'lr': lr_tensor, 'momentum': momentum_tensor}) as (fmodel, diffopt):\n        for j in range(loops):\n            if verbose:\n                print('Starting inner loop step j=={0}'.format(j))\n                print('    Representation of fmodel.parameters(time={0}): {1}'.format(j, str(list(fmodel.parameters(time=j)))))\n                print('    Notice that fmodel.parameters() is same as fmodel.parameters(time={0}): {1}'.format(j, (list(fmodel.parameters())[0] is list(fmodel.parameters(time=j))[0])))\n            out = fmodel(x)\n            if verbose:\n                print('    Notice how `out` is `x` multiplied by the latest version of weight: {0:.4} * {1:.4} == {2:.4}'.format(x[0,0].item(), list(fmodel.parameters())[0].item(), out[0].item()))\n            loss = ((out - y)**2).mean()\n            diffopt.step(loss)\n\n        if verbose:\n            # after all inner training let's see all steps' parameter tensors\n            print()\n            print(\"Let's print all intermediate parameters versions after inner loop is 
done:\")\n            for j in range(loops+1):\n                print('    For j=={0} parameter is: {1}'.format(j, str(list(fmodel.parameters(time=j)))))\n            print()\n\n        # let's imagine now that our meta-learning optimization is trying to check how far we got in the end from the actual_multiplier\n        weight_learned_after_full_inner_loop = list(fmodel.parameters())[0]\n        meta_loss = (weight_learned_after_full_inner_loop - actual_multiplier)**2\n        print('  Final meta-loss: {0}'.format(meta_loss.item()))\n        meta_loss.backward() # will only propagate gradient to original model parameter's `grad` if copy_initial_weight=False\n        if verbose:\n            print('  Gradient of final loss we got for lr and momentum: {0} and {1}'.format(lr_tensor.grad, momentum_tensor.grad))\n            print('  If you change number of iterations \"loops\" to much larger number final loss will be stable and the values above will be smaller')\n        return meta_loss.item()\n\nprint('=================== Run Inner Loop First Time (copy_initial_weights=True) =================\\n')\nmeta_loss_val1 = run_inner_loop_once(model, verbose=True, copy_initial_weights=True)\nprint(\"\\nLet's see if we got any gradient for initial model parameters: {0}\\n\".format(list(model.parameters())[0].grad))\n\nprint('=================== Run Inner Loop Second Time (copy_initial_weights=False) =================\\n')\nmeta_loss_val2 = run_inner_loop_once(model, verbose=False, copy_initial_weights=False)\nprint(\"\\nLet's see if we got any gradient for initial model parameters: {0}\\n\".format(list(model.parameters())[0].grad))\n\nprint('=================== Run Inner Loop Third Time (copy_initial_weights=False) =================\\n')\nfinal_meta_gradient = list(model.parameters())[0].grad.item()\n# Now let's double-check `higher` library is actually doing what it promised to do, not just giving us\n# a bunch of hand-wavy statements and difficult to read code.\n# We will 
                            do a simple SGD step using meta_opt changing initial weight for the training and see how meta loss changed\nmeta_opt.step()\nmeta_opt.zero_grad()\nmeta_step = - meta_lr * final_meta_gradient # how much meta_opt actually shifted the initial weight value\nmeta_loss_val3 = run_inner_loop_once(model, verbose=False, copy_initial_weights=False)\n\nmeta_loss_gradient_approximation = (meta_loss_val3 - meta_loss_val2) / meta_step\n\nprint()\nprint('Side-by-side meta_loss_gradient_approximation and gradient computed by `higher` lib: {0:.4} VS {1:.4}'.format(meta_loss_gradient_approximation, final_meta_gradient))\n
                            
\n\n

Which produces this output:

\n\n
=================== Run Inner Loop First Time (copy_initial_weights=True) =================\n\nStarting inner loop step j==0\n    Representation of fmodel.parameters(time=0): [tensor([[-0.9915]], dtype=torch.float64, requires_grad=True)]\n    Notice that fmodel.parameters() is same as fmodel.parameters(time=0): True\n    Notice how `out` is `x` multiplied by the latest version of weight: 0.417 * -0.9915 == -0.4135\nStarting inner loop step j==1\n    Representation of fmodel.parameters(time=1): [tensor([[-0.1217]], dtype=torch.float64, grad_fn=<AddBackward0>)]\n    Notice that fmodel.parameters() is same as fmodel.parameters(time=1): True\n    Notice how `out` is `x` multiplied by the latest version of weight: 0.417 * -0.1217 == -0.05075\nStarting inner loop step j==2\n    Representation of fmodel.parameters(time=2): [tensor([[1.0145]], dtype=torch.float64, grad_fn=<AddBackward0>)]\n    Notice that fmodel.parameters() is same as fmodel.parameters(time=2): True\n    Notice how `out` is `x` multiplied by the latest version of weight: 0.417 * 1.015 == 0.4231\nStarting inner loop step j==3\n    Representation of fmodel.parameters(time=3): [tensor([[2.0640]], dtype=torch.float64, grad_fn=<AddBackward0>)]\n    Notice that fmodel.parameters() is same as fmodel.parameters(time=3): True\n    Notice how `out` is `x` multiplied by the latest version of weight: 0.417 * 2.064 == 0.8607\nStarting inner loop step j==4\n    Representation of fmodel.parameters(time=4): [tensor([[2.8668]], dtype=torch.float64, grad_fn=<AddBackward0>)]\n    Notice that fmodel.parameters() is same as fmodel.parameters(time=4): True\n    Notice how `out` is `x` multiplied by the latest version of weight: 0.417 * 2.867 == 1.196\n\nLet's print all intermediate parameters versions after inner loop is done:\n    For j==0 parameter is: [tensor([[-0.9915]], dtype=torch.float64, requires_grad=True)]\n    For j==1 parameter is: [tensor([[-0.1217]], dtype=torch.float64, grad_fn=<AddBackward0>)]\n    For j==2 
parameter is: [tensor([[1.0145]], dtype=torch.float64, grad_fn=<AddBackward0>)]\n    For j==3 parameter is: [tensor([[2.0640]], dtype=torch.float64, grad_fn=<AddBackward0>)]\n    For j==4 parameter is: [tensor([[2.8668]], dtype=torch.float64, grad_fn=<AddBackward0>)]\n    For j==5 parameter is: [tensor([[3.3908]], dtype=torch.float64, grad_fn=<AddBackward0>)]\n\n  Final meta-loss: 0.011927987982895929\n  Gradient of final loss we got for lr and momentum: tensor([-1.6295]) and tensor([-0.9496])\n  If you change number of iterations \"loops\" to much larger number final loss will be stable and the values above will be smaller\n\nLet's see if we got any gradient for initial model parameters: None\n\n=================== Run Inner Loop Second Time (copy_initial_weights=False) =================\n\n  Final meta-loss: 0.011927987982895929\n\nLet's see if we got any gradient for initial model parameters: tensor([[-0.0053]], dtype=torch.float64)\n\n=================== Run Inner Loop Third Time (copy_initial_weights=False) =================\n\n  Final meta-loss: 0.01192798770078706\n\nSide-by-side meta_loss_gradient_approximation and gradient computed by `higher` lib: -0.005311 VS -0.005311\n
\n", + "system": "" + }, + { + "instruction": "Difference between Keras' BatchNormalization and PyTorch's BatchNorm2d?", + "input": "", + "output": "

Keras treats as parameters (weights) many things that will be \"saved/loaded\" in the layer.

\n\n

While both implementations naturally have the accumulated \"mean\" and \"variance\" of the batches, these values are not trainable with backpropagation.

\n\n

Nevertheless, these values are updated every batch, and Keras treats them as non-trainable weights, while PyTorch simply hides them. The term \"non-trainable\" here means \"not trainable by backpropagation\", but doesn't mean the values are frozen.

\n\n

                            In total, there are 4 groups of \"weights\" for a BatchNormalization layer, considering the selected axis (default = -1, size=32 for your layer): the trainable scale (gamma) and offset (beta), and the non-trainable accumulated mean and accumulated variance, each holding 32 values for your layer.
                            

\n\n\n\n
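                            To see the same split on the PyTorch side, a quick sketch (assuming a recent PyTorch; the names below are what nn.BatchNorm2d actually registers):

                            ```python
                            import torch.nn as nn

                            bn = nn.BatchNorm2d(32)

                            # Trainable weights: the scale (gamma, stored as "weight") and offset (beta, "bias")
                            param_names = [name for name, _ in bn.named_parameters()]
                            print(param_names)  # ['weight', 'bias']

                            # Non-trainable accumulated statistics: stored as buffers, hidden from .parameters()
                            buffer_names = [name for name, _ in bn.named_buffers()]
                            print(buffer_names)  # ['running_mean', 'running_var', 'num_batches_tracked']
                            ```

                            Keras, in contrast, exposes all four groups through `layer.get_weights()`, which is why they are saved and loaded together.
                            
                            
                            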

The advantage of having it like this in Keras is that when you save the layer, you also save the mean and variance values the same way you save all other weights in the layer automatically. And when you load the layer, these weights are loaded together.

\n", + "system": "" + }, + { + "instruction": "How can I load a model in PyTorch without redefining the model?", + "input": "", + "output": "

You can attempt to export your model to TorchScript using tracing. This has limitations. Due to the way PyTorch constructs the model's computation graph on the fly, if you have any control-flow in your model then the exported model may not completely represent your python module. TorchScript is only supported in PyTorch >= 1.0.0, though I would recommend using the latest version possible.

\n\n

For example, a model without any conditional behavior is fine

\n\n
                            from torch import nn\nimport torch.nn.functional as F\n\nclass Model(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.conv1 = nn.Conv2d(3, 10, 3, padding=1)\n        self.bn1 = nn.BatchNorm2d(10)\n        self.conv2 = nn.Conv2d(10, 20, 3, padding=1)\n        self.bn2 = nn.BatchNorm2d(20)\n        self.fc = nn.Linear(20 * 4 * 4, 2)\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = F.relu(x)\n        x = F.max_pool2d(x, 2, 2)\n        x = self.bn1(x)\n        x = self.conv2(x)\n        x = F.relu(x)\n        x = F.max_pool2d(x, 2, 2)\n        x = self.bn2(x)\n        x = self.fc(x.flatten(1))\n        return x\n
                            
\n\n

We can export this as follows

\n\n
                            import torch\nfrom torch import jit\n\nnet = Model()\n# ... train your model\n\n# put model in the mode you want to export (see bolded comment below)\nnet.eval()\n\n# print example output\nx = torch.ones(1, 3, 16, 16)\nprint(net(x))\n\n# create TorchScript by tracing the computation graph with an example input\nx = torch.ones(1, 3, 16, 16)\nnet_trace = jit.trace(net, x)\njit.save(net_trace, 'model.zip')\n
                            
\n\n

If successful then we can load our model into a new python script without using Model.

\n\n
                            import torch\nfrom torch import jit\n\nnet = jit.load('model.zip')\n\n# print example output (should be same as during save)\nx = torch.ones(1, 3, 16, 16)\nprint(net(x))\n
                            
\n\n

The loaded model is also trainable, however, the loaded model will only behave in the mode it was exported in. For example, in this case we exported our model in eval() mode, so using net.train() on the loaded module will have no effect.

\n\n
\n\n

Control-flow

\n\n

                            A model like this, whose behavior changes between passes, won't be properly exported. Only the code path evaluated during jit.trace will be exported.
                            

\n\n
                            from torch import nn\nimport torch.nn.functional as F\n\nclass Model(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.conv1 = nn.Conv2d(3, 10, 3, padding=1)\n        self.bn1 = nn.BatchNorm2d(10)\n        self.conv2 = nn.Conv2d(10, 20, 3, padding=1)\n        self.bn2 = nn.BatchNorm2d(20)\n        self.fca = nn.Linear(20 * 4 * 4, 2)\n        self.fcb = nn.Linear(20 * 4 * 4, 2)\n\n        self.use_a = True\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = F.relu(x)\n        x = F.max_pool2d(x, 2, 2)\n        x = self.bn1(x)\n        x = self.conv2(x)\n        x = F.relu(x)\n        x = F.max_pool2d(x, 2, 2)\n        x = self.bn2(x)\n        if self.use_a:\n            x = self.fca(x.flatten(1))\n        else:\n            x = self.fcb(x.flatten(1))\n        return x\n
                            
\n\n

We can still export the model as follows

\n\n
import torch\nfrom torch import jit\n\nnet = Model()\n# ... train your model\n\nnet.eval()\n\n# print example input\nx = torch.ones(1, 3, 16, 16)\nnet.use_a = True\nprint('a:', net(x))\nnet.use_a = False\nprint('b:', net(x))\n\n# save model\nx = torch.ones(1, 3, 16, 16)\nnet_trace = jit.trace(net, x)\njit.save(net_trace, \"model.ts\")\n
\n\n

In this case the example outputs are

\n\n
a: tensor([[-0.0959,  0.0657]], grad_fn=<AddmmBackward>)\nb: tensor([[ 0.1437, -0.0033]], grad_fn=<AddmmBackward>)\n
\n\n

However, loading

\n\n
import torch\nfrom torch import jit\n\nnet = jit.load(\"model.ts\")\n\n# will not match the output from before\nx = torch.ones(1, 3, 16, 16)\nnet.use_a = True\nprint('a:', net(x))\nnet.use_a = False\nprint('b:', net(x))\n
\n\n

results in

\n\n
a: tensor([[ 0.1437, -0.0033]], grad_fn=<DifferentiableGraphBackward>)\nb: tensor([[ 0.1437, -0.0033]], grad_fn=<DifferentiableGraphBackward>)\n
\n\n

Notice that the logic of the branch \"a\" is not present since net.use_a was False when jit.trace was called.

\n\n
\n\n

Scripting

\n\n

These limitations can be overcome but require some effort on your end. You can use the scripting functionality to ensure that all the logic is exported.

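                            A minimal sketch of that approach (the module and its names here are illustrative, not from the question): jit.script compiles the forward method itself, so the if/else survives export, unlike tracing.

                            ```python
                            import torch
                            from torch import jit, nn

                            class Gate(nn.Module):
                                def __init__(self):
                                    super().__init__()
                                    self.fca = nn.Linear(4, 2)
                                    self.fcb = nn.Linear(4, 2)
                                    self.use_a = True

                                def forward(self, x):
                                    # scripting keeps this branch; tracing would bake in a single path
                                    if self.use_a:
                                        return self.fca(x)
                                    else:
                                        return self.fcb(x)

                            net = jit.script(Gate())  # compiles forward, including the control-flow
                            x = torch.ones(1, 4)
                            net.use_a = True
                            a = net(x)
                            net.use_a = False
                            b = net(x)
                            print(torch.equal(a, b))  # False: both branches were exported
                            ```
                            
                            
                            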
\n", + "system": "" + }, + { + "instruction": "Expected object of device type cuda but got device type cpu in Pytorch", + "input": "", + "output": "

                            In the forward of your MSE_loss, you define a linear layer that is probably still on the CPU (you didn't provide an MCVE, so I can only assume):
                            

\n\n
self.linear = nn.Linear(output1.size()[0], 1)\n
\n\n

If you want to try and see if this is the problem, you can:

\n\n
self.linear = nn.Linear(output1.size()[0], 1).cuda()\n
\n\n

However, if self.d is in the CPU, then it would fail again. To solve this, you could move the linear to the same device of the self.d tensor by doing this:

\n\n
def forward(self, output1, output2, labels):\n    self.labels = labels         \n    self.linear = nn.Linear(output1.size()[0], 1)\n\n    if self.metric == 'cos':\n        self.d = F.cosine_similarity(output1, output2)\n    elif self.metric == 'l1':\n        self.d = torch.abs(output1-output2)\n    elif self.metric == 'l2':\n        self.d = torch.sqrt((output1-output2)**2)\n\n    # move self.linear to the correct device\n    self.linear = self.linear.to(self.d.device)\n
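                            As a side note, creating the nn.Linear inside forward also re-initializes its weights on every call, so it never actually trains. A common pattern (a sketch assuming a fixed, known feature size feat_dim; the class name is hypothetical) is to create the layer once in __init__, so model.to(device) moves everything together:

                            ```python
                            import torch
                            from torch import nn

                            class DistanceHead(nn.Module):  # hypothetical name, for illustration only
                                def __init__(self, feat_dim):
                                    super().__init__()
                                    # created once, so it is trained and moves with the module
                                    self.linear = nn.Linear(feat_dim, 1)

                                def forward(self, output1, output2):
                                    d = torch.abs(output1 - output2)  # the 'l1' metric from the question
                                    return self.linear(d)

                            device = 'cuda' if torch.cuda.is_available() else 'cpu'
                            net = DistanceHead(feat_dim=8).to(device)  # layer and inputs on the same device
                            o1 = torch.randn(4, 8, device=device)
                            o2 = torch.randn(4, 8, device=device)
                            out = net(o1, o2)
                            print(out.shape)  # torch.Size([4, 1])
                            ```
                            
                            
                            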
\n", + "system": "" + }, + { + "instruction": "ModuleNotFoundError: No module named 'past' when installing tensorboard with pytorch 1.2", + "input": "", + "output": "

Following this issue: https://github.com/pytorch/pytorch/issues/22389,

\n\n

Adding future to the list of requirements solved the problem

\n\n
# requirements.txt:\ntb-nightly\nfuture\n
\n\n

pip install -r requirements.txt

\n", + "system": "" + }, + { + "instruction": "How to save and load random number generator state in Pytorch?", + "input": "", + "output": "

You can use torch.get_rng_state and torch.set_rng_state

\n\n

When calling torch.get_rng_state you will get your random number generator state as a torch.ByteTensor.

\n\n

You can then save this tensor somewhere in a file and later you can load and use torch.set_rng_state to set the random number generator state.

\n\n
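                            A minimal round-trip sketch (CPU generator only; CUDA has its own torch.cuda.get_rng_state):

                            ```python
                            import torch

                            torch.manual_seed(0)
                            state = torch.get_rng_state()   # a torch.ByteTensor snapshot of the CPU RNG

                            a = torch.rand(3)               # advances the generator
                            torch.set_rng_state(state)      # rewind to the snapshot
                            b = torch.rand(3)               # replays the exact same draw
                            print(torch.equal(a, b))        # True

                            # the state is just a tensor, so it can be persisted like any other
                            torch.save(state, 'rng_state.pt')
                            torch.set_rng_state(torch.load('rng_state.pt'))
                            ```
                            
                            
                            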
\n\n

When using numpy you can of course do the same there using:
\nnumpy.random.get_state and numpy.random.set_state

\n", + "system": "" + }, + { + "instruction": "Pytorch inputs for nn.CrossEntropyLoss()", + "input": "", + "output": "

For the most part, the PyTorch documentation does an amazing job to explain the different functions; they usually do include expected input dimensions, as well as some simple examples.
\nYou can find the description for nn.CrossEntropyLoss() here.

\n\n

To walk through your specific example, let us start by looking at the expected input dimension:

\n\n
\n

Input: (N,C) where C = number of classes. [...]

\n
\n\n

To add to this, N generally refers to the batch size (number of samples). To compare this to what you currently have:

\n\n
outputs.shape\n>>> torch.Size([2])\n
\n\n

I.e. currently we only have an input dimension of (2,), and not (1,2), as is expected by PyTorch. We can alleviate this by adding a \"fake\" dimension to our current tensor, by simply using .unsqueeze() like so:

\n\n
outputs = binary_model(tensor_input).unsqueeze(dim=0)\noutputs.shape\n>>> torch.Size([1,2])\n
\n\n

Now that we got that, let us look at the expected input for the targets:

\n\n
\n

Target: (N) [...]

\n
\n\n

                            So we already got the right shape for this. If we try this, though, we will still encounter an error:
                            

\n\n
RuntimeError: Expected object of scalar type Long but got scalar type Float \n              for argument #2 'target'.\n
\n\n

Again, the error message is rather expressive. The problem here is that PyTorch tensors (by default) are interpreted as torch.FloatTensors, but the input should be integers (or Long) instead. We can simply do this by specifying the exact type during tensor creations:

\n\n
tensor_label = torch.LongTensor([dat[1]])\n
\n\n
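                            Putting both fixes together, a minimal end-to-end sketch (random logits stand in for the model output here):

                            ```python
                            import torch
                            import torch.nn as nn

                            criterion = nn.CrossEntropyLoss()

                            outputs = torch.randn(2)            # raw scores for one sample, 2 classes: shape (2,)
                            outputs = outputs.unsqueeze(dim=0)  # shape (1, 2): a batch of one
                            target = torch.LongTensor([1])      # shape (1,): integer class index, not float

                            loss = criterion(outputs, target)
                            print(loss.item())                  # a finite, positive scalar
                            ```
                            
                            
                            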

I'm using PyTorch 1.0 under Linux fyi.

\n", + "system": "" + }, + { + "instruction": "How do I install PyTorch v1.0.0+ on Google Colab?", + "input": "", + "output": "

try the following code snippet (it works equally for the runtime with or without gpu)

\n\n
!pip install -q torch==1.0.0 torchvision\n
\n\n

to check the version

\n\n
import torch\nprint(torch.__version__)\n
\n\n

here you have the version 1.0.0

\n\n

UPDATE

\n\n
!pip install torch\n
\n\n

Works fine now, as the most stable version is 1.0.0

\n", + "system": "" + }, + { + "instruction": "Confusion matrix and test accuracy for PyTorch Transfer Learning tutorial", + "input": "", + "output": "

Answer given by ptrblck of PyTorch community. Thanks a lot!

\n\n
nb_classes = 9\n\nconfusion_matrix = torch.zeros(nb_classes, nb_classes)\nwith torch.no_grad():\n    for i, (inputs, classes) in enumerate(dataloaders['val']):\n        inputs = inputs.to(device)\n        classes = classes.to(device)\n        outputs = model_ft(inputs)\n        _, preds = torch.max(outputs, 1)\n        for t, p in zip(classes.view(-1), preds.view(-1)):\n                confusion_matrix[t.long(), p.long()] += 1\n\nprint(confusion_matrix)\n
\n\n

To get the per-class accuracy:

\n\n
print(confusion_matrix.diag()/confusion_matrix.sum(1))\n
\n", + "system": "" + }, + { + "instruction": "AttributeError: 'Tensor' has no attribute: 'backwards'", + "input": "", + "output": "

It's loss.backward(), not loss.backwards().

\n", + "system": "" + }, + { + "instruction": "Using TPUs with PyTorch", + "input": "", + "output": "

Currently, it's not possible to use Cloud TPU with PyTorch since it's designed specifically for Tensorflow.

\n\n

But, according to this product news posted three days ago in the Google Cloud blog, \"engineers on Google\u2019s TPU team are actively collaborating with core PyTorch developers to connect PyTorch to Cloud TPUs\".

\n", + "system": "" + }, + { + "instruction": "Pytorch RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'", + "input": "", + "output": "

I know where the problem is.

\n\n

                            y should be of dtype torch.int64 (class indices), without one-hot encoding.\nCrossEntropyLoss() handles the encoding internally (while out is the predicted probability distribution over classes, in a one-hot-like format).
                            

\n\n
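                            A small sketch of the expected shapes and dtypes (random data, for illustration):

                            ```python
                            import torch
                            import torch.nn as nn

                            out = torch.randn(4, 10)                            # logits: (batch, num_classes)
                            y = torch.randint(0, 10, (4,), dtype=torch.int64)   # class indices, NOT one-hot

                            loss = nn.CrossEntropyLoss()(out, y)
                            print(loss.item())
                            ```
                            
                            
                            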

It can run now!

\n", + "system": "" + }, + { + "instruction": "tensorflow stop_gradient equivalent in pytorch", + "input": "", + "output": "

                            Try x.detach(); it is PyTorch's equivalent of tf.stop_gradient.
                            

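                            A quick sketch of the difference:

                            ```python
                            import torch

                            x = torch.ones(3, requires_grad=True)
                            y = (x * 2).detach()   # cut from the graph, like tf.stop_gradient
                            z = x * 2              # still tracked by autograd

                            print(y.requires_grad, z.requires_grad)  # False True
                            ```
                            
                            
                            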
\n", + "system": "" + }, + { + "instruction": "PyTorch : predict single example", + "input": "", + "output": "

                            The code you posted is a simple demo that tries to reveal the inner mechanism of such deep learning frameworks. These frameworks, including PyTorch, Keras, Tensorflow and many more, automatically handle the forward calculation and the tracking and applying of gradients for you, as long as you have defined the network structure. However, the code you showed still tries to do all of this manually. That's why predicting one example feels cumbersome: you are still doing it from scratch.
                            

\n

In practice, we will define a model class inherited from torch.nn.Module and initialize all the network components (like neural layer, GRU, LSTM layer etc.) in the __init__ function, and define how these components interact with the network input in the forward function.

\n

                            Taking the example from the page you've provided:
                            

\n
                            # Code in file nn/two_layer_net_module.py\nimport torch\n\nclass TwoLayerNet(torch.nn.Module):\n    def __init__(self, D_in, H, D_out):\n        """\n        In the constructor we instantiate two nn.Linear modules and\n        assign them as member variables.\n        """\n        super(TwoLayerNet, self).__init__()\n        self.linear1 = torch.nn.Linear(D_in, H)\n        self.linear2 = torch.nn.Linear(H, D_out)\n\n    def forward(self, x):\n        """\n        In the forward function we accept a Tensor of input data and we must return\n        a Tensor of output data. We can use Modules defined in the constructor as\n        well as arbitrary (differentiable) operations on Tensors.\n        """\n        h_relu = self.linear1(x).clamp(min=0)\n        y_pred = self.linear2(h_relu)\n        return y_pred\n\n# N is batch size; D_in is input dimension;\n# H is hidden dimension; D_out is output dimension.\nN, D_in, H, D_out = 64, 1000, 100, 10\n\n# Create random Tensors to hold inputs and outputs\nx = torch.randn(N, D_in)\ny = torch.randn(N, D_out)\n\n# Construct our model by instantiating the class defined above.\nmodel = TwoLayerNet(D_in, H, D_out)\n\n# Construct our loss function and an Optimizer. The call to model.parameters()\n# in the SGD constructor will contain the learnable parameters of the two\n# nn.Linear modules which are members of the model.\nloss_fn = torch.nn.MSELoss(reduction='sum')\noptimizer = torch.optim.SGD(model.parameters(), lr=1e-4)\nfor t in range(500):\n    # Forward pass: Compute predicted y by passing x to the model\n    y_pred = model(x)\n\n    # Compute and print loss\n    loss = loss_fn(y_pred, y)\n    print(t, loss.item())\n\n    # Zero gradients, perform a backward pass, and update the weights.\n    optimizer.zero_grad()\n    loss.backward()\n    optimizer.step()\n
                            
\n

                            The code defines a model named TwoLayerNet: it initializes two linear layers in the __init__ function and further defines how these two layers interact with the input x in the forward function.
                            

\n

Having the model defined, we can perform a single feed-forward operation as follows. Say xu contains a single unseen example:

\n
xu = torch.randn(D_in)\n
\n

Then this performs the prediction:

\n
y_pred = model(torch.atleast_2d(xu))\n
\n", + "system": "" + }, + { + "instruction": "Resize PyTorch Tensor", + "input": "", + "output": "

You can instead choose to go with tensor.reshape(new_shape) or torch.reshape(tensor, new_shape) as in:

\n\n
# a `Variable` tensor\nIn [15]: ten = torch.randn(6, requires_grad=True)\n\n# this would throw RuntimeError error\nIn [16]: ten.resize_(2, 3)\n---------------------------------------------------------------------------\nRuntimeError                              Traceback (most recent call last)\n<ipython-input-16-094491c46baa> in <module>()\n----> 1 ten.resize_(2, 3)\n\nRuntimeError: cannot resize variables that require grad\n
\n\n
\n\n

The above RuntimeError can be resolved or avoided by using tensor.reshape(new_shape)

\n\n
In [17]: ten.reshape(2, 3)\nOut[17]: \ntensor([[-0.2185, -0.6335, -0.0041],\n        [-1.0147, -1.6359,  0.6965]])\n\n# yet another way of changing tensor shape\nIn [18]: torch.reshape(ten, (2, 3))\nOut[18]: \ntensor([[-0.2185, -0.6335, -0.0041],\n        [-1.0147, -1.6359,  0.6965]])\n
\n", + "system": "" + }, + { + "instruction": "UserWarning: Implicit dimension choice for log_softmax has been deprecated", + "input": "", + "output": "

                            You will almost always want the last dimension when you compute the cross-entropy, so your line may look like:
                            

\n\n
torch.nn.functional.log_softmax(x, -1)\n
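                            Passing dim explicitly silences the warning and makes the intent clear; each row then normalizes over the class dimension:

                            ```python
                            import torch
                            import torch.nn.functional as F

                            x = torch.randn(2, 5)            # (batch, classes)
                            p = F.log_softmax(x, dim=-1)     # normalize over the last (class) dimension

                            # exponentiating gives probabilities that sum to 1 per row
                            print(p.exp().sum(dim=-1))
                            ```
                            
                            
                            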
\n", + "system": "" + }, + { + "instruction": "How to use groups parameter in PyTorch conv2d function", + "input": "", + "output": "

If you want to apply a per-channel convolution then your out-channel should be the same as your in-channel. This is expected, considering each of your input channels creates a separate output channel that it corresponds to.

\n\n

In short, this will work

\n\n
import torch\nimport torch.nn.functional as F\n\nfilters = torch.autograd.Variable(torch.randn(3,1,3,3))\ninputs = torch.autograd.Variable(torch.randn(1,3,10,10))\nout = F.conv2d(inputs, filters, padding=1, groups=3)\n
\n\n

whereas, filters of size (2, 1, 3, 3) or (1, 1, 3, 3) will not work.

\n\n

Additionally, you can also make your out-channel a multiple of in-channel. This works for instances where you want to have multiple convolution filters for each input channel.

\n\n
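                            For example, an out-channel count of 6 (= 2 x 3 input channels) gives two filters per input channel:

                            ```python
                            import torch
                            import torch.nn.functional as F

                            inputs = torch.randn(1, 3, 10, 10)
                            # out-channels = 2 * in-channels, one input channel per group
                            filters = torch.randn(6, 1, 3, 3)

                            out = F.conv2d(inputs, filters, padding=1, groups=3)
                            print(out.shape)  # torch.Size([1, 6, 10, 10])
                            ```
                            
                            
                            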

                            However, this only makes sense if it is an exact multiple. If not, older PyTorch versions fell back to the closest smaller multiple (for example, a filter of size (4, 1, 3, 3) or (5, 1, 3, 3) would result in an out-channel of size 3), while current versions raise an error when the out-channel count is not divisible by groups.
                            

\n", + "system": "" + }, + { + "instruction": "Any pytorch tools to monitor neural network's training?", + "input": "", + "output": "

PyTorch 1.1.0 supports TensorBoard natively with torch.utils.tensorboard. The API is very similar to tensorboardX. See the documentation for more details.

\n", + "system": "" + }, + { + "instruction": "How padded sequences given as packed sequences are dealt by RNN in pytorch?", + "input": "", + "output": "

For the second question: hidden states at padded sequences will not be computed.

\n\n

To answer how is that happening, let's first see what pack_padded_sequence does for us:

\n\n
from torch.nn.utils.rnn import pad_sequence, pad_packed_sequence, pack_padded_sequence\n\nraw = [ torch.ones(25, 300) / 2, \n        torch.ones(22, 300) / 2.3, \n        torch.ones(15, 300) / 3.2 ]\npadded = pad_sequence(raw)  # size: [25, 3, 300]\n\nlengths = torch.as_tensor([25, 22, 15], dtype=torch.int64)\npacked = pack_padded_sequence(padded, lengths)\n
\n\n

                            so far we randomly created three tensors with different lengths (timesteps in the context of RNN), and we first pad them to the same length, then pack them. Now if we run
                            

\n\n
print(padded.size())\nprint(packed.data.size()) # packed.data refers to the \"packed\" tensor\n
\n\n

we will see:

\n\n
torch.Size([25, 3, 300])\ntorch.Size([62, 300])\n
\n\n

                            Obviously, 62 does not come from 25 * 3. What pack_padded_sequence does is keep only the meaningful timesteps of each batch entry, according to the lengths tensor we passed to it (i.e. if we passed [25, 25, 25], the size of packed.data would be [75, 300] even though the raw tensor does not change). In short, the RNN will not even see the padded timesteps when given the output of pack_padded_sequence
                            

\n\n

                            Now let's see the difference when we pass padded and packed to the RNN
                            

\n\n
rnn = torch.nn.RNN(input_size=300, hidden_size=2)\npadded_outp, padded_hn = rnn(padded) # size: [25, 3, 2] / [1, 3, 2]\npacked_outp, packed_hn = rnn(packed) # 'PackedSequence' Obj / [1, 3, 2]\nundo_packed_outp, _ = pad_packed_sequence(packed_outp)\n\n# return \"h_n\"\nprint(padded_hn) # tensor([[[-0.2329, -0.6179], [-0.1158, -0.5430],[ 0.0998, -0.3768]]]) \nprint(packed_hn) # tensor([[[-0.2329, -0.6179], [ 0.5622,  0.1288], [ 0.5683,  0.1327]]]\n\n# the output of last timestep (the 25-th timestep)\nprint(padded_outp[-1]) # tensor([[[-0.2329, -0.6179], [-0.1158, -0.5430],[ 0.0998, -0.3768]]]) \nprint(undo_packed_outp.data[-1]) # tensor([[-0.2329, -0.6179], [ 0.0000,  0.0000], [ 0.0000,  0.0000]]\n
\n\n

                            The values of padded_hn and packed_hn differ because the RNN DOES compute the padded timesteps for padded but not for packed (the PackedSequence object), which can also be observed from the last hidden state: all three batch entries in padded got a non-zero last hidden state even if their lengths are less than 25. But for packed, the last hidden state of the shorter sequences is not computed (i.e. it stays 0)
                            

\n\n

p.s. another observation:

\n\n
print([(undo_packed_outp[:, i, :].sum(-1) != 0).sum() for i in range(3)])\n
\n\n

                            would give us [tensor(25), tensor(22), tensor(15)], which aligns with the actual lengths of our inputs.
                            

\n", + "system": "" + }, + { + "instruction": "when is a pytorch custom function needed (rather than only a module)?", + "input": "", + "output": "

This information is gathered and summarised from the official PyTorch Documentaion.

\n\n

                            torch.autograd.Function really lies at the heart of the autograd package in PyTorch. Any graph you build in PyTorch and any operation you conduct on Variables in PyTorch is based on a Function. Any Function requires an __init__(), forward() and backward() method (see more here: http://pytorch.org/docs/notes/extending.html). This enables PyTorch to compute results and compute gradients for Variables.
                            

\n\n

                            nn.Module(), in contrast, is really just a convenience for organising your model, your different layers, and so on. For example, it organises all the trainable parameters in your model in .parameters() and allows you to add another layer to a model easily. It is not the place where you define a backward method, because in the forward() method, you're supposed to use subclasses of Function(), for which you have already defined backward(). Hence, if you have specified the order of operations in forward(), PyTorch already knows how to back-propagate gradients.
                            

\n\n

Now, when should you use what?

\n\n

                            If you have an operation that is just a composition of existing implemented functions in PyTorch (like your thing above), there's really no point adding any subclass to Function() yourself, because you can just stack operations up and build a dynamic graph. It is, however, a sensible idea to bunch these operations together. If any operation involves trainable parameters (for example a linear layer of a neural network), you should subclass nn.Module() and bunch your operations together in the forward method. This allows you to easily access parameters (as outlined above) for use with torch.optim, etc. If you don't have any trainable parameters, I would probably still bunch them together, but a standard Python function, where you take care of the instantiation of each operation you use, would be sufficient.
                            

\n\n

                            If you have a new custom operation (e.g. a new stochastic layer with some complicated sampling procedure), you should subclass Function() and define __init__(), forward() and backward() to tell PyTorch how to compute results and how to compute gradients when you use this operation. Afterwards, you should either create a functional version that takes care of instantiating the function and using your operation, or create a module if your operation has trainable parameters. Again, you can read more about this in the link above.
                            

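                            Note that current PyTorch versions define custom Functions with static forward/backward methods rather than __init__; a minimal sketch in that modern style:

                            ```python
                            import torch

                            class Exp(torch.autograd.Function):
                                @staticmethod
                                def forward(ctx, x):
                                    y = x.exp()
                                    ctx.save_for_backward(y)  # stash what backward() will need
                                    return y

                                @staticmethod
                                def backward(ctx, grad_output):
                                    y, = ctx.saved_tensors
                                    return grad_output * y    # d/dx exp(x) = exp(x)

                            x = torch.randn(3, requires_grad=True)
                            y = Exp.apply(x)        # custom Functions are invoked via .apply
                            y.sum().backward()
                            print(torch.allclose(x.grad, x.exp()))  # True
                            ```
                            
                            
                            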
\n", + "system": "" + }, + { + "instruction": "UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR", + "input": "", + "output": "

In version 2.3.0 of pytorch, it prints this unwanted warning even if no exception is thrown: see https://github.com/pytorch/pytorch/pull/125790

\n

As you mentioned, though, the training is processing correctly. If you want to get rid of this warning, you should revert to torch 2.2.2 (you then also have to revert torchvision to 0.17.2):

\n
pip3 install torchvision==0.17.2\npip3 install torch==2.2.2\n
\n", + "system": "" + }, + { + "instruction": "How retain_grad() in pytorch works? I found its position changes the grad result", + "input": "", + "output": "

Okay so what's going on is really weird.

\n

                            What .retain_grad() essentially does is make a non-leaf tensor behave like a leaf tensor with respect to gradients, so that it gets a populated .grad attribute after backward() (by default, PyTorch only populates .grad for leaf tensors).
                            

\n

Hence, in your first example, after calling y.retain_grad(), it basically converted y into a leaf tensor with an accessible .grad attribute.

\n
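                            A tiny sketch of that first case:

                            ```python
                            import torch

                            a = torch.tensor(2.0, requires_grad=True)
                            y = a * 3           # non-leaf: its .grad is not kept by default
                            y.retain_grad()     # ask autograd to populate y.grad anyway
                            loss = y ** 2
                            loss.backward()

                            print(y.grad)  # d(loss)/dy = 2*y = 12
                            print(a.grad)  # d(loss)/da = 2*y*3 = 36
                            ```
                            
                            
                            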

However, in your second example, you initially converted the entire y tensor into a leaf tensor; then, you created a non-leaf tensor (y[1]) within your leaf tensor (y), which is what caused the confusion.

\n
y = torch.zeros((10))  # y is a non-leaf tensor\n\ny[0] = a  # y[0] is a non-leaf tensor\ny.retain_grad()  # y is a leaf tensor (including y[1])\ny[1] = y[0] * 2  # y[1] is a non-leaf tensor, BUT y[0], y[2], y[3], ..., y[9] are all leaf tensors!\n
\n

The confusing part is:

\n

y[1] after calling y.retain_grad() is now a leaf tensor with a .grad attribute. However, y[1] after the computation (y[1] = y[0] * 2) is now not a leaf tensor with a .grad attribute; it is now treated as a new non-leaf variable/tensor.

\n

                            Therefore, when calling loss.backward(), the chain rule of the loss w.r.t. y, and in particular w.r.t. the leaf y[1], now looks something like this:
                            

\n
\n

\"Chain

\n", + "system": "" + }, + { + "instruction": "Is GradScaler necessary with Mixed precision training with pytorch?", + "input": "", + "output": "

Short answer: yes, your model may fail to converge without GradScaler().

\n

There are three basic problems with using FP16:

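                            The three problems usually cited are: weight updates much smaller than the weight itself underflow when added (hence FP32 master weights), small gradients flush to zero in FP16 (the part GradScaler() addresses, by scaling the loss up before backward() and unscaling the gradients before the step), and large values overflow to inf/NaN (GradScaler() also skips optimizer steps whose gradients contain inf/NaN). A minimal sketch of the usual loop; the enabled flag makes the scaler a no-op on CPU, so the same code runs everywhere:

                            ```python
                            import torch

                            model = torch.nn.Linear(10, 1)
                            opt = torch.optim.SGD(model.parameters(), lr=0.1)
                            use_cuda = torch.cuda.is_available()
                            scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)  # no-op scaler on CPU

                            x, y = torch.randn(8, 10), torch.randn(8, 1)
                            loss = torch.nn.functional.mse_loss(model(x), y)

                            scaler.scale(loss).backward()  # scale the loss so tiny grads survive FP16
                            scaler.step(opt)               # unscales grads; skips the step on inf/NaN
                            scaler.update()                # adapt the scale factor for the next iteration
                            ```
                            
                            
                            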
\n\n", + "system": "" + }, + { + "instruction": "HuggingFace: ValueError: expected sequence of length 165 at dim 1 (got 128)", + "input": "", + "output": "

                            I fixed this by changing the tokenize function to:
                            

\n
def tokenize_function(examples):\n    return tokenizer(examples['text'], padding='max_length', truncation=True, max_length=max_length)\n
\n

(note the padding argument). Also, I used a data collator like so:

\n
data_collator = DataCollatorForLanguageModeling(\n    tokenizer=tokenizer, mlm=True, mlm_probability=0.15\n)\ntrainer = Trainer(\n        model=model,\n        args=training_args,\n        data_collator=data_collator,\n        train_dataset=train_dataset,\n        eval_dataset=eval_dataset\n)\n
\n", + "system": "" + }, + { + "instruction": "Understanding of Pytorch NLLLOSS", + "input": "", + "output": "

                            Indeed, no log is applied when computing the result of nn.NLLLoss, so this can be a little confusing. However, I believe the reason it was named this way is that it expects to receive log-probabilities:
                            

\n
\n

The input given through a forward call is expected to contain log-probabilities of each class. - docs

\n
\n

In the end it does not make much sense to have it in the name since you might as well want to apply this function on non-log-probabilities...

\n", + "system": "" + }, + { + "instruction": "What does the difference between 'torch.backends.cudnn.deterministic=True' and 'torch.set_deterministic(True)'?", + "input": "", + "output": "

torch.backends.cudnn.deterministic=True only applies to CUDA convolution operations, and nothing else. Therefore, no, it will not guarantee that your training process is deterministic, since you're also using torch.nn.MaxPool3d, whose backward function is nondeterministic for CUDA.

\n

torch.set_deterministic(), on the other hand, affects all the normally-nondeterministic operations listed here (note that set_deterministic has been renamed to use_deterministic_algorithms in 1.8): https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html?highlight=use_deterministic#torch.use_deterministic_algorithms

\n

As the documentation states, some of the listed operations don't have a deterministic implementation. So if torch.use_deterministic_algorithms(True) is set, they will throw an error.

\n
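                            A quick check of the renamed API (assuming PyTorch >= 1.8):

                            ```python
                            import torch

                            torch.use_deterministic_algorithms(True)   # formerly torch.set_deterministic(True)
                            print(torch.are_deterministic_algorithms_enabled())  # True

                            # ops without a deterministic implementation now raise a RuntimeError
                            # instead of silently running nondeterministically
                            torch.use_deterministic_algorithms(False)
                            ```
                            
                            
                            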

If you need to use nondeterministic operations like torch.nn.MaxPool3d, then, at the moment, there is no way for your training process to be deterministic--unless you write a custom deterministic implementation yourself. Or you could open a GitHub issue requesting a deterministic implementation: https://github.com/pytorch/pytorch/issues

\n

In addition, you might want to check out this page: https://pytorch.org/docs/stable/notes/randomness.html
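A small sketch of how the two switches differ in scope (assuming a recent PyTorch where the rename has already landed):

```python
import torch

# use_deterministic_algorithms is the 1.8+ name for set_deterministic; it
# switches every op with a deterministic implementation into deterministic
# mode and makes the rest raise instead of silently running nondeterministically.
torch.use_deterministic_algorithms(True)
assert torch.are_deterministic_algorithms_enabled()

# The cudnn flag is much narrower: it only covers CUDA convolutions.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

torch.use_deterministic_algorithms(False)  # restore the default
```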

",
    "system": ""
  },
  {
    "instruction": "How to handle odd resolutions in Unet architecture PyTorch",
    "input": "",
    "output": "

It is a very common problem in segmentation networks, where skip-connections are often involved in the decoding process. Networks usually (depending on the actual architecture) require an input whose side lengths are integer multiples of the largest stride (8, 16, 32, etc.).

There are two main ways:

1. Resize input to the nearest feasible size.
2. Pad the input to the next larger feasible size.

I prefer (2) because (1) can cause small changes at the pixel level for all the pixels, leading to unnecessary blurriness. Note that we usually need to recover the original shape afterward in both methods.

My favorite code snippet for this task (symmetric padding for height/width):

import torch
import torch.nn.functional as F

def pad_to(x, stride):
    h, w = x.shape[-2:]

    if h % stride > 0:
        new_h = h + stride - h % stride
    else:
        new_h = h
    if w % stride > 0:
        new_w = w + stride - w % stride
    else:
        new_w = w
    lh, uh = int((new_h-h) / 2), int(new_h-h) - int((new_h-h) / 2)
    lw, uw = int((new_w-w) / 2), int(new_w-w) - int((new_w-w) / 2)
    pads = (lw, uw, lh, uh)

    # zero-padding by default.
    # See others at https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.pad
    out = F.pad(x, pads, "constant", 0)

    return out, pads

def unpad(x, pad):
    if pad[2]+pad[3] > 0:
        x = x[:,:,pad[2]:-pad[3],:]
    if pad[0]+pad[1] > 0:
        x = x[:,:,:,pad[0]:-pad[1]]
    return x

A test snippet:

x = torch.zeros(4, 3, 1080, 1920) # Raw data
x_pad, pads = pad_to(x, 16) # Padded data, feed this to your network
x_unpad = unpad(x_pad, pads) # Un-pad the network output to recover the original shape

print('Original: ', x.shape)
print('Padded: ', x_pad.shape)
print('Recovered: ', x_unpad.shape)

Output:

Original:  torch.Size([4, 3, 1080, 1920])
Padded:  torch.Size([4, 3, 1088, 1920])
Recovered:  torch.Size([4, 3, 1080, 1920])

Reference: https://github.com/seoungwugoh/STM/blob/905f11492a6692dd0d0fa395881a8ec09b211a36/helpers.py#L33

",
    "system": ""
  },
  {
    "instruction": "Difference between versions 9.2,10.1,10.2,11.0 of cuda for PyTorch 1.7",
    "input": "",
    "output": "

PyTorch doesn't use the system's CUDA installation when installed from a package manager (either conda or pip). Instead, it comes with a copy of the CUDA runtime and will work as long as your system is compatible with that version of PyTorch. By compatible I mean that the GPU supports the particular version of CUDA and the GPU's compute capability is one that the PyTorch binaries (for the selected version) are compiled with support for.


Therefore the version reported by nvcc (the version installed on the system) is basically irrelevant. The version you should be looking at is

import torch
# print the version of CUDA being used by pytorch
print(torch.version.cuda)

The only time the system's version of CUDA should matter is if you compiled PyTorch from source.

As for which version of CUDA to select: you will probably want the newest version that your system is compatible with, because newer versions generally include performance improvements over older ones.
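A quick way to inspect what your installed binary actually ships with and whether the local GPU can use it (a small sketch, not part of the original answer):

```python
import torch

# The CUDA runtime bundled with this PyTorch build (not the system's nvcc)
print(torch.version.cuda)

# Quick compatibility check for the local GPU, if any
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
    print(torch.cuda.get_device_capability(0))  # compute capability, e.g. (7, 5)
else:
    print("No usable CUDA device visible to this build")
```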

",
    "system": ""
  },
  {
    "instruction": "LSTM Autoencoder problems",
    "input": "",
    "output": "

Okay, after some debugging I think I know the reasons.


TLDR

- Increase the model's capacity (larger HIDDEN_SIZE)
- Use differences between consecutive timesteps instead of raw values (SUBTRACT)

Analysis


Code used


Let's start with the code (model is the same):

import seaborn as sns
import matplotlib.pyplot as plt

def get_data(subtract: bool = False):
    # (1, 14, 5)
    input_tensor = torch.tensor(
        [
            [0.5122, 0.0360, 0.7027, 0.0721, 0.1892],
            [0.5177, 0.0833, 0.6574, 0.1204, 0.1389],
            [0.4643, 0.0364, 0.6242, 0.1576, 0.1818],
            [0.4375, 0.0133, 0.5733, 0.1867, 0.2267],
            [0.4838, 0.0625, 0.6042, 0.1771, 0.1562],
            [0.4804, 0.0175, 0.6798, 0.1053, 0.1974],
            [0.5030, 0.0445, 0.6712, 0.1438, 0.1404],
            [0.4987, 0.0490, 0.6699, 0.1536, 0.1275],
            [0.4898, 0.0388, 0.6704, 0.1330, 0.1579],
            [0.4711, 0.0390, 0.5877, 0.1532, 0.2201],
            [0.4627, 0.0484, 0.5269, 0.1882, 0.2366],
            [0.5043, 0.0807, 0.6646, 0.1429, 0.1118],
            [0.4852, 0.0606, 0.6364, 0.1515, 0.1515],
            [0.5279, 0.0629, 0.6886, 0.1514, 0.0971],
        ]
    ).unsqueeze(0)

    if subtract:
        initial_values = input_tensor[:, 0, :]
        input_tensor -= torch.roll(input_tensor, 1, 1)
        input_tensor[:, 0, :] = initial_values
    return input_tensor


if __name__ == "__main__":
    torch.manual_seed(0)

    HIDDEN_SIZE = 10
    SUBTRACT = False

    input_tensor = get_data(SUBTRACT)
    model = LSTMEncoderDecoder(input_tensor.shape[-1], HIDDEN_SIZE)
    optimizer = torch.optim.Adam(model.parameters())
    criterion = torch.nn.MSELoss()
    for i in range(1000):
        outputs = model(input_tensor)
        loss = criterion(outputs, input_tensor)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        print(f"{i}: {loss}")
        if loss < 1e-4:
            break

    # Plotting
    sns.lineplot(data=outputs.detach().numpy().squeeze())
    sns.lineplot(data=input_tensor.detach().numpy().squeeze())
    plt.show()

What it does:

- Builds the input data (optionally as differences between consecutive timesteps)
- Trains the model until the loss drops below 1e-4, for at most 1000 iterations
- Plots the outputs against the targets

We will only vary HIDDEN_SIZE and SUBTRACT parameters!


NO SUBTRACT, SMALL MODEL


In this case we get a straight line. The model is unable to fit and grasp the phenomena presented in the data (hence the flat lines you mentioned).

[result plot omitted]

1000 iterations limit reached


SUBTRACT, SMALL MODEL


The targets are now far from flat lines, but the model is unable to fit them due to insufficient capacity.

[result plot omitted]

1000 iterations limit reached


NO SUBTRACT, LARGER MODEL


It got a lot better and our target was hit after 942 steps. No more flat lines; model capacity seems quite fine (for this single example!).

[result plot omitted]

SUBTRACT, LARGER MODEL


Although the graph does not look that pretty, we got to the desired loss after only 215 iterations.

[result plot omitted]

Finally

In short: increase model capacity, and use differences of timesteps instead of raw values.

Questions


Okay, question 1: You are saying that for variable x in the time series, I should train the model to learn x[i] - x[i-1] rather than the value of x[i]? Am I correctly interpreting?


Yes, exactly. Differencing removes the urge of the neural network to base its predictions on the past timestep too much (by simply taking the last value and maybe changing it a little).


Question 2: You said my calculations for zero bottleneck were incorrect. But, for example, let's say I'm using a simple dense network as an auto encoder. Getting the right bottleneck indeed depends on the data. But if you make the bottleneck the same size as the input, you get the identity function.


Yes, assuming that there is no non-linearity involved, which makes things harder (see here for a similar case). In the case of LSTMs there are non-linearities; that's one point.


Another one is that we are accumulating timesteps into a single encoder state. So essentially we would have to accumulate timestep identities into a single hidden and cell state, which is highly unlikely.


One last point: depending on the length of the sequence, LSTMs are prone to forgetting some of the least relevant information (that is what they were designed to do, not only to remember everything), which makes learning the identity even more unlikely.


Is num_features * num_timesteps not a bottleneck of the same size as the input, and therefore shouldn't it facilitate the model learning the identity?


It is, but it assumes you have num_timesteps for each data point, which is rarely the case (though it might be here). Why the identity is hard to learn with non-linearities in the network was answered above.


One last point about identity functions: if they were actually easy to learn, ResNet architectures would be unlikely to succeed, since a network could converge to the identity and make "small fixes" to the output without them, which is not the case.


I'm curious about the statement: "always use difference of timesteps instead of timesteps". It seems to have some normalizing effect by bringing all the features closer together, but I don't understand why this is key. Having a larger model seemed to be the solution, and the subtraction is just helping.


The key here was, indeed, increasing model capacity. The subtraction trick really depends on the data. Let's imagine an extreme situation: the values hover around 1000, changing by 1 or less between timesteps.

What would the neural network do (what is easiest here)? It would probably discard this change of 1 or smaller as noise and just predict 1000 for all of them (especially if some regularization is in place), as being off by 1/1000 is not much.


What if we subtract? The whole neural-network loss is then in the [0, 1] range for each timestep instead of [0, 1001], hence it is more severe to be wrong.


And yes, come to think of it, it is connected to normalization in some sense.
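To illustrate the extreme situation numerically (a hypothetical sketch with invented numbers, not from the original answer):

```python
import torch

# Hypothetical series: values hover around 1000 and drift by at most
# ~0.01 per step, mimicking the extreme case described above.
torch.manual_seed(0)
x = 1000.0 + torch.rand(100).cumsum(0) * 0.01

# Predicting a constant 1000 everywhere is off by a tiny relative amount,
# so a loss on raw values barely punishes this shortcut.
rel_err = ((x - 1000.0).abs() / x).max()
print(rel_err.item())  # well below 1%

# After differencing, the targets live on the scale of the changes, and a
# constant prediction of 0 would be wrong at every timestep.
dx = x[1:] - x[:-1]
print(dx.abs().max().item())
```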

",
    "system": ""
  },
  {
    "instruction": "How to compute Hessian of the loss w.r.t. the parameters in PyTorch using autograd.grad",
    "input": "",
    "output": "

PyTorch has recently-ish added a higher-level functional API to torch.autograd, which provides torch.autograd.functional.hessian(func, inputs, ...) to directly evaluate the Hessian of the scalar function func with respect to its arguments at a location specified by inputs, a tuple of tensors corresponding to the arguments of func. hessian itself does not support automatic differentiation, I believe.


Note, however, that as of March 2021 it is still in beta.


Full example using torch.autograd.functional.hessian to create a score test for a non-zero mean (as a (bad) alternative to the one-sample t-test):

import numpy as np
import torch, torchvision
from torch.autograd import Variable, grad
import torch.distributions as td
import math
from torch.optim import Adam
import scipy.stats


x_data = torch.randn(100)+0.0 # observed data (here sampled under H0)

N = x_data.shape[0] # number of observations

mu_null = torch.zeros(1)
sigma_null_hat = Variable(torch.ones(1), requires_grad=True)

def log_lik(mu, sigma):
  return td.Normal(loc=mu, scale=sigma).log_prob(x_data).sum()

# Find theta_null_hat by some gradient descent algorithm (in this case a closed-form expression would be trivial to obtain (see below)):
opt = Adam([sigma_null_hat], lr=0.01)
for epoch in range(2000):
    opt.zero_grad() # reset gradient accumulator of optimizer
    loss = - log_lik(mu_null, sigma_null_hat) # compute log likelihood with current value of sigma_null_hat  (= Forward pass)
    loss.backward() # compute gradients (= Backward pass)
    opt.step()      # update sigma_null_hat

print(f'parameter fitted under null: sigma: {sigma_null_hat}, expected: {torch.sqrt((x_data**2).mean())}')
#> parameter fitted under null: sigma: tensor([0.9260], requires_grad=True), expected: 0.9259940385818481

theta_null_hat = (mu_null, sigma_null_hat)

U = torch.tensor(torch.autograd.functional.jacobian(log_lik, theta_null_hat)) # Jacobian (= vector of partial derivatives of log likelihood w.r.t. the parameters (of the full/alternative model)) = score
I = -torch.tensor(torch.autograd.functional.hessian(log_lik, theta_null_hat)) / N # estimate of the Fisher information matrix
S = torch.t(U) @ torch.inverse(I) @ U / N # test statistic, often named "LM" (as in Lagrange multiplier), would be zero at the maximum likelihood estimate

pval_score_test = 1 - scipy.stats.chi2(df = 1).cdf(S) # S asymptotically follows a chi^2 distribution with degrees of freedom equal to the number of parameters fixed under H0
print(f'p-value Chi^2-based score test: {pval_score_test}')
#> p-value Chi^2-based score test: 0.9203232752568568

# comparison with Student's t-test:
pval_t_test = scipy.stats.ttest_1samp(x_data, popmean = 0).pvalue
print(f'p-value Student\'s t-test: {pval_t_test}')
#> p-value Student's t-test: 0.9209265268946605
",
    "system": ""
  },
  {
    "instruction": "Windows keeps crashing when trying to install PyTorch via pip",
    "input": "",
    "output": "

After troubleshooting and a lot of restarts, it seems the issue arose when pip tried to load a pre-downloaded file. Essentially, the first time I ran the installation command, pip downloaded the files for PyTorch but did not install it due to a user-privilege issue. The fix is to add --no-cache-dir to the pip install command. This overrides the cache (the pre-downloaded files) and downloads everything all over again.

For me specifically, I also needed to add --user.


In other words, the command went from

pip install torch===1.7.0+cu110 torchvision===0.8.1+cu110 torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html

to

pip --no-cache-dir install torch===1.7.0+cu110 torchvision===0.8.1+cu110 torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html --user
",
    "system": ""
  },
  {
    "instruction": "How to get the total number of batch iteration from pytorch dataloader?",
    "input": "",
    "output": "

len(dataloader) returns the total number of batches. It depends on the __len__ function of your dataset, so make sure it is set correctly.
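For example (a minimal sketch with a made-up dataset), the batch count is derived from the dataset length and the batch size:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.zeros(100, 3))   # __len__ returns 100
loader = DataLoader(dataset, batch_size=32)

print(len(loader))  # ceil(100 / 32) = 4 batches
print(len(DataLoader(dataset, batch_size=32, drop_last=True)))  # floor -> 3
```

Note that drop_last changes whether the final partial batch is counted.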

",
    "system": ""
  }
]