diff --git "a/data/dataset.json" "b/data/dataset.json" --- "a/data/dataset.json" +++ "b/data/dataset.json" @@ -29560,5 +29560,3305 @@ "input": "", "output": "

You can't interact with shell commands after you launch them, so you have to either pass the --yes flag:

\n\n
!pip install keras==2.1.2 --yes\n
\n\n

or you can execute your command outside of IPython (Jupyter).

\n", "system": "" + }, + { + "instruction": "pytorch: compute vector-Jacobian product for vector function", + "input": "", + "output": "

You should not define the tensor y with torch.tensor(); torch.tensor() is a tensor constructor, not an operator, so it is not tracked in the computation graph. You should use torch.stack() instead.

\n

Just change that line to:

\n
y = torch.stack((x[0]**2+x[1], x[1]**2+x[2], x[2]**2))\n
\n

The result of x.grad should be tensor([ 6., 13., 19.]).
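A minimal runnable sketch of the fix (the input values x = [3., 6., 9.] are a hypothetical choice consistent with the gradient above):

```python
import torch

# hypothetical input values chosen so x.grad matches the result above
x = torch.tensor([3., 6., 9.], requires_grad=True)
y = torch.stack((x[0]**2 + x[1], x[1]**2 + x[2], x[2]**2))

# vector-Jacobian product v^T J with v = ones(3)
y.backward(torch.ones_like(y))
print(x.grad)  # tensor([ 6., 13., 19.])
```

Because y is built with torch.stack from operations on x, autograd can track it, and backward() with an explicit vector computes the vector-Jacobian product.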

\n", + "system": "" + }, + { + "instruction": "Loading or building cuda engine crashes occassionaly after upgrading to TensorRT 7", + "input": "", + "output": "

Finally got it!\nI rewrote the CMakeLists.txt, added all required libs and paths, and removed duplicate ones. It might have been a library conflict in cuBLAS.

\n", + "system": "" + }, + { + "instruction": "Runtime error: CUDA out of memory by the end of training and doesn\u2019t save model; pytorch", + "input": "", + "output": "

The problem is your loss_train list, which stores all losses from the beginning of your experiment. If the losses you stored were mere floats, that would not be an issue, but because your train function does not return a float, you are actually storing loss tensors, with the whole computational graph embedded in them. Indeed, a tensor keeps pointers to all tensors that were involved in its computation, and as long as a pointer exists, the allocated memory cannot be freed.

\n

So basically, you keep all tensors from all epochs and prevent PyTorch from cleaning them; it's effectively a memory leak.

\n

You can very easily monitor this type of issue by running nvidia-smi -l 1 after having started your experiment. You will watch your memory usage grow linearly until your GPU runs out of memory (nvidia-smi is a good tool to use when doing stuff on your GPU).

\n

To prevent this from happening, simply replace the last line of the train function with return loss_train.item(), and the memory issue will vanish.
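A hypothetical minimal train function illustrating the fix (the model, optimizer, and data here are stand-ins, not from the question):

```python
import torch
import torch.nn as nn

# stand-in model and optimizer for illustration only
model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.MSELoss()

def train(x, y):
    optimizer.zero_grad()
    loss_train = criterion(model(x), y)
    loss_train.backward()
    optimizer.step()
    return loss_train.item()  # .item() returns a plain float, detached from the graph

loss = train(torch.randn(8, 4), torch.randn(8, 1))
assert isinstance(loss, float)  # safe to append to a list across epochs
```

Because a float keeps no reference to the computation graph, the graph's tensors can be freed after each step.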

\n", + "system": "" + }, + { + "instruction": "ValueError: not enough values to unpack (expected 3, got 2) in Pytorch", + "input": "", + "output": "

From torchvision.datasets.ImageFolder documentation:

\n

"Returns: (sample, target) where target is class_index of the target class."

\n

So, quite simply, the dataset object you're currently using returns a tuple with 2 items. You'll get an error if you try to store this tuple in 3 variables. The correct line would be:

\n
for i, (batch, targets) in enumerate(val_loader):\n
\n

If you really need the names (which I assume is the file path for each image) you can define a new dataset object that inherits from the ImageFolder dataset and override the __getitem__ function to also return this information.

\n", + "system": "" + }, + { + "instruction": "How to set and get confidence threshold from custom YOLOv5 model?", + "input": "", + "output": "

It works for me:

\n
model.conf = 0.25  # confidence threshold (0-1)\nmodel.iou = 0.45  # NMS IoU threshold (0-1)  \n
\n

More information:

\n

https://github.com/ultralytics/yolov5/issues/36

\n", + "system": "" + }, + { + "instruction": "Couldn't open shared file mapping error in Pytorch Dataset", + "input": "", + "output": "

Very possibly it happens when GPU memory is getting close to being exhausted. The same problem was observed each time a new epoch started. I ran an excerpt of ImageNet with a batch size of 1300, using a 1070 Ti with 8 GB of memory. Images were loaded from a RAM drive.

\n

The problem was related to collate operations performed by torch.utils.data.DataLoader during the enumerate call.

\n

In the attached screenshot, my failures occurred when the graph showed a red line and shared memory was 14+ GB in use. After the batch size was halved to 650, Task Manager shows a mere 2.6 GB in use, and the program runs just fine from epoch to epoch.

\n\n

\"Task

\n", + "system": "" + }, + { + "instruction": "Why a.storage() is b.storage() returns false when a and b reference the same data?", + "input": "", + "output": "

torch.Tensor.storage() returns a new instance of torch.Storage on every invocation. You can see this in the following:

\n
a.storage() is a.storage()\n# False\n
\n

To compare the pointers to the underlying data, you can use the following:

\n
a.storage().data_ptr() == b.storage().data_ptr()\n# True\n
\n

There is a discussion of how to determine whether pytorch tensors share memory in this pytorch forum post.

\n
\n

Note the difference between a.data_ptr() and a.storage().data_ptr(). The first returns the pointer to the first element of the tensor, whereas the second seems to point to the memory address of the underlying data (not the sliced view), though it is not documented.

\n

Knowing the above, we can understand why a.data_ptr() is different from b.data_ptr(). Consider the following code:

\n
import torch\n\na = torch.arange(4, dtype=torch.int64)\nb = a[1:]\nb.data_ptr() - a.data_ptr()\n# 8\n
\n

The address of the first element of b is 8 more than the first element of a because we sliced to remove the first element, and each element is 8 bytes (the dtype is 64-bit integer).

\n

If we use the same code as above but use an 8-bit integer data type, the memory address will be different by one.
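Concretely, with 8-bit integers each element occupies 1 byte, so the pointer offset between a and b is 1:

```python
import torch

# same slicing as above, but with 8-bit integers: each element is 1 byte
a = torch.arange(4, dtype=torch.int8)
b = a[1:]
print(b.data_ptr() - a.data_ptr())  # 1
```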

\n", + "system": "" + }, + { + "instruction": "Rearrange torch 2D tensors ("Tiles") to be in a particular order", + "input": "", + "output": "

I accepted the correct answer but just wanted to share a more generic version of the code in case it helps anyone.

\n
x, y = tile_x_dim, tile_y_dim   # for readability\nX = X.reshape(-1, num_x_splits, num_y_splits, x, y)\nX = X.permute(0,1,3,2,4).reshape(-1,x*num_x_splits,y*num_y_splits)\n
\n

Where tile_x_dim and tile_y_dim are the x and y dimensions, respectively, of the tiles that were formed from the full image. num_x_splits and num_y_splits are the number of times the original picture was split along the x- and y-axes, respectively. Note that num_x_splits multiplied by num_y_splits gives the number of tiles the image was cut into.
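As a round-trip sanity check with hypothetical sizes (a 4x6 "image" cut into a 2x3 grid of 2x2 tiles), the reassembly code above recovers the original image exactly:

```python
import torch

num_x_splits, num_y_splits, x, y = 2, 3, 2, 2
img = torch.arange(24).reshape(4, 6)

# cut the image into tiles (row-major tiling over the grid)
X = img.reshape(num_x_splits, x, num_y_splits, y).permute(0, 2, 1, 3).reshape(-1, x, y)

# reassemble with the generic code above (batch of 1 image)
X = X.reshape(-1, num_x_splits, num_y_splits, x, y)
X = X.permute(0, 1, 3, 2, 4).reshape(-1, x * num_x_splits, y * num_y_splits)
assert torch.equal(X[0], img)  # the tiles land back in their original positions
```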

\n", + "system": "" + }, + { + "instruction": "Difference between numpy.argsort and torch.argsort", + "input": "", + "output": "
\n

... in general, the actual implementation will vary with data type.

\n
\n

From the NumPy docs. Since PyTorch tensors are 32-bit floats by default and NumPy arrays are 64-bit, the behaviour is not guaranteed to be consistent.

\n", + "system": "" + }, + { + "instruction": "How to use a Pytorch DataLoader for a dataset with multiple labels", + "input": "", + "output": "

You can return a dict of labels for each item in the dataset, and DataLoader is smart enough to collate them for you. i.e. if you provide a dict for each item, the DataLoader will return a dict, where the keys are the label types. Accessing a key of that label type returns a collated tensor of that label type.

\n

See below:

\n
import torch\nfrom torch.utils.data import Dataset, DataLoader\nimport numpy as np\n\nclass M(Dataset):\n    def __init__(self):\n        super().__init__()\n        self.data = np.random.randn(20, 2)\n        print(self.data)\n\n    def __getitem__(self, i):\n        return self.data[i], {'label_1':self.data[i], 'label_2':self.data[i]}\n\n    def __len__(self):\n        return len(self.data)\n\nds = M()\ndl = DataLoader(ds, batch_size=6)\n\nfor x, y in dl:\n    print(x, '\\n', y)\n    print(type(x), type(y))\n
\n
[[-0.33029911  0.36632142]\n [-0.25303721 -0.11872778]\n [-0.35955625 -1.41633132]\n [ 1.28814629  0.38238357]\n [ 0.72908184 -0.09222787]\n [-0.01777293 -1.81824167]\n [-0.85346074 -1.0319562 ]\n [-0.4144832   0.12125039]\n [-1.29546792 -1.56314292]\n [ 1.22566887 -0.71523568]]\ntensor([[-0.3303,  0.3663],\n        [-0.2530, -0.1187],\n        [-0.3596, -1.4163]], dtype=torch.float64) \n {'label_1': tensor([[-0.3303,  0.3663],\n        [-0.2530, -0.1187],\n        [-0.3596, -1.4163]], dtype=torch.float64), 'label_2': tensor([[-0.3303,  0.3663],\n        [-0.2530, -0.1187],\n        [-0.3596, -1.4163]], dtype=torch.float64)}\n<class 'torch.Tensor'> <class 'dict'>\n...\n
\n", + "system": "" + }, + { + "instruction": "WARNING: The following packages were previously imported in this runtime: [PIL] You must restart the runtime in order to use newly installed versions", + "input": "", + "output": "

You have to restart the runtime.

\n\n
\n

But first I would try the code with the newest modules, without uninstalling/installing.

\n

Your tutorial is almost 1 year old and maybe it needed Pillow 5.3.0 at that moment. But it also uses Torch 0.4.0, and today we have 1.7.0, which may work correctly with Pillow 7.0.0.

\n", + "system": "" + }, + { + "instruction": "Is there a way to retrieve the specific parameters used in a random torchvision transform?", + "input": "", + "output": "

I'm afraid there is no easy way around it: Torchvision's random transforms utilities are built in such a way that the transform parameters will be sampled when called. They are unique random transforms, in the sense that (1) parameters used are not accessible by the user and (2) the same random transformation is not repeatable.

\n

As of Torchvision 0.8.0, random transforms are generally built with two main functions: get_params, which samples the transform's parameters based on its configuration, and forward, which applies the transform when the module is called on an input.

\n\n

One solution to your problem is sampling the parameters from get_params yourself and calling the functional - deterministic - API instead. So you wouldn't be using RandomRotation, RandomAffine, nor any other Random* transformation for that matter.

\n
\n

For instance, let's look at T.RandomRotation (I have removed the comments for conciseness).

\n
class RandomRotation(torch.nn.Module):\n    def __init__(\n        self, degrees, interpolation=InterpolationMode.NEAREST, expand=False, \n        center=None, fill=None, resample=None):\n        # ...\n\n    @staticmethod\n    def get_params(degrees: List[float]) -> float:\n        angle = float(torch.empty(1).uniform_(float(degrees[0]), \\\n            float(degrees[1])).item())\n        return angle\n\n    def forward(self, img):\n        fill = self.fill\n        if isinstance(img, Tensor):\n            if isinstance(fill, (int, float)):\n                fill = [float(fill)] * F._get_image_num_channels(img)\n            else:\n                fill = [float(f) for f in fill]\n        angle = self.get_params(self.degrees)\n\n        return F.rotate(img, angle, self.resample, self.expand, self.center, fill)\n\n    def __repr__(self):\n        # ...\n
\n

With that in mind, here is a possible override to modify T.RandomRotation:

\n
class RandomRotation(T.RandomRotation):\n    def __init__(self, *args, **kwargs):\n        super(RandomRotation, self).__init__(*args, **kwargs) # let super do all the work\n\n        self.angle = self.get_params(self.degrees) # initialize your random parameters\n\n    def forward(self, img): # override T.RandomRotation's forward\n        fill = self.fill\n        if isinstance(img, Tensor):\n            if isinstance(fill, (int, float)):\n                fill = [float(fill)] * F._get_image_num_channels(img)\n            else:\n                fill = [float(f) for f in fill]\n\n        return F.rotate(img, self.angle, self.resample, self.expand, self.center, fill)\n
\n

I've essentially copied T.RandomRotation's forward function, the only difference being that the parameters are sampled in __init__ (i.e. once) instead of inside forward (i.e. on every call). Torchvision's implementation covers all cases; you generally won't need to copy the full forward. In some cases, you can just call the functional version pretty much straight away. For example, if you don't need to set the fill parameters, you can just discard that part and only use:

\n
class RandomRotation(T.RandomRotation):\n    def __init__(self, *args, **kwargs):\n        super(RandomRotation, self).__init__(*args, **kwargs) # let super do all the work\n\n        self.angle = self.get_params(self.degrees) # initialize your random parameters\n\n    def forward(self, img): # override T.RandomRotation's forward\n        return F.rotate(img, self.angle, self.resample, self.expand, self.center)\n
\n
\n

If you want to override other random transforms you can look at the source code. The API is fairly self-explanatory and you shouldn't have too many issues implementing an override for each transform.

\n", + "system": "" + }, + { + "instruction": "PyTorch how to do gathers over multiple dimensions", + "input": "", + "output": "

You can use fancy indexing here to select the desired portion of the tensor.

\n

Essentially, if you generate the index arrays conveying your access pattern beforehand, you can directly use them to extract some slice of the tensor. The shape of the index arrays for each dimension should be the same as that of the output tensor or slice you want to extract.

\n
i = torch.arange(bs).reshape(bs, 1, 1) # shape = [bs, 1,      1]\nj = labels.reshape(bs, sample, 1)      # shape = [bs, sample, 1]\nk = torch.arange(v)                    # shape = [v, ]\n\n# Get result as\nt1 = t0[i, j, k]\n
\n

Note the shapes of the above 3 tensors. Broadcasting prepends extra dimensions to the front of a tensor, thus essentially reshaping k to shape [1, 1, v], which makes all 3 of them compatible for elementwise operations.

\n

After broadcasting (i, j, k) together will produce 3 [bs, sample, v] shaped arrays and those will (elementwise) index your original tensor to produce the output tensor t1 of shape [bs, sample, v].

\n", + "system": "" + }, + { + "instruction": "How to create tensor directly on GPU, or on device of another tensor?", + "input": "", + "output": "

The documentation is quite clear about this now, I think. It describes the 4 main ways to create a new tensor, and you just have to specify the device to create it on the GPU:

\n
t1 = torch.zeros((3,3), device=torch.device('cuda'))\nt2 = torch.ones_like(t1, device=torch.device('cuda'))\nt3 = torch.randn((3,5), device=torch.device('cuda'))\n
\n

And this link adds further information about the torch.tensor() constructor. Again, the device is an argument to be specified.
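A quick sketch of the torch.tensor() constructor with an explicit device (falling back to CPU here so the snippet runs without a GPU):

```python
import torch

# pick whichever device is available; torch.tensor() accepts it directly
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
t = torch.tensor([[1., 2.], [3., 4.]], device=device)
print(t.device)
```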

\n

If you want to use the device of another tensor, you can access it with tensor.device:

\n
t4 = torch.empty((2,2), device=t3.device)\n
\n", + "system": "" + }, + { + "instruction": "Cannot export PyTorch model to ONNX", + "input": "", + "output": "

Like I have mentioned in a comment, this is because slicing in torch.onnx supports only step = 1, but there is 2-step slicing in the model:

\n
self.model2(conv1_2[:,:,::2,::2])\n
\n

Your only option as of now is to rewrite slicing as some other ops. You can do it by using range and reshape to obtain proper indices. Consider the following "step-less arange" function (I hope it is generic enough for anyone with a similar problem):

\n
def sla(x, step):\n    diff = x % step\n    x += (diff > 0)*(step - diff) # add length to be able to reshape properly\n    return torch.arange(x).reshape((-1, step))[:, 0]\n
\n

usage:

\n
>> sla(11, 3)\ntensor([0, 3, 6, 9])\n
\n

Now you can replace every slice like this:

\n
conv2_2 = self.model2(conv1_2[:,:,self.sla(conv1_2.shape[2], 2),:][:,:,:, self.sla(conv1_2.shape[3], 2)])\n
\n

NOTE: you should optimize it. Indices are calculated on every call, so it might be wise to pre-compute them.

\n

I have tested it with my fork of the repo and I was able to save the model:

\n

https://github.com/prokotg/colorization

\n", + "system": "" + }, + { + "instruction": "Running two different independent PyTorch programs on a single GPU", + "input": "", + "output": "

I do not know the details of how this works, but I can tell from experience that both programs will run well (as long as they do not need more than 16GB of RAM when combined), and execution times should stay roughly the same.

\n

However, computer vision usually requires a lot of IO (mostly reading images). If the other task needs to read files too, this part may become slower than when running both programs individually.

\n", + "system": "" + }, + { + "instruction": "Torch gather middle dimension", + "input": "", + "output": "

You can create the indices manually. The indices tensor has to be flattened if it has the shape of your example data.

\n
a[torch.arange(len(a)),indices.view(-1)]\n# equal to a[[0,1,2],[0,1,0]]\n
\n

Out:

\n
tensor([[ 0,  1,  2],\n        [ 9, 10, 11],\n        [12, 13, 14]])\n
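For reference, a self-contained version using hypothetical data shaped (3, 2, 3) that reproduces the output above:

```python
import torch

a = torch.arange(18).reshape(3, 2, 3)      # hypothetical (3, 2, 3) tensor
indices = torch.tensor([[0], [1], [0]])    # one row index per batch element

out = a[torch.arange(len(a)), indices.view(-1)]
print(out)
# tensor([[ 0,  1,  2],
#         [ 9, 10, 11],
#         [12, 13, 14]])
```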
\n", + "system": "" + }, + { + "instruction": "How can i add a Bi-LSTM layer on top of bert model?", + "input": "", + "output": "

You can do it as follows:

\n
import torch\nimport torch.nn as nn\nfrom transformers import BertModel, BertTokenizerFast\n\nclass CustomBERTModel(nn.Module):\n    def __init__(self):\n        super(CustomBERTModel, self).__init__()\n        self.bert = BertModel.from_pretrained("bert-base-uncased")\n        ### New layers:\n        self.lstm = nn.LSTM(768, 256, batch_first=True, bidirectional=True)\n        self.linear = nn.Linear(256*2, <number_of_classes>)\n\n    def forward(self, ids, mask):\n        sequence_output, pooled_output = self.bert(\n            ids,\n            attention_mask=mask)\n\n        # sequence_output has the following shape: (batch_size, sequence_length, 768)\n        lstm_output, (h, c) = self.lstm(sequence_output)\n        # concatenate the last hidden state of the forward direction\n        # with the first hidden state of the backward direction\n        hidden = torch.cat((lstm_output[:, -1, :256], lstm_output[:, 0, 256:]), dim=-1)\n        linear_output = self.linear(hidden.view(-1, 256*2)) ### classify from the concatenated states\n\n        return linear_output\n\ntokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")\nmodel = CustomBERTModel()\n
\n", + "system": "" + }, + { + "instruction": "How does the nn.Embedding module relate intuitively to the idea of an embedding in general?", + "input": "", + "output": "

As you said, the aim when using an embedding is to reduce the dimension of your data. However, it does not learn a lower-dimensional representation of a larger input space on its own. Starting from a random initialization, you can improve this embedding through a learning process. This requires finding a suitable task to train the embedding on, which is, I think, a matter for another question. I believe it's called a "pretext task", where ultimately the objective is to end up with an accurate embedding matrix.

\n

You can check the parameters of any nn.Module with .parameters(). It will return a generator.

\n
<< [x for x in nn.Embedding(10, 2).parameters()][0].shape\n>> torch.Size([10, 2])\n
\n

Here, there are 10*2 parameters (i.e. dimension_input*dimension_output, or in PyTorch's naming, num_embeddings*embedding_dims). However, it is still a lookup table: given an index it will return an embedding of size embedding_dims. But these embeddings (the values of this matrix) can be changed.
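A quick lookup sketch showing that the layer just returns rows of its (learnable) weight matrix:

```python
import torch
import torch.nn as nn

emb = nn.Embedding(10, 2)          # lookup table: 10 possible indices -> 2-d vectors
v = emb(torch.tensor([3]))         # look up the embedding for index 3
assert v.shape == (1, 2)
assert torch.equal(v[0], emb.weight[3])  # it is just row 3 of the weight matrix
```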

\n

Here's a little experiment:

\n
import torch\nimport torch.nn as nn\nimport torch.optim as optim\n\nE = nn.Embedding(10, 2)\noptimizer = optim.SGD(E.parameters(), lr=0.01)\nX = torch.randint(0, 10, size=(100,))\n\nloss_before = E(X).mean()\nloss_before.backward()\noptimizer.step()\n\nloss_after = E(X).mean()\n
\n

As expected, loss_before and loss_after are different which shows nn.Embedding's parameters are learnable.

\n
\n

Edit: your question comes down to, "how do I encode my data?".

\n

For those examples you gave precisely:

\n
\n

Let's say I have some input x. This input might be a vectorized image or maybe some sequence daily temperature data. In any case, this input x has 100 elements (100 days of temperature, or a 10x10 image).

\n
\n

You can't use an nn.Embedding to solve these cases. Embedding layers are different from a reduction matrix. The latter can be used to reduce every single vector of dimension d into dimension n where n<<d. The prerequisite for using an embedding layer is having a finite dictionary of possible elements. For example, you might want to represent a word with a vector of size n; then you would use an embedding of nb_possible_words x n. This way, for any given word in the dictionary the layer will produce the corresponding n-size vector.

\n

As I said in the comments below, num_embeddings is the number of unique elements you are working with and embedding_dim is the size of the embedding, i.e. the size of the output vector.

\n

nn.Embedding is usually used at the head of a network to cast encoded data into a lower dimensionality space. It won't solve your problem by magically reducing your dimensions.

\n

If you have a sequence of temperatures you want to analyse, you could encode each temperature as a one-hot vector. But this vector representation might be very large (depending on the number of different temperatures). Using an embedding layer allows you to reduce the size of these vectors. This is important when the aim is to analyse the data with an RNN, or any other MLP for that matter, since the bigger your input size, the more parameters you will have!
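A small sketch of that idea, with made-up sizes (41 discretized temperature values, embedded in 8 dimensions instead of 41-dimensional one-hot vectors):

```python
import torch
import torch.nn as nn

# hypothetical: 41 possible discretized temperatures, embedded in 8 dimensions
emb = nn.Embedding(num_embeddings=41, embedding_dim=8)
temps = torch.randint(0, 41, (100,))   # 100 days of encoded temperatures
x = emb(temps)
assert x.shape == (100, 8)             # much smaller input for a downstream RNN/MLP
```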

\n", + "system": "" + }, + { + "instruction": "Implement SeparableConv2D in Pytorch", + "input": "", + "output": "

The linked definitions generally agree. The best one is in the article.

\n\n

I suggest a few corrections to your SeparableConv2d class:

\n\n

This is the updated code, should be similar to tf.keras.layers.SeparableConv2D implementation:

\n
class SeparableConv2d(nn.Module):\n\n    def __init__(self, in_channels, out_channels, kernel_size, bias=False):\n        super(SeparableConv2d, self).__init__()\n        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=kernel_size,\n                                   groups=in_channels, bias=bias, padding=1)\n        self.pointwise = nn.Conv2d(in_channels, out_channels,\n                                   kernel_size=1, bias=bias)\n\n    def forward(self, x):\n        out = self.depthwise(x)\n        out = self.pointwise(out)\n        return out\n
\n", + "system": "" + }, + { + "instruction": "Feeding Multiple Inputs to LSTM for Time-Series Forecasting using PyTorch", + "input": "", + "output": "

After muddling through for a couple of weeks, I solved the issue. This has been a fruitful journey for me, so I'd like to share what I have discovered. If you'd like to have a look at the complete walk-through with code, please check out my Medium post on the matter.

\n

Just as in Pandas, I found that things tend to work faster and smoother when I stick to the PyTorch way. Both libraries rely on NumPy, and I'm sure one can do pretty much all the table and matrix operations explicitly with NumPy arrays and functions. However, doing so does eliminate all the nice abstractions and performance improvements these libraries provide and turn each step into a CS exercise. It's fun until it isn't.

\n

Rather than shaping all the training and validation sets manually to pass them to the model, PyTorch's TensorDataset and DataLoaders classes have immensely helped me. Scaling the feature and target sets for training and validation, we then have NumPy arrays. We can transform these arrays into Tensors and use these Tensors to create our TensorDataset, or a custom Dataset depending on your requirements. Finally, DataLoaders allow us to iterate over such datasets with much less hassle than otherwise as they already provide built-in batching, shuffling, and dropping the last batch options.

\n
train_features = torch.Tensor(X_train_arr)\ntrain_targets = torch.Tensor(y_train_arr)\n\nval_features = torch.Tensor(X_val_arr)\nval_targets = torch.Tensor(y_val_arr)\n\ntrain = TensorDataset(train_features, train_targets)\ntrain_loader = DataLoader(train, batch_size=64, shuffle=False, drop_last=True)\n\nval = TensorDataset(val_features, val_targets)\nval_loader = DataLoader(val, batch_size=64, shuffle=False, drop_last=True)\n
\n

After transforming our data into iterable datasets, they can later be used to do mini-batch training. Instead of explicitly defining batches or wrestling with matrix operations, we can easily iterate over them via DataLoaders as follows.

\n
model = LSTMModel(input_dim, hidden_dim, layer_dim, output_dim)\n\ncriterion = nn.MSELoss(reduction='mean')\noptimizer = optim.Adam(model.parameters(), lr=1e-2)\n\ntrain_losses = []\nval_losses = []\ntrain_step = make_train_step(model, criterion, optimizer)\ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'\n\nfor epoch in range(n_epochs):\n    batch_losses = []\n    for x_batch, y_batch in train_loader:\n        x_batch = x_batch.view([batch_size, -1, n_features]).to(device)\n        y_batch = y_batch.to(device)\n        loss = train_step(x_batch, y_batch)\n        batch_losses.append(loss)\n    training_loss = np.mean(batch_losses)\n    train_losses.append(training_loss)    \n    with torch.no_grad():\n        batch_val_losses = []\n        for x_val, y_val in val_loader:\n            x_val = x_val.view([batch_size, -1, n_features]).to(device)\n            y_val = y_val.to(device)        \n            model.eval()\n            yhat = model(x_val)\n            val_loss = criterion(y_val, yhat).item()\n            batch_val_losses.append(val_loss)\n        validation_loss = np.mean(batch_val_losses)\n        val_losses.append(validation_loss)\n    \n    print(f"[{epoch+1}] Training loss: {training_loss:.4f}\\t Validation loss: {validation_loss:.4f}")\n
\n

Another cool feature that PyTorch provides is the view() function, which allows faster and memory-efficient reshaping of tensors. Since I earlier defined my LSTM model with batch_first = True, the batch tensor for the feature set must have the shape of (batch size, time steps, number of features). The line in the code above x_batch = x_batch.view([batch_size, -1, n_features]).to(device) just does that.
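A tiny sketch of that reshape with hypothetical sizes (batch of 4, flattened samples that unfold into 2 time steps of 3 features):

```python
import torch

batch_size, n_features = 4, 3
x_batch = torch.randn(batch_size, 2 * n_features)      # flat batch, as from a DataLoader
x_batch = x_batch.view([batch_size, -1, n_features])   # (batch size, time steps, features)
assert x_batch.shape == (4, 2, 3)
```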

\n

I hope this answer helps those dealing with similar problems or at least gives an idea of which direction to take. I had changed a lot in the code shared in the original post, but I'll not put them all here for the sake of simplicity. Feel free to check out the rest of it in my other SO post here.

\n", + "system": "" + }, + { + "instruction": "PyTorch's DataParallel is only using one GPU", + "input": "", + "output": "

Had the same problem and solved it by realizing I was using it wrong. Maybe this will help someone save some time:

\n

If you wrap your model like model = nn.DataParallel(model) it will wrap only the call to exactly this model (call means model(input)). So if your model has a linear layer and you now use that directly like model.my_linear(input) it will not be wrapped.

\n

If you pass around references to your model instance in your code you may end up with a reference to the unwrapped model which then also will not work.

\n

The easiest test would be to wrap your model immediately before calling it and to check if that works. If it does, your wrapping code before probably has one of the issues above.
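A minimal sketch of that test (on a CPU-only machine nn.DataParallel simply falls back to running the wrapped module, so the snippet runs anywhere):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
wrapped = nn.DataParallel(model)   # only calls like wrapped(input) are parallelized

out = wrapped(torch.randn(8, 4))
assert out.shape == (8, 2)
assert wrapped.module is model     # the unwrapped model is still reachable here
```

Calling wrapped(x) is the parallelized path; calling model.my_linear(x) or any method on the unwrapped reference is not.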

\n", + "system": "" + }, + { + "instruction": "PyTorch installation fails Could not find a version that satisfies the requirement", + "input": "", + "output": "

I tried with Python 3.8.0, Python 3.8.5, Python 3.8.6 and Python 3.9.0. It seems to work only with 3.8.6 version.

\n", + "system": "" + }, + { + "instruction": ""Didn't find engine for operation quantized" error while using dynamic quantization with Huggingface transformer", + "input": "", + "output": "

Is qnnpack in the list when you run print(torch.backends.quantized.supported_engines)?

\n

Does torch.backends.quantized.engine = 'qnnpack' work for you?
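A guarded sketch of both checks, switching the engine only if the backend is actually listed:

```python
import torch

engines = torch.backends.quantized.supported_engines
print(engines)
if 'qnnpack' in engines:           # only switch if the backend is available in this build
    torch.backends.quantized.engine = 'qnnpack'
```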

\n", + "system": "" + }, + { + "instruction": "How to run a pytorch project with CPU?", + "input": "", + "output": "

Here's the simplest fix I can think of:

\n
    \n
  1. Put the following line near the top of your code:
\n
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n
\n
    \n
  2. Do a global replace. Change .cuda() to .to(device), where device is the variable set in step 1.
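The two steps combined look like this (the tensor and model here are hypothetical stand-ins for whatever the project creates):

```python
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# before: tensor = tensor.cuda(); model = model.cuda()
tensor = torch.randn(3).to(device)        # runs on GPU if available, otherwise CPU
model = torch.nn.Linear(3, 1).to(device)
print(tensor.device)
```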
\n", + "system": "" + }, + { + "instruction": "Explanation of the tuple grad_fn.next_functions in PyTorch", + "input": "", + "output": "

I am not an expert on Pytorch, but trying to answer your question, from the example:

\n
a, b = torch.randn(2, requires_grad=True).unbind()\nc = a+b\nprint(c.grad_fn.next_functions)\n>>> ((<UnbindBackward object at 0x7f0ea438de80>, 0), (<UnbindBackward object at 0x7f0ea438de80>, 1))\n
\n

Now, think of any PyTorch function as producing "a list of outputs", rather than "an output". So, if a function produces just one output (the typical case), it produces a list that is equal to [output]. However, if the function produces several outputs, then it has a list of len > 1 outputs, e.g. [output0, output1].\nFrom that, I understand the tuple constituents as follows:
\n(grad_fn: the function object that resulted in this tensor; i: index of the tensor in the function's outputs list, which is typically zero since functions typically have one output)

\n

Applying this understanding to the code: the unbind function has two outputs, a at index 0 of the outputs 'list' and b at index 1 of the outputs 'list'. The following points reason through the graph:

\n\n

i.e. the second entry in the grad_fn tuple is the index of the tensor in the producing-function list of outputs.
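The reasoning above can be checked directly on the example:

```python
import torch

a, b = torch.randn(2, requires_grad=True).unbind()
c = a + b

(fn_a, idx_a), (fn_b, idx_b) = c.grad_fn.next_functions
assert fn_a is fn_b               # both entries point at the same UnbindBackward node
assert (idx_a, idx_b) == (0, 1)   # a was output 0 of unbind, b was output 1
```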

\n", + "system": "" + }, + { + "instruction": "Why is the implementation of cross entropy different in Pytorch and Tensorflow?", + "input": "", + "output": "

Answering here by combining answers from the comment section for the benefit of the community.

\n

Since you have addressed the issue of numeric overflow in PyTorch, that is handled by subtracting the max value as below (from here).

\n
scalar_t z = std::exp(input_data[d * dim_stride] - max_input);\n
\n

Coming to TensorFlow's implementation of cross entropy, the issue of underflow is not that major, since the underflowed terms are numerically negligible next to the dominating large value.
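The effect of the max-subtraction trick can be demonstrated directly (a sketch with deliberately extreme logits):

```python
import torch

logits = torch.tensor([1000., 1001., 1002.])

# naive softmax overflows: exp(1000.) is inf, so inf/inf gives nan
naive = torch.exp(logits) / torch.exp(logits).sum()
assert torch.isnan(naive).any()

# subtracting the max first, as in the PyTorch source line above, is stable
shifted = logits - logits.max()
stable = torch.exp(shifted) / torch.exp(shifted).sum()
assert torch.isclose(stable.sum(), torch.tensor(1.0))
```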

\n", + "system": "" + }, + { + "instruction": "I am getting a ValueError: All bounding boxes should have positive height and width", + "input": "", + "output": "

This is happening because of the resize transform applied in Faster R-CNN in the detection module. If you are explicitly applying a resize operation, the generated bounding box coordinates will change as per the resize definition; but even if you haven't applied a resize transform, a default resize transform is applied whenever your image min and max size is outside (800, 1333).

\n

Check the below snippets from the PyTorch git repo.

\n

Module: /torchvision/models/detection; Exception generated in generalized_rcnn.py

\n
        degenerate_boxes = boxes[:, 2:] <= boxes[:, :2]\n        if degenerate_boxes.any():\n            # print the first degenerate box\n            bb_idx = torch.where(degenerate_boxes.any(dim=1))[0][0]\n            degen_bb: List[float] = boxes[bb_idx].tolist()\n            raise ValueError("All bounding boxes should have positive height and width."\n                             " Found invalid box {} for target at index {}."\n                             .format(degen_bb, target_idx))\n
\n

Default min max image size; Module: detection/ faster_rcnn

\n
def __init__(self, backbone, num_classes=None,\n             # transform parameters\n             min_size=800, max_size=1333,\n             image_mean=None, image_std=None,\n             ............................\n\ntransform = GeneralizedRCNNTransform(min_size, max_size, image_mean, image_std)\n
\n

Target resizing; Module: detection / transform.py

\n
def resize_boxes(boxes, original_size, new_size):\n    # type: (Tensor, List[int], List[int]) -> Tensor\n    ratios = [\n        torch.tensor(s, dtype=torch.float32, device=boxes.device) /\n        torch.tensor(s_orig, dtype=torch.float32, device=boxes.device)\n        for s, s_orig in zip(new_size, original_size)\n    ]\n    ratio_height, ratio_width = ratios\n    xmin, ymin, xmax, ymax = boxes.unbind(1)\n\n    xmin = xmin * ratio_width\n    xmax = xmax * ratio_width\n    ymin = ymin * ratio_height\n    ymax = ymax * ratio_height\n    return torch.stack((xmin, ymin, xmax, ymax), dim=1)\n
\n

You can fix this by removing or correcting, in the original dataset, bounding boxes where xmin equals xmax or ymin equals ymax.

\n", + "system": "" + }, + { + "instruction": "Pytorch inference CUDA out of memory when multiprocessing", + "input": "", + "output": "
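As an illustration, here is a minimal sketch (not part of torchvision; the helper name and tensor layout are assumptions) of filtering such degenerate boxes before handing targets to the model:

```python
import torch

def drop_degenerate_boxes(boxes, labels):
    # boxes: [N, 4] in (xmin, ymin, xmax, ymax) order
    # keep only boxes with strictly positive width and height
    keep = (boxes[:, 2] > boxes[:, 0]) & (boxes[:, 3] > boxes[:, 1])
    return boxes[keep], labels[keep]

boxes = torch.tensor([[0., 0., 10., 10.],
                      [5., 5., 5., 8.]])  # second box has zero width
labels = torch.tensor([1, 2])
boxes, labels = drop_degenerate_boxes(boxes, labels)
print(boxes.shape[0])  # 1
```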

From my experience of parallel training and inference, it is almost impossible to squeeze out the last bit of GPU memory. Probably the best you can do is to estimate the maximum number of processes that can run in parallel, then restrict your code to run at most that many processes at the same time. Using a semaphore is the typical way to restrict the number of parallel processes and automatically start a new process when a slot opens up.

\n

To make it easier to initialize and share semaphore between processes, you can use a multiprocessing.Pool and the pool initializer as follows.

\n
import multiprocessing as mp\n\ndef pool_init(semaphore):\n    # store the semaphore in a global so each worker process can access it\n    global pool_semaphore\n    pool_semaphore = semaphore\n\nsemaphore = mp.BoundedSemaphore(n_process)\nwith mp.Pool(n_process, initializer=pool_init, initargs=(semaphore,)) as pool:\n    # here, each worker process can access the shared variable pool_semaphore\n    ...\n
\n

On the other hand, the greedy approach is to run with a try ... except block in a while loop and keep trying to use GPU. However, this may come with significant performance overhead, so maybe not a good idea.

\n", + "system": "" + }, + { + "instruction": "Index tensor must have the same number of dimensions as input tensor error encountered when using torch.gather()", + "input": "", + "output": "

Torch.gather is described here. If we take your code, this line

\n
torch.gather(qval, 3, action.view(-1,1,1,1))\n
\n

is equivalent to

\n
act_view = action.view(10,1,1,1)\nout = torch.zeros_like(act_view)\nfor i in range(10):\n    for j in range(1):\n         for k in range(1):\n              for p in range(1):\n                   out[i,j,k,p] = qval[i,j,k, act_view[i,j,k,p]]\nreturn out\n
\n

which obviously makes very little sense. In particular, qval is not 4-D and thus cannot be indexed like this. The number of for loops is determined by the shape of your input tensors, and they should all have the same number of dimensions for this to work (this is what your error tells you by the way). Here, qval is 2D and act_view is 4D.

\n
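For reference, a 2-D q-value tensor only needs a 2-D index, so a sketch of what the gather probably should look like (the shapes here are assumptions based on your example):

```python
import torch

qval = torch.randn(10, 4)            # e.g. [batch, n_actions]
action = torch.randint(0, 4, (10,))  # one action index per batch element

# the index tensor must have the same number of dims as qval (2-D here)
chosen = torch.gather(qval, 1, action.view(-1, 1)).squeeze(1)
print(chosen.shape)  # torch.Size([10])
```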

I'm not sure what you wanted to do with this, but if you can explain your goal and remove all the useless stuff in your example (mostly the training and backprop related code) to get a minimal reproducible example, I could help you further in finding the correct way to do it :)

\n", + "system": "" + }, + { + "instruction": "Difference between hidden dimension and n_layers in rnn using pytorch", + "input": "", + "output": "

Actually, the documentation is quite clear about their differences. hidden_size is the number of features of the hidden state for the RNN. So if you increase the hidden size, you compute a bigger feature vector as the hidden state output.

\n

However, num_layers simply stacks multiple RNN layers on top of each other, each with its own hidden state of the given hidden size.

\n
\n

num_layers=2 would mean stacking two RNNs together to form a stacked\nRNN, with the second RNN taking in outputs of the first RNN and\ncomputing the final results

\n
\n

I want to explain how an RNN works with the image below. Each RNN unit (blue rectangle) takes one h_n (hidden state) and one input. The hidden dimension determines the feature-vector size of h_n (the hidden state). At each timestep (t, horizontal propagation in the image) your RNN takes an h_n and an input. Then, if you have n_layers > 1, it creates an intermediate output and gives it to the upper layer (vertical). So the hidden dimension determines the size of the horizontal h_n in the image, whereas num_layers determines the number of blue cells along the vertical axis.

\n
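A small sketch illustrating the two parameters (the sizes here are arbitrary):

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=5, hidden_size=8, num_layers=2, batch_first=True)
x = torch.randn(3, 7, 5)  # (batch, seq_len, input_size)
out, h_n = rnn(x)

print(out.shape)  # torch.Size([3, 7, 8]) -> hidden_size features per timestep
print(h_n.shape)  # torch.Size([2, 3, 8]) -> one final hidden state per layer
```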

\"RNN\"

\n", + "system": "" + }, + { + "instruction": "TransformerEncoder with a padding mask", + "input": "", + "output": "

The required shapes are shown in nn.Transformer.forward - Shape (all building blocks of the transformer refer to it). The relevant ones for the encoder are:

\n\n\n\n

where S is the sequence length, N the batch size and E the embedding dimension (number of features).

\n\n

The padding mask should have shape [95, 20], not [20, 95]. This assumes that your batch size is 95 and the sequence length is 20, but if that is the other way around, you would have to transpose the src instead.

\n\n

Furthermore, when calling the encoder, you are not specifying the src_key_padding_mask, but rather the src_mask, as the signature of torch.nn.TransformerEncoder.forward is:

\n\n
forward(src, mask=None, src_key_padding_mask=None)\n
\n\n

The padding mask must be specified as the keyword argument src_key_padding_mask not as the second positional argument. And to avoid confusion, your src_mask should be renamed to src_key_padding_mask.

\n\n
src_key_padding_mask = torch.randint(0,2,(95, 20))\noutput =  encoder(src, src_key_padding_mask=src_key_padding_mask)\n
\n", + "system": "" + }, + { + "instruction": "how to use custom python object in torchscript", + "input": "", + "output": "

You have to annotate your Vocab with torch.jit.script like this:

\n\n
@torch.jit.script\nclass Vocab(object):\n    def __init__(self, name: str):\n        self.name = name\n\n    def show(self):\n        print(\"dict:\" + self.name)\n
\n\n

Also note the annotation name: str, as it's needed for TorchScript to infer its type (PyTorch supports Python 3.6+ type annotations; you could use a type comment as well, but it's way less clear).

\n\n

Please see Torchscript classes and Default Types and other related torchscript info over there.

\n", + "system": "" + }, + { + "instruction": "Where can I find a pre-trained variational autoencoder that's trained on natural images?", + "input": "", + "output": "

Take a look at https://github.com/AntixK/PyTorch-VAE.

\n

The models were all trained on the celebrity faces dataset CelebA.

\n", + "system": "" + }, + { + "instruction": "Pytorch / device problem(cpu, gpu) when load state dict for optimizer", + "input": "", + "output": "

Seems like the state was on CUDA when you saved it, and now you are trying to use it on the CPU, or vice versa. To avoid this error, a simple way is to pass the map_location argument to load.

\n\n

Just pass map_location=<device you want to use> in torch.load and it should work fine. Also, see https://pytorch.org/tutorials/beginner/saving_loading_models.html#saving-loading-model-across-devices

\n", + "system": "" + }, + { + "instruction": "Pad torch tensors of different sizes to be equal", + "input": "", + "output": "
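A minimal sketch of the idea (using an in-memory buffer instead of a real checkpoint file):

```python
import io
import torch

model = torch.nn.Linear(2, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

buf = io.BytesIO()
torch.save({"model": model.state_dict(), "optimizer": opt.state_dict()}, buf)
buf.seek(0)

# map_location forces every tensor in the checkpoint onto the CPU,
# regardless of the device it was saved from
ckpt = torch.load(buf, map_location=torch.device("cpu"))
model.load_state_dict(ckpt["model"])
opt.load_state_dict(ckpt["optimizer"])
```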

rnn.pad_sequence only pads the sequence dimension, it requires all other dimensions to be equal. You cannot use it to pad images across two dimensions (height and width).

\n\n

To pad an image torch.nn.functional.pad can be used, but you need to manually determine the height and width it needs to get padded to.

\n\n
import torch.nn.functional as F\n\n# Determine maximum height and width\n# The mask's have the same height and width\n# since they mask the image.\nmax_height = max([img.size(1) for img in image_batch])\nmax_width = max([img.size(2) for img in image_batch])\n\nimage_batch = [\n    # The needed padding is the difference between the\n    # max width/height and the image's actual width/height.\n    F.pad(img, [0, max_width - img.size(2), 0, max_height - img.size(1)])\n    for img in image_batch\n]\nmask_batch = [\n    # Same as for the images, but there is no channel dimension\n    # Therefore the mask's width is dimension 1 instead of 2\n    F.pad(mask, [0, max_width - mask.size(1), 0, max_height - mask.size(0)])\n    for mask in mask_batch\n]\n
\n\n

The padding lengths are specified in reverse order of the dimensions, where every dimension has two values, one for the padding at the beginning and one for the padding at the end. For an image with the dimensions [channels, height, width] the padding is given as: [width_beginning, width_end, height_beginning, height_end], which can be reworded to [left, right, top, bottom]. Therefore the code above pads the images to the right and bottom. The channels are left out, because they are not being padded, which also means that the same padding could be directly applied to the masks.

\n", + "system": "" + }, + { + "instruction": "Adding residual connections to simple CNN", + "input": "", + "output": "

I think three layers are simply not deep enough for the residual links to "kick in" - I guess this is the main reason why training without residual connections works better.\nMoreover, your fully-connected layer has a 32K input dimension - this is HUGE.

\n

How does your residual link work for the first conv layer, where res has 3 channels and x has 32?

\n

I would try the following:

\n
    \n
  1. Do not add residual links for the first conv layer.
  \n
  2. Add nn.BatchNorm2d after the conv and before the relu.
  \n
  3. Add strides - subsample the feature maps, so you'll end up with significantly fewer than 32K features for the fully connected layer.
  \n
\n", + "system": "" + }, + { + "instruction": "Loading image data from pandas to pytorch", + "input": "", + "output": "
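A rough sketch combining suggestions 2 and 3 (the block layout and channel counts are assumptions, not your actual model): BatchNorm between conv and relu, stride-2 downsampling, and a 1x1 projection so the skip connection matches the output shape.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, in_ch, out_ch, stride=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)
        # 1x1 conv projects the input so it can be added to the output
        self.proj = nn.Conv2d(in_ch, out_ch, 1, stride=stride)

    def forward(self, x):
        return torch.relu(self.bn(self.conv(x)) + self.proj(x))

x = torch.randn(2, 32, 32, 32)
out = ResBlock(32, 64)(x)
print(out.shape)  # torch.Size([2, 64, 16, 16])
```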

Datasets

\n\n

You have to use torch.utils.data.Dataset structure to define it.\nHere is how you can do it in plain pytorch (I'm using pillow to load the images and torchvision to transform them to torch.Tensor objects):

\n\n
import torch\nimport torchvision\nfrom PIL import Image\n\n\nclass MyDataset(torch.utils.data.Dataset):\n    def __init__(self, dataframe):\n        self.dataframe = dataframe\n\n    def __len__(self):\n        return len(self.dataframe)\n\n    def __getitem__(self, index):\n        row = self.dataframe.iloc[index]\n        return (\n            torchvision.transforms.functional.to_tensor(Image.open(row[\"Path\"])),\n            row[\"Score\"],\n        )\n\ndataset = MyDataset(dataframe)\n
\n\n

Alternatively, you can use torchdata (disclaimer: shameless self-promotion as I'm the author...) which allows you to decouple Path and Scores like this:

\n\n
import torchvision\nfrom PIL import Image\n\nimport torchdata\n\n\nclass ImageDataset(torchdata.datasets.FilesDataset):\n    def __getitem__(self, index):\n        return Image.open(self.files[index])\n\n\nclass Labels(torchdata.Dataset):\n    def __init__(self, scores):\n        super().__init__()\n        self.scores = scores\n\n    def __len__(self):\n        return len(self.scores)\n\n    def __getitem__(self, index):\n        return self.scores[index]\n\n# to_numpy for convenience\n# I assume all your images are in /folder and have *.jpg extension\ndataset = ImageDataset.from_folder(\"/folder\", regex=\"*.jpg\").map(\n    torchvision.transforms.ToTensor()\n) | Labels(dataframe[\"Score\"].to_numpy())\n
\n\n

(or you could implement it just like in regular pytorch but inheriting from torchdata.Dataset and calling super().__init__() in the constructor).

\n\n

torchdata allows you to cache your images easily or apply some other transformations via .map as shown there, check github repository for more info or ask in the comment.

\n\n

DataLoader

\n\n

Either way you choose you should wrap your dataset in torch.utils.data.DataLoader to create batches and iterate over them, like this:

\n\n
dataloader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)\n\nfor images, scores in dataloader:\n    # Rest of your code to train neural network or smth\n    ...\n
\n\n

Do with those images and scores what you want in the loop.

\n", + "system": "" + }, + { + "instruction": "How can I slice a PyTorch tensor with another tensor?", + "input": "", + "output": "

Here you go (EDIT: you probably need to copy tensors to cpu using tensor=tensor.cpu() before doing following operations):

\n\n
import numpy as np\n\nindex = torch.tensor([[124, 583, 158, 529],\n    [172, 631, 206, 577]], device='cuda:0')\n# create a concatenated list of ranges of indices you desire to slice\nindexer = np.r_[tuple([np.s_[i:j] for (i,j) in zip(index[0,:],index[1,:])])]\n# slice using numpy indexing\nsliced_inp = inp[:, indexer, :]\n
\n\n

Here is how it works:

\n\n

np.s_[i:j] creates a slice object (simply a range) of indices from start=i to end=j.

\n\n

np.r_[i:j, k:m] creates a list of ALL indices in slices (i,j) and (k,m). (You can pass more slices to np.r_ to concatenate them all together at once; this is an example of concatenating only two slices.)

\n\n

Therefore, indexer creates a list of ALL indices by concatenating a list of slices (each slice is a range of indices).

\n\n
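A tiny self-contained demo of the np.s_/np.r_ mechanics with hard-coded start/end indices:

```python
import numpy as np

# np.s_[i:j] builds a slice object; np.r_ expands each slice into its
# range of indices and concatenates them into one flat index array
starts, ends = [0, 5], [3, 7]
indexer = np.r_[tuple([np.s_[i:j] for (i, j) in zip(starts, ends)])]
print(indexer)  # [0 1 2 5 6]
```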

UPDATE: If you need to remove interval overlaps and sort intervals:

\n\n
indexer = np.unique(indexer)\n
\n\n

if you want to remove interval overlaps but not sort and keep original order (and first occurrences of overlaps)

\n\n
uni = np.unique(indexer, return_index=True)[1]\nindexer = [indexer[index] for index in sorted(uni)]\n
\n", + "system": "" + }, + { + "instruction": "Expected stride to be a single integer value or a list of 1 values to match the convolution dimensions, but got stride=[1, 1]", + "input": "", + "output": "

The dimensions of the inputs for the convolution are not correct for a 2D convolution. Let's have a look at the dimensions you're passing to F.conv2d:

\n\n
self.channels[:, 1].size()\n# => torch.Size([1, 15, 17])\ntorch.tensor([[[0,1,0],[0,0,0],[0,0,0]]]).size()\n# => torch.Size([1, 3, 3])\n
\n\n

The correct sizes should be

\n\n

Input: [batch_size, in_channels, height, width]
Weight: [out_channels, in_channels, kernel_height, kernel_width]

\n\n

Because your weight has only 3 dimensions, it is considered to be a 1D convolution weight, but since you called F.conv2d the stride and padding will be tuples of two values and therefore it won't work.

\n\n

For the input you indexed the second dimension, which selects that particular element across that dimension and eliminates it. To keep that dimension you can index it with a slice of just one element.\nAnd for the weight you are missing one dimension as well, which can just be added directly. Also your weight is of type torch.long, since you only used integers in the tensor creation, but the weight needs to be of type torch.float.

\n\n
F.conv2d(self.channels[:, 1:2], torch.tensor([[[[0,1,0],[0,0,0],[0,0,0]]]], dtype=torch.float), padding=1)\n
\n\n
\n\n

On a different note, I don't think that convolutions are appropriate for this use case, because you're not using a key property of the convolution, which is to capture the surroundings. Those are just too many unnecessary computations to achieve what you want, most of them are multiplications with 0.

\n\n

For example, a move up is much easier to achieve by removing the first row and adding a new row of zeros at the end, so everything is shifted up (assuming that the first row is the top and the last row is the bottom of the board).

\n\n
head = self.channels[:, 1:2]\nbatch_size, channels, height, width = head.size()\n# Take everything but the first row of the head\n# Add a row of zeros to the end by concatenating them across the height (dimension 2)\nnew_head = torch.cat([head[:, :, 1:], torch.zeros(batch_size, channels, 1, width)], dim=2)\n\n# Or if you want to wrap it around the board, it's even simpler.\n# Move the first row to the end\nwrap_around_head = torch.cat([head[:, :, 1:], head[:, :, 0:1]], dim=2)\n
\n", + "system": "" + }, + { + "instruction": "what is the difference between if-else statement and torch.where in pytorch?", + "input": "", + "output": "

Tracking-based AD, as in PyTorch, works by recording the operations that are actually executed. It can't track through things that are not function calls intercepted by the library. By using an if statement like this, there's no connection between x and y, whereas with where, x and y are linked in the expression tree.

\n\n

Now, for the differences:

\n\n

With an if statement, the condition is forced to a single Python boolean, so only one branch is ever executed and recorded; no gradient can relate the output to the branch that was not taken (or to the condition).

\n

torch.where evaluates both x and y and selects between them elementwise, so both stay in the graph and gradients flow to the selected elements of each.

\n\n

(You can do such things even in Python, but that requires more sophisticated technology like source transformation. I don't think it is possible with pytorch.)

\n", + "system": "" + }, + { + "instruction": "IndexError: tensors used as indices must be long, byte or bool tensors", + "input": "", + "output": "
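A small demonstration of the difference (the values are arbitrary): gradients reach both x and y through torch.where, something an if statement cannot express.

```python
import torch

x = torch.tensor([1.0, -2.0], requires_grad=True)
y = torch.tensor([3.0, 4.0], requires_grad=True)

# where keeps both branches in the graph and selects elementwise
out = torch.where(x > 0, x * 2, y * 3)
out.sum().backward()

print(x.grad)  # tensor([2., 0.])
print(y.grad)  # tensor([0., 3.])
```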

The previous answer somehow does not work for me in recent versions of pytorch, but I solved it by casting the index tensor to long -> lab.long()

\n

So, this should work,

\n
NumClass = 10\nmask = torch.zeros(batch_size, self.mem_dim, 4, 4)\nones = torch.ones(1, 4, 4)\nNumRows = self.mem_dim\nElements = NumRows // NumClass\nfor i in range(batch_size):\n    lab = torch.arange(Elements * label[i], Elements * (label[i] + 1), 1)\n    mask[i, lab.long()] = ones\n
\n

There is a catch: as I don't know the exact error message, I have no way to be sure whether one or both of the indices need to be converted to long.

\n", + "system": "" + }, + { + "instruction": "How to interpret the P numbers that fairseq generate produces?", + "input": "", + "output": "
\n

Q: I'm wondering if it is reasonable to say a low (absolute) number in the P row means higher confidence in that particular word?

\n
\n

Yes. The numbers in the P row are per-token log-probabilities, so they are always <= 0; a value close to 0 corresponds to a probability close to 1. A low absolute number therefore does mean higher confidence in that particular word.

\n
\n

Q: What I'm trying to work out is if I can use either the H number, or somehow to use the individual P numbers, to get a confidence measure in its translation.

\n
\n

The H value is the overall hypothesis score, essentially the (by default length-normalized) sum of the per-token log-probabilities, so exponentiating it gives a rough average per-token confidence for the whole translation. Alternatively, you can aggregate the P values yourself, e.g. take the minimum if you care about the least confident word rather than the average.

\n\n

The Net.cuda() has no effect on the random number generator. Under the hood it just calls cuda() for all of the model parameters. So basically it's multiple calls to Tensor.cuda().

\n\n

https://github.com/pytorch/pytorch/blob/ecd3c252b4da3056797f8a505c9ebe8d68db55c4/torch/nn/modules/module.py#L293

\n\n

We can test this by doing the following:

\n\n
torch.random.manual_seed(42)\nx = torch.rand(1)\nx.cuda()\ny = torch.rand(1)\ny.cuda()\nprint(x, y)\n\n# the above prints the same as below\ntorch.random.manual_seed(42)\nprint(torch.rand(1), torch.rand(1))\n
\n\n

So that means Net() is using the number generator to initialize random weights within the layers.

\n\n
torch.manual_seed(args.seed)\nmodel = Net()\nprint(torch.rand(1))\n\n# the below will print a different result\ntorch.manual_seed(args.seed)\nmodel = Net()\ntorch.manual_seed(args.seed)\nprint(torch.rand(1))\n
\n\n

I would recommend narrowing the scope of how random numbers are managed within your Python source code. So that a global block of code outside of the Model isn't responsible for how internal values are generated.

\n\n

Simply said, pass the seed as a parameter to the __init__ of the model.

\n\n
model = Net(args.seed)\nprint(torch.rand(1))\n
\n\n

This will force developers to always provide a seed for consistency when using the model, and you can make the parameter optional if seeding isn't always necessary.

\n\n

I'd avoid using the same seed all the time, because you may end up tuning parameters that only work best with that particular seed.

\n", + "system": "" + }, + { + "instruction": "How to store and load training data comprised 50 millions 25x25 numpy arrays while training a multi-class CNN model?", + "input": "", + "output": "

How to store np.arrays() on disk ?

\n\n

Storing them in an HDF5 file is a good idea. The basic HDF5 type is the Dataset, which contains multidimensional arrays of a homogeneous type. Datasets can be assembled into HDF5 Groups, which can also contain other groups, to create more complex structures. Another way is to pickle your numpy arrays or more abstract dataset objects directly to disk, but then your file would be readable by Python only. It is also discouraged for security reasons. Finally, if you want to optimize your data format for TensorFlow read/write operations, you can use the TFRecord file format. Saving your numpy array in a TFRecord format can be tricky but thankfully someone created a script to do that.

\n\n
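A minimal HDF5 sketch with h5py (assuming h5py is installed; the file name, array count, and chunk size are placeholders):

```python
import os
import tempfile

import numpy as np
import h5py  # assumed installed: pip install h5py

arrays = np.random.rand(1000, 25, 25).astype(np.float32)  # stand-in data

path = os.path.join(tempfile.mkdtemp(), "train.h5")
with h5py.File(path, "w") as f:
    # chunking lets a data loader later read one batch at a time
    # without pulling the whole dataset into memory
    f.create_dataset("images", data=arrays, chunks=(64, 25, 25))

with h5py.File(path, "r") as f:
    batch = f["images"][0:64]  # reads only this slice from disk
print(batch.shape)  # (64, 25, 25)
```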

Should I use a database to store these matrices and to query from them during training?

\n\n

You could, but then you would reinvent the wheel. What you need is one or more separate processes running in parallel with your training process, reading the next batch of training observations (prefetch) and applying some transformations to them while the training process works on the previous batch. This way you avoid any IO and preprocessing delay, and can get some significant performance gains. AI frameworks have developed their own tools for this problem. In PyTorch, there is the class torch.utils.data.DataLoader. Here is a tutorial that shows how to efficiently load HDF5 files using a DataLoader. In TensorFlow, you can create an input pipeline using the class tf.data.Dataset. A basic approach is to first open the file(s) (1), read the data from the file(s) into memory (2), then train your model on what's in memory (3). Let's mock a TF Dataset and training loop:

\n\n
import time\n\nimport tensorflow as tf\n\nclass MyDataset(tf.data.Dataset):\n    def __new__(cls, filename=\"image_dataset.proto\"):\n        time.sleep(0.01) # mock step (1) delay\n        return tf.data.TFRecordDataset([filename])\n\ndef train(dataset, nb_epoch=10):\n    start_time = time.perf_counter()\n    for epoch_num in range(nb_epoch):\n        for sample in dataset: # where step (2) delay takes place\n            time.sleep(0.01) # mock step (3) delay\n    tf.print(\"Execution time:\", time.perf_counter() - start_time)\n
\n\n

You can just apply steps (1, 2, 3) sequentially:

\n\n
train(MyDataset())\n
\n\n

A better way is to read the next batch of data while the training process is still training on the previous batch, so that steps (2, 3) can happen in parallel. Applying transformations to the next batch while still training on the previous one is also possible. To prefetch:

\n\n
train(MyDataset().prefetch(tf.data.experimental.AUTOTUNE))\n
\n\n

Additionally, you can have multiple processes reading your file(s), with several sequences of steps (1, 2) running in parallel:

\n\n
train(tf.data.Dataset.range(2).interleave(\n    lambda _: MyDataset().prefetch(tf.data.experimental.AUTOTUNE),\n    num_parallel_calls=tf.data.experimental.AUTOTUNE))\n
\n\n

Learn more in the documentation.

\n\n

Should I switch to Pytorch instead ?

\n\n

Almost everything that PyTorch can do, TensorFlow can do too. TensorFlow has been the most production-ready AI framework for a while, used by Google for their TPUs. PyTorch is catching up though. I would say that PyTorch is more research/development oriented, while TensorFlow is more production oriented. Another difference is how you design your neural networks: PyTorch works by adding layers on top of each other, while in TensorFlow you first design a computational graph that you run on some input data at some point. People often develop their models in PyTorch, and then export them in a TensorFlow format to use in production.

\n", + "system": "" + }, + { + "instruction": "In PyTorch, what is the difference between forward() and an ordinary method?", + "input": "", + "output": "

The forward() method accepts any type of parameters. However, the goal of forward() is to encapsulate the forward computational steps. forward() is called in the __call__ function, and inside forward() you call the nested submodules themselves to perform the forward pass.

\n\n

It is encouraged to:

\n\n
\n

NOT call the forward(x) method. You should call the whole model itself, as in model(x) to perform a forward pass and output predictions.

\n
\n\n

What happens if you do not do that?

\n\n

If you call the .forward() method, and have hooks in your model, the hooks won\u2019t have any effect.

\n", + "system": "" + }, + { + "instruction": "How to change activation layer in Pytorch pretrained module?", + "input": "", + "output": "
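A quick demonstration with a toy module and a forward hook: the hook fires through model(x) but not through model.forward(x).

```python
import torch
import torch.nn as nn

class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(2, 2)

    def forward(self, x):
        return self.fc(x)

model = Tiny()
calls = []
model.register_forward_hook(lambda mod, inp, out: calls.append(1))

x = torch.randn(1, 2)
model(x)          # __call__ runs the hook
model.forward(x)  # bypasses __call__, so the hook never fires
print(len(calls))  # 1
```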

here is a general function for replacing any layer

\n
def replace_layers(model, old, new):\n    for n, module in model.named_children():\n        if len(list(module.children())) > 0:\n            ## compound module, go inside it\n            replace_layers(module, old, new)\n            \n        if isinstance(module, old):\n            ## simple module\n            setattr(model, n, new)\n\nreplace_layers(model, nn.ReLU, nn.ReLU6())\n
\n

I struggled with it for a few days. So, I did some digging & wrote a kaggle notebook explaining how different types of layers / modules are accessed in pytorch.

\n", + "system": "" + }, + { + "instruction": "Freezing Individual Weights in Pytorch", + "input": "", + "output": "

What you have seems like it would work provided you did it after loss.backward() and before optimizer.step() (referring to the common usage for these variable names). That said, it seems a bit convoluted. Also, if your weights are floating point values then comparing them to exactly zero is probably a bad idea, we could introduce an epsilon to account for this. IMO the following is a little cleaner than the solution you proposed:

\n\n
# locate zero-value weights before training loop\nEPS = 1e-6\nlocked_masks = {n: torch.abs(w) < EPS for n, w in model.named_parameters() if n.endswith('weight')}\n\n...\n\nfor ... #training loop\n\n    ...\n\n    optimizer.zero_grad()\n    loss.backward()\n    # zero the gradients of interest\n    for n, w in model.named_parameters():                                                                                                                                                                           \n        if w.grad is not None and n in locked_masks:                                                                                                                                                                                   \n            w.grad[locked_masks[n]] = 0 \n    optimizer.step()\n
\n", + "system": "" + }, + { + "instruction": "GroupNorm is considerably slower and consumes higher GPU memory than BatchNorm in Pytorch", + "input": "", + "output": "

Yes, you are right, GN does use more resources than BN. I'm guessing this is because it has to calculate the mean and variance for every group of channels of every sample, whereas BN only has to calculate them once over the whole batch.

\n\n

But the advantage of GN is that you can lower your batch size down to 2 without losing any performance, as stated in the paper, so it can make up for the overhead computation.

\n", + "system": "" + }, + { + "instruction": "embedding layer outputs nan", + "input": "", + "output": "
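A small sketch showing that GroupNorm is happy with a batch size of 2, since its statistics are computed per sample and per group rather than across the batch (the channel and group counts here are arbitrary):

```python
import torch
import torch.nn as nn

x = torch.randn(2, 32, 8, 8)  # batch size of only 2

gn = nn.GroupNorm(num_groups=8, num_channels=32)  # stats per group, per sample
bn = nn.BatchNorm2d(32)                           # stats across the whole batch

print(gn(x).shape)  # torch.Size([2, 32, 8, 8])
print(bn(x).shape)  # torch.Size([2, 32, 8, 8])
```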

It is usually the inputs, more than the weights, that tend to become nan (going either too high or too low). Maybe these are incorrect to start with and worsen after some gradient updates. You can identify such inputs by running the tensor or np.array through a simple condition check like:

\n
print("Inp value too high") if len(bert_embeddings[bert_embeddings>1000]) > 1 else None\n
\n

A common mistake for a beginner is to use torch.empty instead of torch.zeros: torch.empty returns uninitialized memory, which invariably leads to NaN over time.

\n

If all your inputs are good, then it is the vanishing or exploding gradients issue. See if the problem worsens after a few iterations. Exploring different activations or clipping gradients usually fixes these types of issues. If you are using recent optimizers, you usually need not worry about adjusting the learning rate.

\n", + "system": "" + }, + { + "instruction": "epsilon parameter in Adam opitmizer", + "input": "", + "output": "

Indeed, you can check this in the docs for the TF Adam optimizer. To quote the relevant part:

\n\n
\n

The default value of 1e-8 for epsilon might not be a good default in general. For example, when training an Inception network on ImageNet a current good choice is 1.0 or 0.1. Note that since AdamOptimizer uses the formulation just before Section 2.1 of the Kingma and Ba paper rather than the formulation in Algorithm 1, the \"epsilon\" referred to here is \"epsilon hat\" in the paper.

\n
\n\n

If you check the \"the formulation just before section 2.1\" in the paper, they actually include the time dependence in alpha, resulting in a time-dependent \"step size\" alpha_t but a fixed epsilon. Note that at the end of the day, this is just rewriting/interpreting parameters in a slightly different fashion and doesn't change the actual workings of the algorithm. But you will need to be aware that choosing the same epsilon in the PyTorch and TF implementations will apparently not lead to the same results...

\n", + "system": "" + }, + { + "instruction": "Loss is not converging in Pytorch but does in Tensorflow", + "input": "", + "output": "
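For illustration, epsilon can be set explicitly in PyTorch to match another framework's default (the value 1e-7 here is tf.keras's current Adam default, to the best of my knowledge, while PyTorch defaults to 1e-8):

```python
import torch

model = torch.nn.Linear(4, 2)
# override PyTorch's default eps=1e-8 ("epsilon hat" in the paper)
opt = torch.optim.Adam(model.parameters(), lr=1e-3, eps=1e-7)
print(opt.defaults["eps"])  # 1e-07
```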

I think the problem is in the dataloaders. Here I noticed that you're not passing the samplers to the loaders:

\n\n
# define samplers for obtaining training and validation batches\ntrain_sampler = SubsetRandomSampler(train_idx)\nvalid_sampler = SubsetRandomSampler(valid_idx)\n\ntrain_loader = torch.utils.data.DataLoader(\n        train_dataset,\n        batch_size=16,\n        num_workers=0,\n        shuffle=True\n    )\n\ntest_loader = torch.utils.data.DataLoader(\n        test_dataset,\n        batch_size=16,\n        num_workers=0,\n        shuffle=True\n    )\n
\n\n

I have never used samplers, so I don't know how to use them correctly, but I suppose you wanted to do something like this:

\n\n
# define samplers for obtaining training and validation batches\ntrain_sampler = SubsetRandomSampler(train_idx)\nvalid_sampler = SubsetRandomSampler(valid_idx)\n\ntrain_loader = torch.utils.data.DataLoader(\n        train_dataset,\n        sampler=train_sampler,\n        batch_size=16,\n        num_workers=0\n    )\n\ntest_loader = torch.utils.data.DataLoader(\n        test_dataset,\n        sampler=valid_sampler,\n        batch_size=16,\n        num_workers=0\n    )\n
\n\n

And according to docs:

\n\n
\n

sampler (Sampler, optional) \u2013 defines the strategy to draw samples from the dataset. If specified, shuffle must be False.

\n
\n\n

if you are using samplers you should turn off shuffle.

\n", + "system": "" + }, + { + "instruction": "Adding batch normalization decreases the performance", + "input": "", + "output": "

My interpretation of the phenomenon you are observing is that instead of reducing the covariance shift, which is what Batch Normalization is meant for, you are increasing it. In other words, instead of decreasing the distribution differences between train and test, you are increasing them, and that's what is causing the bigger gap between train and test accuracies. Batch Normalization does not always guarantee better performance; for some problems it doesn't work well. I have several ideas that could lead to an improvement:

\n\n", + "system": "" + }, + { + "instruction": "How to get the full Jacobian of a derivative in PyTorch?", + "input": "", + "output": "

torch.autograd.grad in PyTorch is aggregated: it computes a vector-Jacobian product, summing over the outputs. To get the full Jacobian of a vector-valued function with respect to its input, use torch.autograd.functional.jacobian.

\n", + "system": "" + }, + { + "instruction": "Pytorch Build from Source gives Error make: *** No rule to make target 'install'. Stop", + "input": "", + "output": "
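A minimal sketch (the function here is just an example):

```python
import torch

def f(x):
    # vector-valued function of a vector input
    return torch.stack((x[0] ** 2 + x[1], x[1] ** 2))

x = torch.tensor([1.0, 2.0])
J = torch.autograd.functional.jacobian(f, x)
print(J)  # tensor([[2., 1.], [0., 4.]])
```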

I solved this problem with reference to this link.

\n

This error happened on my second installation attempt. During the first one I forgot to install the C++ compiler and got CMAKE_CXX_COMPILER not found. With a proper compiler installed, the second attempt gave me the "No rule to make target 'install'" error mentioned in the question.

\n

The problem was solved by removing the build/ directory and re-running python setup.py install.

\n

So it seems this is caused by the cached build information.

\n", + "system": "" + }, + { + "instruction": "How to obtain input data from ONNX model?", + "input": "", + "output": "

Onnx library provides APIs to extract the names and shapes of all the inputs as follows:

\n
import onnx\n\nmodel = onnx.load(onnx_model)\ninputs = {}\nfor inp in model.graph.input:\n    shape = str(inp.type.tensor_type.shape.dim)\n    inputs[inp.name] = [int(s) for s in shape.split() if s.isdigit()]\n
\n", + "system": "" + }, + { + "instruction": "How can I use KNN, Random Forest models in Pytorch?", + "input": "", + "output": "

Update

\n

Currently, there are some sklearn alternatives utilizing GPU, most prominent being cuML (link here) provided by rapidsai.

\n

Previous answer

\n

I would advise against using PyTorch solely for the purpose of using batches.

\n

Argumentation goes as follows:

\n
    \n
  1. scikit-learn has docs about scaling where one can find MiniBatchKMeans and there are other options like partial_fit method or warm_start arguments (as is the case with RandomForest, check this approach).
  2. \n
  3. KNN cannot be easily used without a hand-made implementation with disk caching, as it stores the whole dataset in memory (and you lack RAM). This approach would be horribly inefficient either way; do not try.
  4. \n
  5. You most probably will not be able to create algorithms on par with those from scikit (at least not solo and not without a considerable amount of work). Your best bet is to go with quite battle-tested solutions (even though it's still 0.2x currently). It should be possible to get some speed improvements through numba, but that's beyond the scope of this question. Maybe you could utilize CUDA for different algorithms, but it's an even more non-trivial task.
  6. \n
\n

All in all, PyTorch is suited for deep learning computations with heavy CUDA usage. If you need neural networks, this framework is one of the best out there; otherwise go with something like sklearn or other frameworks allowing incremental training. You can always bridge those two easily with numpy() and a few other calls in pytorch.

\n

EDIT: I have found a KNN implementation possibly suiting your requirements in this github repository

\n", + "system": "" + }, + { + "instruction": "Understanding Gradient in Pytorch", + "input": "", + "output": "

Backpropagation is based on the chain-rule for calculating derivatives. This means the gradients are computed step-by-step from tail to head and always passed back to the previous step (\"previous\" w.r.t. the preceding forward pass).

\n\n

For scalar output the process is initiated by assuming a gradient of d (out1) / d (out1) = 1. If you're calling backward on a (non-scalar) tensor though, you need to provide the initial gradient yourself, since the choice is ambiguous.
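A minimal sketch of the non-scalar case (the values are made up):

```python
import torch

a = torch.tensor([1.0, 2.0], requires_grad=True)
b = a ** 2
# b is not a scalar, so backward() needs an explicit initial gradient
b.backward(gradient=torch.ones_like(b))
print(a.grad)  # tensor([2., 4.])
```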

\n\n

Let's look at an example that involves more steps to compute the output:

\n\n
a = torch.tensor(1., requires_grad=True)\nb = a**2\nc = 5*b\nc.backward()\nprint(a.grad)  # Prints: 10.\n
\n\n

So what happens here?

\n\n
    \n
  1. The process is initiated by using d(c)/d(c) = 1.
  2. \n
  3. Then the previous gradient is computed as d(c)/d(b) = 5 and multiplied with the downstream gradient (1 in this case), i.e. 5 * 1 = 5.
  4. \n
  5. Again the previous gradient is computed as d(b)/d(a) = 2*a = 2 and multiplied again with the downstream gradient (5 in this case), i.e. 2 * 5 = 10.
  6. \n
  7. Hence we arrive at a gradient value of 10 for the initial tensor a.
  8. \n
\n\n

Now in effect this calculates d(c)/d(a) and that's all there is to it. It is the gradient of c with respect to a and hence no notion of a \"target loss\" is used (even if the loss was zero that doesn't mean the gradient has to be; it is up to the optimizer to step into the correct (downhill) direction and to stop once the loss got sufficiently small).

\n", + "system": "" + }, + { + "instruction": "How to transfer the following tensorflow code into pytorch", + "input": "", + "output": "

The PyTorch equivalent of the TensorFlow part of the code is below, explained with comments in the code itself. You have to import truncnorm from scipy.

\n\n
from scipy.stats import truncnorm  # extra import, equivalent to tf.truncated_normal initialisation\n\npooled_outputs_u = []\n\nfor i, filter_size in enumerate(filter_sizes):\n    # PyTorch conv2d weights are (out_channels, in_channels, kH, kW),\n    # unlike TF's (kH, kW, in_channels, out_channels)\n    filter_shape = [num_filters, 1, filter_size, embedding_size]\n    # W is just a tensor now that will act as the weight of the conv layer\n    W = torch.tensor(truncnorm.rvs(-1, 1, size=filter_shape), dtype=torch.float32)\n    # bias initialized with 0.1 initial values\n    b = torch.zeros([num_filters]) + 0.1\n    # conv layer with the same parameters as the tensorflow layer, more on this in link 1\n    conv = torch.nn.functional.conv2d(self.embedded_users, W, bias=b, stride=1, padding=0)\n    # can use torch.nn.Sequential to include it all in a single line, but did it like this for better understanding\n    h = torch.relu(conv)\n    # look at link 2 for what a max pool layer does; basically it is a kind of feature extraction\n    pooled = torch.nn.functional.max_pool2d(h, kernel_size=(user_length - filter_size + 1, 1), stride=1, padding=0)\n    pooled_outputs_u.append(pooled)\nnum_filters_total = num_filters * len(filter_sizes)\n# channels are dim 1 in PyTorch's NCHW layout (dim 3 in TF's NHWC)\nself.h_pool_u = torch.cat(pooled_outputs_u, dim=1)\nself.h_pool_flat_u = torch.reshape(self.h_pool_u, [-1, num_filters_total])
\n\n

Reference:

\n\n

Link 1

\n\n

Link 2

\n", + "system": "" + }, + { + "instruction": "How to add augmented images to original dataset using Pytorch?", + "input": "", + "output": "

Why do you want it? Generally speaking, it is enough to increase the number of epochs over the dataset, and your model will see the original and the augmented version of every image at least once (assuming a relatively high number of epochs).

\n

Explanation:

\n

For instance, if your augmentation has a 50% chance of being applied, after 100 epochs, for every sample you will get ~50 occurrences of the original image and ~50 augmented samples. So, increasing the dataset size is equivalent to adding epochs, but (maybe) less efficient in terms of memory (you need to store the images in memory to get high performance).

\n", + "system": "" + }, + { + "instruction": "'ToPILImage' object has no attribute 'show'", + "input": "", + "output": "

The ToPILImage method accepts a tensor or an ndarray as input, source.

\n\n

You will need to pass a single image tensor to the ToPILImage method. From your post, I suspect you are passing a batch of image tensors instead of one, hence the error.

\n\n

Assuming you want to visualize the image from tensor c:

\n\n
img = transforms.ToPILImage()(c)\nimg.show()\n
\n", + "system": "" + }, + { + "instruction": "Pytorch equivalent of Numpy's logical_and and kin?", + "input": "", + "output": "

Update: With version 1.2, PyTorch introduced the torch.bool datatype, which can be used via torch.BoolTensor:

\n\n
>>> a = torch.BoolTensor([False, True, True, False])  # or pass [0, 1, 1, 0]\n>>> b = torch.BoolTensor([True, True, False, False])\n\n>>> a & b  # logical and\ntensor([False,  True, False, False])\n\n
\n\n
\n\n

PyTorch supports logical operations on ByteTensor. You can use logical operations using &, |, ^, ~ operators as follows:

\n\n
>>> a = torch.ByteTensor([0, 1, 1, 0])\n>>> b = torch.ByteTensor([1, 1, 0, 0])\n\n>>> a & b  # logical and\ntensor([0, 1, 0, 0], dtype=torch.uint8)\n\n>>> a | b  # logical or\ntensor([1, 1, 1, 0], dtype=torch.uint8)\n\n>>> a ^ b  # logical xor\ntensor([1, 0, 1, 0], dtype=torch.uint8)\n\n>>> ~a  # logical not\ntensor([1, 0, 0, 1], dtype=torch.uint8)\n
\n", + "system": "" + }, + { + "instruction": "PyTorch: get input layer size", + "input": "", + "output": "

Assuming your model is called model, this will give the number of input features of the fc1 layer:

\n\n
model.fc1.in_features\n
\n\n

This is useful inside the .forward() method:

\n\n
def forward(self, x):\n    x = x.view(-1, self.fc1.in_features)  # resize the input to match the input layer\n    ...\n
\n", + "system": "" + }, + { + "instruction": "Error in Google Colaboratory - AttributeError: module 'PIL.Image' has no attribute 'register_decoder'", + "input": "", + "output": "

First, check the version of pillow you have by using:

\n\n
import PIL\nprint(PIL.PILLOW_VERSION)\n
\n\n

and make sure you have the newest version; the one I am using right now is 5.3.0.\nIf you have an older one like 4.0.0, install a new version by using:\n!pip install Pillow==5.3.0 in the Colab environment.

\n\n

Second, restart your Google colab environment, and check the version again, it should be updated.

\n\n

I had the same problem, and I spent some time trying to solve it.

\n\n

Note: I was using PyTorch 0.4.

\n\n

I hope this will solve your problem.

\n", + "system": "" + }, + { + "instruction": "How to learn the embeddings in Pytorch and retrieve it later", + "input": "", + "output": "

model.parameters() returns all the parameters of your model, including the embeddings.

\n\n

So all these parameters of your model are handed over to the optimizer (line below) and will be trained later when calling optimizer.step() - so yes your embeddings are trained along with all other parameters of the network.
(you can also freeze certain layers by setting i.e. embedding.weight.requires_grad = False, but this is not the case here).

\n\n
# summing it up:\n# this line specifies which parameters are trained with the optimizer\n# model.parameters() just returns all parameters\n# embedding class weights are also parameters and will thus be trained\noptimizer = optim.SGD(model.parameters(), lr=0.001)\n
\n\n

You can see that your embedding weights are also of type Parameter by doing so:

\n\n
import torch\nembedding_maxtrix = torch.nn.Embedding(10, 10)\nprint(type(embedding_maxtrix.weight))\n
\n\n

This will output the type of the weights, which is Parameter:

\n\n
<class 'torch.nn.parameter.Parameter'>\n
\n\n
\n\n

I'm not entirely sure what you mean by retrieve. Do you mean getting a single vector, do you want the whole matrix in order to save it, or something else?

\n\n
embedding_maxtrix = torch.nn.Embedding(5, 5)\n# this will get you a single embedding vector\nprint('Getting a single vector:\\n', embedding_maxtrix(torch.LongTensor([0])))\n# of course you can do the same for a seqeunce\nprint('Getting vectors for a sequence:\\n', embedding_maxtrix(torch.LongTensor([1, 2, 3])))\n# this will give the the whole embedding matrix\nprint('Getting weights:\\n', embedding_maxtrix.weight.data)\n
\n\n

Output:

\n\n
Getting a single vector:\n tensor([[-0.0144, -0.6245,  1.3611, -1.0753,  0.5020]], grad_fn=<EmbeddingBackward>)\nGetting vectors for a sequence:\n tensor([[ 0.9277, -0.1879, -1.4999,  0.2895,  0.8367],\n        [-0.1167, -2.2139,  1.6918, -0.3483,  0.3508],\n        [ 2.3763, -1.3408, -0.9531,  2.2081, -1.5502]],\n       grad_fn=<EmbeddingBackward>)\nGetting weights:\n tensor([[-0.0144, -0.6245,  1.3611, -1.0753,  0.5020],\n        [ 0.9277, -0.1879, -1.4999,  0.2895,  0.8367],\n        [-0.1167, -2.2139,  1.6918, -0.3483,  0.3508],\n        [ 2.3763, -1.3408, -0.9531,  2.2081, -1.5502],\n        [-0.5829, -0.1918, -0.8079,  0.6922, -0.2627]])\n
\n\n

I hope this answers your question, you can also take a look at the documentation, there you can find some useful examples as well.

\n\n

https://pytorch.org/docs/stable/nn.html#torch.nn.Embedding

\n", + "system": "" + }, + { + "instruction": "PyTorch softmax with dim", + "input": "", + "output": "

You have a 1x3x4x2 tensor train_dataset. Your softmax function's dim parameter determines across which dimension to perform Softmax operation. First dimension is your batch dimension, second is depth, third is rows and last one is columns. Please look at picture below (sorry for horrible drawing) to understand how softmax is performed when you specify dim as 1.\n\"enter

\n\n

In short, the sum of each corresponding entry of your 4x2 matrices is equal to 1.

\n\n
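A minimal sketch verifying this (with random values):

```python
import torch

t = torch.rand(1, 3, 4, 2)
s = torch.nn.functional.softmax(t, dim=1)
# summing across dim 1 gives 1 at every (batch, row, column) position
print(s.sum(dim=1))
```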

Update: The question of which dimension softmax should be applied along depends on what data your tensor stores and what your goal is.

\n\n

Update: For image classification tasks, please see the tutorial on the official pytorch website. It covers the basics of image classification with pytorch on a real dataset, and it's a very short tutorial. Although that tutorial does not perform the Softmax operation, what you need to do is just use torch.nn.functional.log_softmax on the output of the last fully connected layer. See MNIST classifier with pytorch for a complete example. It does not matter whether your image is RGB or grayscale after flattening it for the fully connected layers (also keep in mind that the same code for the MNIST example might not work for you; it depends on which pytorch version you use).

\n", + "system": "" + }, + { + "instruction": "PyTorch - create padded tensor from sequences of variable length", + "input": "", + "output": "

As an add-on to the answer already given by @iacolippo:

\n

I just stumbled over torch.nn.utils.rnn.pad_sequence; as this works a bit differently from the solution by @iacolippo, I post it here.

\n

It takes a list of tensors of variable length and combines them into a matrix, padding all sequences to the longest given sequence.

\n

Code example:

\n
import torch\n\na = torch.tensor([1,2,3])\nb = torch.tensor([1,2])\nc = torch.tensor([1])\ntorch.nn.utils.rnn.pad_sequence((a,b,c), batch_first=True)\n
\n

Output - padded sequences:

\n
tensor([[ 1,  2,  3],\n        [ 1,  2,  0],\n        [ 1,  0,  0]])\n
\n
\n

Signature of torch.nn.utils.rnn.pad_sequence:

\n
\n

torch.nn.utils.rnn.pad_sequence (sequences, batch_first=False, padding_value=0)

\n\n
\n", + "system": "" + }, + { + "instruction": "dimension out of range (expected to be in range of [-2, 1], but got 2)", + "input": "", + "output": "

torch.mm:

\n\n
\n

Performs a matrix multiplication of the matrices mat1 and mat2.

\n \n

If mat1 is a (n\u00d7m) tensor, mat2 is a (m\u00d7p) tensor, out will be a (n\u00d7p)\n tensor.

\n
\n\n

torch.bmm:

\n\n
\n

Performs a batch matrix-matrix product of matrices stored in batch1\n and batch2.

\n \n

batch1 and batch2 must be 3-D tensors each containing the same number\n of matrices.

\n \n

If batch1 is a (b\u00d7n\u00d7m) tensor, batch2 is a (b\u00d7m\u00d7p) tensor, out will be\n a (b\u00d7n\u00d7p) tensor.

\n
\n\n

The following code snippet works.

\n\n
import torch\n\nx = torch.mm(torch.randn(100, 78) , torch.randn(78, 10))\nbsize = 16\nx = torch.bmm(torch.randn(bsize, 100, 78) , torch.randn(bsize, 78, 10))\n
\n", + "system": "" + }, + { + "instruction": "PyTorch: DecoderRNN: RuntimeError: input must have 3 dimensions, got 2", + "input": "", + "output": "

Your GRU input needs to be 3-dimensional:

\n\n
\n

input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence.

\n
\n\n

Further you need to provide the hidden state (last encoder hidden state in this case) as second parameter:

\n\n
self.gru(input, h_0)\n
\n\n

Where input is your actual input and h_0 the hidden state which needs to be 3-dimensional as well:

\n\n
\n

h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor\n containing the initial hidden state for each element in the batch.\n Defaults to zero if not provided.

\n
\n\n

https://pytorch.org/docs/master/nn.html#torch.nn.GRU
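A minimal sketch of the expected shapes (the hyperparameters are made up):

```python
import torch
import torch.nn as nn

gru = nn.GRU(input_size=10, hidden_size=20, num_layers=1)
seq_len, batch = 5, 3
inp = torch.randn(seq_len, batch, 10)  # 3-D input: (seq_len, batch, input_size)
h0 = torch.zeros(1, batch, 20)         # (num_layers * num_directions, batch, hidden_size)
out, hn = gru(inp, h0)
print(out.shape)  # torch.Size([5, 3, 20])
```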

\n", + "system": "" + }, + { + "instruction": "PyTorch: purpose of addmm function", + "input": "", + "output": "

The addmm function is an optimized version of the equation beta*mat + alpha*(mat1 @ mat2). I ran some tests and timed their execution.
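A minimal sketch verifying the equivalence (the shapes are arbitrary):

```python
import torch

M = torch.randn(3, 5)
mat1 = torch.randn(3, 4)
mat2 = torch.randn(4, 5)

# fused op vs. the spelled-out equation beta*M + alpha*(mat1 @ mat2)
out_addmm = torch.addmm(M, mat1, mat2, beta=2.0, alpha=0.5)
out_manual = 2.0 * M + 0.5 * (mat1 @ mat2)
print(torch.allclose(out_addmm, out_manual))  # True
```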

\n\n\n", + "system": "" + }, + { + "instruction": "Masking diagonal to a specific value with PyTorch tensors", + "input": "", + "output": "

One way to do it:

\n\n
>>> import torch\n>>> n = 3\n>>> t = torch.zeros((n,n))\n>>> t[torch.eye(n).byte()] = 5\n>>> t\n\n 5  0  0\n 0  5  0\n 0  0  5\n[torch.FloatTensor of size 3x3]\n
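Newer PyTorch versions also provide the in-place Tensor.fill_diagonal_, which avoids building a mask (a minimal sketch; check it exists in your installed version):

```python
import torch

n = 3
t = torch.zeros(n, n)
t.fill_diagonal_(5)  # in-place
print(t)
```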
\n", + "system": "" + }, + { + "instruction": "What is the difference between softmax and log-softmax?", + "input": "", + "output": "

By default, torch.log provides the natural logarithm of the input, so the output of PyTorch is correct:

\n\n
ln([0.5611,0.4389])=[-0.5778,-0.8236]\n
\n\n

Your last results are obtained using the logarithm with base 10.
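A minimal sketch verifying that log-softmax is the natural log of the softmax output (made-up input):

```python
import torch

x = torch.tensor([1.0, 0.5])
softmax = torch.softmax(x, dim=0)
# log-softmax == ln(softmax), not log10(softmax)
print(torch.log(softmax))
print(torch.log_softmax(x, dim=0))  # same values
```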

\n", + "system": "" + }, + { + "instruction": "How to use pack_padded_sequence with multiple variable-length input with the same label in pytorch", + "input": "", + "output": "

Let's do it step by step.

\n\n
\n

Input Data Processing

\n
\n\n
a = (([0,1,2], [3,4], [5,6,7,8]), 1)\n\n# store length of each element in an array\nlen_a = np.array([len(s) for s in a[0]])\nvariable_a = np.zeros((len(len_a), np.amax(len_a)))\nfor i, s in enumerate(a[0]):\n    variable_a[i, 0:len(s)] = s\n\nvocab_size = len(np.unique(variable_a))\nvariable_a = Variable(torch.from_numpy(variable_a).long())\nprint(variable_a)
\n\n

It prints:

\n\n
Variable containing:\n 0  1  2  0\n 3  4  0  0\n 5  6  7  8\n[torch.LongTensor of size 3x4]
\n\n
\n

Defining embedding and RNN layer

\n
\n\n

Now, let's say, we have an Embedding and RNN layer class as follows.

\n\n
class EmbeddingLayer(nn.Module):\n\n    def __init__(self, input_size, emsize):\n        super(EmbeddingLayer, self).__init__()\n        self.embedding = nn.Embedding(input_size, emsize)\n\n    def forward(self, input_variable):\n        return self.embedding(input_variable)\n\n\nclass Encoder(nn.Module):\n\n    def __init__(self, input_size, hidden_size, bidirection):\n        super(Encoder, self).__init__()\n        self.input_size = input_size\n        self.hidden_size = hidden_size\n        self.bidirection = bidirection\n        self.rnn = nn.LSTM(self.input_size, self.hidden_size, batch_first=True, \n                                    bidirectional=self.bidirection)\n\n    def forward(self, sent_variable, sent_len):\n        # Sort by length (keep idx)\n        sent_len, idx_sort = np.sort(sent_len)[::-1], np.argsort(-sent_len)\n        idx_unsort = np.argsort(idx_sort)\n\n        idx_sort = torch.from_numpy(idx_sort)\n        sent_variable = sent_variable.index_select(0, Variable(idx_sort))\n\n        # Handling padding in Recurrent Networks\n        sent_packed = nn.utils.rnn.pack_padded_sequence(sent_variable, sent_len, batch_first=True)\n        sent_output = self.rnn(sent_packed)[0]\n        sent_output = nn.utils.rnn.pad_packed_sequence(sent_output, batch_first=True)[0]\n\n        # Un-sort by length\n        idx_unsort = torch.from_numpy(idx_unsort)\n        sent_output = sent_output.index_select(0, Variable(idx_unsort))\n\n        return sent_output\n
\n\n
\n

Embed and encode the processed input data

\n
\n\n

We can embed and encode our input as follows.

\n\n
emb = EmbeddingLayer(vocab_size, 50)\nenc = Encoder(50, 100, False)\n\nemb_a = emb(variable_a)\nenc_a = enc(emb_a, len_a)
\n\n

If you print the size of enc_a, you will get torch.Size([3, 4, 100]). I hope you understand the meaning of this shape.

\n\n

Please note, the above code runs only on CPU.

\n", + "system": "" + }, + { + "instruction": "Pytorch network parameter calculation", + "input": "", + "output": "

Most layer modules in PyTorch (e.g. Linear, Conv2d, etc.) group parameters into specific categories, such as weights and biases. Each of the five layer instances in your network has a \"weight\" and a \"bias\" parameter. This is why \"10\" is printed.

\n\n

Of course, all of these \"weight\" and \"bias\" fields contain many parameters. For example, your first fully connected layer self.fc1 contains 16 * 5 * 5 * 120 = 48000 parameters. So len(params) doesn't tell you the number of parameters in the network--it gives you just the total number of \"groupings\" of parameters in the network.
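A minimal sketch illustrating the grouping for a single Linear layer:

```python
import torch.nn as nn

fc1 = nn.Linear(16 * 5 * 5, 120)
params = list(fc1.parameters())
print(len(params))        # 2 -> one "weight" group and one "bias" group
print(params[0].numel())  # 48000 parameters inside the weight group alone
```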

\n", + "system": "" + }, + { + "instruction": "Pytorch: Trying to apply the transform to a numpy array... fails with an error", + "input": "", + "output": "

Most transforms methods take only PIL objects as input. But you can add another transform called transforms.ToPILImage(), which takes an nd-array as input and converts it to a PIL object. So in your case, the dictionary variable should become:

\n\n
data_transforms = {\n'train': transforms.Compose([\n    transforms.ToPILImage(),\n    transforms.Scale(256),\n    transforms.Pad(4,0),\n    transforms.ToTensor(),\n    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n]),\n'val': transforms.Compose([\n    transforms.Scale(256),\n    transforms.Pad(4,0),\n    transforms.ToTensor(),\n    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n]),\n}\n
\n\n

Note that these transformations work sequentially. So it's necessary you add the toPILImage transform as the first transformation. Hence your nd-array is first converted to PIL object, and then other transformations are applied.

\n", + "system": "" + }, + { + "instruction": "How two rows can be swapped in a torch tensor?", + "input": "", + "output": "

The other answer does not work, as some dimensions get overwritten before they are copied:

\n
>>> var = [[0, 1, -4, 8],\n       [2, -3, 2, 1],\n       [5, -8, 7, 1]]\n>>> x = torch.tensor(var)\n>>> index = torch.LongTensor([1, 0, 2])\n>>> x[index] = x\n>>> x\ntensor([[ 0,  1, -4,  8],\n        [ 0,  1, -4,  8],\n        [ 5, -8,  7,  1]])\n
\n

For me, it suffices to create a new tensor (with separate underlying storage) to hold the result:

\n
>>> x = torch.tensor(var)\n>>> index = torch.LongTensor([1, 0, 2])\n>>> y = torch.zeros_like(x)\n>>> y[index] = x\n
\n

Alternatively, you can use index_copy_ (following this explanation in discuss.pytorch.org), although I don't see an advantage for either way at the moment.

\n", + "system": "" + }, + { + "instruction": "Best way to use Python iterator as dataset in PyTorch", + "input": "", + "output": "

PyTorch's DataLoader actually has official support for an iterable dataset, but it just has to be an instance of a subclass of torch.utils.data.IterableDataset:

\n
\n

An iterable-style dataset is an instance of a subclass of\nIterableDataset that implements the __iter__() protocol, and\nrepresents an iterable over data samples

\n
\n

So your code would be written as:

\n
from torch.utils.data import IterableDataset\n\nclass MyIterableDataset(IterableDataset):\n    def __init__(self, iterable):\n        self.iterable = iterable\n\n    def __iter__(self):\n        return iter(self.iterable)\n\n...\n\nBATCH_SIZE = 3\n\ntrain_dataloader = DataLoader(MyIterableDataset(example_generator()),\n                              batch_size = BATCH_SIZE,\n                              shuffle=False)\n
\n", + "system": "" + }, + { + "instruction": "Implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW", + "input": "", + "output": "

If you comment out both these lines:

\n
import torch_optimizer as optim\nfrom transformers import AdamW\n
\n

and then use:

\n
optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=1e-5)\n
\n

does it work? If not, what is the error?

\n
\n

To switch optimizer, put optim="adamw_torch" in your TrainingArguments (the default is "adamw_hf")

\n
\n

This is referring to Huggingface Trainer, which is configured with a TrainingArguments instance. But as you are using your own training loop, this is not applicable to you.

\n", + "system": "" + }, + { + "instruction": ""TypeError: getattr(): attribute name must be string" in PyTorch diffusers, how to fix it?", + "input": "", + "output": "

The device should be mps (device='mps'). Mac M1 has no inbuilt Nvidia GPU.

\n

Also, I would suggest you check How to use Stable Diffusion in Apple Silicon (M1/M2)\nHG blog and make sure all the requirements are satisfied.

\n

Also, check for your installed diffusers version.

\n
import diffusers\nprint(diffusers.__version__)\n
\n

If it is <=0.4.0, please update it using,

\n
pip install --upgrade diffusers transformers scipy\n
\n", + "system": "" + }, + { + "instruction": "Cannot plot model graph with pytorch HiddenLayer - module 'torch.onnx' has no attribute '_optimize_trace'", + "input": "", + "output": "

I followed the error's recommendation and changed _optimize_trace to _optimize_graph in pytorch_builder.py line 71. After that it worked correctly.

\n", + "system": "" + }, + { + "instruction": "How to create a normal 2d distribution in pytorch", + "input": "", + "output": "

Sampling from the multivariate normal distribution

\n

You can use MultivariateNormal to sample from a multivariate normal.

\n
>>> h, w = 200, 200\n>>> fmap = torch.zeros(h, w)\n
\n

Fill fmap with the origin points:

\n
>>> pts = torch.rand(20, 2)\n>>> pts *= torch.tensor([h, w])\n>>> x, y = pts.T.long()\n>>> x, y = x.clip(0, h), y.clip(0, w)\n\n>>> fmap[x, y] = 1\n
\n

Following this, we can sample from the following distribution (you can adjust the covariance matrix accordingly):

\n
>>> sampler = MultivariateNormal(pts.T, 10*torch.eye(len(pts)))\n\n>>> for x in range(10):\n...    x, y = sampler.sample()\n...    x, y = x.clip(0, h).long(), y.clip(0, w).long()\n...    fmap[x, y] = 1\n
\n

As a result, you can end up with something like:

\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Origin pointsNormal sampling
\"!\"\"!\"
\n
\n

This is not documented well enough, but you can pass the sample shape to the sample function. This allows you to sample multiple points per call, i.e. you only need one to populate your canvas.

\n

Here is a function to draw from MultivariateNormal:

\n
def multivariate_normal_sampler(mean, cov, k):\n    sampler = MultivariateNormal(mean, cov)\n    return sampler.sample((k,)).swapaxes(0,1).flatten(1)\n
\n

Then you can call it as:

\n
>>> x, y = multivariate_normal_sampler(mean=pts.T, cov=50*torch.eye(len(pts)), k=1000)\n
\n

Clip the samples:

\n
>>> x, y = x.clip(0, h-1).long(), y.clip(0, w-1).long()\n
\n

Finally insert into fmap and draw:

\n
>>> fmap[x, y] += .1\n
\n

Here is an example preview:

\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n
k=1,000k=50,000
\"!\"\"!\"
\n
\n

The utility function is available as torch.distributions.multivariate_normal.MultivariateNormal

\n
\n

Computing the density map using the pdf

\n

Alternatively, instead of sampling from the normal distribution, you could compute the density values based on its probability density function (pdf):

\n

\"!\"

\n

A particular example of a two-dimensional Gaussian function is:

\n

\"enter

\n

Origin points:

\n
>>> h, w = 50, 50\n>>> x0, y0 = torch.rand(2, 20)\n>>> origins = torch.stack((x0*h, y0*w)).T\n
\n

Define the gaussian 2D pdf:

\n
def gaussian_2d(x=0, y=0, mx=0, my=0, sx=1, sy=1):\n    return 1 / (2*math.pi*sx*sy) * \\\n      torch.exp(-((x - mx)**2 / (2*sx**2) + (y - my)**2 / (2*sy**2)))\n
\n

Construct the grid and accumulate the gaussians from each origin points:

\n
x = torch.linspace(0, h, h)\ny = torch.linspace(0, w, w)\nx, y = torch.meshgrid(x, y)\n\nz = torch.zeros(h, w)\nfor x0, y0 in origins:\n  z += gaussian_2d(x, y, mx=x0, my=y0, sx=h/10, sy=w/10)\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n
Multivariate normal distributions
\"!\"
\n
\n

The code to plot the grid of values is simply using matplotlib.pyplot.pcolormesh: plt.pcolormesh(x, y, z).

\n", + "system": "" + }, + { + "instruction": "PyTorch nn.CrossEntropyLoss IndexError: Target 2 is out of bounds", + "input": "", + "output": "

You have two classes, which means the maximum target label is 1, not 2, because classes are indexed from 0 (see the official documentation). You essentially have to subtract 1 from your labels tensor, such that class n\u00b01 is assigned the value 0, and class n\u00b02 the value 1.

\n

In turn the labels of the batch you printed would look like:

\n
tensor([1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1,\n        1, 1, 1, 1, 1, 0, 0, 1])\n
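A minimal sketch of the fix (the labels tensor here is a made-up example with values 1 and 2):

```python
import torch
import torch.nn.functional as F

labels = torch.tensor([2, 1, 2, 1])  # classes numbered 1 and 2
targets = labels - 1                 # re-index to 0 and 1
logits = torch.randn(4, 2)           # two output units -> valid targets are 0 and 1
loss = F.cross_entropy(logits, targets)  # no IndexError
```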
\n", + "system": "" + }, + { + "instruction": "How to fix RuntimeError CUDA error CUBLAS_STATUS_INVALID_VALUE when calling `cublasSgemm`?", + "input": "", + "output": "

It may be an incomplete error reporting of a shape error:

\n

A mismatch between the dimensions of an nn.Linear module and its input, for example x.shape == [a, b] going into an nn.Linear(c, c, bias=False) with c not matching b, will result in this error message.

\n

See the Pytorch forum conversation.

\n", + "system": "" + }, + { + "instruction": "RTX 3070 compatibility with Pytorch", + "input": "", + "output": "

It might be because you have installed a torch package with cuda==10.* (e.g. torch==1.9.0+cu102) . I'd suggest trying:

\n
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html\n
\n", + "system": "" + }, + { + "instruction": "What is the correct way to calculate the norm, 1-norm, and 2-norm of vectors in PyTorch?", + "input": "", + "output": "

To compute the 0-, 1-, and 2-norm you can either use torch.linalg.norm, providing the ord argument (0, 1, and 2 respectively), or call Tensor.norm directly on the tensor, with the p argument. Here are the three variants: manually computed, with torch.linalg.norm, and with Tensor.norm.
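A minimal sketch of the three variants on a small 1-D tensor (assuming PyTorch >= 1.7 for the torch.linalg module):

```python
import torch

x = torch.tensor([3.0, -4.0])

# 1-norm: sum of absolute values
l1_manual = x.abs().sum()
assert torch.allclose(l1_manual, torch.linalg.norm(x, ord=1))
assert torch.allclose(l1_manual, x.norm(p=1))
print(l1_manual)  # tensor(7.)

# 2-norm: square root of the sum of squares
l2_manual = x.pow(2).sum().sqrt()
assert torch.allclose(l2_manual, torch.linalg.norm(x, ord=2))
assert torch.allclose(l2_manual, x.norm(p=2))
print(l2_manual)  # tensor(5.)
```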

\n\n", + "system": "" + }, + { + "instruction": "AttributeError: 'numpy.ndarray' object has no attribute 'unsqueeze'", + "input": "", + "output": "

Make sure image is a tensor in the shape of [batch size, channels, height, width] before entering any Pytorch layers.

\n

Here you have\nimage=np.asarray(image)

\n

I would remove this numpy conversion and keep it as a torch.Tensor.

\n

Or if you really want it to be a numpy array, then right before it enters your generator make sure to use torch.from_numpy() as shown in this documentation on your numpy image before it gets unsqueezed: https://pytorch.org/docs/stable/generated/torch.from_numpy.html

\n

This function is of course an alternative if you don't want to get rid of that original conversion.

\n

Sarthak Jain

\n", + "system": "" + }, + { + "instruction": "Where can I get official PyTorch documentation in pdf form?", + "input": "", + "output": "

I don't think there is an official pdf. The pytorch documentation uses sphinx to generate the web version of the documentation. But sphinx can also generate PDFs.

\n

So you could download the git repo of pytorch, install sphinx, and then generate the PDF yourself using sphinx.

\n

The instructions to build the HTML can be found here, and generating the PDF should be no different.

\n", + "system": "" + }, + { + "instruction": "Related to SubsetRandomSampler", + "input": "", + "output": "

Did you mean train and validation not test and validation?

\n

If so, note that SubsetRandomSampler randomly selects samples from the given indices. Therefore you can just randomly split the indices of each class before putting them in train_indices and val_indices.

\n

Like this:

\n
indexs = [[] for _ in range(len(dataset.classes))]  # you can't use `[[]] * len(dataset.classes)`. Although there might be better ways but I don't know\nfor idx, (_, class_idx) in enumerate(dataset):\n    indexs[class_idx].append(idx)\ntrain_indices, val_indices = [], []\nfor cl_idx in indexs:\n    size = len(cl_idx)\n    split = int(np.floor(validation_split * size))\n    np.random.shuffle(cl_idx)\n    train_indices.extend(cl_idx[split:])\n    val_indices.extend(cl_idx[:split])\ntrain_sampler = SubsetRandomSampler(train_indices)\nvalid_sampler = SubsetRandomSampler(val_indices)\n
\n", + "system": "" + }, + { + "instruction": "Getting negative (inverted) image in Pytorch", + "input": "", + "output": "

Don't struggle a lot; just use 255 - image. It will give you a negative image. Try it.
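A minimal sketch with a toy 8-bit numpy image (the pixel values are made up):

```python
import numpy as np

img = np.array([[0, 100], [200, 255]], dtype=np.uint8)
neg = 255 - img  # invert each pixel
print(neg)
# [[255 155]
#  [ 55   0]]
```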

\n", + "system": "" + }, + { + "instruction": "Using Pytorch in AWS lambda Error- "Unable to import module 'lambda_function': No module named 'torch._C'",", + "input": "", + "output": "

Make sure your AWS Lambda runtime and your container (when using a layer) or your EC2 instance (when installing on EFS from EC2) use the same Python version.
\nPyTorch C extensions are compiled for a specific Python version and discrepancies will result in the error you got.

\n", + "system": "" + }, + { + "instruction": "Understanding convolutional layers shapes", + "input": "", + "output": "

TLDR; 256x256x32 refers to the layer's output shape rather than the layer itself.

\n
\n

There are many articles and posts out there explaining how convolution layers work. I'll try to answer your question without going into too many details, just focusing on shapes.

\n

Assuming you are working with 2D convolution layers, your input and output will both be three-dimensional. That is, without considering the batch which would correspond to a 4th axis... Therefore, the shape of a convolution layer input will be (c, h, w) (or (h, w, c) depending on the framework) where c is the number of channels, h is the height of the input and w the width. You can see it as a c-channel hxw image.\nThe most intuitive example of such input is the input of the first convolution layer of your convolutional neural network: most likely an image of size hxw with c channels, for example c=1 for greyscale or c=3 for RGB...

\n

What's important is that for every pixel of that input, the value on each channel gives additional information about that pixel. Having three channels gives each pixel ('pixel' as in position in the 2D input space) richer content than having a single one, since each pixel is then encoded with three values (three channels) vs. a single value (one channel). This kind of intuition about what channels represent can be extrapolated to any number of channels. As we said, an input can have c channels.

\n

Now going back to convolution layers, here is a good visualization. Imagine a 5x5 1-channel input and a convolution layer consisting of a single 3x3 filter (i.e. kernel_size=3):

\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
inputfilterconvolutionoutput
shape(1, 5, 5)(3, 3)\"\"(3,3)
representation\"\"\"\"\"\"\"\"
\n
\n

Now keep in mind the dimension of the output will depend on the stride and padding of the convolution layer. Here the shape of the output happens to be the same as the shape of the filter, but it does not necessarily have to be! Take an input shape of (1, 6, 6): with the same convolution settings, you would end up with a shape of (4, 4) (which is different from the filter shape (3, 3)).

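The usual size formula makes this concrete (a sketch, assuming stride s and padding p, applied per spatial dimension): out = (in + 2p - k) / s + 1.

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Output size of a convolution along one spatial dimension."""
    return (size + 2 * padding - kernel) // stride + 1

print(conv_out(5, 3))  # 3: a 5x5 input with a 3x3 filter gives a 3x3 output
print(conv_out(6, 3))  # 4: a 6x6 input gives a 4x4 output
```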
\n

Also, something to note is that if the input had more than one channel: shape (c, h, w), the filter would have to have the same number of channels. Each channel of the input would convolve with the corresponding channel of the filter and the results would be summed into a single 2D feature map. So you would have an intermediate output of (c, 3, 3), which after summing over the channels, would leave us with (1, 3, 3)=(3, 3). As a result, considering a convolution with a single filter, however many input channels there are, the output will always have a single channel.

\n

From there what you can do is assemble multiple filters on the same layer. This means you define your layer as having k 3x3 filters, so a layer consists of k filters. For the computation of the output, the idea is simple: one filter gives a (3, 3) feature map, so k filters will give k (3, 3) feature maps. These maps are then stacked into what will be the channel dimension. Ultimately, you're left with an output shape of... (k, 3, 3).

\n

Let k_h and k_w be the kernel height and kernel width respectively, and h', w' the height and width of one output feature map:

\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
inputlayeroutput
shape(c, h, w)(k, c, k_h, k_w)(k, h', w')
descriptionc-channel hxw feature mapk filters of shape (c, k_h, k_w)k-channel h'xw' feature map
\n

\n

Back to your question:

\n
\n

Layers have 3 dimensions like 256x256x32. What is this third number? I assume the first two numbers are the number of nodes but I don't know what the depth is.

\n
\n

Convolution layers' weights have four dimensions, but one of them is imposed by your input channel count. You can choose the size of your convolution kernel and the number of filters; this last number determines the number of channels of the output.

\n

256x256 seems extremely high, and it most likely corresponds to the output shape of the feature map. On the other hand, 32 would be the number of channels of the output, which... as I tried to explain, is the number of filters in that layer. Usually, the dimensions represented in visual diagrams for convolutional networks correspond to the intermediate output shapes, not the layer shapes.

\n

As an example, take the VGG neural network:

\n

\"!\"\nVery Deep Convolutional Networks for Large-Scale Image Recognition

\n

Input shape for VGG is (3, 224, 224), knowing that the result of the first convolution has shape (64, 224, 224) you can determine there is a total of 64 filters in that layer.

\n

As it turns out the kernel size in VGG is 3x3. So, here is a question for you: knowing there is a single bias parameter per filter, how many total parameters are in VGG's first convolution layer?

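If you want to check your answer to the question above, the count follows directly from the shapes: each filter has c * k_h * k_w weights plus one bias, so for 64 filters on a 3-channel input:

```python
in_channels, k, n_filters = 3, 3, 64          # VGG's first layer: 3x3 kernels, 64 filters
params = n_filters * (in_channels * k * k) + n_filters  # weights + one bias per filter
print(params)  # 1792
```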
\n", + "system": "" + }, + { + "instruction": "can I train(optimize) on f1 score loss with pytorch", + "input": "", + "output": "

F1 score is not a smooth function, so it cannot be optimized directly with gradient descent. With gradually changing network parameters, the output probability changes smoothly but the F1 score only changes when the probability crosses the boundary of 0.5. As a result, the gradient of F1 score is zero almost everywhere.

\n

You can use a soft version of the F-measure as described here. The trick is that you basically replace the count of true positives and false positives with a sort of probabilistic version:

\n

\"enter

\n

where o_i is the network output and t_i is the ground truth target probability. Then you continue with computing the F-measure as usual.

\n

These definitions are then used in the formula F1=2TP/(2TP+FP+FN).

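A minimal sketch of that soft formulation in plain Python (assuming o holds predicted probabilities and t the binary targets; in practice you would minimize 1 - soft_f1 as the loss):

```python
def soft_f1(o, t):
    # probabilistic counts: replace hard 0/1 decisions with probabilities
    tp = sum(oi * ti for oi, ti in zip(o, t))        # soft true positives
    fp = sum(oi * (1 - ti) for oi, ti in zip(o, t))  # soft false positives
    fn = sum((1 - oi) * ti for oi, ti in zip(o, t))  # soft false negatives
    return 2 * tp / (2 * tp + fp + fn)

print(soft_f1([1.0, 0.0, 1.0], [1, 0, 1]))  # 1.0 for perfect predictions
print(soft_f1([0.5, 0.5], [1, 0]))          # 0.5: uncertain predictions score in between
```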
\n

Also, you might find this Kaggle tutorial useful.

\n", + "system": "" + }, + { + "instruction": "How to concatenate 2 pytorch models and make the first one non-trainable in PyTorch", + "input": "", + "output": "

One easy way to do that is to detach the output tensor of the model that you don't want to update, so that gradient will not be backpropagated to the connected model. In your case, you can simply detach the x2 tensor just before concatenating with x1 in the forward function of the MyEnsemble model to keep the weights of modelB unchanged.

\n

So, the new forward function should be like following:

\n
def forward(self, x1, x2):\n        x1 = self.modelA(x1)\n        x2 = self.modelB(x2)\n        x = torch.cat((x1, x2.detach()), dim=1)  # Detaching x2, so modelB won't be updated\n        x = self.classifier(F.relu(x))\n        return x\n
\n", + "system": "" + }, + { + "instruction": "Defining named parameters for a customized NN module in Pytorch", + "input": "", + "output": "

You are registering your parameter properly, but you should use nn.Module.named_parameters rather than nn.Module.parameters to access the names. Currently you are attempting to access Parameter.name, which is probably not what you want. The name attribute of Parameter and Tensor do not appear to be documented, but as far as I can tell, they refer to an internal attribute of the _C._TensorBase class and cannot be modified externally.

\n

Every time you assign a Parameter to an attribute of your module it is registered with a name (this occurs in nn.Module.__setattr__ here). The parameter always takes the same name as the attribute itself, so "mu" in this case. To iterate over all the parameters and their associated names use nn.Module.named_parameters. For example,

\n
my_layer = My_Layer()\nfor n, p in my_layer.named_parameters():\n    print('Parameter name:', n)\n    print(p.data)\n    print('requires_grad:', p.requires_grad)\n
\n

which prints

\n
Parameter name: mu\ntensor([[0.],\n        [1.]])\nrequires_grad: True\n
\n", + "system": "" + }, + { + "instruction": "How to create a copy of nn.Sequential in torch?", + "input": "", + "output": "

Well, I just use torch.load and torch.save with io.BytesIO

\n
import io, torch\n\n\n# write to a buffer\nbuffer = io.BytesIO()\ntorch.save(model, buffer) #<--- model is some nn.Module\nprint(buffer.tell()) #<---- number of bytes written\n\ndel model\n\n# read from buffer\nbuffer.seek(0) #<--- must seek to origin every time before reading\nmodel = torch.load(buffer)\ndel buffer\n
\n", + "system": "" + }, + { + "instruction": "What is the machine precision in pytorch and when should one use doubles?", + "input": "", + "output": "

I think you misunderstood how floating points work. There are many good resources (e.g.) about what floating points are, so I am not going into details here.

\n

The key is that floating points are dynamic. They can represent the addition of very large values up to a certain accuracy, or the addition of very small values up to a certain accuracy, but not the addition of a very large value with a very small value. They adjust their ranges on-the-go.

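You can reproduce the same effect with plain Python floats (which are float64), whose machine epsilon is about 2.2e-16:

```python
import sys

eps = sys.float_info.epsilon       # ~2.22e-16 for float64
print(eps)

print(1.0 + 1e-15 != 1.0)          # True: above eps/2, the addition survives rounding
print(1.0 + 1e-16 == 1.0)          # True: below eps/2, the small term is lost
print(1e-16 + 1e-17 != 1e-16)      # True: both values are tiny, but their ratio is fine
```

The last line is the "dynamic" part: precision is relative to the magnitude of the operands, not absolute.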
\n

So this is why your testing result is different than the explanation in "machine precision" -- you are adding two very small values, but that paragraph explicitly said "1+eps". 1 is a much larger value than 1e-6. The following thus will work as expected:

\n
import torch\n\nx1 = torch.tensor(1).float()\neps = torch.tensor(1e-11)\n\nprint(x1.dtype)\nprint(x1)\nprint(x1+eps)\n
\n

Output:

\n
torch.float32\ntensor(1.)\ntensor(1.)\n
\n

The second question -- when should you use double?

\n

Pros - higher accuracy.

\n

Cons - Much slower (hardware is generally configured to favor float), doubled memory usage.

\n

That really depends on your application. Most of the time I would just say no. As I said, you need double when you have very large and very small values coexisting in the network. That should not be happening anyway with proper normalization of data.

\n

(Another reason is exponent overflow, say when you need to represent very, very large/small values beyond 1e-38 and 1e38.)

\n", + "system": "" + }, + { + "instruction": "How to Find Confusion Matrix and Plot it for Image Classifier in PyTorch", + "input": "", + "output": "

You have to save all the predictions and targets of the test set.

\n
predictions, targets = [], []\nfor images, labels in testloader:\n    logps = model(images)\n    output = torch.exp(logps)\n    pred = torch.argmax(output, 1)\n\n    # convert to numpy arrays\n    pred = pred.detach().cpu().numpy()\n    labels = labels.detach().cpu().numpy()\n    \n    for i in range(len(pred)):\n        predictions.append(pred[i])\n        targets.append(labels[i])\n
\n

Now you have all the predictions and actual targets of the test-set stored.\nThe next step is to create the confusion matrix. I'll just give you the function I always use:

\n
def create_confusion_matrix(y_true, y_pred, classes):\n    """ creates and plots a confusion matrix given two list (targets and predictions)\n    :param list y_true: list of all targets (in this case integers bc. they are indices)\n    :param list y_pred: list of all predictions (in this case one-hot encoded)\n    :param dict classes: a dictionary of the countries with they index representation\n    """\n\n    amount_classes = len(classes)\n\n    confusion_matrix = np.zeros((amount_classes, amount_classes))\n    for idx in range(len(y_true)):\n        target = y_true[idx][0]\n\n        output = y_pred[idx]\n        output = list(output).index(max(output))\n\n        confusion_matrix[target][output] += 1\n\n    fig, ax = plt.subplots(1)\n\n    ax.matshow(confusion_matrix)\n    ax.set_xticks(np.arange(len(list(classes.keys()))))\n    ax.set_yticks(np.arange(len(list(classes.keys()))))\n\n    ax.set_xticklabels(list(classes.keys()))\n    ax.set_yticklabels(list(classes.keys()))\n\n    plt.setp(ax.get_xticklabels(), rotation=45, ha="left", rotation_mode="anchor")\n    plt.setp(ax.get_yticklabels(), rotation=45, ha="right", rotation_mode="anchor")\n\n    plt.show()\n
\n

So y_true are all the targets, y_pred all the predictions and classes is a dictionary which maps the labels to the actual class-names, for example:

\n
classes = {"dog": [1, 0], "cat": [0, 1]}\n
\n

Then simply call:

\n
create_confusion_matrix(targets, predictions, classes)\n
\n

Probably you will have to adapt it to your code a little but I hope this works for you. :)

\n", + "system": "" + }, + { + "instruction": "Getting model class labels from torchvision pretrained models", + "input": "", + "output": "

You cannot, unfortunately, get class label names directly from the torchvision models. However, these models are trained on the ImageNet dataset (hence the 1000 classes).

\n

You have to get the class name mapping off the web as far as I know; there's no way to get it off torch. Previously, you could download ImageNet directly using torchvision.datasets.ImageNet, which had a built-in label to class name converter. Now the download link isn't publicly available and requires a manual download, before it can be used by datasets.ImageNet.

\n

So you can simply search for the class to label mapping of ImageNet online, rather than downloading the data or attempting with torch. Try here for example.

\n", + "system": "" + }, + { + "instruction": "Loading model from checkpoint is not working", + "input": "", + "output": "

Posting the answer from comments:

\n
experiment.load_state_dict(checkpoint['state_dict'])\n
\n", + "system": "" + }, + { + "instruction": "How to use pytorch in flask and run it on wsgi_mod for apache2", + "input": "", + "output": "

Add following line to the file sites-available/your-site.conf

\n
WSGIApplicationGroup %{GLOBAL}  \n
\n", + "system": "" + }, + { + "instruction": "Index multidimensional torch tensor by another multidimensional tensor", + "input": "", + "output": "

From what I understand from the comments, you need idx to be the index into the last dimension, where each index in idx corresponds to the same position in x (except for the last dimension). In that case (this is the numpy version, you can convert it to torch):

\n
ind = np.indices(idx.shape)\nind[-1] = idx\nx[tuple(ind)]\n
\n

output:

\n
[[10]\n [43]]\n
\n", + "system": "" + }, + { + "instruction": "ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) throws IndexError: Target 42 is out of bounds", + "input": "", + "output": "

You are using CIFAR-100, which has 100 classes (hence the name). But your model only predicts 10 classes. Naturally, any class index of 10 or above will result in an error.

\n\n

The output size of the last linear layer in the model's classifier needs to be changed to 100:

\n\n
self.classifier = nn.Sequential(\n    nn.Dropout(),\n    nn.Linear(512, 4096),\n    nn.ReLU(True),\n    nn.Dropout(),\n    nn.Linear(4096, 4096),\n    nn.ReLU(True),\n    nn.Linear(4096, 100),\n)\n
\n", + "system": "" + }, + { + "instruction": "how to keep pytorch model in redis cache to access model faster for video streaming?", + "input": "", + "output": "

If you only need to keep model state on RAM, Redis is not necessary. You could instead mount RAM as a virtual disk and store model state there. Check out tmpfs.

\n", + "system": "" + }, + { + "instruction": "Pytorch batch indexing", + "input": "", + "output": "

To select only one element per batch you need to enumerate the batch indices, which can be done easily with torch.arange.

\n\n
output[torch.arange(output.size(0)), index]\n
\n\n

That essentially creates tuples between the enumerated tensor and your index tensor to access the data, which results in indexing output[0, 24], output[1, 10] etc.

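The same pattern works in NumPy, if that helps build intuition (a hypothetical small example):

```python
import numpy as np

output = np.arange(12).reshape(3, 4)   # pretend batch of 3 rows, 4 columns
index = np.array([3, 0, 2])            # one column index per row
picked = output[np.arange(output.shape[0]), index]
print(picked)  # [ 3  4 10] i.e. output[0,3], output[1,0], output[2,2]
```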
\n", + "system": "" + }, + { + "instruction": "Unable to update PyTorch 1.4.0 to 1.5.0 using Conda", + "input": "", + "output": "

Install Validates Constraints

\n\n

The Conda install first checks to see if a constraint is satisfied, rather than blindly trying to install the latest of everything. A better reading of the command:

\n\n
conda install -c pytorch pytorch torchvision\n
\n\n

would be

\n\n
\n

With the pytorch channel prioritized, ensure that the currently activated environment has some version of pytorch and torchvision installed.

\n
\n\n

Your environment already satisfies this constraint, so there is nothing to do.

\n\n

Updating Packages, or Constraints

\n\n

If you want to update a package, then look into the conda update command or, if you know a minimum version you require, then specify it:

\n\n
conda install -c pytorch pytorch[version='>=1.5'] torchvision\n
\n\n

which effectively changes the constraint.

\n\n

Better Practice (Recommended)

\n\n

Best practice though is to simply make a new env when you require changes to packages. Every time one changes the packages in an env, one risks breaking/invalidating existing code.

\n\n
conda create -n pytorch_1_5 -c pytorch pytorch torchvision\n
\n\n

And this will grab the latest possible versions by default.

\n", + "system": "" + }, + { + "instruction": "Pytorch - Indexing a range of multiple Indices?", + "input": "", + "output": "

Without loops of any kind

\n
mytensor = torch.randn(4, 10)\nstart = torch.tensor([1, 3, 1, 3]).unsqueeze(-1)\nend = torch.tensor([3, 5, 7, 4]).unsqueeze(-1)\n
\n

create an array of indices in the same shape as the tensor you want to index

\n
index = torch.arange(10).repeat(4).reshape(4, 10)\n>>> index\n\ntensor([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9],\n        [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],\n        [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],\n        [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]])\n\n
\n

Then simply convert to a boolean mask

\n
gte_start = start <= index\nlt_end = index < end\nmask = gte_start & lt_end\n>>> mask\ntensor([[False,  True,  True,  True, False, False, False, False, False, False],\n        [False, False, False,  True,  True,  True, False, False, False, False],\n        [False,  True,  True,  True,  True,  True,  True,  True, False, False],\n        [False, False, False,  True,  True, False, False, False, False, False]])\n
\n

and apply it

\n
mytensor[mask] = 42.0\n
\n

masks are my go-to for multi-dimensional operations if looping is undesirable (which it often is)

\n", + "system": "" + }, + { + "instruction": "How to revert BERT/XLNet embeddings?", + "input": "", + "output": "

Not sure if it's too late, but I've experimented a bit with your code and it can be reverted. :)

\n
bert = transformers.BertForMaskedLM.from_pretrained("bert-base-cased")\ntok = transformers.BertTokenizer.from_pretrained("bert-base-cased")\n\nsent = torch.tensor(tok.encode("I went to the store the other day, it was very rewarding."))\nprint("Initial sentence:", sent)\nenc = bert.get_input_embeddings()(sent)\ndec = bert.get_output_embeddings()(enc)\n\nprint("Decoded sentence:", tok.decode(dec.softmax(0).argmax(1)))\n
\n

For this, you get the following output:

\n
Initial sentence: tensor([  101,   146,  1355,  1106,  1103,  2984,  1103,  1168,  1285,   117,\n         1122,  1108,  1304, 10703,  1158,   119,   102])  \nDecoded sentence: [CLS] I went to the store the other day, it was very rewarding. [SEP]\n
\n", + "system": "" + }, + { + "instruction": "How do I assign a numpy.int64 to a torch.cuda.FloatTensor?", + "input": "", + "output": "

You have to convert scale to a torch tensor of the same type and device as tmpScale before assignment.

\n\n
tmpScale[:, j] = torch.from_numpy(scale).to(tmpScale)\n
\n\n

Note that this is casting scale from an int64 to a float32, which will likely result in a loss of precision if values in scale have magnitude larger than 2^24 (about 16 million).

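You can see that 2^24 threshold with the standard library's struct module, which round-trips a value through the 32-bit IEEE-754 format (a sketch):

```python
import struct

def to_float32(x):
    # pack as IEEE-754 single precision and unpack again
    return struct.unpack('f', struct.pack('f', x))[0]

print(to_float32(2**24))      # 16777216.0, exactly representable
print(to_float32(2**24 + 1))  # 16777216.0 as well: the +1 is lost to rounding
```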
\n", + "system": "" + }, + { + "instruction": "What does "*" operator do before the tensor size in PyTorch?", + "input": "", + "output": "

* used like this in Python denotes (argument) unpacking. When you prepend it to an iterable (which is what x.size() returns), it unpacks and (here) passes its items to the function as positional arguments. E.g.:

\n\n
def f(a1, a2):\n    print(a1, a2)\n\nf(*[\"Hello\", \"World\"])\n
\n\n

You can check out the Docs links for another example and more detailed description.

\n", + "system": "" + }, + { + "instruction": "Does BertForSequenceClassification classify on the CLS vector?", + "input": "", + "output": "

The short answer: Yes, you are correct. Indeed, they use the CLS token (and only that) for BertForSequenceClassification.

\n\n

Looking at the implementation of the BertPooler reveals that it is using the first hidden state, which corresponds to the [CLS] token.\nI briefly checked one other model (RoBERTa) to see whether this is consistent across models. Here, too, classification only takes place based on the [CLS] token, albeit less obviously (check lines 539-542 here).

\n", + "system": "" + }, + { + "instruction": "Load multiple .npy files (size > 10GB) in pytorch", + "input": "", + "output": "

According to numpy.load, you can set the argument mmap_mode='r' to receive a memory-mapped array numpy.memmap.

\n\n
\n

A memory-mapped array is kept on disk. However, it can be accessed and sliced like any ndarray. Memory mapping is especially useful for accessing small fragments of large files without reading the entire file into memory.

\n
\n\n

I tried implementing a dataset that uses memory maps. First, I generated some data as follows:

\n\n
import numpy as np\n\nfeature_size = 16\ntotal_count = 0\nfor index in range(10):\n    count = 1000 * (index + 1)\n    D = np.random.rand(count, feature_size).astype(np.float32)\n    S = np.random.rand(count, 1).astype(np.float32)\n    np.save(f'data/d{index}.npy', D)\n    np.save(f'data/s{index}.npy', S)\n    total_count += count\n\nprint(\"Dataset size:\", total_count)\nprint(\"Total bytes:\", total_count * (feature_size + 1) * 4, \"bytes\")\n
\n\n

The output was:

\n\n
Dataset size: 55000\nTotal bytes: 3740000 bytes\n
\n\n

Then, my implementation of the dataset is as follows:

\n\n
import numpy as np\nimport torch\nfrom bisect import bisect\nimport os, psutil # used to monitor memory usage\n\nclass BigDataset(torch.utils.data.Dataset):\n    def __init__(self, data_paths, target_paths):\n        self.data_memmaps = [np.load(path, mmap_mode='r') for path in data_paths]\n        self.target_memmaps = [np.load(path, mmap_mode='r') for path in target_paths]\n        self.start_indices = [0] * len(data_paths)\n        self.data_count = 0\n        for index, memmap in enumerate(self.data_memmaps):\n            self.start_indices[index] = self.data_count\n            self.data_count += memmap.shape[0]\n\n    def __len__(self):\n        return self.data_count\n\n    def __getitem__(self, index):\n        memmap_index = bisect(self.start_indices, index) - 1\n        index_in_memmap = index - self.start_indices[memmap_index]\n        data = self.data_memmaps[memmap_index][index_in_memmap]\n        target = self.target_memmaps[memmap_index][index_in_memmap]\n        return index, torch.from_numpy(data), torch.from_numpy(target)\n\n# Test Code\nif __name__ == \"__main__\":\n    data_paths = [f'data/d{index}.npy' for index in range(10)]\n    target_paths = [f'data/s{index}.npy' for index in range(10)]\n\n    process = psutil.Process(os.getpid())\n    memory_before = process.memory_info().rss\n\n    dataset = BigDataset(data_paths, target_paths)\n\n    used_memory = process.memory_info().rss - memory_before\n    print(\"Used memory:\", used_memory, \"bytes\")\n\n    dataset_size = len(dataset)\n    print(\"Dataset size:\", dataset_size)\n    print(\"Samples:\")\n    for sample_index in [0, dataset_size//2, dataset_size-1]:\n        print(dataset[sample_index])\n
\n\n

The output was as follows:

\n\n
Used memory: 299008 bytes\nDataset size: 55000\nSamples:\n(0, tensor([0.5240, 0.2931, 0.9039, 0.9467, 0.8710, 0.2147, 0.4928, 0.8309, 0.7344, 0.2861, 0.1557, 0.7009, 0.1624, 0.8608, 0.5378, 0.4304]), tensor([0.7725]))\n(27500, tensor([0.8109, 0.3794, 0.6377, 0.4825, 0.2959, 0.6325, 0.7278, 0.6856, 0.1037, 0.3443, 0.2469, 0.4317, 0.6690, 0.4543, 0.7007, 0.5733]), tensor([0.7856]))\n(54999, tensor([0.4013, 0.9990, 0.9107, 0.9897, 0.0204, 0.2776, 0.5529, 0.5752, 0.2266, 0.9352, 0.2130, 0.9542, 0.4116, 0.4959, 0.1436, 0.9840]), tensor([0.6342]))\n
\n\n

According to the results, the memory usage is only 10% of the total size. I didn't try my code with very large file sizes, so I don't know how efficient it will be with >200 GB of files. If you can try it and tell me the memory usage with and without memmaps, I would be grateful.

\n", + "system": "" + }, + { + "instruction": "How does one vectorize reinforcement learning environments?", + "input": "", + "output": "

You could write a custom class that iterates over an internal tuple of environments while maintaining the basic Gym API. In practice, there will be some differences, because the underlying environments don't terminate on the same timestep. Consequently, it's easier to combine the standard step and reset functions in\none method called step. Here's an example:

\n\n
class VectorEnv:\n    def __init__(self, make_env_fn, n):\n        self.envs = tuple(make_env_fn() for _ in range(n))\n\n    # Call this only once at the beginning of training (optional):\n    def seed(self, seeds):\n        assert len(self.envs) == len(seeds)\n        return tuple(env.seed(s) for env, s in zip(self.envs, seeds))\n\n    # Call this only once at the beginning of training:\n    def reset(self):\n        return tuple(env.reset() for env in self.envs)\n\n    # Call this on every timestep:\n    def step(self, actions):\n        assert len(self.envs) == len(actions)\n        return_values = []\n        for env, a in zip(self.envs, actions):\n            observation, reward, done, info = env.step(a)\n            if done:\n                observation = env.reset()\n            return_values.append((observation, reward, done, info))\n        return tuple(return_values)\n\n    # Call this at the end of training:\n    def close(self):\n        for env in self.envs:\n            env.close()\n
\n\n

Then you can just instantiate it like this:

\n\n
import gym\nmake_env_fn = lambda: gym.make('CartPole-v0')\nenv = VectorEnv(make_env_fn, n=4)\n
\n\n

You'll have to do a little bookkeeping for your agent to handle the tuple of return values when you call step. This is also why I prefer to pass a function make_env_fn to __init__, because it's easy to add wrappers like gym.wrappers.Monitor that track statistics for each environment individually and automatically.

\n", + "system": "" + }, + { + "instruction": "Creating LSTM model with pytorch", + "input": "", + "output": "

If I correctly understand your question, given a sequence of length 62 you want to predict a sequence of length 8 in a way where the order of your outputs matters (this is the case if you are doing some time series forecasting). In that case, a seq2seq model is a good choice; here is a tutorial for this: link. Globally, you need to implement an encoder and a decoder; here is an example of such an implementation:

\n\n
class EncoderRNN(nn.Module):\n    def __init__(self, input_dim=42, hidden_dim=100):\n        super(EncoderRNN, self).__init__()\n        self.hidden_dim = hidden_dim\n\n        self.lstm = nn.LSTM(input_dim, hidden_dim)\n\n    def forward(self, input, hidden):\n        output, hidden = self.lstm(input, hidden)\n        return output, hidden\n\n    def initHidden(self):\n        return torch.zeros(1, 1, self.hidden_dim, device=device)\n\n\nclass DecoderRNN(nn.Module):\n    def __init__(self, hidden_dim, output_dim):\n        super(DecoderRNN, self).__init__()\n        self.hidden_dim = hidden_dim\n\n        self.lstm = nn.LSTM(hidden_dim, hidden_dim)\n        self.out = nn.Linear(hidden_dim, output_dim)\n        self.softmax = nn.LogSoftmax(dim=1)\n\n    def forward(self, input, hidden):\n        output, hidden = self.lstm(input, hidden)\n        output = self.softmax(self.out(output[0]))\n        return output, hidden\n\n    def initHidden(self):\n        return torch.zeros(1, 1, self.hidden_dim, device=device)\n
\n\n

If the order of your 8 outputs has no importance, then you can simply add a Linear layer with 8 units after the LSTM layer. You can use this code directly in that case:

\n\n
class Net(nn.Module):\n    def __init__(self, hidden_dim=100, input_dim=42, output_size=8):\n        super(Net, self).__init__()\n        self.hidden_dim = hidden_dim\n\n        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)\n\n        # The linear layer that maps from hidden state space to output space\n        self.fc = nn.Linear(hidden_dim, output_size)\n\n    def forward(self, seq):\n        lstm_out, _ = self.lstm(seq)\n        output = self.fc(lstm_out)\n        return output\n
\n", + "system": "" + }, + { + "instruction": "Local fully connected layer - Pytorch", + "input": "", + "output": "

This is currently triaged on the PyTorch issue tracker; in the meantime, you can get a similar behaviour using fold and unfold. See this answer:

\n\n

https://github.com/pytorch/pytorch/issues/499#issuecomment-503962218

\n\n
class LocalLinear(nn.Module):\n    def __init__(self,in_features,local_features,kernel_size,padding=0,stride=1,bias=True):\n        super(LocalLinear, self).__init__()\n        self.kernel_size = kernel_size\n        self.stride = stride\n        self.padding = padding\n\n        fold_num = (in_features+2*padding-self.kernel_size)//self.stride+1\n        self.weight = nn.Parameter(torch.randn(fold_num,kernel_size,local_features))\n        self.bias = nn.Parameter(torch.randn(fold_num,local_features)) if bias else None\n\n    def forward(self, x:torch.Tensor):\n        x = F.pad(x,[self.padding]*2,value=0)\n        x = x.unfold(-1,size=self.kernel_size,step=self.stride)\n        x = torch.matmul(x.unsqueeze(2),self.weight).squeeze(2)+self.bias\n        return x\n
\n", + "system": "" + }, + { + "instruction": "Conda package install [Errno 13] Permission denied while installing conda-forge::protobuf-3.8.0", + "input": "", + "output": "

I faced a similar error when I was trying to update/uninstall a Python package (matplotlib) from my environment. The reason turned out to be that another Python application was running with a matplotlib plot window open, so since a process was accessing the package, it couldn't be deleted.\nWhen I closed all the Python programs, I was able to upgrade the package without the permission error.

\n

So, moral of the story: if you are trying to update/uninstall a package, make sure none of your Python scripts are running.

\n", + "system": "" + }, + { + "instruction": "How to take a transpose for each matrix in a batch in Pytorch?", + "input": "", + "output": "

I will drop some benchmarks here for the sake of performance. Using the same tensor proposed in the OP's answer.

\n\n
In[2]: import torch\nIn[3]: x = torch.randn(2, 3, 5)\nIn[4]: x.size()\nOut[4]: torch.Size([2, 3, 5])\nIn[5]: %timeit x.permute(1, 0, 2)\n1.03 \u00b5s \u00b1 41.7 ns per loop (mean \u00b1 std. dev. of 7 runs, 1000000 loops each)\nIn[6]: %timeit torch.transpose(x, 0, 1)\n892 ns \u00b1 9.61 ns per loop (mean \u00b1 std. dev. of 7 runs, 1000000 loops each)\nIn[7]: torch.transpose(x, 0, 1).equal(x.permute(1, 0, 2))\nOut[7]: True\n
\n\n

It is clear that torch.transpose is faster, so it is advisable to use it when possible.

\n", + "system": "" + }, + { + "instruction": "Model() got multiple values for argument 'nr_class' - SpaCy multi-classification model (BERT integration)", + "input": "", + "output": "

This is a regression in the most recent version we released of spacy-pytorch-transformers. Sorry about this!

\n\n

The root cause is, this is another case of the evils of **kwargs. I'm looking forward to refining the spaCy API to prevent these issues in future.

\n\n

You can see the offending line here: https://github.com/explosion/spacy-pytorch-transformers/blob/c1def95e1df783c69bff9bc8b40b5461800e9231/spacy_pytorch_transformers/pipeline/textcat.py#L71 . We provide the nr_class positional argument, which overlaps with the explicit argument you passed in during the config.

\n\n

In order to work around the problem, you can simply remove the nr_class key from the config dict you're passing into spacy.create_pipe().

\n", + "system": "" + }, + { + "instruction": "Translating Pytorch program into Keras: different results", + "input": "", + "output": "

Two major mistakes that I see (likely related):

\n\n
    \n
  1. The last convolutional layer in the original model does not have an activation function, while your translation uses relu.
  2. \n
  3. The original model uses CrossEntropyLoss as loss function, while your model uses categorical_crossentropy with logits=False (a default argument). Without mathematical background the difference is tricky to explain, but in short: CrossEntropyLoss has a softmax built in, that's why the model doesn't have one on the last layer. To do the same in keras, use k.categorical_crossentropy(..., logits=True). \"logits\" means the input values are expected not to be \"softmaxed\", i.e. all values can be arbitrary. Currently, your loss function expects the output values to be \"softmaxed\", i.e. all values must be between 0 and 1 (and sum up to 1).
  4. \n
\n\n

Update:

\n\n

One other mistake, likely a huge one: In Keras, you calculate the output once in the beginning and never change it from there on. Then you train your model to fit on this initially generated output.

\n\n

In the original pytorch code, target (which is the variable being trained on) gets updated in every training loop. \nSo, you cannot use Keras' fit method which is designed for doing the entire training for you (given fixed training data). You will have to replicate the training loop manually, just as it is done in the pytorch code. I'm not sure if this is easily doable with the API Keras provides. train_on_batch is one method you surely will need in your manual loop. You will have to do some more work, I'm afraid...

\n", + "system": "" + }, + { + "instruction": "Why do we pass nn.Module as an argument to class definition for neural nets?", + "input": "", + "output": "

This line

\n\n
class Generator(nn.Module):\n
\n\n

simply means that the Generator class inherits from the nn.Module class; it is not an argument.

\n\n

However, the dunder init method:

\n\n
def __init__(self, input_size, hidden_size, output_size, f):\n
\n\n

has self as its first parameter, which is why you may consider it an argument.

\n\n

Well, this is Python's class instance self. There were debates over whether it should stay or go, but Guido explained in his blog why it has to stay.
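As a plain-Python illustration (independent of PyTorch), a bound method call is just sugar for passing the instance as the first argument:

```python
class Greeter:
    def __init__(self, name):
        self.name = name

    def greet(self):
        # `self` is just the explicit first parameter
        return "hello, " + self.name

g = Greeter("world")
# the two calls below are equivalent: g.greet() passes g as `self`
print(g.greet())            # hello, world
print(Greeter.greet(g))     # hello, world
```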

\n", + "system": "" + }, + { + "instruction": "Validation loss not moving with MLP in Regression", + "input": "", + "output": "

From the code you provided, it is tough to say why the validation loss is constant but I see several problems in your code.

\n\n
    \n
  1. Why do you validate for each training mini-batch? Instead, you should validate your model after you do the training for one complete epoch (iterating over your full dataset once). So, the skeleton should be like:
  2. \n
\n\n
for _e in range(num_epochs):\n    for batch in tqdm(train_dataloader):\n        # training code\n\n    with torch.no_grad():\n        for batch in tqdm(valid_dataloader):\n            # validation code\n\n    # plot your loss values\n
\n\n

Also, you can plot after each epoch, not after each mini-batch training.

\n\n
    \n
  1. Did you check whether the model parameters are getting updated after optimizer.step() during training? How many validation examples do you have? Why don't you use mini-batch computation during validation?

  2. \n
  3. Why do you do: optimizer.zero_grad() during validation? It doesn't make sense because, during validation, you are not going to do anything related to optimization.

  4. \n
  5. You should use model.eval() during validation to turn off the dropouts. See PyTorch documentation to learn about .train() and .eval() methods.

  6. \n
  7. The learning rate is set to 1e-8, isn't it too small? Why don't you use the default learning rate for Adam (1e-3)?

  8. \n
\n\n

The following requires some reasoning.

\n\n
    \n
  1. Why are you using such a large batch size? What is your training dataset size?

  2. \n
  3. You can directly plot the MSELoss, instead of taking the square root.

  4. \n
\n\n

My suggestion would be: use some existing resources for MLP in PyTorch. Don't do it from scratch if you do not have sufficient knowledge at this point. It would make you suffer a lot.

\n", + "system": "" + }, + { + "instruction": "Torchtext AttributeError: 'Example' object has no attribute 'text_content'", + "input": "", + "output": "

This problem arises when the fields are not passed in the same order as they appear in the csv/tsv file. The order must be the same. Also check that no extra or missing fields are specified compared to the csv/tsv file.

\n", + "system": "" + }, + { + "instruction": "pytorch dataloader stucked if using opencv resize method", + "input": "", + "output": "

I had a very similar problem and that's how I solved it:

\n\n

When you import cv2, call cv2.setNumThreads(0),\nand then you can set num_workers&gt;0 in the dataloader in PyTorch.

\n\n

Seems like OpenCV tries to multithread and somewhere something goes into a deadlock.

\n\n

Hope it helps.

\n", + "system": "" + }, + { + "instruction": "How do we analyse a loss vs epochs graph?", + "input": "", + "output": "

The first conclusion is obviously that the first model performs worse than the second, and that is generally true, as long as you use the same data for validation. In the case where you train a model with different splits, that might not necessarily be the case.

\n\n

Furthermore, to answer your question regarding overfitting/underfitting:\nA typical graph for overfitting looks like this:
\n\"Learning

\n\n

So, in your case, you clearly just reach convergence, but don't actually overfit! (This is great news!) On the other hand, you could ask yourself whether you could achieve even better results. I am assuming that you are decaying your learning rate, which lets you pan out at some form of plateau. If that is the case, try reducing the learning rate less at first, and see if you can reduce your loss even further.
\nMoreover, if you still see a very long plateau, you can also consider stopping early, since you effectively gain no more improvements. Depending on your framework, there are implementations of this (for example, Keras has callbacks for early stopping, which is generally tied to the validation/testing error). If your validation error increases, similar to the image, you should consider using the lowest validation error as a point for early stopping. One way I like to do this is to checkpoint the model every now and then, but only if the validation error improved.
\nAnother inference you can make concerns the learning rate in general: if it is too large, your graph will likely be very \"jumpy/jagged\", whereas a very low learning rate will show only a small decline in the error, and not so much exponentially decaying behavior.
\nYou can see a weak form of this by comparing the steepness of the decline in the first few epochs in your two examples, where the first one (with the lower learning rate) takes longer to converge.
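A minimal, framework-agnostic sketch of the checkpoint-on-improvement idea mentioned above (the validation losses here are made-up numbers for illustration):

```python
# hypothetical validation losses per epoch
val_losses = [0.90, 0.70, 0.80, 0.60, 0.65, 0.64]

best_loss = float("inf")
best_epoch = None
patience, since_best = 2, 0

for epoch, loss in enumerate(val_losses):
    if loss < best_loss:
        best_loss, best_epoch = loss, epoch
        since_best = 0
        # here you would save a model checkpoint
    else:
        since_best += 1
        if since_best >= patience:
            break  # early stopping: no improvement for `patience` epochs

print(best_epoch, best_loss)  # 3 0.6
```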

\n\n

Lastly, if your training and test error are very far apart (as in the first case), you might ask yourself whether you are actually accurately describing or modeling the problem; in some instances, you might realize that there is some problem in the (data) distribution that you might have overlooked. Since the second graph is way better, though, I doubt this is the case in your problem.

\n", + "system": "" + }, + { + "instruction": "Does tensorflow have the function similar to pytorch's "masked_fill_"", + "input": "", + "output": "

I used a mathematical workaround instead. It's valid and much faster.

\n\n
def mask_fill_inf(matrix, mask):\n    negmask = 1 - mask\n    num = 3.4 * math.pow(10, 38)\n    return (matrix * mask) + (-((negmask * num + num) - num))\n
\n\n

Does anyone have a better method?

\n", + "system": "" + }, + { + "instruction": "How does pytorch compute the gradients for a simple linear regression model?", + "input": "", + "output": "

PyTorch has this concept of tensors and variables. When you use nn.Linear, the function creates 2 variables, namely W and b. In PyTorch, a variable is a wrapper that encapsulates a tensor, its gradient, and information about the function that created it. You can directly access the gradients by

\n\n
w.grad\n
\n\n

If you try it before calling loss.backward(), you get None. Once you call loss.backward(), it will contain the gradients. Now you can update these gradients manually with the simple step below.

\n\n
w.data -= learning_rate * w.grad.data\n
\n\n

When you have a complex network, the above simple step can grow complex. So optimisers like SGD and Adam take care of this. When you create the object for these optimisers, we pass in the parameters of our model. nn.Module provides a parameters() function which returns all the learnable parameters to the optimiser. This can be done using the step below.

\n\n
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)\n
\n", + "system": "" + }, + { + "instruction": "How does one make sure that everything is running on GPU automatically in Pytorch?", + "input": "", + "output": "

I think that there is no such thing.

\n\n

From what I've seen, people usually create classes that:
\ni) Inherit from nn.Module.
\nii) Have an attribute describing model parameters (e.g. self.opt);
\niii) Set each variable/parameters as attributes (e.g. self.my_var)
\niv) Then call .cuda() on it if a kind of -use_gpu parameter is set.

\n\n

I also use a maybe_cuda(variable) function inside my classes to make creating variables easier (pass a Variable, return variable.cuda() if opt.cuda is True).

\n\n

In fact, I did something like this (may not be perfect, but found it practical):

\n\n
class MyModule(nn.Module):\n    def __init__(self, opt):\n        super(MyModule, self).__init__()\n        self.opt = opt\n\n    def maybe_cuda(self, variable):\n        if self.opt.cuda:\n            return variable.cuda()\n        return variable\n\nclass Model(MyModule):\n    def __init__(self, opt, other_arg):\n        super(Model, self).__init__(opt)\n\n        self.linear = nn.Linear(opt.size1, opt.size2)\n        self.W_out = nn.Parameter(_____)\n\n    def forward(self, ____):\n        # create a variable, put it on GPU if possible\n        my_var = self.maybe_cuda(Variable(torch.zeros(___)))\n
\n", + "system": "" + }, + { + "instruction": "Correct way to swap PyTorch tensors without copying", + "input": "", + "output": "

First, you should know that every tensor in PyTorch has an underlying storage that holds its actual data. You can use .untyped_storage() to retrieve the underlying storage of a tensor. Then, you can use .set_() to replace the tensor's underlying storage with a new storage.

\n
with torch.no_grad():\n\n    x_storage = x.untyped_storage()\n    y_storage = y.untyped_storage()\n\n    x.set_(y_storage, y.storage_offset(), y.size(), y.stride())\n    y.set_(x_storage, x.storage_offset(), x.size(), x.stride())\n
\n

Note: the swapping process does not affect references to the tensors themselves. Also, swapping the storage does not interfere with reference counting/GC, since PyTorch handles reference counting and garbage collection automatically. Another reason is that you are not creating new tensors or modifying the reference counting directly.

\n

Update

\n

As mentioned in the comments, the swapping will happen within an optimizer class, and this can potentially affect the autograd graph. When you swap storage objects between tensors, the data that the autograd graph references changes. This can lead to inconsistencies between the calculated gradients and the actual computations performed on the swapped tensors. In brief, swapping data directly within an optimizer can be problematic for the autograd graph. Therefore, I do not recommend swapping tensors directly within an optimizer class.

\n

The only solution is to use temporary tensors for safe swapping.

\n", + "system": "" + }, + { + "instruction": "Is it possible to save the training/validation loss in a list during training in HuggingFace?", + "input": "", + "output": "

One way of proceeding might be the following: you can access training and evaluation losses via the trainer.state.log_history object after training. An example below (accuracy and f1 might be ignored as they derive from the specific compute_metrics function passed as parameter to the trainer instance):

\n

\"enter

\n

It is a list of dicts which contains some logged values per logged step; among the keys of the different dictionaries you should find 'loss' and 'eval_loss', whose values you might retrieve as follows (analogously for validation losses).

\n
train_loss = []\nfor elem in trainer.state.log_history:\n    if 'loss' in elem.keys():\n        train_loss.append(elem['loss'])\n
\n

The loss is computed via the .compute_loss() method of the Trainer class, which you might override for custom behaviour as described at https://huggingface.co/docs/transformers/v4.27.1/en/main_classes/trainer#trainer.

\n", + "system": "" + }, + { + "instruction": "Custom Operations on Multi-dimensional Tensors", + "input": "", + "output": "

You can construct a helper tensor containing the resulting values sum(o) + o' + p':

\n
&gt;&gt;&gt; v = o.sum(2, True) + o_prime[...,None] + p_prime[...,None]\ntensor([[[  7.2000],\n         [ 25.4000],\n         [ 43.6000],\n         [ 61.8000],\n         [ 80.0000]],\n\n        [[ 98.2000],\n         [116.4000],\n         [134.6000],\n         [152.8000],\n         [169.2200]]])\n
\n

Then you can assemble a mask for the final tensor via broadcasting:

\n
>>> eq = o[:,None] == p[:,:,None]\n
\n

Ensuring all three elements on the last dimension match:

\n
>>> eq.all(dim=-1)\ntensor([[[False, False, False,  True, False],\n         [False,  True, False, False, False],\n         [False, False,  True, False, False],\n         [ True, False, False, False, False],\n         [False, False, False, False,  True]],\n\n        [[False, False, False, False,  True],\n         [ True, False, False, False, False],\n         [False,  True, False, False, False],\n         [False, False,  True, False, False],\n         [False, False, False,  True, False]]])\n
\n

Finally, you can simply multiply both tensors and auto-broadcasting will handle the rest:

\n
>>> R = eq.all(dim=-1) * v\ntensor([[[  0.0000,   0.0000,   0.0000,   7.2000,   0.0000],\n         [  0.0000,  25.4000,   0.0000,   0.0000,   0.0000],\n         [  0.0000,   0.0000,  43.6000,   0.0000,   0.0000],\n         [ 61.8000,   0.0000,   0.0000,   0.0000,   0.0000],\n         [  0.0000,   0.0000,   0.0000,   0.0000,  80.0000]],\n\n        [[  0.0000,   0.0000,   0.0000,   0.0000,  98.2000],\n         [116.4000,   0.0000,   0.0000,   0.0000,   0.0000],\n         [  0.0000, 134.6000,   0.0000,   0.0000,   0.0000],\n         [  0.0000,   0.0000, 152.8000,   0.0000,   0.0000],\n         [  0.0000,   0.0000,   0.0000, 169.2200,   0.0000]]])\n
\n
\n
\n

I wanted to know how do you visualize such problems and then come up with a solution? Any pointers would be beneficial.

\n
\n

I was going to say it depends a lot on the problem at hand, but this wouldn't get you very far! I believe having a toolbox of functions/tricks and scenarios you've come across (i.e. experience) helps greatly. This is true for problem-solving more generally. I can try to explain how I came up with the solution and my thought process behind it. The initial idea for this problem is to perform an outer equality check between o and p. By that I mean we are trying to construct a structure which evaluates every (o[i], p[j]) relation batch-wise.

\n

Turns out this is rather a common operation usually seen as an outer summation or outer product. In fact, this type of operation is also applicable to the equality operator: here we are looking to construct a 5x5 matrix of o[i] == p[j]. Keeping in mind throughout the process we have a leading dimension containing three elements, but that doesn't change the process. We just need to account for it by checking that all three checks are indeed True, hence the all(dim=-1) call.
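The same "outer" pattern can be illustrated with plain Python lists (ignoring the leading dimension of three elements for simplicity):

```python
o = [3, 1, 2]
p = [1, 2, 3]

# outer equality: a len(p) x len(o) matrix of p[i] == o[j]
eq = [[pi == oj for oj in o] for pi in p]
print(eq)
# [[False, True, False], [False, False, True], [True, False, False]]
```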

\n

Since the desired result doesn't depend on the column position inside the mask, i.e. result = sum(o) + o' + p' whatever the column index, we can just precompute the results for each row beforehand. The final operation is simply multiplying the mask (which of course only contains ones at the desired locations) with the vector of values. Intuitively, all columns get multiplied by the same value, but only the 1s allow the value to be set.

\n

But most importantly, we have to acknowledge that your figure did all the hard work. This is in my opinion the first step before starting with any reasoning or implementation. So to summarize, I would suggest:

\n
    \n
  1. start with a minimal example reducing the number of variables while still making it relevant for the actual problem it is supposed to solve.

    \n
  2. \n
  3. think about how you can solve it step by step, trying to get closer and closer to the solution, iterating and experimenting within this minimal setup.

    \n
  4. \n
\n

Most importantly it comes with practice, with time you will find it easier to reason about your problem and use the right tools to manipulate your data.

\n", + "system": "" + }, + { + "instruction": "Unable to build vocab for a torchtext text classification", + "input": "", + "output": "

The very small vocabulary size arises because, under the hood, build_vocab_from_iterator uses a Counter from the collections standard library, and more specifically its update function. This function is used in a way that assumes that what you are passing to build_vocab_from_iterator is an iterable wrapping iterables of words/tokens.

\n

This means that in its current state, because strings can be iterated upon, your code will create a vocab able to encode all the letters, not the words, in your dataset, hence the very small vocab size.
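You can see this behaviour directly with collections.Counter, which is what build_vocab_from_iterator relies on:

```python
from collections import Counter

words = Counter()
words.update("hello world".split())  # an iterable of words -> counts words
print(words["hello"])                # 1

chars = Counter()
chars.update("hello world")          # a string is an iterable of characters
print(chars["l"])                    # 3
```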

\n

I do not know if that is intended by Python/Pytorch devs, but because of this you need to wrap your simple iterator in a list, for example like this :

\n
vocab = build_vocab_from_iterator([yield_tokens(train_iter)], specials=["<unk>"])\n
\n

Note: if your vocab gives only zeros, it is not because it is reading from the label field; it is just returning the integer corresponding to an unknown token, since any word longer than a single character will be unknown to it.

\n

Hope this helps!

\n", + "system": "" + }, + { + "instruction": "CUDA error: invalid device ordinal when using python 3.9", + "input": "", + "output": "
"cuda:2"\n
\n

selects the third GPU in your system. If you don't have 3 GPUs (at least) in your system, you'll get this error.

\n

Assuming you have at least 1 properly installed and set up CUDA GPU available, try:

\n
"cuda:0"\n
\n", + "system": "" + }, + { + "instruction": "Does using queue in a multiprocessing Process use pickling?", + "input": "", + "output": "

Yes, multiprocessing's queues do use pickle internally. This can be seen in multiprocessing/queues.py of the CPython implementation. In fact, AFAIK CPython uses pickle for transferring any object between interpreter processes. The only way to avoid this is to use shared memory, but it introduces strong limitations and basically cannot be used for arbitrary object types.
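You can check picklability directly with the pickle module; anything you put on a multiprocessing queue must survive this round trip (the helper function here is just for illustration):

```python
import pickle

def double(x):
    # top-level functions are picklable (by reference to their module)
    return x * 2

restored = pickle.loads(pickle.dumps(double))
print(restored(21))  # 42

# lambdas, by contrast, cannot be pickled
try:
    pickle.dumps(lambda x: x * 2)
    lambda_picklable = True
except Exception:
    lambda_picklable = False
print(lambda_picklable)  # False
```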

\n

Multithreading is limited by the Global Interpreter Lock (GIL) which basically prevent any parallel speed up except of operations releasing the GIL (eg. some Numpy functions) and IO-based ones.

\n

Python (and especially CPython) is not the best language for parallel computing (nor for high performance). It was not designed with that in mind, and this is nowadays a pretty strong limitation given the recent sharp increase in the number of cores per processor.

\n", + "system": "" + }, + { + "instruction": "Create a PyTorch tensor of sequences which excludes specified value", + "input": "", + "output": "

We can construct a tensor with the desired sequences and index with tensor a.

\n
import torch\n\nn = 3\na = torch.Tensor([0, 1, 2, 1, 2, 0]) # using torch.tensor is recommended\n\ndef exclude_gather(a, n):\n    sequences = torch.nonzero(torch.arange(n) != torch.arange(n)[:,None], as_tuple=True)[1].reshape(-1, n-1)\n    return sequences[a.long()]\n\nexclude_gather(a, n)\n
\n

Output

\n
tensor([[1, 2],\n        [0, 2],\n        [0, 1],\n        [0, 2],\n        [0, 1],\n        [1, 2]])\n
\n

We can add a batch dimension with functorch.vmap

\n
from functorch import vmap\n\nn = 4\nb = torch.Tensor([[0, 1, 2, 1, 3, 0],[0, 3, 1, 0, 2, 1]])\n\nvmap(exclude_gather, in_dims=(0, None))(b, n)\n
\n

Output

\n
tensor([[[1, 2, 3],\n         [0, 2, 3],\n         [0, 1, 3],\n         [0, 2, 3],\n         [0, 1, 2],\n         [1, 2, 3]],\n\n        [[1, 2, 3],\n         [0, 1, 2],\n         [0, 2, 3],\n         [1, 2, 3],\n         [0, 1, 3],\n         [0, 2, 3]]])\n
\n", + "system": "" + }, + { + "instruction": "Understand Adam optimizer intuitively", + "input": "", + "output": "

Your code implementation is almost there, but one difference you should note between your implementation and the algorithm is that you are erroneously accumulating the bias correction term m/(1-beta_m**i) with the variable m. You should assign a separate variable m_hat for bias correction.

\n

The same applies to v: assign the bias corrected value to another variable like v_hat.

\n

Doing this will avoid including the bias correction in the accumulation of m and v.
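A quick pure-Python check of why this matters: with a constant gradient g, the corrected estimate m_hat recovers g exactly from the first step, while the raw accumulator m starts heavily biased toward zero:

```python
beta_m = 0.9
g = 2.0   # constant gradient, for illustration only
m = 0.0

for i in range(1, 6):
    m = beta_m * m + (1 - beta_m) * g
    m_hat = m / (1 - beta_m ** i)  # bias-corrected, NOT fed back into m
    print(i, round(m, 4), round(m_hat, 4))

# at i=1: m = 0.2 (biased low), while m_hat = 2.0 (unbiased)
```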

\n

Your code can stay the same, but change the calculation of the bias corrected values along with the list appends. If you do this, you will get the result you're after.

\n
for i in range(1,num+1):\n    g = g_list[i-1]\n\n    # calculate m and v\n    m = beta_m*m + (1 - beta_m)*g\n    v = beta_v*v + (1 - beta_v)*(g**2)\n\n    # assign bias corrected values to m_hat and v_hat respectively\n    m_hat = m/(1-beta_m**i)\n    v_hat = v/(1-beta_v**i)\n\n    # append to lists\n    m_list.append(m_hat)\n    v_list.append(np.sqrt(v_hat))\n
\n", + "system": "" + }, + { + "instruction": "What is the type annotation for return value of __getitem__?", + "input": "", + "output": "

For this you can use Union as the actual annotation of the function, and overload to let your type checker know that a list is only returned when a slice is given, and an int only when an int is given.

\n
from typing import overload, Union\n\nclass MyExample:\n    def __init__(self, some_list: list[int]):\n        self.some_list = some_list\n\n    @overload\n    def __getitem__(self, index: int) -> int: ...\n\n    @overload\n    def __getitem__(self, index: slice) -> list[int]: ...\n\n    def __getitem__(self, index: Union[slice, int]) -> Union[list[int], int]: # Or Slice[int] if that's a thing\n        return self.some_list[index]\n
\n

If your example is not only for lists of ints but for general lists, you can switch to generic types.
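For completeness, here is a self-contained copy of the class with a quick runtime check (the overloads only affect static type checking; at runtime the last definition is the one that executes):

```python
from typing import overload, Union

class MyExample:
    def __init__(self, some_list: list[int]):
        self.some_list = some_list

    @overload
    def __getitem__(self, index: int) -> int: ...

    @overload
    def __getitem__(self, index: slice) -> list[int]: ...

    def __getitem__(self, index: Union[slice, int]) -> Union[list[int], int]:
        return self.some_list[index]

ex = MyExample([10, 20, 30])
print(ex[0])     # 10        -> a type checker infers int
print(ex[0:2])   # [10, 20]  -> a type checker infers list[int]
```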

\n\n", + "system": "" + }, + { + "instruction": "The host has cuda and GPU installed, but pytorch (WSL2) cannot find it", + "input": "", + "output": "

The tricky thing with WSL is that you can have multiple versions of Python: the distribution's version, the Windows version, Anaconda's, and many others. So you need to ensure you are using the right one.

\n

If you are using Ubuntu they have recommended steps for setting up CUDA. It is actually quite easy. Check here - https://ubuntu.com/tutorials/enabling-gpu-acceleration-on-ubuntu-on-wsl2-with-the-nvidia-cuda-platform#1-overview

\n

But basically the steps are as follows

\n
sudo apt-key del 7fa2af80\nwget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin\nsudo mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600\nsudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/3bf863cc.pub\nsudo add-apt-repository 'deb https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/ /'\nsudo apt-get update\nsudo apt-get -y install cuda\n
\n

Basically you do not want to use the default cuda version provided by your distribution. It needs to match what Windows has installed.

\n

Now you could compile their test application to see if CUDA is working like so.

\n
git clone https://github.com/nvidia/cuda-samples\ncd cuda-samples/Samples/1_Utilities/deviceQuery\nmake\n./deviceQuery\n
\n

I should also add that using the PyTorch website to download their latest stable version works too. Go to their website rather than copying this, as it will probably be outdated by the time you read this post.\npip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

\n", + "system": "" + }, + { + "instruction": "Is there any thing like 'TensorList\u2019 in pytorch?", + "input": "", + "output": "

What are these tensors? Are these tensors parameters of your nn.Module? If so, you need to use the proper container.
\nFor example, use nn.ParameterList. This way, calling your module's .parameters() method will yield these tensors as well. Otherwise you'll get errors like this one.

\n", + "system": "" + }, + { + "instruction": "Set Pytorch to run on AMD GPU", + "input": "", + "output": "

ROCm translates CUDA code, so torch.cuda.is_available() should work.

\n", + "system": "" + }, + { + "instruction": "Pytorch Conv1D gives different size to ConvTranspose1d", + "input": "", + "output": "

1. Regarding input and output shapes:
\npytorch's doc has the explicit formula relating input and output sizes.\nFor convolution:\n\"enter

\n

Similarly for pooling:\n\"enter

\n

For transposed convolution:\n\"enter

\n

And for unpooling:\n\"enter

\n

Make sure your padding and output_padding values add up to the proper output shape.

\n

2. Is there a better way?
\nTransposed convolution has its faults, as you already noticed. It also tends to produce "checkerboard artifacts".

\n

One solution is to use pixelshuffle: that is, predict for each low-res point twice the number of channels, and then split them into two points with the desired number of features.

\n

Alternatively, you can interpolate using a fixed method from the low resolution to the higher one. Apply regular convolutions to the upsampled vectors.\nIf you choose this path, you might consider using ResizeRight instead of pytorch's interpolate - it has better handling of edge cases.

\n", + "system": "" + }, + { + "instruction": "Whence MaskRCNN's segm IoU metrics = 0?", + "input": "", + "output": "

Your input image size is (850, 600) (H, W), and for this given image you have 4 objects, not 850 objects with (600, 600) masks. Your masks tensor should therefore have dimensions (number of objects, 850, 600), so your input should be:

\n
image   -)  shape: torch.Size([3, 850, 600]),   dtype: torch.float32, min: tensor(0.0431),               max: tensor(0.9137)\nboxes   -)  shape: torch.Size([4, 4]),          dtype: torch.float32, min: tensor(47.),                  max: tensor(807.)\nmasks   -)  shape: torch.Size([4, 850, 600]), dtype: torch.uint8,   min: tensor(0, dtype=torch.uint8), max: tensor(1, dtype=torch.uint8)\nareas   -)  shape: torch.Size([4]),             dtype: torch.float32, min: tensor(1479.),                max: tensor(8014.)\nlabels  -)  shape: torch.Size([4]),             dtype: torch.int64,   min: tensor(1),                    max: tensor(1)\niscrowd -)  shape: torch.Size([4]),             dtype: torch.int64,   min: tensor(0),                    max: tensor(0)\n
\n

How to fix it\nBecause you are trying to solve an instance segmentation problem, stack your (850, 600) masks so that they yield a tensor of shape (number of masks, 850, 600).

\n", + "system": "" + }, + { + "instruction": "Understanding the TimeSeriesDataSet in pytorch forecasting", + "input": "", + "output": "

Concerning validation dataset:

\n

The training dataset is all data except the last max_prediction_length data points of each time series (each time series corresponds to the data points sharing the same group_ids).\nThose last data points are filtered out by the training cutoff (the cutoff is the same for each time series because they are of the same size).

\n

The validation data is the last max_prediction_length data points used as targets for each time series (which means the validation data consists of the last encoder_length + max_prediction_length points of each time series).\nThis is done by using the parameter min_prediction_idx=training_cutoff + 1, which makes the dataset take only data with a time_index of at least training_cutoff + 1 (the minimal decoder index is always &gt;= min_prediction_idx).

\n", + "system": "" + }, + { + "instruction": "how to convert a csv file to character level one-hot-encode matrices?", + "input": "", + "output": "

I think it would be best to keep pd.DataFrame as is and do the transformation "on the fly" within PyTorch Dataset.

\n

First, dummy data similar to yours:

\n
df = pd.DataFrame(\n    {\n        "ID": [1, 2, 3],\n        "Source": ["Serbia", "Poland", "Germany"],\n        "Sequence": ["ABCDE", "EBCDA", "AAD"],\n    }\n)\n
\n

After that, we can create torch.utils.data.Dataset class (example alphabet is shown, you might change it to anything you want):

\n
class Dataset(torch.utils.data.Dataset):\n    def __init__(self, df: pd.DataFrame):\n        self.df = df\n        # Change alphabet to anything you need\n        alphabet = [&quot;A&quot;, &quot;B&quot;, &quot;C&quot;, &quot;D&quot;, &quot;E&quot;, &quot;F&quot;]\n        self.mapping = dict((c, i) for i, c in enumerate(alphabet))\n\n    def __getitem__(self, index):\n        sample = self.df.iloc[index]\n        sequence = sample[&quot;Sequence&quot;]\n        target = torch.nn.functional.one_hot(\n            torch.tensor([self.mapping[letter] for letter in sequence]),\n            num_classes=len(self.mapping),\n        )\n        return sample.drop(&quot;Sequence&quot;), target\n\n    def __len__(self):\n        return len(self.df)\n
\n

This code simply transforms indices of letters to their one-hot encoding via torch.nn.functional.one_hot function.
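The index-to-one-hot step itself is simple enough to sketch without torch (using the same example alphabet as above):

```python
alphabet = ["A", "B", "C", "D", "E", "F"]
mapping = {c: i for i, c in enumerate(alphabet)}

def one_hot(sequence):
    # one row per character, with a single 1 at the character's index
    return [
        [1 if j == mapping[ch] else 0 for j in range(len(alphabet))]
        for ch in sequence
    ]

print(one_hot("AB"))
# [[1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0]]
```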

\n

Usage is pretty simple:

\n
ds = Dataset(df)\nds[0]\n
\n

which returns the following targets (ID and Source omitted; you might want to change how your sample is created though, as I'm not sure about the desired format and only focused on the one-hot-encoded targets):

\n
tensor([ [1., 0., 0., 0., 0., 0.],\n         [0., 1., 0., 0., 0., 0.],\n         [0., 0., 1., 0., 0., 0.],\n         [0., 0., 0., 1., 0., 0.],\n         [0., 0., 0., 0., 1., 0.]]))\n
\n", + "system": "" + }, + { + "instruction": "What does x[x!=x] mean?", + "input": "", + "output": "

It's a combination of fancy indexing with a boolean mask, and a "trick" (although intended by design) to check for NaN: x != x holds iff x is NaN (for floats, that is).
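The x != x trick is plain IEEE-754 behaviour and works outside torch too:

```python
import math

nan = float("nan")
print(nan != nan)       # True: NaN compares unequal to itself
print(math.isnan(nan))  # True: the explicit check
print(1.0 != 1.0)       # False: ordinary floats compare equal to themselves
```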

\n

They could alternatively have written

\n
lprobs[torch.isnan(lprobs)] = torch.tensor(-math.inf).to(lprobs)\n
\n

or, probably even more idiomatically, used torch.nan_to_num (but beware that the latter also has special behaviour towards infinities).

\n

A non-updating variant of the above would be

\n
torch.where(torch.isnan(lprobs), torch.tensor(-math.inf), lprobs)\n
\n", + "system": "" + }, + { + "instruction": "How to load/fetch the next data batches for the next epoch, during the current epoch?", + "input": "", + "output": "

You can prefetch the next batches from iterator in a background thread.

\n
class _ThreadedIterator(threading.Thread):\n    """\n    Prefetch the next queue_length items from iterator in a background thread.\n\n    Example:\n    >> for i in bg_iterator(range(10)):\n    >>     print(i)\n    """\n\n    class _End:\n        pass\n\n    def __init__(self, generator: Iterable, maxsize: int) -> None:\n        threading.Thread.__init__(self)\n        self.queue: Queue = Queue(maxsize)\n        self.generator = generator\n        self.daemon = True\n        self.start()\n\n    def run(self) -> None:\n        for item in self.generator:\n            self.queue.put(item)\n        self.queue.put(self._End)\n\n    def __iter__(self) -> Any:\n        return self\n\n    def __next__(self) -> Any:\n        next_item = self.queue.get()\n        if next_item == self._End:\n            raise StopIteration\n        return next_item\n\n    # Required for Python 2.7 compatibility\n    def next(self) -> Any:\n        return self.__next__()\n\n\ndef bg_iterator(iterable: Iterable, maxsize: int) -> Any:\n    return _ThreadedIterator(iterable, maxsize=maxsize)\n
\n

UPD.\nUsage:

\n
model = model.to(device, non_blocking=True)\nfor inputs, targets in bg_iterator(data_loader, maxsize=2):\n    inputs = inputs.to(device, non_blocking=True)\n    targets = targets.to(device, non_blocking=True)\n
\n

example

\n", + "system": "" + }, + { + "instruction": "Cannot setup package in conda environment with Pytorch installed", + "input": "", + "output": "

Finally, I found the solution: just use the pip command from the PyTorch official website.

\n
pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html\n
\n", + "system": "" + }, + { + "instruction": "IndexError: Dimension out of range - PyTorch dimension expected to be in range of [-1, 0], but got 1", + "input": "", + "output": "

The error

\n
\n

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

\n
\n

means that you're trying to access an index that doesn't exist in the tensor. For instance, the following code would cause the same IndexError you're experiencing.

\n
# sample input tensors\nIn [210]: x = torch.arange(4)\nIn [211]: z = torch.arange(6)\n\n# trying to concatenate along the second dimension \n# but the tensors have only one dimension (i.e., `0`).\n\nIn [212]: torch.cat([x, z], dim=1)\n
\n

So, one way to overcome this is to promote the tensors to higher dimensions before concatenation, if that is what you need.

\n
# promoting tensors to 2D before concatenation\nIn [216]: torch.cat([x[None, :], z[None, :]], dim=1)\nOut[216]: tensor([[0, 1, 2, 3, 0, 1, 2, 3, 4, 5]])\n
\n

Thus, in your case, you've to analyze and understand what shape you need for x so that it can be concatenated with z along dimension 1 and then the tc passed as input to self.rnn() along with hidden.

\n

As far as I can see, x[None, :], z[None, :] should work.

\n
\n

Debugging for successful training

\n

The code you posted has been written for PyTorch v0.4.1. A lot has changed in the PyTorch Python API since then, but the code was not updated.

\n

Below are the changes you need to make the code run and train successfully. Copy the below functions and paste it at appropriate places in your code.

\n
def generate(self, hidden=None, temperature=None):\n        if hidden is None:\n            hidden=Variable(torch.zeros(1,64))\n        if temperature is None:\n            temperature = 0.8\n        # 1. h => z\n        z_prior = self.prior(hidden)\n        # sampling\n        z = Variable(torch.randn(z_prior.size(0),64))*z_prior[:,64:].exp()+z_prior[:,:64]\n        z = self.phi_z(z)\n        # 2. h + z => x\n        x_out = self.decoder(torch.cat([hidden, z], dim=1))\n        # sampling\n        x_sample = x = x_out.div(temperature).exp().multinomial(1).squeeze()\n        x = self.phi_x(x)\n        # 3. x + z => h\n        x = x[None, ...]   # changed here\n        xz = torch.cat([x,z], dim=1)  # changed here\n        hidden_next = self.rnn(xz,hidden) # changed here\n        return x_sample, hidden_next\n
\n
\n
def generate_text(self, hidden=None,temperature=None, n=100):\n        res = []\n        hidden = None\n        for _ in range(n):\n            x_sample, hidden = self.generate(hidden,temperature)\n            res.append(chr(x_sample.data))      # changed here\n        return "".join(res)\n
\n
\n
for epoch in range(max_epoch):\n    batch = next(g)\n    loss_seq = 0\n    loss1_seq, loss2_seq = 0, 0\n    optimizer.zero_grad()\n    for x in batch:\n        loss1, loss2, hidden = net.calculate_loss(Variable(x),hidden)\n        loss1_seq += loss1.data  # changed here\n        loss2_seq += loss2.data  # changed here\n        loss_seq = loss_seq + loss1+loss2\n    loss_seq.backward()\n    optimizer.step()\n    hidden.detach_()\n    if epoch%100==0:\n        print('>> epoch {}, loss {:12.4f}, decoder loss {:12.4f}, latent loss {:12.4f}'.format(epoch, loss_seq.data, loss1_seq, loss2_seq))  # changed here\n        print(net.generate_text())\n        print()\n
\n
\n

Note: After these changes, the training loop at my end proceeds without any errors on PyTorch v1.7.1. Have a look at the comments with # changed here to understand the changes.

\n", + "system": "" + }, + { + "instruction": "How to process TransformerEncoderLayer output in pytorch", + "input": "", + "output": "

So the input and output shape of the transformer-encoder is (batch-size, sequence-length, embedding-size).\nThere are three possibilities to process the output of the transformer encoder (when not using the decoder).

\n
    \n
  1. you take the mean of the sequence-length dimension:
  2. \n
\n
x = self.transformer_encoder(x)\nx = x.reshape(batch_size, seq_size, embedding_size)      \nx = x.mean(1)\n
\n
    \n
  1. sum it up as you said:
  2. \n
\n
x = self.transformer_encoder(x)\nx = x.reshape(batch_size, seq_size, embedding_size)      \nx = x.sum(1)\n
\n
    \n
  1. using a recurrent neural network to combine the information along the sequence-length dimension:
  2. \n
\n
x = self.transformer_encoder(x)\nx = x.reshape(batch_size, seq_len, embedding_size)      \n\n# init hidden state\nhidden = torch.zeros(layers, batch_size, embedding_size).to(device=device)  \nx, hidden = self.rnn(x, hidden)\nx = x.reshape(batch_size, seq_size, embedding_size)\n\n# take last output\nx = x[:, -1]\n
\n

Taking the last element of the Transformer output isn't a good idea, I think, because then you only use 1/seq-len of the information. With an RNN, by contrast, the last output still carries information from every other output.

\n

I'd say that taking the mean is the best idea.

\n

As for the learning rate: for me it always worked much better with warmup training. If you don't know what that is: you start at a low learning rate, for example 0.00001, and increase it until you reach some target lr, for example 0.002. From then on you just decay the lr as usual.

\n", + "system": "" + }, + { + "instruction": "Pytorch: How to create a random int tensor where a certain percent are of a certain value? For example, 25% are 1s, and rest 0s", + "input": "", + "output": "

You can use torch.rand to generate a random tensor uniform on [0, 1) and compare it to 0.25 (each entry becomes 1 with probability 0.25, so you get 25% ones in expectation rather than an exact count):

\n
(torch.rand(size=(2,5)) < 0.25).int()\n
\n

Output:

\n
tensor([[0, 0, 0, 0, 1],\n        [1, 0, 0, 0, 0]], dtype=torch.int32)\n
\n", + "system": "" + }, + { + "instruction": "In 2020 what is the optimal way to train a model in Pytorch on more than one GPU on one computer?", + "input": "", + "output": "

If you cannot fit all the layers of your model on a single GPU, then you can use model parallel (that article describes model parallel on a single machine, with layer0.to('cuda:0') and layer1.to('cuda:1') like you mentioned).

\n

If you can, then you can try distributed data parallel - each worker will hold its own copy of the entire model (all layers), and will work on a small portion of the data in each batch. DDP is recommended instead of DP, even if you only use a single machine.

\n

Do you have some examples that can reproduce the issues you're having?\nHave you tried running your code with tiny inputs, and adding print statements to see whether progress is being made?

\n", + "system": "" + }, + { + "instruction": "How does pytorch compute derivatives for simple functions?", + "input": "", + "output": "

See this paper for exact answer, specifically section 2.1 or figure 2.

\n

In short, PyTorch has a list of basic functions and the expressions of their derivatives. So, what is done in your case (y = x*x) is evaluating the symbolic derivative y' = 2x at the given x.

\n
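To see this in action, here is a minimal autograd check (a sketch using only standard PyTorch calls): the gradient returned for y = x*x matches the symbolic derivative 2x exactly, rather than a finite-difference approximation.

```python
# Sketch: PyTorch's autograd applies the stored derivative rule
# y' = 2x exactly, not a numerical approximation.
import torch

x = torch.tensor(3.0, requires_grad=True)
y = x * x
y.backward()

print(x.grad)  # tensor(6.)
```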

The numerical method you mentioned is called numerical differentiation or finite differences, and it is an approximation of the derivative. But it is not what PyTorch does.

\n", + "system": "" + }, + { + "instruction": "Why use Caffe2 or Core-ML instead of LibTorch(.pt file) on iOS?", + "input": "", + "output": "

Core ML can use the Apple Neural Engine (ANE), which is much faster than running the model on the CPU or GPU. If a device has no ANE, Core ML can automatically fall back to the GPU or CPU.

\n\n

I haven't really looked into PyTorch Mobile in detail, but I think it currently only runs on the CPU, not on the GPU. And it definitely won't run on the ANE because only Core ML can do that.

\n\n

Converting models can be a hassle, especially from PyTorch which requires going through ONNX first. But you do end up with a much faster way to run those models.

\n", + "system": "" + }, + { + "instruction": "Pytorch: Index with tensor along multiple axes OR scatter to more than one index at once", + "input": "", + "output": "

These operations work exactly the same in their PyTorch counterparts, except for torch.nonzero, which by default returns a tensor of size [z, n] (where z is the number of non-zero elements and n the number of dimensions) instead of a tuple of n tensors with size [z] (as NumPy does), but that behaviour can be changed by setting as_tuple=True.

\n\n
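To make the torch.nonzero difference concrete, here is a small sketch (standard calls only) contrasting the default [z, n] result with the NumPy-style tuple you get from as_tuple=True:

```python
# Sketch: torch.nonzero returns a [z, n] index tensor by default,
# and a tuple of n index tensors (each of size [z]) with as_tuple=True.
import torch

t = torch.tensor([[1, 0], [0, 2]])

default = torch.nonzero(t)                  # shape [2, 2]: rows are (row, col) pairs
as_tuple = torch.nonzero(t, as_tuple=True)  # tuple of 2 tensors, each of size [2]

print(default.shape)  # torch.Size([2, 2])
print(len(as_tuple))  # 2
```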

Other than that you can directly translate it to PyTorch, but you need to make sure that the types match, because you cannot assign a tensor of type torch.long (default of torch.randint) to a tensor of type torch.float (default of torch.ones). In this case, data is probably meant to have type torch.long:

\n\n
#set up the array containing the data\ndata = 100*torch.ones((10,10,2), dtype=torch.long)\ndata[5:,:,:] = 0\n#select the data points that I want to update\nidxs = torch.nonzero(data.sum(2), as_tuple=True)\n#generate the updates that I am going to do\nupdates = torch.randint(5,size=(idxs[0].shape[0],2))\n#update the data\ndata[idxs[0],idxs[1],:] = updates\n
\n", + "system": "" + }, + { + "instruction": "Recommended way to replace several values in a tensor at once?", + "input": "", + "output": "

If you don't have any duplicate elements in your input tensor, here's one straightforward way using masking and value assignment via basic indexing. (I'll assume the input tensor's dtype is int, but you can easily adapt this code to other dtypes.) Below is a reproducible illustration, with explanations in inline comments.

\n\n
# input tensors to work with\nIn [75]: old_values    \nOut[75]: tensor([1, 2, 3, 4, 5], dtype=torch.int32)\n\nIn [77]: old_new_value      \nOut[77]:\ntensor([[ 2, 22],\n        [ 3, 33]], dtype=torch.int32)\n
\n\n
\n\n
# generate a boolean mask using the values that need to be replaced (i.e. 2 & 3)\nIn [78]: boolean_mask = (old_values == old_new_value[:, :1]).sum(dim=0).bool() \n\nIn [79]: boolean_mask \nOut[79]: tensor([False,  True,  True, False, False])\n\n# assign the new values by basic indexing\nIn [80]: old_values[boolean_mask] = old_new_value[:, 1:].squeeze() \n\n# sanity check!\nIn [81]: old_values \nOut[81]: tensor([ 1, 22, 33,  4,  5], dtype=torch.int32)\n
\n\n

A small note on efficiency: Throughout the whole process, we never made any copy of the data (i.e. we operate only on new views by massaging the shapes according to our needs). Therefore, the runtime would be blazing fast.

\n", + "system": "" + }, + { + "instruction": "How to divide the dataset when it is distributed", + "input": "", + "output": "

You can split torch.utils.data.Dataset before creating torch.utils.data.DataLoader.

\n\n

Simply use torch.utils.data.random_split like this:

\n\n
train, validation = torch.utils.data.random_split(\n    dataset,\n    (len(dataset) - val_length, val_length),\n)\n
\n\n

This would give you two separate datasets which could be used with dataloaders however you wish.

\n", + "system": "" + }, + { + "instruction": "Pytorch device and .to(device) method", + "input": "", + "output": "

This code is deprecated. Just do:

\n
def forward(self, inputs, hidden):\n    embed_out = self.embeddings(inputs)\n    logits = torch.zeros((self.seq_len, self.batch_size, self.vocab_size), device=inputs.device)\n
\n

Note that to(device) is cost-free if the tensor is already on the requested device. And do not use get_device() but rather device attribute. It is working fine with cpu and gpu out of the box.

\n

Also, note that torch.tensor(np.array(...)) is bad practice for several reasons. First, to convert a numpy array to a torch tensor, use either as_tensor or from_numpy; otherwise you will get a tensor with the default numpy dtype instead of torch's. In this case it is the same (int64), but for float it would be different. Finally, torch.tensor can be initialized from a list, just like a numpy array, so you can get rid of numpy completely and call torch directly.

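A small sketch of the dtype point (illustrative values):

```python
# Sketch: from_numpy preserves the numpy dtype (float64 here),
# while torch's own default floating dtype is float32.
import numpy as np
import torch

a = np.array([1.0, 2.0])        # numpy default: float64
t1 = torch.from_numpy(a)        # keeps float64
t2 = torch.tensor([1.0, 2.0])   # torch default: float32

print(t1.dtype, t2.dtype)  # torch.float64 torch.float32
```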
\n", + "system": "" + }, + { + "instruction": "AttributeError: dataset object has no attribute 'c' FastAI", + "input": "", + "output": "

You should add the attribute c into your NumbersDataset, like this:

\n\n
def __init__(self, inputs, labels, c):\n  self.inputs = inputs\n  self.labels = labels\n  self.c = c\n
\n", + "system": "" + }, + { + "instruction": "Why is very simple PyTorch LSTM model not learning?", + "input": "", + "output": "

LSTMs are made of neurons that generate an internal state based upon a feedback loop from previous training data. Each neuron has four internal gates that take multiple inputs and generate multiple outputs. It's one of the more complex neurons to work with and understand, and I'm not really skilled enough to give an in-depth answer.

\n\n

What I see in your example code is a lack of understanding of how they work, and it seems like you're assuming they work like a linear layer. I say that, because your forward method doesn't handle the internal state and you're not reshaping the outputs.

\n\n

You define the LSTM like this:

\n\n
     self.lstm = nn.LSTM(input_size=input_size, hidden_size=input_size, num_layers=2)\n
\n\n

The hidden_size relates to how memory and features work with the gates.

\n\n

PyTorch documentation says the following:

\n\n
\n

hidden_size \u2013 The number of features in the hidden state h

\n
\n\n

It is referring to the size of the hidden state used to train the internal gates for long and short term memory. The gates are a function across the hidden features that store previous gate outputs. Each time a neuron is trained the hidden state is updated and is used again for the next training data.

\n\n

So why is this so important?

\n\n

You are throwing away the hidden state data during training, and I don't know what happens if you don't define the hidden state. I assume the LSTM works as if there is never any history.

\n\n

The forward function should look something like this:

\n\n
    def forward(self, x, hidden):\n        lstm_output, hidden = self.lstm(x, hidden)\n        return lstm_output, hidden\n
\n\n

During training you have to keep track of the hidden state yourself.

\n\n
for i in range(epochs):\n   hidden = (torch.zeros(num_layers, batch_size, num_hidden),\n             torch.zeros(num_layers, batch_size, num_hidden))\n\n   for x, y in generate_batches(...):\n        # missing code....\n        lstm_output, hidden = model.forward(x, hidden)\n
\n\n

Take note of the shape for the hidden state. It's different from what you usually do with linear layers.

\n\n

There are some steps above missing that relate to resetting the hidden state, but I can't remember how that part works.

\n\n

LSTMs on their own only describe features, much like convolution layers. It's unlikely that the raw outputs of an LSTM are what you're interested in using.

\n\n

Most models that use LSTMs or convolutions will have a bottom section of fully connected layers (for example: nn.Linear()). These layers will train on the features to predict the outputs you're interested in.

\n\n

The problem here is that the outputs from LSTMs are in the wrong shape, and you have to reshape the tensors so that a linear layer can use them.

\n\n

Here is an example LSTM forward function that I have used:

\n\n
    def forward(self, x, hidden):\n        lstm_output, hidden = self.lstm(x, hidden)\n        drop_output = self.dropout(lstm_output)\n        drop_output = drop_output.contiguous().view(-1, self.num_hidden)\n        final_out = self.fc_linear(drop_output)\n        return final_out, hidden\n
\n\n

LSTMs are definitely an advanced topic in machine learning, and PyTorch isn't an easy library to learn to begin with. I would recommend reading up on LSTMs using the TensorFlow documentation and online blogs to get a better grasp of how they work.

\n", + "system": "" + }, + { + "instruction": "Can't use GPU with Pytorch", + "input": "", + "output": "

You can fix this error by installing CUDA 10.2 (the latest version at the time of writing) and then reinstalling PyTorch with this command:

\n\n
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch\n
\n", + "system": "" + }, + { + "instruction": "How to apply bounds on a variable when performing optimisation in Pytorch?", + "input": "", + "output": "

I ran into the same problem:\nI also wanted to apply bounds on a variable in PyTorch,\nand I solved it with Way 3 below.

\n

Your example is a little complex,\nso I give a simpler one below.

\n

For example, suppose there is a trainable variable v whose bounds are (-1, 1):

\n
v = torch.tensor((0.5,), requires_grad=True)\nv_loss = xxxx  # some loss computed from v\noptimizer.zero_grad()\nv_loss.backward()\noptimizer.step()\n
\n

Way 1 fails with RuntimeError: a leaf Variable that requires grad has been used in an in-place operation.

\n
v.clamp_(-1, 1)             \n
\n

Way 2 fails with RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed.

\n
v = torch.clamp(v, -1, +1)  # equal to v = v.clamp(-1, +1)  \n
\n

Way 3 works with no error; this is how I solved the problem.

\n
with torch.no_grad():\n    v[:] = v.clamp(-1, +1)  # You must use v[:]=xxx instead of v=xxx\n
\n", + "system": "" + }, + { + "instruction": "Compare Number of Equal Elements in Tensors", + "input": "", + "output": "

You can just use the == operator to check for equality and then sum the resulting tensor:

\n\n
# Import torch and create dummy tensors\n>>> import torch\n>>> A = torch.randint(2, (10,))\n>>> A\ntensor([0, 0, 0, 1, 0, 1, 0, 0, 1, 1])\n>>> B = torch.randint(2, (10,))\n>>> B\ntensor([0, 1, 1, 0, 1, 0, 0, 1, 1, 0])\n\n# Checking for number of equal values\n>>> (A == B).sum()\ntensor(3)\n
\n\n
\n\n

Edit:

\n\n

torch.eq yields the same result, so if you for some reason prefer that:

\n\n
>>> torch.eq(A, B).sum()\ntensor(3)\n
\n", + "system": "" + }, + { + "instruction": "How do I merge 2D Convolutions in PyTorch?", + "input": "", + "output": "

If we have y = x * a * b (where * means convolution and a, b are your kernels), we can define c = a * b such that y = x * c = x * a * b as follows:

\n
import torch\n\ndef merge_conv_kernels(k1, k2):\n    """\n    :input k1: A tensor of shape ``(out1, in1, s1, s1)``\n    :input k2: A tensor of shape ``(out2, in2, s2, s2)``\n    :returns: A tensor of shape ``(out2, in1, s1+s2-1, s1+s2-1)``\n      so that convolving with it equals convolving with k1 and\n      then with k2.\n    """\n    padding = k2.shape[-1] - 1\n    # Flip because this is actually correlation, and permute to adapt to BHCW\n    k3 = torch.conv2d(k1.permute(1, 0, 2, 3), k2.flip(-1, -2),\n                      padding=padding).permute(1, 0, 2, 3)\n    return k3\n
\n

To illustrate the equivalence, this example combines two kernels with 900 and 5000 parameters respectively into an equivalent kernel of 28 parameters:

\n
# Create 2 conv. kernels\nout1, in1, s1 = (100, 1, 3)\nout2, in2, s2 = (2, 100, 5)\nkernel1 = torch.rand(out1, in1, s1, s1, dtype=torch.float64)\nkernel2 = torch.rand(out2, in2, s2, s2, dtype=torch.float64)\n\n# propagate a random tensor through them. Note that padding\n# corresponds to the "full" mathematical operation (s-1)\nb, c, h, w = 1, 1, 6, 6\nx = torch.rand(b, c, h, w, dtype=torch.float64) * 10\nc1 = torch.conv2d(x, kernel1, padding=s1 - 1)\nc2 = torch.conv2d(c1, kernel2, padding=s2 - 1)\n\n# check that the collapsed conv2d is same as c2:\nkernel3 = merge_conv_kernels(kernel1, kernel2)\nc3 = torch.conv2d(x, kernel3, padding=kernel3.shape[-1] - 1)\nprint(kernel3.shape)\nprint((c2 - c3).abs().sum() < 1e-5)\n
\n

Note: The equivalence is assuming that we have unlimited numerical resolution. I think there was research on stacking many low-resolution-float linear operations and showing that the networks profited from numerical error, but I am unable to find it. Any reference is appreciated!

\n", + "system": "" + }, + { + "instruction": "RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'mat2' how to fix it?", + "input": "", + "output": "

The weights are Floats, the inputs are Longs. This is not allowed. In fact, I don't think torch supports anything other than floats in neural networks.

\n\n

If you remove all calls to long, and define your input as floats, it will work (it does, I tried).

\n\n

(You will then get another unrelated error: you need to sum your loss)

\n", + "system": "" + }, + { + "instruction": "how to install pytorch in python2.7?", + "input": "", + "output": "

Here's the link to the PyTorch official download page

\n\n

From here, you can choose the python version (2.7) and CUDA (None) and other relevant details based on your environment and OS.

\n\n

Other helpful links:

\n\n\n", + "system": "" + }, + { + "instruction": "Image size of 256x256 (not 299x299) fed into Inception v3 model (PyTorch) and works?", + "input": "", + "output": "

It's because the pytorch implementation of inception v3 uses an adaptive average pooling layer right before the fully-connected layer.

\n\n

If you take a look at the Inception3 class in torchvision/models/inception.py, the operation of most interest with respect to your question is x = F.adaptive_avg_pool2d(x, (1, 1)). Since the average pooling is adaptive the height and width of x before pooling are independent of the output shape. In other words, after this operation we always get a tensor of size [b,c,1,1] where b and c are the batch size and number of channels respectively. This way the input to the fully connected layer is always the same size so no exceptions are raised.

\n\n
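A quick sketch of why the spatial size drops out (a direct call to F.adaptive_avg_pool2d with illustrative shapes):

```python
# Sketch: adaptive average pooling to (1, 1) yields [b, c, 1, 1]
# regardless of the input's spatial dimensions.
import torch
import torch.nn.functional as F

a = F.adaptive_avg_pool2d(torch.rand(1, 2048, 8, 8), (1, 1))
b = F.adaptive_avg_pool2d(torch.rand(1, 2048, 6, 6), (1, 1))

print(a.shape, b.shape)  # both torch.Size([1, 2048, 1, 1])
```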

That said, if you're using the pretrained inception v3 weights then the model was originally trained for input of size 299x299. Using inputs of different sizes may have a negative impact on loss/accuracy, although smaller input images will almost certainly decrease computational time and memory footprint since the feature maps will be smaller.

\n", + "system": "" + }, + { + "instruction": "Poor fits for simple 2D Gaussian processes in `GPyTorch`", + "input": "", + "output": "

I was running into similar issues when trying to fit high-dimensional Gaussian Processes. A couple of suggestions (not sure if these will work):

\n
    \n
  1. Try using a ZeroMean, rather than a constant mean. It could be that having more hyperparameters (constant mean hyperparameter values) could lead the -mll objective to a local minima, rather than to a global minima. Using a different optimizer, e.g. lbfgs (which is second-order, rather than adam or sgd, which are both first-order) may help with this as well.

    \n
  2. \n
  3. Try normalizing your input data using min-max normalization, and your targets using standard normal N(0,1) normalization. In short, these normalization steps ensure that your data is consistent with the default priors for these GPR models. For more info on this, check out this GitHub issue.

    \n
  4. \n
  5. Try changing the learning rate for optimizing your hyperparameters, or the number of epochs you train this for.

    \n
  6. \n
  7. If you see any issues with numerical precision (particularly after normalization, if you choose to use it), try changing your model and datasets to double, 64-bit precision by converting torch tensors to double-precision tensors with tensor.double().

    \n
  8. \n
\n
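The normalization step (point 2 above) can be sketched with plain tensor ops; the variable names are illustrative:

```python
# Sketch: min-max normalize inputs to [0, 1] and standardize
# targets to zero mean / unit variance before GP fitting.
import torch

train_x = torch.tensor([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
train_y = torch.tensor([2.0, 4.0, 6.0])

# per-feature min-max normalization of the inputs
x_min, x_max = train_x.min(0).values, train_x.max(0).values
train_x_norm = (train_x - x_min) / (x_max - x_min)

# standard-normal normalization of the targets
y_mean, y_std = train_y.mean(), train_y.std()
train_y_norm = (train_y - y_mean) / y_std

print(train_x_norm.min().item(), train_x_norm.max().item())  # 0.0 1.0
```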

Again, can't guarantee that these will fix the issues you're running into, but hopefully they help!

\n

Here are some tutorials I've put together as well.

\n", + "system": "" + }, + { + "instruction": "Implementing FFT with Pytorch", + "input": "", + "output": "

You calculated the power rather than the amplitude.\nYou simply need to add the line zx = zx.pow(0.5) to take the square root to get the amplitude.

\n", + "system": "" + }, + { + "instruction": "pytorch masked_fill: why can't I mask all zeros?", + "input": "", + "output": "

Your mask is wrong. Try

\n\n
scores = scores.masked_fill(scores == 0, -np.inf)\n
\n\n

scores now looks like

\n\n
tensor([[1.4796, 1.2361, 1.2137, 0.9487,   -inf,   -inf],\n        [0.6889, 0.4428, 0.6302, 0.4388,   -inf,   -inf],\n        [0.8842, 0.7614, 0.8311, 0.6431,   -inf,   -inf],\n        [0.9884, 0.8430, 0.7982, 0.7323,   -inf,   -inf],\n        [  -inf,   -inf,   -inf,   -inf,   -inf,   -inf],\n        [  -inf,   -inf,   -inf,   -inf,   -inf,   -inf]])\n
\n", + "system": "" + }, + { + "instruction": "How to convert some tensorflow code into pytorch version", + "input": "", + "output": "

Here is my Implementation (I am taking an example of logits of dimension [3,5]):

\n\n

Tensorflow Version:

\n\n
import tensorflow as tf\ndef kl_loss_compute(logits1, logits2):\n    \"\"\" KL loss\n    \"\"\"\n    pred1 = tf.nn.softmax(logits1)\n    print(pred1.eval())\n    pred2 = tf.nn.softmax(logits2)\n    print(pred2.eval())\n    loss = tf.reduce_mean(tf.reduce_sum(pred2 * tf.log(1e-8 + pred2 / (pred1 + 1e-8)), 1))\n    return loss\n\nx1 = tf.random.normal([3, 5], dtype=tf.float32)\nx2 = tf.random.normal([3, 5], dtype=tf.float32)\nwith tf.Session() as sess:\n    x1 = sess.run(x1)\n    print(x1)\n    x2 = sess.run(x2)\n    print(x2)\n    print(30*'=')\n    print(sess.run(kl_loss_compute(x1, x2)))\n
\n\n

Output:

\n\n
[[ 0.9801388  -0.2514422  -0.28299806  0.85130763  0.4565948 ]\n [-1.0744809   0.20301117  0.21026622  1.0385195   0.41147012]\n [ 1.2385081   1.1003486  -2.0818367  -1.0446491   1.8817908 ]]\n[[ 0.04036871  0.82306993  0.82962424  0.5209219  -0.10473887]\n [ 1.7777447  -0.6257034  -0.68985045 -1.1191329  -0.2600192 ]\n [ 0.03387258  0.44405013  0.08010675  0.9131149   0.6422863 ]]\n==============================\n[[0.32828477 0.09580362 0.09282765 0.2886025  0.19448158]\n [0.04786159 0.17170973 0.17296004 0.39596024 0.21150835]\n [0.2556382  0.22265059 0.00923886 0.02606533 0.48640704]]\n[[0.12704821 0.27790183 0.27972925 0.20543297 0.10988771]\n [0.7349108  0.06644011 0.062312   0.04056362 0.09577343]\n [0.12818882 0.19319147 0.13425465 0.30881628 0.23554876]]\n0.96658206\n
\n\n

PyTorch Version:

\n\n
def kl_loss_compute(logits1, logits2):\n    \"\"\" KL loss\n    \"\"\"\n    pred1 = torch.softmax(logits1, dim=-1, dtype=torch.float32)\n    print(pred1)\n    pred2 = torch.softmax(logits2, dim=-1, dtype=torch.float32)\n    print(pred2)\n    loss = torch.mean(torch.sum(pred2 * torch.log(1e-8 + pred2 / (pred1 + 1e-8)), -1))\n    return loss\n# same inputs are used here as above(see the inputs used in tensorflow code in the output)\nx = torch.Tensor([[ 0.9801388,  -0.2514422 , -0.28299806 , 0.85130763,  0.4565948 ],\n                 [-1.0744809 ,  0.20301117,  0.21026622,  1.0385195,   0.41147012],\n                 [ 1.2385081  , 1.1003486,  -2.0818367,  -1.0446491,   1.8817908 ]])\ny = torch.Tensor([[ 0.04036871 , 0.82306993,  0.82962424,  0.5209219,  -0.10473887],\n                 [ 1.7777447  ,-0.6257034,  -0.68985045, -1.1191329,  -0.2600192 ],\n                 [ 0.03387258 , 0.44405013 , 0.08010675,  0.9131149,   0.6422863 ]]) \nprint(kl_loss_compute(x, y))\n
\n\n

Output:

\n\n
tensor([[0.3283, 0.0958, 0.0928, 0.2886, 0.1945],\n        [0.0479, 0.1717, 0.1730, 0.3960, 0.2115],\n        [0.2556, 0.2227, 0.0092, 0.0261, 0.4864]])\ntensor([[0.1270, 0.2779, 0.2797, 0.2054, 0.1099],\n        [0.7349, 0.0664, 0.0623, 0.0406, 0.0958],\n        [0.1282, 0.1932, 0.1343, 0.3088, 0.2355]])\ntensor(0.9666)\n
\n", + "system": "" + }, + { + "instruction": "Apply a PyTorch CrossEntropy method for multiclass segmentation", + "input": "", + "output": "

Look at the description of the nn.CrossEntropyLoss function: the prediction out you provide to nn.CrossEntropyLoss is not treated as class probabilities, but rather as logits. The loss function derives the class probabilities from out using softmax, therefore nn.CrossEntropyLoss will never output exactly zero loss.

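A small sketch makes this concrete (illustrative logits): even a confidently correct prediction yields a small positive loss, because softmax of finite logits never assigns probability exactly 1.

```python
# Sketch: a strongly correct prediction still gives loss > 0,
# since softmax(logits) never reaches exactly 1 for finite logits.
import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()
logits = torch.tensor([[10.0, 0.0, 0.0]])  # strongly favours class 0
target = torch.tensor([0])

loss = loss_fn(logits, target)
print(loss.item() > 0)  # True: small, but never exactly zero
```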
\n", + "system": "" + }, + { + "instruction": "A vector and matrix rows cosine similarity in pytorch", + "input": "", + "output": "

You can use the torch.nn.functional.cosine_similarity function to compute cosine similarity, and torch.argsort (with descending=True, so the most similar come first) to extract the top 1000.

\n\n

Here is an example:

\n\n
x = torch.rand(10000,300)\ny = torch.rand(1,300)\ndist = F.cosine_similarity(x,y)\nindex_sorted = torch.argsort(dist, descending=True)  # most similar first\ntop_1000 = index_sorted[:1000]\n
\n\n
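Alternatively, torch.topk gives the most-similar indices directly without a full sort; a small sketch with a tiny illustrative matrix:

```python
# Sketch: torch.topk returns the k largest similarities and their
# row indices in one call.
import torch
import torch.nn.functional as F

x = torch.tensor([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
y = torch.tensor([[1.0, 0.0]])

dist = F.cosine_similarity(x, y)       # one score per row of x
values, top_2 = torch.topk(dist, k=2)  # two most similar rows

print(top_2.tolist())  # [0, 2]
```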

Please note the shape of y, don't forget to reshape before calling similarity function. Also note that argsort simply returns the indexes of closest vectors. To access those vectors themselves, just write x[top_1000], which will return a matrix shaped (1000,300).

\n", + "system": "" + }, + { + "instruction": "TypeError: unhashable type: 'list' when calling .iloc()", + "input": "", + "output": "

You don't need the parentheses with iloc:

\n\n
self.landmarks_frame.iloc[index, 0]\n
\n", + "system": "" + }, + { + "instruction": "How is a 3-d tensor indexed by two 2d tensors?", + "input": "", + "output": "

My guess is that X is a 3D tensor, since it usually represents a batch of training data.

\n\n

As far as the functionality of this function is concerned, it randomly permutes the input data tensor X and it does this using the following steps:

\n\n\n\n

Now, the below line is where all the magic happens:

\n\n
X[idx, adx[None, :]].transpose(1, 2)\n
\n\n

We use the indices we got before, idx and adx (adx[None, :] is simply a two-dimensional row vector). Once we have that, we transpose axes 1 and 2, exactly as we did at the beginning of the function in the line:

\n\n
X = X.transpose(1, 2)\n
\n\n
\n\n

Here is a contrived example, for better understanding:

\n\n
# our input tensor\nIn [51]: X = torch.rand(64, 64, 32)\n\nIn [52]: X = X.transpose(1, 2)\n\nIn [53]: X.shape\nOut[53]: torch.Size([64, 32, 64])\n\nIn [54]: b = torch.rand((X.size(0), X.size(1)))\n\n# sort `b` which returns a tuple and take only indices\nIn [55]: idx = b.sort(0)[1]\n\nIn [56]: idx.shape\nOut[56]: torch.Size([64, 32])\n\nIn [57]: adx = torch.arange(0, X.size(1)).long()\n\nIn [58]: adx.shape\nOut[58]: torch.Size([32])\n\nIn [59]: X[idx, adx[None, :]].transpose(1, 2).shape\nOut[59]: torch.Size([64, 64, 32])\n
\n\n

The important thing to note here is how we got the same shape in the last step as the shape of the input tensor which is (64, 64, 32).

\n", + "system": "" + }, + { + "instruction": "Pytorch super(class, self).__init__() vs super().__init__()", + "input": "", + "output": "

I asked on the official forum, and it seems to be just for backward compatibility and nothing else.

\n", + "system": "" + }, + { + "instruction": "AttentionDecoderRNN without MAX_LENGTH", + "input": "", + "output": "

Attention modulates the input to the decoder. That is, attention modulates the encoded sequence, which is of the same length as the input sequence. Thus, MAX_LENGTH should be the maximum sequence length of all your input sequences.

\n", + "system": "" + }, + { + "instruction": "Correct way to swap PyTorch tensors without copying", + "input": "", + "output": "

First, you should know that every tensor in PyTorch has an underlying storage that holds its actual data. You can use .untyped_storage() to retrieve the underlying storage of a tensor. Then, you can use .set_() to replace the tensor's underlying storage with a new one.

\n
with torch.no_grad():\n\n    x_storage = x.untyped_storage()\n    y_storage = y.untyped_storage()\n\n    x.set_(y_storage, y.storage_offset(), y.size(), y.stride())\n    y.set_(x_storage, x.storage_offset(), x.size(), x.stride())\n
\n

Note: the swapping process does not affect references to the tensors themselves. Also, swapping the storage does not interfere with reference counting/GC, since PyTorch handles reference counting and garbage collection automatically, and you are not creating new tensors or modifying the reference counts directly.

\n

Update

\n

After mentioning in the comments that the swapping will happen inside an optimizer class: this can potentially affect the autograd graph. When you swap storage objects between tensors, the data that the autograd graph references changes underneath it. This can lead to inconsistencies between the calculated gradients and the actual computations performed on the swapped tensors. In brief, swapping data directly within an optimizer can be problematic for the autograd graph. Therefore, I do not recommend swapping tensors directly within an optimizer class.

\n

The only solution is to use temporary tensors for safe swapping.

\n", + "system": "" + }, + { + "instruction": "IndexError: Dimension out of range - PyTorch dimension expected to be in range of [-1, 0], but got 1", + "input": "", + "output": "

The error

\n
\n

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

\n
\n

means that you're trying to access an index that doesn't exist in the tensor. For instance, the following code would cause the same IndexError you're experiencing.

\n
# sample input tensors\nIn [210]: x = torch.arange(4)\nIn [211]: z = torch.arange(6)\n\n# trying to concatenate along the second dimension \n# but the tensors have only one dimension (i.e., `0`).\n\nIn [212]: torch.cat([x, z], dim=1)\n
\n

So, one way to overcome this is to promote the tensors to higher dimensions before concatenation, if that is what you need.

\n
# promoting tensors to 2D before concatenation\nIn [216]: torch.cat([x[None, :], z[None, :]], dim=1)\nOut[216]: tensor([[0, 1, 2, 3, 0, 1, 2, 3, 4, 5]])\n
\n

Thus, in your case, you have to analyze and understand what shape you need for <code>x</code> so that it can be concatenated with <code>z</code> along dimension 1, and then <code>tc</code> passed as input to <code>self.rnn()</code> along with <code>hidden</code>.

\n

As far as I can see, <code>x[None, :]</code>, <code>z[None, :]</code> should work.

\n
\n

Debugging for successful training

\n

The code you posted has been written for PyTorch v0.4.1. A lot has changed in the PyTorch Python API since then, but the code was not updated.

\n

Below are the changes you need to make for the code to run and train successfully. Copy the functions below and paste them at the appropriate places in your code.

\n
def generate(self, hidden=None, temperature=None):\n        if hidden is None:\n            hidden=Variable(torch.zeros(1,64))\n        if temperature is None:\n            temperature = 0.8\n        # 1. h => z\n        z_prior = self.prior(hidden)\n        # sampling\n        z = Variable(torch.randn(z_prior.size(0),64))*z_prior[:,64:].exp()+z_prior[:,:64]\n        z = self.phi_z(z)\n        # 2. h + z => x\n        x_out = self.decoder(torch.cat([hidden, z], dim=1))\n        # sampling\n        x_sample = x = x_out.div(temperature).exp().multinomial(1).squeeze()\n        x = self.phi_x(x)\n        # 3. x + z => h\n        x = x[None, ...]   # changed here\n        xz = torch.cat([x,z], dim=1)  # changed here\n        hidden_next = self.rnn(xz,hidden) # changed here\n        return x_sample, hidden_next\n
\n
\n
def generate_text(self, hidden=None,temperature=None, n=100):\n        res = []\n        hidden = None\n        for _ in range(n):\n            x_sample, hidden = self.generate(hidden,temperature)\n            res.append(chr(x_sample.data))      # changed here\n        return "".join(res)\n
\n
\n
for epoch in range(max_epoch):\n    batch = next(g)\n    loss_seq = 0\n    loss1_seq, loss2_seq = 0, 0\n    optimizer.zero_grad()\n    for x in batch:\n        loss1, loss2, hidden = net.calculate_loss(Variable(x),hidden)\n        loss1_seq += loss1.data  # changed here\n        loss2_seq += loss2.data  # changed here\n        loss_seq = loss_seq + loss1+loss2\n    loss_seq.backward()\n    optimizer.step()\n    hidden.detach_()\n    if epoch%100==0:\n        print('>> epoch {}, loss {:12.4f}, decoder loss {:12.4f}, latent loss {:12.4f}'.format(epoch, loss_seq.data, loss1_seq, loss2_seq))  # changed here\n        print(net.generate_text())\n        print()\n
\n
\n

Note: After these changes, the training loop at my end proceeds without any errors on PyTorch v1.7.1. Have a look at the comments with # changed here to understand the changes.

\n", + "system": "" + }, + { + "instruction": "In 2020 what is the optimal way to train a model in Pytorch on more than one GPU on one computer?", + "input": "", + "output": "

If you cannot fit all the layers of your model on a single GPU, then you can use model parallel (that article describes model parallel on a single machine, with layer0.to('cuda:0') and layer1.to('cuda:1') like you mentioned).

\n

If you can, then you can try distributed data parallel - each worker will hold its own copy of the entire model (all layers), and will work on a small portion of the data in each batch. DDP is recommended instead of DP, even if you only use a single machine.

\n

Do you have some examples that can reproduce the issues you're having?\nHave you tried running your code with tiny inputs, and adding print statements to see whether progress is being made?

\n", + "system": "" + }, + { + "instruction": "Why am I getting "RuntimeError: Trying to backward through the graph a second time"?", + "input": "", + "output": "

You have created a lot of leaf nodes (gradient-requiring variables), including:

\n
ref_image = apply_ellipse_mask(torch.zeros(image_width, image_height, requires_grad=True), sphere_position, [sphere_radius, sphere_radius, sphere_radius])\n
\n

which creates a leaf node (with <code>torch.zeros(image_width, image_height, requires_grad=True)</code>) and applies some computations to it, so you get a computation graph. But then you reuse the result every iteration instead of recomputing it, so you are trying to backpropagate through the same graph several times. The only things that should have <code>requires_grad = True</code> are the parameters you optimize on.

\n
\n

You're having a differentiability problem

\n
\n

You're trying to flow gradients to ellipsoid_axes through the computation of the mask, but the mask computation is not differentiable with respect to the axes (it returns 0-1 anyway). You should make the mask "soft" using some kind of sigmoid instead.

\n
\n

on your apply_ellipse_mask function

\n
\n

This is more of a comment as this code will still cause the same error. Avoid for-loops like this with PyTorch as they are slow. Instead you could write:

\n
def apply_ellipse_mask(img, pos, axes):\n    r = torch.arange(image_height)[:, None]\n    c = torch.arange(image_width)[None, :]\n    val_array = ((c - pos[0])**2) / axes[0]**2 + ((r - pos[1])**2) / axes[1]**2\n    mask = torch.where(0.9 < val < 1, torch.tensor(1.0),  torch.tensor(0.0))\n\n    return img * (1.0 - mask) + mask\n
\n", + "system": "" + }, + { + "instruction": "Why is a namespace-qualified name available as a candidate?", + "input": "", + "output": "

Because the pytorch headers contain a

\n
using namespace at;\n
\n

directive inside the torch namespace. See here.

\n

Because you use using namespace torch; this then transitively also makes all of at available in the global scope.

\n

The comment above the linked line indicates that they didn't consider the user using using namespace torch; in order to make unqualified calls. It seems that they expect the user to make qualified calls instead. As the comment states, qualified lookup behaves differently.

\n", + "system": "" + }, + { + "instruction": "A lot of incorrect detection using YOLOv8", + "input": "", + "output": "

I was facing the same problem with the yolov8n.pt model. I downgraded the ultralytics version from 8.2.70 to 8.2.60, and the issue is now resolved.

\n", + "system": "" + }, + { + "instruction": "Pruning nn.Linear weights inplace causes unexpected error, requires slightly weird workarounds. Need explanation", + "input": "", + "output": "

Like you guessed, the issue is with the computational graph that gets created when you do backpropagation.

\n

Let me explain the above point:

\n

When you initialize a tensor in pytorch, it usually signals that the operations you perform on it should be tracked. When you do a forward pass, the functions for backward prop are set up and the graph is built.

\n

In case 2, you are deleting the tensor and hence the entire process is reset -- the computation graph is reset. In case 3, you are clearly resetting the parameters.

\n

The output tensor and the model parameters are connected to the graph.

\n

If you want to clearly visualize where the TBackward0 function is, use torchviz to visualize the computational graph.

\n", + "system": "" + }, + { + "instruction": "CUDA out of memory error during PEFT LoRA fine tuning", + "input": "", + "output": "

Have you tried re-enabling gradient_checkpointing, or enabling it at all? According to https://huggingface.co/docs/transformers/v4.18.0/en/performance, even if everything else is set up fine, you can run into out-of-memory issues due to gradient calculation overhead. During gradient calculation, all activations from the forward pass are saved, and this can cause a huge memory consumption spike. To fix this issue, the Huggingface docs recommend enabling gradient_checkpointing in the TrainingArguments. I see that you have the line

\n
peft_model.config.gradient_checkpointing = True\n
\n

commented out. I haven't tried this, but maybe you should re-enable this. Otherwise try to follow the docs and just enable gradient_checkpointing in the TrainingArguments.

\n

I had the same problem when fine-tuning llama 2 with peft and LoRA with quantization. The model itself would fit easily onto one GPU (my setup is 2 GPUs with 24GB each). However, during fine-tuning the memory consumption spiked dramatically: a 4-bit quantized llama-2-13B would consume around 7-8 GB regularly and suddenly spike to above 24GB during fine-tuning. The approach mentioned in the Huggingface docs fixed the problem for me.

\n", + "system": "" + }, + { + "instruction": "What is the difference between register_forward_hook and register_module_forward_hook in PyTorch?", + "input": "", + "output": "

I was just trying to figure out the same question and found your question when Googling for it.

\n

From some digging:

\n\n

It appears the former needs to be set on a per-module basis, while the latter is a global hook you set once to run for every module.

\n
 test_fwd = nn.modules.module.register_module_forward_hook(lambda *args: fw_hook(1, *args))\n
\n

Looking at the blame for register_module_forward_hook shows this relevant issue with more details from 3 months ago.

\n

It sounds like the latter is the better choice for your case, in particular considering the comments from the latest commit, as it makes it compatible with context managers.

\n

For example you can use it to compute per-example activation norms on every layer by using a context manager like this

\n
@contextmanager\ndef module_hook(hook: Callable):\n    handle = nn.modules.module.register_module_forward_hook(hook, always_call=True)\n    yield\n    handle.remove()\n\ndef compute_norms(layer: nn.Module, inputs: Tuple[torch.Tensor], _output: torch.Tensor):\n    A = inputs[0].detach()\n    layer.norms2 = (A * A).sum(dim=1)\n\nwith module_hook(compute_norms):\n    outputs = model(data)\n\nprint("layer", "norms squared")\nfor name, layer in model.named_modules():\n    if not name:\n        continue\n    print(f"{name:20s}: {layer.norms2.cpu().numpy()}")\n\n
\n

\"enter

\n

Full code from colab

\n
from contextlib import contextmanager\nfrom typing import Callable, Tuple\n\nimport torch\nimport torch.nn as nn\n\nimport numpy as np\n\ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'\ndata = torch.tensor([[1., 0.], [1., 1.]]).to(device)\nbs = data.shape[0]  # batch size\n\ndef simple_model(d, num_layers):\n    """Creates simple linear neural network initialized to 2*identity"""\n    layers = []\n    for i in range(num_layers):\n        layer = nn.Linear(d, d, bias=False)\n        layer.weight.data.copy_(2 * torch.eye(d))\n        layers.append(layer)\n    return torch.nn.Sequential(*layers)\n\nnorms = [torch.zeros(bs).to(device)]\n\ndef compute_norms(layer: nn.Module, inputs: Tuple[torch.Tensor], _output: torch.Tensor):\n    assert len(inputs) == 1, "multi-input layer??"\n    A = inputs[0].detach()\n    layer.norms2 = (A * A).sum(dim=1)\n\nmodel = simple_model(2, 3).to(device)\n\n@contextmanager\ndef module_hook(hook: Callable):\n    handle = nn.modules.module.register_module_forward_hook(hook, always_call=True)\n    yield\n    handle.remove()\n\nwith module_hook(compute_norms):\n    outputs = model(data)\n\nnp.testing.assert_allclose(model[0].norms2.cpu(), [1, 2])\nnp.testing.assert_allclose(model[1].norms2.cpu(), [4, 8])\nnp.testing.assert_allclose(model[2].norms2.cpu(), [16, 32])\n\nprint(f"{'layer':20s}: {'norms squared'}")\nfor name, layer in model.named_modules():\n    if not name:\n        continue\n    print(f"{name:20s}: {layer.norms2.cpu().numpy()}")\n#     print(name, layer.norms2)\n\nassert not torch.nn.modules.module._global_forward_hooks, "Some hooks remain"\n
\n", + "system": "" + }, + { + "instruction": "PyTorch matrix multiplication does not respect slicing", + "input": "", + "output": "

Welcome to the brave world of floating point arithmetic! Float operations introduce rounding errors, and matrix multiplications accumulate them to significant values.\nhttps://pytorch.org/docs/stable/notes/numerical_accuracy.html\nIf you avoid imprecise rounding with

\n
vec = torch.floor (torch.rand(n, 20)*10)\na = torch.floor( torch.rand(30, 20)*10 )\n
\n

you'll get all Trues.

\n

A possible solution is to use torch.isclose.

\n
\n

DEL

\n", + "system": "" + }, + { + "instruction": "Why is pytorch slower on m1 pro 10 core vs linux CPU?", + "input": "", + "output": "

John Zavialov's answer goes over the general issues; I'll briefly list them here and then get into how to speed things up.

\n

1. Optimization Progress: PyTorch's adaptation to the Apple Silicon architecture is still undergoing refinement and is not as mature as the Linux setup.

\n

2. Architecture-Specific Tuning: PyTorch is tuned for specific architectures, which means it may not perform as well on every system.

\n

3. Instruction Set Variation: The instruction set architecture significantly impacts the execution efficiency of various operations, with ARM-based systems (M1 Pro) being distinct from x86_64 (Linux), which can lead to big differences in performance.

\n

To fix or speed this up, Apple has added the Metal Performance Shaders (MPS) backend. If you activate it for PyTorch on Mac, you should see a performance boost. You can find installation instructions in the link; below is a code example to test and activate it:

\n
\nimport torch\nif torch.backends.mps.is_available():\n    mps_device = torch.device("mps")\n    x = torch.ones(1, device=mps_device)\n    print (x)\nelse:\n    print ("MPS device not found.")\n\n
\n", + "system": "" + }, + { + "instruction": "Training custom dataset on mmsegmentation", + "input": "", + "output": "

I believe you also need to import it in this file and add it to __all__.

\n", + "system": "" + }, + { + "instruction": "How to resolve TypeError: dispatch_model() got an unexpected keyword argument 'offload_index'?", + "input": "", + "output": "

Please try updating your accelerate package, for example:

\n
pip install accelerate==0.18.0\n
\n", + "system": "" + }, + { + "instruction": "Is there any way to specify a typehinted Pytorch DataLoader?", + "input": "", + "output": "

It doesn't seem that this is currently supported or even possible.

\n

For more details, see:

\n\n

A partial solution is to declare the dataset as:

\n
class MyDataset(torch.utils.data.Dataset[MyCustomType]):\n    ...\n\ndataloader = Dataloader(MyDataset,...)\n
\n

Although this will not allow you to infer types when doing batch = next(iter(dataloader)), at least you'll get some information e.g. by hovering the dataloader variable if you're using VSCode or some other IDE.

\n", + "system": "" + }, + { + "instruction": "Docker with tensorflow, pytorch, gdal, and jupyter notebook", + "input": "", + "output": "

I built an image for an environment integrating cuda, conda, jupyter, ssh, pytorch and tensorflow. You can refer to it on GitHub and change it accordingly.

\n

I created a new root user called sjtu, which avoids many problems. Also, conda makes it easy to create new environments flexibly.

\n", + "system": "" + }, + { + "instruction": "Modifying Pytorch Pretrained Model's Parameter in `forward()` Makes Training Slowing", + "input": "", + "output": "
\n

ptrblck: Based on your code snippet you are detaching A, which is the fixed\ntensor, while you are adding B to params potentially including its\nentire computation graph. Could you double check this, please?

\n
\n

Well, let's investigate! This code is main suspect:

\n
def forward(self, x, A):\n    for i, params in enumerate(self.pre_model.parameters()):\n        params.copy_(A[i].detach().clone()) # detached because I don't need to train A\n        params.add_(self.B[i])      # I need to train B so no detach\n        params.retain_grad()\n\n    x = self.pre_model(x)\n    return x\n
\n

We will vary the number of iterations in your <code>for i in tqdm(range(300))</code> loop while watching the graph size.

\n

Check model body with torchinfo.summary

\n
!pip install torchinfo\n\nfrom torchinfo import summary\n\n...\ndef forward(self, x, A=None):\nif A is None:\n    A=self.B\n...\n\nsummary( modelwithAB, input_size=image.shape, depth=10)\n
\n

for i in tqdm(range(2)):

\n
===============================================================================================\nLayer (type:depth-idx)                        Output Shape              Param #\n===============================================================================================\nNet                                           [2, 1000]                 11,689,512\n\u251c\u2500ResNet: 1-1                                 [2, 1000]                 --\n\u2502    \u2514\u2500Conv2d: 2-1                            [2, 64, 112, 112]         9,408\n\u2502    \u2514\u2500BatchNorm2d: 2-2                       [2, 64, 112, 112]         128\n\u2502    \u2514\u2500ReLU: 2-3                              [2, 64, 112, 112]         --    \n...\n===============================================================================================\nTotal params: 23,379,024\nTrainable params: 23,379,024\nNon-trainable params: 0\nTotal mult-adds (G): 3.63\n===============================================================================================\nInput size (MB): 1.20\nForward/backward pass size (MB): 79.49\nParams size (MB): 46.76\nEstimated Total Size (MB): 127.46\n===============================================================================================\n
\n

for i in tqdm(range(30)):

\n
===============================================================================================\nTotal params: 23,379,024\nTrainable params: 23,379,024\nNon-trainable params: 0\nTotal mult-adds (G): 3.63\n===============================================================================================\nInput size (MB): 1.20\nForward/backward pass size (MB): 79.49\nParams size (MB): 46.76\nEstimated Total Size (MB): 127.46\n===============================================================================================\n
\n

Well, everything looks all right here; we need to go deeper. Now let's check the gradient computation graph: we will count all backward nodes of each kind.

\n
xxx = dict()\ndef add_nodes(var, consumer =None, grad =None ):\n    if hasattr(var, 'next_functions'):\n        try:\n            grads = var(grad)\n            grads = grads if isinstance(grads, tuple) else [grads]\n            if ( not hasattr(grads, '__iter__') ):\n                grads = [grads]\n        except:\n            grads = map (  (lambda x:None), var.next_functions )\n        for i, (u, grad) in enumerate(zip(var.next_functions, grads)):\n            #print(i,type( var ).__name__)\n            xxx[type( var ).__name__] = xxx.get(type( var ).__name__,0) +1\n            for uu in u:\n                add_nodes(uu, var, grad)\n\nadd_nodes( modelwithAB(image, A).grad_fn )\nprint(xxx)\n\n1:\n{'AddmmBackward0': 3, 'AddBackward0': 10214, 'CopyBackwards': 9704, 'ReshapeAliasBackward0': 1, 'MeanBackward1': 1, 'ReluBackward0': 766, 'CudnnBatchNormBackward0': 2424, 'ConvolutionBackward0': 2424, 'MaxPool2DWithIndicesBackward0': 256, 'TBackward0': 1}\n\n2:\n{'AddmmBackward0': 3, 'AddBackward0': 15066, 'CopyBackwards': 14556, 'ReshapeAliasBackward0': 1, 'MeanBackward1': 1, 'ReluBackward0': 766, 'CudnnBatchNormBackward0': 2424, 'ConvolutionBackward0': 2424, 'MaxPool2DWithIndicesBackward0': 256, 'TBackward0': 1}\n\n3:\n{'AddmmBackward0': 3, 'AddBackward0': 19918, 'CopyBackwards': 19408, 'ReshapeAliasBackward0': 1, 'MeanBackward1': 1, 'ReluBackward0': 766, 'CudnnBatchNormBackward0': 2424, 'ConvolutionBackward0': 2424, 'MaxPool2DWithIndicesBackward0': 256, 'TBackward0': 1}\n\n30:\n{'AddmmBackward0': 3, 'AddBackward0': 150922, 'CopyBackwards': 150412, 'ReshapeAliasBackward0': 1, 'MeanBackward1': 1, 'ReluBackward0': 766, 'CudnnBatchNormBackward0': 2424, 'ConvolutionBackward0': 2424, 'MaxPool2DWithIndicesBackward0': 256, 'TBackward0': 1}\n
\n

Bingo. The count of <code>AddBackward0</code> and <code>CopyBackwards</code> nodes rises every training step. So, whatever you expected to achieve, you can't do it with such parameter manipulation. I can't suggest a fix, because like ptrblck I am not sure what you are up to. Why aren't you happy with the standard approach - train the head with a fixed, pre-trained backbone?

\n", + "system": "" + }, + { + "instruction": "overlaying the ground truth mask on an image", + "input": "", + "output": "

I need to do similar things pretty often. In my favorite StackOverflow fashion, here is a script that you can copy and paste. I hope the code itself is self-explanatory. There are a few things that you can tune and try (e.g., color maps, overlay styles). It uses multiprocessing.Pool for faster batch-processing, resizes the mask to match the shape of the image, assumes the mask is in .png format, and depends on the file structure that you posted.

\n
import os\nfrom os import path\nimport cv2\nimport numpy as np\n\nfrom argparse import ArgumentParser\nfrom multiprocessing import Pool\n\n\ndef create_overlay(image, mask):\n    """\n    image: H*W*3 numpy array\n    mask: H*W numpy array\n    If dimensions do not match, the mask is upsampled to match that of the image\n\n    Returns a H*W*3 numpy array\n    """\n    h, w = image.shape[:2]\n    mask = cv2.resize(mask, dsize=(w,h), interpolation=cv2.INTER_CUBIC)\n\n    # color options: https://docs.opencv.org/4.x/d3/d50/group__imgproc__colormap.html\n    mask_color = cv2.applyColorMap(mask, cv2.COLORMAP_HOT).astype(np.float32)\n    mask = mask[:, :, None] # create trailing dimension for broadcasting\n    mask = mask.astype(np.float32)/255\n\n    # different other options that you can use to merge image/mask\n    overlay = (image*(1-mask)+mask_color*mask).astype(np.uint8)\n    # overlay = (image*0.5 + mask_color*0.5).astype(np.uint8)\n    # overlay = (image + mask_color).clip(0,255).astype(np.uint8)\n\n    return overlay\n\ndef process_video(video_name):\n    """\n    Processing frames in a single video\n    """\n    vid_image_path = path.join(image_path, video_name)\n    vid_mask_path = path.join(mask_path, video_name)\n    vid_output_path = path.join(output_path, video_name)\n    os.makedirs(vid_output_path, exist_ok=True)\n\n    frames = sorted(os.listdir(vid_image_path))\n    for f in frames:\n        image = cv2.imread(path.join(vid_image_path, f))\n        mask = cv2.imread(path.join(vid_mask_path, f.replace('.jpg','.png')), cv2.IMREAD_GRAYSCALE)\n        overlay = create_overlay(image, mask)\n        cv2.imwrite(path.join(vid_output_path, f), overlay)\n\n\nparser = ArgumentParser()\nparser.add_argument('--image_path')\nparser.add_argument('--mask_path')\nparser.add_argument('--output_path')\nargs = parser.parse_args()\n\nimage_path = args.image_path\nmask_path = args.mask_path\noutput_path = args.output_path\n\nif __name__ == '__main__':\n    videos = 
sorted(\n        list(set(os.listdir(image_path)).intersection(\n                set(os.listdir(mask_path))))\n    )\n\n    print(f'Processing {len(videos)} videos.')\n\n    pool = Pool()\n    pool.map(process_video, videos)\n\n    print('Done.')\n\n
\n

Output:\n\"output

\n

EDIT: Made it work on Windows; changed pool.apply to pool.map.

\n", + "system": "" + }, + { + "instruction": "Expanding a non-singleton dimension in PyTorch, but without copying data in memory?", + "input": "", + "output": "

I don't think this is possible, and here is a minimal example to illustrate my point.

\n

Consider a torch.Tensor [1, 2, 3], which has size (3,). If we want to expand it without performing a copy, we would create a new view of the tensor. Imagine, for example, that we want to create a view that contains twice the values of the original tensor, i.e. [1, 2, 3, 1, 2, 3], which would have size (2*3,). But it is not possible to define such a view just by playing with the strides; here is why: to step from 1 to 2, or from 2 to 3, we would need the stride value to be 1; but to step from 3 back to 1, we would need the stride value to be -2, and the system of strides as implemented cannot have two different values for a given axis.

\n

I am not 100% sure that it is not possible. Maybe there exists a very clever trick using the <code>storage_offset</code> parameter of the <code>torch.as_strided()</code> function, or something else. Perhaps this feature will also be implemented in a future version; for example, if you try to set a negative stride value, you get the error

\n
>>> torch.as_strided(torch.tensor([1, 2, 3]), size=(1,), stride=(-1,))\nTraceback (most recent call last):\n  File "<stdin>", line 1, in <module>\nRuntimeError: as_strided: Negative strides are not supported at the moment, got strides: [-1]\n
\n

indicating that this functionality could change in the future (here I used pytorch version 1.13.0).

\n

One could argue that you could first expand without copying in a new dimension using torch.Tensor.expand(), and then flatten() the result, but this does not work, let me explain why. In the documentation of expand(), it is explained that this function returns a new view of the tensor (so this does not do any copy), and in the documentation of flatten(), it is explained that this function will try to return a view of the flattened tensor, and if not possible it will return a copy. Then let's try this method, and check the memory size of the tensor at each step using tensor.storage().nbytes():

\n
>>> a = torch.tensor([1, 2, 3])\n>>> print(a)\ntensor([1, 2, 3])\n>>> print(a.storage().nbytes())\n24\n>>> b = a.expand([2, 3])\n>>> print(b)\ntensor([[1, 2, 3],\n        [1, 2, 3]])\n>>> print(b.storage().nbytes())\n24\n>>> c = b.flatten()\n>>> print(c)\ntensor([1, 2, 3, 1, 2, 3])\n>>> print(c.storage().nbytes())\n48\n
\n

As we can see, <code>flatten()</code> does not seem capable of returning a view of the flattened tensor, since <code>c</code> takes twice the memory compared to <code>a</code>. And if the PyTorch implementation of <code>flatten()</code> is not capable of doing that, it probably means that it is indeed not possible.

\n", + "system": "" + }, + { + "instruction": "In PyTorch, how do I update a neural network via the average gradient from a list of losses?", + "input": "", + "output": "

The gradient is a linear operation, so the gradient of the average is the same as the average of the gradients.

\n

Take some example data

\n
import torch\na = torch.randn(1, 4, requires_grad=True);\nb = torch.randn(5, 4);\n
\n

You could store all the losses and compute the mean as you are doing,

\n
a.grad = None\nx = (a * b).mean(axis=1)\nx.mean().backward() # gradient of the mean\nprint(a.grad)\n
\n

Or, at every iteration, compute the backpropagation to get the contribution of that loss to the gradient.

\n
a.grad = None\nfor bi in b:\n    (a * bi / len(b)).mean().backward()\nprint(a.grad)\n
\n

Performance

\n

I don't know the internal details of the pytorch backward implementation, but I can tell that

\n

(1) the graph is destroyed by default after the backward pass, unless you pass retain_graph=True or create_graph=True to backward().

\n

(2) The gradient is not kept except for leaf tensors, unless you specify retain_grad;

\n

(3) if you evaluate a model twice using different inputs, you can perform the backward pass on the individual variables, which means that they have separate graphs. This can be verified with the following code.

\n
a.grad = None\n# compute all the variables in advance\nr = [ (a * b / len(b)).mean() for bi in b ]\nfor ri in r:\n    # This depends on the graph of r[i] but the graph or r[i-1]\n    # was already destroyed, it means that r[i] graph is independent\n    # of r[i-1] graph, hence they require separate memory.\n    ri.backward()  # this will remove the graph of ri\nprint(a.grad)\n
\n

So if you update the gradient after each episode, it will accumulate in the leaf nodes; that's all the information you need for the next optimization step, so you can discard that loss, freeing up resources for further computations. I would expect a memory usage reduction, and potentially even faster execution if the memory allocator can efficiently reuse the just-deallocated pages for the next allocation.

\n", + "system": "" + }, + { + "instruction": "How to use OneCycleLR?", + "input": "", + "output": "

The documentation says that you should give total_steps or both epochs & steps_per_epoch as arguments. The simple relation between them is total_steps = epochs * steps_per_epoch.

\n

And total_steps is the total number of steps in the cycle. OneCycle in the name means there is only one cycle through the training.

\n

<code>max_lr</code> is the maximum learning rate of <code>OneCycleLR</code>. To be exact, the learning rate will increase from <code>max_lr / div_factor</code> to <code>max_lr</code> in the first <code>pct_start * total_steps</code> steps, and then decrease smoothly to <code>max_lr / final_div_factor</code>.

\n
\n

Edit: For those who are not familiar with lr_scheduler, you can plot the learning rate curve, e.g.

\n
EPOCHS = 10\nBATCHES = 10\nsteps = []\nlrs = []\nmodel = ... # Your model instance\noptimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9) # Wrapped optimizer\nscheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer,max_lr=0.9,total_steps=EPOCHS * BATCHES)\nfor epoch in range(EPOCHS):\n    for batch in range(BATCHES):\n        scheduler.step()\n        lrs.append(scheduler.get_last_lr()[0])\n        steps.append(epoch * BATCHES + batch)\n\nplt.figure()\nplt.legend()\nplt.plot(steps, lrs, label='OneCycle')\nplt.show()\n
\n", + "system": "" + }, + { + "instruction": "Are the pre-trained layers of the Huggingface BERT models frozen?", + "input": "", + "output": "

They are not frozen. All parameters are trainable by default. You can also check that with:

\n
for name, param in model.named_parameters():\n    print(name, param.requires_grad)\n
\n

Output:

\n
bert.embeddings.word_embeddings.weight True\nbert.embeddings.position_embeddings.weight True\nbert.embeddings.token_type_embeddings.weight True\nbert.embeddings.LayerNorm.weight True\nbert.embeddings.LayerNorm.bias True\nbert.encoder.layer.0.attention.self.query.weight True\nbert.encoder.layer.0.attention.self.query.bias True\nbert.encoder.layer.0.attention.self.key.weight True\nbert.encoder.layer.0.attention.self.key.bias True\nbert.encoder.layer.0.attention.self.value.weight True\nbert.encoder.layer.0.attention.self.value.bias True\nbert.encoder.layer.0.attention.output.dense.weight True\nbert.encoder.layer.0.attention.output.dense.bias True\nbert.encoder.layer.0.attention.output.LayerNorm.weight True\nbert.encoder.layer.0.attention.output.LayerNorm.bias True\nbert.encoder.layer.0.intermediate.dense.weight True\nbert.encoder.layer.0.intermediate.dense.bias True\nbert.encoder.layer.0.output.dense.weight True\nbert.encoder.layer.0.output.dense.bias True\nbert.encoder.layer.0.output.LayerNorm.weight True\nbert.encoder.layer.0.output.LayerNorm.bias True\nbert.encoder.layer.1.attention.self.query.weight True\nbert.encoder.layer.1.attention.self.query.bias True\nbert.encoder.layer.1.attention.self.key.weight True\nbert.encoder.layer.1.attention.self.key.bias True\nbert.encoder.layer.1.attention.self.value.weight True\nbert.encoder.layer.1.attention.self.value.bias True\nbert.encoder.layer.1.attention.output.dense.weight True\nbert.encoder.layer.1.attention.output.dense.bias True\nbert.encoder.layer.1.attention.output.LayerNorm.weight True\nbert.encoder.layer.1.attention.output.LayerNorm.bias True\nbert.encoder.layer.1.intermediate.dense.weight True\nbert.encoder.layer.1.intermediate.dense.bias True\nbert.encoder.layer.1.output.dense.weight True\nbert.encoder.layer.1.output.dense.bias True\nbert.encoder.layer.1.output.LayerNorm.weight True\nbert.encoder.layer.1.output.LayerNorm.bias True\nbert.encoder.layer.2.attention.self.query.weight 
True\nbert.encoder.layer.2.attention.self.query.bias True\nbert.encoder.layer.2.attention.self.key.weight True\nbert.encoder.layer.2.attention.self.key.bias True\nbert.encoder.layer.2.attention.self.value.weight True\nbert.encoder.layer.2.attention.self.value.bias True\nbert.encoder.layer.2.attention.output.dense.weight True\nbert.encoder.layer.2.attention.output.dense.bias True\nbert.encoder.layer.2.attention.output.LayerNorm.weight True\nbert.encoder.layer.2.attention.output.LayerNorm.bias True\nbert.encoder.layer.2.intermediate.dense.weight True\nbert.encoder.layer.2.intermediate.dense.bias True\nbert.encoder.layer.2.output.dense.weight True\nbert.encoder.layer.2.output.dense.bias True\nbert.encoder.layer.2.output.LayerNorm.weight True\nbert.encoder.layer.2.output.LayerNorm.bias True\nbert.encoder.layer.3.attention.self.query.weight True\nbert.encoder.layer.3.attention.self.query.bias True\nbert.encoder.layer.3.attention.self.key.weight True\nbert.encoder.layer.3.attention.self.key.bias True\nbert.encoder.layer.3.attention.self.value.weight True\nbert.encoder.layer.3.attention.self.value.bias True\nbert.encoder.layer.3.attention.output.dense.weight True\nbert.encoder.layer.3.attention.output.dense.bias True\nbert.encoder.layer.3.attention.output.LayerNorm.weight True\nbert.encoder.layer.3.attention.output.LayerNorm.bias True\nbert.encoder.layer.3.intermediate.dense.weight True\nbert.encoder.layer.3.intermediate.dense.bias True\nbert.encoder.layer.3.output.dense.weight True\nbert.encoder.layer.3.output.dense.bias True\nbert.encoder.layer.3.output.LayerNorm.weight True\nbert.encoder.layer.3.output.LayerNorm.bias True\nbert.encoder.layer.4.attention.self.query.weight True\nbert.encoder.layer.4.attention.self.query.bias True\nbert.encoder.layer.4.attention.self.key.weight True\nbert.encoder.layer.4.attention.self.key.bias True\nbert.encoder.layer.4.attention.self.value.weight True\nbert.encoder.layer.4.attention.self.value.bias 
True\nbert.encoder.layer.4.attention.output.dense.weight True\nbert.encoder.layer.4.attention.output.dense.bias True\nbert.encoder.layer.4.attention.output.LayerNorm.weight True\nbert.encoder.layer.4.attention.output.LayerNorm.bias True\nbert.encoder.layer.4.intermediate.dense.weight True\nbert.encoder.layer.4.intermediate.dense.bias True\nbert.encoder.layer.4.output.dense.weight True\nbert.encoder.layer.4.output.dense.bias True\nbert.encoder.layer.4.output.LayerNorm.weight True\nbert.encoder.layer.4.output.LayerNorm.bias True\nbert.encoder.layer.5.attention.self.query.weight True\nbert.encoder.layer.5.attention.self.query.bias True\nbert.encoder.layer.5.attention.self.key.weight True\nbert.encoder.layer.5.attention.self.key.bias True\nbert.encoder.layer.5.attention.self.value.weight True\nbert.encoder.layer.5.attention.self.value.bias True\nbert.encoder.layer.5.attention.output.dense.weight True\nbert.encoder.layer.5.attention.output.dense.bias True\nbert.encoder.layer.5.attention.output.LayerNorm.weight True\nbert.encoder.layer.5.attention.output.LayerNorm.bias True\nbert.encoder.layer.5.intermediate.dense.weight True\nbert.encoder.layer.5.intermediate.dense.bias True\nbert.encoder.layer.5.output.dense.weight True\nbert.encoder.layer.5.output.dense.bias True\nbert.encoder.layer.5.output.LayerNorm.weight True\nbert.encoder.layer.5.output.LayerNorm.bias True\nbert.encoder.layer.6.attention.self.query.weight True\nbert.encoder.layer.6.attention.self.query.bias True\nbert.encoder.layer.6.attention.self.key.weight True\nbert.encoder.layer.6.attention.self.key.bias True\nbert.encoder.layer.6.attention.self.value.weight True\nbert.encoder.layer.6.attention.self.value.bias True\nbert.encoder.layer.6.attention.output.dense.weight True\nbert.encoder.layer.6.attention.output.dense.bias True\nbert.encoder.layer.6.attention.output.LayerNorm.weight True\nbert.encoder.layer.6.attention.output.LayerNorm.bias True\nbert.encoder.layer.6.intermediate.dense.weight 
True\nbert.encoder.layer.6.intermediate.dense.bias True\nbert.encoder.layer.6.output.dense.weight True\nbert.encoder.layer.6.output.dense.bias True\nbert.encoder.layer.6.output.LayerNorm.weight True\nbert.encoder.layer.6.output.LayerNorm.bias True\nbert.encoder.layer.7.attention.self.query.weight True\nbert.encoder.layer.7.attention.self.query.bias True\nbert.encoder.layer.7.attention.self.key.weight True\nbert.encoder.layer.7.attention.self.key.bias True\nbert.encoder.layer.7.attention.self.value.weight True\nbert.encoder.layer.7.attention.self.value.bias True\nbert.encoder.layer.7.attention.output.dense.weight True\nbert.encoder.layer.7.attention.output.dense.bias True\nbert.encoder.layer.7.attention.output.LayerNorm.weight True\nbert.encoder.layer.7.attention.output.LayerNorm.bias True\nbert.encoder.layer.7.intermediate.dense.weight True\nbert.encoder.layer.7.intermediate.dense.bias True\nbert.encoder.layer.7.output.dense.weight True\nbert.encoder.layer.7.output.dense.bias True\nbert.encoder.layer.7.output.LayerNorm.weight True\nbert.encoder.layer.7.output.LayerNorm.bias True\nbert.encoder.layer.8.attention.self.query.weight True\nbert.encoder.layer.8.attention.self.query.bias True\nbert.encoder.layer.8.attention.self.key.weight True\nbert.encoder.layer.8.attention.self.key.bias True\nbert.encoder.layer.8.attention.self.value.weight True\nbert.encoder.layer.8.attention.self.value.bias True\nbert.encoder.layer.8.attention.output.dense.weight True\nbert.encoder.layer.8.attention.output.dense.bias True\nbert.encoder.layer.8.attention.output.LayerNorm.weight True\nbert.encoder.layer.8.attention.output.LayerNorm.bias True\nbert.encoder.layer.8.intermediate.dense.weight True\nbert.encoder.layer.8.intermediate.dense.bias True\nbert.encoder.layer.8.output.dense.weight True\nbert.encoder.layer.8.output.dense.bias True\nbert.encoder.layer.8.output.LayerNorm.weight True\nbert.encoder.layer.8.output.LayerNorm.bias True\nbert.encoder.layer.9.attention.self.query.weight 
True\nbert.encoder.layer.9.attention.self.query.bias True\nbert.encoder.layer.9.attention.self.key.weight True\nbert.encoder.layer.9.attention.self.key.bias True\nbert.encoder.layer.9.attention.self.value.weight True\nbert.encoder.layer.9.attention.self.value.bias True\nbert.encoder.layer.9.attention.output.dense.weight True\nbert.encoder.layer.9.attention.output.dense.bias True\nbert.encoder.layer.9.attention.output.LayerNorm.weight True\nbert.encoder.layer.9.attention.output.LayerNorm.bias True\nbert.encoder.layer.9.intermediate.dense.weight True\nbert.encoder.layer.9.intermediate.dense.bias True\nbert.encoder.layer.9.output.dense.weight True\nbert.encoder.layer.9.output.dense.bias True\nbert.encoder.layer.9.output.LayerNorm.weight True\nbert.encoder.layer.9.output.LayerNorm.bias True\nbert.encoder.layer.10.attention.self.query.weight True\nbert.encoder.layer.10.attention.self.query.bias True\nbert.encoder.layer.10.attention.self.key.weight True\nbert.encoder.layer.10.attention.self.key.bias True\nbert.encoder.layer.10.attention.self.value.weight True\nbert.encoder.layer.10.attention.self.value.bias True\nbert.encoder.layer.10.attention.output.dense.weight True\nbert.encoder.layer.10.attention.output.dense.bias True\nbert.encoder.layer.10.attention.output.LayerNorm.weight True\nbert.encoder.layer.10.attention.output.LayerNorm.bias True\nbert.encoder.layer.10.intermediate.dense.weight True\nbert.encoder.layer.10.intermediate.dense.bias True\nbert.encoder.layer.10.output.dense.weight True\nbert.encoder.layer.10.output.dense.bias True\nbert.encoder.layer.10.output.LayerNorm.weight True\nbert.encoder.layer.10.output.LayerNorm.bias True\nbert.encoder.layer.11.attention.self.query.weight True\nbert.encoder.layer.11.attention.self.query.bias True\nbert.encoder.layer.11.attention.self.key.weight True\nbert.encoder.layer.11.attention.self.key.bias True\nbert.encoder.layer.11.attention.self.value.weight True\nbert.encoder.layer.11.attention.self.value.bias 
True\nbert.encoder.layer.11.attention.output.dense.weight True\nbert.encoder.layer.11.attention.output.dense.bias True\nbert.encoder.layer.11.attention.output.LayerNorm.weight True\nbert.encoder.layer.11.attention.output.LayerNorm.bias True\nbert.encoder.layer.11.intermediate.dense.weight True\nbert.encoder.layer.11.intermediate.dense.bias True\nbert.encoder.layer.11.output.dense.weight True\nbert.encoder.layer.11.output.dense.bias True\nbert.encoder.layer.11.output.LayerNorm.weight True\nbert.encoder.layer.11.output.LayerNorm.bias True\nbert.pooler.dense.weight True\nbert.pooler.dense.bias True\nclassifier.weight True\nclassifier.bias True\n
\n", + "system": "" + }, + { + "instruction": "Pytorch: Assign values from one mask to another, masked by itself", + "input": "", + "output": "

There are a few solutions; I will also give their speed as measured by <code>timeit</code> (10k repetitions, on a 2021 MacBook Pro).

\n

The simplest solution, taking 0.260s:

\n
active[active.clone()] = ~terminated\n
\n

We can use the <code>masked_scatter_</code> in-place operation for about a 2x speedup (0.136s):

\n
active.masked_scatter_(\n        active,\n        ~terminated,\n    )\n
\n

Out of place operation, taking 0.161s, would be:

\n
active = torch.masked_scatter(\n        active,\n        active,\n        ~terminated,\n    )\n
\n", + "system": "" + }, + { + "instruction": "Pytorch Python Distributed Multiprocessing: Gather/Concatenate tensor arrays of different lengths/sizes", + "input": "", + "output": "

As it is not directly possible to gather using built-in methods, we need to write a custom function with the following steps:

\n
    \n
  1. Use dist.all_gather to get sizes of all arrays.
  2. \n
  3. Find the max size.
  4. \n
  5. Pad local array to max size using zeros/constants.
  6. \n
  7. Use dist.all_gather to get all padded arrays.
  8. \n
  9. Unpad the added zeros/constants using sizes found in step 1.
  10. \n
\n

The below function does this:

\n
def all_gather(q, ws, device):\n    """\n    Gathers tensor arrays of different lengths across multiple gpus\n    \n    Parameters\n    ----------\n        q : tensor array\n        ws : world size\n        device : current gpu device\n        \n    Returns\n    -------\n        all_q : list of gathered tensor arrays from all the gpus\n\n    """\n    local_size = torch.tensor(q.size(), device=device)\n    all_sizes = [torch.zeros_like(local_size) for _ in range(ws)]\n    dist.all_gather(all_sizes, local_size)\n    max_size = max(all_sizes)\n\n    size_diff = max_size.item() - local_size.item()\n    if size_diff:\n        padding = torch.zeros(size_diff, device=device, dtype=q.dtype)\n        q = torch.cat((q, padding))\n\n    all_qs_padded = [torch.zeros_like(q) for _ in range(ws)]\n    dist.all_gather(all_qs_padded, q)\n    all_qs = []\n    for q, size in zip(all_qs_padded, all_sizes):\n        all_qs.append(q[:size])\n    return all_qs\n
\n

Once we are able to do the above, we can then easily use <code>torch.cat</code> to further concatenate into a single array if needed:

\n
torch.cat(all_q)\n[torch.tensor([1.5, 2.3, 5.3])\n
\n

Adapted from: github

\n", + "system": "" + }, + { + "instruction": "Simple MultiGPU during inference with huggingface", + "input": "", + "output": "

I found it's not possible with the pipelines, so:

\n

There are two ways:

\n\n", + "system": "" + }, + { + "instruction": "An error when calling tensor.backward() in pytorch may caused by grad_fn=<CopySlices>", + "input": "", + "output": "

To reduce memory usage, during the .backward() call, all the intermediary results are deleted when they are not needed anymore. Hence if you try to call .backward() again, the intermediary results don\u2019t exist and the backward pass cannot be performed (and you get the error you see).\nYou can call .backward(retain_graph=True) to make a backward pass that will not delete intermediary results, and so you will be able to call .backward() again. All but the last call to backward should have the retain_graph=True option.

\n
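<p>
Putting the snippets below together into a minimal, runnable sketch (the definitions of a and b are assumptions, as they are not shown in the question):
</p>
\n
```python
import torch

# assumed setup: two scalar leaf tensors (not shown in the original snippets)
a = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(3.0, requires_grad=True)

c = torch.empty(2)
c[0] = a * 2  # in-place slice assignment produces grad_fn=<CopySlices>
c[1] = b * 2

# all but the last backward call need retain_graph=True
c[0].backward(retain_graph=True)
c[1].backward()

print(a.grad)  # tensor(2.)
print(b.grad)  # tensor(2.)
```
\n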
c[0] = a*2\n\n#c[0]:tensor(4., grad_fn=<SelectBackward>)\n\n#c:tensor([4.0000e+00, 3.1720e+00, 1.0469e-38, 9.2755e-39], grad_fn=<CopySlices>)\n\nc[0].backward(retain_graph=True)\n
\n
\nc[1] = b*2\n\nc[1].backward(retain_graph=True)\n```\n
\n", + "system": "" + }, + { + "instruction": "Switch function/class implementation between Numpy & PyTorch:?", + "input": "", + "output": "
    \n
  1. What you are looking for is exactly array-api-compat.
  2. \n
\n
import array_api_compat\n\ndef foo(x, y):\n    xp = array_api_compat.array_namespace(x, y)\n    return xp.add(x, y)\n
\n
import numpy as np\nimport torch\n\nprint(foo(torch.tensor(1), torch.tensor(2)))\nprint(foo(np.array(1), np.array(2)))\n
\n
$ python test.py\ntensor(3)\n3\n
\n
    \n
  1. You can also use ivy (a library primarily intended to wrap various arrays in their own ivy.Array), but you must be very careful as it is not FOSS.
  2. \n
\n
import ivy\nfrom ivy import Array, NativeArray\n\nivy.set_array_mode(False) # Without this, the function will return ivy.array\n\ndef foo(x: Array | NativeArray, y: Array | NativeArray) -> Array:\n    return ivy.add(x, y)\n
\n
import numpy as np\nimport torch\n\nprint(foo(torch.tensor(1), torch.tensor(2)))\nprint(foo(np.array(1), np.array(2)))\n
\n
$ python test2.py\ntensor(3)\n3\n
\n", + "system": "" + }, + { + "instruction": "Runtime Error: mat1 and mat2 shapes cannot be multiplied in pytorch", + "input": "", + "output": "

The size mismatch error is shown as 32x119072 and 800x300. The first shape refers to the input tensor, while the second is the parameter of the layer. If you look into your model definition you will see that it matches the first fully connected layer, the one following the flatten. Indeed, nn.Linear(800, 300) was expecting 800-feature tensors, but got 119072-feature tensors instead.

\n

You need to modify this linear layer to match the flattened spatial shape of the incoming tensor. But notice how this value depends on the image that is fed to the CNN: ultimately it dictates the size of the tensor fed to the classifier. The general way to solve this is to use adaptive layers, such as <code>nn.AdaptiveMaxPool2d</code>, which always produce the same output shape regardless of the input spatial size.

\n", + "system": "" + }, + { + "instruction": "Why doesn't torch pruning actually remove filters or weights?", + "input": "", + "output": "
TL;DR: PyTorch's pruning functions just apply a weight mask, that's all they do. There are no memory savings associated with using <code>torch.nn.utils.prune</code>.
\n
\n

As the documentation page for torch.nn.utils.prune.remove states:

\n
\n

Removes the pruning reparameterization from a module and the pruning method from the forward hook.

\n
\n

In effect, this means it will remove the mask, which <code>prune.l1_unstructured</code> added, from the parameter. As a side effect, removing the pruning leaves zeros at the previously masked positions, but these values will no longer stay at 0. In the end, using PyTorch pruning will only take more memory compared to not using it. So this is not actually the functionality you are looking for.

\n

You can read more on this comment.

\n
\n

Here is an example:

\n
>>> module = nn.Linear(10,3)\n>>> prune.l1_unstructured(module, name='weight', amount=0.3)\n
\n

The weight parameters are masked:

\n
>>> module.weight\ntensor([[-0.0000,  0.0000, -0.1397, -0.0942,  0.0000,  0.0000,  0.0000, -0.1452,\n          0.0401,  0.1098],\n        [ 0.2909, -0.0000,  0.2871,  0.1725,  0.0000,  0.0587,  0.0795, -0.1253,\n          0.0764, -0.2569],\n        [ 0.0000, -0.3054, -0.2722,  0.2414,  0.1737, -0.0000, -0.2825,  0.0685,\n          0.1616,  0.1095]], grad_fn=<MulBackward0>)\n
\n

Here is the mask:

\n
>>> module.weight_mask\ntensor([[0., 0., 1., 1., 0., 0., 0., 1., 1., 1.],\n        [1., 0., 1., 1., 0., 1., 1., 1., 1., 1.],\n        [0., 1., 1., 1., 1., 0., 1., 1., 1., 1.]])\n
\n

Notice that when applying prune.remove, the pruning is removed. And, the masked values remain at zero but are "unfrozen":

\n
>>> prune.remove(module, 'weight')\n\n>>> module.weight\ntensor([[-0.0000,  0.0000, -0.1397, -0.0942,  0.0000,  0.0000,  0.0000, -0.1452,\n          0.0401,  0.1098],\n        [ 0.2909, -0.0000,  0.2871,  0.1725,  0.0000,  0.0587,  0.0795, -0.1253,\n          0.0764, -0.2569],\n        [ 0.0000, -0.3054, -0.2722,  0.2414,  0.1737, -0.0000, -0.2825,  0.0685,\n          0.1616,  0.1095]], grad_fn=<MulBackward0>)\n
\n

And the mask is gone:

\n
>>> hasattr(module, 'weight_mask')\nFalse\n
\n", + "system": "" + }, + { + "instruction": "Are anchor box sizes in torchvision's AnchorGenerator with respect to the input image, feature map, or something else?", + "input": "", + "output": "
\n

Is the "sizes" argument to AnchorGenerator with respect to the\noriginal image size, or with respect to the feature map being output\nfrom the backbone?

\n
\n

The <code>sizes</code> argument is the size, in pixels, of each anchor box on the input image; that is, it is relative to the original input image, not to the feature map. If you are interested in detecting objects that are 32x32 pixels, you should use

\n
anchor_generator = AnchorGenerator(sizes=((32,),),\n                                   aspect_ratios=((1.0,),))\n
\n", + "system": "" + }, + { + "instruction": "how is stacked rnn (num layers > 1) implemented on pytorch?", + "input": "", + "output": "
\n

Does this mean that the output of the final cell of the first layer of the GRU is fed as input to the next layer? Or does it mean the outputs of each cell (at each timestep) is fed as an input to the cell at the same timestep of the next layer?

\n
\n

The latter. Each time step's output from the first layer is used as input for the same time step of the second layer.

\n
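<p>
This can be checked numerically: a 2-layer <code>nn.GRU</code> matches feeding the full output sequence of one single-layer GRU into a second one (a sketch with made-up sizes; the weight copying is only for the demonstration):
</p>
\n
```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(1, 4, 3)  # (batch, seq_len, input_size); made-up sizes

stacked = nn.GRU(3, 5, num_layers=2, batch_first=True)

# two single-layer GRUs holding the stacked model's per-layer weights
layer0 = nn.GRU(3, 5, batch_first=True)
layer1 = nn.GRU(5, 5, batch_first=True)
with torch.no_grad():
    for name in ("weight_ih_l0", "weight_hh_l0", "bias_ih_l0", "bias_hh_l0"):
        getattr(layer0, name).copy_(getattr(stacked, name))
        getattr(layer1, name).copy_(getattr(stacked, name.replace("l0", "l1")))

out_stacked, _ = stacked(x)
out0, _ = layer0(x)           # full output sequence of layer 0...
out_manual, _ = layer1(out0)  # ...is the input sequence of layer 1

print(torch.allclose(out_stacked, out_manual, atol=1e-6))  # True
```
\n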

This figure from a Keras tutorial shows how multilayer RNNs are structured:

\n

\"enter

\n", + "system": "" + }, + { + "instruction": "PyTorch's nn.Conv2d with half-precision (fp16) is slower than fp32", + "input": "", + "output": "

Well, the problem lies in the fact that mixed/half-precision tensor calculations are accelerated via Tensor Cores.

\n

Theoretically (and practically), Tensor Cores are designed to handle lower-precision matrix calculations where, for instance, the fp32 product of two fp16 matrices is added to an fp32 accumulator.

\n

Since the GTX 1660 Ti does not come with Tensor Cores, we can conclude that CUDA won't be able to accelerate mixed/half-precision computation on that GPU.

\n", + "system": "" + }, + { + "instruction": "Ray[tune] for pytorch TypeError: ray.cloudpickle.dumps", + "input": "", + "output": "

I tried running your code and didn't run into any serialization issue. Here is the result (errors only because I didn't include every PyTorch dependency):

\n
| Trial name                    | status   | loc   |     lr |   mom | nest   |\n|-------------------------------+----------+-------+--------+-------+--------|\n| training_function_dc6c3_00000 | ERROR    |       | 0.01   |  0.98 | False  |\n| training_function_dc6c3_00001 | ERROR    |       | 0.001  |  0.98 | True   |\n| training_function_dc6c3_00002 | ERROR    |       | 0.0001 |  0.98 | False  |\n| training_function_dc6c3_00003 | ERROR    |       | 1e-05  |  0.98 | True   |\n| training_function_dc6c3_00004 | ERROR    |       | 0.01   |  0.9  | False  |\n| training_function_dc6c3_00005 | ERROR    |       | 0.001  |  0.9  | True   |\n| training_function_dc6c3_00006 | ERROR    |       | 0.0001 |  0.9  | True   |\n| training_function_dc6c3_00007 | ERROR    |       | 1e-05  |  0.9  | True   |\n| training_function_dc6c3_00008 | ERROR    |       | 0.01   |  0.8  | False  |\n| training_function_dc6c3_00009 | ERROR    |       | 0.001  |  0.8  | False  |\n| training_function_dc6c3_00010 | ERROR    |       | 0.0001 |  0.8  | True   |\n| training_function_dc6c3_00011 | ERROR    |       | 1e-05  |  0.8  | False  |\n+-------------------------------+----------+-------+--------+-------+--------+\nNumber of errored trials: 12\n+-------------------------------+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+\n| Trial name                    |   # failures | error file                                                                                                                                                   |\n|-------------------------------+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| training_function_dc6c3_00000 |            1 | 
/Users/xwjiang/ray_results/training_function_2021-09-11_07-52-18/training_function_dc6c3_00000_0_lr=0.01,mom=0.98,nest=False_2021-09-11_07-52-19/error.txt   |\n| training_function_dc6c3_00001 |            1 | /Users/xwjiang/ray_results/training_function_2021-09-11_07-52-18/training_function_dc6c3_00001_1_lr=0.001,mom=0.98,nest=True_2021-09-11_07-52-19/error.txt   |\n| training_function_dc6c3_00002 |            1 | /Users/xwjiang/ray_results/training_function_2021-09-11_07-52-18/training_function_dc6c3_00002_2_lr=0.0001,mom=0.98,nest=False_2021-09-11_07-52-19/error.txt |\n| training_function_dc6c3_00003 |            1 | /Users/xwjiang/ray_results/training_function_2021-09-11_07-52-18/training_function_dc6c3_00003_3_lr=1e-05,mom=0.98,nest=True_2021-09-11_07-52-19/error.txt   |\n| training_function_dc6c3_00004 |            1 | /Users/xwjiang/ray_results/training_function_2021-09-11_07-52-18/training_function_dc6c3_00004_4_lr=0.01,mom=0.9,nest=False_2021-09-11_07-52-19/error.txt    |\n| training_function_dc6c3_00005 |            1 | /Users/xwjiang/ray_results/training_function_2021-09-11_07-52-18/training_function_dc6c3_00005_5_lr=0.001,mom=0.9,nest=True_2021-09-11_07-52-19/error.txt    |\n| training_function_dc6c3_00006 |            1 | /Users/xwjiang/ray_results/training_function_2021-09-11_07-52-18/training_function_dc6c3_00006_6_lr=0.0001,mom=0.9,nest=True_2021-09-11_07-52-19/error.txt   |\n| training_function_dc6c3_00007 |            1 | /Users/xwjiang/ray_results/training_function_2021-09-11_07-52-18/training_function_dc6c3_00007_7_lr=1e-05,mom=0.9,nest=True_2021-09-11_07-52-19/error.txt    |\n| training_function_dc6c3_00008 |            1 | /Users/xwjiang/ray_results/training_function_2021-09-11_07-52-18/training_function_dc6c3_00008_8_lr=0.01,mom=0.8,nest=False_2021-09-11_07-52-19/error.txt    |\n| training_function_dc6c3_00009 |            1 | 
/Users/xwjiang/ray_results/training_function_2021-09-11_07-52-18/training_function_dc6c3_00009_9_lr=0.001,mom=0.8,nest=False_2021-09-11_07-52-19/error.txt   |\n| training_function_dc6c3_00010 |            1 | /Users/xwjiang/ray_results/training_function_2021-09-11_07-52-18/training_function_dc6c3_00010_10_lr=0.0001,mom=0.8,nest=True_2021-09-11_07-52-19/error.txt  |\n| training_function_dc6c3_00011 |            1 | /Users/xwjiang/ray_results/training_function_2021-09-11_07-52-18/training_function_dc6c3_00011_11_lr=1e-05,mom=0.8,nest=False_2021-09-11_07-52-19/error.txt  |\n
\n

By the way, when you call <code>tune.run</code>, pass the config as a keyword argument, i.e. <code>tune.run(train_func, config=config)</code>; otherwise, the config you pass in is mistaken for the <code>name</code> argument.

\n", + "system": "" + }, + { + "instruction": "Whats the purpose of torch.positive?", + "input": "", + "output": "

It seems like PyTorch added <code>torch.positive</code> for parity with NumPy, which has a function of the same name.

\n

So back to your question: <code>positive</code> is a unary operator which basically multiplies everything by +1. This is not a particularly useful operation, but it is symmetric to <code>negative</code>, which multiplies everything by -1.

\n
>>> import numpy as np\n>>> a = np.arange(10)\n>>> a\narray([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n>>> np.positive(a)\narray([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n>>> np.negative(a)\narray([ 0, -1, -2, -3, -4, -5, -6, -7, -8, -9])\n
\n
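<p>
The same pair exists in PyTorch:
</p>
\n
```python
import torch

a = torch.arange(5)
print(torch.positive(a))  # tensor([0, 1, 2, 3, 4])
print(torch.negative(a))  # tensor([ 0, -1, -2, -3, -4])
```
\n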

So why make a positive function that effectively returns (a copy of) the array you passed? It can be used to write general code where e.g. numpy.positive and numpy.negative can be used dynamically based on some condition.

\n", + "system": "" + }, + { + "instruction": "How to calculate dimensions of first linear layer of a CNN", + "input": "", + "output": "

Given the input spatial dimension w, a 2d convolution layer will output a tensor with the following size on this dimension:

\n
int((w + 2*p - d*(k - 1) - 1)/s + 1)\n
\n

The exact same is true for nn.MaxPool2d. For reference, you can look it up here, on the PyTorch documentation.

\n

The convolution part of your model is made up of three (Conv2d + MaxPool2d) blocks. You can easily infer the spatial dimension size of the output with this helper function:

\n
def conv_shape(x, k=1, p=0, s=1, d=1):\n    return int((x + 2*p - d*(k - 1) - 1)/s + 1)\n
\n

Calling it recursively, you get the resulting spatial dimension:

\n
>>> w = conv_shape(conv_shape(32, k=3, p=1), k=2, s=2)\n>>> w = conv_shape(conv_shape(w, k=3), k=2, s=2)\n>>> w = conv_shape(conv_shape(w, k=3), k=2, s=2)\n\n>>> w\n2\n
\n

Since your convolutions have squared kernels and identical strides, paddings (horizontal equals vertical), the above calculations hold true for the width and the height dimensions of the tensor. Lastly, looking at the last convolution layer conv3, which has 64 filters, the resulting number of elements per batch element before your fully connected layer is: w*w*64, i.e. 256.

\n
\n

However, nothing stops you from calling your layers to find out the output shape!

\n
class Net(nn.Module):\n    def __init__(self):\n        super().__init__()\n        \n        self.feature_extractor = nn.Sequential(\n            nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1),\n            nn.ReLU(),\n            nn.MaxPool2d(2, 2),\n            nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3),\n            nn.ReLU(),\n            nn.MaxPool2d(2,2),\n            nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3),\n            nn.ReLU(),\n            nn.MaxPool2d(2,2),\n            nn.Flatten())\n\n        n_channels = self.feature_extractor(torch.empty(1, 3, 32, 32)).size(-1)\n\n        self.classifier = nn.Sequential(\n            nn.Linear(n_channels, 200),\n            nn.ReLU(),\n            nn.Dropout(0.25),\n            nn.Linear(200, 100),\n            nn.ReLU(),\n            nn.Dropout(0.25),\n            nn.Linear(100, 10))\n\n    def forward(self, x):\n        features = self.feature_extractor(x)\n        out = self.classifier(features)\n        return out\n\nmodel = Net()\n
\n", + "system": "" + }, + { + "instruction": "Calculate angles in pytorch", + "input": "", + "output": "

You can do this manually using the relation between the dot product of two vectors and the angle between them:

\n
# normalize the vectors\nnVs = Vs / torch.norm(Vs, p=2, dim=-1, keepdim=True)\n# compute cosine of the angles using dot product\ncos_ij = torch.einsum('bni,bmi->bnm', nVs, nVs)\n
\n", + "system": "" + }, + { + "instruction": "PyTorch custom forward function does not work with DataParallel", + "input": "", + "output": "

You are forcing the input x and the model to be on 'cuda:0' device, but when working on multiple GPUs, you should not specify any particular device.
\nTry:

\n
x = torch.randn(8,10)  \nmodel = Net('B')\nmodel =  nn.DataParallel(model, device-ids=[0, 1]).cuda()  # assuming 2 GPUs\npred = model(x, x)\n
\n", + "system": "" + }, + { + "instruction": "Graph Neural Network Regression", + "input": "", + "output": "

Add a linear layer, and don't forget to use a regression loss function (e.g. <code>nn.MSELoss</code>) instead of a classification loss:

\n
class GCN(torch.nn.Module):\n    def __init__(self, hidden_channels):\n        super(GCN, self).__init__()\n        torch.manual_seed(12345)\n        self.conv1 = GCNConv(dataset.num_features, hidden_channels)\n        self.conv2 = GCNConv(hidden_channels, dataset.num_classes)\n        self.linear1 = torch.nn.Linear(100,1)\n    def forward(self, x, edge_index):\n        x = self.conv1(x, edge_index)\n        x = x.relu()\n        x = F.dropout(x, p=0.5, training=self.training)\n        x = self.conv2(x, edge_index)\n        x = self.linear1(x)\n        return x\n
\n", + "system": "" + }, + { + "instruction": "Difference between MultiplicativeLR and LambdaLR", + "input": "", + "output": "

The main difference is that they use a different function for computing the learning rate.

\n

LambdaLR's function is:

\n

\"enter

\n

While MultiplicativeLR's function is:

\n

\"enter

\n
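<p>
A small sketch illustrating the difference (the constant factor 0.9 is made up): with <code>LambdaLR</code> the lambda receives the epoch and multiplies the initial learning rate, while with <code>MultiplicativeLR</code> it multiplies the previous learning rate, so the same geometric decay needs different lambdas:
</p>
\n
```python
import torch
from torch.optim.lr_scheduler import LambdaLR, MultiplicativeLR

def lr_history(make_scheduler, epochs=4):
    # dummy optimizer with a single parameter, starting at lr=1.0
    opt = torch.optim.SGD([torch.nn.Parameter(torch.zeros(1))], lr=1.0)
    sched = make_scheduler(opt)
    lrs = [opt.param_groups[0]["lr"]]
    for _ in range(epochs):
        opt.step()
        sched.step()
        lrs.append(opt.param_groups[0]["lr"])
    return lrs

# LambdaLR: lr = initial_lr * lmbda(epoch)
h1 = lr_history(lambda o: LambdaLR(o, lr_lambda=lambda e: 0.9 ** e))
# MultiplicativeLR: lr = previous_lr * lmbda(epoch)
h2 = lr_history(lambda o: MultiplicativeLR(o, lr_lambda=lambda e: 0.9))

print(h1)  # the same geometric decay...
print(h2)  # ...but expressed through different lambdas
```
\n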

Thus they produce different learning rate schedules. Both are useful in a variety of scenarios.

\n", + "system": "" + }, + { + "instruction": "TensorFlow equivalent of PyTorch's transforms.Normalize()", + "input": "", + "output": "

The workaround that you mentioned seems OK. But using a for loop to normalize each RGB channel of a single image can be a bit problematic when you deal with a large dataset in the data pipeline (generator or <code>tf.data</code>). But it's OK anyway. Here is a demonstration of your approach, followed by two possible alternatives that might work for you more easily.

\n
from PIL import Image \nfrom matplotlib.pyplot import imshow, subplot, title, hist\n\n# load image (RGB)\nimg = Image.open('/content/9.jpg')\n\ndef normalize_image(image, mean, std):\n    for channel in range(3):\n        image[:,:,channel] = (image[:,:,channel] - mean[channel]) / std[channel]\n    return image\n\nOP_approach = normalize_image(np.array(img) / 255.0, \n                            mean=[0.485, 0.456, 0.406], \n                            std=[0.229, 0.224, 0.225])\n
\n

Now, let's observe the transform properties afterward.

\n
plt.figure(figsize=(25,10))\nsubplot(121); imshow(OP_approach); title(f'Normalized Image \\n min-px: \\\n    {OP_approach.min()} \\n max-pix: {OP_approach.max()}')\nsubplot(122); hist(OP_approach.ravel(), bins=50, density=True); \\ \n                                    title('Histogram - pixel distribution')\n
\n

\"enter

\n

The minimum and maximum pixel values after normalization are (-2.1179039301310043, 2.6399999999999997), respectively.

\n

Option 2

\n

We can use the <code>tf.keras</code> <code>Normalization</code> preprocessing layer to do the same. It takes two important arguments, <code>mean</code> and <code>variance</code> (the square of the std).

\n
from tensorflow.keras.experimental.preprocessing import Normalization\n\ninput_data = np.array(img)/255\nlayer = Normalization(mean=[0.485, 0.456, 0.406], \n                      variance=[np.square(0.299), \n                                np.square(0.224), \n                                np.square(0.225)])\n\nplt.figure(figsize=(25,10))\nsubplot(121); imshow(layer(input_data).numpy()); title(f'Normalized Image \\n min-px: \\\n   {layer(input_data).numpy().min()} \\n max-pix: {layer(input_data).numpy().max()}')\nsubplot(122); hist(layer(input_data).numpy().ravel(), bins=50, density=True);\\\n   title('Histogram - pixel distribution')\n
\n

\"enter

\n

The minimum and maximum pixel values after normalization are (-2.0357144, 2.64), respectively.

\n

Option 3

\n

This is more like subtracting the average mean and dividing by the average std.

\n
norm_img = ((tf.cast(np.array(img), tf.float32) / 255.0) - 0.449) / 0.226\n\nplt.figure(figsize=(25,10))\nsubplot(121); imshow(norm_img.numpy()); title(f'Normalized Image \\n min-px: \\\n{norm_img.numpy().min()} \\n max-pix: {norm_img.numpy().max()}')\nsubplot(122); hist(norm_img.numpy().ravel(), bins=50, density=True); \\\ntitle('Histogram - pixel distribution')\n
\n

\"enter

\n

The minimum and maximum pixel values after normalization are (-1.9867257, 2.4380531), respectively. Lastly, if we compare with the PyTorch way, there is not much difference among these approaches.

\n
import torchvision.transforms as transforms\n\ntransform_norm = transforms.Compose([\n    transforms.ToTensor(),\n    transforms.Normalize(mean=[0.485, 0.456, 0.406],\n                            std=[0.229, 0.224, 0.225]),\n])\nnorm_pt = transform_norm(img)\n\nplt.figure(figsize=(25,10))\nsubplot(121); imshow(np.array(norm_pt).transpose(1, 2, 0));\\\n  title(f'Normalized Image \\n min-px: \\\n  {np.array(norm_pt).min()} \\n max-pix: {np.array(norm_pt).max()}')\nsubplot(122); hist(np.array(norm_pt).ravel(), bins=50, density=True); \\\n  title('Histogram - pixel distribution')\n
\n

\"enter

\n

The minimum and maximum pixel values after normalization are -2.117904 and 2.64, respectively.

\n", + "system": "" + }, + { + "instruction": "Pytorch semantic segmentation loss function", + "input": "", + "output": "

You are using the wrong loss function.

\n

nn.BCEWithLogitsLoss() computes Binary Cross-Entropy loss: that is, a loss for binary labels. In your case, you have 5 labels (0..4).
\nYou should be using nn.CrossEntropyLoss: a loss designed for discrete labels, beyond the binary case.

\n

Your model should output a tensor of shape [32, 5, 256, 256]: for each pixel in the 32 images of the batch, it should output a 5-dim vector of logits. The logits are the "raw" scores for each class, to be normalized later into class probabilities using the softmax function.
\nFor numerical stability and computational efficiency, nn.CrossEntropyLoss does not require you to explicitly compute the softmax of the logits, but does it internally for you. As the documentation reads:

\n
\n

This criterion combines LogSoftmax and NLLLoss in one single class.

\n
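As a rough sketch of the shapes involved (the tensors here are random stand-ins, not the asker's data), nn.CrossEntropyLoss takes the raw [batch, classes, H, W] logits and an integer label map:

```python
import torch
import torch.nn as nn

# Random stand-ins matching the shapes above: 32 images, 5 classes, 256x256 pixels.
logits = torch.randn(32, 5, 256, 256)          # raw model output, no softmax applied
target = torch.randint(0, 5, (32, 256, 256))   # one integer class label per pixel

criterion = nn.CrossEntropyLoss()
loss = criterion(logits, target)  # log-softmax + NLL computed internally
print(loss)  # a scalar tensor
```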
\n", + "system": "" + }, + { + "instruction": "Is it advisable to use the same torch Dataset class for training and predicting?", + "input": "", + "output": "

PyTorch's Dataset class is really simple, so do not overthink it. It's not much more than a wrapper for accessing your data.

\n

You don't have to return a tuple, not even Tensors. You can return whatever data you want. Commonly, it will be in one of those styles:

\n\n

It is also common to use the same DataSet class for train / test.

\n

So, in your case, simply return the sample or a tuple (sample, None) as done in torchvision, and adjust your pipeline accordingly. I'd not suggest using np.nan, as it would fail a simple None check (np.nan == None is False). Also, I'd encourage you to inherit from torch.utils.data.Dataset.

\n
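A minimal sketch of that idea (the class and variable names here are made up for illustration): one Dataset that returns (sample, target) during training and just the sample at prediction time:

```python
import torch
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, samples, targets=None):
        self.samples = samples
        self.targets = targets  # None at prediction time

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        x = torch.tensor(self.samples[idx], dtype=torch.float32)
        if self.targets is None:
            return x                 # inference: just the sample
        return x, self.targets[idx]  # training: (sample, target)

train_ds = MyDataset([[1.0], [2.0]], targets=[0, 1])
test_ds = MyDataset([[3.0]])
print(train_ds[0], test_ds[0])
```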

If however, your pipeline forces you to use a tuple or has other constraints I'd suggest to rephrase your question.

\n", + "system": "" + }, + { + "instruction": "Split a torch tensor using a same-sized tensor of indices", + "input": "", + "output": "

One could do it using argsort for the general case:

\n
def mask_split(tensor, indices):\n    sorter = torch.argsort(indices)\n    _, counts = torch.unique(indices, return_counts=True)\n    return torch.split(tensor[sorter], counts.tolist())\n\n\nmask_split(t, indices)\n
\n

Though it might be better to use @flawr's answer if this is your real use case (the list comprehension might also be faster, as it does not require sorting), something like this:

\n
def mask_split(tensor, indices):\n    unique = torch.unique(indices)\n    return [tensor[indices == i] for i in unique]\n
\n", + "system": "" + }, + { + "instruction": "PyTorch's grid_sample conversion to CoreML (via coremltools)", + "input": "", + "output": "

Well, this is not an exact answer, rather some research. grid_sample is by its nature a sparse matrix operation; the idea is to try to make it dense. The code below demonstrates how it could be done. It may be slow, and it requires the grid to be static so that grid_sample can be eliminated from the model being converted, but it kinda works.

\n

The goal is to get our transformation in linear form. Here, to get the dense matrix, we feed a unit diagonal matrix to grid_sample, and the result is a matrix holding the transform we are looking for. To apply this transform, multiply the flattened image by this matrix.\nAs you can see, batch=1 here; the conversion must be done for each grid independently.

\n

Your code:

\n
in_sz  = 2;    out_sz = 4;    batch  = 1;    ch     = 3\n\nclass GridSample(torch.nn.Module):\n    def forward(self, inputs, grid):\n        # Rest could be the default behaviour, e.g. bilinear\n        return torch.nn.functional.grid_sample(inputs, grid, align_corners=True)\n\nimage = torch.randn( batch, ch, in_sz, in_sz)  # (batch, in_channels, width, height)\ngrid = torch.randint(low=-1, high=2, size=( batch, out_sz, out_sz, 2)).float()\n\nlayer = GridSample()\nscripted = torch.jit.trace(layer, (image, grid))\nprint(scripted(image, grid))\n
\n

out:

\n
tensor([[[[-0.8226, -0.4457, -0.3382, -0.0795],\n          [-0.4457, -0.0052, -0.8226, -0.6341],\n          [-0.4457, -0.8226, -0.4457, -0.6341],\n          [-0.4510, -0.3382, -0.4457, -0.0424]],\n\n         [[-1.0090, -1.6029, -1.3813, -0.1212],\n          [-1.6029, -2.7920, -1.0090, -1.3060],\n          [-1.6029, -1.0090, -1.6029, -1.3060],\n          [-0.5651, -1.3813, -1.6029, -1.4566]],\n\n         [[ 0.1482,  0.7313,  0.8916,  1.8723],\n          [ 0.7313,  0.8144,  0.1482,  0.4398],\n          [ 0.7313,  0.1482,  0.7313,  0.4398],\n          [ 1.0103,  0.8916,  0.7313,  1.3434]]]])\n
\n

Conversion:

\n
oness  = torch.ones( in_sz*in_sz )\ndiagg  = torch.diag( oness ).reshape( 1, in_sz*in_sz, in_sz, in_sz )\ndenser = torch.nn.functional.grid_sample( diagg, grid, align_corners=True).reshape( in_sz*in_sz, out_sz*out_sz ).transpose(0,1)\nprint (denser.shape)\nprint (image.shape)\nimage_flat = image.reshape( batch, ch, in_sz*in_sz )\nprint (image_flat.shape)\nprint( torch.nn.functional.linear( image_flat, denser ).reshape( batch, ch, out_sz, out_sz ) )\n
\n

Out:

\n
torch.Size([16, 4])\ntorch.Size([1, 3, 2, 2])\ntorch.Size([1, 3, 4])\ntensor([[[[-0.8226, -0.4457, -0.3382, -0.0795],\n          [-0.4457, -0.0052, -0.8226, -0.6341],\n          [-0.4457, -0.8226, -0.4457, -0.6341],\n          [-0.4510, -0.3382, -0.4457, -0.0424]],\n\n         [[-1.0090, -1.6029, -1.3813, -0.1212],\n          [-1.6029, -2.7920, -1.0090, -1.3060],\n          [-1.6029, -1.0090, -1.6029, -1.3060],\n          [-0.5651, -1.3813, -1.6029, -1.4566]],\n\n         [[ 0.1482,  0.7313,  0.8916,  1.8723],\n          [ 0.7313,  0.8144,  0.1482,  0.4398],\n          [ 0.7313,  0.1482,  0.7313,  0.4398],\n          [ 1.0103,  0.8916,  0.7313,  1.3434]]]])\n         \n
\n

Well, it may not be very efficient, but I hope this amuses at least.

\n", + "system": "" + }, + { + "instruction": "MMDetection loading from own training checkpoint for inference produces garbage detections", + "input": "", + "output": "

One of the mistakes in your code is that you have not updated num_classes for mask_head.

\n

Our aim here should be to ensure that the same config file that was used for training is also used for testing/validation. If you trained the model using num_classes=1 for bbox_head and mask_head in the config file, but for validation/testing the default of 80 num_classes is used, that mismatch in the testing process leads to garbage detections and segmentations.

\n

There are 2 solutions for achieving the required result:

\n
    \n
  1. Change the num_classes in config file before doing inference
  2. \n
  3. Save the model and config file as pickle, as soon as training is completed.
  4. \n
\n

Note: The first solution is standard, but the second solution is simpler

\n

1. Change the num_classes in config file before doing inference.

\n

First, find the total number of classes in your dataset. Here num_classes is total number of classes in the training dataset.

\n

Go to this path:\nmmdetection/configs/model_name (model_name is the name used for training)

\n

Here, inside the model_name folder, find the ..._config.py that you used for training.\nInside this config file, if you find model = dict(...), then change the num_classes for each of these keys: bbox_head, mask_head.

\n

bbox_head might be a list; if so, change num_classes for each entry in the list.

\n

If model = dict(...) is not found, then the first line contains\n_base_ = '...'. So, open that config file and check whether model = dict(...) is present there. If not, keep following the _base_ file locations.

\n

After changing the num_classes, use this code for inference:

\n

Code after changing the num_classes:

\n
from mmdet.apis import init_detector, inference_detector\nimport mmcv\nimport numpy as np\nimport cv2\nimport os\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nconfig_file = './configs/scnet/scnet_x101_64x4d_fpn_20e_coco.py' #(I have used SCNet for training)\n\ncheckpoint_file = 'tutorial_exps/epoch_40.pth' #(checkpoint saved after training)\n\nmodel = init_detector(config_file, checkpoint_file, device='cuda:0') #loading the model\n\nimg = 'test.png'\n\nresult = inference_detector(model, img)\n\n#visualize the results in a new window\nim1 = cv2.imread(img)[:,:,::-1]\n#im_ones = np.ones(im1.shape, dtype='uint')*255\n# model.show_result(im_ones, result, out_file='fine_result6.jpg')\nplt.imshow(model.show_result(im1, result))\n\n
\n
\n

2. Save the model and config as pickle as soon as training is completed.

\n

Another simple solution is to save both the model and config as pickle files as soon as training is completed, without depending on mmdetection to do it.

\n
Note:
The pickle files should be saved right after training is completed.\n
\n
\n

Code for saving as pickle:

\n
import pickle\n\nwith open('mdl.pkl','wb') as f:\n    pickle.dump(model, f)\n\nwith open('cfg.pkl','wb') as f:\n    pickle.dump(cfg, f)\n
\n

You can use this model/config wherever and whenever you want. For inference with the saved model, use this:

\n
import pickle, mmcv\nfrom mmdet.apis import inference_detector, show_result_pyplot\n\nmodel = pickle.load(open('mdl.pkl','rb'))\ncfg = pickle.load(open('cfg.pkl','rb'))\n\nimg = mmcv.imread('images/test.png')\n\nmodel.cfg = cfg\nresult = inference_detector(model, img)\nshow_result_pyplot(model, img, result)\n
\n", + "system": "" + }, + { + "instruction": "RuntimeError: CUDA error: device-side assert triggered on loss function", + "input": "", + "output": "

There might be two reasons for the error:

\n
    \n
  1. As the log says input_val is not between the range [0; 1]. So you should ensure that model outputs are in that range. You can use torch.clamp() of pytorch. Before calculating the loss add the following line:
  2. \n
\n
    out = out.clamp(0, 1)\n
\n
    \n
  1. Maybe you are sure that model outputs are in the range [0; 1]. Then very common problem is output contains some nan values which triggers assert as well. To prevent this you can use the following trick, again before calculating the loss:
  2. \n
\n
    out[out!=out] = 0 # or 1 depending on your model's need\n
\n

Here the trick uses the nan != nan property: since NaN is the only value not equal to itself, we can locate NaNs and change them to some valid number.

\n", + "system": "" + }, + { + "instruction": "pytorch best way to call unique_consecutive() on certain dimension of a tensor?", + "input": "", + "output": "
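Putting both tricks together on a small made-up tensor (just an illustration, not the asker's model output):

```python
import torch

# Hypothetical raw model outputs: out of range and containing a NaN.
out = torch.tensor([1.2, -0.3, float('nan'), 0.5])

out = out.clamp(0, 1)   # force values into [0, 1]; NaN stays NaN
out[out != out] = 0     # nan != nan is True, so this replaces only the NaNs

print(out)  # tensor([1.0000, 0.0000, 0.0000, 0.5000])
```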

Found a decent solution on the PyTorch forum https://discuss.pytorch.org/t/best-way-to-run-unique-consecutive-on-certain-dimension/112662, quoted here for quick reference.

\n
\n    x = torch.tensor([[3, 3, 5, 5, 5],\n                      [3, 3, 2, 2, 3]])\n    unique_x, indices = torch.unique_consecutive(x, return_inverse=True)\n    indices -= indices.min(dim=1, keepdim=True)[0]\n    result = -torch.ones_like(x)\n    result = result.scatter_(1, indices, x)\n\n
\n", + "system": "" + }, + { + "instruction": "Conditionally apply tensor operations in PyTorch", + "input": "", + "output": "

If I understand correctly, you are looking to return a tensor either way (hence the mapping) but by checking the condition element-wise. Assuming the shapes of tensor_a, tensor_b, and tensor_c are all two dimensional, as in "simple matrices", here is a possible solution.

\n

What you're looking for is probably torch.where; it's fairly close to a mapping: based on a condition, it will return one value or another, element-wise.

\n

It works like torch.where(condition, value_if, value_else), where all three tensors have the same shape (value_if and value_else can actually be floats, which will be cast to tensors filled with that value). Also, condition is a bool tensor which defines which value to assign to the output tensor: it's a boolean mask.

\n

For the purpose of this example, I have used random tensors:

\n
>>> a = torch.rand(2, 2, dtype=float)*100\n>>> b = torch.rand(2, 2, dtype=float)*0.01\n>>> c = torch.rand(2, 2, dtype=float)*10\n\n>>> torch.where(a*(b@c) < 1, -a*b, 0.)\ntensor([[ 0.0000,  0.0000],\n        [ 0.0000, -0.0183]], dtype=torch.float64)\n
\n

More generally though, this will work if tensor_a and tensor_b have a shape of (m, n), and tensor_c has a shape of (n, m) because of the operation constraints. In your experiment I'm guessing you only had columns.

\n", + "system": "" + }, + { + "instruction": "TypeError: expected CPU (got CUDA)", + "input": "", + "output": "

To transfer the variables to GPU, try the following:

\n
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')\n\nX_train=torch.FloatTensor(X_train).to(device)\nX_test=torch.FloatTensor(X_test).to(device)\ny_train=torch.LongTensor(y_train).to(device)\ny_test=torch.LongTensor(y_test).to(device)\n
\n", + "system": "" + }, + { + "instruction": "How can I run both TensorFlow and Torch on Mac M1 MacBook Pro?", + "input": "", + "output": "

After some research, I found this answer: https://github.com/pytorch/pytorch/issues/48145 .

\n

So if someone tries to run both Tensorflow and Torch on Mac M1 with PyCharm Apple Silicon Version, here's how to proceed:

\n\n

Now you can open a PyCharm project with your freshly created virtual environment, and you'll be able to import both Tensorflow and Torch. However, a lot of libraries will be tricky to install, like PyTorch...

\n", + "system": "" + }, + { + "instruction": "How to draw a scatter plot in Tensorboard Pytorch?", + "input": "", + "output": "

If I understand your question right, you could use add_images or add_figure to add an image or a figure to TensorBoard (docs).

\n

Sample code:

\n
from torch.utils.tensorboard import SummaryWriter\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# create summary writer\nwriter = SummaryWriter('lightning_logs')\n\n# write dummy image to tensorboard\nimg_batch = np.zeros((16, 3, 100, 100))\nwriter.add_images('my_image_batch', img_batch, 0)\n\n# write dummy figure to tensorboard\nplt.imshow(np.transpose(img_batch[0], [1, 2, 0]))\nplt.title('example title')\nwriter.add_figure('my_figure_batch', plt.gcf(), 0)\nwriter.close()\n
\n", + "system": "" + }, + { + "instruction": "Pytorch doesn't work with CUDA in PyCharm/IntelliJ", + "input": "", + "output": "

It was driving me mad as well... What finally helped me was the first link that says to use PyCharm "Terminal" to run the pip install command (from the PyTorch website). That fixed all my problems. (I had installed pytorch 3 times by that time and tried different interpreters...)

\n

https://www.datasciencelearner.com/how-to-install-pytorch-in-pycharm/

\n

pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio===0.8.0 -f https://download.pytorch.org/whl/torch_stable.html

\n

I hope this helps save someone hours of headache. :)

\n", + "system": "" + }, + { + "instruction": "How to set hydra's parameter HYDRA_FULL_ERROR?", + "input": "", + "output": "

You set an environment variable in the shell.\nFor a specific run:

\n
$ HYDRA_FULL_ERROR=1 python foo.py\n
\n

Or for all runs in this shell session:

\n
$ export HYDRA_FULL_ERROR=1\n$ python foo.py\n
\n

However, you shouldn't normally need to set it. This is more of a debugging backdoor in case of issues with Hydra itself.\nIf you hit a case where you can only understand your issue after setting HYDRA_FULL_ERROR, please file an issue.

\n", + "system": "" + }, + { + "instruction": "What parameters do I change to train a pytorch model from scratch?", + "input": "", + "output": "

The function signature is

\n
\n

torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=False, progress=True, num_classes=91, pretrained_backbone=True, trainable_backbone_layers=3, **kwargs)

\n
\n

Setting pretrained=False will tell PyTorch not to download the model pre-trained on COCO train2017. You want that, since you're interested in training it yourself.

\n

Usually, this is enough if you want to train on a different dataset.

\n

Even when you set pretrained=False, PyTorch will download a ResNet50 pretrained on ImageNet, since pretrained_backbone=True by default. And by default, it'll freeze the first two blocks, named conv1 and layer1. This is how it was done in the Faster R-CNN paper, which froze the initial layers of the pretrained backbone.

\n

(Just print model to check its structure).

\n
layers_to_train = ['layer4', 'layer3', 'layer2', 'layer1', 'conv1'][:trainable_layers]\n
\n

Now, if you don't want even the first two layers frozen, you can set trainable_backbone_layers=5 (done automatically when you set pretrained_backbone=False), which will train the entire ResNet backbone from scratch.

\n

Check PR#2160.

\n", + "system": "" + }, + { + "instruction": "How can I fold a Tensor that I unfolded with PyTorch that has overlap?", + "input": "", + "output": "

For that specific configuration, since 63840 is divisible by 160 and the slice size is a multiple of the step size, you can simply select every second slice along that dimension and then flatten the resulting tensor:

\n
inp_unfolded[:, ::2, :].flatten(1, 2)\n
\n

More generally, for t.unfold(i, n, s), if t.shape[i] % n == 0 and n % s == 0 holds, then you can restore the original tensor via:

\n
index = [slice(None) for __ in t.shape]\nindex[i] = slice(None, None, n // s)\noriginal = t.unfold(i, n, s)[tuple(index)].flatten(i, i+1)\n
\n

Of course you can also use slice notation, if the dimension i is known beforehand. For example i == 1 as in your example:

\n
original = t.unfold(1, n, s)[:, ::n//s, ...].flatten(1, 2)\n
\n", + "system": "" + }, + { + "instruction": "Convert a simple cnn from keras to pytorch", + "input": "", + "output": "

Your PyTorch equivalent of the Keras model would look like this:

\n
class CNN(nn.Module):\n    \n    def __init__(self, ):\n        super(CNN, self).__init__()\n        \n        self.maxpool = nn.MaxPool3d((2, 2, 2))\n        \n        self.conv1 = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3)\n        self.conv2 = nn.Conv3d(in_channels=8, out_channels=8, kernel_size=3)\n        self.conv3 = nn.Conv3d(in_channels=8, out_channels=8, kernel_size=3)\n        \n        self.linear1 = nn.Linear(4800, 2000)\n        self.dropout1 = nn.Dropout3d(0.5)\n        \n        self.linear2 = nn.Linear(2000, 500)\n        self.dropout2 = nn.Dropout3d(0.5)\n        \n        self.linear3 = nn.Linear(500, 3)\n        \n    def forward(self, x):\n        \n        out = self.maxpool(self.conv1(x))\n        out = self.maxpool(self.conv2(out))\n        out = self.maxpool(self.conv3(out))\n        \n        # Flattening process\n        b, c, d, h, w = out.size() # batch_size, channels, depth, height, width\n        out = out.view(-1, c * d * h * w)\n        \n        out = self.dropout1(self.linear1(out))\n        out = self.dropout2(self.linear2(out))\n        out = self.linear3(out)\n        \n        out = torch.softmax(out, 1)\n        \n        return out\n
\n

A driver program to test the model:

\n
inputs = torch.randn(8, 1, 64, 96, 96)\nmodel = CNN()\noutputs = model(inputs)\nprint(outputs.shape) # torch.Size([8, 3])\n
\n", + "system": "" + }, + { + "instruction": "Difference between these implementations of LSTM Autoencoder?", + "input": "", + "output": "

There is no official or correct way of designing the architecture of an LSTM based autoencoder... The only specifics the name provides is that the model should be an Autoencoder and that it should use an LSTM layer somewhere.

\n

The implementations you found are each different and unique on their own even though they could be used for the same task.

\n

Let's describe them:

\n\n

In the end you choose the architecture of your model depending on the data you want to train on, specifically: the nature (text, audio, images), the input shape, the amount of data you have and so on...

\n", + "system": "" + }, + { + "instruction": "converting einsum notation to for loops", + "input": "", + "output": "

Let's call x_1 and x_2 the two inputs:

\n
np.einsum('bijc,bijd->bcd', x_1, x_2)\n
\n

bijc,bijd->bcd boils down to ijc,ijd->cd since the first (batch) dimension is just carried along. Imagine you have c channels of ixj on one hand, and d channels of ixj on the other. The result we're looking for is a cxd matrix. Combining each ixj layer from x_1 (there are c in total) with each ixj layer from x_2 (there are d in total) makes a total of c*d values, which is what we're looking for. Each value is actually the sum of what's called the Hadamard product between the two ixj layers.

\n

Since c is first, c will be in the first dim (the rows) while d will be the number of columns.

\n

Here's an idea:

\n
b_s, i_s, j_s, c_s = x_1.shape\nd_s = x_2.shape[3]\ny = np.zeros((b_s, c_s, d_s))\nfor b in range(b_s):\n  for i in range(i_s):\n    for j in range(j_s):\n      for c in range(c_s):\n        for d in range(d_s):\n          y[b, c, d] += x_1[b, i, j, c]*x_2[b, i, j, d]\n
\n

This post might give you a better idea

\n

Also try the following to see what happens in a simple case with i=2, j=2, c=1 and d=1:

\n
a = [[[1], [0]], [[0], [1]]]; b = [[[4], [1]], [[2], [2]]]\nnp.einsum('ijc,ijd->cd', a, b)\n
\n

The result is a cxd matrix of size... 1x1 (since c and d are both equal to 1). Here the result is [[6]]

\n", + "system": "" + }, + { + "instruction": "OSError: libcurand.so.10: cannot open shared object file: No such file or directory", + "input": "", + "output": "

emm\nI found this answer as follows...

\n

https://forums.developer.nvidia.com/t/mounting-cuda-onto-l4t-docker-image-issues-libcurand-so-10-cannot-open-no-such-file-or-directory/121545

\n

The key is : "You can use JetPack4.4 for CUDA 10.2 and JetPack4.3 for CUDA 10.0."

\n

Maybe downloading PyTorch v1.4.0 and JetPack 4.2/4.3 would solve this problem...

\n

Anyway, it helped me... good luck

\n

\"enter

\n", + "system": "" + }, + { + "instruction": "How can I save my training progress in PyTorch for a certain batch no.?", + "input": "", + "output": "

Whenever you want to save your training progress, you need to save two things:

\n\n

This can be done in the following way:

\n
def save_checkpoint(model, optimizer, save_path, epoch):\n    torch.save({\n        'model_state_dict': model.state_dict(),\n        'optimizer_state_dict': optimizer.state_dict(),\n        'epoch': epoch\n    }, save_path)\n
\n

To resume training, you can restore your model and optimizer's state dict.

\n
def load_checkpoint(model, optimizer, load_path):\n    checkpoint = torch.load(load_path)\n    model.load_state_dict(checkpoint['model_state_dict'])\n    optimizer.load_state_dict(checkpoint['optimizer_state_dict'])\n    epoch = checkpoint['epoch']\n    \n    return model, optimizer, epoch\n
\n

You can save your model at any point in training, wherever you need to. However, it is usually ideal to save after finishing an epoch.

\n", + "system": "" + }, + { + "instruction": "Pytorch geometric: Having issues with tensor sizes", + "input": "", + "output": "

I agree with @trialNerror -- it is a data problem. Your edge_index should refer to the data nodes, and its max should not be that high. Since you didn't want to show us the data and asked about "creating a graph on any kind of data", here it is.

\n

I mostly left your Net unchanged. You can play around with the constants stated to match with your data.

\n
import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom torch_geometric.nn import GCNConv\nfrom torch_geometric.data import Data\n\nnum_node_features = 100\nnum_classes = 2\nnum_nodes = 678\nnum_edges = 1500\nnum_hidden_nodes = 128\n\nx = torch.randn((num_nodes, num_node_features), dtype=torch.float32)\nedge_index = torch.randint(low=0, high=num_nodes, size=(2, num_edges), dtype=torch.long)\ny = torch.randint(low=0, high=num_classes, size=(num_nodes,), dtype=torch.long)\n\nclass Net(torch.nn.Module):\n    def __init__(self):\n        super(Net, self).__init__()\n        self.conv1 = GCNConv(num_node_features, num_hidden_nodes)\n        self.conv2 = GCNConv(num_hidden_nodes, num_classes)\n\n    def forward(self, data):\n        x, edge_index = data.x, data.edge_index\n        x = self.conv1(x, edge_index)\n        x = F.relu(x)\n        x = F.dropout(x, training=self.training)\n        x = self.conv2(x, edge_index)\n\n        return F.log_softmax(x, dim=1)\n\ndata = Data(x=x, edge_index=edge_index, y=y)\n\nnet = Net()\n\noptimizer = torch.optim.Adam(net.parameters(), lr=1e-2)\nfor i in range(1000):\n    output = net(data)\n    loss = F.cross_entropy(output, data.y)\n    optimizer.zero_grad()\n    loss.backward()\n    optimizer.step()\n\n    if i % 100 == 0:\n        print('Accuracy: ', (torch.argmax(output, dim=1)==data.y).float().mean())\n
\n

Output

\n
Accuracy:  tensor(0.5059)\nAccuracy:  tensor(0.8702)\nAccuracy:  tensor(0.9159)\nAccuracy:  tensor(0.9233)\nAccuracy:  tensor(0.9336)\nAccuracy:  tensor(0.9484)\nAccuracy:  tensor(0.9602)\nAccuracy:  tensor(0.9676)\nAccuracy:  tensor(0.9705)\nAccuracy:  tensor(0.9749)\n
\n

(yes we can overfit to random data)

\n", + "system": "" + }, + { + "instruction": "fastapi could not find model defintion when run with uvicorn", + "input": "", + "output": "

With the help from @lsabi I found the solution here https://stackoverflow.com/a/51397373/13947506

\n

With the custom unpickler my problem was solved:

\n
class CustomUnpickler(pickle.Unpickler):\n\n    def find_class(self, module, name):\n        if name == 'Model_II_b':\n            from model_ii_b import Model_II_b\n            return Model_II_b\n        return super().find_class(module, name)\n\ncurrent_model = 'model_v2b_c2_small_ep24.pkl'\n\nmodel = CustomUnpickler(open('models/' + current_model, 'rb')).load()\n
\n", + "system": "" + }, + { + "instruction": "Checking the contents of python dataloader", + "input": "", + "output": "

Datasets are iterables, you can get the first element with next(iter(test_data)).

\n", + "system": "" + }, + { + "instruction": "Python matplotlib, invalid shape for image data", + "input": "", + "output": "

The matplotlib function 'imshow' expects 3-channel pictures as (h, w, 3), as you can see in the documentation.

\n\n

It seems that you passed a \"batch\" of a single image (the first dimension) with three channels (second dimension); h and w are the third and fourth dimensions.

\n\n

You need to reshape or permute your image (after converting it to CPU); try to use:

\n\n
image1.squeeze().permute(1,2,0)\n
\n\n

The result will be an image of the desired shape (128, 128, 3).

\n\n

The squeeze() function will remove the first dimension, and the permute() function will reorder the dimensions: the first will shift to the third position and the other two will shift to the beginning.

\n\n

Also, have a look here for further talk on the GPU and CPU issues:\nlink

\n\n

Hope that helps.

\n", + "system": "" + }, + { + "instruction": "Pytorch Concatenate rows in alternate order", + "input": "", + "output": "

I would simply use the assignment operator for this:

\n\n
c = torch.zeros(8, 5)\nc[::2, :] = a   # Index every second row, starting from 0\nc[1::2, :] = b  # Index every second row, starting from 1 \n
\n\n

When timing the two solutions, I used the following:

\n\n
import timeit\nimport torch\na = torch.arange(20).reshape(4,5) \nb = a * 2\n\nsuggested = timeit.timeit(\"c = torch.cat([torch.stack([a_row, b_row]) for a_row, b_row in zip (a, b)])\", \n                          setup=\"import torch; from __main__ import a, b\", number=10000)\nprint(suggested/10000)\n# 4.5105120493099096e-05\n\nimproved = timeit.timeit(\"c = torch.zeros(8, 5); c[::2, :] = a; c[1::2, :] = b\", \n                         setup=\"import torch; from __main__ import a, b\", number=10000)\nprint(improved/10000)\n# 2.1489459509029985e-05\n
\n\n

The second approach takes consistently less (approximately half) the time, even though a single iteration is still very fast. Of course, you would have to test this for your actual tensor sizes, but that is the most straightforward solution I could come up with.\nCan't wait to see if anyone has some nifty low-level solution for this that is even faster!

\n\n

Also, keep in mind that I did not time the creation of b, assuming that the tensors you want to interweave are already given.

\n", + "system": "" + }, + { + "instruction": "LSTM in PyTorch Classifying Names", + "input": "", + "output": "

Lets dig into the solution step by step

\n\n

Frame the problem

\n\n

Given your problem statement, you will have to use an LSTM for making a classification rather than its typical use for tagging. The LSTM is unrolled for a certain number of timesteps, and this is the reason why the input and output dimensions of a recurrent model are

\n\n\n\n

Now since you want to use it for classification, you have two options:

\n\n
    \n
  1. Put a dense layer over the output of all the timesteps/unrollings [My example below uses this]
  2. \n
  3. Ignore the all timestep outputs except the last, and put a dense layer over the last timestep
  4. \n
\n\n

So the input to our LSTM model is the name, fed as one character per LSTM timestep, and the output will be the class corresponding to its language.

\n\n

How to handle variable length inputs/names

\n\n

We have two options again here.

\n\n
    \n
  1. Batch same length names together. This is called bucketing
  2. \n
  3. Fix max length based on the average size of names you have. Pad the smaller names and chop off the longer names [My example below uses max length of 10]
  4. \n
\n\n

Do we need Embedding layer ?

\n\n

No. Embedding layers are typically used to learn a good vector representation of words. But in the case of a character model, the input is a character, not a word, so adding an embedding layer does not help. A character can be directly encoded to a number, and an embedding layer does very little in capturing relationships between different characters. You can still use an embedding layer, but I strongly believe it will not help.

\n\n

Toy character LSTM model code

\n\n
import numpy as np\nimport torch\nimport torch.nn as nn\n\n# Model architecture \nclass Recurrent_Model(nn.Module):\n    def __init__(self, output_size, time_steps=10):\n        super(Recurrent_Model, self).__init__()\n        self.time_steps = time_steps\n        self.lstm = nn.LSTM(1,32, bidirectional=True, num_layers=2)\n        self.linear = nn.Linear(32*2*time_steps, output_size)\n\n    def forward(self, x):        \n        lstm_out, _ = self.lstm(x)\n        return self.linear(lstm_out.view(-1,32*2*self.time_steps))\n\n# Sample input and output\nnames = ['apple', 'dog', 'donkey', \"elephant\", \"hippopotamus\"]\nlang = [0,1,2,1,0]\n\ndef pad_sequence(name, max_len=10):\n    x = np.zeros((len(name), max_len))\n    for i, name in enumerate(names):\n        for j, c in enumerate(name):\n            if j >= max_len:\n                break\n            x[i,j] = ord(c)\n    return torch.FloatTensor(x)\n\nx = pad_sequence(names)\nx = torch.unsqueeze(x, dim=2)\ny = torch.LongTensor(lang)\n\nmodel = Recurrent_Model(3)\ncriterion = nn.CrossEntropyLoss()\noptimizer = torch.optim.SGD(model.parameters(), 0.01)\n\nfor epoch in range(500):\n    model.train()\n    output = model(x)\n    loss = criterion(output, y)\n    print (f\"Train Loss: {loss.item()}\")\n    optimizer.zero_grad()\n    loss.backward()\n    optimizer.step()\n
\n\n

Note

\n\n
    \n
  1. All the tensors are loaded into memory so if you have huge dataset, you will have to use a dataset and dataloader to avoid OOM error.
  2. \n
  3. You will have to split data into train test and validate on the test dataset (the standard model building stuff)
  4. \n
  5. You will have to normalize the input tensors before passing it to the model (again the standard model building stuff)
  6. \n
\n\n

Finally

\n\n

So how do you make sure your model architecture does not have bugs and is learning? As Andrej Karpathy says, overfit the model on a small dataset; if it overfits, then we are fine.

\n", + "system": "" + }, + { + "instruction": "Use of scheduler with self-ajusting optimizers in PyTorch", + "input": "", + "output": "

The learning rate you define for optimizers like Adam is an upper bound. You can see this in the paper in Section 2.1. The stepsize \u03b1 in the paper is the learning rate.

\n\n
\n

The effective magnitude of the steps taken in parameter space at each timestep are approximately bounded by the stepsize setting \u03b1

\n
\n\n

Also, this stepsize \u03b1 is directly used: it is multiplied with the learned step-size correction. So changing the learning rate, e.g. reducing it, will reduce all individual learning rates and lower the upper bound. This can be helpful during the \"end\" of training to reduce the overall step sizes, so that only smaller steps occur, which might help the network find a minimum of the loss function.

\n\n

I saw learning rate decay in some papers using Adam and used it myself, and it did help. What I found is that you should do it more slowly than with e.g. SGD. With one model I just multiply the learning rate by 0.8 every 10 epochs. So it is a gradual decay, which I think works better than more drastic steps, since you don't \"invalidate\" the estimated momentum terms too much. But this is just my theory.
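As a sketch of that kind of schedule (model, sizes, and values are illustrative, not from the original answer), torch.optim.lr_scheduler.StepLR can express the multiply-by-0.8-every-10-epochs decay:

```python
import torch
import torch.nn as nn

# Illustrative sketch: decay Adam's learning rate by 0.8 every 10 epochs.
model = nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.8)

for epoch in range(20):
    # ... forward / backward pass would go here ...
    optimizer.step()
    scheduler.step()

# after 20 epochs the rate has been decayed twice: 1e-3 * 0.8**2
current_lr = optimizer.param_groups[0]['lr']
assert abs(current_lr - 1e-3 * 0.8 ** 2) < 1e-12
```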

\n", + "system": "" + }, + { + "instruction": "how to duplicate the input channel in a tensor?", + "input": "", + "output": "

I am fairly certain that this is already a duplicate question, but I could not find a fitting answer myself, which is why I am going ahead and answering it by referring to both the PyTorch documentation and the PyTorch forum.

\n\n

Essentially, torch.Tensor.expand() is the function that you are looking for, and can be used as follows:

\n\n
x = torch.rand([39, 1, 20, 256, 256])\ny = x.expand(39, 3, 20, 256, 256)\n
\n\n

Note that this works only on singleton dimensions, which is the case in your example, but may not work for arbitrary dimensions prior to expansion. Also, this is basically just providing a different memory view, which means that, according to the documentation, you have to keep the following in mind:

\n\n
\n

More than one element of an expanded tensor may refer to a single\n memory location. As a result, in-place operations (especially ones\n that are vectorized) may result in incorrect behavior. If you need to\n write to the tensors, please clone them first.

\n
\n\n

For a newly allocated memory version, see torch.Tensor.repeat, which is outlined in this (slightly related) answer. The syntax works otherwise exactly the same as expand().
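A small sketch (toy shapes, not the ones from the question) contrasting the two: expand returns a view into the same storage, while repeat allocates new memory.

```python
import torch

x = torch.rand(2, 1, 3)
y = x.expand(2, 4, 3)   # view: the singleton dim is broadcast, no copy
z = x.repeat(1, 4, 1)   # copy: freshly allocated memory

assert y.shape == z.shape == (2, 4, 3)
assert y.data_ptr() == x.data_ptr()   # expand shares the underlying storage
assert z.data_ptr() != x.data_ptr()   # repeat does not
```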

\n", + "system": "" + }, + { + "instruction": "Autoencoder MaxUnpool2d missing 'Indices' argument", + "input": "", + "output": "

Similar to the question here, the solution is to separate the max-unpooling layer from the decoder and explicitly pass its required indices, since modules inside nn.Sequential can only pass along a single input.

\n\n
class SimpleConvAE(nn.Module):\n    def __init__(self):\n        super().__init__()\n\n        # input: batch x 3 x 32 x 32 -> output: batch x 16 x 16 x 16\n        self.encoder = nn.Sequential(\n            ...\n            nn.MaxPool2d(2, stride=2, return_indices=True),\n        )\n\n        self.unpool = nn.MaxUnpool2d(2, stride=2, padding=0)\n\n        self.decoder = nn.Sequential(\n            ...\n        )\n\n    def forward(self, x):\n        encoded, indices = self.encoder(x)\n        out = self.unpool(encoded, indices)\n        out = self.decoder(out)\n        return (out, encoded)\n
\n", + "system": "" + }, + { + "instruction": "How to use pytorch to construct multi-task DNN, e.g., for more than 100 tasks?", + "input": "", + "output": "

You can (and should) use nn containers such as nn.ModuleList or nn.ModuleDict to manage arbitrary number of sub-modules.

\n\n

For example (using nn.ModuleList):

\n\n
class MultiHeadNetwork(nn.Module):\n    def __init__(self, list_with_number_of_outputs_of_each_head):\n        super(MultiHeadNetwork, self).__init__()\n        self.backbone = ...  # build the basic \"backbone\" on top of which all other heads come\n        # all other \"heads\"\n        self.heads = nn.ModuleList([])\n        for nout in list_with_number_of_outputs_of_each_head:\n            self.heads.append(nn.Sequential(\n              nn.Linear(10, nout * 2),\n              nn.ReLU(inplace=True),\n              nn.Linear(nout * 2, nout)))\n\n    def forward(self, x):\n        common_features = self.backbone(x)  # compute the shared features\n        outputs = []\n        for head in self.heads:\n            outputs.append(head(common_features))\n        return outputs\n
\n\n

Note that in this example each head is more complex than a single nn.Linear layer.
\nThe number of different \"heads\" (and number of outputs) is determined by the length of the argument list_with_number_of_outputs_of_each_head.
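A minimal sketch (toy sizes, hypothetical names) of why the container matters: nn.ModuleList registers every head, so their parameters are visible to optimizers and to .to(device).

```python
import torch
import torch.nn as nn

# three heads with different output sizes, sharing one backbone feature
heads = nn.ModuleList([nn.Linear(10, n) for n in (2, 3, 5)])

x = torch.randn(4, 10)                  # stand-in for backbone features
outputs = [head(x) for head in heads]
assert [o.shape[1] for o in outputs] == [2, 3, 5]

# every head's weight and bias is registered as a parameter
assert len(list(heads.parameters())) == 6
```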

\n\n
\n\n

Important notice: it is crucial to use nn containers, rather than simple pythonic lists/dictionary to store all sub modules. Otherwise pytorch will have difficulty managing all sub modules.
\nSee, e.g., this answer, this question and this one.

\n", + "system": "" + }, + { + "instruction": "RuntimeError: Error(s) in loading state_dict for Generator: size mismatch for weights and biases using Pytorch", + "input": "", + "output": "

The model you loaded and the target model are not identical, so the error is raised to inform you about mismatches of size and layers. Check your code again, or your model may not have been saved properly.

\n", + "system": "" + }, + { + "instruction": "what is the default weight initialization used in Pytorch embedding layer?", + "input": "", + "output": "

In Embedding, by default, the weights are initialized from the standard normal distribution N(0, 1). You can check it in the reset_parameters() method:

\n\n
def reset_parameters(self):\n        init.normal_(self.weight)\n        ...\n
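A quick sanity check of this (sizes are arbitrary): with many samples, the empirical mean and standard deviation of the freshly initialized weights should be close to 0 and 1.

```python
import torch.nn as nn

emb = nn.Embedding(1000, 64)   # 64,000 freshly initialized weights
w = emb.weight.detach()

# N(0, 1) initialization: sample mean near 0, sample std near 1
assert abs(w.mean().item()) < 0.05
assert abs(w.std().item() - 1.0) < 0.05
```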
\n", + "system": "" + }, + { + "instruction": "MultiLabel Soft Margin Loss in PyTorch", + "input": "", + "output": "

If you know that for each example you only have 1 of 10 possible classes, you should be using CrossEntropyLoss, to which you pass your network's predictions, of shape [batch, n_classes], and labels of shape [batch] (each element of labels is an integer between 0 and n_classes-1).

\n\n

The loss you're looking at is designed for situations where each example can belong to multiple classes (say a person can be classified as both female and old). I think it's this \"multi\" that confuses you - it stands for the multiple possible classifications per example, not just multiple potential labels in the whole \"universe\".

\n\n

In the sense of two/more labels in the universe, in which you seem to have been thinking, the counterpart to CrossEntropyLoss would be BCELoss (BCE stands for Binary Cross Entropy), which is just a simplification of CrossEntropyLoss for the case of two labels.
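A short sketch of the two cases side by side (all sizes illustrative): single-label targets are class indices of shape [batch], multi-label targets are 0/1 indicator rows of shape [batch, n_classes].

```python
import torch
import torch.nn as nn

logits = torch.randn(64, 10)                  # [batch, n_classes]

# single-label multi-class: one class index per example
labels = torch.randint(0, 10, (64,))          # [batch]
ce = nn.CrossEntropyLoss()(logits, labels)

# multi-label: several classes may be active at once per example
multi_labels = torch.randint(0, 2, (64, 10)).float()
ml = nn.MultiLabelSoftMarginLoss()(logits, multi_labels)

assert ce.dim() == 0 and ml.dim() == 0        # both reduce to scalar losses
```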

\n", + "system": "" + }, + { + "instruction": "Adapting Pytorch "NLP from Scratch" for bidirectional GRU", + "input": "", + "output": "

So I'm not sure if this is 100% correct, as I'm just learning how to program RNNs, but I changed my code in a couple of extra areas.

\n\n

For one you'll notice that the error says m1: [1x384] so the result of

\n\n

torch.cat((embedded[0], hidden[0]), 1))

\n\n

when putting this through the attn weight layer is not a dimension ending with 512, the expected input size. This is because hidden is a tensor of shape [2, 1, 256] instead of some shape [1, 1, 512] or something. Since your dimensions don't exactly match mine I'm not exactly sure what is different, so in train(...) where it just sets

\n\n

decoder_hidden = encoder_hidden

\n\n

I do

\n\n

decoder_hidden = torch.cat((encoder_hidden[0], encoder_hidden[1]), 1)\ndecoder_hidden = decoder_hidden.view((1, 1, 512))

\n\n

Hopefully that helps in some way

\n", + "system": "" + }, + { + "instruction": "How to fix this error: list indices must be integers or slices, not tuple", + "input": "", + "output": "

outputs[2][0] and outputs[2][1] both return an object (tensor I suppose).
\noutputs[2][0:1] returns a list of those objects.
\nWhat I think you are looking for is something like outputs[2][0:1][:,0,:,:] or [a[0,:,:] for a in outputs[2][0:1]]

\n", + "system": "" + }, + { + "instruction": "Recreating char level RNN for generating text", + "input": "", + "output": "

I later finally found a way to achieve something close to real sentences; maybe it will help someone. Here is an example result:

\n\n
\n

-I have not seen him and the prince was a signt of the streme of the sumpering of the property of th

\n
\n\n

In my case the important change was to not initialize the hidden and cell state to zero every batch, but only every epoch. For this to work I had to rewrite the batch generator so that it produces batches that follow on from each other.
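A minimal sketch of the idea (toy sizes, not the original code), under the assumption that successive batches continue the same text stream: the hidden state is carried over between batches but detached, so the autograd graph does not keep growing across batches.

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16)
hidden = None  # re-initialized only at the start of each epoch

for _ in range(3):                       # batches that follow on each other
    x = torch.randn(5, 2, 8)             # (seq_len, batch, features)
    out, hidden = lstm(x, hidden)
    # keep the values, drop the graph, so backprop stays within one batch
    hidden = tuple(h.detach() for h in hidden)

assert out.shape == (5, 2, 16)
```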

\n", + "system": "" + }, + { + "instruction": "Weighted summation of embeddings in pytorch", + "input": "", + "output": "

You can implement this via 1D convolution with kernel_size = 1

\n\n
import torch\n\nbatch_size=2\n\ninputs = torch.randn(batch_size, 12, 256)\naggregation_layer = torch.nn.Conv1d(in_channels=12, out_channels=1, kernel_size=1, bias=False)\nweighted_sum = aggregation_layer(inputs)\n
\n\n

Such a convolution, constructed with bias=False, has exactly 12 parameters (the default bias=True would add a 13th). Each parameter will be equal to an e_i in the formula you provided.

\n\n

In other words, this convolution runs over the dimension of size 256 and sums across the 12 channels with learnable weights.
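To see that this really is a weighted sum over channels, one can check the Conv1d output against the explicit computation (bias=False so that the 12 weights are the only parameters):

```python
import torch

torch.manual_seed(0)
inputs = torch.randn(2, 12, 256)
agg = torch.nn.Conv1d(in_channels=12, out_channels=1, kernel_size=1, bias=False)
out = agg(inputs)                                   # shape (2, 1, 256)

# the same computation written out: learnable weighted sum over the 12 channels
w = agg.weight.view(1, 12, 1)
manual = (inputs * w).sum(dim=1, keepdim=True)

assert torch.allclose(out, manual, atol=1e-5)
```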

\n", + "system": "" + }, + { + "instruction": "How to add L1 Regularization to PyTorch NN Model?", + "input": "", + "output": "

I haven't run the code in question, so please reach back if something doesn't exactly work. Generally, I would say that the code you linked is needlessly complicated (probably because it tries to be generic and support several kinds of regularization). The way it is to be used is, I suppose:

\n\n
model = FFNNModel(30,5,[100,200,300,100],0.2)\nregularizer = L1Regularizer(model, lambda_reg=0.1)\n
\n\n

and then

\n\n
out = model(inputs)\nloss = criterion(out, target) + regularizer.regularized_all_param(0.)\n
\n\n

You can check that regularized_all_param will just iterate over the parameters of your model, and if their name ends with weight, it will accumulate the sum of their absolute values. For some reason the buffer has to be manually initialized; that's why we pass in the 0..

\n\n

Really though, if you wish to efficiently regularize L1 and don't need any bells and whistles, the more manual approach, akin to your first link, will be more readable. It would go like this

\n\n
l1_regularization = 0.\nfor param in model.parameters():\n    l1_regularization += param.abs().sum()\nloss = criterion(out, target) + l1_regularization\n
\n\n

This is really what is at heart of both approaches. You use the Module.parameters method to iterate over all model parameters and you sum up their L1 norms, which then becomes a term in your loss function. That's it. The repo you linked comes up with some fancy machinery to abstract it away but, judging by your question, fails :)
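If you want the weight-only behavior of the linked repo combined with the manual approach, a sketch (model and lambda value are illustrative stand-ins):

```python
import torch.nn as nn

model = nn.Linear(5, 3)   # stand-in for any model
lambda_reg = 0.1          # illustrative coefficient

# penalize only tensors whose name ends with "weight", i.e. skip biases
l1 = sum(p.abs().sum()
         for name, p in model.named_parameters()
         if name.endswith('weight'))
reg_term = lambda_reg * l1

# differentiable, so it can be added to the loss like any other term
assert reg_term.requires_grad
```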

\n", + "system": "" + }, + { + "instruction": "PyTorch: Expected input batch_size (12) to match target batch_size (64)", + "input": "", + "output": "

The error occurs because your model output, out, has shape (12, 10), while your target has a length of 64.

\n\n

Since you are using a batch size of 64 and predicting the probabilities of 10 classes, you would expect your model output to be of shape (64, 10), so clearly there is something amiss in the forward() method.

\n\n

Going through it line by line and noting the size of x at every step, we can try to find out what is going wrong:

\n\n
    ...\n    # x.shape = (64, 20, 4, 4) at this point as seen in your print statement\n    x = x.view(-1, 320)             # x.shape = (64, 320)\n    x = self.fc1(x)                 # x.shape = (64, 60)\n    x = x.view(-1, 320)             # x.shape = (12, 320)\n    x = F.relu(self.fc1(x))         # x.shape = (12, 60)\n    x = self.fc2(x)                 # x.shape = (12, 10)\n    return F.log_softmax(x, dim=0)  # x.shape = (12, 10)\n
\n\n

What you actually most likely want is:

\n\n
    ...\n    # x.shape = (64, 20, 4, 4) at this point as seen in your print statement\n    x = x.view(-1, 320)             # x.shape = (64, 320)\n    x = F.relu(self.fc1(x))         # x.shape = (64, 60)\n    x = self.fc2(x)                 # x.shape = (64, 10)\n    return F.log_softmax(x, dim=1)  # x.shape = (64, 10)\n
\n\n

Note: While not related to the error, note also that you want to softmax over dim=1 since that is the dimension that contains the logits for the classes.

\n", + "system": "" + }, + { + "instruction": "best_state changes with the model during training in pytorch", + "input": "", + "output": "

model.state_dict() returns an OrderedDict:

\n\n
from collections import OrderedDict\n
\n\n

You can use:

\n\n
import copy\n
\n\n

To fix the problem

\n\n

Instead:

\n\n
best_state = model.state_dict() \n
\n\n

You should use:

\n\n
best_state = copy.deepcopy(model.state_dict())\n
\n\n

A deep (not shallow) copy matters because the tensors in the OrderedDict returned by state_dict() share storage with the model's parameters, so without it best_state would keep mutating as training goes on.
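A tiny demonstration of the aliasing (toy model): the shallow reference tracks the live parameters, while the deep copy stays frozen at save time.

```python
import copy

import torch
import torch.nn as nn

model = nn.Linear(2, 1)
shallow = model.state_dict()               # tensors share storage with the model
deep = copy.deepcopy(model.state_dict())   # independent copy

with torch.no_grad():
    model.weight.add_(1.0)                 # simulate a training update

assert torch.equal(shallow['weight'], model.weight)   # followed the update
assert not torch.equal(deep['weight'], model.weight)  # frozen at save time
```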

\n\n

You may check my other answer on saving the state dict in PyTorch.

\n", + "system": "" + }, + { + "instruction": "How to calculate output sizes after a convolution layer in a configuration file?", + "input": "", + "output": "

In short, there is a common formula for output dims calculation:

\n\n

\"formula\"

\n\n

You can find explanation in A guide to receptive field arithmetic for Convolutional Neural Networks.
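For reference, the standard formula in code form (o = output size, i = input size, k = kernel size, s = stride, p = padding):

```python
def conv_out_size(i, k, s=1, p=0):
    """o = floor((i + 2p - k) / s) + 1, applied per spatial dimension."""
    return (i + 2 * p - k) // s + 1

# e.g. a 7x7 conv with stride 2 and padding 3 on a 224-px input (ResNet stem)
assert conv_out_size(224, k=7, s=2, p=3) == 112
# a 3x3 "same" conv: stride 1, padding 1 keeps the size
assert conv_out_size(32, k=3, s=1, p=1) == 32
```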

\n\n

In addition, I'd like to recommend amazing article A guide to convolution arithmetic for deep learning.

\n\n

And this repo conv_arithmetic with convolution animations.

\n", + "system": "" + }, + { + "instruction": "Why is the derivative of f(x) with respect to 'x' 'x' and not 1 in pytorch?", + "input": "", + "output": "

First of all, given z = torch.linspace(-1, 1, steps=5, requires_grad=True) and y = z, the function is a vector-valued function, so the derivative of y w.r.t. z is not as simple as 1 but a Jacobian matrix. Actually in your case z = [z1, z2, z3, z4, z5]T, where the superscript T means z is a column vector. Here is what the official doc says:

\n\n

\"enter

\n\n

Secondly, notice the official doc says: Now in this case y is no longer a scalar. torch.autograd could not compute the full Jacobian directly, but if we just want the vector-Jacobian product, simply pass the vector to backward as argument link. In that case x.grad is not the actual gradient value (matrix) but the vector-Jacobian product.
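A small sketch of this vector-Jacobian product for the identity function y = z, whose Jacobian is the 5x5 identity matrix, so passing v = ones gives back v itself:

```python
import torch

z = torch.linspace(-1, 1, steps=5, requires_grad=True)
y = z                    # identity: the Jacobian dy/dz is the 5x5 identity
v = torch.ones(5)
y.backward(v)            # computes v^T J, not J itself

assert torch.equal(z.grad, torch.ones(5))   # v^T I = v
```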

\n\n

EDIT:\nx.grad is the actual gradient if your output y is a scalar.\nSee the example here:

\n\n
z = torch.linspace(-1, 1, steps=5, requires_grad=True)\ny = torch.sum(z)\ny.backward()\nz.grad \n
\n\n

This will output:

\n\n
tensor([1., 1., 1., 1., 1.])\n
\n\n

As you can see, it is the gradient. Notice the only difference is that y is a scalar value here while a vector value in your example. grad can be implicitly created only for scalar outputs

\n\n

You might wonder what happens if the gradient is not a constant but depends on the input z, as in this case:

\n\n
z = torch.linspace(-1, 1, steps=5, requires_grad=True)\ny = torch.sum(torch.pow(z,2))\ny.backward()\nz.grad\n
\n\n

The output is:

\n\n
tensor([-2., -1.,  0.,  1.,  2.])\n
\n\n

It is the same as

\n\n
z = torch.linspace(-1, 1, steps=5, requires_grad=True)\ny = torch.sum(torch.pow(z,2))\ny.backward(torch.tensor(1.))\nz.grad\n
\n\n

The blitz tutorial is kind of brief so it is actually quite hard to understand for beginners.

\n", + "system": "" + }, + { + "instruction": "TypeError: forward() missing 1 required positional argument: 'hidden'", + "input": "", + "output": "

I think the error message is pretty straightforward. Your forward() has two positional arguments, input_tokens and hidden.

\n\n

Python complains that one of them (hidden) is missing when you call your forward() function.

\n\n

Looking at your code, you call your forward like this:

\n\n
model(x)\n
\n\n

So x is mapped to input_tokens, but you need to hand over a second argument hidden.

\n\n

So you need to call it like this, providing your hidden state:

\n\n
model(x, hidden)\n
\n", + "system": "" + }, + { + "instruction": "What is the best way to use multiprocessing CPU inference for PyTorch models?", + "input": "", + "output": "

Another possible way to get better performance would be to reduce the model as much as possible.

\n\n

One of the most promising techniques is quantized and binarized neural networks. Here are some references:

\n\n
    \n
  1. https://arxiv.org/abs/1603.05279
  2. \n
  3. https://arxiv.org/abs/1602.02505
  4. \n
\n", + "system": "" + }, + { + "instruction": "how to resolve the error while installing pytorch", + "input": "", + "output": "

I also got the same error with older Python versions. It was resolved for me when I tried the latest Python version, 3.7.

\n\n

Hope this information may help you.

\n", + "system": "" + }, + { + "instruction": "Best way to handle OOV words when using pretrained embeddings in PyTorch", + "input": "", + "output": "

FastText builds character ngram vectors as part of model training. When it finds an OOV word, it sums the character ngram vectors in the word to produce a vector for the word. You can find more detail here.

\n", + "system": "" + }, + { + "instruction": "Why Skorch show NAN in the every epoch?", + "input": "", + "output": "

For anyone who's experiencing a similar issue in case of classification problems, the loss function (criterion) used by Skorch by default is NLLLoss, which computes the log for you (doc and related issue). Therefore it is expected that a Softmax (in case of multiple classes) layer is present as last step in your architecture to be able to produce probabilities.

\n

You can either:

\n
    \n
  1. Add a Softmax layer to produce probabilities and leave the default NLLLoss;
  2. \n
  3. Change the default loss to CrossEntropyLoss:
  4. \n
\n
    net = NeuralNetClassifier(\n        ...\n        criterion=torch.nn.CrossEntropyLoss\n    )\n
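For completeness, a sketch of option 1 with plain PyTorch modules (layer sizes are illustrative): the network ends in a Softmax so its outputs are probabilities, matching the behavior described above.

```python
import torch
import torch.nn as nn

# end the module with Softmax so the outputs are per-class probabilities
module = nn.Sequential(
    nn.Linear(20, 16),
    nn.ReLU(),
    nn.Linear(16, 3),
    nn.Softmax(dim=-1),
)

probs = module(torch.randn(8, 20))
assert probs.shape == (8, 3)
# each row is a probability distribution over the 3 classes
assert torch.allclose(probs.sum(dim=-1), torch.ones(8), atol=1e-6)
```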
\n", + "system": "" + }, + { + "instruction": "What does the "greater than" operator ">" mean for PyTorch tensors?", + "input": "", + "output": "

Using the > operator is the same as using the torch.gt() function.

\n\n

In other words,

\n\n
it > 0\n
\n\n

is the same as

\n\n
torch.gt(it, 0)\n
\n\n

and it returns a Boolean tensor (dtype torch.bool; a ByteTensor in older PyTorch versions) of the same shape as it, where out[i] is True if it[i] > 0 and False otherwise.
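For example:

```python
import torch

it = torch.tensor([-1.0, 0.0, 2.0])
mask = it > 0

assert torch.equal(mask, torch.gt(it, 0))     # the two spellings agree
assert mask.tolist() == [False, False, True]
assert mask.dtype == torch.bool               # uint8 in very old releases
```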

\n", + "system": "" + }, + { + "instruction": "Module not found in pycharm (Windows)", + "input": "", + "output": "

Ok, I solved this problem: first install Anaconda and open the prompt, then type conda install pytorch -c pytorch and pip3 install torchvision. Then go to PyCharm, create a project, and set the Project Interpreter to the Anaconda one (it is in the path: \\Anaconda.x.x\\python.exe). Then go to the Run settings, click Run..., then go to Edit Configurations, select the project default interpreter, apply, and you should be done! Thanks to the ppl who helped me =)

\n", + "system": "" + }, + { + "instruction": "Setting custom kernel for CNN in pytorch", + "input": "", + "output": "

Thanks to ptrblck I was able to solve it.\nI can define a new convolution layer as conv and as per the example I can set the identity kernel using -

\n\n
weights = ch.Tensor([[0, 0, 0], [0, 1, 0], [0, 0, 0]]).unsqueeze(0).unsqueeze(0)\nweights.requires_grad = True\n\n\nconv = nn.Conv2d(1, 1, kernel_size=3, stride=1, padding=1, bias=False)\n\nwith ch.no_grad():\n    conv.weight = nn.Parameter(weights)\n
\n\n

I can then continue to use conv as my regular nn.Conv2d layer.

\n", + "system": "" + }, + { + "instruction": "PyTorch and CUDA driver", + "input": "", + "output": "

Ever since https://github.com/pytorch/pytorch/releases/tag/v0.3.1, PyTorch binary releases had removed support for old GPUs' with CUDA capability 3.0. According to https://en.wikipedia.org/wiki/CUDA, the compute capability of Quadro K3000M is 3.0.

\n\n

Therefore, you might have to build pytorch from source or try other packages. Please refer to this thread for more information -- https://discuss.pytorch.org/t/pytorch-no-longer-supports-this-gpu-because-it-is-too-old/13803.

\n", + "system": "" + }, + { + "instruction": "pyTorch can backward twice without setting retain_graph=True", + "input": "", + "output": "

\nThe reason why it works w/o retain_graph=True in your case is that you have a very simple graph that probably has no internal intermediate buffers; in turn, no buffers will be freed, so there is no need to use retain_graph=True.

\n\n

But everything is changing when adding one more extra computation to your graph:

\n\n

Code:

\n\n
x = torch.ones(2, 2, requires_grad=True)\nv = x.pow(3)\ny = v + 2\n\ny.backward(torch.ones(2, 2))\n\nprint('Backward 1st time w/o retain')\nprint('x.grad:', x.grad)\n\nprint('Backward 2nd time w/o retain')\n\ntry:\n    y.backward(torch.ones(2, 2))\nexcept RuntimeError as err:\n    print(err)\n\nprint('x.grad:', x.grad)\n
\n\n

Output:

\n\n
Backward 1st time w/o retain\nx.grad: tensor([[3., 3.],\n                [3., 3.]])\nBackward 2nd time w/o retain\nTrying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.\nx.grad: tensor([[3., 3.],\n                [3., 3.]]).\n
\n\n

In this case additional internal v.grad will be computed, but torch doesn't store intermediate values (intermediate gradients etc), and with retain_graph=False v.grad will be freed after first backward.

\n\n

So, if you want to backprop second time you need to specify retain_graph=True to \"keep\" the graph.

\n\n

Code:

\n\n
x = torch.ones(2, 2, requires_grad=True)\nv = x.pow(3)\ny = v + 2\n\ny.backward(torch.ones(2, 2), retain_graph=True)\n\nprint('Backward 1st time w/ retain')\nprint('x.grad:', x.grad)\n\nprint('Backward 2nd time w/ retain')\n\ntry:\n    y.backward(torch.ones(2, 2))\nexcept RuntimeError as err:\n    print(err)\nprint('x.grad:', x.grad)\n
\n\n

Output:

\n\n
Backward 1st time w/ retain\nx.grad: tensor([[3., 3.],\n                [3., 3.]])\nBackward 2nd time w/ retain\nx.grad: tensor([[6., 6.],\n                [6., 6.]])\n
\n", + "system": "" + }, + { + "instruction": "Expected parameters of Conv2d", + "input": "", + "output": "

The error comes from your final fully-connected layer self.fc = nn.Linear(7*7*32, num_classes), not your convolution ones.

\n\n

Given your input dimensions ((10, 100)), the shape of out = self.layer2(out) is (batch_size, 32, 25, 2), and thus the shape of out = out.reshape(out.size(0), -1) is (batch_size, 32*25*2) = (batch_size, 1600).

\n\n

On the other hand, your fully-connected layer is defined for inputs of shape (batch_size, 32*7*7) = (batch_size, 1568).

\n\n

This mismatch between the shape of your 2nd convolution output and the expected shape for your fully-connected layer is causing the error (notice how the shapes mentioned in the trace correspond to the aforementioned ones).
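A hedged sketch of the corresponding fix, sizing the fully-connected layer from the actual flattened output (10 is used as a stand-in for num_classes):

```python
import torch
import torch.nn as nn

# size the layer from the real flattened shape: 32 channels * 25 * 2 = 1600,
# not the hard-coded 7*7*32 = 1568
fc = nn.Linear(32 * 25 * 2, 10)

out = torch.randn(4, 32, 25, 2)        # shape after the 2nd conv block
flat = out.reshape(out.size(0), -1)    # (4, 1600)

assert fc(flat).shape == (4, 10)
```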

\n", + "system": "" + }, + { + "instruction": "Pytorch- why is \u201caccumulating\u201d the default mode of .gradient?", + "input": "", + "output": "

Because if you use the same network (or the same weights) twice in the forward pass, gradients should accumulate, not override each other. Also, since the PyTorch computation graph is defined by the run, accumulating is the sensible default. See https://discuss.pytorch.org/t/why-do-we-need-to-set-the-gradients-manually-to-zero-in-pytorch/4903/9

\n", + "system": "" + }, + { + "instruction": "Multi-label, multi-class image classifier (ConvNet) with PyTorch", + "input": "", + "output": "

Maybe I am missing something, but if you want to convert your columns 1..N (N = 4 here) into a label vector of shape (N,) (e.g. given your example data, label(img1) = [0, 0, 0, 1], label(img3) = [1, 0, 1, 0], ...), why not:

\n\n
    \n
  1. Read all the label columns into self.label_arr:

    \n\n
    self.label_arr = np.asarray(self.data_info.iloc[:, 1:]) # columns 1 to N\n
  2. \n
  3. Return accordingly the labels in __getitem__() (no change here):

    \n\n
    single_image_label = self.label_arr[index]\n
  4. \n
\n\n

To train your classifier, you could then compute e.g. the cross-entropy between your (N,) predictions and the target labels.
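A sketch of that loss computation, assuming the network emits raw (pre-sigmoid) scores; binary cross-entropy is then applied independently to each of the N label columns:

```python
import torch
import torch.nn as nn

preds = torch.randn(8, 4)                      # raw scores for N = 4 labels
targets = torch.randint(0, 2, (8, 4)).float()  # rows like [1, 0, 1, 0]

# element-wise binary cross-entropy with a built-in sigmoid
loss = nn.BCEWithLogitsLoss()(preds, targets)
assert loss.dim() == 0                         # reduced to a scalar
```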

\n", + "system": "" + }, + { + "instruction": "Pytorch - Purpose of images preprocessing in the transfer learning tutorial", + "input": "", + "output": "

Regarding RandomResizedCrop

\n\n
    \n
  1. Why ...ResizedCrop? - This answer is straightforward. Resizing crops to the same dimensions allows you to batch your input data. Since the training images in your toy dataset have different dimensions, this is the best way to make your training more efficient.

  2. \n
  3. Why Random...? - Generating different random crops per image every iteration (i.e. random center and random cropping dimensions/ratio before resizing) is a nice way to artificially augment your dataset, i.e. feeding your network different-looking inputs (extracted from the same original images) every iteration. This helps to partially avoid over-fitting for small datasets, and makes your network overall more robust.

    \n\n

    You are however right that, since some of your training images are up to 500px wide and the semantic targets (ant/bee) sometimes cover only a small portion of the images, there is a chance that some of these random crops won't contain an insect... But as long as the chances this happens stay relatively low, it won't really impact your training. The advantage of feeding different training crops every iteration (instead of always the same non-augmented images) vastly counterbalances the side-effect of sometimes giving \"empty\" crops. You could verify this assertion by replacing RandomResizedCrop(224) by Resize(224) (fixed resizing) in your code and compare the final accuracies on the test set.

    \n\n

    Furthermore, I would add that neural networks are smart cookies, and sometimes learn to recognize images through features you wouldn't expect (i.e. they tend to learn recognition shortcuts if your dataset or losses are biased, c.f. over-fitting). I wouldn't be surprised if this toy network is performing so well despite being trained sometimes on \"empty\" crops just because it learns e.g. to distinguish between usual \"ant backgrounds\" (ground floor, leaves, etc.) and \"bee backgrounds\" (flowers).

  4. \n
\n\n

Regarding RandomHorizontalFlip

\n\n

Its purpose is also to artificially augment your dataset. For the network, an image and its flipped version are two different inputs, so you are basically artificially doubling the size of your training dataset for \"free\".

\n\n
\n\n

There are plenty more operations one can use to augment training datasets (e.g. RandomAffine, ColorJitter, etc.). One has however to be careful to choose transformations which are meaningful for the target use-case / which do not impact the target semantic information (e.g. for ant/bee classification, RandomHorizontalFlip is fine as you will probably get as many images of insects facing right as facing left; however, RandomVerticalFlip doesn't make much sense, as you almost certainly won't get pictures of insects upside-down).

\n", + "system": "" + }, + { + "instruction": "PyTorch: copy_if failed to synchronize: device-side assert triggered", + "input": "", + "output": "

The following code works for me.

\n\n
n_classes = 81\ngt_class_ids = torch.from_numpy(numpy.random.randint(1, 100, size=128)).long()\n\nif torch.nonzero(gt_class_ids > n_classes).size(0) > 0:\n    print('okay')\n
\n\n

One suggestion: run the code without CUDA and then you will be able to see the real error message. Sometimes when we run code using CUDA, it gives an error message containing device-side assert triggered, which hides the real error message.

\n", + "system": "" + }, + { + "instruction": "How can I enable pytorch GPU support in Google Colab?", + "input": "", + "output": "

You can enable GPU by clicking on "Change Runtime Type" under the "Runtime" menu. There is also "TPU" support available in these days.

\n

\"Runtime

\n

\"Runtime

\n

You can define the device using torch.device:

\n
import torch\n\nDEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")\n
\n", + "system": "" + }, + { + "instruction": "Is it possible to implement a multilayered LSTM with LSTMCells modules in PyTorch?", + "input": "", + "output": "

LSTMCell is the basic building block of an LSTM network. You should use the LSTM module (which uses LSTMCell internally). If you want to do this yourself, the best way is to read the source code (https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/rnn.py).

\n\n

Basically you want to use one LSTMCell for each layer, and you should be careful how to go from input to output, layer by layer, taking into account the hidden states.\nI also have a basic implementation of a convolutional LSTM, but the idea is the same. You can check it here:\nhttps://github.com/rogertrullo/pytorch_convlstm/

\n", + "system": "" + }, + { + "instruction": "How to leverage the world-size parameter for DistributedDataParallel in Pytorch example for multiple GPUs?", + "input": "", + "output": "

World size argument is the number of nodes in your distributed training, so if you set the world size to 2 you need to run the same command with a different rank on the other node. If you just want to increase the number of GPUs on a single node you need to change ngpus_per_node instead. Take a look at the multiple node example in this Readme.

\n", + "system": "" + }, + { + "instruction": "pytorch: how to directly find gradient w.r.t. loss", + "input": "", + "output": "

It turns out that this is really easy. Just use torch.autograd.grad.

\n\n

Example:

\n\n
import torch\nimport numpy as np\nfrom torch.autograd import grad\n\nx = torch.autograd.Variable(torch.from_numpy(np.random.randn(5, 4)))\nw = torch.autograd.Variable(torch.from_numpy(np.random.randn(4, 3)), requires_grad=True)\ny = torch.autograd.Variable(torch.from_numpy(np.random.randn(5, 3)))\nloss = ((x.mm(w) - y)**2).sum()\n(d_loss_d_w, ) = grad(loss, w)\n\nassert np.allclose(d_loss_d_w.data.numpy(), (x.transpose(0, 1).mm(x.mm(w)-y)*2).data.numpy())\n
\n\n

Thanks to JerryLin for answering the question here.

\n", + "system": "" + }, + { + "instruction": "overlaying the ground truth mask on an image", + "input": "", + "output": "

I need to do similar things pretty often. In my favorite StackOverflow fashion, here is a script that you can copy and paste. I hope the code itself is self-explanatory. There are a few things that you can tune and try (e.g., color maps, overlay styles). It uses multiprocessing.Pool for faster batch-processing, resizes the mask to match the shape of the image, assumes the mask is in .png format, and depends on the file structure that you posted.

\n
import os\nfrom os import path\nimport cv2\nimport numpy as np\n\nfrom argparse import ArgumentParser\nfrom multiprocessing import Pool\n\n\ndef create_overlay(image, mask):\n    """\n    image: H*W*3 numpy array\n    mask: H*W numpy array\n    If dimensions do not match, the mask is upsampled to match that of the image\n\n    Returns a H*W*3 numpy array\n    """\n    h, w = image.shape[:2]\n    mask = cv2.resize(mask, dsize=(w,h), interpolation=cv2.INTER_CUBIC)\n\n    # color options: https://docs.opencv.org/4.x/d3/d50/group__imgproc__colormap.html\n    mask_color = cv2.applyColorMap(mask, cv2.COLORMAP_HOT).astype(np.float32)\n    mask = mask[:, :, None] # create trailing dimension for broadcasting\n    mask = mask.astype(np.float32)/255\n\n    # different other options that you can use to merge image/mask\n    overlay = (image*(1-mask)+mask_color*mask).astype(np.uint8)\n    # overlay = (image*0.5 + mask_color*0.5).astype(np.uint8)\n    # overlay = (image + mask_color).clip(0,255).astype(np.uint8)\n\n    return overlay\n\ndef process_video(video_name):\n    """\n    Processing frames in a single video\n    """\n    vid_image_path = path.join(image_path, video_name)\n    vid_mask_path = path.join(mask_path, video_name)\n    vid_output_path = path.join(output_path, video_name)\n    os.makedirs(vid_output_path, exist_ok=True)\n\n    frames = sorted(os.listdir(vid_image_path))\n    for f in frames:\n        image = cv2.imread(path.join(vid_image_path, f))\n        mask = cv2.imread(path.join(vid_mask_path, f.replace('.jpg','.png')), cv2.IMREAD_GRAYSCALE)\n        overlay = create_overlay(image, mask)\n        cv2.imwrite(path.join(vid_output_path, f), overlay)\n\n\nparser = ArgumentParser()\nparser.add_argument('--image_path')\nparser.add_argument('--mask_path')\nparser.add_argument('--output_path')\nargs = parser.parse_args()\n\nimage_path = args.image_path\nmask_path = args.mask_path\noutput_path = args.output_path\n\nif __name__ == '__main__':\n    videos = 
sorted(\n        list(set(os.listdir(image_path)).intersection(\n                set(os.listdir(mask_path))))\n    )\n\n    print(f'Processing {len(videos)} videos.')\n\n    pool = Pool()\n    pool.map(process_video, videos)\n\n    print('Done.')\n\n
\n

Output:\n\"output

\n

EDIT: Made it work on Windows; changed pool.apply to pool.map.

\n", + "system": "" + }, + { + "instruction": "pytorch model returns NANs after first round", + "input": "", + "output": "

You should switch to full precision when updating the gradients and back to half precision during training:

\n\n
loss.backward()\nmodel.float() # add this here\noptimizer.step()\n
\n\n

Switch back to half precision:

\n\n
for images, scores in train_loader:\n    model.half()  # add this here\n    process_batch()\n
\n", + "system": "" + }, + { + "instruction": "pytorch - use device inside 'with statement'", + "input": "", + "output": "

Unfortunately, in the current implementation, the with-device statement doesn't work this way; it can only be used to switch between CUDA devices.

\n\n
\n\n

You will still have to use the device parameter to specify which device is used (or .cuda() to move the tensor to the specified GPU), with syntax like this:

\n\n
# allocates a tensor on GPU 1\na = torch.tensor([1., 2.], device=cuda)\n
\n\n

So to access cuda:1:

\n\n
cuda = torch.device('cuda')\n\nwith torch.cuda.device(1):\n    # allocates a tensor on GPU 1\n    a = torch.tensor([1., 2.], device=cuda)\n
\n\n

And to access cuda:2:

\n\n
cuda = torch.device('cuda')\n\nwith torch.cuda.device(2):\n    # allocates a tensor on GPU 2\n    a = torch.tensor([1., 2.], device=cuda)\n
\n\n

However tensors without the device parameter will still be CPU tensors:

\n\n
cuda = torch.device('cuda')\n\nwith torch.cuda.device(1):\n    # allocates a tensor on CPU\n    a = torch.tensor([1., 2.])\n
\n\n

To sum it up:

\n\n
\n

No - unfortunately, in the current implementation of the with-device\n statement, it is not possible to use it in the way you described in your\n question.

\n
\n\n
\n\n

Here are some more examples from the documentation:

\n\n
cuda = torch.device('cuda')     # Default CUDA device\ncuda0 = torch.device('cuda:0')\ncuda2 = torch.device('cuda:2')  # GPU 2 (these are 0-indexed)\n\nx = torch.tensor([1., 2.], device=cuda0)\n# x.device is device(type='cuda', index=0)\ny = torch.tensor([1., 2.]).cuda()\n# y.device is device(type='cuda', index=0)\n\nwith torch.cuda.device(1):\n    # allocates a tensor on GPU 1\n    a = torch.tensor([1., 2.], device=cuda)\n\n    # transfers a tensor from CPU to GPU 1\n    b = torch.tensor([1., 2.]).cuda()\n    # a.device and b.device are device(type='cuda', index=1)\n\n    # You can also use ``Tensor.to`` to transfer a tensor:\n    b2 = torch.tensor([1., 2.]).to(device=cuda)\n    # b.device and b2.device are device(type='cuda', index=1)\n\n    c = a + b\n    # c.device is device(type='cuda', index=1)\n\n    z = x + y\n    # z.device is device(type='cuda', index=0)\n\n    # even within a context, you can specify the device\n    # (or give a GPU index to the .cuda call)\n    d = torch.randn(2, device=cuda2)\n    e = torch.randn(2).to(cuda2)\n    f = torch.randn(2).cuda(cuda2)\n    # d.device, e.device, and f.device are all device(type='cuda', index=2)\n
\n", + "system": "" + }, + { + "instruction": "pytorch - use device inside 'with statement'", + "input": "", + "output": "

Unfortunately, in the current implementation, the with-device statement doesn't work this way; it can only be used to switch between CUDA devices.

\n\n
\n\n

You will still have to use the device parameter to specify which device is used (or .cuda() to move the tensor to the specified GPU), with syntax like this:

\n\n
# allocates a tensor on GPU 1\na = torch.tensor([1., 2.], device=cuda)\n
\n\n

So to access cuda:1:

\n\n
cuda = torch.device('cuda')\n\nwith torch.cuda.device(1):\n    # allocates a tensor on GPU 1\n    a = torch.tensor([1., 2.], device=cuda)\n
\n\n

And to access cuda:2:

\n\n
cuda = torch.device('cuda')\n\nwith torch.cuda.device(2):\n    # allocates a tensor on GPU 2\n    a = torch.tensor([1., 2.], device=cuda)\n
\n\n

However tensors without the device parameter will still be CPU tensors:

\n\n
cuda = torch.device('cuda')\n\nwith torch.cuda.device(1):\n    # allocates a tensor on CPU\n    a = torch.tensor([1., 2.])\n
\n\n

To sum it up:

\n\n
\n

No - unfortunately, in the current implementation of the with-device\n statement, it is not possible to use it in the way you described in your\n question.

\n
\n\n
\n\n

Here are some more examples from the documentation:

\n\n
cuda = torch.device('cuda')     # Default CUDA device\ncuda0 = torch.device('cuda:0')\ncuda2 = torch.device('cuda:2')  # GPU 2 (these are 0-indexed)\n\nx = torch.tensor([1., 2.], device=cuda0)\n# x.device is device(type='cuda', index=0)\ny = torch.tensor([1., 2.]).cuda()\n# y.device is device(type='cuda', index=0)\n\nwith torch.cuda.device(1):\n    # allocates a tensor on GPU 1\n    a = torch.tensor([1., 2.], device=cuda)\n\n    # transfers a tensor from CPU to GPU 1\n    b = torch.tensor([1., 2.]).cuda()\n    # a.device and b.device are device(type='cuda', index=1)\n\n    # You can also use ``Tensor.to`` to transfer a tensor:\n    b2 = torch.tensor([1., 2.]).to(device=cuda)\n    # b.device and b2.device are device(type='cuda', index=1)\n\n    c = a + b\n    # c.device is device(type='cuda', index=1)\n\n    z = x + y\n    # z.device is device(type='cuda', index=0)\n\n    # even within a context, you can specify the device\n    # (or give a GPU index to the .cuda call)\n    d = torch.randn(2, device=cuda2)\n    e = torch.randn(2).to(cuda2)\n    f = torch.randn(2).cuda(cuda2)\n    # d.device, e.device, and f.device are all device(type='cuda', index=2)\n
\n", + "system": "" + }, + { + "instruction": "pytorch model returns NANs after first round", + "input": "", + "output": "

You should switch to full precision when updating the gradients and back to half precision during training:

\n\n
loss.backward()\nmodel.float() # add this here\noptimizer.step()\n
\n\n

Switch back to half precision:

\n\n
for images, scores in train_loader:\n    model.half()  # add this here\n    process_batch()\n
\n", + "system": "" + }, + { + "instruction": "torch.manual_seed(seed) get RuntimeError: CUDA error: device-side assert triggered", + "input": "", + "output": "

In my experience, this error may occur because of some kind of inconsistency between the number of labels in your targets and the number of classes in your model.

\n

To solve it you can try to:

\n
    \n
  1. Make sure that the labels in your target data start from 0. If you have n classes in your data, your target labels should be [0, 1, 2, ..., n-1]
  2. \n
  3. Make sure that the model you are using is set to work with n classes
  4. \n
\n", + "system": "" + }, + { + "instruction": "How to create upper triangular matrix in Pytorch?", + "input": "", + "output": "
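For point 1, a minimal hedged sketch (plain Python, no GPU needed; `out_of_range_labels` is a hypothetical helper, not part of PyTorch) that flags labels outside the valid range:

```python
def out_of_range_labels(targets, n_classes):
    """Return the target labels that fall outside the valid range [0, n_classes - 1]."""
    return [t for t in targets if t < 0 or t >= n_classes]

# model configured for 3 classes -> valid labels are 0, 1, 2
print(out_of_range_labels([0, 2, 1, 3], n_classes=3))  # [3]
```

Any label this reports is a candidate cause of the device-side assert.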
import torch\nupper_tri = torch.ones(row, col).triu()\n
\n

Eg:

\n
>> mat = torch.ones(3, 3).triu()\n>> print(mat)\ntensor([[1., 1., 1.],\n        [0., 1., 1.],\n        [0., 0., 1.]])\n
\n", + "system": "" + }, + { + "instruction": "Fastai on google colab", + "input": "", + "output": "
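The `diagonal` argument of `triu` shifts which diagonal is kept; a short CPU-only sketch:

```python
import torch

mat = torch.ones(3, 3)
print(mat.triu())             # keeps the main diagonal and everything above it
print(mat.triu(diagonal=1))   # strictly upper triangular: main diagonal zeroed too
print(mat.tril())             # the lower-triangular counterpart
```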

The following should get you up and running with the 0.7.0 version of fast.ai (used by v2 of the course) on Google Colab:

\n\n
!pip install -q \"fastai==0.7.0\" Pillow==4.1.1 torchtext==0.2.3\n!apt-get -qq install -y libsm6 libxext6 && pip install -q -U opencv-python\n\nfrom wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag\nplatform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())\n# !apt update -q\n!apt install -y libsm6 libxext6\n\nfrom os import path\naccelerator = 'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu'\ntorch_whl = f\"http://download.pytorch.org/whl/{accelerator}/torch-0.3.0.post4-{platform}-linux_x86_64.whl\"\n!pip install -q {torch_whl} torchvision image\n
\n", + "system": "" + }, + { + "instruction": "Fastai on google colab", + "input": "", + "output": "

The following should get you up and running with the 0.7.0 version of fast.ai (used by v2 of the course) on Google Colab:

\n\n
!pip install -q \"fastai==0.7.0\" Pillow==4.1.1 torchtext==0.2.3\n!apt-get -qq install -y libsm6 libxext6 && pip install -q -U opencv-python\n\nfrom wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag\nplatform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())\n# !apt update -q\n!apt install -y libsm6 libxext6\n\nfrom os import path\naccelerator = 'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu'\ntorch_whl = f\"http://download.pytorch.org/whl/{accelerator}/torch-0.3.0.post4-{platform}-linux_x86_64.whl\"\n!pip install -q {torch_whl} torchvision image\n
\n", + "system": "" + }, + { + "instruction": "Calculating two gradients in pytorch and reusing an intermediate gradient", + "input": "", + "output": "

You can decouple the nested function using chain rule. However, there will be some differences due to numerical issues.

\n
import torch\nimport time\n\ndef slow_fun(x):\n    A = x*torch.ones((1000,1000))\n    B = torch.matrix_exp(1j*A)\n    return torch.real(torch.trace(B))\n\n# baseline z1\nx = torch.tensor(1.0, requires_grad = True)\ny = slow_fun(x)\nz1 = y**2\nz1.backward()\ndz1_dx = x.grad\nprint(dz1_dx)\n> tensor(-1648274.2500)\n\n# baseline z2\nx = torch.tensor(1.0, requires_grad = True)\ny = slow_fun(x)\nz2 = torch.sqrt(y)\nz2.backward()\ndz2_dx = x.grad\nprint(dz2_dx)\n> tensor(-13.1979)\n\n# compute just dy/dx\nx = torch.tensor(1.0, requires_grad = True)\ny = slow_fun(x)\ny.backward()\ndy_dx = x.grad\n\n# detach y to prevent full backprop\ny1 = y.detach()\ny1.requires_grad = True\nz1 = y1**2\nz1.backward()\ndz1_dy = y1.grad\n# compute gradient with chain rule\ndz1_dx = dz1_dy * dy_dx\nprint(dz1_dx)\n> tensor(-1672148.5000)\n\n# detach y to prevent full backprop\ny2 = y.detach()\ny2.requires_grad = True\nz2 = torch.sqrt(y2)\nz2.backward()\ndz2_dy = y2.grad\n# compute gradient with chain rule\ndz2_dx = dz2_dy * dy_dx\nprint(dz2_dx)\n> tensor(-13.1980)\n
\n", + "system": "" + }, + { + "instruction": "Pytorch Lightning: How to save a checkpoint for every validation epoch?", + "input": "", + "output": "
max_epochs = 100\nval_every_n_epochs = 1\n\ncheckpoint_callback = ModelCheckpoint(\n        # dirpath=checkpoints_path, # <--- specify this on the trainer itself for version control\n        filename="fa_classifier_{epoch:02d}",\n        every_n_epochs=val_every_n_epochs,\n        save_top_k=-1,  # <--- this is important!\n    )\n trainer = Trainer(\n        callbacks=[checkpoint_callback],     \n        default_root_dir=checkpoints_path,\n        check_val_every_n_epoch=val_every_n_epochs,\n        max_epochs=max_epochs,\n        gpus=1\n    )\n
\n

This will not delete saved checkpoints.

\n", + "system": "" + }, + { + "instruction": "How to solve "CUDA: Invalid Device Ordinal" Error in PyTorch Single-GPU Inference on a Model Trained with DataParallel", + "input": "", + "output": "

TLDR

\n

This answer shows that one of the ways to get this error is when the Python code inside the file sets the environment variable at the beginning, overriding the otherwise working CLI-set environment variable. There is working code as proof.

\n

Code that throws this error

\n

The error "RuntimeError: CUDA error: invalid device ordinal" is thrown with the following code in a Jupyter Notebook cell:

\n
!export CUDA_VISIBLE_DEVICES='4,5,6,7'\n\nimport os\nos.environ['CUDA_VISIBLE_DEVICES'] = '4,5,6,7'\n\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\n\nmodel_name = "dbmdz/german-gpt2"\ntokenizer = AutoTokenizer.from_pretrained(model_name)\nmodel = AutoModelForCausalLM.from_pretrained(model_name)\nconfig = model.config\n\n# model.to('cuda:4')\n# # RuntimeError: CUDA error: invalid device ordinal\n# model.to('cuda:5')\n# # RuntimeError: CUDA error: invalid device ordinal\n# model.to('cuda:6')\n# # RuntimeError: CUDA error: invalid device ordinal\n# model.to('cuda:7')\n# # RuntimeError: CUDA error: invalid device ordinal\n\n# While instead, and astonishingly, all other devices 0-3 work:\nmodel.to('cuda:0')\n# works\nmodel.to('cuda:1')\n# works\nmodel.to('cuda:2')\n# works\nmodel.to('cuda:3')\n# works\n
\n

Here is the long error traceback if the model is put on one of the os.environ[] devices:

\n
model.to('cuda:5')\n
\n

Out:

\n
---------------------------------------------------------------------------\nRuntimeError                              Traceback (most recent call last)\nCell In [11], line 1\n----> 1 model.to('cuda:5')\n\nFile /srv/home/seid/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:1145, in Module.to(self, *args, **kwargs)\n   1141         return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None,\n   1142                     non_blocking, memory_format=convert_to_format)\n   1143     return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)\n-> 1145 return self._apply(convert)\n\nFile /srv/home/seid/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:797, in Module._apply(self, fn)\n    795 def _apply(self, fn):\n    796     for module in self.children():\n--> 797         module._apply(fn)\n    799     def compute_should_use_set_data(tensor, tensor_applied):\n    800         if torch._has_compatible_shallow_copy_type(tensor, tensor_applied):\n    801             # If the new tensor has compatible tensor type as the existing tensor,\n    802             # the current behavior is to change the tensor in-place using `.data =`,\n   (...)\n    807             # global flag to let the user control whether they want the future\n    808             # behavior of overwriting the existing tensor or not.\n\nFile /srv/home/seid/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:797, in Module._apply(self, fn)\n    795 def _apply(self, fn):\n    796     for module in self.children():\n--> 797         module._apply(fn)\n    799     def compute_should_use_set_data(tensor, tensor_applied):\n    800         if torch._has_compatible_shallow_copy_type(tensor, tensor_applied):\n    801             # If the new tensor has compatible tensor type as the existing tensor,\n    802             # the current behavior is to change the tensor in-place using `.data =`,\n   (...)\n    807             
# global flag to let the user control whether they want the future\n    808             # behavior of overwriting the existing tensor or not.\n\nFile /srv/home/seid/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:820, in Module._apply(self, fn)\n    816 # Tensors stored in modules are graph leaves, and we don't want to\n    817 # track autograd history of `param_applied`, so we have to use\n    818 # `with torch.no_grad():`\n    819 with torch.no_grad():\n--> 820     param_applied = fn(param)\n    821 should_use_set_data = compute_should_use_set_data(param, param_applied)\n    822 if should_use_set_data:\n\nFile /srv/home/seid/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:1143, in Module.to.<locals>.convert(t)\n   1140 if convert_to_format is not None and t.dim() in (4, 5):\n   1141     return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None,\n   1142                 non_blocking, memory_format=convert_to_format)\n-> 1143 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)\n\nRuntimeError: CUDA error: invalid device ordinal\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\n
\n

And here is how it looks for the working devices 0-3:

\n
model.to('cuda:2')\n
\n

Out:

\n
GPT2LMHeadModel(\n  (transformer): GPT2Model(\n    (wte): Embedding(50265, 768)\n    (wpe): Embedding(1024, 768)\n    (drop): Dropout(p=0.0, inplace=False)\n    (h): ModuleList(\n      (0-11): 12 x GPT2Block(\n        (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\n        (attn): GPT2Attention(\n          (c_attn): Conv1D()\n          (c_proj): Conv1D()\n          (attn_dropout): Dropout(p=0.0, inplace=False)\n          (resid_dropout): Dropout(p=0.0, inplace=False)\n        )\n        (ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\n        (mlp): GPT2MLP(\n          (c_fc): Conv1D()\n          (c_proj): Conv1D()\n          (act): NewGELUActivation()\n          (dropout): Dropout(p=0.0, inplace=False)\n        )\n      )\n    )\n    (ln_f): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\n  )\n  (lm_head): Linear(in_features=768, out_features=50265, bias=False)\n)\n
\n

I also checked it with:

\n
%set_env CUDA_VISIBLE_DEVICES='4,5,6,7'\n
\n

This code throws the same error:

\n
!export CUDA_VISIBLE_DEVICES='4,5,6,7'\n%set_env CUDA_VISIBLE_DEVICES='4,5,6,7'\n\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\n\nmodel_name = "dbmdz/german-gpt2"\ntokenizer = AutoTokenizer.from_pretrained(model_name)\nmodel = AutoModelForCausalLM.from_pretrained(model_name)\nconfig = model.config\n\nmodel.to('cuda:4')\n
\n

Thus, do not write %set_env CUDA_VISIBLE_DEVICES='4,5,6,7' since it is the same as the failing os.environ[]. If you write that after the !export ... line and drop the os.environ[], you get the same error in the output.

\n

Funnily enough, there was no hint at !export ... among all of the answers at How to set env variable in Jupyter notebook - and that is a gold question:

\n\n

Answer

\n

The example is for devices 4,5,6,7. Check whether you have os.environ[] or %set_env in the Python code.

\n

If you begin with:

\n
import os\nos.environ['CUDA_VISIBLE_DEVICES'] = '4,5,6,7'\n
\n

Instead of the right:

\n
export CUDA_VISIBLE_DEVICES='4,5,6,7'\n
\n

Or if you are in a Jupyter Notebook cell:

\n
!export CUDA_VISIBLE_DEVICES='4,5,6,7'\n
\n

Or:

\n
CUDA_VISIBLE_DEVICES=4,5,6,7 python forward_test_ins.py\n# and then without os.environ in the forward_test_ins.py!\n
\n

Answer

\n

You must drop os.environ[] from the Python file since that overwrites the CLI command. Thus, even if you run your Python executable with the right setting, it is overridden by the os.environ[] - which does not work since it is not on CLI level.

\n
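One way a Python-level assignment can fail to take effect can be shown with a hedged, pure-Python sketch (`FakeCudaRuntime` and `MY_VISIBLE_DEVICES` are made up for illustration): a component that reads the environment once at initialization time will not notice a later `os.environ` change:

```python
import os

os.environ["MY_VISIBLE_DEVICES"] = "0,1,2,3"   # set "early", e.g. on the CLI

class FakeCudaRuntime:
    """Stand-in for a library that reads the environment once when it initializes."""
    def __init__(self):
        self.visible = os.environ["MY_VISIBLE_DEVICES"].split(",")

runtime = FakeCudaRuntime()                    # the library initializes here

os.environ["MY_VISIBLE_DEVICES"] = "4,5,6,7"   # too late: runtime already read it
print(runtime.visible)  # ['0', '1', '2', '3']
```

This is why the ordering of environment-variable assignments relative to library initialization matters.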

Links:

\n\n
\n

it shouldn\u2019t be inside the python script but to be set as an enviroment variable in the console such as CUDA_VISIBLE_DEVICES=0,1 python your_script.py Note that you SHOUDLN\u2019T t set it as a permanent enviroment variable in the bashrc as it affects the whole system.

\n
\n\n

Code answer

\n

This runs through if you put !export ... at the beginning of the cell.\nYou can also run your Python executable with CUDA_VISIBLE_DEVICES=4,5,6,7 python forward_test_ins.py and leave out the !export ... line.

\n
!export CUDA_VISIBLE_DEVICES='4,5,6,7'\n\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\n\nmodel_name = "dbmdz/german-gpt2"\ntokenizer = AutoTokenizer.from_pretrained(model_name)\nmodel = AutoModelForCausalLM.from_pretrained(model_name)\nconfig = model.config\n\nmodel.to('cuda:4')\n
\n

You can also set environment variables in the .bashrc, see this answer above, or the Jupyter Notebook service settings, see this answer above (but mind: the error will also be thrown outside of Jupyter Notebook, thus, better change the .bashrc (link 1) and not the Jupyter Notebook service settings (link 2).

\n

DataParallel

\n

If you get the error only from code using DataParallel, note that DataParallel always saves the parameters and buffers to GPU 0, see Transformers Trainer: "RuntimeError: module must have its parameters ... on device cuda:6 (device_ids[0]) but found one of them on device: cuda:0". Therefore, in the example above, it is better to use a device_ids list that begins with 0 and to put 0 at the beginning of CUDA_VISIBLE_DEVICES as well. In your question, you show a snippet with CUDA_VISIBLE_DEVICES=0, so that code cannot run into an error like this. But if you ran CUDA_VISIBLE_DEVICES=1 instead, you would get problems with two devices, 0 and 1, both saving parameters and buffers even though only one device is allowed to do that.

\n

Other device problems if you work with many GPUs

\n

Something else that might (possibly) interfere with your devices is a strange mapping, namely that the GPU referenced in code turns out to be a different physical GPU. Check this with the CLI command nvidia-smi (Jupyter Notebook: !nvidia-smi). See "model.to('cuda:6')" becomes (nvidia-smi) GPU 4, same with any other "cuda:MY_GPU", only "cuda:0" becomes GPU 0. How do I get rid of this mapping?. Since I have this strange mapping in my setup, and since I got the same error as in the question with os.environ[], you might want to check that. For example, to get 0,4,5,6,7, I had to code 0,6,3,7,2. UPDATE: There is an answer to that strange mapping question, and the link is a cross-site duplicate of How does CUDA assign device IDs to GPUs?.

\n", + "system": "" + }, + { + "instruction": "Why does StableDiffusionPipeline return black images when generating multiple images at once?", + "input": "", + "output": "

Apparently it is indeed an Apple Silicon (M1/M2) issue, and Hugging Face is not yet sure why it is happening; see this GitHub issue for more details.

\n", + "system": "" + }, + { + "instruction": "TorchVision using pretrained weights for entire model vs backbone", + "input": "", + "output": "

The difference is pretty simple: you can either choose to do transfer learning on the backbone only or on the whole network.

\n

RetinaNet from Torchvision has a Resnet50 backbone. You should be able to do both of:

\n\n

As implied by their names, the backbone weights are different. The former were trained on COCO (object detection) while the latter were trained on ImageNet (classification).

\n

To answer your question, pretrained weights means that the whole network, including the backbone, is initialized with pretrained weights. However, I don't think that it calls backbone_weights under the hood.

\n", + "system": "" + }, + { + "instruction": "Segmentation fault when importing torch-sparse (installing pytorch-geometric)", + "input": "", + "output": "

I've found a combination of packages that works for me - hopefully someone else will have this issue at some point and be able to reproduce the steps from me talking to myself here. The full process for getting stuff working was:

\n
    \n
  1. Fresh conda environment with forced Python=3.9 (conda create -n ENVNAME python=3.9)
  2. \n
  3. Activate that environment
  4. \n
  5. Install basic python packages (conda install numpy pandas matplotlib scikit-learn)
  6. \n
  7. Check CUDA version if working with a GPU (nvidia-smi in terminal prints these details for NVIDIA cards)
  8. \n
  9. Install Pytorch using their suggested conda command (conda install pytorch torchvision torchaudio cudatoolkit=CUDA_VERSION -c pytorch -c conda-forge). This had to go through the env solving process on my machine.
  10. \n
  11. Install pytorch geometric (or just torch sparse if that's all you need) with conda install pyg -c pyg. Again this had a solving process.
  12. \n
  13. Check that torch_sparse imports without fault
  14. \n
\n

Here's the conda list for this working combination of packages:

\n
# Name                    Version                   Build  Channel\n_libgcc_mutex             0.1                        main  \n_openmp_mutex             5.1                       1_gnu  \nblas                      1.0                         mkl  \nbottleneck                1.3.5            py39h7deecbd_0  \nbrotli                    1.0.9                h5eee18b_7  \nbrotli-bin                1.0.9                h5eee18b_7  \nbrotlipy                  0.7.0           py39hb9d737c_1004    conda-forge\nbzip2                     1.0.8                h7f98852_4    conda-forge\nca-certificates           2022.9.24            ha878542_0    conda-forge\ncertifi                   2022.9.24        py39h06a4308_0  \ncffi                      1.14.6           py39he32792d_0    conda-forge\ncharset-normalizer        2.1.1              pyhd8ed1ab_0    conda-forge\ncryptography              37.0.2           py39hd97740a_0    conda-forge\ncudatoolkit               11.6.0              hecad31d_10    conda-forge\ncycler                    0.11.0             pyhd3eb1b0_0  \ndbus                      1.13.18              hb2f20db_0  \nexpat                     2.4.9                h6a678d5_0  \nffmpeg                    4.3                  hf484d3e_0    pytorch\nfftw                      3.3.9                h27cfd23_1  \nfontconfig                2.13.1               h6c09931_0  \nfonttools                 4.25.0             pyhd3eb1b0_0  \nfreetype                  2.11.0               h70c0345_0  \ngiflib                    5.2.1                h7b6447c_0  \nglib                      2.69.1               h4ff587b_1  \ngmp                       6.2.1                h58526e2_0    conda-forge\ngnutls                    3.6.13               h85f3911_1    conda-forge\ngst-plugins-base          1.14.0               h8213a91_2  \ngstreamer                 1.14.0               h28cd5cc_2  \nicu                       58.2                 he6710b0_3  \nidna                      3.4      
          pyhd8ed1ab_0    conda-forge\nintel-openmp              2021.4.0          h06a4308_3561  \njinja2                    3.0.3              pyhd3eb1b0_0  \njoblib                    1.1.1            py39h06a4308_0  \njpeg                      9e                   h7f8727e_0  \nkiwisolver                1.4.2            py39h295c915_0  \nkrb5                      1.19.2               hac12032_0  \nlame                      3.100             h7f98852_1001    conda-forge\nlcms2                     2.12                 h3be6417_0  \nld_impl_linux-64          2.38                 h1181459_1  \nlerc                      3.0                  h295c915_0  \nlibbrotlicommon           1.0.9                h5eee18b_7  \nlibbrotlidec              1.0.9                h5eee18b_7  \nlibbrotlienc              1.0.9                h5eee18b_7  \nlibclang                  10.0.1          default_hb85057a_2  \nlibdeflate                1.8                  h7f8727e_5  \nlibedit                   3.1.20210910         h7f8727e_0  \nlibevent                  2.1.12               h8f2d780_0  \nlibffi                    3.3                  he6710b0_2  \nlibgcc-ng                 11.2.0               h1234567_1  \nlibgfortran-ng            11.2.0               h00389a5_1  \nlibgfortran5              11.2.0               h1234567_1  \nlibgomp                   11.2.0               h1234567_1  \nlibiconv                  1.17                 h166bdaf_0    conda-forge\nlibllvm10                 10.0.1               hbcb73fb_5  \nlibpng                    1.6.37               hbc83047_0  \nlibpq                     12.9                 h16c4e8d_3  \nlibstdcxx-ng              11.2.0               h1234567_1  \nlibtiff                   4.4.0                hecacb30_0  \nlibuuid                   1.0.3                h7f8727e_2  \nlibwebp                   1.2.4                h11a3e52_0  \nlibwebp-base              1.2.4                h5eee18b_0  \nlibxcb                    1.15           
      h7f8727e_0  \nlibxkbcommon              1.0.1                hfa300c1_0  \nlibxml2                   2.9.14               h74e7548_0  \nlibxslt                   1.1.35               h4e12654_0  \nlz4-c                     1.9.3                h295c915_1  \nmarkupsafe                2.1.1            py39h7f8727e_0  \nmatplotlib                3.5.2            py39h06a4308_0  \nmatplotlib-base           3.5.2            py39hf590b9c_0  \nmkl                       2021.4.0           h06a4308_640  \nmkl-service               2.4.0            py39h7f8727e_0  \nmkl_fft                   1.3.1            py39hd3c417c_0  \nmkl_random                1.2.2            py39h51133e4_0  \nmunkres                   1.1.4                      py_0  \nncurses                   6.3                  h5eee18b_3  \nnettle                    3.6                  he412f7d_0    conda-forge\nnspr                      4.33                 h295c915_0  \nnss                       3.74                 h0370c37_0  \nnumexpr                   2.8.3            py39h807cd23_0  \nnumpy                     1.23.3           py39h14f4228_0  \nnumpy-base                1.23.3           py39h31eccc5_0  \nopenh264                  2.1.1                h780b84a_0    conda-forge\nopenssl                   1.1.1q               h7f8727e_0  \npackaging                 21.3               pyhd3eb1b0_0  \npandas                    1.4.4            py39h6a678d5_0  \npcre                      8.45                 h295c915_0  \npillow                    9.2.0            py39hace64e9_1  \npip                       22.2.2           py39h06a4308_0  \nply                       3.11             py39h06a4308_0  \npycparser                 2.21               pyhd8ed1ab_0    conda-forge\npyg                       2.1.0           py39_torch_1.12.0_cu116    pyg\npyopenssl                 22.0.0             pyhd8ed1ab_1    conda-forge\npyparsing                 3.0.9            py39h06a4308_0  \npyqt                    
  5.15.7           py39h6a678d5_1  \npyqt5-sip                 12.11.0          py39h6a678d5_1  \npysocks                   1.7.1              pyha2e5f31_6    conda-forge\npython                    3.9.13               haa1d7c7_2  \npython-dateutil           2.8.2              pyhd3eb1b0_0  \npython_abi                3.9                      2_cp39    conda-forge\npytorch                   1.12.1          py3.9_cuda11.6_cudnn8.3.2_0    pytorch\npytorch-cluster           1.6.0           py39_torch_1.12.0_cu116    pyg\npytorch-mutex             1.0                        cuda    pytorch\npytorch-scatter           2.0.9           py39_torch_1.12.0_cu116    pyg\npytorch-sparse            0.6.15          py39_torch_1.12.0_cu116    pyg\npytz                      2022.1           py39h06a4308_0  \nqt-main                   5.15.2               h327a75a_7  \nqt-webengine              5.15.9               hd2b0992_4  \nqtwebkit                  5.212                h4eab89a_4  \nreadline                  8.2                  h5eee18b_0  \nrequests                  2.28.1             pyhd8ed1ab_1    conda-forge\nscikit-learn              1.1.2            py39h6a678d5_0  \nscipy                     1.9.1            py39h14f4228_0  \nsetuptools                63.4.1           py39h06a4308_0  \nsip                       6.6.2            py39h6a678d5_0  \nsix                       1.16.0             pyhd3eb1b0_1  \nsqlite                    3.39.3               h5082296_0  \nthreadpoolctl             2.2.0              pyh0d69192_0  \ntk                        8.6.12               h1ccaba5_0  \ntoml                      0.10.2             pyhd3eb1b0_0  \ntorchaudio                0.12.1               py39_cu116    pytorch\ntorchvision               0.13.1               py39_cu116    pytorch\ntornado                   6.2              py39h5eee18b_0  \ntqdm                      4.64.1           py39h06a4308_0  \ntyping_extensions         4.4.0              pyha770c72_0    
conda-forge\ntzdata                    2022e                h04d1e81_0  \nurllib3                   1.26.11            pyhd8ed1ab_0    conda-forge\nwheel                     0.37.1             pyhd3eb1b0_0  \nxz                        5.2.6                h5eee18b_0  \nzlib                      1.2.13               h5eee18b_0  \nzstd                      1.5.2                ha4553b6_0 
\n", + "system": "" + }, + { + "instruction": "how to integrate torch into a docker image while keeping image size reasonable?", + "input": "", + "output": "

By default, torch packages CUDA libraries and related dependencies. Add --extra-index-url https://download.pytorch.org/whl/cpu and --no-cache-dir to the pip install command if you do not require CUDA.

\n

RUN pip install --no-cache-dir -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cpu

\n

Also it's good practice to remove the apt list cache:

\n
RUN apt-get update \\\n    && apt-get install -y \\\n    ffmpeg \\\n    libsm6 \\\n    libxext6 \\\n    && rm -rf /var/lib/apt/lists/*\n
\n", + "system": "" + }, + { + "instruction": "Pytorch dataloaders : Bad file descriptor and EOF for workers>0", + "input": "", + "output": "

I have finally found a solution. Adding this configuration to the dataset script works:

\n
import torch.multiprocessing\ntorch.multiprocessing.set_sharing_strategy('file_system')\n
\n

By default, the sharing strategy is set to 'file_descriptor'.

\n
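A quick sketch for inspecting and switching the strategy interactively:

```python
import torch.multiprocessing as mp

# list the strategies the platform supports, then opt into 'file_system'
print(mp.get_all_sharing_strategies())
mp.set_sharing_strategy('file_system')
print(mp.get_sharing_strategy())   # now 'file_system'
```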

I have tried some solutions explained in :

\n\n

As suggested by @AlexMeredith, the error may be linked to the distributed filesystem (Lustre) that some clusters use. The error may also come from distributed shared memory.

\n", + "system": "" + }, + { + "instruction": "AssertionError: If capturable=False, state_steps should not be CUDA tensors", + "input": "", + "output": "

It seems related to a newly introduced parameter (capturable) for the Adam and AdamW optimizers. There are currently two workarounds:

\n
    \n
  1. Forcing capturable = True after loading the checkpoint: optim.param_groups[0]['capturable'] = True. This seems to slow down model training by approx. 10% (YMMV depending on the setup).
  2. \n
  3. Reverting PyTorch back to previous versions (could be 1.11.0).
  4. \n
\n

Source: https://github.com/pytorch/pytorch/issues/80809#issuecomment-1173481031
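A minimal sketch of the first workaround — the model and optimizer below are placeholders for whatever you restore from your own checkpoint:

```python
import torch

model = torch.nn.Linear(4, 2)                        # placeholder model
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
# optim.load_state_dict(checkpoint['optimizer'])     # <- your checkpoint restore here

# workaround 1: force capturable on every param group
for group in optim.param_groups:
    group['capturable'] = True
```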

\n", + "system": "" + }, + { + "instruction": "In the latest version of PyTorch, what is best practice to get all tensors to use a particular device by default?", + "input": "", + "output": "

PyTorch has a function to change the default tensor type: set_default_tensor_type. Applying the default type in the main script:

\n
>>> import torch\n>>>\n>>> if __name__ == '__main__':\n...     cuda = torch.cuda.is_available()\n...     if cuda:\n...         torch.set_default_tensor_type('torch.cuda.FloatTensor')\n...     a = torch.randn(3,3)\n...     print(a.device)\n...\ncuda:0\n
\n
\n

Or would that be a bad thing to do for some reason? E.g. is there any reason I wouldn't want every tensor/operation to be done on cuda by default?

\n
\n

I couldn't find any reference or document answering this question directly. However, in my opinion, it's to avoid memory fragmentation in GPU memory.

\n

I'm not an expert, but data in memory should be arranged in an efficient way; if not, the wasted space can cause OOM. That's why, by default, TensorFlow takes all of your GPU's memory no matter how many parameters your model has. You can also improve space usage and speed just by making tensor shapes multiples of 8, as the amp documents suggest.

\n
\n

In practice, higher performance is achieved when A and B dimensions are multiples of 8.

\n
\n
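For comparison, a sketch of controlling the device per tensor manually:

```python
import torch

# pick the device once, pass it explicitly at tensor creation
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
a = torch.randn(3, 3, device=device)
b = torch.zeros_like(a)   # derived tensors follow a's device automatically
```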

In conclusion, I think it's better to control the device of each tensor manually instead of setting the GPU as the default.

\n", + "system": "" + }, + { + "instruction": "Terminate called after throwing an instance of 'std::bad_alloc' from importing torch_geometric", + "input": "", + "output": "

This problem is caused by mismatched PyTorch versions.\nThe current PyTorch in use is 1.11.0, but torch-scatter and torch-sparse were installed against 1.10.1:

\n\n

So, torch-1.10.1 was used to install scatter and sparse, but torch-1.11.0 was the version actually in use.

\n

Simply doing:

\n\n

Resolves the issue.

\n", + "system": "" + }, + { + "instruction": "How to Reverse Order of Rows in a Tensor", + "input": "", + "output": "

According to the documentation, torch.flip has a dims argument, which controls which axes are flipped. In this case torch.flip(tensor_a, dims=(0,)) will return the expected result. Flipping along every dimension, e.g. torch.flip(tensor_a, dims=(0, 1)), reverses the entire tensor, and torch.flip(tensor_a, dims=(1,)) reverses every row, like [1, 2, 3] --> [3, 2, 1].

\n", + "system": "" + }, + { + "instruction": "segmentation fault python after import torch on mac M1", + "input": "", + "output": "

It's because of a numpy dependency conflict; try this:\nfirst import numpy, and then torch:

\n

import numpy; import torch

\n", + "system": "" + }, + { + "instruction": "what does mask_fill in pytorch do here", + "input": "", + "output": "

As the documentation page describes it:

\n
\n

Tensor.masked_fill(mask, value)
\nFills elements of self tensor with value where mask is True. The shape of mask must be broadcastable with the shape of the underlying tensor.

\n
\n

In your case it will place in p1 the value of float("-1e30") at the positions where k1 is equal to zero. Since k1 has singleton dimensions its shape will be broadcasted to the shape of p1.

\n", + "system": "" + }, + { + "instruction": "AttributeError: module 'torch.optim.lr_scheduler' has no attribute 'LinearLR'", + "input": "", + "output": "

The LinearLR scheduler was only introduced in PyTorch v1.10.0. Please make sure your pytorch version is up to date and try again.

\n", + "system": "" + }, + { + "instruction": "Error: "bitwise_and_cpu" not implemented for 'Float'", + "input": "", + "output": "

The error RuntimeError: "bitwise_and_cpu" not implemented for 'Float' means that PyTorch does not support bitwise operations on Float tensors. Bitwise operations (such as &) generally apply only to integer and boolean data, not floating point numbers.

\n
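A quick sketch reproducing the issue and both fixes:

```python
import torch

a = torch.tensor([1.0, 0.0, 1.0])
b = torch.tensor([1.0, 1.0, 0.0])

# a & b  # would raise: "bitwise_and_cpu" not implemented for 'Float'
c = a.to(torch.uint8) & b.to(torch.uint8)  # cast to an integer dtype first
d = torch.logical_and(a, b)                # or use the logical op (bool output)
```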

The solution:\nTo perform bitwise operations, you need to convert the floating point numbers to an integer type (such as torch.uint8 or torch.int). Alternatively, you can use the logical operators (torch.logical_and and torch.logical_or), which work directly with floating-point numbers (treating non-zero values as True) and return Boolean tensors.

\n", + "system": "" + }, + { + "instruction": "Background removal with U2Net is too strong", + "input": "", + "output": "
    \n
  1. you should sharpen the image first (use a strong sharpen)
  2. \n
\n

\"Example\"

\n
/* I've removed the code because I am not sure if it has some portion copied from other open-sourced codes, I am using it in my apps but I can't remember if it is quoted from other source, so I've removed it, also it could be auto-completed using github copilot */\n
\n
    \n
  1. use U2Net to remove the background of the sharpened image
  2. \n
\n

\"Example\"

\n
    \n
  1. use the result from step (2) as mask
  2. \n
\n

\"Example\"

\n
    \n
  1. using the mask from step (3) extract wanted result from original picture
  2. \n
\n

\"Example\"

\n

note: this is a very quick example; you could refine it further

\n", + "system": "" + }, + { + "instruction": "How to make Python libraries accessible in PYTHONPATH?", + "input": "", + "output": "

Let's say you have the source code of this repo cloned to /somepath/stylegan2-ada-pytorch which means that the directories you quoted are at /somepath/stylegan2-ada-pytorch/torch_utils and /somepath/stylegan2-ada-pytorch/dnnlib, respectively.

\n

Now let's say you have a python script that you want to have access to this code. It can be anywhere on your machine, as long as you add this to the top of your python script:

\n
import os\nimport sys\n\n#save the literal filepath to both directories as strings\ntu_path = os.path.join('/somepath','stylegan2-ada-pytorch','torch_utils')\ndnnlib_path = os.path.join('/somepath','stylegan2-ada-pytorch','dnnlib')\n\n#add those strings to python path\nsys.path.append(tu_path)\nsys.path.append(dnnlib_path)\n
\n

Note that this only adds those location to PYTHONPATH for the duration of that python script running, so you need this at the top of any python script that intends to use those libraries.

\n", + "system": "" + }, + { + "instruction": "Pytorch: How to get all data and targets for subsets", + "input": "", + "output": "

The answer in the link you provided basically defeats the purpose of having a data loader: a data loader is meant to load your data to memory chunk by chunk. This has the clear advantage of not having to load the dataset in its entirety at a given moment.

\n

From your ImageFolder dataset you can split your data with the torch.utils.data.random_split function:

\n
>>> def train_test_dataset(dataset, test_split=.2):\n...    test_len = int(len(dataset)*test_split)\n...    train_len = len(dataset) - test_len \n...    return random_split(dataset, [train_len, test_len])\n
\n
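If you also need the targets of each split (say, for class balancing), note that random_split returns Subset objects that keep their indices, so you can look the labels up without loading any data. A sketch with a stand-in dataset:

```python
import torch
from torch.utils.data import TensorDataset, random_split

# stand-in for your ImageFolder: 10 samples, alternating labels
data = torch.arange(10).float().unsqueeze(1)
targets = torch.tensor([0, 1] * 5)
dataset = TensorDataset(data, targets)

train_set, test_set = random_split(dataset, [8, 2])
train_targets = targets[train_set.indices]   # labels of the train split only
```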

Then you can plug those datasets in separate DataLoaders:

\n
>>> train_set, test_set = train_test_dataset(dataset)\n\n>>> train_dl = DataLoader(train_set, batch_size=16, shuffle=True)\n>>> test_dl  = DataLoader(test_set, batch_size=32, shuffle=False)\n
\n", + "system": "" + }, + { + "instruction": "How can I plot pytorch tensor?", + "input": "", + "output": "

Does this work?

\n
plt.plot(input.cpu().numpy(),output.cpu().numpy())\n
\n

Alternatively you can try,

\n
plt.plot(input.to('cpu').numpy(),output.to('cpu').numpy())\n
\n", + "system": "" + }, + { + "instruction": "Difference between tensor addition assignment and assignment in pytorch", + "input": "", + "output": "

The += is an in-place operation, i.e. it modifies the content of the original variable without making a copy (the same memory address is retained).

\n

Other examples:

\n\n

In weights = weights - weights.grad * lr, it doesn't work because you're creating a new variable (different address; it just happens to also have the name weights).

\n
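You can verify the difference by checking the underlying storage address:

```python
import torch

w = torch.ones(3)
addr = w.data_ptr()

w += 1                         # in-place: storage is reused
assert w.data_ptr() == addr

w = w - 1                      # out-of-place: a brand new tensor gets the name
assert w.data_ptr() != addr
```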

By the way, in pytorch's optimizers it's implemented like this:

\n
weights.add_(weights.grad, alpha=-lr)\n
\n", + "system": "" + }, + { + "instruction": "Force BERT transformer to use CUDA", + "input": "", + "output": "

You can make the entire class inherit torch.nn.Module like so:

\n
class SentimentModel_t(torch.nn.Module):\n    def __init__(self, ...):\n        super(SentimentModel_t, self).__init__()\n        ...\n
\n

Upon initializing your model you can then call .to(device) to cast it to the device of your choice, like so:

\n
sentiment_model = SentimentModel_t(...)\nsentiment_model.to('cuda')\n
\n

The .to() call recursively applies to all submodules of the class, model being one of them (Hugging Face models inherit from torch.nn.Module, thus providing an implementation of to()).\nNote that this makes choosing a device in __init__() redundant: it's now an external context that you can switch to/from easily.

\n
\n

Alternatively, you can hardcode the device by casting the contained BERT model directly into cuda (less elegant):

\n
class SentimentModel_t():\n        def __init__(self, ...):\n            DEVICE = "cuda:0" if pt.cuda.is_available() else "cpu"\n            print(DEVICE)\n\n            self.model = AutoModelForSequenceClassification.from_pretrained(model_name).to(DEVICE)\n
\n", + "system": "" + }, + { + "instruction": "Print the validation loss in each epoch in PyTorch", + "input": "", + "output": "

NO!!!!
\n\"enter

\n

Under no circumstances should you train your model (i.e., call loss.backward() + optimizer.step()) using validation / test data!!!

\n

If you want to validate your model:

\n
model.eval()  # handle drop-out/batch norm layers\nloss = 0\nwith torch.no_grad():\n  for x,y in validation_loader:\n    out = model(x)  # only forward pass - NO gradients!!\n    loss += criterion(out, y)\n# total loss - divide by number of batches\nval_loss = loss / len(validation_loader)\n
\n

Note how optimizer has nothing to do with evaluating the model on the validation set.\nYou do not change the model according to the validation data - only validate it.

\n", + "system": "" + }, + { + "instruction": "PyTorch backward() on a tensor element affected by nan in other elements", + "input": "", + "output": "

When indexing the tensor in the assignment, PyTorch accesses all elements of the tensor (it uses binary multiplicative masking under the hood to maintain differentiability) and this is where it is picking up the nan of the other element (since 0*nan -> nan).

\n
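The 0 * nan -> nan behaviour is easy to check directly:

```python
import torch

x = torch.tensor([1.0, float('nan')])
mask = torch.tensor([1.0, 0.0])
print(mask * x)   # the masked-out nan still poisons the product
```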

We can see this in the computational graph:

\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n
torchviz.make_dot(z1, params={'x':x,'y':y})torchviz.make_dot(z[1], params={'x':x,'y':y})
\"enter\"enter
\n
\n

If you wish to avoid this behaviour, either mask the nan's, or do as you have done in the first example - separate these into two different objects.

\n", + "system": "" + }, + { + "instruction": "Dropout Layer with zero dropping rate", + "input": "", + "output": "

Dropout with p=0 is equivalent to the identity operation.

\n
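Which you can check directly, even in training mode:

```python
import torch

drop = torch.nn.Dropout(p=0.0)   # training mode by default
x = torch.randn(4, 4)
assert torch.equal(drop(x), x)   # nothing dropped, no rescaling
```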

In fact, this is the exact behaviour of Dropout modules when set in eval mode:

\n
\n

During evaluation the module simply computes an identity function.

\n
\n", + "system": "" + }, + { + "instruction": "Delete duplicated rows in torch.tensor", + "input": "", + "output": "

You can simply exploit the parameter dim of torch.unique.

\n
t1 = torch.tensor([[1, 2, 3], [4, 5, 6], [1, 2, 3], [4, 5, 6], [7, 8, 9]])\ntorch.unique(t1, dim=0)\n
\n

In this way you obtain the result you want:

\n
tensor([[1, 2, 3],\n    [4, 5, 6],\n    [7, 8, 9]])\n
\n

Here you can read the meaning of that parameter.

\n", + "system": "" + }, + { + "instruction": "how to solve this (Pytorch RuntimeError: 1D target tensor expected, multi-target not supported)", + "input": "", + "output": "

When using NLLLoss the target tensor must contain the index representation of the labels and not one-hot. So for example:

\n

I guess this is what your target looks like:

\n
target = [0, 0, 1, 0]\n
\n

Just convert it to just the number which is the index of the 1:

\n
[0, 0, 1, 0] -> [2]\n[1, 0, 0, 0] -> [0]\n[0, 0, 0, 1] -> [3]\n
\n

And then convert it to long tensor, ie:

\n
target = [2]\ntarget = torch.Tensor(target).type(torch.LongTensor)\n
\n

It might be confusing that your output is a tensor whose length is the number of classes while your target is a single number, but that's how it is.

\n
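The whole conversion can be done in one call with argmax, which also yields the long dtype NLLLoss expects (a sketch):

```python
import torch

one_hot = torch.tensor([[0, 0, 1, 0],
                        [1, 0, 0, 0],
                        [0, 0, 0, 1]])
target = one_hot.argmax(dim=1)   # index of the 1 in each row, dtype torch.int64
```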

You can check it out yourself here.

\n", + "system": "" + }, + { + "instruction": "(Differentiable Image Sampling) Custom Integer Sampling Kernel, Spatial Transformer Network", + "input": "", + "output": "

For future reference, and for those who might have had similar questions to the one I posted.

\n
\n

I've emailed Dr Jaderberg (one of the authors of the 'Spatial Transformer Networks') about this question and he has confirmed: "that the gradient wrt the coordinates for integer sampling is 0.". So I wasn't doing anything wrong, and it was right all along!

\n

He was very kind in his response and expressed that integer sampling was mentioned in the paper to introduce the bilinear sampling scheme, and have given insights into how to possibly implement integer sampling if I really wanted to:

\n

"you could think about using some numerical differentiation techniques (e.g. look at difference of x to its neighbours). This would assume smoothness in the image wrt coordinates."

\n

So with great thanks to Dr Jaderberg, I'm happy to close this question.

\n

I guess thinking about how I'd use numerical methods to implement the integer kernel for the sampling function is another challenge for myself, but until then I guess the bilinear sampler is my friend! :)

\n", + "system": "" + }, + { + "instruction": "Why does my pytorch NN return a tensor of nan?", + "input": "", + "output": "

nan values as outputs just mean that the training is unstable, which can have almost any cause, including all kinds of bugs in the code. If you think your code is correct, you can try addressing the instability by lowering the learning rate or using gradient clipping.

\n", + "system": "" + }, + { + "instruction": "What is the difference between src and tgt in nn.Transformer for PyTorch?", + "input": "", + "output": "

The transformer consists of two components: the encoder and the decoder. The src is the input to the encoder and the tgt is the input to the decoder.

\n
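In code, both are passed to forward; a shape sketch (sequence-first layout, the default):

```python
import torch
import torch.nn as nn

model = nn.Transformer(d_model=32, nhead=4)
src = torch.rand(10, 2, 32)   # (src_len, batch, d_model) -> encoder input
tgt = torch.rand(7, 2, 32)    # (tgt_len, batch, d_model) -> decoder input
out = model(src, tgt)         # one output vector per target position
```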

For example, in a machine translation task that translates English sentences to French, src is the sequence of English token ids and tgt is the sequence of French token ids.

\n", + "system": "" + }, + { + "instruction": "PyTorch Lightning wants create a folder on import due to usage of wandb, which raises error on AWS Lambda", + "input": "", + "output": "

You need to make sure you have write access somewhere.

\n
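On AWS Lambda, for example, only /tmp is writable, so one option (assuming these paths suit your setup) is setting wandb's environment variables before it is imported:

```python
import os

# point wandb's local files at a writable location (only /tmp on AWS Lambda)
os.environ["WANDB_DIR"] = "/tmp/wandb"
os.environ["WANDB_CONFIG_DIR"] = "/tmp/wandb-config"
os.environ["WANDB_CACHE_DIR"] = "/tmp/wandb-cache"
```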

Then you can use wandb environment variables to modify the default location where files are saved locally, in particular look at WANDB_DIR, WANDB_CONFIG_DIR and WANDB_CACHE_DIR.

\n", + "system": "" + }, + { + "instruction": "Embedding in PyTorch creates embedding with norm larger than max_norm", + "input": "", + "output": "

The max_norm argument bounds the norm of the embedding, but not the norm of the weights.

\n

To better understand this, you can run the following example:

\n
from torch import LongTensor, norm\nfrom torch.nn import Embedding\n\nsentences = LongTensor([[1,2,4,5],[4,3,2,9]])\nembedding = Embedding(num_embeddings=10, embedding_dim=100, max_norm=1)\nfor sentence in embedding(sentences):\n    for word in sentence:\n        print(norm(word))\n
\n

This works by dividing each weight in the embedding vector by the norm of the embedding vector itself, and multiplying it by max_norm. In your example max_norm=1, hence it's equivalent to dividing by the norm.

\n

To answer the question you asked in the comment: you can obtain the embedding of a sentence (a vector containing word indexes taken from your dictionary) with embedding(sentences), and the norms using the two for loops above.

\n", + "system": "" + }, + { + "instruction": "How perform unsupervised clustering on numbers in an Array using PyTorch", + "input": "", + "output": "

Dynamic binning requires explicit criteria and is not an easy problem to automate because each array may require a different set of thresholds to bin them efficiently.

\n

I think Gaussian mixtures with a silhouette-score criterion are the best bet you have. Here is code for what you are trying to achieve. The silhouette scores help you determine the number of clusters/Gaussians you should use, and they are quite accurate and interpretable for 1D data.

\n
import numpy as np\nfrom sklearn.mixture import GaussianMixture\nfrom sklearn.metrics import silhouette_score\nimport scipy\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n#Sample data\nx = [57, 58, 59, 60, 61, 78, 79, 80, 81, 82, 83, 101, 102, 103, 104, 105, 106]\n\n#Fit a model onto the data\ndata = np.array(x).reshape(-1,1)\n\n#change value of clusters to check best silhoutte score\nprint('Silhoutte scores')\nscores = []\nfor n in range(2,11):\n    model = GaussianMixture(n).fit(data)\n    preds = model.predict(data)\n    score = silhouette_score(data, preds)\n    scores.append(score)\n    print(n,'->',score)\n\nn_best = np.argmax(scores)+2 #because clusters start from 2\n\nmodel = GaussianMixture(n_best).fit(data) #best model fit\n    \n#Get list of means and variances\nmu = np.abs(model.means_.flatten())\nsd = np.sqrt(np.abs(model.covariances_.flatten()))\n\n#Plotting\nextend_window = 50  #this is for zooming into or out of the graph, higher it is , more zoom out\nx_values = np.arange(data.min()-extend_window, data.max()+extend_window, 0.1) #For plotting smooth graphs\nplt.plot(data, np.zeros(data.shape), linestyle='None', markersize = 10.0, marker='o') #plot the data on x axis\n\n#plot the different distributions (one per selected cluster)\nfor i in range(n_best):\n    y_values = scipy.stats.norm(mu[i], sd[i])\n    plt.plot(x_values, y_values.pdf(x_values))\n\n#split data by clusters\npred = model.predict(data)\noutput = np.split(x, np.sort(np.unique(pred, return_index=True)[1])[1:])\nprint(output)\n
\n
Silhoutte scores\n2 -> 0.699444729378163\n3 -> 0.8962176943475543  #<--- selected as nbest\n4 -> 0.7602523591781903\n5 -> 0.5835620702692205\n6 -> 0.5313888070615105\n7 -> 0.4457049486461251\n8 -> 0.4355742296918767\n9 -> 0.13725490196078433\n10 -> 0.2159663865546218\n
\n

This creates 3 gaussians with the following distributions to split the data into clusters.

\n

\"enter

\n

Arrays output finally split by similar values

\n
#output - \n[array([57, 58, 59, 60, 61]),\n array([78, 79, 80, 81, 82, 83]),\n array([101, 102, 103, 104, 105, 106])]\n
\n", + "system": "" + }, + { + "instruction": "Manually update momentum terms in pytorch optimizers", + "input": "", + "output": "

Since mhat and vhat are dependent on the "betas" parameter, you can update that during training if you wish, as so:

\n
optimizer.param_groups[0]['betas'] = (beta1,beta2)\n
\n

However, to address your question, the source code at https://github.com/pytorch/pytorch/blob/22b12179db15923007aaec80829766079bb0b9d1/torch/optim/_functional.py#L53 doesn't seem to support direct modification of the mhat (exp_avg) and vhat (exp_avg_sq, I at least presume). If you wish to do this I think you would have to implement your own Adam optimizer function (which seems relatively straightforward if you just copy the source code and modify it according to what you'd like to use).

\n", + "system": "" + }, + { + "instruction": "What do the charts in the System Panels signify in Wandb (PyTorch)", + "input": "", + "output": "
    \n
  1. GPU utilization and GPU memory access adding up to 100% would only hold if the hardware performed the two processes sequentially. But modern hardware doesn't do operations like this: the GPU is busy computing numbers at the same time as it is accessing memory.

    \n\n

    W&B allows you to monitor both the metrics and take decisions based on them. Recently I implemented a data pipeline using tf.data.Dataset. The GPU utilization was close to 0% while memory access was close to 0% as well. I was reading three different image files and stacking them. Here CPU was the bottleneck. To counter this I created a dataset by stacking the images. The ETA went from 1h per epoch to 3 min.

    \n
  2. \n
\n

\"enter

\n

From the plot, you can infer that the memory access of GPU increased while GPU utilization is close to 100%. CPU utilization decreased which was the bottleneck.

\n
    \n
  1. Here's a nice article by Lukas answering this question.
  2. \n
\n", + "system": "" + }, + { + "instruction": "Loading the pre-trained model of torch and sentence_transformers when running in a docker container failing", + "input": "", + "output": "

sentence-transformers downloads and stores model in ~/.cache directory (or whatever the cache_folder evaluates to be in - https://github.com/UKPLab/sentence-transformers/blob/a13a4ec98b8fdda83855aca7992ea793444a207f/sentence_transformers/SentenceTransformer.py#L63). For you this looks like the /nonexistant directory. The permission denied error suggests you do not have permission to access that directory (to create cache folder).

\n

You can modify the Dockerfile to create this directory and make it accessible to any user that needs to access this:-

\n
RUN mkdir ~/.cache\nRUN chmod -R 777 ~/.cache # don't do this in production - modify command to give permission to users who require it.\n
\n

Or you could try downloading the model in the Dockerfile itself -

\n
RUN python -c 'from sentence_transformers import SentenceTransformer; embedder = SentenceTransformer("distilbert-base-nli-stsb-mean-tokens")'\n
\n", + "system": "" + }, + { + "instruction": "Pytorch lightning saving model during the epoch", + "input": "", + "output": "

Have you checked pytorch_lightning.callbacks.model_checkpoint.ModelCheckpoint?\nNot sure if it exists in your version, but setting every_n_val_epochs to 1 should work. If so, it should save your model checkpoint after every validation loop.

\n", + "system": "" + }, + { + "instruction": "Predict only one class (person) in YOLACT/YOLACT++", + "input": "", + "output": "

In order to show a single class (person, id:0) output at the time of inference, you simply need to add

\n
cur_scores[1:] *= 0\n
\n

after cur_scores = conf_preds[batch_idx, 1:, :] in line 83 of yolact/layers/functions/detection.py.

\n

Then running

\n
!python eval.py --trained_model=weights/yolact_resnet50_54_800000.pth --score_threshold=0.15 --top_k=15 --image=input_image.png:output_image.png\n
\n

will give you single class inference.

\n

As mentioned by the author in issue#218:

\n
\n

you can make the change to save on NMS computation,\nsimply add cur_scores[<everything but your desired class>] *= 0

\n

For the index, if you wanted only person (class 0), you can put 1:, but if you wanted another class than that you'd need to do 2 statements: one with :<class_idx> and the other with <class_idx>+1:. Then when you run eval, run it with --cross_class_nms=True and that'll remove all the other classes from NMS.

\n
\n

Other method is to modify the output in output_utils.py.

\n", + "system": "" + }, + { + "instruction": "How to downgrade CUDA to 10.0.10 with conda, without conflicts?", + "input": "", + "output": "

Check current version with

\n
torch.version.cuda\n
\n

I had 10.2, but I needed 10.1 according to table 1 here and the NVIDIA 430 driver I had installed.

\n

Uninstall and Install

\n
conda remove pytorch torchvision cudatoolkit\n\nconda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.1.168 -c pytorch\n
\n

Say yes to everything for the above commands.

\n", + "system": "" + }, + { + "instruction": "Why Pytorch autograd need another vector to backward instead of computing Jacobian?", + "input": "", + "output": "

When you call backward() PyTorch updates the grad of each learnable parameter with the gradient of some loss function L w.r.t. that parameter. It has been designed with Gradient Descent [GD] (and its variants) in mind. Once the gradient has been computed you can update each parameter with x = x - learning_rate * x.grad. Indeed, in the background the Jacobian has to be computed, but it is not what one needs (generally) when applying GD optimization. The vector [0.1, 1.0, 0.0001] lets you reduce the output to a scalar so that x.grad will be a vector (and not a matrix, in case you do not reduce), and hence GD is well defined. You could, however, obtain the Jacobian using backward with one-hot vectors. For example, in this case:

\n
x = torch.randn(3, requires_grad=True)\ny = x * 2\nJ = torch.zeros(x.shape[0],x.shape[0])\nfor i in range(x.shape[0]):\n    v = torch.tensor([1 if j==i else 0 for j in range(x.shape[0])], dtype=torch.float)\n    y.backward(v, retain_graph=True)\n    J[:,i] = x.grad\n    x.grad.zero_()\nprint(J)\n
\n", + "system": "" + }, + { + "instruction": "How can I apply a transformation to a torch tensor", + "input": "", + "output": "

The problem is that interpolate expects a batch dimension, and it looks like your data does not have one, based on the error message and the successful application of transforms. Since your input is spatial (based on the size=(28, 28)), you can fix that by adding the batch dimension and changing the mode, since linear is not implemented for spatial input:

\n
z = nnf.interpolate(z.unsqueeze(0), size=(28, 28), mode='bilinear', align_corners=False)\n
\n

If you want z to still have a shape like (C, H, W), then:

\n
z = nnf.interpolate(z.unsqueeze(0), size=(28, 28), mode='bilinear', align_corners=False).squeeze(0)\n
\n", + "system": "" + }, + { + "instruction": "Find n smallest values in a list of tensors", + "input": "", + "output": "

Pytorch does provide a more elegant (I think) way to do it, with torch.unique_consecutive (see here)

\n

I'm going to work on a tensor, not a list of tensors because as you did yourself, there's just a cat to do. Unraveling the indices afterward is not hard either.

\n
# We want to find the n=3 min values and positions in t \nn = 3\nt = torch.tensor([1,2,3,2,0,1,4,3,2])\n\n# To get a random occurrence, we create a random permutation\nrandomizer = torch.randperm(len(t))\n\n# first, we sort t, and get the indices\nsorted_t, idx_t = t[randomizer].sort()\n\n# small util function to extract only the n smallest values and positions\nhead = lambda v,w : (v[:n], w[:n])\n\n# use unique_consecutive to remove duplicates\nuniques_t, counts_t = head(*torch.unique_consecutive(sorted_t, return_counts=True))\n\n# counts_t.cumsum gives us the position of the unique values in sorted_t\nuniq_idx_t = torch.cat([torch.tensor([0]), counts_t.cumsum(0)[:-1]], 0)\n\n# And now, we have the positions of uniques_t values in t : \nfinal_idx_t = randomizer[idx_t[uniq_idx_t]]\nprint(uniques_t, final_idx_t)\n\n#>>> tensor([0,1,2]), tensor([4,0,1])\n#>>> tensor([0,1,2]), tensor([4,5,8])\n#>>> tensor([0,1,2]), tensor([4,0,8])\n
\n

EDIT : I think the added permutation solves your need-random-occurrence problem

\n", + "system": "" + }, + { + "instruction": "Still get inplace operation error when I use clone()", + "input": "", + "output": "

Just as @jodag said in the comment, the problematic lines are:

\n
self.user_memory[ui_train[row]] = torch.stack((h, c))\nself.item_memory[ii_train[row]] = torch.stack((h, c))\n
\n

There you are modifying the values of the self.user_memory and self.item_memory in place, which caused the error.

\n

fix

\n

try

\n
self.user_memory = self.user_memory.clone()\nself.user_memory[ui_train[row]] = torch.stack((h, c))\nself.item_memory = self.item_memory.clone()\nself.item_memory[ii_train[row]] = torch.stack((h, c))\n
\n

and you can remove the other clone operations in your code, such as memory = self.user_memory[ui_train[row][0]].clone(); they are not necessary.

\n

explanation

\n

The reason why inplace operation could cause an error is that pytorch relies on intermediate tensor values to compute the gradient in the backward pass. If the tensor you are trying to modify happens to be required as an intermediate tensor, then this will break the gradient computation graph.

\n

The solution is instead of modifying the tensor inplace, we make a clone of the original tensor and modify on the cloned tensor.\nIn this way, pytorch could still access the old tensor.

\n

c.f. https://pytorch.org/docs/stable/notes/autograd.html#in-place-operations-with-autograd

\n", + "system": "" + }, + { + "instruction": "Correct Way to Fine-Tune/Train HuggingFace's Model from scratch (PyTorch)", + "input": "", + "output": "

In this way, you would unnecessarily download and load the pre-trained model weights. You can avoid that by downloading the BERT config

\n
config = transformers.AutoConfig.from_pretrained("bert-base-cased")\nmodel = transformers.AutoModel.from_config(config)\n
\n

Both your solution and this one assume you want to tokenize the input in the same way as the original BERT and use the same vocabulary. If you want to use a different vocabulary, you can change it in the config before instantiating the model:

\n
config.vocab_size = 123456\n
\n

Similarly, you can change any hyperparameter that you want to have different from the original BERT.

\n", + "system": "" + }, + { + "instruction": "How to find the source code of torch.solve?", + "input": "", + "output": "

I think it just uses LAPACK for CPU and CUBLAS for GPU, since torch.solve is listed under "BLAS and LAPACK Operations" on the official docs.

\n

Then we're looking for wrapper code, which I believe is this part.

\n", + "system": "" + }, + { + "instruction": "RuntimeError("grad can be implicitly created only for scalar outputs")", + "input": "", + "output": "

The pytorch error you get means "you can only call backward on scalars, i.e. 0-dimensional tensors". Here, according to your prints, policy_loss is not scalar; it's a 1x4 matrix. As a consequence, so is policy_loss + 0.5 * value_loss. Thus your call to backward yields an error.

\n

You probably forgot to reduce your losses to a scalar (with functions like norm or MSELoss ...). See example here

\n

The reason it does not work is the way the gradient propagation works internally (it's basically a Jacobian multiplication engine). You can call backward on a non-scalar tensor, but then you have to provide a gradient yourself, like :

\n
# loss is 1x4\nloss = policy_loss + 0.5 * value_loss\n# explicit gradient backprop with non-scalar tensor\nloss.backward(torch.ones(1,4))\n
\n

You should really not do that without a good understanding of how Pytorch's Autograd works and what it means.

\n

PS: next time, please provide a minimal working example :)

\n", + "system": "" + }, + { + "instruction": "PyTorch: Is it possible to differentiate a matrix?", + "input": "", + "output": "

It is possible but it doesn't really fit into the standard use case of PyTorch where you are generally interested in the gradient of a scalar valued function.

\n

The derivative of a matrix Y w.r.t. a matrix X can be represented as a Generalized Jacobian. For the case where both matrices are just vectors this reduces to the standard Jacobian matrix, where each row of the Jacobian is the transpose of the gradient of one element of Y with respect to X. More generally if X is shape (n1, n2, ..., nD) and Y is shape (m1, m2, ..., mE) then a natural way to represent the Generalized Jacobian of Y with respect to X is as a tensor of shape (m1, m2, ..., mE, n1, n2, ..., nD).

\n

There are two ways to compute the Generalized Jacobian that I'm aware of in PyTorch.

\n

Option 1

\n

Repeated application of back-propagation on each element of Y.

\n
import torch\n\ndef construct_jacobian(y, x, retain_graph=False):\n    x_grads = []\n    for idx, y_element in enumerate(y.flatten()):\n        if x.grad is not None:\n            x.grad.zero_()\n        # if specified set retain_graph=False on last iteration to clean up\n        y_element.backward(retain_graph=retain_graph or idx < y.numel() - 1)\n        x_grads.append(x.grad.clone())\n    return torch.stack(x_grads).reshape(*y.shape, *x.shape)\n
\n

then the Jacobian for your test case may be computed using

\n
a = torch.tensor([1., 2., 3.])\nb = torch.tensor([4., 5., 6.], requires_grad=True)\nc = a * b\n\njacobian = construct_jacobian(c, b)\n\nprint(jacobian)\n
\n

which results in

\n
tensor([[1., 0., 0.],\n        [0., 2., 0.],\n        [0., 0., 3.]])\n
\n

Option 2

\n

In PyTorch 1.5.1 a new autograd.functional API was introduced, including the new function torch.autograd.functional.jacobian. This produces the same results as the previous example but takes a function as an argument. Not demonstrated here, but you can provide the jacobian function a list of inputs if your function takes multiple independent tensors as input. In that case the jacobian would return a tuple containing the Generalized Jacobian for each of the input arguments.

\n
import torch\n\na = torch.tensor([1., 2., 3.])\n\ndef my_fun(b):\n    return a * b\n\nb = torch.tensor([4., 5., 6.], requires_grad=True)\n\njacobian = torch.autograd.functional.jacobian(my_fun, b)\n\nprint(jacobian)\n
\n

which also produces

\n
tensor([[1., 0., 0.],\n        [0., 2., 0.],\n        [0., 0., 3.]])\n
\n
\n

As an aside, in some literature the term "gradient" is used to refer to the transpose of the Jacobian matrix. If that's what you're after then, assuming Y and X are vectors, you can simply use the code above and take the transpose of the resulting Jacobian matrix. If Y or X are higher order tensors (matrices or n-dimensional tensors) then I'm not aware of any literature that distinguishes between gradient and Generalized Jacobian. A natural way to represent such a "transpose" of the Generalized Jacobian would be to use Tensor.permute to turn it into a tensor of shape (n1, n2, ..., nD, m1, m2, ..., mE).

\n
\n

As another aside, the concept of the Generalized Jacobian is rarely used in literature (example usage) but is actually relatively useful in practice. This is because it basically works as a bookkeeping technique to keep track of the original dimensionality of Y and X. By this I mean you could just as easily take Y and X and flatten them into vectors, regardless of their original shape. Then the derivative would be a standard Jacobian matrix. Consequently this Jacobian matrix would be equivalent to a reshaped version of the Generalized Jacobian.

\n", + "system": "" + }, + { + "instruction": "Does tf.math.reduce_max allows gradient flow like torch.max?", + "input": "", + "output": "

Yes, tf.math.reduce_max does allow gradients to flow. It is easy to check (this is TensorFlow 2.x but it is the same result in 1.x):

\n
import tensorflow as tf\n\nwith tf.GradientTape() as tape:\n    x = tf.linspace(0., 2. * 3.1416, 10)\n    tape.watch(x)\n    # A sequence of operations involving reduce_max\n    y = tf.math.square(tf.math.reduce_max(tf.math.sin(x)))\n# Check gradients\ng = tape.gradient(y, x)\nprint(g.numpy())\n# [ 0.         0.         0.3420142 -0.        -0.        -0.\n#  -0.         0.         0.         0.       ]\n
\n

As you can see, there is a valid gradient for y with respect to x. Only one of the entries is nonzero, because it is the one that produced the maximum, so it is the only value in x that affects y. This is the correct gradient for the operation.

\n", + "system": "" + }, + { + "instruction": "Process got killed when building docker image on GCP", + "input": "", + "output": "

I found that to build your Docker image you need enough disk space and Python 3.7 installed. There is also a typo in your Dockerfile: the closing single quote ' is missing at the end of the last line. Besides that, everything looks good and runs.

\n

Please find my steps below:

\n
    \n
  1. Enable Google Container Registry API

    \n
  2. \n
  3. Create VM instance:

    \n
  4. \n
\n
gcloud compute instances create instance-4 --zone=europe-west3-a --machine-type=e2-medium --image=ubuntu-1804-bionic-v20200701 --image-project=ubuntu-os-cloud --boot-disk-size=50GB \n
\n
    \n
  1. Follow documentation Pushing and pulling images.
  2. \n
  3. Install Python 3.7:
  4. \n
\n
sudo apt install python3.7\n
\n
    \n
  1. Build Docker image:
  2. \n
\n
docker build -t gcr.io/test-prj/testimage:v1 .\n...\nStep 16/16 : RUN bash -c 'echo -e "[server]\\nenableCORS = false\\n" > /root/.streamlit/config.toml\n ---> Running in 57502f97cfbe\n/bin/sh: 1: Syntax error: Unterminated quoted string\nThe command '/bin/sh -c bash -c 'echo -e "[server]\\nenableCORS = false\\n" > /root/.streamlit/config.toml' returned a non-zero code: 2\n
\n
    \n
  1. Change last string of the Docker file:
  2. \n
\n
" > /root/.streamlit/config.toml'\n
\n
    \n
  1. Build Docker image again:
  2. \n
\n
docker build -t gcr.io/test-prj/testimage:v1 .\n...\nStep 16/16 : RUN bash -c 'echo -e "[server]\\nenableCORS = false\\n" > /root/.streamlit/config.toml'\n ---> Running in c1c1f81a2d09\nRemoving intermediate container c1c1f81a2d09\n ---> 24b6609de554\nSuccessfully built 24b6609de554\nSuccessfully tagged gcr.io/test-prj/testimage:v1\n$ docker images\nREPOSITORY                            TAG                 IMAGE ID            CREATED             SIZE\ngcr.io/test-prj/testimage             v1                  24b6609de554        14 minutes ago      3.87GB\n
\n
    \n
  1. Push Docker image to the registry:
  2. \n
\n
gcloud docker -- push gcr.io/test-prj/testimage:v1\n
\n
    \n
  1. Create new VM instance and deploy image:
  2. \n
\n
gcloud compute instances create-with-container instance-5 --zone=europe-west3-a --machine-type=e2-medium --image=cos-stable-81-12871-148-0 --image-project=cos-cloud --boot-disk-size=50GB --container-image=gcr.io/test-prj/testimage:v1 \n
\n
    \n
  1. Check status of Docker container:
  2. \n
\n
instance-5 ~ $ docker ps\nCONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS                                  PORTS               NAMES\ne21b80dc0de7        gcr.io/test-prj/testimage:v1   "/bin/sh -c 'streaml\u2026"   28 seconds ago      Restarting (2) Less than a second ago                       klt-instance-5-caqx\n
\n

and it doesn't look very good.

\n
    \n
  1. Stop container:
  2. \n
\n
instance-5 ~ $docker stop e21b80dc0de7\n
\n
    \n
  1. Follow the documentation and run container interactively:
  2. \n
\n
instance-5 ~ $docker run --name test -it gcr.io/test-prj/testimage:v1\nUsage: streamlit run [OPTIONS] TARGET [ARGS]...\n\nError: Invalid value: File does not exist: app.py\n
\n

no surprise because I don't have app.py.

\n

After that, I added a dummy app.py, rebuilt the image, and finally it works:

\n
instance-6 ~ $ docker ps\nCONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS              PORTS               NAMES\n1de2e8ded5d8        gcr.io/test-prj/testimage:v2   "/bin/sh -c 'streaml\u2026"   7 minutes ago       Up 7 minutes                           klt-instance-6-yezv\n
\n", + "system": "" + }, + { + "instruction": "Pytorch Error, RuntimeError: expected scalar type Long but found Double", + "input": "", + "output": "

In a classification task, the data type of the input labels should be Long, but yours are float64:

\n\n
type(b_labels[i]) = tensor(1., dtype=torch.float64)\n\n=>\n\ntype(b_labels[i]) = tensor(1., dtype=torch.long)\n
\n", + "system": "" + }, + { + "instruction": "How to obtain sequence of submodules from a pytorch module?", + "input": "", + "output": "\n

In PyTorch, the results of print(model), .named_children(), etc. are listed in the order the submodules are declared in the __init__ of the model's class, e.g.

\n

Case 1

\n
class Model(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\n        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n        self.fc1 = nn.Linear(320, 50)\n        self.fc2 = nn.Linear(50, 10)\n        self.conv2_drop = nn.Dropout2d()\n\n    def forward(self, x):\n        x = F.relu(F.max_pool2d(self.conv1(x), 2))\n        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n        x = x.view(-1, 320)\n        x = F.relu(self.fc1(x))\n        x = F.dropout(x, p=0.6)\n        x = self.fc2(x)\n        return F.log_softmax(x, dim=1)\n\nmodel = Model()\nprint(model)\n[name for name, _ in model.named_children()]\n# output\n['conv1', 'conv2', 'fc1', 'fc2', 'conv2_drop']\n
\n

Case 2

\n

The order of the fc1 and fc2 layers is swapped in the constructor.

\n
class Model(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\n        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n        self.fc2 = nn.Linear(50, 10)\n        self.fc1 = nn.Linear(320, 50)\n        self.conv2_drop = nn.Dropout2d()\n\n    def forward(self, x):\n        x = F.relu(F.max_pool2d(self.conv1(x), 2))\n        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n        x = x.view(-1, 320)\n        x = F.relu(self.fc1(x))\n        x = F.dropout(x, p=0.6)\n        x = self.fc2(x)\n        return F.log_softmax(x, dim=1)\n\nmodel = Model()\nprint(model)\n[name for name, _ in model.named_children()]\n# output\n['conv1', 'conv2', 'fc2', 'fc1', 'conv2_drop']\n
\n
\n

That's why classifier is printed before dropout: it is declared earlier in the constructor:

\n
class DistilBertForSequenceClassification(DistilBertPreTrainedModel):\n        ...\n        self.distilbert = DistilBertModel(config)\n        self.pre_classifier = nn.Linear(config.dim, config.dim)\n        self.classifier = nn.Linear(config.dim, config.num_labels)\n        self.dropout = nn.Dropout(config.seq_classif_dropout)\n
\n
\n

Nevertheless, you can inspect the model's submodules using .modules(), etc., but they will be listed only in the order they are declared in __init__. If you want to print the structure based on the forward method instead, you may try pytorch-summary.

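The same declaration-order behaviour can be seen with .named_modules() on a plain nn.Sequential (a small sketch, not the DistilBERT model itself):

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
# named_modules() also walks submodules in declaration order;
# the first yielded entry is the root module itself with an empty name
names = [name for name, _ in model.named_modules() if name]
print(names)  # ['0', '1', '2']
```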
\n", + "system": "" + }, + { + "instruction": "Measuring F1 score for multiclass classification natively in PyTorch", + "input": "", + "output": "

I wrote my own implementation in PyTorch some time ago:

\n
from typing import Tuple\n\nimport torch\n\n\nclass F1Score:\n    """\n    Class for f1 calculation in Pytorch.\n    """\n\n    def __init__(self, average: str = 'weighted'):\n        """\n        Init.\n\n        Args:\n            average: averaging method\n        """\n        self.average = average\n        if average not in [None, 'micro', 'macro', 'weighted']:\n            raise ValueError('Wrong value of average parameter')\n\n    @staticmethod\n    def calc_f1_micro(predictions: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:\n        """\n        Calculate f1 micro.\n\n        Args:\n            predictions: tensor with predictions\n            labels: tensor with original labels\n\n        Returns:\n            f1 score\n        """\n        true_positive = torch.eq(labels, predictions).sum().float()\n        f1_score = torch.div(true_positive, len(labels))\n        return f1_score\n\n    @staticmethod\n    def calc_f1_count_for_label(predictions: torch.Tensor,\n                                labels: torch.Tensor, label_id: int) -> Tuple[torch.Tensor, torch.Tensor]:\n        """\n        Calculate f1 and true count for the label\n\n        Args:\n            predictions: tensor with predictions\n            labels: tensor with original labels\n            label_id: id of current label\n\n        Returns:\n            f1 score and true count for label\n        """\n        # label count\n        true_count = torch.eq(labels, label_id).sum()\n\n        # true positives: labels equal to prediction and to label_id\n        true_positive = torch.logical_and(torch.eq(labels, predictions),\n                                          torch.eq(labels, label_id)).sum().float()\n        # precision for label\n        precision = torch.div(true_positive, torch.eq(predictions, label_id).sum().float())\n        # replace nan values with 0\n        precision = torch.where(torch.isnan(precision),\n                                
torch.zeros_like(precision).type_as(true_positive),\n                                precision)\n\n        # recall for label\n        recall = torch.div(true_positive, true_count)\n        # f1\n        f1 = 2 * precision * recall / (precision + recall)\n        # replace nan values with 0\n        f1 = torch.where(torch.isnan(f1), torch.zeros_like(f1).type_as(true_positive), f1)\n        return f1, true_count\n\n    def __call__(self, predictions: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:\n        """\n        Calculate f1 score based on averaging method defined in init.\n\n        Args:\n            predictions: tensor with predictions\n            labels: tensor with original labels\n\n        Returns:\n            f1 score\n        """\n\n        # simpler calculation for micro\n        if self.average == 'micro':\n            return self.calc_f1_micro(predictions, labels)\n\n        f1_score = 0\n        for label_id in range(1, len(labels.unique()) + 1):\n            f1, true_count = self.calc_f1_count_for_label(predictions, labels, label_id)\n\n            if self.average == 'weighted':\n                f1_score += f1 * true_count\n            elif self.average == 'macro':\n                f1_score += f1\n\n        if self.average == 'weighted':\n            f1_score = torch.div(f1_score, len(labels))\n        elif self.average == 'macro':\n            f1_score = torch.div(f1_score, len(labels.unique()))\n\n        return f1_score\n
\n

You can test it in the following way:

\n
from sklearn.metrics import f1_score\nimport numpy as np\nerrors = 0\nfor _ in range(10):\n    labels = torch.randint(1, 10, (4096, 100)).flatten()\n    predictions = torch.randint(1, 10, (4096, 100)).flatten()\n    labels1 = labels.numpy()\n    predictions1 = predictions.numpy()\n\n    for av in ['micro', 'macro', 'weighted']:\n        f1_metric = F1Score(av)\n        my_pred = f1_metric(predictions, labels)\n        \n        f1_pred = f1_score(labels1, predictions1, average=av)\n        \n        if not np.isclose(my_pred.item(), f1_pred.item()):\n            print('!' * 50)\n            print(f1_pred, my_pred, av)\n            errors += 1\n\nif errors == 0:\n    print('No errors!')\n
\n", + "system": "" + }, + { + "instruction": "PyTorch tensor slice and memory usage", + "input": "", + "output": "
\n

(i) In case 3 the resulting file size seems surprising as we are creating a new tensor. Why is this new tensor not just the slice?

\n
\n\n

Slicing creates a view of the tensor, which shares the underlying data but contains information about the memory offsets used for the visible data. This avoids having to copy the data frequently, which makes a lot of operations much more efficient. See PyTorch - Tensor Views for a list of affected operations.

\n\n

You are dealing with one of the few cases where the underlying data matters. To save the tensor, it needs to save the underlying data, otherwise the offsets would no longer be valid.

\n\n

torch.FloatTensor does not create a copy of the tensor, if it's not necessary. You can verify that their underlying data is still the same (they have the exact same memory location):

\n\n
torch.FloatTensor(T[-20:]).storage().data_ptr() == T.storage().data_ptr()\n# => True\n
\n\n
\n

(ii) Is case 4, the optimal method for saving just part (slice) of a tensor?

\n \n

(iii) More generally, if I want to 'trim' a very large 1-dimensional tensor by removing the first half of its values in order to save memory, do I have to proceed as in case 4, or is there a more direct and less computationally costly way that does not involve creating a python list.

\n
\n\n

You will most likely not get around copying the data of the slice, but at least you can avoid creating a Python list from it and creating a new tensor from the list, by using torch.Tensor.clone instead:

\n\n
torch.save(T[-20:].clone(), 'junk5.pt')\n
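The size difference can be checked directly (a small sketch with throwaway file names in a temporary directory):

```python
import os
import tempfile

import torch

T = torch.randn(10000)
with tempfile.TemporaryDirectory() as d:
    # saving the view serializes the whole underlying storage
    torch.save(T[-20:], os.path.join(d, 'view.pt'))
    # saving a clone serializes only the 20 copied values
    torch.save(T[-20:].clone(), os.path.join(d, 'clone.pt'))
    view_size = os.path.getsize(os.path.join(d, 'view.pt'))
    clone_size = os.path.getsize(os.path.join(d, 'clone.pt'))
print(view_size > clone_size)  # True
```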
\n", + "system": "" + }, + { + "instruction": "Unable to install fastai on Jupyter Notebook", + "input": "", + "output": "

Try downgrading from Python 3.8 to Python 3.7; it worked for me.

\n", + "system": "" + }, + { + "instruction": "Can conditionality be added inside Pytorch nn.Sequential()", + "input": "", + "output": "

No, but in your case it's easy to take the if out of nn.Sequential:

\n\n
class Building_Blocks(torch.nn.Module):\n    def conv_block(self, in_features, out_features, kernal_size, upsample=False):\n        layers = [\n            torch.nn.Conv2d(in_features, out_features, kernal_size),\n            torch.nn.ReLU(inplace=True),\n            torch.nn.Conv2d(out_features, out_features, kernal_size),\n            torch.nn.ReLU(inplace=True),\n        ]\n        if upsample:\n            layers.append(\n                torch.nn.ConvTranspose2d(out_features, out_features, kernal_size)\n            )\n        block = torch.nn.Sequential(*layers)\n        return block\n\n    def __init__(self):\n        super(Building_Blocks, self).__init__()\n        self.contracting_layer1 = self.conv_block(3, 64, 3, upsample=True)\n\n    def forward(self, x):\n        x = self.contracting_layer1(x)\n        return x\n
\n\n

You can always construct a list containing layers however you want and unpack it into torch.nn.Sequential afterwards.

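For example, a minimal sketch of that pattern (layer types and sizes made up):

```python
import torch

def make_block(in_features, out_features, upsample=False):
    # build the layer list conditionally...
    layers = [torch.nn.Linear(in_features, out_features), torch.nn.ReLU()]
    if upsample:
        layers.append(torch.nn.Upsample(scale_factor=2))
    # ...then unpack it into Sequential
    return torch.nn.Sequential(*layers)

print(len(make_block(4, 8)))                 # 2
print(len(make_block(4, 8, upsample=True)))  # 3
```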
\n", + "system": "" + }, + { + "instruction": "TensorBoard: Tutorial Pytorch: module 'tensorflow._api.v2.io.gfile' has no attribute 'get_filesystem'", + "input": "", + "output": "

Try this

\n\n
import tensorflow as tf\nimport tensorboard as tb\ntf.io.gfile = tb.compat.tensorflow_stub.io.gfile\n
\n", + "system": "" + }, + { + "instruction": "How can I show multiple predictions of the next word in a sentence?", + "input": "", + "output": "

You can use torch.topk as follows:

\n\n

predicted_indices = torch.topk(predictions[0, -1, :], k=3).indices.tolist()

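Note that torch.topk returns a (values, indices) named tuple, so it is safest to pull the indices out explicitly. A sketch with a made-up predictions tensor:

```python
import torch

predictions = torch.randn(1, 8, 1000)          # (batch, seq_len, vocab_size), made up
topk = torch.topk(predictions[0, -1, :], k=3)  # named tuple (values, indices)
predicted_indices = topk.indices.tolist()      # ids of the 3 most likely next words
print(len(predicted_indices))  # 3
```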
\n", + "system": "" + }, + { + "instruction": "Pytorch DataParallel doesn't work when the model contain tensor operation", + "input": "", + "output": "

I have no experience with DataParallel, but I think it might be because your tensor is not part of the model parameters. You can make it a parameter by writing:

\n\n

torch.nn.Parameter(torch.ones(5,1))

\n\n

Note that you don't have to move it to the gpu when initializing, because now when you call model.to('cuda:0') this is done automatically.

\n\n

I can imagine that DataParallel uses the model parameters to move them to the appropriate gpu.

\n\n

See this answer for more on the difference between a torch tensor and torch.nn.Parameter.

\n\n

If you don't want the tensor values to be updated by backpropagation during training, you can add requires_grad=False.

\n\n
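A minimal sketch of the parameter approach (model and names made up):

```python
import torch

class CustomModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(10, 5)
        # registered as a parameter, so model.to(...) and DataParallel move it;
        # requires_grad=False keeps it fixed during training
        self.weight = torch.nn.Parameter(torch.ones(5, 1), requires_grad=False)

    def forward(self, x):
        return self.layer(x) @ self.weight

model = CustomModel()
out = model(torch.randn(3, 10))
print(out.shape)                                    # torch.Size([3, 1])
print('weight' in dict(model.named_parameters()))   # True
```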

Another way that might work is to override the to method, and initialize the tensor in the forward pass:

\n\n
class custom_model(torch.nn.Module):\n    def __init__(self):\n        super(custom_model, self).__init__()\n        self.layer = torch.nn.Linear(10,5)\n    def forward(self, x):\n        return self.layer(x) @ torch.ones(5,1, device=self.device)\n    def to(self, device: str):\n        new_self = super(custom_model, self).to(device)\n        new_self.device = device\n        return new_self\n
\n\n

or something like this:

\n\n
class custom_model(torch.nn.Module):\n    def __init__(self, device:str):\n        super(custom_model, self).__init__()\n        self.layer = torch.nn.Linear(10,5)\n        self.weight = torch.ones(5,1, device=device)\n    def forward(self, x):\n        return self.layer(x) @ self.weight\n    def to(self, device: str):\n        new_self = super(custom_model, self).to(device)\n        new_self.device = device\n        new_self.weight = torch.ones(5,1, device=device)\n        return new_self\n
\n", + "system": "" + }, + { + "instruction": "Running Detectron2 inference in Caffe2", + "input": "", + "output": "

Your input_image dimensions should be a multiple of 32, so you probably need to resize your input image.

\n

So you need:

\n
caffe2_model = Caffe2Model.load_protobuf(input_directory)\nimg = cv2.imread(input_image)\nimg = cv2.resize(img, (64, 64))  \nimage = torch.as_tensor(img.astype("float32").transpose(2, 0, 1))\ndata = {'image': image, 'height': image.shape[1], 'width': image.shape[2]}\noutput = caffe2_model([data])\n
\n

See the class detectron2.export.Caffe2Tracer, documented at https://detectron2.readthedocs.io/en/latest/modules/export.html#detectron2.export.Caffe2Tracer

\n", + "system": "" + }, + { + "instruction": "Pairwise similarity matrix between a set of vectors in PyTorch", + "input": "", + "output": "

The documentation implies that the shapes of the inputs to cosine_similarity must be equal, but this is not the case. Internally PyTorch broadcasts via torch.mul, so inserting a dimension with a slice (or torch.unsqueeze) will give you the desired result. This is not optimal due to duplicate computations and memory for the upper and lower triangles, but it's simple:

\n
import torch\nfrom torch.nn import functional as F\nfrom scipy.spatial import distance\n\n# compute once in pytorch\nx = torch.randn(32, 100, 25)\ny = F.cosine_similarity(x[..., None, :, :], x[..., :, None, :], dim=-1)\n\nassert y.shape == torch.Size([32, 100, 100])\n\n# test against scipy by iterating over each batch element\nz = []\nfor i in range(x.shape[0]):\n    slice = x[i, ...].numpy()\n    z.append(torch.tensor(distance.cdist(slice, slice, metric='cosine'), dtype=torch.float32))\n\n# convert similarity to distance and ensure they're reasonably close\nassert torch.allclose(torch.stack(z), 1.0-y)\n\n
\n", + "system": "" + }, + { + "instruction": "How to use GPU in pytorch?", + "input": "", + "output": "

The GeForce GT 630M has compute capability 2.1 and therefore only supports up to CUDA 8.

\n\n\n\n

Due to the second point there's no way short of changing the PyTorch codebase to make your GPU work with the latest version. Your options are:

\n\n\n", + "system": "" + }, + { + "instruction": "Pytorch - Porting @ Operator", + "input": "", + "output": "

According to the python docs on operator precedence the @ operator has left-to-right associativity

\n\n

https://docs.python.org/3/reference/expressions.html#operator-precedence

\n\n
\n

Operators in the same box group left to right (except for exponentiation, which groups from right to left).

\n
\n\n

Therefore the equivalent operation is

\n\n
rotMat = torch.matmul(torch.matmul(xmat, ymat), zmat)\n
\n\n

Though keep in mind that matrix multiplication is associative (mathematically) so you shouldn't see much of a difference in the result if you do it the other way. Generally you want to associate in the way that results in the fewest computational steps. For example using the naive matrix multiplication algorithm, if X is 1x10, Y is 10x100 and Z is 100x1000 then the difference between

\n\n
(X @ Y) @ Z\n
\n\n

and

\n\n
X @ (Y @ Z)\n
\n\n

is about 1*10*100 + 1*100*1000 = 101,000 multiplication/addition operations for the first versus 10*100*1000 + 1*10*1000 = 1,010,000 operations for the second. Though these have the same result (ignoring rounding errors) the second version will be about 10x slower!

\n\n
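The mathematical associativity itself is easy to check numerically (double precision keeps rounding error negligible):

```python
import torch

X = torch.randn(1, 10, dtype=torch.float64)
Y = torch.randn(10, 100, dtype=torch.float64)
Z = torch.randn(100, 1000, dtype=torch.float64)

# same result either way; only the operation count differs
print(torch.allclose((X @ Y) @ Z, X @ (Y @ Z)))  # True
```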
\n\n

As pointed out by @Szymon Maszke pytorch tensors also support the @ operator so you can still use

\n\n
xmat @ ymat @ zmat\n
\n\n

in pytorch.

\n", + "system": "" + }, + { + "instruction": "How can I convert Tensor into Bitmap on PyTorch Mobile?", + "input": "", + "output": "

None of the answers were able to produce the output I wanted, so this is what I came up with: it is basically a reverse-engineered version of what happens in TensorImageUtils.bitmapToFloat32Tensor().

\n

Please note that this function only works if you are using MemoryFormat.CONTIGUOUS (which is default) in TensorImageUtils.bitmapToFloat32Tensor().

\n
fun tensor2Bitmap(input: FloatArray, width: Int, height: Int, normMeanRGB: FloatArray, normStdRGB: FloatArray): Bitmap? {\n    val pixelsCount = height * width\n    val pixels = IntArray(pixelsCount)\n    val output = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)\n    \n    val conversion = { v: Float -> ((v.coerceIn(0.0f, 1.0f))*255.0f).roundToInt()}\n\n    val offset_g = pixelsCount\n    val offset_b = 2 * pixelsCount\n    for (i in 0 until pixelsCount) {\n        val r = conversion(input[i] * normStdRGB[0] + normMeanRGB[0])\n        val g = conversion(input[i + offset_g] * normStdRGB[1] + normMeanRGB[1])\n        val b = conversion(input[i + offset_b] * normStdRGB[2] + normMeanRGB[2])\n        pixels[i] = 255 shl 24 or (r.toInt() and 0xff shl 16) or (g.toInt() and 0xff shl 8) or (b.toInt() and 0xff)\n    }\n    output.setPixels(pixels, 0, width, 0, 0, width, height)\n    return output\n}\n
\n

Example usage then could be as follows:

\n
tensor2Bitmap(outputTensor.dataAsFloatArray, bitmap.width, bitmap.height, TensorImageUtils.TORCHVISION_NORM_MEAN_RGB, TensorImageUtils.TORCHVISION_NORM_STD_RGB)\n
\n", + "system": "" + }, + { + "instruction": "Weight Initialization from pretrained BERT error in pytorch", + "input": "", + "output": "

Regarding the error you cited: I think that is only a warning stating that you are loading into your BertForMultiLable architecture the weights of a pretrained model that was not trained for that specific task.\nSimilar warning discussion here

\n

The real error here is a different one: IndexError: tuple index out of range. But to diagnose that you should attach some code and more information about what you are doing.

\n", + "system": "" + }, + { + "instruction": "PyTorch, apply different functions element-wise", + "input": "", + "output": "

I think your solution is good, but it won't work for arbitrary tensor shapes. You can slightly modify it as follows.

\n\n
t_shape = [4, 10, 10]\ndata = torch.rand(t_shape)\n\nfuncs = [lambda x: x+1, lambda x: x**2, lambda x: x-1, lambda x: x*2]\n\n# only change the following 2 lines\nd = [f(data[i]) for i, f in enumerate(funcs)]\nd = torch.stack(d, dim=0) \n
\n", + "system": "" + }, + { + "instruction": "What is the gradient of pytorch floor() gradient method?", + "input": "", + "output": "

The floor function is piecewise constant, so its gradient must be zero almost everywhere.

\n\n

\n\n

While the code doesn't say anything about it, I expect that the gradient is set to a constant zero everywhere.

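This is easy to confirm directly:

```python
import torch

x = torch.tensor([0.3, 1.7, 2.5], requires_grad=True)
# floor is piecewise constant, so the backward pass returns zeros
torch.floor(x).sum().backward()
print(x.grad)  # tensor([0., 0., 0.])
```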
\n", + "system": "" + }, + { + "instruction": "Why is my Fully Convolutional Autoencoder not symmetric?", + "input": "", + "output": "

There are two issues.

\n\n

First is insufficient padding: with kernel_size=5 your convolutions are shrinking the image by 4 every time they are applied (2 pixels on each side), so you need padding=2, and not just 1, in all places.

\n\n

Second is the \"uneven\" input size. What I mean is that once your convolutions are properly padded, you are left with downsampling operations which at each point try to divide your image resolution in half. When they fail, they just return a smaller result (integer division discards the remainder). Since your network has 4 successive 2x downsampling operations, you need your input to have H, W dimensions which are multiples of 2^4=16. Then you will actually get equally shaped output. An example below

\n\n
import torch\nimport torch.nn as nn\n\nclass AE(nn.Module):\n    def __init__(self):\n        super(AE, self).__init__()\n\n        self.encoder = nn.Sequential(\n            # conv 1\n            nn.Conv2d(in_channels=3, out_channels=64, kernel_size=5, stride=1, padding=2),\n            nn.BatchNorm2d(64),\n            nn.ReLU(),\n            nn.MaxPool2d(kernel_size=2, stride=2),\n\n            # conv 2\n            nn.Conv2d(in_channels=64, out_channels=128, kernel_size=5, stride=1, padding=2),\n            nn.BatchNorm2d(128),\n            nn.ReLU(),\n            nn.MaxPool2d(kernel_size=2, stride=2),\n\n            # conv 3\n            nn.Conv2d(in_channels=128, out_channels=256, kernel_size=5, stride=1, padding=2),\n            nn.BatchNorm2d(256),\n            nn.ReLU(),\n            nn.MaxPool2d(kernel_size=2, stride=2),\n\n            # conv 4\n            nn.Conv2d(in_channels=256, out_channels=512, kernel_size=5, stride=1, padding=2),\n            nn.BatchNorm2d(512),\n            nn.ReLU(),\n            nn.MaxPool2d(kernel_size=2, stride=2),\n\n            # conv 5\n            nn.Conv2d(in_channels=512, out_channels=1024, kernel_size=5, stride=1, padding=2),\n            nn.BatchNorm2d(1024),\n            nn.ReLU()\n        )\n\n        self.decoder = nn.Sequential(\n            # conv 6\n            nn.ConvTranspose2d(in_channels=1024, out_channels=512, kernel_size=5, stride=1, padding=2),\n            nn.BatchNorm2d(512),\n            nn.ReLU(),\n\n            # conv 7\n            nn.Upsample(scale_factor=2, mode='bilinear'),\n            nn.ConvTranspose2d(in_channels=512, out_channels=256, kernel_size=5, stride=1, padding=2),\n            nn.BatchNorm2d(256),\n            nn.ReLU(),\n\n            # conv 8\n            nn.Upsample(scale_factor=2, mode='bilinear'),\n            nn.ConvTranspose2d(in_channels=256, out_channels=128, kernel_size=5, stride=1, padding=2),\n            nn.BatchNorm2d(128),\n            nn.ReLU(),\n\n            # conv 
9\n            nn.Upsample(scale_factor=2, mode='bilinear'),\n            nn.ConvTranspose2d(in_channels=128, out_channels=64, kernel_size=5, stride=1, padding=2),\n            nn.BatchNorm2d(64),\n            nn.ReLU(),\n\n            # conv 10 out\n            nn.Upsample(scale_factor=2, mode='bilinear'),\n            nn.ConvTranspose2d(in_channels=64, out_channels=2, kernel_size=5, stride=1, padding=2),\n            nn.Softmax()    # multi-class classification\n        )\n\n    def forward(self, x):\n        x = self.encoder(x)\n        x = self.decoder(x)\n        return x\n\ninput = torch.randn(1, 3, 6*16, 7*16)\noutput = AE()(input)\nprint(input.shape)\nprint(output.shape)\n
\n", + "system": "" + }, + { + "instruction": "What is the time-complexity of the pseudo-inverse in pytorch (i.e. torch.pinverse)?", + "input": "", + "output": "

The PyTorch documentation states that pinverse is calculated using SVD (singular value decomposition). The complexity of SVD is O(n m^2), where m is the larger dimension of the matrix and n the smaller. Thus this is also the complexity of pinverse.

\n\n
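As a quick sanity check of the SVD route, here is a sketch using numpy (whose np.linalg.pinv is likewise SVD-based); the pseudo-inverse satisfies the defining Moore-Penrose identity A A+ A = A:

```python
import numpy as np

# Sketch with numpy: np.linalg.pinv, like torch.pinverse, goes through the SVD.
# For a 5x3 matrix, m = 5 (larger), n = 3 (smaller), so the cost is O(n * m^2).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
A_pinv = np.linalg.pinv(A)          # shape (3, 5)

# Moore-Penrose defining property: A @ A+ @ A == A
assert A_pinv.shape == (3, 5)
assert np.allclose(A @ A_pinv @ A, A)
```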

For more info, check out these pages on wikipedia:

\n\n\n", + "system": "" + }, + { + "instruction": "Invalid multinomial distribution (encountering probability entry < 0) at /pytorch/aten/src/TH/generic/THTensorRandom.cpp:325", + "input": "", + "output": "

I believe you've found a bug in the error reporting for torch.multinomial.

\n\n

For example

\n\n
x = torch.ones(128, 1)\nx[0] *= 1e100\nout_probs = F.softmax(x, dim=0)\nprint('Negative values:', torch.sum(out_probs < 0).item())\ny = torch.multinomial(out_probs, 1)\n
\n\n

results in the following output

\n\n
Negative values: 0\nRuntimeError: invalid argument 2: invalid multinomial distribution (encountering probability entry < 0) at /pytorch/aten/src/TH/generic/THTensorRandom.cpp:298\n
\n\n

It turns out this is getting triggered because out_probs contains nan entries.

\n\n
print('nan values:', torch.sum(torch.isnan(out_probs)).item())\n
\n\n

gives

\n\n
nan values: 128\n
\n\n

Which are caused by mathematical instabilities in softmax.

\n\n

Strangely, when the values in out_probs are infinite you get the proper error message

\n\n
RuntimeError: invalid argument 2: invalid multinomial distribution (encountering probability entry = infinity or NaN) at /pytorch/aten/src/TH/generic/THTensorRandom.cpp:302\n
\n\n

This bug should probably be reported at https://github.com/pytorch/pytorch/issues if it hasn't been fixed in the most recent version.

\n\n

By the way I'm using PyTorch 1.0.1.post2

\n", + "system": "" + }, + { + "instruction": "How do I compile a pytorch script into an exe file with small size?", + "input": "", + "output": "

pyinstaller produces a kind of cheated .exe: it does not compile the script, but bundles everything needed (including the Python interpreter) into one (or many) files.

\n\n

To really be Python-agnostic you should convert your model using torchscript (read about it here). You will then be able to run your module with the C++ libtorch, without a Python interpreter.

\n", + "system": "" + }, + { + "instruction": "Adding custom labels to pytorch dataloader/dataset does not work for custom dataset", + "input": "", + "output": "

I typically define my own dataset class as follows:

\n\n
from torch.utils.data import DataLoader\nclass DataSet:\n\n    def __init__(self, root):\n        \"\"\"Init function should not do any heavy lifting, but\n            must initialize how many items are available in this data set.\n        \"\"\"\n\n        self.ROOT = root\n        self.images = read_images(root + \"/images\")\n        self.labels = read_labels(root + \"/labels\")\n\n    def __len__(self):\n        \"\"\"return number of points in our dataset\"\"\"\n\n        return len(self.images)\n\n    def __getitem__(self, idx):\n        \"\"\" Here we have to return the item requested by `idx`\n            The PyTorch DataLoader class will use this method to make an iterable for\n            our training or validation loop.\n        \"\"\"\n\n        img = self.images[idx]\n        label = self.labels[idx]\n\n        return img, label\n
\n\n

And now, you can create an instance of this class as,

\n\n
ds = DataSet('../input/train')\n
\n\n

Now, you can instantiate the DataLoader:

\n\n
dl = DataLoader(ds, batch_size=TRAIN_BATCH_SIZE, shuffle=False, num_workers=4, drop_last=True)\n
\n\n

This will create batches of your data that you can access as:

\n\n
for image, label in dl:\n    print(label)\n
\n", + "system": "" + }, + { + "instruction": "Transfer learning in Pytorch using fasterrcnn_resnet50_fpn", + "input": "", + "output": "

The error message says it all. You need to pass in a pair of image, target to train your model, where target is a dictionary that contains information about the bounding boxes, labels and masks.

\n\n
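For reference, here is a schematic of the target dictionary the detection models expect. The values below are made up, and plain lists stand in for the torch tensors real code would use; this just shows the expected keys and shapes:

```python
# Schematic (hypothetical values): each target pairs N boxes with N labels.
target = {
    "boxes": [[10.0, 20.0, 110.0, 220.0]],  # N x 4 in (x1, y1, x2, y2) format
    "labels": [3],                          # N class ids
    # "masks": N x H x W binary masks, needed only for Mask R-CNN variants
}
assert len(target["boxes"]) == len(target["labels"])
```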

For more information and a comprehensive tutorial, take a look at https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html

\n", + "system": "" + }, + { + "instruction": "Partial slices in pytorch / numpy with arbitrary and variable number of dimensions", + "input": "", + "output": "

Answer:\nMaking a tuple of slice objects does the trick:

\n\n
a[(slice(1,None),) * len(a.shape)]\n
\n\n

Explanation:\nslice is a builtin python class (not tied to numpy or pytorch) which provides an alternative to the subscript notation for describing slices. The answer to a different question suggests using this as a way to store slice information in python variables. The python glossary points out that

\n\n
\n

The bracket (subscript) notation uses slice objects internally.

\n
\n\n

Since the __getitem__ methods for numpy ndarrays and pytorch tensors support multi-dimensional indexing with slices, they must also support multi-dimensional indexing with slice objects, and so we can make a tuple of those slice objects of the right length.

\n\n

Btw, you can see how python uses the slice objects by creating a dummy class as follows and then do slicing on it:

\n\n
class A(object):\n    def __getitem__(self, ix):\n        return ix\n\nprint(A()[5])  # 5\nprint(A()[1:])  # slice(1, None, None)\nprint(A()[1:,1:])  # (slice(1, None, None), slice(1, None, None))\nprint(A()[1:,slice(1,None)])  #  (slice(1, None, None), slice(1, None, None))\n\n\n
\n", + "system": "" + }, + { + "instruction": "Is it possible to make a max-pooling on dynamic length sentences without padding in Pytorch?", + "input": "", + "output": "

Yes.

\n\n

You can simply apply MaxPool1d() to inputs of varying sizes, and they will (probably) be padded automatically. Note that, unfortunately, you need to switch up your dimensions, as it will by default pool along the third dimension.

\n\n

Here is a short working example:

\n\n
import torch\nimport torch.nn as nn\n\nop = nn.MaxPool1d(5) # kernel size of 5\nx1 = torch.zeros([1,16,1])\nx2 = torch.zeros([1,16,3])\n\nop(x1) # works\n\nop(x2) # works as well\n
\n\n

To get your tensor into the correct dimension, you can use .view() in combination with .unsqueeze(). Assuming your given x1 has dimension (2,16), simply do

\n\n
x1.view(16,2).unsqueeze(dim=0)\n
\n\n

and afterwards perform the reverse transformation by

\n\n
result_x1 = op(x1).view(1,16) # only one value left due to pooling\n
\n", + "system": "" + }, + { + "instruction": "How can I recover transformation that was applied in the DataLoader?", + "input": "", + "output": "

I don't think there is an API to get the applied transforms.

\n\n

Can you just reimplement the __call__ methods of the randomized transforms so they also log what was applied? (It's just a couple of lines: https://pytorch.org/docs/stable/_modules/torchvision/transforms/transforms.html#RandomHorizontalFlip)

\n", + "system": "" + }, + { + "instruction": "How to sum based off index vector", + "input": "", + "output": "

You are looking for index_add_()

\n\n
A.index_add_(0, B, C)\n
\n\n
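The semantics can be sketched with numpy's analogous scatter-add, np.add.at, using made-up numbers; note how duplicated indices in B accumulate:

```python
import numpy as np

# Hypothetical example of index_add_-style accumulation:
# C[i] is added into A[B[i]], and repeated indices in B sum up.
A = np.zeros(3)
B = np.array([0, 2, 0, 1])            # index vector (torch.long in PyTorch)
C = np.array([1.0, 2.0, 3.0, 4.0])

np.add.at(A, B, C)
assert A.tolist() == [4.0, 4.0, 2.0]  # A[0] = 1 + 3, A[1] = 4, A[2] = 2
```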

Note that B should be of type torch.long (it is an index vector), and C should be of type torch.float, same as the type of A.
\nMoreover, you can use the first dim argument to do this summation along different dimensions in case A and C are multi-dimensional tensors.

\n", + "system": "" + }, + { + "instruction": "Getting gradient of vectorized function in pytorch", + "input": "", + "output": "

Here you can find relevant discussion about your error.

\n\n

In essence, when you call backward() without arguments it is implicitly converted to backward(torch.Tensor([1])), where torch.Tensor([1]) is the output value with respect to which gradients are calculated.

\n\n

If you pass 4 (or more) inputs, each needs a value with respect to which you calculate gradient. You can pass torch.ones_like explicitly to backward like this:

\n\n
import torch\n\nx = torch.tensor([4.0, 2.0, 1.5, 0.5], requires_grad=True)\nout = torch.sin(x) * torch.cos(x) + x.pow(2)\n# Pass tensor of ones, each for each item in x\nout.backward(torch.ones_like(x))\nprint(x.grad)\n
\n", + "system": "" + }, + { + "instruction": "How to implement low-dimensional embedding layer in pytorch", + "input": "", + "output": "

Assuming your input vectors are one-hot (which is where \"embedding layers\" are used), you can directly use the embedding layer from torch, which does the above as well as some more things. nn.Embedding takes the non-zero index of the one-hot vector as input, as a long tensor. For example: if the feature vector is

\n\n
f = [[0,0,1], [1,0,0]]\n
\n\n

then the input to nn.Embedding will be

\n\n

input = [2, 0]

\n\n

However, what the OP asked about in the question is getting embeddings by matrix multiplication, and below I will address that. You can define a module to do that as below. Since param is an instance of nn.Parameter, it will be registered as a parameter and will be optimized when you call Adam or any other optimizer.

\n\n
class Embedding(nn.Module):\n    def __init__(self, input_dim, embedding_dim):\n        super().__init__()\n        self.param = torch.nn.Parameter(torch.randn(input_dim, embedding_dim))\n\n    def forward(self, x):\n        return torch.mm(x, self.param)\n
\n\n

If you look carefully, this is the same as a linear layer with no bias and slightly different initialization. Therefore, you can achieve the same by using a linear layer as below.

\n\n
self.embedding = nn.Linear(4096, 20, bias=False)\n# change initial weights to normal[0,1] or whatever is required\nembedding.weight.data = torch.randn_like(embedding.weight) \n
\n", + "system": "" + }, + { + "instruction": "PyTorch specify model parameters", + "input": "", + "output": "

Just wrap the learnable parameter with nn.Parameter (requires_grad=True is the default, no need to specify this), and have the fixed weight as a Tensor without nn.Parameter wrapper.

\n\n

All nn.Parameter weights are automatically added to net.parameters(), so when you do training like optimizer = optim.SGD(net.parameters(), lr=0.01), the fixed weight will not be changed.

\n\n

So basically this:

\n\n
weights_fixed = W1\nweights_guess = nn.Parameter(W2)\n
\n", + "system": "" + }, + { + "instruction": "Display a tensor image in matplotlib", + "input": "", + "output": "

You are trying to apply numpy.transpose to a torch.Tensor object, thus calling tensor.transpose instead.

\n\n

You should convert flower_tensor_image to numpy first, using .numpy()

\n\n
axs = imshow(flower_tensor_image.detach().cpu().numpy(), ax = plt)\n
\n", + "system": "" + }, + { + "instruction": "packed_padded_sequence gives error when used with GPU", + "input": "", + "output": "

I assume you are using the GPU, probably on Google Colab. Check your device:

\n
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")\ndevice\n
\n

You may solve this error by downgrading the torch version. If you are using Colab, the following command will help you:

\n
!pip install torch==1.6.0 torchvision==0.7.0\n
\n

Once you downgrade torch, this padded-length error will go away.

\n", + "system": "" + }, + { + "instruction": "How does the parameter 'dim' in torch.unique() work?", + "input": "", + "output": "

I was confused the first time I used torch.unique. After doing some experiments, I finally figured out how the dim argument works.\nThe docs of torch.unique say:

\n
\n

counts (Tensor): (optional) if return_counts is True, there will be an additional returned tensor (same shape as output or output.size(dim), if dim was specified) representing the number of occurrences for each unique value or tensor.

\n
\n
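The same semantics can be sketched with numpy's analogous np.unique(axis=...), using made-up data: slices along the given axis are compared as whole sub-arrays, and duplicate slices collapse.

```python
import numpy as np

# Each slice along `axis` is compared as a whole; duplicate slices are removed.
x = np.array([[1, 2],
              [1, 2],
              [3, 4]])

rows = np.unique(x, axis=0)   # the duplicate row [1, 2] collapses
assert rows.tolist() == [[1, 2], [3, 4]]

cols = np.unique(x, axis=1)   # both columns differ, so nothing collapses
assert cols.shape == (3, 2)
```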

For example, if your input tensor is a 3D tensor of size n x m x k and dim=2, unique will compare the k matrices of size n x m with each other and remove the duplicates. In other words, it treats each slice along dim as a single element, flattening all the other dimensions for the comparison.

\n", + "system": "" + }, + { + "instruction": "Pytorch: Learnable threshold for clipping activations", + "input": "", + "output": "

There are two problems with that code.

\n\n
    \n
  1. The implementation-level one is that you're using an in-place operation which generally doesn't work well with autograd. Instead of
  2. \n
\n\n

relu1[relu1 > self.act_max] = self.act_max

\n\n

you should use an out-of-place operation like

\n\n

relu1 = torch.where(relu1 > self.act_max, self.act_max, relu1)

\n\n
    \n
  1. The other is more general : neural networks are generally trained with gradient descent methods and threshold values can have no gradient - the loss function is not differentiable with respect to thresholds.
  2. \n
\n\n

In your model you're using a dirty workaround (whether you write it as-is or use torch.where) - model.act_max.grad.data is only defined because for some elements their value is set to model.act_max. But this gradient knows nothing about why they were set to that value. To make things more concrete, let's define a cutoff operation C(x, t) which indicates whether x is above or below the threshold t

\n\n

C(x, t) = 1 if x < t else 0

\n\n

and write your clipping operation as a product

\n\n

clip(x, t) = C(x, t) * x + (1 - C(x, t)) * t

\n\n
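A quick plain-Python check of this identity: since C(x, t) = 1 exactly when x < t, the product form reduces to min(x, t), i.e. values below the threshold pass through and the rest clamp:

```python
# C(x, t) = 1 if x < t else 0, and clip(x, t) = C*x + (1 - C)*t == min(x, t)
def C(x, t):
    return 1 if x < t else 0

def clip(x, t):
    return C(x, t) * x + (1 - C(x, t)) * t

for x in [-2.0, 0.5, 1.0, 3.7]:
    assert clip(x, 1.0) == min(x, 1.0)
```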

You can then see that the threshold t has a twofold meaning: it controls when to cut off (inside C) and it controls the value above the cutoff (the trailing t). We can therefore generalize the operation as

\n\n

clip(x, t1, t2) = C(x, t1) * x + (1 - C(x, t1)) * t2

\n\n

The problem with your operation is that it is only differentiable with respect to t2 but not t1. Your solution ties the two together so that t1 == t2, but it is still the case that gradient descent will act as if the cutoff point itself never changes - only the above-the-threshold value does.

\n\n

For this reason, in general your thresholding operation may not be learning the value you would hope it learns. This is something to keep in mind when developing your operations, but it is not a guarantee of failure - in fact, if you consider the standard ReLU on the biased output of some linear unit, we get a similar picture. We define the cutoff operation H

\n\n

H(x, t) = 1 if x > t else 0

\n\n

and ReLU as

\n\n

ReLU(x + b, t) = (x + b) * H(x + b, t) = (x + b) * H(x, t - b)

\n\n

where we could again generalize to

\n\n

ReLU(x, b, t) = (x + b) * H(x, t)

\n\n

and again we can only learn b and t is implicitly following b. Yet it seems to work :)

\n", + "system": "" + }, + { + "instruction": "Why does this error pop up while working with Deep Q learning?", + "input": "", + "output": "

Based on the documentation, you didn't specify num_samples for the multinomial function when drawing from your multinomial distribution.

\n\n
\n

torch.multinomial(input, num_samples, replacement=False, out=None)

\n \n
\n

Returns a tensor where each row contains num_samples\n indices sampled from the multinomial probability distribution located\n in the corresponding row of tensor input.

\n
\n
\n\n

Change the code as below:

\n\n
def select_action(self, state):\n    probs = F.softmax(self.model(Variable(state, volatile = True))*7)\n    action = probs.multinomial(1) # 1 is the number of samples to draw\n    return action.data[0,0]\n
\n", + "system": "" + }, + { + "instruction": "pytorch save a minibatch of 4D tensor as images", + "input": "", + "output": "

If you want to save individual images you could use:

\n\n
import torchvision\n\nfor i in range(tensor.size(0)):\n    torchvision.utils.save_image(tensor[i, :, :, :], '{}.png'.format(i))\n
\n\n

which will save the images as 0.png, 1.png, ..., 7.png (the loop index starts at 0).

\n", + "system": "" + }, + { + "instruction": "Transforms not applying to the dataset", + "input": "", + "output": "

The transforms are applied when the __getitem__ method of the Dataset is called. For example look at the __getitem__ method of the MNIST dataset class: https://github.com/pytorch/vision/blob/master/torchvision/datasets/mnist.py#L62

\n\n
def __getitem__(self, index):\n    \"\"\"\n    Args:\n        index (int): Index\n    Returns:\n        tuple: (image, target) where target is index of the target class.\n    \"\"\"\n    img, target = self.data[index], self.targets[index]\n\n    # doing this so that it is consistent with all other datasets\n    # to return a PIL Image\n    img = Image.fromarray(img.numpy(), mode='L')\n\n    if self.transform is not None:\n        img = self.transform(img)\n\n    if self.target_transform is not None:\n        target = self.target_transform(target)\n\n    return img, target\n
\n\n

The __getitem__ method gets called when you index your MNIST instance for the training set, e.g.:

\n\n
trainset[0]\n
\n\n

For more information on __getitem__: https://docs.python.org/3.6/reference/datamodel.html#object.getitem

\n\n

The reason why Resize and RandomHorizontalFlip should be before ToTensor is that they act on PIL Images and all the datasets in Pytorch for consistency load the data as PIL Images first. In fact you can see that here they force that behavior through:

\n\n
img = Image.fromarray(img.numpy(), mode='L')\n
\n\n

Once you have the PIL Image of the corresponding index, the transforms are applied with

\n\n
if self.transform is not None:\n    img = self.transform(img)\n
\n\n

ToTensor transforms the PIL Image to a torch.Tensor and Normalize subtracts the mean and divides by the standard deviation you provide.

\n\n

Eventually some transforms are applied to the label with

\n\n
if self.target_transform is not None:\n    target = self.target_transform(target)\n
\n\n

Finally the processed image and the processed label are returned. All of this happens in a single trainset[key] call.

\n\n
import torch\nfrom torchvision.transforms import *\nfrom torchvision.datasets import MNIST\nfrom torch.utils.data import DataLoader\n\ntransform_train = Compose([Resize(28, interpolation=2),\n                           RandomHorizontalFlip(p=0.5),\n                           ToTensor(),\n                           Normalize([0.], [1.])])\n\ntrainset = MNIST(root='./data', train=True, download=True,\n                 transform=transform_train)\ntrainloader = DataLoader(trainset, batch_size=32, shuffle=True, num_workers=2)\nprint(trainset[0][0].size(), trainset[0][0].min(), trainset[0][0].max())\n
\n\n

shows

\n\n
(torch.Size([1, 28, 28]), tensor(0.), tensor(1.))\n
\n", + "system": "" + }, + { + "instruction": "PyTorch element-wise filter layer", + "input": "", + "output": "

In pytorch you can always implement your own layers, by making them subclasses of nn.Module. You can also have trainable parameters in your layer, by using nn.Parameter.
\nA possible implementation of such a layer might look like

\n
import torch\nfrom torch import nn\n\nclass TrainableEltwiseLayer(nn.Module):\n    def __init__(self, n, h, w):\n        super(TrainableEltwiseLayer, self).__init__()\n        self.weights = nn.Parameter(torch.Tensor(1, n, h, w))  # define the trainable parameter\n\n    def forward(self, x):\n        # assuming x is of size b-n-h-w\n        return x * self.weights  # element-wise multiplication\n
\n

You still need to worry about initializing the weights; look into nn.init for ways to do it. Usually, one initializes the weights of the entire net prior to training and prior to loading any stored model (so partially trained models can override the random init). Something like

\n
model = mymodel(*args, **kwargs)  # instantiate a model\nfor m in model.modules():\n    if isinstance(m, nn.Conv2d):\n        nn.init.normal_(m.weight.data)  # init for conv layers (note: the attribute is `weight`)\n    if isinstance(m, TrainableEltwiseLayer):\n        nn.init.constant_(m.weights.data, 1)  # init your weights here...\n
\n", + "system": "" + }, + { + "instruction": "Understanding PyTorch Bernoulli distribution from the documention", + "input": "", + "output": "

Well, Bernoulli is a probability distribution. Specifically, torch.distributions.Bernoulli() samples from the distribution and returns a binary value (i.e. either 0 or 1). Here, it returns 1 with probability p and returns 0 with probability 1-p.

\n\n

Below example will make the understanding clear:

\n\n
In [141]: m =  torch.distributions.Bernoulli(torch.tensor([0.63]))\n\nIn [142]: m.sample() # 63% chance 1; 37% chance 0\nOut[142]: tensor([ 0.])\n\nIn [143]: m.sample() # 63% chance 1; 37% chance 0\nOut[143]: tensor([ 1.])\n\nIn [144]: m.sample() # 63% chance 1; 37% chance 0\nOut[144]: tensor([ 0.])\n\nIn [145]: m.sample() # 63% chance 1; 37% chance 0\nOut[145]: tensor([ 0.])\n\nIn [146]: m.sample() # 63% chance 1; 37% chance 0\nOut[146]: tensor([ 1.])\n\nIn [147]: m.sample() # 63% chance 1; 37% chance 0\nOut[147]: tensor([ 1.])\n\nIn [148]: m.sample() # 63% chance 1; 37% chance 0\nOut[148]: tensor([ 1.])\n\nIn [149]: m.sample() # 63% chance 1; 37% chance 0\nOut[149]: tensor([ 1.])\n\nIn [150]: m.sample() # 63% chance 1; 37% chance 0\nOut[150]: tensor([ 1.])\n\nIn [151]: m.sample() # 63% chance 1; 37% chance 0\nOut[151]: tensor([ 1.])\n
\n\n
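The Law of Large Numbers bit can be checked with a plain-Python Monte-Carlo sketch (no torch needed): the empirical fraction of 1s approaches p as the sample count grows.

```python
import random

random.seed(0)                 # fixed seed so the run is reproducible
p = 0.63
n = 100_000
ones = sum(random.random() < p for _ in range(n))

# With 100k draws the empirical rate lands within about 1% of p = 0.63
assert abs(ones / n - p) < 0.01
```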

So, we sampled it 10 times, out of which we got a 1 seven times, which is roughly the expected 63%. We would need to sample a sufficiently large number of times to approach the exact percentages of 37 and 63 for 0s and 1s respectively; this is the Law of Large Numbers.

\n", + "system": "" + }, + { + "instruction": "Need help understand this implementation of ConvLSTM code in pytorch", + "input": "", + "output": "

Yes, you are right: the output size is multiplied by 4 because there are 4 gates.

\n", + "system": "" + }, + { + "instruction": "How to make the trainloader use a specific amount of images?", + "input": "", + "output": "

You can wrap the class DatasetFolder (or ImageFolder) in another class to limit the dataset:

\n
class LimitDataset(data.Dataset):\n    def __init__(self, dataset, n):\n        self.dataset = dataset\n        self.n = n\n\n    def __len__(self):\n        return self.n\n\n    def __getitem__(self, i):\n        return self.dataset[i]\n
\n

You can also define some mapping between the index in LimitDataset and the index in the original dataset to define more complex behavior (such as random subsets).

\n
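As a sketch of such a mapping (hypothetical class name, pure Python, with a plain list standing in for the wrapped dataset), a fixed random sample of indices yields a random subset:

```python
import random

class RandomSubsetDataset:
    """Like LimitDataset, but maps index i to a random original index."""
    def __init__(self, dataset, n, seed=0):
        self.dataset = dataset
        rng = random.Random(seed)
        self.indices = rng.sample(range(len(dataset)), n)  # fixed random subset

    def __len__(self):
        return len(self.indices)

    def __getitem__(self, i):
        return self.dataset[self.indices[i]]

full = list(range(100))              # stand-in for an ImageFolder dataset
subset = RandomSubsetDataset(full, 10)
assert len(subset) == 10
assert all(subset[i] in full for i in range(10))
```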

If you want to limit the batches per epoch instead of the dataset size:

\n
from itertools import islice\nfor data in islice(dataloader, 0, batches_per_epoch):\n    ...\n
\n

Note that if you shuffle, the dataset size stays the same, but the data each epoch sees is limited to batches_per_epoch batches (a different subset every epoch). If you don't shuffle, this effectively also limits the dataset size, since every epoch sees the same leading batches.

\n", + "system": "" + }, + { + "instruction": "Pytorch: Gradient of output w.r.t parameters", + "input": "", + "output": "

You're trying to compute a Jacobian of a function, while PyTorch is expecting you to compute vector-Jacobian products. You can see an in-depth discussion of computing Jacobians with PyTorch here.

\n\n
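To see why a stack of vector-Jacobian products recovers the full Jacobian, here is a numpy sketch for a linear map f(x) = J x, whose vjp is exactly v^T J; feeding one-hot vectors returns J row by row:

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((20, 4))      # 20 outputs, 4 inputs

def vjp(v):
    # stand-in for out.backward(v): for f(x) = J @ x this computes v^T J
    return v @ J

rows = []
for i in range(20):
    e = np.zeros(20)
    e[i] = 1.0                        # one-hot vector selecting output i
    rows.append(vjp(e))

assert np.allclose(np.stack(rows), J) # 20 vjps reassemble the Jacobian
```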

You have two options. Your first option is to use JAX or autograd and use the jacobian() function. Your second option is to stick with PyTorch and compute 20 vector-Jacobian products, by calling backward(vec) 20 times, where vec is a length-20 one-hot vector whose single nonzero component ranges over indices 0 to 19. If this is confusing, I recommend reading the autodiff cookbook from the JAX tutorials.

\n", + "system": "" + }, + { + "instruction": "PyTorch Getting Custom Loss Function Running", + "input": "", + "output": "

You are subclassing nn.Module to define a function, in your case Loss function. So, when you compute loss.backward(), it tries to store the gradients in the loss itself, instead of the model and there is no variable in the loss for which to store the gradients. Your loss needs to be a function and not a module. See Extending autograd.

\n\n

You have two options here -

\n\n
    \n
  1. The easiest one is to directly pass cust_loss function as criterion parameter to train_model.
  2. \n
  3. You can extend torch.autograd.Function to define the custom loss (and if you wish, the backward function as well).
  4. \n
\n\n

P.S. - It is mentioned that you need to implement the backward of custom loss functions. This is not always the case; it is required only when your loss function is non-differentiable at some point. But I do not think you'll need to do that.

\n", + "system": "" + }, + { + "instruction": "How to form a sequence of consecutive numbers in Pytorch?", + "input": "", + "output": "

You can directly use the arange method from Pytorch.

\n\n
torch_v = torch.arange(1,n)\n
\n\n
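One thing to keep in mind: torch.arange follows the same end-exclusive convention as Python's range, so arange(1, n) stops at n-1 and you need n + 1 if the sequence should include n. A plain-Python illustration of the convention:

```python
n = 5
# the end point is exclusive, exactly like Python's built-in range
assert list(range(1, n)) == [1, 2, 3, 4]
assert list(range(1, n + 1)) == [1, 2, 3, 4, 5]
```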

Reference: https://pytorch.org/docs/master/torch.html?highlight=arange#torch.arange

\n", + "system": "" + }, + { + "instruction": "Very weird behaviour when running the same deep learning code in two different GPUs", + "input": "", + "output": "

Q: can you please tell what type of experiment this is.. and what architecture of NN you use ?

\n\n

In the tips below, I will assume you are running a straight backpropagation neural net.

\n\n\n", + "system": "" + }, + { + "instruction": "How to select index over two dimension in PyTorch?", + "input": "", + "output": "

Does it work for you?

\n\n
import torch\n\na = torch.randn(3, 2, 4, 5)\nprint(a.size())\n\nb = [a[2, :, 0, :], a[1, :, 1, :], a[2, :, 2, :], a[0, :, 3, :]]\nb = torch.stack(b, 0)\n\nprint(b.size()) # torch.Size([4, 2, 5])\n
\n", + "system": "" + }, + { + "instruction": "Pytorch: Sparse Matrix multiplcation", + "input": "", + "output": "

spmm has been moved from torch module to torch.sparse module. For official documentation please check this link. There is also a warning in the beginning of the documentation of torch.sparse module:

\n\n
\n

This API is currently experimental and may change in the near future.

\n
\n", + "system": "" + }, + { + "instruction": "Does tf.math.reduce_max allows gradient flow like torch.max?", + "input": "", + "output": "

Yes, tf.math.reduce_max does allow gradients to flow. It is easy to check (this is TensorFlow 2.x but it is the same result in 1.x):

\n
import tensorflow as tf\n\nwith tf.GradientTape() as tape:\n    x = tf.linspace(0., 2. * 3.1416, 10)\n    tape.watch(x)\n    # A sequence of operations involving reduce_max\n    y = tf.math.square(tf.math.reduce_max(tf.math.sin(x)))\n# Check gradients\ng = tape.gradient(y, x)\nprint(g.numpy())\n# [ 0.         0.         0.3420142 -0.        -0.        -0.\n#  -0.         0.         0.         0.       ]\n
\n

As you can see, there is a valid gradient for y with respect to x. Only one of the entries is nonzero, because it corresponds to the element that produced the maximum, so it is the only value in x that affects y. This is the correct gradient for the operation.

\n", + "system": "" + }, + { + "instruction": "PyTorch RuntimeError: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED", + "input": "", + "output": "

This was a bug in PyTorch. To solve, find and go to python3.xx\\site-packages\\torch\\cuda\\__init__.py and modify this function:

\n

Remove or comment out this old function:

\n
'''\n@lru_cache(maxsize=1)\ndef device_count() -> int:\n    r"""Return the number of GPUs available."""\n    if not _is_compiled():\n        return 0\n    # bypass _device_count_nvml() if rocm (not supported)\n    nvml_count = -1 if torch.version.hip else _device_count_nvml()\n    return torch._C._cuda_getDeviceCount() if nvml_count < 0 else nvml_count\n'''\n
\n

And replace it with this new code:

\n
_cached_device_count: Optional[int] = None\n\ndef device_count() -> int:\n    r"""Return the number of GPUs available."""\n    global _cached_device_count\n    if not _is_compiled():\n        return 0\n    if _cached_device_count is not None:\n        return _cached_device_count\n    # Check if using ROCm (HIP)\n    if torch.version.hip:\n        nvml_count = -1  # Assuming ROCm is not supported, set nvml_count to -1\n    else:\n        nvml_count = _device_count_nvml()  # Use NVML for NVIDIA GPUs\n    r = torch._C._cuda_getDeviceCount() if nvml_count < 0 else nvml_count\n\n    # NB: Do not cache the device count prior to CUDA initialization, because\n    # the number of devices can change due to changes to CUDA_VISIBLE_DEVICES\n    # setting prior to CUDA initialization.\n    if _cached_device_count is None and _initialized:\n        _cached_device_count = r\n    return r\n
\n", + "system": "" + }, + { + "instruction": "Is there any reason to use torchvision.Compose over torch.nn.Sequential?", + "input": "", + "output": "

The following works with Compose but not with Sequential:

\n
transform = transforms.Compose([transforms.ToTensor(), transforms.CenterCrop((512,512))])\n
\n

When using Sequential, you'd get the error

\n
\\site-packages\\torch\\nn\\modules\\module.py", line 596, in add_module\n    raise TypeError("{} is not a Module subclass".format(\nTypeError: torchvision.transforms.transforms.ToTensor is not a Module subclass\n
\n

I conclude that torch.nn.Sequential only works with subclasses of torch.nn.Module.

\n", + "system": "" + }, + { + "instruction": "Torchmetrics Multiclass F1Score same results as Accuracy", + "input": "", + "output": "

Apparently there's a documentation bug.

\n

The fix is to always explicitly state which 'average' you want.

\n", + "system": "" + }, + { + "instruction": "Why cant I set TrainingArguments.device in Huggingface?", + "input": "", + "output": "

There is a parameter in TrainingArguments called no_cuda. If you set that to True, training will take place on the CPU even if you have a GPU in your setup. For example, the following code worked for me:

\n
    # initialize the trainer\n    training_args = TrainingArguments(\n        output_dir='./results',\n        num_train_epochs=1,\n        per_device_train_batch_size=1,\n        fp16=False,\n        warmup_steps=500,\n        weight_decay=0.01,\n        logging_dir='./logs',\n        prediction_loss_only=True,\n        no_cuda=True\n    )\n    trainer = Trainer(\n        model,\n        training_args,\n        train_dataset=tokenized_dataset["train"],\n    )\n\n    # execute the training!\n    trainer.train()\n
\n", + "system": "" + }, + { + "instruction": "PyTorch installation for CUDA 12 on win10 64 bit", + "input": "", + "output": "

One option is installing the nightly build that supports Python 3.11 and CUDA 12.1:

\n
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121 \n
\n", + "system": "" + }, + { + "instruction": "Accuracy value more than 1 with nn.BCEWithLogitsLoss() loss function pytorch in Binary Classifier", + "input": "", + "output": "

I didn't understand why you are using torch.max as you have one output. Anyway, you should use squeeze before comparing, so this line:

\n
running_corrects += torch.sum(preds == labels.data)\n
\n

should become

\n
 running_corrects += torch.sum(preds == labels.squeeze())\n
\n

to see why:

\n
labels = torch.tensor([[0],\n        [0],\n        [0],\n        [1]])\n\npreds = torch.tensor([0, 0, 0, 0])\nprint(torch.sum(preds == labels)) # output 12\nprint(torch.sum(preds == labels.squeeze())) # output 3\n
\n", + "system": "" + }, + { + "instruction": "How do I extract features from a torchvision VisitionTransfomer (ViT)?", + "input": "", + "output": "

Looking at the forward function in the source code of VisionTransformer and this helpful forum post, I managed to extract the features in the following way:

\n
import torch\nfrom torch import nn\n\nfrom torchvision.models.vision_transformer import vit_b_16\nfrom torchvision.models import ViT_B_16_Weights\n\nfrom PIL import Image as PIL_Image\n\nvit = vit_b_16(weights=ViT_B_16_Weights.DEFAULT)\n\npreprocessing = ViT_B_16_Weights.DEFAULT.transforms()\n\nimg = PIL_Image.open("example.png")\nimg = preprocessing(img)\n\n# Add batch dimension\nimg = img.unsqueeze(0)\n\nfeats = vit._process_input(img)\n\n# Expand the CLS token to the full batch\nbatch_class_token = vit.class_token.expand(img.shape[0], -1, -1)\nfeats = torch.cat([batch_class_token, feats], dim=1)\n\nfeats = vit.encoder(feats)\n\n# We're only interested in the representation of the CLS token that we appended at position 0\nfeats = feats[:, 0]\n\nprint(feats.shape)\n
\n

Which correctly returns:

\n
torch.Size([1, 768])\n
\n

Edit: Depending on the downstream task, it might be better to average the features for all patches instead of taking the features from the CLS token: feats_avg = feats[:, 1:].mean(dim=1).

\n", + "system": "" + }, + { + "instruction": "Stable Diffusion Error: Couldn't install torch / No matching distribution found for torch==1.12.1+cu113", + "input": "", + "output": "

I ran into the same problem and found out that I was using Python 3.11 instead of the version from the instructions, Python 3.10.6. You can uninstall other versions from Programs and Features and edit the environment variables accordingly.

\n", + "system": "" + }, + { + "instruction": "Is it safe to do a ToTensor() data transform before a ColorJitter and RandomHorizotnalFlip and other data augmentation techniques in PyTorch?", + "input": "", + "output": "

One reason why people usually use PIL format instead of tensor to perform image preprocessing is related to the resizing algorithms and especially downsizing. As pointed out in this paper,

\n
\n

The Lanczos, bicubic, and bilinear implementations by PIL (top row) adjust the antialiasing filter width by the downsampling factor (marked as ). Other implementations (including those used for PyTorch-FID and TensorFlow-FID) use fixed filter widths, introducing aliasing artifacts and resemble naive nearest subsampling.

\n
\n\n

image source: clean-fid official github repo

\n

Although torchvision.transforms.Resize now supports the antialias option, as per the documentation, it only reduces the gap between the tensor outputs and the PIL outputs to some extent. Another reason may be related to reproducibility and portability, i.e. having a universal preprocessing pipeline.

\n", + "system": "" + }, + { + "instruction": "Dimension error by using Patch Embedding for video processing", + "input": "", + "output": "

The input tensor has shape [batch=16, channels=3, frames=16, H=224, W=224], while Rearrange expects the dimensions in the order [b t c h w]. The slot reserved for channels therefore receives the frames dimension (16 instead of 3), which leads to a last dimension of (p1 * p2 * c) = 16 * 16 * 16 = 4096.

\n

Please try to align positions of channels and frames:

\n
import torch\nfrom torch import nn\nfrom einops.layers.torch import Rearrange\n\npatch_size = 16\ndim = 192\n\nb, f, c, h, w = 16, 16, 3, 224, 224\ninput_tensor = torch.randn(b, f, c, h, w)\n\npatch_dim = c * patch_size ** 2\n\nm = nn.Sequential(\n    Rearrange('b t c (h p1) (w p2) -> b t (h w) (p1 p2 c)', p1=patch_size, p2=patch_size),\n    nn.Linear(patch_dim, dim)\n)\n\nprint(m(input_tensor).size())\n
\n

Output:

\n
torch.Size([16, 16, 196, 192])\n
\n", + "system": "" + }, + { + "instruction": "FutureWarning: Passing a set as an indexer is deprecated and will raise in a future version", + "input": "", + "output": "

You are trying to access new_df with predictors, which is a set.\nConvert it to a list.

\n

example:

\n
print(new_df[list(predictors)].head())\n
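A minimal runnable sketch of the same fix (the frame and column names here are made up, not from the original question):

```python
import pandas as pd

# Hypothetical small frame; 'predictors' mirrors the set from the question.
new_df = pd.DataFrame({"a": [1, 2], "b": [3, 4], "c": [5, 6]})
predictors = {"a", "b"}

# new_df[predictors] triggers the FutureWarning; a list indexer is fine.
subset = new_df[list(predictors)]
print(subset.shape)  # (2, 2)
```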
\n", + "system": "" + }, + { + "instruction": "`torch.cuda.Event()` throws error `TypeError: object.__new__() takes exactly one argument (the type to instantiate)`", + "input": "", + "output": "

I faced this problem several times, on different machines. Each time, the program was running on CPU and not GPU as it was supposed to. Ensuring that it was running on GPU solved the problem for me.

\n

I didn't manage to find a solution to make it work on CPU.

\n", + "system": "" + }, + { + "instruction": "Batch Kronecker product of tensors", + "input": "", + "output": "
x = torch.randn(100,10,10) \ny = torch.randn(100,2,2)\nz = torch.einsum('iab,icd->iacbd', x, y).view((100, 20, 20))\n
\n

which is verified below as equivalent to taking Kronecker product over the last two dimensions:

\n
for xx, yy, zz in zip(x, y, z):\n    assert zz.allclose(torch.kron(xx, yy))\n
\n", + "system": "" + }, + { + "instruction": "PyTorch: Load/unpickle only the state dict of a model saved with torch.save", + "input": "", + "output": "

Referring to the documentation,

\n
\n

only layers with learnable parameters (convolutional layers, linear\nlayers, etc.) and registered buffers (batchnorm\u2019s running_mean) have\nentries in the model\u2019s state_dict.

\n
\n

Your patched_features is not stored in the model\u2019s state_dict, but you can do so by registering it with register_parameter as pointed out by ptrblck in this post but it may involve retraining.
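To see the difference, here is a minimal sketch (the module and attribute names are made up for illustration): a plain tensor attribute does not appear in state_dict, while a registered buffer (or parameter) does.

```python
import torch
from torch import nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)
        # Plain tensor attribute: NOT saved in state_dict
        self.patched_features = torch.randn(4)
        # Registered buffer: saved in state_dict (no training required)
        self.register_buffer("saved_features", torch.randn(4))

m = MyModel()
keys = sorted(m.state_dict().keys())
print(keys)  # ['fc.bias', 'fc.weight', 'saved_features']
```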

\n", + "system": "" + }, + { + "instruction": "TypeError: setup() got an unexpected keyword argument 'stage'", + "input": "", + "output": "

You need to add an extra argument stage=None to your setup method:

\n
def setup(self, stage=None):\n    self.train_dataset = BioQADataset(\n        self.train_df,\n        self.tokenizer,\n        self.source_max_token_len,\n        self.target_max_token_len\n    )\n\n    self.test_dataset = BioQADataset(\n        self.test_df,\n        self.tokenizer,\n        self.source_max_token_len,\n        self.target_max_token_len\n    )\n
\n

I've played with Pytorch Lightning myself for multi-GPU training here. Although some of the code is a bit outdated (metrics are a standalone module now), you might find it useful.

\n", + "system": "" + }, + { + "instruction": "Python Runtime Error during enumerating 'DataLoader' in for loop", + "input": "", + "output": "

Maybe this doc can help you solve it. On Windows, please wrap your iteration loop in an if __name__ == '__main__': guard.

\n", + "system": "" + }, + { + "instruction": "RuntimeError: DataLoader worker (pid(s) 15876, 2756) exited unexpectedly", + "input": "", + "output": "

Set num_workers=0.\nOn Windows, due to multiprocessing restrictions, setting num_workers > 0 leads to errors. This is expected.

\n

There is an issue on Github too:

\n", + "system": "" + }, + { + "instruction": "PyTorch tokenizers: how to truncate tokens from left?", + "input": "", + "output": "

As pointed out by andrea in the comments, you can use truncation_side='left' when initialising the tokenizer. You can also set this attribute after tokenizer creation:

\n
tokenizer.truncation_side = 'left'  # Default is 'right'\n
\n

The tokenizer internally takes care of the rest and truncates based on the max_len argument. Alternatively, if you need to use a transformers version which does not have this feature, you can tokenize without truncation and implement the following custom logic as a postprocessing step:

\n
tokens = tokens[-max_len:]\nattn_mask = attn_mask[-max_len:]\n
\n", + "system": "" + }, + { + "instruction": "ValueError: only one element tensors can be converted to Python scalars when converting list to float Torch tensor", + "input": "", + "output": "

Use torch.stack():

\n
X = torch.stack(X)\n
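For example (a minimal sketch, assuming X is a list of equally-shaped tensors):

```python
import torch

# A list of tensors cannot be passed to torch.tensor() directly,
# but torch.stack() concatenates them along a new leading dimension.
X = [torch.tensor([1.0, 2.0]), torch.tensor([3.0, 4.0])]
stacked = torch.stack(X)
print(stacked.shape)  # torch.Size([2, 2])
```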
\n", + "system": "" + }, + { + "instruction": "How can I retrieve elements in a multidimensional pytorch tensor by a list of indices?", + "input": "", + "output": "

I imagine you tried something like

\n
indices = scores.argmax(dim=1)\nselection = lists[:, indices]\n
\n

This does not work because the indices are selected for every element in dimension 0, so the final shape is (x, x, 4).

\n

To perform the correct selection, you need to replace the slice with a range.

\n
indices = scores.argmax(dim=1)\nselection = lists[range(indices.size(0)), indices]\n
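As a sanity check with concrete (made-up) shapes, and an equivalent formulation with torch.gather:

```python
import torch

lists = torch.arange(24.0).reshape(3, 2, 4)            # (batch, options, features)
scores = torch.tensor([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])

indices = scores.argmax(dim=1)                         # tensor([1, 0, 1])
selection = lists[range(indices.size(0)), indices]     # shape (3, 4)

# Equivalent selection using torch.gather along dim=1:
idx = indices.view(-1, 1, 1).expand(-1, 1, lists.size(2))
alt = torch.gather(lists, 1, idx).squeeze(1)
print(torch.equal(selection, alt))  # True
```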
\n", + "system": "" + }, + { + "instruction": "Cross entropy yields different results for vectors with identical distributions", + "input": "", + "output": "

From the docs, the input to CrossEntropyLoss "is expected to contain raw, unnormalized scores for each class". Those are typically called logits.

\n

There are two questions:

\n\n", + "system": "" + }, + { + "instruction": "Using Pytorch Dataloader with Probability Distribution", + "input": "", + "output": "

Why don't you simply use WeightedRandomSampler?

\n
weights = [1./(S*j) for j in range(1, N+1)]  # your weights\nsampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)  # num_samples is required\nloader = DataLoader(data, batch_size=20, sampler=sampler)\n
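A runnable version of the same idea (the data and S = 1 are made up here), showing that the sampler plugs straight into a DataLoader:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

N, S = 100, 1
data = TensorDataset(torch.arange(N).float())

# Element j is drawn with probability proportional to 1/(S*j).
weights = [1. / (S * j) for j in range(1, N + 1)]
sampler = WeightedRandomSampler(weights, num_samples=N, replacement=True)
loader = DataLoader(data, batch_size=20, sampler=sampler)

batch = next(iter(loader))[0]
print(batch.shape)  # torch.Size([20])
```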
\n", + "system": "" + }, + { + "instruction": "What's the difference between torch.mean and torch.nn.avg_pool?", + "input": "", + "output": "

With the very example you provided, the result is the same, but only because you specified dim=2 and a kernel_size equal to the size of the third (index 2) dimension.\nBut in principle, you are applying two different functions that sometimes just happen to coincide for specific choices of hyperparameters.

\n

torch.mean is effectively a dimensionality reduction function, meaning that when you average all values across one dimension, you effectively get rid of that dimension.\nOn the other hand, average 1-dimensional pooling is more powerful in this regard, as it gives you a lot more flexibility in choosing kernel size, padding and stride like you would normally do when using a convolutional layer.

\n

You can see the first function as a specific case of 1-d pooling.
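A small sketch of both the coincidence and the difference (the shapes are made up):

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 3, 8)  # (batch, channels, length)

# With kernel_size equal to the full length, pooling reduces to the mean over dim=2.
pooled = F.avg_pool1d(x, kernel_size=8).squeeze(-1)
assert torch.allclose(pooled, x.mean(dim=2))

# With a smaller kernel and a stride, pooling keeps a (shorter) length dimension,
# something torch.mean cannot express.
print(F.avg_pool1d(x, kernel_size=4, stride=2).shape)  # torch.Size([2, 3, 3])
```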

\n", + "system": "" + }, + { + "instruction": "ERROR: vars() argument must have __dict__ attribute when trying to use trainer.train() on custom HF dataset?", + "input": "", + "output": "

I was working with the HyenaDna model. Since my dataset was too big to fit in RAM, I had to implement pagination using an IterableDataset, and faced this error.

\n

It looks like the custom dataset needs to yield a dictionary object from its __iter__(self) method.

\n

What keys, and values should be returned depends on your specific model. For example, here is the relevant part from my CustomDatasets.

\n
class PagingMQTLDataset(IterableDataset):\n  def __init__(self, m_dataset, bert_tokenizer):  # my args\n    # my constructor initializing variables\n    self.dataset = m_dataset\n    self.tokenizer = bert_tokenizer\n    self.seq_len = 4000\n\n  def __iter__(self):\n    for row in self.dataset:\n      processed = self.preprocess(row)\n      if processed is not None:\n        yield processed\n\n  def preprocess(self, row):\n    sequence = row['sequence']  # Fetch the 'sequence' column\n    if len(sequence) != self.seq_len:\n      return None  # skip problematic rows\n    label = row['label']  # Fetch the 'label' column (or whatever target you use)\n    input_ids = self.tokenizer(sequence)["input_ids"]\n    tokenized_tensor = torch.tensor(input_ids)\n    label_tensor = torch.tensor(label)\n    return {"input_ids": tokenized_tensor, "labels": label_tensor}\n
\n

Finally, you run the code like this:

\n
dataset_map = load_dataset("fahimfarhan/mqtl-classification-datasets", streaming=True)\n\ntrain_dataset = PagingMQTLDataset(dataset_map[f"train_binned_{WINDOW}"], tokenizer)\n\n# similarly create validate, test datasets\n\ntrainer = Trainer(\n    model=model,\n    args=training_args,\n    train_dataset=train_dataset,\n    eval_dataset=val_dataset\n)\n\n# train, and validate\nresult = trainer.train()\n
\n", + "system": "" + }, + { + "instruction": "AttributeError: Can't pickle local object 'pre_datasets.<locals>.<lambda>' when implementing Pytorch framework", + "input": "", + "output": "

I had a similar issue and I used dill like this:

\n
import dill as pickle\n
\n

and it worked out of the box!

\n", + "system": "" + }, + { + "instruction": "unable to mmap 1024 bytes - Cannot allocate memory - even though there is more than enough ram", + "input": "", + "output": "

While I do not know how to deal with this problem directly, I had a somewhat similar issue (and solved it). The difference is:

\n\n

From other people's code, I found that they use python -m torch.distributed.launch -- ... to run fairseq-train. I added it to my bash script, and the RuntimeError was gone and training went on.

\n

So I guess, if you can run with 21,000 samples, you may use torch.distributed to split the whole dataset into small batches and distribute them to several workers.

\n", + "system": "" + }, + { + "instruction": "AttributeError in torch_geometric.transforms", + "input": "", + "output": "

It has been renamed to RandomNodeSplit in the latest version of torch_geometric. You can directly use RandomNodeSplit to replace it.

\n", + "system": "" + }, + { + "instruction": "Multi-output regression using skorch & sklearn pipeline gives runtime error due to dtype", + "input": "", + "output": "

By default, OneHotEncoder returns a numpy array of dtype=float64. So one can simply cast the input data X when it is fed into forward() of the model:

\n
class RegressionModule(torch.nn.Module):\n    def __init__(self, input_dim=80):\n        super().__init__()\n        self.l0 = torch.nn.Linear(input_dim, 10)\n        self.l1 = torch.nn.Linear(10, 5)\n\n    def forward(self, X):\n        X = X.to(torch.float32)\n        y = self.l0(X)\n        y = self.l1(y)\n        return y\n
\n", + "system": "" + }, + { + "instruction": "Check if each element of a tensor is contained in a list", + "input": "", + "output": "

Using the torch.isin method is the most convenient way. It's as simple as: torch.isin(A, vals)
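For example (a minimal sketch with made-up values):

```python
import torch

A = torch.tensor([[1, 2], [3, 4]])
vals = torch.tensor([2, 4, 7])

# Elementwise membership test: True where an element of A appears in vals.
mask = torch.isin(A, vals)
print(mask)  # 2 and 4 are in vals, so [[False, True], [False, True]]
```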

\n", + "system": "" + }, + { + "instruction": "while exporting T5 model to onnx using fastT5 getting "RuntimeError:output with shape [5, 8, 1, 2] doesn't match the broadcast shape [5, 8, 2, 2]"", + "input": "", + "output": "

I've checked the repository, it looks like a known issue as reported here : https://github.com/Ki6an/fastT5/issues/1

\n

Developer of the library has posted a solution and created a notebook file here: https://colab.research.google.com/drive/1HuH1Ui3pCBS22hW4djIOyUBP5UW93705?usp=sharing

\n

Solution is to modify modeling_t5.py file, at line 494 :

\n
# Define this at line 426:\nint_seq_length = int(seq_length)\n\n# Change this at line 494:\nposition_bias = position_bias[:, :, -seq_length:, :]\nposition_bias = position_bias[:, :, -int_seq_length:, :]  # Updated version\n
\n

If you don't want to modify the file yourself, you will need to wait until this pull request to be merged into Transformers library.

\n", + "system": "" + }, + { + "instruction": "Pytorch MNIST autoencoder to learn 10-digit classification", + "input": "", + "output": "

Autoencoders are generally not used as classifiers. They learn how to encode a given image into a short vector and reconstruct the same image from the encoded vector. It is a way of compressing an image into a short vector:

\n

\"Autoencoder\"

\n

Since you want to train autoencoder with classification capabilities, we need to make some changes to model. First of all, there will be two different losses:

\n
1. MSE loss: the current autoencoder reconstruction loss. This will force the network to output an image as close as possible to the given image, reconstructed from the compressed representation.
2. Classification loss: classic cross entropy should do the trick. This loss will take the compressed representation (C dimensional) and the target labels to calculate the negative log likelihood loss. It will force the encoder to output a compressed representation that aligns well with the target class.
\n

I've done a couple of changes to your code to get the combined model working. Firstly, let's see the code:

\n
 import torch\n import torchvision as tv\n import torchvision.transforms as transforms\n import torch.nn as nn\n import torch.nn.functional as F\n from torch.autograd import Variable\n from torchvision.utils import save_image\n\n num_epochs = 10\n batch_size = 64\n transform = transforms.Compose([\n     transforms.ToTensor(),\n     transforms.Normalize((0.1307,), (0.3081,))\n ])     \n \n trainset = tv.datasets.MNIST(root='./data',  train=True, download=True, transform=transform)\n testset  = tv.datasets.MNIST(root='./data',  train=False, download=True, transform=transform)\n dataloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=4)\n testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=True, num_workers=4)\n device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n \n class Autoencoderv3(nn.Module):\n     def __init__(self):\n         super(Autoencoderv3,self).__init__()\n         self.encoder = nn.Sequential(\n             nn.Conv2d(1, 4, kernel_size=5),\n             nn.Dropout2d(p=0.1),\n             nn.ReLU(True),\n             nn.Conv2d(4, 8, kernel_size=5),\n             nn.Dropout2d(p=0.1),\n             nn.ReLU(True),\n             nn.Flatten(),\n             nn.Linear(3200, 10)\n             )\n         self.softmax = nn.Softmax(dim=1)\n         self.decoder = nn.Sequential(\n             nn.Linear(10, 400),\n             nn.ReLU(True),\n             nn.Unflatten(1, (1, 20, 20)),\n             nn.Dropout2d(p=0.1),\n             nn.ConvTranspose2d(1, 10, kernel_size=5),\n             nn.ReLU(True),\n             nn.Dropout2d(p=0.1),\n             nn.ConvTranspose2d(10, 1, kernel_size=5)\n             )\n         \n     def forward(self, x):\n         out_en = self.encoder(x)\n         out = self.softmax(out_en)\n         out = self.decoder(out)\n         return out, out_en\n \n model = Autoencoderv3().to(device)\n distance   = nn.MSELoss()\n class_loss = 
nn.CrossEntropyLoss()\n \n optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)\n \n mse_multp = 0.5\n cls_multp = 0.5\n \n model.train()\n \n for epoch in range(num_epochs):\n     total_mseloss = 0.0\n     total_clsloss = 0.0\n     for ind, data in enumerate(dataloader):\n         img, labels = data[0].to(device), data[1].to(device) \n         output, output_en = model(img)\n         loss_mse = distance(output, img)\n         loss_cls = class_loss(output_en, labels)\n         loss = (mse_multp * loss_mse) + (cls_multp * loss_cls)  # Combine two losses together\n         optimizer.zero_grad()\n         loss.backward()\n         optimizer.step()\n         # Track this epoch's loss\n         total_mseloss += loss_mse.item()\n         total_clsloss += loss_cls.item()\n \n     # Check accuracy on test set after each epoch:\n     model.eval()   # Turn off dropout in evaluation mode\n     acc = 0.0\n     total_samples = 0\n     for data in testloader:\n         # We only care about the 10 dimensional encoder output for classification\n         img, labels = data[0].to(device), data[1].to(device) \n         _, output_en = model(img)   \n         # output_en contains 10 values for each input, apply softmax to calculate class probabilities\n         prob = nn.functional.softmax(output_en, dim = 1)\n         pred = torch.max(prob, dim=1)[1].detach().cpu().numpy() # Max prob assigned to class \n         acc += (pred == labels.cpu().numpy()).sum()\n         total_samples += labels.shape[0]\n     model.train()   # Enables dropout back again\n     print('epoch [{}/{}], loss_mse: {:.4f}  loss_cls: {:.4f}  Acc on test: {:.4f}'.format(epoch+1, num_epochs, total_mseloss / len(dataloader), total_clsloss / len(dataloader), acc / total_samples))\n   \n
\n

This code should now train the model both as a classifier and a generative autoencoder. In general though, this type of approach can be a bit tricky to get the model training. In this case, MNIST data is simple enough to get those two complementary losses train together. In more complex cases like Generative Adversarial Networks (GAN), they apply model training switching, freezing one model etc. to get whole model trained. This autoencoder model trains easily on MNIST without doing those types of tricks:

\n
 epoch [1/10], loss_mse: 0.8928  loss_cls: 0.4627  Acc on test: 0.9463\n epoch [2/10], loss_mse: 0.8287  loss_cls: 0.2105  Acc on test: 0.9639\n epoch [3/10], loss_mse: 0.7803  loss_cls: 0.1574  Acc on test: 0.9737\n epoch [4/10], loss_mse: 0.7513  loss_cls: 0.1290  Acc on test: 0.9764\n epoch [5/10], loss_mse: 0.7298  loss_cls: 0.1117  Acc on test: 0.9762\n epoch [6/10], loss_mse: 0.7110  loss_cls: 0.1017  Acc on test: 0.9801\n epoch [7/10], loss_mse: 0.6962  loss_cls: 0.0920  Acc on test: 0.9794\n epoch [8/10], loss_mse: 0.6824  loss_cls: 0.0859  Acc on test: 0.9806\n epoch [9/10], loss_mse: 0.6733  loss_cls: 0.0797  Acc on test: 0.9814\n epoch [10/10], loss_mse: 0.6671  loss_cls: 0.0764  Acc on test: 0.9813\n
\n

As you can see, both the MSE loss and the classification loss are decreasing, and accuracy on the test set is increasing. In the code, the MSE loss and the classification loss are added together. This means the respective gradients calculated from each loss fight against each other to pull the network in their direction. I've added loss multipliers to control the contribution from each loss. If MSE has a higher multiplier, the network will get more gradients from the MSE loss, meaning it will better learn reconstruction; if the CLS loss has a higher multiplier, the network will get better classification accuracies. You can play with those multipliers to see how the end result changes, but MNIST is a very easy dataset, so the differences might be hard to see. Currently, it doesn't do too badly at reconstructing inputs:

\n
 import numpy as np\n import matplotlib.pyplot as plt\n \n model.eval()\n img, labels = list(dataloader)[0]\n img = img.to(device)\n output, output_en = model(img)\n inp = img[0:10, 0, :, :].squeeze().detach().cpu()\n out = output[0:10, 0, :, :].squeeze().detach().cpu()\n \n # Just some trick to concatenate first ten images next to each other\n inp = inp.permute(1,0,2).reshape(28, -1).numpy()\n out = out.permute(1,0,2).reshape(28, -1).numpy()\n combined = np.vstack([inp, out])\n \n plt.imshow(combined)\n plt.show()\n
\n

\"Reconstrunction\"

\n

I am sure that with more training and fine-tuning of the loss multipliers, you can get better results.

\n

Lastly, the decoder receives the softmax of the encoder output. This means the decoder tries to create the output image from the 0 - 1 class probabilities of the input. So if the softmax probability vector is 0.98 at location 0 and close to zero elsewhere, the decoder should output an image that looks like a 0. Here I give the network inputs to create reconstructions of 0 to 9:

\n
 test_arr = np.zeros([10, 10], dtype = np.float32)\n ind = np.arange(0, 10)\n test_arr[ind, ind] = 1.0\n \n model.eval()\n img = torch.from_numpy(test_arr).to(device)\n out = model.decoder(img)\n out = out[0:10, 0, :, :].squeeze().detach().cpu()\n out = out.permute(1,0,2).reshape(28, -1).numpy()\n plt.imshow(out)\n plt.show()\n
\n

\"0

\n

I've also made a few small changes to the code, such as printing the epoch average loss, which don't really change the training logic; you can see those changes in the code, and let me know if anything looks weird.

\n", + "system": "" + }, + { + "instruction": "Pytorch Conv1d on simple 1d signal", + "input": "", + "output": "

First, you should be aware that the term "convolution" used in basically all literature related to convolutional neural networks (CNNs) actually corresponds to the correlation operation not the convolution operation.

\n

The only difference (for real-valued inputs) between correlation and convolution is that in convolution the kernel is flipped/mirrored before sliding it across the signal, whereas in correlation no such flipping occurs.

\n

There are also some extra operations that convolution layers in CNNs perform that are not part of the definition of convolution. They apply an offset (a.k.a. bias), they operate on mini-batches, and they map multi-channel inputs to multi-channel outputs.

\n

Therefore, in order to recreate a convolution operation using a convolution layer we should (i) disable bias, (ii) flip the kernel, and (iii) set batch-size, input channels, and output channels to one.

\n

For example, a PyTorch implementation of the convolution operation using nn.Conv1d looks like this:

\n
import torch\nfrom torch import nn\n\nx = torch.tensor([4, 1, 2, 5], dtype=torch.float)\nk = torch.tensor([1, 2, -1], dtype=torch.float)\n\n# Define these constants to differentiate the various usages of "1".\nBATCH_SIZE, IN_CH, OUT_CH = 1, 1, 1\n\n# Pad with len(k)-1 zeros to ensure all non-zero outputs are computed.\nh = nn.Conv1d(IN_CH, OUT_CH, kernel_size=len(k), padding=len(k) - 1, bias=False)\n\n# Copy flipped k into h.weight.\n# h.weight is shape (OUT_CH, IN_CH, kernel_size), reshape k accordingly.\n# Perform copy inside no_grad context to avoid autograd issues.\nwith torch.no_grad():\n    h.weight.copy_(torch.flip(k, dims=[0]).reshape(OUT_CH, IN_CH, -1))\n\n# Input shape to h is assumed to be (BATCH_SIZE, IN_CH, SIGNAL_LENGTH), reshape x accordingly.\n# Output shape of h is (BATCH_SIZE, OUT_CH, OUTPUT_LENGTH), reshape output to 1D signal.\ny = h(x.reshape(BATCH_SIZE, IN_CH, -1)).reshape(-1)\n
\n

which results in

\n
>>> print(y)\ntensor([ 4.,  9.,  0.,  8.,  8., -5.], grad_fn=<ViewBackward>)\n
\n", + "system": "" + }, + { + "instruction": "Wild discrepancies between training DeepLab ResNet V3 on Google Colab versus on local machine", + "input": "", + "output": "

I fixed this problem by unzipping the training data to Google Drive and reading the files from there instead of using the Colab command to unzip the folder to my workspace directly. I have absolutely no idea why this was causing the problem; a quick visual inspection at the images and their corresponding tensors looks fine, but I can't go through each of the 6,000 or so images to check every one.\nIf anyone knows why this was causing a problem, please let me know!

\n", + "system": "" + }, + { + "instruction": "In training mode, targets should be passed", + "input": "", + "output": "

In your example you don't explain how you load your model, but I think you have forgotten model.eval(). This function is a kind of switch for some specific layers/parts of the model that behave differently during training and inference (evaluating) time.

\n

To make inferences, you can load your model like this :

\n
model.load_state_dict(torch.load("/content/gdrive/MyDrive/Models/model_Resnet.pth"))\nmodel.eval()\n
\n", + "system": "" + }, + { + "instruction": "How can I limit the range of parameters in pytorch?", + "input": "", + "output": "

A trick used in some generative adversarial networks (some of which require the parameters of the discriminator to be within a certain range) is to clamp the values after every gradient update. For example:

\n
model = YourPyTorchModule()\n\nfor _ in range(epochs):\n    loss = ...\n    optimizer.step()\n    for p in model.parameters():\n        p.data.clamp_(-1.0, 1.0)\n
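A runnable version of the same idea (the tiny model and data below are made up, just to demonstrate clamping after each update):

```python
import torch
from torch import nn

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 4), torch.randn(8, 1)

for _ in range(5):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
    # Clamp every parameter back into [-1, 1] after the gradient step.
    with torch.no_grad():
        for p in model.parameters():
            p.clamp_(-1.0, 1.0)

max_abs = max(p.abs().max().item() for p in model.parameters())
print(max_abs <= 1.0)  # True
```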
\n", + "system": "" + }, + { + "instruction": "PyTorch - Where are kernels launched?", + "input": "", + "output": "

To get a helpful stack trace, you would most likely need to build pytorch with debug symbols (build instructions are here). I'm not sure if there are any debug builds available to download. But a stack trace might not make very much sense without some background, so here's a general outline of where things are defined in the codebase:

\n

Most operators in PyTorch are implemented in the codebase as a C++ at::native namespace function within pytorch/aten/src/ATen/native. When PyTorch is built, codegen scripts automatically generate the Python functions and the Python-to-C++ bindings for the operators defined in native_functions.yaml, and the generated code is not checked into the repo (so you would have to either read the scripts or build PyTorch yourself if you want to see what's going on in codegen).

\n

An at::native operator will usually call a device dispatch function for that operator, which is often suffixed with _stub. The dispatch function checks what device (cpu, cuda, etc) the arguments are on, and then runs a device-specific implementation. From there, another dispatch happens, which calls a datatype-specific implementation.

\n

To go through an example, the add.out operator (which is called when you do torch.add(..., out=...) in Python) is declared here. Codegen generates everything needed to bind the Python function to at::native::add_out, which is defined here. Notice that that function calls add_stub, which is the device dispatch function.

\n

A CPU implementation for add_stub is registered here and implemented here as add_kernel. A CUDA implementation is registered here and implemented here as add_kernel_cuda. Notice that both of these use a TensorIteratorBase object. Long story short, this object will iterate through each pair of elements in the tensor inputs that should be added together.

\n

There is another dispatch within add_kernel and add_kernel_cuda which chooses a separate implementation based on the data type of the arguments. The separate data type implementations are generated from a shared template function. You can see that the CPU function also has a different implementation for a vectorized and a non-vectorized operation, while the CUDA implementation just has the one here.

\n

If you want to see a full stack trace, you could run a script with gdb --args python <script name>, and create a break point for the specific kernel you want. Again, debug symbols would be needed to make sense of it.

\n", + "system": "" + }, + { + "instruction": "Changing Dropout value during training", + "input": "", + "output": "

To change the dropout probability during training, you should use the functional version, i.e. torch.nn.functional.dropout.

\n

The input arguments to the functional version of dropout are

- input: the input tensor
- p: the probability of an element to be zeroed (default 0.5)
- training: apply dropout only if True (default True)
- inplace: if set to True, performs the operation in place (default False)

Thus, you can alter the probability of the dropout in your forward method, according to your needs.

\n

For example, you can do in your forward method:

\n
\ndef forward(self, x):\n\n    ...\n\n    # apply some layers to the input\n    h = self.my_layers(x)\n\n    # set the value of p\n    p = self.get_value_for_p()\n\n    # apply dropout with new p\n    h = torch.nn.functional.dropout(h, p, self.training)\n\n    ...\n
\n

More on the functional version of dropout, here: https://pytorch.org/docs/stable/nn.functional.html#dropout-functions
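A quick runnable check of the behaviour (surviving activations are rescaled by 1/(1-p) during training):

```python
import torch
import torch.nn.functional as F

x = torch.ones(1000)
p = 0.3
out = F.dropout(x, p=p, training=True)

# Roughly a fraction p of the elements are zeroed;
# the surviving ones are scaled from 1 to 1/(1-p).
nonzero = out[out != 0]
print(nonzero.unique())  # all surviving values equal 1/0.7 ~ 1.4286
```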

\n", + "system": "" + }, + { + "instruction": "PyTorch DataLoader using Mongo DB", + "input": "", + "output": "

Introduction

\n

This one is a little open-ended, but let's try, also please correct me if I'm wrong somewhere.

\n
\n

So far, I have been exporting the data to the file system first, with subfolders named as the classes of the documents.

\n
\n

IMO this isn't sensible because:

\n\n
\n

Am I right? Would it make sense to directly connect to the MongoDB?

\n
\n

Given above, probably yes (especially when it comes to clear and portable implementation)

\n
\n

Or are there reasons not to do it (e.g. DBs generally being to slow etc.)?

\n
\n

AFAIK DB shouldn't be slower in this case as it will cache access to it, but I'm no db expert unfortunately. Many tricks for faster access are implemented out-of-the-box for databases.

\n
\n

can one prefetch the data somehow?

\n
\n

Yes, if you just want to get data, you could load a larger chunk of data (say 1024 records) in one go and return batches of data from it (say batch_size=128).

\n

Implementation

\n
\n

How would one implement a PyTorch DataLoader? I have found only very few code snippets online ([1] and [2]) which makes me doubt with my approach.

\n
\n

I'm not sure why you would want to do that. What you should go for is torch.utils.data.Dataset, as shown in the examples you've listed.

\n

I would start with a simple, non-optimized approach similar to the one here, so:

\n\n

Given that, something along those lines is what I'd do:

\n
class DatabaseDataset(torch.utils.data.Dataset):\n    def __init__(self, query, batch_size, path: str, database: str):\n        self.batch_size = batch_size\n\n        client = pymongo.MongoClient(path)\n        self.db = client[database]\n        self.query = query\n        # Or non-approximate method, if the approximate method\n        # returns smaller number of items you should be fine\n        self.length = self.db.estimated_document_count()\n\n        self.cursor = None\n\n    def __enter__(self):\n        # Ensure that this find returns the same order of query every time\n        # If not, you might get duplicated data\n        # It is rather unlikely (depending on batch size), shouldn't be a problem\n        # for 20 million samples anyway\n        self.cursor = self.db.find(self.query)\n        return self\n\n    def shuffle(self):\n        # Find a way to shuffle data so it is returned in different order\n        # If that happens out of the box you might be fine without it actually\n        pass\n\n    def __exit__(self, *_, **__):\n        # Or anything else how to close the connection\n        self.cursor.close()\n\n    def __len__(self):\n        # Number of batches, based on the document count obtained in __init__\n        return (self.length + self.batch_size - 1) // self.batch_size\n\n    def __getitem__(self, index):\n        # Read takes long, hence if you can load a batch of documents it should speed things up\n        examples = self.cursor[index * self.batch_size : (index + 1) * self.batch_size]\n        # Do something with this data\n        ...\n        # Return the whole batch\n        return data, labels\n
\n

Now batching is taken care of by DatabaseDataset, hence torch.utils.data.DataLoader can have batch_size=1. You might need to squeeze the additional dimension.

\n

As MongoDB uses locks (which is no surprise, but see here) num_workers>0 shouldn't be a problem.

\n

Possible usage (schematically):

\n
with DatabaseDataset(...) as e:\n    dataloader = torch.utils.data.DataLoader(e, batch_size=1)\n    for epoch in epochs:\n        for batch in dataloader:\n            # And all the stuff\n            ...\n        e.shuffle() # after each epoch\n
\n

Remember about the shuffling implementation in such a case! (Shuffling can also be done inside the context manager, and you might want to close the connection manually, or something along those lines.)

\n", + "system": "" + }, + { + "instruction": "Defining a Torch Class in R package "torch"", + "input": "", + "output": "

You can do that directly using R's torch package which seems quite comprehensive at least for the basic tasks.

\n

Neural networks

\n

Here is an example of how to create an nn.Sequential model:

\n
library(torch)\n\nmodel <- nn_sequential(\n    nn_linear(D_in, H),\n    nn_relu(),\n    nn_linear(H, D_out)\n)\n
\n

Below is a custom nn_module (a.k.a. torch.nn.Module) which is a simple dense (torch.nn.Linear) layer (source):

\n
library(torch)\n\ndense <- nn_module(\n  classname = "dense",\n  # the initialize function runs whenever we instantiate the model\n  initialize = function(in_features, out_features) {\n    \n    # just for you to see when this function is called\n    cat("Calling initialize!") \n    \n    # we use nn_parameter to indicate that those tensors are special\n    # and should be treated as parameters by `nn_module`.\n    self$w <- nn_parameter(torch_randn(in_features, out_features))\n    self$b <- nn_parameter(torch_zeros(out_features))\n    \n  },\n  # this function is called whenever we call our model on input.\n  forward = function(x) {\n    cat("Calling forward!")\n    torch_mm(x, self$w) + self$b\n  }\n)\n\nmodel <- dense(3, 1)\n
\n

Another example, using torch.nn.Linear layers to create a neural network this time (source):

\n
two_layer_net <- nn_module(\n   "two_layer_net",\n   initialize = function(D_in, H, D_out) {\n      self$linear1 <- nn_linear(D_in, H)\n      self$linear2 <- nn_linear(H, D_out)\n   },\n   forward = function(x) {\n      x %>% \n         self$linear1() %>% \n         nnf_relu() %>% \n         self$linear2()\n   }\n)\n
\n

Also there are other resources like here (using flow control and weight sharing).

\n

Other

\n

Looking at the reference it seems most of the layers are already provided (didn't notice transformer layers at a quick glance, but this is minor).

\n

As far as I can tell, the basic building blocks for neural networks, their training etc. are in place (there is even JIT support, so sharing models between languages should be possible).

\n", + "system": "" + }, + { + "instruction": "Equivalent of embeddings_regularizer in pyTorch", + "input": "", + "output": "

There is no direct equivalent for PyTorch as PyTorch only supports L2 regularization on parameters via torch.optim optimizers.

\n

For example, torch.optim.SGD has a weight_decay parameter. If you set it and optimize your nn.Embedding, it will be L2-regularized with the specified strength (you can apply weight_decay to the nn.Embedding parameters only; see the per-parameter options of optimizers).

\n
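As a minimal sketch of those per-parameter options (the model, sizes, and the 1e-4 strength are made-up illustration values, not from the question), weight decay can be restricted to the embedding like this:

```python
import torch

# Toy model: an embedding plus a linear head (sizes are arbitrary)
embedding = torch.nn.Embedding(10, 4)
head = torch.nn.Linear(4, 2)

# Apply L2 (weight_decay) only to the embedding parameters
optimizer = torch.optim.SGD(
    [
        {"params": embedding.parameters(), "weight_decay": 1e-4},
        {"params": head.parameters(), "weight_decay": 0.0},
    ],
    lr=0.1,
)

# One dummy step to show the setup runs end to end
out = head(embedding(torch.tensor([1, 2, 3])))
out.sum().backward()
optimizer.step()
```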

If you wish to use L1 regularization you would have to:

\n\n

Coding on your own

\n

Usually we add L1 regularization to the loss and backpropagate, but this is an inefficient approach.

\n

It is better to populate the gradients of our parameters (there are some edge cases though) with the derivative of the regularization term (for L1, that is the sign of the parameter value). Something along those lines:

\n
import torch\n\nembedding = torch.nn.Embedding(150, 100)\n\n# Do this in your optimization loop, BEFORE computing task gradients\nl1_strength = 1e-4  # regularization strength, pick your own\nwith torch.no_grad():\n    embedding.weight.grad = l1_strength * torch.sign(embedding.weight)\n\n# Do the rest of optimization AND clear gradients!\n...\n
\n

Though it is a little harder to make it work in general (stuff like batch accumulation etc.) and pretty unclear IMO. You could apply L2 on top of that also.

\n

torchlayers third party library

\n

Disclaimer: I'm the author of this project

\n

You can install torchlayers-nightly and get per-layer L1 and L2 regularization.

\n

Install via pip:

\n
pip install -U torchlayers-nightly\n
\n

In your code you could do:

\n
import torchlayers as tl\nimport torch\n\nembedding = torch.nn.Embedding(150, 100)\nregularized_embedding = tl.L1(embedding)\n\n# Do the rest as per usual\n
\n

This feature is experimental for now, but should work and I've used it with success previously.

\n

Also, you should be able to use tl.L2 the same way, see docstrings about usage of this particular layer.

\n

For more info check github repository and read documentation here.

\n", + "system": "" + }, + { + "instruction": "How to use fairseq interactive.py non-interactively?", + "input": "", + "output": "

fairseq-interactive can read lines from a file with the --input parameter, and it outputs translations to standard output.

\n

So let's say I have this input text file source.txt (where every sentence to translate is on a separate line):

\n
Hello world!\nMy name is John\n
\n

You can run:

\n
fairseq-interactive --input=source.txt [all-your-fairseq-parameters] > target.txt\n
\n

Where > target.txt means "put in the target.txt file all (standard) output generated by fairseq-interactive". The file will be created if it doesn't exist yet.

\n

With an English to French model it would generate a file target.txt that looks something like this (actual output may vary depending on your model, configuration and Fairseq version):

\n
S-0     Hello world!\nW-0     0.080   seconds\nH-0     -0.43813419342041016    Bonj@@ our le monde !\nD-0     -0.43813419342041016    Bonjour le monde !\nP-0     -0.1532 -1.7157 -0.0805 -0.0838 -0.1575\nS-1     My name is John\nW-1     0.080   seconds\nH-1     -0.3272092938423157     Je m' appelle John .\nD-1     -0.3272092938423157     Je m'appelle John.\nP-1     -0.3580 -0.2207 -0.0398 -0.1649 -1.0216 -0.1583\n
\n

To keep only the translations (lines starting with D-), you would have to filter the content of this file. You could use this command for example:

\n
grep -P "D-[0-9]+" target.txt | cut -f3 > only_translations.txt\n
\n

but you can merge all commands in one line:

\n
fairseq-interactive --input=source.txt [all-your-fairseq-parameters] | grep -P "D-[0-9]+" | cut -f3 > target.txt\n
\n

(Actual command will depend on the actual structure of target.txt.)

\n

Finally, know that you can use --input=- to read input from standard input.

\n", + "system": "" + }, + { + "instruction": "`*** RuntimeError: mat1 dim 1 must match mat2 dim 0` whenever I run model(images)", + "input": "", + "output": "

The output from self.conv(x) is of shape torch.Size([32, 64, 2, 2]): a batch of 32 samples with 64*2*2 = 256 features each. The fully connected layer expects a flattened input, i.e. you need to flatten it before passing it to the fully connected layer in the forward function.

\n

i.e.

\n
class Network(nn.Module):\n    ...\n    def forward(self, x):\n        ...\n        conv_out = self.conv(x)\n        print(conv_out.shape)\n        conv_out = conv_out.view(-1, 64*2*2)\n        print(conv_out.shape)\n        x = self.fc(conv_out)\n        return x\n
\n

output

\n
torch.Size([32, 64, 2, 2])\ntorch.Size([32, 256])\n
\n
\n

EDIT:

\n

I think you're using the self._get_conv_out function incorrectly.

\n

It should be

\n
    def _get_conv_out(self, shape):\n        output = self.conv(torch.zeros(1, *shape)) # not (32, *size)\n        return int(numpy.prod(output.size()))\n
\n

then, in the forward pass, you can use

\n
        conv_out = self.conv(x)\n        # flatten the output of conv layers\n        conv_out = conv_out.view(conv_out.size(0), -1)\n        x = self.fc(conv_out)\n
\n

For an input of (32, 1, 110, 110), the output should be torch.Size([32, 2]).

\n", + "system": "" + }, + { + "instruction": "Finding TensorFlow equivalent of Pytorch GRU feature", + "input": "", + "output": "

The hidden size is 64 in your tensorflow example. To get the equivalent, you should use

\n
import tensorflow as tf\ntf.keras.layers.GRU(64*2, return_state=True)\n
\n

This is because the keras layer does not require you to specify your input size (64 in this example); it is decided when you build or run your model for the first time.

\n", + "system": "" + }, + { + "instruction": "Reproducibility of the results for GNN using DGL grahSAGE", + "input": "", + "output": "

I saw similar issues online and the solution was to use:

\n
torch.use_deterministic_algorithms(True)  # torch.set_deterministic(True) in older PyTorch versions\n
\n

The reason may be the scatter operations: they use atomic operations under the hood, so the ordering of summation may vary every time you run the code.

\n

Also, set this environment variable in the terminal before calling your script, as follows:

\n
CUBLAS_WORKSPACE_CONFIG=:16:8 python file_name.py\n
\n

reference:\nhttps://pytorch.org/docs/stable/notes/randomness.html\nhttps://github.com/pyg-team/pytorch_geometric/issues/859

\n", + "system": "" + }, + { + "instruction": "Understanding behavior of index_put in PyTorch", + "input": "", + "output": "

I copied your examples here with argument names inserted, fixed the brackets, and corrected the outputs (yours were swapped):

\n
a.index_put(indices=[torch.tensor([1, 0]), torch.tensor([1, 1])], values=torch.tensor(1.))\n\ntensor([[0., 1., 0.],\n        [0., 1., 0.]])\n\na.index_put(indices=[torch.tensor([0, 0]), torch.tensor([0, 1])], values = torch.tensor(1.))\n\ntensor([[1., 1., 0.],\n        [0., 0., 0.]])\n
\n

What this method does is insert value(s) into the locations of the original a tensor indicated by indices. indices is a list of two tensors: the row coordinates and the column coordinates of the insertion positions. values may be a single value or a 1d tensor.

\n

to obtain the desired output use:

\n
a.index_put(indices=[torch.tensor([0,0,1]), torch.tensor([1, 2, 1])], values=torch.tensor(1.))\n\ntensor([[0., 1., 1.],\n        [0., 1., 0.]])\n
\n

moreover, you can pass multiple values in values argument to insert them into the indicated positions:

\n
a.index_put(indices=[torch.tensor([0,0,1]), torch.tensor([1, 2, 1])], values=torch.tensor([1., 2., 3.]))\n\ntensor([[0., 1., 2.],\n        [0., 3., 0.]])\n
\n", + "system": "" + }, + { + "instruction": "PyTorch: Inference on a single very large image using multiple GPUs?", + "input": "", + "output": "

You are looking for model parallel mode of work.
\nBasically, you can assign different parts of your model to different GPUs and then you should take care of the "bookkeeping".
\nThis solution is very model-specific and task-specific therefore, there are no "generic" wrappers for it (as opposed to data parallel).

\n

For example:

\n
class MyModelParallelNetwork(nn.Module):\n  def __init__(self, ...):\n    # define the network\n    self.part_one = ... # some nn.Module\n    self.part_two = ... # additional nn.Module \n    self.part_three = ... \n    self.part_four = ...\n\n    # important part - "send" the different parts to different GPUs\n    self.part_one.to(torch.device('cuda:0'))\n    self.part_two.to(torch.device('cuda:1'))\n    self.part_three.to(torch.device('cuda:2'))\n    self.part_four.to(torch.device('cuda:3'))\n\n  def forward(self, x):\n    # forward through model parts and GPUs:\n    p1 = self.part_one(x.to(torch.device('cuda:0')))\n    p2 = self.part_two(p1.to(torch.device('cuda:1')))\n    p3 = self.part_three(p2.to(torch.device('cuda:2')))\n    y = self.part_four(p3.to(torch.device('cuda:3')))\n    return y  # result is on cuda:3 device\n
\n", + "system": "" + }, + { + "instruction": "Projector tab is blank in PyTorch Tensorboard", + "input": "", + "output": "

After running into the same error I tried running a sample script for embeddings and suddenly it worked.

\n

I narrowed it down to: adding any image to Tensorboard before or after adding an embedding makes it work. Like so:

\n

writer.add_image('image', np.ones((3,3,3)), 0)

\n

Not sure what is causing this.

\n", + "system": "" + }, + { + "instruction": "Efficient way to compute Jacobian x Jacobian.T", + "input": "", + "output": "

In JAX, you can compute a full jacobian matrix using jax.jacfwd or jax.jacrev, or you can compute a jacobian operator and its transpose using jax.jvp and jax.vjp.

\n

So, for example, say you had a function R\u1d3a \u2192 R\u1d39 that looks something like this:

\n
import jax.numpy as jnp\nimport numpy as np\n\nnp.random.seed(1701)\nN, M = 10000, 5\nf_mat = np.array(np.random.rand(M, N))\ndef f(x):\n  return jnp.sqrt(f_mat @ x / N)\n
\n

Given two vectors x1 and x2, you can evaluate the Jacobian matrix at each using jax.jacfwd

\n
import jax\nx1 = np.array(np.random.rand(N))\nx2 = np.array(np.random.rand(N))\nJ1 = jax.jacfwd(f)(x1)\nJ2 = jax.jacfwd(f)(x2)\nprint(J1 @ J2.T)\n# [[3.3123782e-05 2.5001222e-05 2.4946943e-05 2.5180108e-05 2.4940484e-05]\n#  [2.5084497e-05 3.3233835e-05 2.4956826e-05 2.5108084e-05 2.5048916e-05]\n#  [2.4969209e-05 2.4896170e-05 3.3232871e-05 2.5006309e-05 2.4947023e-05]\n#  [2.5102483e-05 2.4947576e-05 2.4906987e-05 3.3327218e-05 2.4958186e-05]\n#  [2.4981882e-05 2.5007204e-05 2.4966144e-05 2.5076926e-05 3.3595043e-05]]\n
\n

But, as you note, along the way to computing this 5x5 result, we instantiate two 5x10,000 matrices. How might we get around this?

\n

The answer is in jax.jvp and jax.vjp. These have somewhat unintuitive call signatures for the purposes of your question, as they are designed primarily for use in forward-mode and reverse-mode automatic differentiation. But broadly, you can think of them as a way to compute J @ v and J.T @ v for a vector v, without having to actually compute J explicitly.

\n

For example, you can use jax.jvp to compute the effect of J1 operating on a vector, without actually computing J1:

\n
J1_op = lambda v: jax.jvp(f, (x1,), (v,))[1]\n\nvN = np.random.rand(N)\nnp.allclose(J1 @ vN, J1_op(vN))\n# True\n
\n

Similarly, you can use jax.vjp to compute the effect of J2.T operating on a vector, without actually computing J2:

\n
J2T_op = lambda v: jax.vjp(f, x2)[1](v)[0]\n\nvM = np.random.rand(M)\nnp.allclose(J2.T @ vM, J2T_op(vM))\n# True\n
\n

Putting these together and operating on an identity matrix gives you the full jacobian matrix product that you're after:

\n
def direct(f, x1, x2):\n  J1 = jax.jacfwd(f)(x1)\n  J2 = jax.jacfwd(f)(x2)\n  return J1 @ J2.T\n\ndef indirect(f, x1, x2, M):\n  J1J2T_op = lambda v: jax.jvp(f, (x1,), jax.vjp(f, x2)[1](v))[1]\n  return jax.vmap(J1J2T_op)(jnp.eye(M)).T\n\nnp.allclose(direct(f, x1, x2), indirect(f, x1, x2, M))\n# True\n
\n

Along with the memory savings, this indirect method is also a fair bit faster than the direct method, depending on the sizes of the jacobians involved:

\n
%time direct(f, x1, x2)\n# CPU times: user 1.43 s, sys: 14.9 ms, total: 1.44 s\n# Wall time: 886 ms\n%time indirect(f, x1, x2, M)\n# CPU times: user 311 ms, sys: 0 ns, total: 311 ms\n# Wall time: 158 ms\n
\n", + "system": "" + }, + { + "instruction": "PyTorch - RuntimeError: Error(s) in loading state_dict for VGG:", + "input": "", + "output": "

The problem is that what is being saved is not the same as what is expected to be loaded. The code is trying to load only a state_dict; it is saving quite a bit more than that - looks like a state_dict inside another dict with additional info. The load method doesn't have any logic to look inside the dict.

\n

This should work:

\n
import torch, torchvision.models\nmodel = torchvision.models.vgg16()\npath = 'test.pth'\ntorch.save(model.state_dict(), path) # nothing else here\nmodel.load_state_dict(torch.load(path))\n
\n", + "system": "" + }, + { + "instruction": "What does unsqueeze do in pytorch?", + "input": "", + "output": "

unsqueeze turns an n-dimensional tensor into an (n+1)-dimensional one, by adding an extra dimension of size one. However, since it is ambiguous along which axis the new dimension should lie (i.e. in which direction the tensor should be "unsqueezed"), this needs to be specified by the dim argument.

\n

Hence the resulting unsqueezed tensors have the same information, but the indices used to access them are different.

\n
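A quick sketch of this: the same four values end up with different shapes, and different indices reach the same element.

```python
import torch

x = torch.tensor([1, 2, 3, 4])   # shape: torch.Size([4])

# Unsqueeze along different axes
print(x.unsqueeze(0).shape)      # torch.Size([1, 4]) - new leading axis
print(x.unsqueeze(1).shape)      # torch.Size([4, 1]) - new trailing axis

# The data is unchanged, only the indexing differs:
print(x.unsqueeze(0)[0, 2])      # tensor(3)
print(x.unsqueeze(1)[2, 0])      # tensor(3)
```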

Here is a visual representation of what squeeze/unsqueeze do for an effectively 2d matrix, where it is going from a 2d tensor to a 3d one, and hence there are 3 choices for the new dimension's position:

\n

\"enter

\n", + "system": "" + }, + { + "instruction": "Why does 'dimension' mean several different things in the machine-learning world?", + "input": "", + "output": "

In the context of word embeddings in neural networks, dimensionality reduction, and many other machine learning areas, it is indeed correct to call the vector (which is, typically, a 1D array or tensor) n-dimensional, where n is usually greater than 2. This is because we usually work in Euclidean space, where a (data) point in a certain dimensional (Euclidean) space is represented as an n-tuple of real numbers (i.e. real n-space &#8477;n).

\n\n

Below is an example (see ref below) of a (data) point in a 3D (Euclidean) space. To represent any point in this space, say d1, we need a tuple of three real numbers (x1, y1, z1).

\n\n

\"a

\n\n

Now, your confusion arises as to why this point d1 is called 3 dimensional instead of a 1 dimensional array. The reason is that it lies, or lives, in this 3D space. The same argument extends to all points in any n-dimensional real space, as is done in the case of embeddings with 300d, 512d, 1024d vectors etc.

\n\n

However, in all nD array compute frameworks such as NumPy, PyTorch, TensorFlow etc., these are still 1D arrays, because the extent of such a vector can be described by a single number (its length).

\n\n

But, what if you have more than 1 data point? Then, you have to stack them in some (unique) way. And this is where the need for a second dimension arises. So, let's say you stack 4 of these 512d vectors vertically, then you'd end up with a 2D array/tensor of shape (4, 512). Note that here we call the array as 2D because two integer numbers are required to represent the extent/length along each axis.

\n\n
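A small sketch of that stacking, using 4-element vectors instead of 512d ones for brevity (the sizes here are made up for illustration):

```python
import numpy as np

# Three points living in R^4; each is a 1D array of length 4
points = [np.random.rand(4) for _ in range(3)]

# Stack them along a new leading axis
stacked = np.stack(points)

print(stacked.shape)  # (3, 4): two numbers now describe the extent
print(stacked.ndim)   # 2: hence a "2D array" in NumPy's terminology
```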

To understand this better, please refer my other answer on axis parameter visualization for nD arrays, the visual representation of which I will include it below.

\n\n

\"axis

\n\n
\n\n

ref: Euclidean space wiki

\n", + "system": "" + }, + { + "instruction": "Why does nn.CrossEntropyLoss throw "TypeError: iteration over a 0-d tensor" when I verify inputs to be non-0-dimensional?", + "input": "", + "output": "

You get the error because nn.CrossEntropyLoss returns just one torch.Tensor, not a pair (it doesn't return accuracy). And this tensor is 0-dimensional, i.e. a single number (unless you override the reduction argument to 'none' to get a per-element loss). So when you try to assign its value to two variables loss, accuracy, python tries to iterate over this tensor variable, hence the error message. Simply use loss = lossLayer(pred2, targetBatch).

\n", + "system": "" + }, + { + "instruction": "Setting numpy array to slice without any in-place operations", + "input": "", + "output": "
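A short sketch of the shape difference (the batch size and class count here are arbitrary):

```python
import torch

mean_loss = torch.nn.CrossEntropyLoss()                 # default reduction="mean"
per_elem = torch.nn.CrossEntropyLoss(reduction="none")  # one loss per sample

pred = torch.randn(5, 3)                # batch of 5, 3 classes
target = torch.tensor([0, 2, 1, 0, 2])

print(mean_loss(pred, target).shape)    # torch.Size([]) - a 0-d tensor
print(per_elem(pred, target).shape)     # torch.Size([5])
```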

If n_id is a fixed index array, you can get z_sparse as a matrix multiplication:

\n\n
# N, n, m = 2078,100, 64\nrow_mat = (n_id[:n] == np.arange(N)[:,None])\n\n# for pytorch tensor\n# row_mat = Tensor(n_id[:n] == np.arange(N)[:,None])\n\nz_sparse =  row_mat @ z\n
\n\n

Since row_mat is a constant array (tensor), your graph should work just fine.

\n", + "system": "" + }, + { + "instruction": "Split a multiple dimensional pytorch tensor into "n" smaller tensors", + "input": "", + "output": "

This creates overlapping tensors, which is what I wanted:

\n\n
Tensor.unfold(dimension, size, step)\n
\n", + "system": "" + }, + { + "instruction": "How to extract patches from an image in pytorch?", + "input": "", + "output": "
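For example, a 1D tensor split into size-4 windows with step 2 produces overlapping slices:

```python
import torch

x = torch.arange(8)          # tensor([0, 1, 2, 3, 4, 5, 6, 7])

# Windows of size 4 along dim 0, moving by 2 each time -> overlap of 2
windows = x.unfold(0, 4, 2)
print(windows)
# tensor([[0, 1, 2, 3],
#         [2, 3, 4, 5],
#         [4, 5, 6, 7]])
```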

The size of your x is [1, 3, height, width]. Calling x.unfold(1, size, stride) tries to create slices of size 128 from dimension 1, which has size 3, hence it is too small to create any slice.

\n\n

You don't want to create slices across dimension 1, since those are the channels of the image (RGB in this case) and they need to be kept as they are for all patches. The patches are only created across the height and width of an image.

\n\n
patches = x.unfold(2, size, stride).unfold(3, size, stride)\n
\n\n

The resulting tensor will have size [1, 3, num_vertical_slices, num_horizontal_slices, 128, 128]. You can reshape it to combine the slices to get a list of patches i.e. size of [1, 3, num_patches, 128, 128]:

\n\n
patches = patches.reshape(1, 3, -1, size, size)\n
\n", + "system": "" + }, + { + "instruction": "Why do we need state_dict = state_dict.copy()", + "input": "", + "output": "
state_dict = state_dict.copy()\n
\n

does exactly what you tell it to do: it creates a (shallow) copy of the state_dict. A state dict holds all the parameters of your model, and copying the dict makes the two dicts independent; note that a shallow copy still shares the underlying tensors. One should be careful about whether you need a copy or a deepcopy, though!

\n", + "system": "" + }, + { + "instruction": "Pytorch memory model: how does "torch.from_numpy()" work?", + "input": "", + "output": "
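A small sketch of the shallow/deep difference (the Linear layer here is an arbitrary example model): the shallow copy's tensors track in-place changes to the model, the deep copy's don't.

```python
import copy
import torch

model = torch.nn.Linear(2, 2)
with torch.no_grad():
    model.weight.fill_(1.0)          # known starting values

shallow = model.state_dict().copy()  # copies the dict, shares the tensors
deep = copy.deepcopy(model.state_dict())  # copies the tensors too

with torch.no_grad():
    model.weight.zero_()             # in-place change to the model

print(shallow["weight"].sum())       # tensor(0.) - changed with the model
print(deep["weight"].sum())          # tensor(4.) - kept the old values
```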

Yes, t and arr are different Python objects at different regions of memory (hence different id) but both point to the same memory address which contains the data (contiguous (usually) C array).

\n\n
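This shared-data-but-different-wrappers situation is easy to demonstrate:

```python
import numpy as np
import torch

arr = np.array([1.0, 2.0, 3.0])
t = torch.from_numpy(arr)

# Modifying the numpy array in place is visible through the tensor,
# because both wrappers point at the same underlying C array
arr[0] = 100.0
print(t[0])               # tensor(100., dtype=torch.float64)

# Yet the Python wrapper objects are distinct
print(id(arr) == id(t))   # False
```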

numpy operates on this region using C code bound to Python functions; the same goes for torch (but using C++). id() doesn't know anything about the memory address of the data itself, only of its \"wrappers\".

\n\n

EDIT: When you assign b = a (assuming a is np.array), both are references to the same Python wrapper (np.ndarray). In other words they are the same object with different name.

\n\n

It's just how Python's assignment works, see documentation. All of the cases below would return True as well:

\n\n
import torch\nimport numpy as np\n\ntensor = torch.tensor([1,2,3])\ntensor2 = tensor\nid(tensor) == id(tensor2)\n\narr = np.array([1, 2, 3, 4, 5])\narr2 = arr\nid(arr) == id(arr2)\n\nsome_str = \"abba\"\nother_str = some_str\nid(some_str) == id(other_str)\n\nvalue = 0\nvalue2 = value\nid(value) == id(value2)\n
\n\n

Now, when you use torch.from_numpy on np.ndarray you have two objects of different classes (torch.Tensor and original np.ndarray). As those are of different types they can't have the same id. One could see this case as analogous to the one below:

\n\n
value = 3\nstring_value = str(3)\n\nid(value) == id(string_value)\n
\n\n

Here it's intuitive both string_value and value are two different objects at different memory locations.

\n\n

EDIT 2:

\n\n

All in all concepts of Python object and underlying C array have to be separated. id() doesn't know about C bindings (how could it?), but it knows about memory addresses of Python structures (torch.Tensor, np.ndarray).

\n\n

In case of numpy and torch.tensor you can have following situations:

\n\n\n", + "system": "" + }, + { + "instruction": "Wasserstein GAN implemtation in pytorch. How to implement the loss?", + "input": "", + "output": "

I noticed some errors in the implementation of your discriminator training protocol. You call your backward functions twice with both the real and fake values loss being backpropagated at different time steps.

\n\n

Technically an implementation using this scheme is possible, but highly unreadable. There was also a mistake with your errD_real: its output is going to be positive instead of negative, as an optimal D(G(z))>0, and so you penalize it for being correct. Overall your model converges simply by predicting D(x)<0 for all inputs.

\n\n

To fix this, do not call your errD_real.backward() or your errD_fake.backward(). Simply using errD.backward() after you define errD will work perfectly fine. Otherwise, your generator seems to be correct.

\n", + "system": "" + }, + { + "instruction": "how does BatchNorm1d() method whithin the torch library work?", + "input": "", + "output": "

BatchNorm1d normalises data to 0 mean and unit variance for 2/3-dimensional data of shape (N, C) or (N, C, L), with statistics computed per channel over the (N,) or (N, L) slices; BatchNorm2d does the same thing for 4-dimensional data (N, C, H, W), with statistics computed per channel over the (N, H, W) slices.

\n\n

Which one to use depends on the dimensionality of the input data. For instance in image processing, feature maps usually have 2 spatial dimensions (N, C, H, W), so BatchNorm2d is useful there. However for some NLP tasks, when there is only the length dimension to consider, one would use BatchNorm1d. For both modules, the first parameter (num_features) is the number of features and equals dim C of the input tensor.

\n", + "system": "" + }, + { + "instruction": "Asking gpt-2 to finish sentence with huggingface transformers", + "input": "", + "output": "
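The per-channel normalization is easy to verify on a toy batch (sizes chosen arbitrarily here): in training mode, each of the C columns of the output has roughly zero mean and unit variance.

```python
import torch

bn = torch.nn.BatchNorm1d(num_features=4)   # C = 4 features
x = torch.randn(8, 4) * 5 + 3               # (N, C) batch, shifted and scaled

y = bn(x)                                   # training mode: batch statistics

print(y.mean(dim=0))                 # approximately zeros, one per channel
print(y.var(dim=0, unbiased=False))  # approximately ones, one per channel
```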

Unfortunately there is no way to do so. You can set the length parameter to a greater value and then just discard the incomplete part at the end.

\n

Even GPT3 doesn't support completing a sentence before a specific length. GPT3 does support "sequences" though: sequences force the model to stop when a certain condition is fulfilled. You can find more information about them in this article

\n", + "system": "" + }, + { + "instruction": "Using flatten in pytorch v1.0 Sequential module", + "input": "", + "output": "
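The "generate longer, then discard the incomplete part" approach is just string post-processing. As a sketch, trim_to_last_sentence below is a hypothetical helper (not part of transformers) that drops any trailing unfinished sentence:

```python
def trim_to_last_sentence(text: str) -> str:
    """Drop any trailing incomplete sentence from generated text."""
    # Position of the last sentence-ending punctuation mark, -1 if none
    last_end = max(text.rfind(c) for c in ".!?")
    if last_end == -1:
        return text  # no complete sentence found, keep as-is
    return text[: last_end + 1]

generated = "The sky is blue. The grass is green. And the sun is"
print(trim_to_last_sentence(generated))
# The sky is blue. The grass is green.
```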

Just make a new Flatten layer.

\n\n
from collections import OrderedDict\n\nclass Flatten(nn.Module):\n    def forward(self, input):\n        return input.view(input.size(0), -1)\n\nlayers = OrderedDict()\nlayers['conv1'] = nn.Conv2d(1, 5, 3)\nlayers['relu1'] = nn.ReLU()\nlayers['conv2'] = nn.Conv2d(5, 1, 3)\nlayers['relu2'] = nn.ReLU()\nlayers['flatten'] = Flatten()\nlayers['linear1'] = nn.Linear(3600, 1)\nmodel = nn.Sequential(layers).cuda()\n
\n", + "system": "" + }, + { + "instruction": "How to apply linear layer to 2D layer only in one dimension (by row or by column) - partially connected layers", + "input": "", + "output": "

The PyTorch nn.Linear module can be applied to multidimensional input; the linear layer is applied to the last dimension, so to apply it by column the solution is to swap rows and columns.

\n\n
linear_3_to_1 = nn.Linear(3, 1)\n\nx = torch.randn(1, 1, 3, 50)\nx = x.transpose(2, 3) #swap 3 and 50\nout = linear_3_to_1(x).flatten()\n
\n", + "system": "" + }, + { + "instruction": "How to embed Sequence of Sentences in RNN?", + "input": "", + "output": "

The simplest way is to use 2 kinds of LSTM.

\n

Prepare the toy dataset

\n
xi = [\n# Input features at timestep 1\n[1, 48, 91, 0],\n# Input features at timestep 2\n[20, 5, 17, 32],\n# Input features at timestep 3\n[12, 18, 0, 0],\n# Input features at timestep 4\n[0, 0, 0, 0],\n# Input features at timestep 5\n[0, 0, 0, 0]\n]\nyi = 1\n\nx = torch.tensor([xi, xi])\ny = torch.tensor([yi, yi])\n\nprint(x.shape)\n# torch.Size([2, 5, 4])\n\nprint(y.shape)\n# torch.Size([2])\n
\n

Then, x is the batch of inputs. Here batch_size = 2.

\n

Embed the input

\n
vocab_size = 1000\nembed_size = 100\nhidden_size = 200\nembed = nn.Embedding(vocab_size, embed_size)\n\n# shape [2, 5, 4, 100]\nx = embed(x)\n
\n

The first word-LSTM is to encode each sequence into a vector

\n
# convert x into a batch of sequences\n# Reshape into [10, 4, 100] (batch_size * 5 = 10)\nx = x.view(2 * 5, 4, 100)\n\nwlstm = nn.LSTM(embed_size, hidden_size, batch_first=True)\n# get the only final hidden state of each sequence\n\n_, (hn, _) = wlstm(x)\n\n# hn shape [1, 10, 200]\n\n# get the output of final layer\nhn = hn[0] # [10, 200]\n
\n

The second seq-LSTM is to encode sequences into a single vector

\n
# Reshape hn into [bs, num_seq, hidden_size]\nhn = hn.view(2, 5, 200)\n\n# Pass to another LSTM and get the final state hn\nslstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)\n_, (hn, _) = slstm(hn) # [1, 2, 200]\n\n# Similarly, get the hidden state of the last layer\nhn = hn[0] # [2, 200]\n
\n

Add some classification layers

\n
pred_linear = nn.Linear(hidden_size, 1)\n\n# [2, 1]\noutput = torch.sigmoid(pred_linear(hn))\n
\n", + "system": "" + }, + { + "instruction": "Pytorch - Apply pooling on specific dimension", + "input": "", + "output": "

You can simply permute the dimensions:

\n\n
x = torch.rand(5, 128, 32)\npool = nn.MaxPool1d(2, 2)\npool(x.permute(0,2,1)).permute(0,2,1)  # shape (5, 128, 32) -> (5, 64, 32)\n
\n", + "system": "" + }, + { + "instruction": "Masking and computing loss for a padded batch sent through an RNN with a linear output layer in pytorch", + "input": "", + "output": "

I think the PyTorch Chatbot Tutorial might be instructional for you.

\n\n

Basically, you calculate the mask of valid output values (paddings are not valid), and use that to calculate the loss for only those values.

\n\n

See the outputVar and maskNLLLoss methods on the tutorial page. For your convenience I copied the code here, but you really need to see it in context of all the code.

\n\n
# Returns padded target sequence tensor, padding mask, and max target length\ndef outputVar(l, voc):\n    indexes_batch = [indexesFromSentence(voc, sentence) for sentence in l]\n    max_target_len = max([len(indexes) for indexes in indexes_batch])\n    padList = zeroPadding(indexes_batch)\n    mask = binaryMatrix(padList)\n    mask = torch.BoolTensor(mask)\n    padVar = torch.LongTensor(padList)\n    return padVar, mask, max_target_len\n
\n\n
def maskNLLLoss(inp, target, mask):\n    nTotal = mask.sum()\n    crossEntropy = -torch.log(torch.gather(inp, 1, target.view(-1, 1)).squeeze(1))\n    loss = crossEntropy.masked_select(mask).mean()\n    loss = loss.to(device)\n    return loss, nTotal.item()\n
\n", + "system": "" + }, + { + "instruction": "Why does torch.from_numpy require a different byte ordering while matplotlib doesn't?", + "input": "", + "output": "

FITS stores data in big-endian byte ordering (at the time FITS was developed this was a more common machine architecture; sadly the standard has never been updated to allow flexibility on this, although it could easily be done with a single header keyword to indicate endianness of the data...)

\n\n

According to the Numpy docs, Numpy arrays report the endianness of the underlying data as part of their dtype (e.g. a dtype of '>i' means big-endian ints, and '<i' means little-endian ints). You can call .byteswap() on an array to swap the underlying bytes, and pair it with dtype.newbyteorder() to change the array's dtype to reflect the new byte order.

\n\n

Your solution of calling .astype(np.float32) should work, but that's because the np.float32 dtype uses your machine's native (typically little-endian) byte order, so .astype(...) copies an existing array and converts the data in that array, if necessary, to match that dtype. I just wanted to explain exactly why that works, since it might be otherwise unclear why you're doing that.

\n\n
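A small sketch of the byte-order conversion (mimicking FITS-style big-endian data with a made-up array):

```python
import numpy as np

# A big-endian float32 array, like data read from a FITS file
big = np.array([1.0, 2.0, 3.0], dtype=">f4")
print(big.dtype.byteorder)   # '>'

# Swap the underlying bytes and reinterpret with the swapped dtype
little = big.byteswap().view(big.dtype.newbyteorder())
print(little.dtype.byteorder)        # '<'
print(np.array_equal(big, little))   # True - same values, new byte layout
```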

As for matplotlib it doesn't really have much to do with your question. Numpy arrays can transparently perform operations on data that does not match the endianness of your machine architecture, by automatically performing byte swaps as necessary. Matplotlib and many other scientific Python libraries work directly with Numpy arrays and thus automatically benefit from its transparent handling of endianness.

\n\n

It just happens that PyTorch (in part because of its very high performance and GPU-centric data handling model) requires you to hand it data that's already in little-endian order, perhaps just to avoid ambiguity. But that's particular to PyTorch and is not specifically a contrast with matplotlib.

\n", + "system": "" + }, + { + "instruction": "Repeat specific columns of a tensor in Pytorch", + "input": "", + "output": "

Provided you're using PyTorch >= 1.1.0 you can use torch.repeat_interleave.

\n\n
repeat_tensor = torch.tensor(num_repeats).to(X.device, torch.int64)\nX_dup = torch.repeat_interleave(X, repeat_tensor, dim=1)\n
\n", + "system": "" + }, + { + "instruction": "ModuleNotFoundError: No module named 'torch'", + "input": "", + "output": "
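A concrete sketch with made-up values for X and num_repeats, showing how each column is repeated its own number of times:

```python
import torch

X = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])
num_repeats = [2, 1, 3]  # column 0 twice, column 1 once, column 2 three times

repeat_tensor = torch.tensor(num_repeats, dtype=torch.int64)
X_dup = torch.repeat_interleave(X, repeat_tensor, dim=1)
print(X_dup)
# tensor([[1, 1, 2, 3, 3, 3],
#         [4, 4, 5, 6, 6, 6]])
```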

Try the one mentioned below; it should work.

\n\n

conda install -c pytorch pytorch

\n", + "system": "" + }, + { + "instruction": "pytorch nn.Sequential(*list) TypeError: list is not a Module subclass", + "input": "", + "output": "

Please provide the list of layers that you have created; are you sure you haven't made an error there? Try checking whether your list is actually [] and not [[..]]. The other thing I noticed is that you used list as a variable name, which isn't a good idea, since list is a Python built-in.

\n\n

I tried writing a sample code of unpacking a list and it works fine for me.

\n\n
import torch\nimport torch.nn as nn\n\nnet = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))\n\nlayers = [nn.Linear(2, 2), nn.Linear(2, 2)]\nnet = nn.Sequential(*layers)\nprint(net)\n
\n\n

This ran without any error, and the result was:

\n\n
Sequential(\n  (0): Linear(in_features=2, out_features=2, bias=True)\n  (1): Linear(in_features=2, out_features=2, bias=True)\n)\n
\n\n

Hope this helps. :)

\n", + "system": "" + }, + { + "instruction": "pytorch - gradients not calculated for parameters", + "input": "", + "output": "

Basically, when you create a new tensor, like torch.nn.Parameter() or torch.tensor(), you are creating a leaf node tensor.

\n\n

And when you do something like c = a + 1, c will be an intermediate node. You can print(c.is_leaf) to check whether a tensor is a leaf node or not. By default, PyTorch does not calculate gradients for intermediate nodes.

\n\n

In your code snippet, a, b and d are all leaf node tensors, and c is an intermediate node. c.grad will be None as PyTorch doesn't calculate gradients for intermediate nodes. a is isolated from the graph when you call loss.backward(). That's why a.grad is also None.

\n\n

If you change the code to this

\n\n
a = torch.nn.Parameter(torch.randn(1, requires_grad=True, dtype=torch.float, device=device))\nb = torch.nn.Parameter(torch.randn(1, requires_grad=True, dtype=torch.float, device=device))\nc = a + 1\nd = c\nfor epoch in range(n_epochs):\n    yhat = d + b * x_train_tensor\n    error = y_train_tensor - yhat\n    loss = (error ** 2).mean()\n    loss.backward()\n    print(a.grad) # Not None\n    print(b.grad) # Not None\n    print(c.grad) # None\n    print(d.grad) # None\n
\n\n

You will find that a and b have gradients, but c.grad and d.grad are None, because they're intermediate nodes.
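If you do need the gradient of an intermediate node, you can ask PyTorch to keep it with retain_grad(). A minimal sketch (variable values made up):

```python
import torch

a = torch.nn.Parameter(torch.randn(1))  # leaf node
c = a + 1                               # intermediate node
print(a.is_leaf, c.is_leaf)             # True False

c.retain_grad()        # keep the gradient of the intermediate node
loss = (c ** 2).sum()
loss.backward()
print(a.grad, c.grad)  # both populated now
```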

\n", + "system": "" + }, + { + "instruction": "How to fix the error `Process finished with exit code -1073741819 (0xC0000005)`", + "input": "", + "output": "

Finally, I figured this out.

\n\n

This error occurs simply because one of my variables was not loaded on the GPU (CUDA).

\n\n

When I added output = Variable(netD(real_cpu), requires_grad=True), the problem was solved.

\n", + "system": "" + }, + { + "instruction": "How to do sequence classification with pytorch nn.Transformer?", + "input": "", + "output": "

The accuracy you mentioned indicates that something is wrong. Since you are comparing LSTM with TransformerEncoder, I want to point to some crucial differences.

\n\n
    \n
  1. Positional embeddings: This is very important since the Transformer has no concept of recurrence and so does not capture sequence order by itself. So, make sure you add positional information along with the input embeddings.

  2. \n
  3. Model architecture: d_model, n_head, num_encoder_layers are important. Go with the default size as used in Vaswani et al., 2017. (d_model=512, n_head=8, num_encoder_layers=6)

  4. \n
  5. Optimization: In many scenarios, it has been found that the Transformer needs to be trained with a smaller learning rate, a large batch size, and warm-up scheduling.

  6. \n
\n\n
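For point 1, a common sketch of the sinusoidal positional encoding from Vaswani et al., 2017 (the module name and shapes here are illustrative, not from the original question) is:

```python
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Sinusoidal positional encoding (Vaswani et al., 2017)."""
    def __init__(self, d_model, max_len=5000):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float()
                             * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        # shape (max_len, 1, d_model) so it broadcasts over the batch dim
        self.register_buffer('pe', pe.unsqueeze(1))

    def forward(self, x):
        # x: (seq_len, batch, d_model)
        return x + self.pe[:x.size(0)]
```

It is typically added to the (scaled) input embeddings before the first encoder layer.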

Last but not least, as a sanity check, make sure the parameters of the model are updating. You can also check the training accuracy to make sure it keeps increasing as the training proceeds.

\n\n

Although it is difficult to say what exactly is wrong in your code, I hope that the above points will help!

\n", + "system": "" + }, + { + "instruction": "Multi-GPU training of AllenNLP coreference resolution", + "input": "", + "output": "

After some digging through the code I found out that AllenNLP does this under the hood directly through its Trainer. The cuda_device can either be a single int (in the case of single-processing) or a list of ints (in the case of multi-processing):

\n\n
\n

cuda_device : Union[int, List[int]], optional (default = -1)\n An integer or list of integers specifying the CUDA device(s) to use. If -1, the CPU is used.

\n
\n\n

So all GPU devices needed should be passed on instead:

\n\n
if torch.cuda.is_available():\n    cuda_device = list(range(torch.cuda.device_count()))\n    model = model.cuda(cuda_device[0])\nelse:\n    cuda_device = -1\n
\n\n

Note that the model still has to be manually moved to the GPU (via model.cuda(...)), as it would otherwise try to use multiple CPUs instead.

\n", + "system": "" + }, + { + "instruction": "How to fix pytorch multi processing issue on cpu?", + "input": "", + "output": "

Skeleton

\n\n

Using the skeleton below I see 4 processes running. You should tweak n_train_processes. I set it to 10, which was too much as I have 8 cores; setting it to 6 works fine.

\n\n
...\nimport torch.multiprocessing as mp\n\nclass MyModel(nn.Module):\n    ...\n\n\ndef train(model, rank):\n    ...\n\n\ndef test(model):\n    ...\n\n\n\nn_train_processes = 3\n\n\nif __name__ == '__main__':\n    model = MyModel()\n    model.share_memory()\n\n    processes = []\n    for rank in range(n_train_processes + 1):  # + 1 for test process\n        if rank == 0:\n            p = mp.Process(target=test, args=(model,))\n        else:\n            p = mp.Process(target=train, args=(model, rank,))\n        p.start()\n        processes.append(p)\n    for p in processes:\n        p.join()\n
\n\n

Complete example

\n\n

This example is taken from https://github.com/seungeunrho/minimalRL which has some other nice RL examples. This is a3c.py.

\n\n
# a3c.py\nimport gym\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torch.distributions import Categorical\nimport torch.multiprocessing as mp\nimport time\n\nn_train_processes = 6\n\n# Hyperparameters\nlearning_rate = 0.0002\nupdate_interval = 5\ngamma = 0.98\nmax_train_ep = 300\nmax_test_ep = 400\n\n\nclass ActorCritic(nn.Module):\n    def __init__(self):\n        super(ActorCritic, self).__init__()\n        self.fc1 = nn.Linear(4, 256)\n        self.fc_pi = nn.Linear(256, 2)\n        self.fc_v = nn.Linear(256, 1)\n\n    def pi(self, x, softmax_dim=0):\n        x = F.relu(self.fc1(x))\n        x = self.fc_pi(x)\n        prob = F.softmax(x, dim=softmax_dim)\n        return prob\n\n    def v(self, x):\n        x = F.relu(self.fc1(x))\n        v = self.fc_v(x)\n        return v\n\n\ndef train(model, rank):\n    optimizer = optim.Adam(model.parameters(), lr=learning_rate)\n    env = gym.make('CartPole-v1')\n\n    for n_epi in range(max_train_ep):\n        done = False\n        s = env.reset()\n        while not done:\n            s_lst, a_lst, r_lst = [], [], []\n            for t in range(update_interval):\n                prob = model.pi(torch.from_numpy(s).float())\n                m = Categorical(prob)\n                a = m.sample().item()\n                s_prime, r, done, info = env.step(a)\n\n                s_lst.append(s)\n                a_lst.append([a])\n                r_lst.append(r/100.0)\n\n                s = s_prime\n                if done:\n                    break\n\n            R = 0.0\n            R_lst = []\n            for reward in r_lst[::-1]:\n                R = gamma * R + reward\n                R_lst.append([R])\n            R_lst.reverse()\n\n            done_mask = 0.0 if done else 1.0\n            s_batch, a_batch, R_batch, s_final = \\\n                torch.tensor(s_lst, dtype=torch.float), torch.tensor(a_lst), \\\n                torch.tensor(R_lst), 
torch.tensor(s_prime, dtype=torch.float)\n\n            td_target = R_batch + gamma * model.v(s_final) * done_mask\n            advantage = td_target - model.v(s_batch)\n            pi = model.pi(s_batch, softmax_dim=1)\n            pi_a = pi.gather(1, a_batch)\n            loss = -torch.log(pi_a) * advantage.detach() + \\\n                F.smooth_l1_loss(td_target.detach(), model.v(s_batch))\n\n            optimizer.zero_grad()\n            loss.mean().backward()\n            optimizer.step()\n\n    env.close()\n    print(\"Training process {} reached maximum episode.\".format(rank))\n\n\ndef test(model):\n    env = gym.make('CartPole-v1')\n    score = 0.0\n    print_interval = 20\n\n    for n_epi in range(max_test_ep):\n        done = False\n        s = env.reset()\n        while not done:\n            prob = model.pi(torch.from_numpy(s).float())\n            a = Categorical(prob).sample().item()\n            s_prime, r, done, info = env.step(a)\n            s = s_prime\n            score += r\n\n        if n_epi % print_interval == 0 and n_epi != 0:\n            print(\"# of episode :{}, avg score : {:.1f}\".format(\n                n_epi, score/print_interval))\n            score = 0.0\n            time.sleep(1)\n    env.close()\n\n\nif __name__ == '__main__':\n    model = ActorCritic()\n    model.share_memory()\n    processes = []\n    for rank in range(n_train_processes + 1):  # + 1 for test process\n        if rank == 0:\n            p = mp.Process(target=test, args=(model,))\n        else:\n            p = mp.Process(target=train, args=(model, rank,))\n        p.start()\n        processes.append(p)\n    for p in processes:\n        p.join()\n
\n", + "system": "" + }, + { + "instruction": "How to initialize the weights of different layers of nn.Sequential block in different styles in pytorch?", + "input": "", + "output": "

Here is one way of doing so:

\n\n
import torch\nimport torch.nn as nn \n\nnet = nn.Sequential()\n\nll1 = nn.Linear(2, 5, bias = False)\ntorch.nn.init.uniform_(ll1.weight, a=0, b=1) # a: lower_bound, b: upper_bound\nnet.add_module('Linear_1', ll1)\nprint(ll1.weight)\n\nll2 = nn.Linear(5, 5, bias = False)\ntorch.nn.init.constant_(ll2.weight, 2.0)\nnet.add_module('Linear_2', ll2)\nprint(ll2.weight)\n\nprint(net)\n
\n\n

Output:

\n\n
Parameter containing:\ntensor([[0.2549, 0.7823],\n        [0.3439, 0.4721],\n        [0.0709, 0.6447],\n        [0.3969, 0.7849],\n        [0.7631, 0.5465]], requires_grad=True)\n\nParameter containing:\ntensor([[2., 2., 2., 2., 2.],\n        [2., 2., 2., 2., 2.],\n        [2., 2., 2., 2., 2.],\n        [2., 2., 2., 2., 2.],\n        [2., 2., 2., 2., 2.]], requires_grad=True)\n\nSequential(\n(Linear_1): Linear(in_features=2, out_features=5, bias=False)\n(Linear_2): Linear(in_features=5, out_features=5, bias=False)\n)\n
\n", + "system": "" + }, + { + "instruction": "Pytorch tensor indexing: How to gather rows by tensor containing indices", + "input": "", + "output": "

Here is a solution you may be looking for:

\n\n
ids = ids.repeat(1, 255).view(-1, 1, 255)\n
\n\n

An example as below:

\n\n
x = torch.arange(24).view(4, 3, 2) \n\"\"\"\ntensor([[[ 0,  1],\n         [ 2,  3],\n         [ 4,  5]],\n\n        [[ 6,  7],\n         [ 8,  9],\n         [10, 11]],\n\n        [[12, 13],\n         [14, 15],\n         [16, 17]],\n\n        [[18, 19],\n         [20, 21],\n         [22, 23]]])\n\"\"\"\nids = torch.randint(0, 3, size=(4, 1))\n\"\"\"\ntensor([[0],\n        [2],\n        [0],\n        [2]])\n\"\"\"\nidx = ids.repeat(1, 2).view(4, 1, 2) \n\"\"\"\ntensor([[[0, 0]],\n\n        [[2, 2]],\n\n        [[0, 0]],\n\n        [[2, 2]]])\n\"\"\"\n\ntorch.gather(x, 1, idx) \n\"\"\"\ntensor([[[ 0,  1]],\n\n        [[10, 11]],\n\n        [[12, 13]],\n\n        [[22, 23]]])\n\"\"\"\n
\n", + "system": "" + }, + { + "instruction": "How to dynamically index the tensor in pytorch?", + "input": "", + "output": "

There is also a way just using PyTorch and avoiding the loop using indexing and torch.split:

\n\n
tensor = torch.rand(12, 512, 768)\n\n# create tensor with idx\nidx_list = [0,2,3,400,5,32,7,8,321,107,100,511]\n# convert list to tensor\nidx_tensor = torch.tensor(idx_list) \n\n# indexing and splitting\nlist_of_tensors = tensor[:, idx_tensor, :].split(1, dim=1)\n
\n\n

When you call tensor[:, idx_tensor, :] you will get a tensor of shape:
(12, len_of_idx_list, 768).
Where the second dimension depends on your number of indices.

\n\n

Using torch.split this tensor is split into a list of tensors of shape: (12, 1, 768).

\n\n

So finally list_of_tensors contains tensors of the shape:

\n\n
[torch.Size([12, 1, 768]),\n torch.Size([12, 1, 768]),\n torch.Size([12, 1, 768]),\n torch.Size([12, 1, 768]),\n torch.Size([12, 1, 768]),\n torch.Size([12, 1, 768]),\n torch.Size([12, 1, 768]),\n torch.Size([12, 1, 768]),\n torch.Size([12, 1, 768]),\n torch.Size([12, 1, 768]),\n torch.Size([12, 1, 768]),\n torch.Size([12, 1, 768])]\n
\n", + "system": "" + }, + { + "instruction": "How to use spatial transformer to crop the image in pytorch?", + "input": "", + "output": "

The image in the original post already provides a good answer, but it might be useful to provide some code.

\n

Importantly, this method should retain gradients correctly. In my case I have a batch of y,x values that represent the center of the crop position (in the range [-1,1]). As for the values a and b, which are scale x and y values for the transformation, in my case I used 0.5 for each in combination with a smaller output size (half in width and height) to retain the original scale, i.e. to crop. You can use 1 to have no scale changes, but then there would be no cropping.

\n
import torch\nimport torch.nn.functional as F\n\n# a and b are the x/y scale factors discussed above; 0.5 is the value\n# assumed in this example (it crops while halving the output size)\na, b = 0.5, 0.5\n\ndef crop_to_affine_matrix(t):\n    'Turns (N,2) translate values into (N,2,3) affine transformation matrix'\n    t = t.reshape(-1,1,2,1).flip(2) # flip x,y order to y,x\n    t = F.pad(t, (2,0,0,0)).squeeze(1)\n    t[:,0,0] = a\n    t[:,1,1] = b\n    return t\n\nt = torch.zeros(5,2) # center crop positions for batch size 5\noutsize = torch.Size((5, 3, 32, 32)) # assumed (N, C, H, W) of the output grid\nF.affine_grid(crop_to_affine_matrix(t), outsize)\n
\n", + "system": "" + }, + { + "instruction": "PyTorch: What does @weak_script_method decorator do?", + "input": "", + "output": "

You can find the exact decorator location to get the idea.

\n\n
def weak_script_method(fn):\n    weak_script_methods[fn] = {\n        \"rcb\": createResolutionCallback(frames_up=2),\n        \"original_method\": fn\n    }\n    return fn\n
\n\n

But, you shouldn't need to worry about that decorator. This decorator is internal to JIT.

\n\n

Technically, a method decorated with @weak_script_method will be added to the weak_script_methods dictionary created up front, like this:

\n\n
weak_script_methods = weakref.WeakKeyDictionary() \n
\n\n

That dict tracks methods in order to avoid circular dependency problems caused by methods calling other methods while creating the PyTorch graph.

\n\n
\n\n

This won't make much sense unless you understand the concept of TorchScript in general.

\n\n

The idea of TorchScript is to train models in PyTorch and export them to another, non-Python production environment (read: C++/C/CUDA) that supports static typing.

\n\n

The PyTorch team built TorchScript on a limited Python base to support static typing. By default, Python is a dynamically typed language, but with a few tricks (read: checks) it can behave like a statically typed language.

\n\n

And so TorchScript functions are a statically typed subset of Python that contains all of PyTorch's built-in tensor operations. This difference allows TorchScript module code to run without the need for a Python interpreter.

\n\n

You can either convert existing PyTorch methods to TorchScript using tracing (the torch.jit.trace() method), or create your TorchScript by hand using the @torch.jit.script decorator.

\n\n

If you use tracing you will get a single class module at the end. Here is the example:

\n\n
import inspect\n\nimport torch\ndef foo(x, y):\n    return x + y\ntraced_foo = torch.jit.trace(foo, (torch.rand(3), torch.rand(3)))\n\nprint(type(traced_foo)) #<class 'torch.jit.TopLevelTracedModule'>\nprint(traced_foo) #foo()\nprint(traced_foo.forward) #<bound method TopLevelTracedModule.forward of foo()>\n\nlines = inspect.getsource(traced_foo.forward)\nprint(lines)\n
\n\n
\n\n

Output:

\n\n
<class 'torch.jit.TopLevelTracedModule'>\nfoo()\n<bound method TopLevelTracedModule.forward of foo()>\n    def forward(self, *args, **kwargs):\n        return self._get_method('forward')(*args, **kwargs)\n
\n\n

You can investigate further using the inspect module. This was just a showcase how to convert one function using tracing.

\n", + "system": "" + }, + { + "instruction": "Pytorch: Why loss functions are implemented both in nn.modules.loss and nn.functional module?", + "input": "", + "output": "

I think of it as a partial application situation - it's useful to be able to \"bundle\" many of the configuration variables with the loss function object. In most cases, your loss function has to take prediction and ground_truth as its arguments. This makes for a fairly uniform basic API of loss functions. However, they differ in details. For instance, not every loss function has a reduction parameter. BCEWithLogitsLoss has weight and pos_weight parameters; PoissonNLLLoss has log_input, eps. It's handy to write a function like

\n\n
def one_epoch(model, dataset, loss_fn, optimizer):\n    for x, y in dataset:\n        model.zero_grad()\n        y_pred = model(x)\n        loss = loss_fn(y_pred, y)\n        loss.backward()\n        optimizer.step()\n
\n\n

which can work with instantiated BCEWithLogitsLoss equally well as with PoissonNLLLoss. But it cannot work with their functional counterparts, because of the bookkeeping necessary. You would instead have to first create

\n\n
loss_fn_packed = functools.partial(F.binary_cross_entropy_with_logits, weight=my_weight, reduction='sum')\n
\n\n

and only then you can use it with one_epoch defined above. But this packing is already provided with the object-oriented loss API, along with some bells and whistles (since losses subclass nn.Module, you can use forward and backward hooks, move stuff between cpu and gpu, etc).
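To make the uniform-API point concrete, here is a small sketch (tensors made up for illustration): differently configured loss objects all expose the same two-argument call, so any of them can be handed to a function like one_epoch unchanged:

```python
import torch
import torch.nn as nn

pred = torch.randn(4, 1)   # model predictions (logits)
target = torch.rand(4, 1)  # ground truth in [0, 1]

# configuration is bundled at construction time...
losses = [
    nn.BCEWithLogitsLoss(pos_weight=torch.tensor([2.0])),
    nn.MSELoss(reduction='sum'),
]

# ...so every loss is called the same way afterwards
for loss_fn in losses:
    value = loss_fn(pred, target)
    print(type(loss_fn).__name__, value.item())
```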

\n", + "system": "" + }, + { + "instruction": "How to wrap PyTorch functions and implement autograd?", + "input": "", + "output": "

You have picked a rather unlucky example. torch.nn.functional.max_pool1d is not an instance of torch.autograd.Function, because it's a PyTorch built-in, defined in C++ code and with an autogenerated Python binding. I am not sure if it's possible to get the backward property via its interface.

\n\n

Firstly, in case you haven't noticed, you don't need to write any custom code for backpropagation of this formula, because both the power operation and max_pool1d already have it defined, so their composition is also covered by autograd. Assuming your goal is an exercise, I would suggest you do it more manually (without falling back to the backward of max_pool1d). An example is below.

\n\n
import torch\nimport torch.nn.functional as F\nimport torch.autograd as tag\n\nclass SquareAndMaxPool1d(tag.Function):\n    @staticmethod\n    def forward(ctx, input, kernel_size, **kwargs):\n        # we're gonna need indices for backward. Currently SquareAnd...\n        # never actually returns indices, I left it out for simplicity\n        kwargs['return_indices'] = True\n\n        input_sqr = input ** 2\n        output, indices = F.max_pool1d(input_sqr, kernel_size, **kwargs)\n        ctx.save_for_backward(input, indices)\n\n        return output\n\n    @staticmethod\n    def backward(ctx, grad_output):\n        input, indices = ctx.saved_tensors\n\n        # first we need to reconstruct the gradient of `max_pool1d`\n        # by putting all the output gradient elements (corresponding to\n        # input elements which made it through the max_pool1d) in their\n        # respective places, the rest has gradient of 0. We do it by\n        # scattering it against a tensor of 0s\n        grad_output_unpooled = torch.zeros_like(input)\n        grad_output_unpooled.scatter_(2, indices, grad_output)\n\n        # then incorporate the gradient of the \"square\" part of your\n        # operator\n        grad_input = 2. * input * grad_output_unpooled\n\n        # the docs for backward\n        # https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function.backward\n        # say that \"it should return as many tensors, as there were inputs\n        # to forward()\". It fails to mention that if an argument was not a\n        # tensor, it should return None (I remember reading this somewhere,\n        # but can't find it anymore). Anyway, we need to\n        # return a (grad_input, None) tuple to avoid a complaint that two\n        # outputs were expected\n        return grad_input, None\n
\n\n

We can then use the numerical gradient checker to verify that the operation works as expected.

\n\n
f = SquareAndMaxPool1d.apply\nxT = torch.randn(1, 1, 6, requires_grad=True, dtype=torch.float64)\ntag.gradcheck(lambda t: f(t, 2), xT)\n
\n\n

I'm sorry if this doesn't address your question of how to get the backward of max_pool1d, but hopefully you find my answer useful enough.

\n", + "system": "" + }, + { + "instruction": "How to return back to cpu from gpu in pytorch?", + "input": "", + "output": "

You can set the GPU device that you want to use using:

\n\n
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')\n
\n\n

And in your case you can move things back to the CPU using:

\n\n
device = torch.device('cpu')\nmodel.to(device)       # move the model back to the CPU\ntensor = tensor.cpu()  # or move an individual tensor\n
\n", + "system": "" + }, + { + "instruction": "pytorch : unable to understand model.forward function", + "input": "", + "output": "

I'm a layman so I'll help you with the layman's terms :)

\n\n
input_size = 784\nhidden_sizes = [128, 64]\noutput_size = 10\n
\n\n

These are parameters for the layers in your network. Each neural network consists of layers, and each layer has an input and an output shape.

\n\n

Specifically, input_size deals with the input shape of the first layer. This is the input_size of the entire network. Each sample that is input into the network will be a 1-dimensional vector of length 784 (an array that is 784 long).

\n\n

hidden_size deals with the shapes inside the network. We will cover this a little later.

\n\n

output_size deals with the output shape of the last layer. This means that our network will output a 1-dimensional vector of length 10 for each sample.

\n\n

Now to break up model definition line by line:

\n\n
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),\n
\n\n

The nn.Sequential part simply defines a network, each argument that is input defines a new layer in that network in that order.

\n\n

nn.Linear(input_size, hidden_sizes[0]) is an example of such a layer. It is the first layer of our network: it takes in an input of size input_size and outputs a vector of size hidden_sizes[0]. The size of the output is considered \"hidden\" in that it is neither the input nor the output of the whole network. It is \"hidden\" because it's located inside the network, far from the input and output ends that you interact with when you actually use it.

\n\n

This is called Linear because it applies a linear transformation by multiplying the input by its weights matrix and adding its bias matrix to the result. (Y = Ax + b, Y = output, x = input, A = weights, b = bias).

\n\n
nn.ReLU(),\n
\n\n

ReLU is an example of an activation function. What this function does is apply some sort of transformation to the output of the last layer (the layer discussed above), and outputs the result of that transformation. In this case the function being used is the ReLU function, which is defined as ReLU(x) = max(x, 0). Activation functions are used in neural networks because they create non-linearities. This allows your model to model non-linear relationships.

\n\n
nn.Linear(hidden_sizes[0], hidden_sizes[1]),\n
\n\n

From what we discussed above, this is a another example of a layer. It takes an input of hidden_sizes[0] (same shape as the output of the last layer) and outputs a 1D vector of length hidden_sizes[1].

\n\n
nn.ReLU(),\n
\n\n

Applies the ReLU function again.

\n\n
nn.Linear(hidden_sizes[1], output_size)\n
\n\n

Same as the above two layers, but our output shape is the output_size this time.

\n\n
nn.Softmax(dim=1))\n
\n\n

Another activation function. This activation function turns the logits outputted by nn.Linear into an actual probability distribution. This lets the model output the probability for each class. At this point our model is built.

\n\n
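Putting the pieces discussed above back together, the whole definition reads (a sketch using the sizes defined at the top):

```python
import torch
import torch.nn as nn

input_size = 784
hidden_sizes = [128, 64]
output_size = 10

model = nn.Sequential(
    nn.Linear(input_size, hidden_sizes[0]),
    nn.ReLU(),
    nn.Linear(hidden_sizes[0], hidden_sizes[1]),
    nn.ReLU(),
    nn.Linear(hidden_sizes[1], output_size),
    nn.Softmax(dim=1),
)
```

Feeding it a batch of two samples yields a (2, 10) tensor whose rows each sum to 1, i.e. one probability distribution per sample.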
# Forward pass through the network and display output\nimages, labels = next(iter(trainloader))\nimages.resize_(images.shape[0], 1, 784)\nprint(images.shape)\n
\n\n

These lines simply preprocess the training data and put it into the correct format.

\n\n
ps = model.forward(images[0,:])\n
\n\n

This passes the images through the model (a forward pass) and applies the operations previously discussed layer by layer. You get the resultant output.

\n", + "system": "" + }, + { + "instruction": "When does PyTorch automatically cast Tensor dtype?", + "input": "", + "output": "

It looks like the PyTorch team is working on those types of problems, see this issue. It seems like some basic upcasting is already implemented in 1.0.0 as per your example (probably for the overloaded operators, tried some others like '//' or addition and they work fine), did not find any proof of this though (like github issue or info in documentation). If someone finds it (implicit casting of torch.Tensor for various operations), please post a comment or another answer.

\n\n

This issue is a proposal on type promotion, as you can see all of those are still open.

\n", + "system": "" + }, + { + "instruction": "TypeError: add(): argument 'other' (position 1) must be Tensor, not numpy.ndarray", + "input": "", + "output": "

OK, the error seems to be specific to the latest pytorch-1.0.0; after downgrading to pytorch-0.4.1 the code seems to work (it gets past the failing lines at this point). I still have no idea how to make the code work with pytorch-1.0.0.

\n", + "system": "" + }, + { + "instruction": "How can I compute the tensor in Pytorch efficiently?", + "input": "", + "output": "

It is slow because you use two for loops.

\n\n

You can use .prod See: https://pytorch.org/docs/stable/torch.html#torch.prod

\n\n

In your case,

\n\n

x = torch.prod(x, dim=1) or x = x.prod(dim=1)

\n\n

should work
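A quick sketch with a made-up tensor:

```python
import torch

x = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.]])

row_products = torch.prod(x, dim=1)  # product along each row
print(row_products)  # tensor([  6., 120.])
```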

\n", + "system": "" + }, + { + "instruction": "Increasingly large, positive WGAN-GP loss", + "input": "", + "output": "

Batch normalization in the discriminator breaks Wasserstein GANs with gradient penalty. The authors themselves advocate the use of layer normalization instead; this is written in bold in their paper (https://papers.nips.cc/paper/7159-improved-training-of-wasserstein-gans.pdf). It is hard to say if there are other bugs in your code, but I urge you to thoroughly read the DCGAN and the Wasserstein GAN papers and really take notes on the hyperparameters. Getting them wrong really destroys the performance of the GAN, and a hyperparameter search gets expensive quite quickly.

\n\n

By the way, transposed convolutions produce checkerboard artifacts in your output images. Use image resizing (upsampling followed by a regular convolution) instead. For an in-depth explanation of that phenomenon I can recommend the following resource (https://distill.pub/2016/deconv-checkerboard/).

\n", + "system": "" + }, + { + "instruction": "PyTorch: create non-fully-connected layer / concatenate output of hidden layers", + "input": "", + "output": "

It turned out to be a simple comprehension problem with the concatenation function. Changing x = torch.cat((xLeft, xRight)) to x = torch.cat((xLeft, xRight), dim=1) did the trick. Thanks @dennlinger
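A minimal sketch of why dim matters here (shapes made up for illustration): with batch-first tensors, dim=1 concatenates along the feature dimension while keeping the batch dimension intact:

```python
import torch

xLeft = torch.randn(8, 4)   # (batch, features_left)
xRight = torch.randn(8, 6)  # (batch, features_right)

x = torch.cat((xLeft, xRight), dim=1)
print(x.shape)  # torch.Size([8, 10])
```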

\n", + "system": "" + }, + { + "instruction": "Problems about torch.nn.DataParallel", + "input": "", + "output": "

DataParallel should work on a single GPU as well, but you should check whether args.gpus contains only the id of the device that is to be used (it should be 0), or is None. Choosing None will make the module use all available devices.

\n\n

Also, you could remove DataParallel, as you do not need it, and move the model to the GPU simply by calling model.cuda() or, as I prefer, model.to(device), where device is the device's name.

\n\n

Example:

\n\n

This example shows how to use a model on a single GPU, setting the device using .to() instead of .cuda().

\n\n
from torch import nn\nimport torch\n\n# Set device to cuda if cuda is available\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n\n# Create model\nmodel = nn.Sequential(\n  nn.Conv2d(1,20,5),\n  nn.ReLU(),\n  nn.Conv2d(20,64,5),\n  nn.ReLU()\n)\n\n# moving model to GPU\nmodel.to(device)\n
\n\n

If you want to use DataParallel you could do it like this

\n\n
# Optional DataParallel, not needed for single GPU usage\nmodel1 = torch.nn.DataParallel(model, device_ids=[0]).to(device)\n# Or, using default 'device_ids=None'\nmodel1 = torch.nn.DataParallel(model).to(device)\n
\n", + "system": "" + }, + { + "instruction": "Problem when using py.test with a pytorch project via the python extension of visual studio code", + "input": "", + "output": "

I found the following workaround: the issue does not appear when the code is started from the command line with the desired env activated.

\n", + "system": "" + }, + { + "instruction": "Pytorch tensor - How to get the indexes by a specific tensor", + "input": "", + "output": "

How about

\n\n
In [1]: torch.nonzero((t == q).sum(dim=1) == t.size(1))\nOut[1]: \ntensor([[ 0],\n        [ 3]])\n
\n\n

Comparing t == q performs element-wise comparison between t and q, since you are looking for entire row match, you need to .sum(dim=1) along the rows and see what row is a perfect match == t.size(1).

\n\n
\n\n

As of v0.4.1, torch.all() supports the dim argument:

\n\n
torch.all(t==q, dim=1)\n
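A small sketch combining the two approaches (values made up for illustration):

```python
import torch

t = torch.tensor([[1, 2],
                  [3, 4],
                  [5, 6],
                  [1, 2]])
q = torch.tensor([1, 2])

mask = torch.all(t == q, dim=1)  # True where the whole row matches q
idx = torch.nonzero(mask)
print(idx)  # tensor([[0], [3]])
```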
\n", + "system": "" + }, + { + "instruction": "Pytorch how to get the gradient of loss function twice", + "input": "", + "output": "

I think you are trying to implement a generative adversarial network (GAN), but from the code I can't quite follow what you are trying to achieve, as there are a few missing pieces for a GAN to work. I can see there's a discriminator network module, DNetwork, but the generator network module is missing.

\n\n

If I had to guess, when you say 'loss function twice', I assume you mean you have one loss function for the discriminator net and another for the generator net. If that's the case, let me share how I would implement a basic GAN model.

\n\n

As an example, let's take a look at this Wasserstein GAN Jupyter notebook

\n\n

I'll skip the less important bits and zoom into the important ones here:

\n\n
    \n
  1. First, import PyTorch libraries and set up

    \n\n
    # Set up batch size, image size, and size of noise vector:\nbs, sz, nz = 64, 64, 100 # nz is the size of the latent z vector for creating some random noise later\n
  2. \n
  3. Build a discriminator module

    \n\n
    class DCGAN_D(nn.Module):\n    def __init__(self):\n        ... truncated, the usual neural nets stuffs, layers, etc ...\n    def forward(self, input):\n        ... truncated, the usual neural nets stuffs, layers, etc ...\n
  4. \n
  5. Build a generator module

    \n\n
    class DCGAN_G(nn.Module):\n    def __init__(self):\n        ... truncated, the usual neural nets stuffs, layers, etc ...\n    def forward(self, input):\n        ... truncated, the usual neural nets stuffs, layers, etc ...\n
  6. \n
  7. Put them all together

    \n\n
    netG = DCGAN_G().cuda()\nnetD = DCGAN_D().cuda()\n
  8. \n
  9. Optimizer needs to be told what variables to optimize. A module automatically keeps track of its variables.

    \n\n
    optimizerD = optim.RMSprop(netD.parameters(), lr = 1e-4)\noptimizerG = optim.RMSprop(netG.parameters(), lr = 1e-4)\n
  10. \n
  11. One forward step and one backward step for Discriminator

    \n\n

Here, the network can calculate the gradient during the backward pass, depending on the input to this function. So, in my case, I have 3 types of losses: generator loss, discriminator real-image loss, and discriminator fake-image loss. I can get the gradient of the loss function three times for 3 different network passes.

    \n\n
    def step_D(input, init_grad):\n    # input can be from generator's generated image data or input image from dataset\n    err = netD(input)\n    err.backward(init_grad) # backward pass net to calculate gradient\n    return err # loss\n
  12. \n
  13. Control trainable parameters [IMPORTANT]

    \n\n

    Trainable parameters in the model are those that require gradients.

    \n\n
    def make_trainable(net, val):\n    for p in net.parameters():\n        p.requires_grad = val # note, i.e, this is later set to False below in netG update in the train loop.\n
    \n\n

    In TensorFlow, this part can be coded like below:\n\ngrad = tf.gradients(loss, X)\ngrad = tf.stop_gradient(grad)\n

    \n\n

    So, I think this will answer your first question, \"I substituted tf.stop_gradient with grad.detach_(). Is this correct?\"

  14. \n
  15. Train loop

  16. \n
\n\n

You can see how the 3 different loss functions are being called here.

\n\n
    def train(niter, first=True):\n\n        for epoch in range(niter):\n            # Make iterable from PyTorch DataLoader\n            data_iter = iter(dataloader)\n            i = 0\n\n            while i < n:\n                ###########################\n                # (1) Update D network\n                ###########################\n                make_trainable(netD, True)\n\n                # train the discriminator d_iters times\n                d_iters = 100\n\n                j = 0\n\n                while j < d_iters and i < n:\n                    j += 1\n                    i += 1\n\n                    # clamp parameters to a cube\n                    for p in netD.parameters():\n                        p.data.clamp_(-0.01, 0.01)\n\n                    data = next(data_iter)\n\n                    ##### train with real #####\n                    real_cpu, _ = data\n                    real_cpu = real_cpu.cuda()\n                    real = Variable( data[0].cuda() )\n                    netD.zero_grad()\n\n                    # Real image discriminator loss\n                    errD_real = step_D(real, one)\n\n                    ##### train with fake #####\n                    fake = netG(create_noise(real.size()[0]))\n                    input.data.resize_(real.size()).copy_(fake.data)\n\n                    # Fake image discriminator loss\n                    errD_fake = step_D(input, mone)\n\n                    # Discriminator loss\n                    errD = errD_real - errD_fake\n                    optimizerD.step()\n\n                ###########################\n                # (2) Update G network\n                ###########################\n                make_trainable(netD, False)\n                netG.zero_grad()\n\n                # Generator loss\n                errG = step_D(netG(create_noise(bs)), one)\n                optimizerG.step()\n\n                print('[%d/%d][%d/%d] Loss_D: %f Loss_G: %f Loss_D_real: %f 
Loss_D_fake %f'\n                    % (epoch, niter, i, n,\n                    errD.data[0], errG.data[0], errD_real.data[0], errD_fake.data[0]))\n
\n\n
\n

\"I was getting \"RuntimeError: Trying to backward through the graph a second time...\"

\n
\n\n

PyTorch has this behaviour: to reduce GPU memory usage, during the .backward() call, all the intermediary results (saved activations, etc.) are deleted when they are no longer needed. Therefore, if you try to call .backward() again, the intermediary results don't exist and the backward pass cannot be performed (and you get the error you see).

\n\n

It depends on what you are trying to do. You can call .backward(retain_graph=True) to make a backward pass that will not delete intermediary results, and so you will be able to call .backward() again. All but the last call to backward should have the retain_graph=True option.

\n\n
\n

Can someone let me know pytorch's best practice on this

\n
\n\n

As you can see from the PyTorch code above, and from the way things are done in PyTorch, which tries to stay Pythonic, you can get a sense of PyTorch's best practices there.

\n", + "system": "" + }, + { + "instruction": "Channel wise CrossEntropyLoss for image segmentation in pytorch", + "input": "", + "output": "

As Shai's answer already states, the documentation on the torch.nn.CrossEntropy() function can be found here and the code can be found here. The built-in functions do indeed already support KD cross-entropy loss.

\n\n

In the 3D case, the <code>torch.nn.CrossEntropyLoss()</code> function expects two arguments: a 4D input matrix and a 3D target matrix. The input matrix has shape <code>(Minibatch, Classes, H, W)</code>. The target matrix has shape <code>(Minibatch, H, W)</code> with values ranging from 0 to <code>(Classes-1)</code>. If you start with a one-hot encoded matrix, you will have to convert it with <code>np.argmax()</code>.

\n\n

Example with three classes and minibatch size of 1:

\n\n
import torch\nimport numpy as np\n\ninput_torch = torch.randn(1, 3, 2, 5, requires_grad=True)\n\none_hot = np.array([[[1, 1, 1, 0, 0], [0, 0, 0, 0, 0]],    \n                    [[0, 0, 0, 0, 0], [1, 1, 1, 0, 0]],\n                    [[0, 0, 0, 1, 1], [0, 0, 0, 1, 1]]])\n\n# convert the one-hot target (Classes, H, W) to dense class indices (H, W),\n# then add the minibatch dimension\ntarget = np.argmax(one_hot, axis=0)\ntarget_torch = torch.tensor(target).unsqueeze(0)\n\nloss = torch.nn.CrossEntropyLoss()\noutput = loss(input_torch, target_torch)\noutput.backward()\n
\n", + "system": "" + }, + { + "instruction": "How to perform sum pooling in PyTorch", + "input": "", + "output": "

You could use <code>torch.nn.AvgPool1d</code> (or <code>torch.nn.AvgPool2d</code>, <code>torch.nn.AvgPool3d</code>), which perform mean pooling, which is proportional to sum pooling. If you really want the summed values, you can multiply the averaged output by the pooling surface.
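For instance, a small sketch of this idea (the 1D input and the kernel size of 3 are illustrative assumptions):

```python
import torch
import torch.nn as nn

x = torch.arange(12, dtype=torch.float32).reshape(1, 1, 12)  # (batch, channels, length)
avg = nn.AvgPool1d(kernel_size=3)  # stride defaults to kernel_size

# multiply the mean by the pooling surface (kernel size) to recover the sum
sum_pooled = avg(x) * 3
```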

\n", + "system": "" + }, + { + "instruction": "Convert image to proper dimension PyTorch", + "input": "", + "output": "\n\n

I'd prefer the following, which leaves the original image unmodified and simply adds a new axis as desired:

\n\n
_image = np.array(_image)\nimage = torch.from_numpy(_image)\nimage = image[np.newaxis, :] \n# _unsqueeze works fine here too\n
\n\n

Then to swap the axes as desired:

\n\n
image = image.permute(0, 3, 1, 2)\n# permutation applies the following mapping\n# axis0 -> axis0\n# axis1 -> axis3\n# axis2 -> axis1\n# axis3 -> axis2\n
\n", + "system": "" + }, + { + "instruction": "pytorch cnn stride error", + "input": "", + "output": "

Maybe I know what's happening, because I faced the same runtime error just 4 or 5 hours ago.

\n\n

Here is the solution in my case (I defined the dataset myself):

\n\n

The image I feed into the net has 1 channel, same as your code (<code>self.conv1 = nn.Conv2d(in_channels=1, ...)</code>). The shape of an image that triggers the runtime error is as follows:

\n\n

error_img

\n\n

\"enter

\n\n

The fixed image is as follows:

\n\n

fixed_img

\n\n

\"enter

\n\n

Can you feel the difference? The input image should have 1 channel, so <code>img.shape</code> should be a 3-element tuple! Use <code>img.reshape(1, 100, 100)</code> to fix it, and the net's forward function will proceed.

\n\n

I hope it can help you.

\n", + "system": "" + }, + { + "instruction": "math operator difference *= or +=", + "input": "", + "output": "

When you do dd = cc, both dd and cc are now references to the same object (same for bb = aa). Nothing is being copied!

\n\n

When you do bb = bb * 2, the * operator creates a new object and bb now refers to that object. No existing object is changed.

\n\n

When you do dd *= 2, the object that dd refers to (and which cc also refers to) is changed.

\n\n

So the difference is that * creates a new object and = makes a variable refer to a new object (rather than changing the object in any way), whereas *= changes the object.

\n\n

It may be counter-intuitive that x *= y behaves differently than x = x * y, but those are the semantics of the language, not a bug.
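A small demonstration of the difference (assuming PyTorch tensors, in keeping with the rest of this discussion):

```python
import torch

aa = torch.tensor([1.0, 2.0])
bb = aa                # bb and aa reference the same tensor
bb = bb * 2            # * builds a new tensor; aa is untouched

cc = torch.tensor([1.0, 2.0])
dd = cc                # dd and cc reference the same tensor
dd *= 2                # *= modifies the tensor in place; cc sees the change
```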

\n", + "system": "" + }, + { + "instruction": "PyTorch ValueError: Target and input must have the same number of elements", + "input": "", + "output": "

The idea of nn.BCELoss() is to implement the following formula:

\n\n

\"enter

\n\n

Both <code>o</code> and <code>t</code> are tensors of arbitrary (but identical!) size, and <code>i</code> simply indexes each element of the two tensors to compute the sum above.

\n\n

Typically, <code>nn.BCELoss()</code> is used in a classification setting: <code>o</code> and <code>t</code> will be matrices of dimensions <code>N x D</code>. <code>N</code> will be the number of observations in your dataset or minibatch. <code>D</code> will be 1 if you are only trying to classify a single property, and larger than 1 if you are trying to classify multiple properties. <code>t</code>, the target matrix, will only hold 0s and 1s, as for every property there are only two classes (that's where the binary in binary cross-entropy loss comes from). <code>o</code> will hold the probability with which you assign every property of every observation to class 1.

\n\n

Now in your setting above, it is not clear how many classes you are considering and how many properties there are. If there is only one property, as suggested by the shape of your target, your model should only output a single quantity, namely the probability of being in class 1. If there are two properties, your targets are incomplete! If there are multiple classes, you should work with <code>torch.nn.CrossEntropyLoss</code> instead of <code>torch.nn.BCELoss()</code>.

\n\n

As an aside, for the sake of numerical stability it is often desirable to use <code>torch.nn.BCEWithLogitsLoss</code> on the raw outputs instead of applying <code>nn.Sigmoid()</code> followed by <code>torch.nn.BCELoss()</code>.
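A quick sketch showing that the two formulations compute the same value (up to floating-point error), while the logits variant avoids the intermediate sigmoid:

```python
import torch
import torch.nn as nn

logits = torch.randn(4, 1)                    # raw model outputs
target = torch.randint(0, 2, (4, 1)).float()  # binary labels

loss_sigmoid_bce = nn.BCELoss()(torch.sigmoid(logits), target)
loss_logits = nn.BCEWithLogitsLoss()(logits, target)  # numerically stabler
```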

\n", + "system": "" + }, + { + "instruction": "expected Double tensor (got Float tensor) in Pytorch", + "input": "", + "output": "\n\n

Error 1

\n\n
\n

RuntimeError: expected Double tensor (got Float tensor)

\n
\n\n

This happens when you pass a double tensor to the first <code>conv1d</code> function. <code>Conv1d</code> only works with float tensors.\nEither do,

\n\n
    \n
  1. conv1.double() , or
  2. \n
  3. model.double().
  4. \n
\n\n

Which is what you have done, and that is correct.

\n\n

Error 2

\n\n
\n

RuntimeError: Given input size: (300 x 1 x 1). Calculated output size: (128 x 1 x -3). Output size is too small

\n
\n\n

This is because you are passing an input for which a convolution with a window size of 5 is not valid. You will have to add padding to your Conv1ds for this to work, like so:

\n\n
self.conv1 = nn.Conv1d(300, 128, kernel_size=5, padding=2)\n
\n\n

If you don't wish to add padding, then given <code>(batch_size, in_channels, inp_size)</code> as the size of the input tensor, you have to make sure that <code>inp_size</code> is greater than 5.

\n\n

All fixes combined

\n\n

Make sure that your sizes are correct for the rest of your network. Like so:

\n\n
class Net(nn.Module):\n    def __init__(self):\n        super(Net, self).__init__()\n        self.conv1 = nn.Conv1d(300, 128, kernel_size=5, padding=2)\n        self.conv2 = nn.Conv1d(128, 64, kernel_size=2, padding=1)\n        self.conv2_drop = nn.Dropout()\n        self.fc1 = nn.Linear(64, 20)\n        self.fc2 = nn.Linear(20, 2)\n\n    def forward(self, x):\n        x = F.relu(F.avg_pool1d(self.conv1(x), 2, padding=1))\n        x = F.relu(F.avg_pool1d(self.conv2_drop(self.conv2(x)), 2))\n        x = x.view(1, -1) # bonus fix, Linear needs (batch_size, in_features) and not (in_features, batch_size) as input.\n        x = F.relu(self.fc1(x))\n        x = F.dropout(x, training=self.training)\n        return self.fc2(x)\n\nif __name__ == '__main__':\n\n    t = Variable(torch.randn((1, 300, 1))).double() # t is a double tensor\n    model = Net()\n    model.double() # this will make sure that conv1d will process double tensor\n    out = model(t)\n
\n", + "system": "" + }, + { + "instruction": "Batch Kronecker product of tensors", + "input": "", + "output": "
x = torch.randn(100,10,10) \ny = torch.randn(100,2,2)\nz = torch.einsum('iab,icd->iacbd', x, y).view((100, 20, 20))\n
\n

which is verified below as equivalent to taking Kronecker product over the last two dimensions:

\n
for xx, yy, zz in zip(x, y, z):\n    assert zz.allclose(torch.kron(xx, yy))\n
\n", + "system": "" + }, + { + "instruction": "pytorch nn.Sequential(*list) TypeError: list is not a Module subclass", + "input": "", + "output": "

Please provide the list of layers that you have created; are you sure you haven't made an error in it? Try checking whether your list is actually <code>[..]</code> and not <code>[[..]]</code>. The other thing I noticed is that you use <code>list</code> as a variable name, which isn't a good idea - <code>list</code> is a Python built-in name.

\n\n

I tried writing a sample code of unpacking a list and it works fine for me.

\n\n
import torch\nimport torch.nn as nn\n\nnet = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))\n\nlayers = [nn.Linear(2, 2), nn.Linear(2, 2)]\nnet = nn.Sequential(*layers)\nprint(net)\n
\n\n

This ran without any error, and the result was:

\n\n
Sequential(\n  (0): Linear(in_features=2, out_features=2, bias=True)\n  (1): Linear(in_features=2, out_features=2, bias=True)\n)\n
\n\n

Hope this helps. :)

\n", + "system": "" + }, + { + "instruction": ""C:\\torch\\lib\\fbgemm.dll" or one of its dependencies", + "input": "", + "output": "

I was able to resolve this issue and it was related to a missing dependency for:

\n

libomp140.x86_64.dll

\n

I was setting up a clean Windows 11 Pro system and wanted to get the latest and greatest Torch with Cuda 12.4 on there for development. Until I got exactly the error you are mentioning. I tried installing and reinstalling different versions of the C++ redistributables to no avail. I also installed Visual Studio Pro 2022. Nothing would get me that file.

\n

Remedy: Get a copy of libomp140.x86_64.dll and put it in C:\\Windows\\System32

\n

After I did that, everything started working. For some reason, installing the newest VS and/or C++ redistributables will no longer put that file in there. I got mine from a (slightly) older install that had Visual Studio Pro on it. So, you may need to get it yourself until there is some kind of official fix. Disclaimer: I have no idea if this fix is safe or causes conflicts. Please note those here if you have issues.

\n", + "system": "" + }, + { + "instruction": "libtorch_cpu.so: undefined symbol: iJIT_IsProfilingActive", + "input": "", + "output": "

Downgrade mkl package to an earlier version. It should work.

\n", + "system": "" + }, + { + "instruction": "Cannot run MetaAI's llama2 due to "No module named 'fire'" error", + "input": "", + "output": "

@cybernetic, did you run the following command line in the root of the llama repository?

\n

How to correct the issue?

\n
    \n
  1. run the command pip install -e .

    \n
  2. \n
  3. run the torchrun command in the root directory

    \n
  4. \n
\n
torchrun --nproc_per_node 1 example_chat_completion.py \\\n    --ckpt_dir llama-2-7b-chat/ \\\n    --tokenizer_path tokenizer.model \\\n    --max_seq_len 512 --max_batch_size 6\n
\n", + "system": "" + }, + { + "instruction": "Custom optimizer in PyTorch or TensorFlow 2.12.0", + "input": "", + "output": "

The pytorch optimizers are complicated because they are general and optimized for performance. Implementing your own is relatively straightforward if you don't need it to be particularly robust. For example, a very simple implementation of gradient descent with momentum might look something like this.

\n
import torch\nfrom torch.optim import Optimizer\n\nclass SimpleGD(Optimizer):\n    def __init__(self, params, lr, momentum=0.0):\n        defaults = dict(lr=lr, momentum=momentum)\n        super().__init__(params, defaults)\n\n    @torch.no_grad()\n    def step(self, closure=None):\n        loss = None\n        if closure is not None:\n            loss = closure()\n        for group in self.param_groups:\n            for param in group['params']:\n                grad = param.grad + group['momentum'] * self.state[param].get('momentum_buffer', 0)\n                self.state[param]['momentum_buffer'] = grad\n                param -= group['lr'] * grad\n        return loss\n
\n

Which should match the results of PyTorch's SGD with the same lr and momentum (and everything else defaulted).

\n

For your case, you would need to compute the sign of the gradients and cache it at each step, similar to the way the momentum_buffer is cached in the above example.
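A minimal sketch of that idea (the optimizer name and the state key are hypothetical, and the update rule is plain gradient descent purely for illustration):

```python
import torch
from torch.optim import Optimizer

class SignCacheGD(Optimizer):
    """Hypothetical sketch: gradient descent that also caches the
    sign of each parameter's gradient in the optimizer state."""
    def __init__(self, params, lr):
        super().__init__(params, dict(lr=lr))

    @torch.no_grad()
    def step(self, closure=None):
        loss = closure() if closure is not None else None
        for group in self.param_groups:
            for param in group['params']:
                if param.grad is None:
                    continue
                # cache the gradient sign for use on a later step
                self.state[param]['sign_buffer'] = torch.sign(param.grad)
                param -= group['lr'] * param.grad
        return loss
```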

\n", + "system": "" + }, + { + "instruction": "self() as function within class, what does it do?", + "input": "", + "output": "

Meh, this is pytorch. Remember that you can use the model like this: <code>model(x)</code> to do <code>model.forward(x)</code>. So inside the model class, <code>self(x)</code> will be basically the same as doing <code>self.forward(x)</code>.
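A minimal sketch (the module name here is made up for illustration):

```python
import torch
import torch.nn as nn

class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(3, 2)

    def forward(self, x):
        return self.fc(x)

m = Tiny()
x = torch.randn(1, 3)
# calling the module (or self, inside the class) dispatches to forward()
assert torch.equal(m(x), m.forward(x))
```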

\n", + "system": "" + }, + { + "instruction": "AttributeError: module 'os' has no attribute 'add_dll_directory'", + "input": "", + "output": "

The function os.add_dll_directory was introduced in Python 3.8. It's not available in your version, Python 3.7.

\n

To use that function, you'll need to upgrade to Python 3.8 or later.

\n

Since it's being used from within PyTorch, it's possible that this is due to an incompatible version of PyTorch being used. Downgrading your PyTorch version may solve that.

\n", + "system": "" + }, + { + "instruction": "why is there tensor.dtype and tensor.type()?", + "input": "", + "output": "

The <code>Tensor.type(dtype=None)</code> function can cast the tensor to the <code>dtype</code> given as a parameter. If the <code>dtype</code> parameter is not provided, it just returns the type of the tensor (as a string). So, the difference is the added functionality of casting.

\n

Also, if the casting is performed to a new type, the function returns a copy of the tensor.
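A quick sketch of the difference:

```python
import torch

t = torch.tensor([1, 2, 3])
print(t.dtype)   # torch.int64
print(t.type())  # 'torch.LongTensor': without an argument it only reports the type
t2 = t.type(torch.float32)  # with a dtype argument it casts and returns a copy
```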

\n", + "system": "" + }, + { + "instruction": "Getting "Unable to load weights from pytorch checkpoint file" when loading model from transformers", + "input": "", + "output": "

I managed to find the solution: all I needed to do was delete the model's folder (69659a277424eb381574e4952f3b3fa3440a419b in my case) from the Hugging Face TrOCR cache in:

\n
\n

/home/user/.cache/huggingface/hub/models--microsoft--trocr-base-handwritten/snapshots/69659a277424eb381574e4952f3b3fa3440a419b

\n
\n

and when I restarted the process the model was downloaded again with no problems.

\n", + "system": "" + }, + { + "instruction": "Projection onto unit simplex using gradient decent in Pytorch", + "input": "", + "output": "

The error comes from the derivation of the formula:

\n

\"enter

\n

from:

\n

\"enter

\n

If you develop the expression

\n

\"enter

\n

you will realize that it should be

\n

\"enter

\n

instead of

\n

\"enter

\n

In short, this error comes from forgetting the 1/2 factor when expanding the norm. Once you make that change, everything works as intended:

\n
import torch\nimport torchvision\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ntorch.manual_seed(1)\nx = torch.randn(10)\n\nx_list = x.tolist()\n\nnu_0 = torch.tensor(0., requires_grad = True)\nnu = nu_0\noptimizer = torch.optim.SGD([nu], lr=1e-1)\n\nnu_old = torch.tensor(float('inf'))\nsteps = 100\neps = 1e-6\ni = 1\nwhile torch.norm(nu_old-nu) > eps:\n  nu_old = nu.clone()\n  optimizer.zero_grad()\n  f_of_nu = -(0.5*torch.norm(-torch.relu(-(x-nu)))**2 + nu*(torch.sum(x) -1) -0.5*x.size()[0]*nu**2)\n  f_of_nu.backward()\n  optimizer.step()\n  print(f'At step {i+1:2} the function value is {f_of_nu.item(): 1.4f} and nu={nu: 0.4f}' )\n  i += 1\n\ny_star = torch.relu((x-nu)).cpu().detach()\nprint(y_star)\nprint(list(map(lambda x: round(x, 4), y_star.tolist())))\nprint(y_star.sum())\n
\n

And the output gives:

\n
...\nAt step 25 the function value is -2.0721 and nu= 0.2328\ntensor(0.2328, requires_grad=True)\ntensor([0.4285, 0.0341, 0.0000, 0.3885, 0.0000, 0.0000, 0.0000, 0.1489, 0.0000,\n        0.0000])\n[0.4285, 0.0341, 0.0, 0.3885, 0.0, 0.0, 0.0, 0.1489, 0.0, 0.0]\ntensor(1.0000)\n
\n", + "system": "" + }, + { + "instruction": "How to fine-tune gpt-j using Huggingface Trainer", + "input": "", + "output": "

I found what appears to work, though now I'm running low on memory and working through ways of handling it.

\n

The data_collator parameter seems to take care of the exact issue that I was having.

\n
data_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)\n\ntrainer = Trainer(\n    model=model,\n    args=training_args,\n    train_dataset=small_train_dataset,\n    eval_dataset=small_eval_dataset,\n    compute_metrics=compute_metrics,\n    data_collator=data_collator,\n)\n
\n", + "system": "" + }, + { + "instruction": "Shuffling two 2D tensors in PyTorch and maintaining same order correlation", + "input": "", + "output": "

You can use the function torch.randperm to get a set of indices that act as a random permutation. The following is a small example of getting a random permutation, then applying it to both the a and b tensors:

\n
indices = torch.randperm(a.size()[0])\na=a[indices]\nb=b[indices]\n
\n", + "system": "" + }, + { + "instruction": "Stratified train/val/test split in Pytorch", + "input": "", + "output": "

I got the same issue. The answer is that you have a mismatch between your dataset and your subset. When you build your second split, you use <code>train_dataset</code> to build the indices, but at the end you use <code>dataset</code> for the <code>train_dataset</code> Subset. That's an error: it should have been <code>train_dataset</code> from the first split, as you built your indices on it and no longer on <code>dataset</code>.

\n

I've also made a small optimization so as not to rebuild the labels, since you already get them from the first split.\nIt should be:

\n
train_indices, test_indices, train_label, _ = train_test_split(\n    range(len(dataset)),\n    dataset.targets,\n    stratify=dataset.targets,\n    test_size=0.1,\n    random_state=1\n)\n\ntrain_dataset = Subset(dataset, train_indices)\ntest_dataset = Subset(dataset, test_indices)\n\ntrain_indices, val_indices, _, _ = train_test_split(\n    range(len(train_dataset)),\n    train_label,\n    stratify=train_label,\n    test_size=0.111,\n    random_state=1\n)\n\n# build the val subset first, from the same train_dataset the indices refer to\nval_dataset = Subset(train_dataset, val_indices)\ntrain_dataset = Subset(train_dataset, train_indices)\n
\n", + "system": "" + }, + { + "instruction": "Create custom connection/ non-fully connected layers in Pytorch", + "input": "", + "output": "

As @Jan said here, you can overload nn.Linear and provide a point-wise mask to mask the interaction you want to avoid having. Remember that a fully connected layer is merely a matrix multiplication with an optional additive bias.

\n

Looking at its source code, we can do:

\n
class MaskedLinear(nn.Linear):\n    def __init__(self, *args, mask, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.mask = mask\n\n    def forward(self, input):\n        # mask the weight matrix so masked connections are removed;\n        # self.weight has shape (out_features, in_features), hence the transpose\n        return F.linear(input, self.weight*self.mask.T, self.bias)\n
\n

Having F defined as torch.nn.functional

\n

Considering the constraint you have given to the second layer:

\n
\n

the first two neurons of the hidden layer should be connected to the first neuron of the output layer

\n
\n

It seems you are looking for this pattern:

\n
tensor([[1., 0., 0.],\n        [1., 0., 0.],\n        [0., 1., 0.],\n        [0., 1., 0.],\n        [0., 0., 1.],\n        [0., 0., 1.]])\n
\n

Which can be obtained using torch.block_diag:

\n
mask = torch.block_diag(*[torch.ones(2,1),]*output_size)\n
\n

Having this, you can define your network as:

\n
net = nn.Sequential(nn.Linear(input_size, hidden_size),\n                    MaskedLinear(hidden_size, output_size, mask))\n
\n
\n

If you feel like it, you can even implement it inside the custom layer:

\n
class LocalLinear(nn.Linear):\n    def __init__(self, *args, kernel_size=2, **kwargs):\n        super().__init__(*args, **kwargs)\n\n        assert self.in_features == kernel_size*self.out_features\n        self.mask = torch.block_diag(*[torch.ones(kernel_size,1),]*self.out_features)\n\n    def forward(self, input):\n        # self.weight has shape (out_features, in_features), hence the transpose\n        return F.linear(input, self.weight*self.mask.T, self.bias)\n
\n

And defining it like so:

\n
net = nn.Sequential(nn.Linear(input_size, hidden_size),\n                    LocalLinear(hidden_size, output_size))\n
\n", + "system": "" + }, + { + "instruction": "Difference between forward and train_step in Pytorch Lightning?", + "input": "", + "output": "
\n

I am confused about the difference between the def forward () and the\ndef training_step() methods.

\n
\n

Quoting from the docs:

\n

"In Lightning we suggest separating training from inference. The training_step defines the full training loop. We encourage users to use the forward to define inference actions."

\n

So <code>forward()</code> defines your prediction/inference actions. It doesn't even need to be part of your <code>training_step</code>, in which you would define your whole training loop. Nonetheless, you can choose to have it in your <code>training_step</code> if you want it that way. An example where <code>forward()</code> isn't part of the <code>training_step</code>:

\n
    def forward(self, x):\n        # in lightning, forward defines the prediction/inference actions\n        embedding = self.encoder(x)\n        return embedding\n\n    def training_step(self, batch, batch_idx):\n        # training_step defined the train loop.\n        # in this case it is independent of forward\n        x, y = batch\n        x = x.view(x.size(0), -1)\n        z = self.encoder(x)\n        x_hat = self.decoder(z)\n        loss = F.mse_loss(x_hat, x)\n        # Logging to TensorBoard by default\n        self.log("train_loss", loss)\n        return loss\n
\n
\n

the model is not called in the training step, only in forward. But\nforward is also not called in the training step

\n
\n

The fact that <code>forward()</code> is not called in your <code>training_step</code> is because <code>self(x)</code> does it for you: the module's <code>__call__</code> dispatches to <code>forward()</code>. You can alternatively call <code>forward()</code> explicitly instead of calling the module itself.

\n
\n

I am worried that given the forward method is not being called, the\nmodel is somehow not being implemented?

\n
\n

As long as you see your metrics logged with <code>self.log</code> move in the right direction, you will know that your model gets called correctly and is learning.

\n", + "system": "" + }, + { + "instruction": "Understanding gradient computation using backward() in PyTorch", + "input": "", + "output": "

Whenever you are using <code>value.backward()</code>, you compute the derivative of <code>value</code> (in your case <code>value == x</code>) with respect to all your parameters (in your case, that is just <code>x</code>). Roughly speaking, this means all tensors that are somehow involved in your computation and have <code>requires_grad=True</code>. So this means

\n
x.grad = dx / dx = 1\n
\n

To add to that: with automatic differentiation you only ever compute with "constant" values: all your functions or networks are evaluated at a concrete point. And the gradient you get is the gradient evaluated at that same point. There is no symbolic computation taking place. All the information needed for the computation of the gradient is encoded in the computation graph.
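A minimal sketch of the case discussed above:

```python
import torch

x = torch.tensor(2.0, requires_grad=True)
value = x            # value is just x itself
value.backward()     # computes d(value)/dx, evaluated at the concrete point x = 2.0
print(x.grad)        # tensor(1.)
```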

\n", + "system": "" + }, + { + "instruction": "how to only show progress bar of the master node of tqdm in distributed training?", + "input": "", + "output": "

You can switch it off by setting disable=True parameter for non-master processes, for example:

\n
# ...\n    \nmaster_process = ddp_rank == 0\n\n# ...\n    \nfor epoch in range(epoch_num):\n  with tqdm(dataloader, disable=not master_process) as pbar:    \n    # ...\n
\n", + "system": "" + }, + { + "instruction": "What is the different for torchvision.models.resnet and torch.hub.load?", + "input": "", + "output": "

The only difference that there is between your models if you load them in that way it's the number of layers, since you're loading resnet18 with Torch Hub and resnet50 with Models (thus, also the pretrained weights). They behave differently, you can see more about that in this paper.

\n

Torch Hub also lets you publish pretrained models in your repository, but since you're loading it from 'pytorch/vision:v0.10.0' (which is the same repository from which Models is loading the neural networks), there should be no difference between:

\n
model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)\n
\n

and

\n
model = models.resnet18(pretrained=True)\n
\n", + "system": "" + }, + { + "instruction": "How to compute Cross Entropy Loss for sequences", + "input": "", + "output": "

The documentation page of nn.CrossEntropyLoss clearly states:

\n
\n

Input: shape (C), (N, C) or (N, C, d_1, d_2, ..., d_K) with K >= 1 in the case of K-dimensional loss.
\nTarget: If containing class indices, shape (), (N) or (N, d_1, d_2, ..., d_K) with K >= 1 in the case of K-dimensional loss, where each value should be in [0, C). If containing class probabilities, same shape as the input, and each value should be between [0, 1].

\n
\n

Just to be crystal clear, "input" refers to the output prediction of your model,\nwhile the "target" is the label tensor. In a nutshell, the target must have one less dimension than the input. The dimension missing from the target is the one that, in the input, holds the logit value for each class. Usually, we say the target is in the dense format: it only contains the class indices corresponding to the true labels.

\n

The example you give corresponds to the use case of:

\n
#input = (batch_size, sequence_length, numb_classes)\n#target = (batch_size, sequence_length)\n
\n

Which is the case of #input = (N, C, d_1) and #target = (N, d_1), i.e., you need to permute the axes, or transpose two axes, of your input tensor so that it gets a shape of (batch_size, numb_classes, sequence_length), which is (N, C, d_1). You can do so with either torch.Tensor.transpose or torch.Tensor.permute:

\n
>>> input.permute(0,2,1)\n
\n

or

\n
>>> input.transpose(1,2)\n
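A complete sketch of the above with hypothetical sizes:

```python
import torch
import torch.nn as nn

batch_size, seq_len, num_classes = 4, 7, 5  # hypothetical sizes
logits = torch.randn(batch_size, seq_len, num_classes)
target = torch.randint(0, num_classes, (batch_size, seq_len))

# move the class dimension to position 1: (N, d_1, C) -> (N, C, d_1)
loss = nn.CrossEntropyLoss()(logits.transpose(1, 2), target)
```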
\n", + "system": "" + }, + { + "instruction": "ImportError: cannot import name 'x' from 'y'", + "input": "", + "output": "

Looking at your error, it appears <code>get_num_classes</code> doesn't exist anymore. I verified this by looking at their GitHub and docs.

\n

It was removed after this commit.

\n", + "system": "" + }, + { + "instruction": "How to find the size of a deep learning model?", + "input": "", + "output": "

You can calculate the number of elements of all parameters and buffers.\nThen multiply them by their element size and you will have the size of all parameters and buffers.

\n
model = models.resnet18()\nparam_size = 0\nbuffer_size = 0\nfor param in model.parameters():\n    param_size += param.nelement() * param.element_size()\n\nfor buffer in model.buffers():\n    buffer_size += buffer.nelement() * buffer.element_size()\n\nsize_all_mb = (param_size + buffer_size) / 1024**2\nprint('Size: {:.3f} MB'.format(size_all_mb))\n
\n

And it will print:

\n
\n

Size: 361.209 MB

\n
\n", + "system": "" + }, + { + "instruction": "numpy how to speed up tanh?", + "input": "", + "output": "

You could try with numexpr as follows:

\n
pip install numexpr\n
\n

Then:

\n
import numexpr as ne\nimport numpy as np\n\ndata=np.random.randn(128,64,32).astype(np.float32)\n\nresne = ne.evaluate("tanh(data)")\nresnp = np.tanh(data)\n
\n

Then check all close:

\n
In [16]: np.allclose(resne,resnp)\nOut[16]: True\n
\n

And check timings:

\n
In [14]: %timeit res = ne.evaluate("tanh(data)")\n311 \u00b5s \u00b1 1.26 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each)\n\nIn [15]: %timeit np.tanh(data)\n1.85 ms \u00b1 7.43 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each)\n
\n", + "system": "" + }, + { + "instruction": "How pytorch uses an implicit for loop instead of an explicit for loop\uff1f", + "input": "", + "output": "

You could take advantage of the following fact:

\n

Given a random variable Z ~ Uniform(0,1) and real numbers a and b where a < b, if X = Z * (b - a) + a then X ~ Uniform(a,b). Particularly relevant is the case where a = -b which allows us to simplify to X = (2 * Z - 1) * b.

\n

Putting this into practice in PyTorch looks something like this

\n
z = torch.rand(128, 3, 32, 32)\ny = torch.arange(128).reshape(-1, 1, 1, 1)\nx = (2 * z - 1) * y\n
\n

where torch.rand returns a new random tensor sampled from Uniform(0,1). We also had to insert some extra singleton dimensions into y using Tensor.reshape to give it shape [128,1,1,1] so that broadcasting works the way we want (see here if you're unfamiliar with broadcast semantics in NumPy and PyTorch; they both use the same rules). Also note that we use torch.arange instead of np.arange, since we generally want to stay within the PyTorch framework when performing mathematical operations on PyTorch tensors (PyTorch tensors don't usually play nicely with NumPy arrays).

\n", + "system": "" + }, + { + "instruction": "How can I determine validation loss for faster RCNN (PyTorch)?", + "input": "", + "output": "

It turns out that none of the stages of torchvision's Faster R-CNN return losses when model.eval() is set. However, you can manually reuse the forward code to compute the losses in evaluation mode:

\n
from typing import Tuple, List, Dict, Optional\nimport torch\nfrom torch import Tensor\nfrom collections import OrderedDict\nfrom torchvision.models.detection.roi_heads import fastrcnn_loss\nfrom torchvision.models.detection.rpn import concat_box_prediction_layers\ndef eval_forward(model, images, targets):\n    # type: (List[Tensor], Optional[List[Dict[str, Tensor]]]) -> Tuple[Dict[str, Tensor], List[Dict[str, Tensor]]]\n    """\n    Args:\n        images (list[Tensor]): images to be processed\n        targets (list[Dict[str, Tensor]]): ground-truth boxes present in the image (optional)\n    Returns:\n        result (list[BoxList] or dict[Tensor]): the output from the model.\n            It returns list[BoxList] contains additional fields\n            like `scores`, `labels` and `mask` (for Mask R-CNN models).\n    """\n    model.eval()\n\n    original_image_sizes: List[Tuple[int, int]] = []\n    for img in images:\n        val = img.shape[-2:]\n        assert len(val) == 2\n        original_image_sizes.append((val[0], val[1]))\n\n    images, targets = model.transform(images, targets)\n\n    # Check for degenerate boxes\n    # TODO: Move this to a function\n    if targets is not None:\n        for target_idx, target in enumerate(targets):\n            boxes = target["boxes"]\n            degenerate_boxes = boxes[:, 2:] <= boxes[:, :2]\n            if degenerate_boxes.any():\n                # print the first degenerate box\n                bb_idx = torch.where(degenerate_boxes.any(dim=1))[0][0]\n                degen_bb: List[float] = boxes[bb_idx].tolist()\n                raise ValueError(\n                    "All bounding boxes should have positive height and width."\n                    f" Found invalid box {degen_bb} for target at index {target_idx}."\n                )\n\n    features = model.backbone(images.tensors)\n    if isinstance(features, torch.Tensor):\n        features = OrderedDict([("0", features)])\n    model.rpn.training=True\n    
#model.roi_heads.training=True\n\n\n    #####proposals, proposal_losses = model.rpn(images, features, targets)\n    features_rpn = list(features.values())\n    objectness, pred_bbox_deltas = model.rpn.head(features_rpn)\n    anchors = model.rpn.anchor_generator(images, features_rpn)\n\n    num_images = len(anchors)\n    num_anchors_per_level_shape_tensors = [o[0].shape for o in objectness]\n    num_anchors_per_level = [s[0] * s[1] * s[2] for s in num_anchors_per_level_shape_tensors]\n    objectness, pred_bbox_deltas = concat_box_prediction_layers(objectness, pred_bbox_deltas)\n    # apply pred_bbox_deltas to anchors to obtain the decoded proposals\n    # note that we detach the deltas because Faster R-CNN do not backprop through\n    # the proposals\n    proposals = model.rpn.box_coder.decode(pred_bbox_deltas.detach(), anchors)\n    proposals = proposals.view(num_images, -1, 4)\n    proposals, scores = model.rpn.filter_proposals(proposals, objectness, images.image_sizes, num_anchors_per_level)\n\n    proposal_losses = {}\n    assert targets is not None\n    labels, matched_gt_boxes = model.rpn.assign_targets_to_anchors(anchors, targets)\n    regression_targets = model.rpn.box_coder.encode(matched_gt_boxes, anchors)\n    loss_objectness, loss_rpn_box_reg = model.rpn.compute_loss(\n        objectness, pred_bbox_deltas, labels, regression_targets\n    )\n    proposal_losses = {\n        "loss_objectness": loss_objectness,\n        "loss_rpn_box_reg": loss_rpn_box_reg,\n    }\n\n    #####detections, detector_losses = model.roi_heads(features, proposals, images.image_sizes, targets)\n    image_shapes = images.image_sizes\n    proposals, matched_idxs, labels, regression_targets = model.roi_heads.select_training_samples(proposals, targets)\n    box_features = model.roi_heads.box_roi_pool(features, proposals, image_shapes)\n    box_features = model.roi_heads.box_head(box_features)\n    class_logits, box_regression = model.roi_heads.box_predictor(box_features)\n\n    
result: List[Dict[str, torch.Tensor]] = []\n    detector_losses = {}\n    loss_classifier, loss_box_reg = fastrcnn_loss(class_logits, box_regression, labels, regression_targets)\n    detector_losses = {"loss_classifier": loss_classifier, "loss_box_reg": loss_box_reg}\n    boxes, scores, labels = model.roi_heads.postprocess_detections(class_logits, box_regression, proposals, image_shapes)\n    num_images = len(boxes)\n    for i in range(num_images):\n        result.append(\n            {\n                "boxes": boxes[i],\n                "labels": labels[i],\n                "scores": scores[i],\n            }\n        )\n    detections = result\n    detections = model.transform.postprocess(detections, images.image_sizes, original_image_sizes)  # type: ignore[operator]\n    model.rpn.training=False\n    model.roi_heads.training=False\n    losses = {}\n    losses.update(detector_losses)\n    losses.update(proposal_losses)\n    return losses, detections\n
\n

Testing this code gives me:

\n
import torchvision\nfrom torchvision.models.detection.faster_rcnn import FastRCNNPredictor\n\n# load a model pre-trained on COCO\nmodel = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)\n\n# replace the classifier with a new one, that has\n# num_classes which is user-defined\nnum_classes = 2  # 1 class (person) + background\n# get number of input features for the classifier\nin_features = model.roi_heads.box_predictor.cls_score.in_features\n# replace the pre-trained head with a new one\nmodel.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)\nlosses, detections = eval_forward(model,torch.randn([1,3,300,300]),[{'boxes':torch.tensor([[100,100,200,200]]),'labels':torch.tensor([0])}])\n\n{'loss_classifier': tensor(0.6594, grad_fn=<NllLossBackward0>),\n'loss_box_reg': tensor(0., grad_fn=<DivBackward0>),\n 'loss_objectness': tensor(0.5108, grad_fn=<BinaryCrossEntropyWithLogitsBackward0>),\n 'loss_rpn_box_reg': tensor(0.0160, grad_fn=<DivBackward0>)}\n
\n", + "system": "" + }, + { + "instruction": "Finding patterns in time series with PyTorch", + "input": "", + "output": "
\n

This is just a copy-paste from an RNN demo. Here is my first issue. Is an RNN the right choice?

\n
\n

In theory, which model you choose matters less than how you formulate your problem.

\n

But in your case, the most obvious limitation you're going to face is your sequence length: 1500. RNNs store information across steps and typically run into trouble over long sequences due to vanishing or exploding gradients.

\n

LSTM networks were developed to circumvent this limitation with a memory cell, but even then, for long sequences they are still limited by the amount of information the cell can store.

\n

You could also try a CNN and treat the sequence as you would an image.

\n
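As a sketch of that CNN idea (all layer sizes here are illustrative assumptions, not tuned values), a 1D convolutional classifier mapping a (1500, 1) sequence to 7 classes could look like:

```python
import torch
import torch.nn as nn

class SeqCNN(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),       # collapse the time dimension
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):                   # x: (batch, 1, 1500)
        h = self.features(x).squeeze(-1)    # (batch, 32)
        return self.classifier(h)           # raw logits, (batch, 7)

model = SeqCNN()
logits = model(torch.randn(4, 1, 1500))
# nn.CrossEntropyLoss applies log-softmax internally, so the model outputs raw logits
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 2, 3]))
print(logits.shape)  # torch.Size([4, 7])
```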
\n

Are there existing examples of this kind of problem?

\n
\n

I don't know of any, but I have some suggestions: if I understood your problem correctly, you're going from a (1500, 1) input to a (7, 1) output, where all 7 positions are 0 except for the one corresponding to the class, which is 1.

\n

I don't see any activation function. Usually, when dealing with multi-class problems, you don't use the raw output of the dense layer to compute the loss; you first apply a normalizing function like softmax, and then you can compute the loss.

\n", + "system": "" + }, + { + "instruction": "Why should I use a 2**N value and how do I choose the right one?", + "input": "", + "output": "

The reason lies in how the hardware carries out the computation. In deep learning, matrix operations are the main computations and the main source of floating-point operations (FLOPs).

\n

Single Instruction Multiple Data (SIMD) operations in CPUs happen in batches whose sizes are powers of 2. Take a look if you are interested:

\n

https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37631.pdf

\n

And for GPUs:

\n

https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html

\n
\n

Memory allocated through the CUDA Runtime API, such as via\ncudaMalloc(), is guaranteed to be aligned to at least 256 bytes.\nTherefore, choosing sensible thread block sizes, such as multiples of\nthe warp size (i.e., 32 on current GPUs), facilitates memory accesses\nby warps that are properly aligned. (Consider what would happen to the\nmemory addresses accessed by the second, third, and subsequent thread\nblocks if the thread block size was not a multiple of warp size, for\nexample.)

\n
\n

This means that any multiple of 32 will optimize memory access, and thus processing speed, while you are using a GPU.

\n
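As a small illustration (a hypothetical helper, not part of any library), rounding a batch size or layer width up to the next multiple of 32 looks like this:

```python
def round_up_to_multiple(n: int, multiple: int = 32) -> int:
    # round n up to the nearest multiple (e.g. of the warp size, 32)
    return ((n + multiple - 1) // multiple) * multiple

print(round_up_to_multiple(100))  # 128
print(round_up_to_multiple(64))   # 64
```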

As for the right value, a pyramidal shape usually works better: as you go deeper, the neural network tends to create internal representations of the transformed data in a hierarchical, and thus pyramidal, shape. So a good guess is to use a decreasing number of neurons in each layer as you get closer to the output, e.g.:

\n
self.flatten = nn.Flatten()\nself.linear_relu_stack = nn.Sequential(\n    nn.Linear(28*28, 512),\n    nn.ReLU(),\n    nn.Linear(512, 128),\n    nn.ReLU(),\n    nn.Linear(128, 10),\n    nn.ReLU()\n    )\n
\n

But there is no general rule, and you can find entire fields of study (like Neural Architecture Search) on how to find optimal hyper-parameters for neural networks.

\n

You can take a look here for more in-depth information:

\n

https://arxiv.org/pdf/1608.04064.pdf

\n", + "system": "" + }, + { + "instruction": "Pytorch equivalent of tensorflow keras StringLookup?", + "input": "", + "output": "

You can use the torchnlp package:

\n
pip install pytorch-nlp\n
\n
from torchnlp.encoders import LabelEncoder\n\ndata = ["a", "c", "d", "e", "d"]\nencoder = LabelEncoder(data, reserved_labels=['unknown'], unknown_index=0)\n\nenl = encoder.batch_encode(data)\n\nprint(enl)\n
\n
tensor([1, 2, 3, 4, 3])\n
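If you prefer to avoid the extra dependency, the same mapping can be sketched in plain PyTorch (index 0 reserved for unknown tokens, mirroring the example above):

```python
import torch

data = ["a", "c", "d", "e", "d"]

# build the vocabulary in first-seen order; 0 is reserved for unknown tokens
vocab = {token: idx + 1 for idx, token in enumerate(dict.fromkeys(data))}

encoded = torch.tensor([vocab.get(token, 0) for token in data])
print(encoded)  # tensor([1, 2, 3, 4, 3])
```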
\n", + "system": "" + }, + { + "instruction": "Implementing custom learning rate scheduler in Pytorch?", + "input": "", + "output": "

Since this is a scheduler used in a popular paper (Attention is all you need), reasonably good implementations already exist online.

\n

You can grab a PyTorch implementation from this repository by @jadore801120.

\n

Once you have it, then simply

\n
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001, betas=(0.9, 0.98), eps=1e-9)\nsched = ScheduledOptim(optimizer, d_model=..., n_warmup_steps=...)\n
\n

Also make sure to invoke the scheduler at the right time:

\n
for i, batch in enumerate(dataloader):\n    sched.zero_grad()\n    ...\n    loss.backward()\n    sched.step_and_update_lr()\n
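If you would rather avoid the external dependency, the same warmup schedule from the paper can be sketched with torch.optim.lr_scheduler.LambdaLR (the d_model and warmup values below are illustrative):

```python
import torch

def noam_lr(d_model: int, n_warmup_steps: int):
    # lr factor from "Attention Is All You Need":
    # lrate = d_model^-0.5 * min(step^-0.5, step * warmup^-1.5)
    def factor(step: int) -> float:
        step = max(step, 1)  # avoid division by zero on the first call
        return (d_model ** -0.5) * min(step ** -0.5, step * n_warmup_steps ** -1.5)
    return factor

model = torch.nn.Linear(8, 8)
optimizer = torch.optim.Adam(model.parameters(), lr=1.0, betas=(0.9, 0.98), eps=1e-9)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=noam_lr(512, 4000))

lrs = []
for _ in range(10):
    optimizer.step()
    scheduler.step()
    lrs.append(optimizer.param_groups[0]["lr"])
# the learning rate increases linearly during the warmup phase
```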
\n", + "system": "" + }, + { + "instruction": "What is the Best way to define Adam Optimizer in PyTorch?", + "input": "", + "output": "

In the second method, different configurations are being provided to update weights and biases. This is being done using per-parameter options for the optimizer.

\n
optim = torch.optim.Adam(\n            [\n                {'params': get_parameters(model, bias=False)},\n                {'params': get_parameters(model, bias=True), 'lr': cfg['lr'] * 2, 'weight_decay': 0},\n            ],\n            lr=cfg['lr'],\n            weight_decay=cfg['weight_decay'])\n
\n

As per this, the learning rate for biases is 2 times that of weights, and their weight decay is 0.

\n
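Note that get_parameters is not a PyTorch built-in; a minimal sketch of what such a helper might look like (splitting parameters by whether their name ends in "bias") is:

```python
import torch.nn as nn

def get_parameters(model: nn.Module, bias: bool = False):
    # yield bias parameters when bias=True, and all other parameters otherwise
    for name, param in model.named_parameters():
        if name.endswith("bias") == bias:
            yield param

model = nn.Linear(4, 2)
weights = list(get_parameters(model, bias=False))
biases = list(get_parameters(model, bias=True))
print(len(weights), len(biases))  # 1 1
```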

Now, the reason this is done could be that the network is not learning properly otherwise. For more details, read: Why is the learning rate for the bias usually twice as large as the LR for the weights?

\n", + "system": "" + }, + { + "instruction": "Using Hugging-face transformer with arguments in pipeline", + "input": "", + "output": "

The max_length tokenization parameter is not supported by default (i.e. no padding to max_length is applied), but you can create your own class and override this behavior:

\n
from transformers import AutoTokenizer, AutoModel\nfrom transformers import FeatureExtractionPipeline\nfrom transformers.tokenization_utils import TruncationStrategy\n\ntokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")\nmodel = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")\n\ninputs = ['hello world']\n\nclass MyFeatureExtractionPipeline(FeatureExtractionPipeline):\n      def _parse_and_tokenize(\n        self, inputs, max_length, padding=True, add_special_tokens=True, truncation=TruncationStrategy.DO_NOT_TRUNCATE, **kwargs\n    ):\n        """\n        Parse arguments and tokenize\n        """\n        # Parse arguments\n        if getattr(self.tokenizer, "pad_token", None) is None:\n            padding = False\n        inputs = self.tokenizer(\n            inputs,\n            add_special_tokens=add_special_tokens,\n            return_tensors=self.framework,\n            padding=padding,\n            truncation=truncation,\n            max_length=max_length\n        )\n        return inputs\n\nmynlp = MyFeatureExtractionPipeline(model=model, tokenizer=tokenizer)\no = mynlp("hello world", max_length = 500, padding='max_length', truncation=True)\n
\n

Let us compare the size of the output:

\n
print(len(o))\nprint(len(o[0]))\nprint(len(o[0][0]))\n
\n

Output:

\n
1\n500\n768\n
\n

Please note that this will only work with transformers 4.10.x and earlier versions. The team is currently refactoring the pipeline classes, and future releases will require different adjustments (i.e. this will stop working as soon as the refactored pipelines are released).

\n", + "system": "" + }, + { + "instruction": "pytorch model evaluation slow when deployed on kubernetes", + "input": "", + "output": "

I had the same issue. Locally, my PyTorch model would return a prediction in 25 ms, but on Kubernetes it would take 5 seconds. The problem had to do with how many threads torch had available to use. I'm not 100% sure why this works, but reducing the number of threads sped up performance significantly.

\n

Set the following environment variable on your kubernetes pod.\nOMP_NUM_THREADS=1

\n
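If you would rather control this from code than via the environment, torch.set_num_threads has the same effect on intra-op parallelism:

```python
import torch

# limit intra-op parallelism to a single thread, mirroring OMP_NUM_THREADS=1
torch.set_num_threads(1)
print(torch.get_num_threads())  # 1
```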

After doing that, it performed on Kubernetes like it did running locally, at ~30 ms per call.

\n

These are my pod limits:

\n\n

I was led to discover this from this blog post: https://www.chunyangwen.com/blog/python/pytorch-slow-inference.html

\n", + "system": "" + }, + { + "instruction": "torch.masked_scatter result did not meet expectations", + "input": "", + "output": "

You are right, this is confusing and there is virtually no documentation.

\n

However, the way masked_scatter works (as you have discovered) is that the ith True in a row is given the ith value from the source, not the value corresponding to the position of the True.

\n
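A small demonstration of this ordering (values chosen for illustration):

```python
import torch

x = torch.tensor([[10., 20., 30.]])
y = torch.zeros(1, 3)
mask = torch.tensor([[False, True, True]])

# the i-th True receives the i-th source value, NOT the value at the True's position
out = y.masked_scatter(mask, x)
print(out)  # tensor([[ 0., 10., 20.]])
```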

Luckily what you are trying to do can easily be achieved using the normal indexing notation:

\n
>>> y[mask] = x[mask]\n>>> y\ntensor([[0.0000, 0.0000, 0.9004, 0.0000, 0.6004],\n        [0.5735, 0.9006, 0.6797, 0.0000, 0.0000]])\n
\n", + "system": "" + }, + { + "instruction": "CPU only pytorch is crashing with error AssertionError: Torch not compiled with CUDA enabled", + "input": "", + "output": "

You are using the CPU-only build of PyTorch, but your code has a statement like cr = nn.CrossEntropyLoss(weight=torch.tensor(classes_weights).cuda()), which tries to move the tensor to the GPU.

\n
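A device-agnostic pattern also avoids the problem entirely; a minimal sketch (the classes_weights values here are illustrative):

```python
import torch
import torch.nn as nn

# pick the GPU when available, fall back to CPU otherwise
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

classes_weights = [1.0, 2.0]  # illustrative values
cr = nn.CrossEntropyLoss(weight=torch.tensor(classes_weights).to(device))
```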

To fix it,\nremove all the .cuda() operations.

\n", + "system": "" + }, + { + "instruction": "How to prepare data for TpyTorch's 3d attn_mask argument in MultiHeadAttention", + "input": "", + "output": "

I had the same question until I got to the link posted by cokeSchlimpf. Thanks for sharing it.

\n

Overview:

\n

If we want to set a different mask = src_mask for each example/instance in a batch, then it's advised (here) to have the src_attention_mask in the shape (N * num_heads, T, S), where N is the batch size and num_heads is the number of heads in the MultiHeadAttention module. Additionally, T is the target sequence length and S is the source sequence length.

\n
Explanation of code at link:
\n

Say mask is of shape (N, T, S). Then with torch.repeat_interleave(mask, num_heads, dim=0), each mask instance (in total there are N instances) is repeated num_heads times in place, giving an array of shape (num_heads, T, S) per example. Doing this for all N masks, we finally get an array of shape:

\n
[\n    [num_heads, T, S] # for example 1 in the batch\n    [num_heads, T, S] # for example 2 in the batch\n    .\n    .\n    .\n    [num_heads, T, S] # for example N in the batch\n] = [N.num_heads, T, S] # after concatenating along dim=0\n\n
\n
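A quick shape check of this repetition (sizes match the snippet that follows):

```python
import torch

N, num_heads, T, S = 2, 8, 22, 22
mask = torch.zeros(N, T, S, dtype=torch.bool)

# repeat each of the N per-example masks num_heads times along dim 0
attn_mask = torch.repeat_interleave(mask, num_heads, dim=0)
print(attn_mask.shape)  # torch.Size([16, 22, 22])
```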

The following is a small snippet showing the implementation (with torch==1.12.1+cu102).

\n
import torch\nimport torch.nn as nn\n\nclass test(nn.Module):\n    def __init__(self):\n        super(test, self).__init__()\n        enc_layer = torch.nn.TransformerEncoderLayer(d_model=16, nhead=8, batch_first=True)\n        self.layer = torch.nn.TransformerEncoder(enc_layer, num_layers=1)\n\n    def forward(self, x, src_mask, key_mask):\n        return self.layer(x, mask=src_mask, src_key_padding_mask=key_mask)\n\nmod = test()\nmod.eval()\nout = mod(x=torch.randn(2, 22, 16), src_mask=torch.ones(8*2, 22, 22), key_mask=torch.ones(2, 22))\nprint(out.shape)\n
\n

Hope this helps!

\n", + "system": "" + }, + { + "instruction": "How to extract overlapping patches from a 3D volume and recreate the input shape from the patches?", + "input": "", + "output": "

To extract (overlapping) patches and to reconstruct the input shape, we can use torch.nn.functional.unfold and the inverse operation torch.nn.functional.fold. These methods only process 4D tensors (batched 2D images); however, you can apply them to one dimension at a time.

\n

Few notes:

\n
    \n
  1. This way requires the fold/unfold methods from PyTorch; unfortunately, I have yet to find a similar method in the TF API.

    \n
  2. We start with 2D, then 3D, then 4D, to show the incremental differences; you can extend this to arbitrarily many dimensions (probably by writing a loop instead of hardcoding each dimension like I did).

    \n
  3. We can extract patches in 2 ways; their output is the same. The methods are called extract_patches_Xd and extract_patches_Xds, where X is the number of dimensions. The latter uses torch.Tensor.unfold() and has fewer lines of code. (The output is the same, except it cannot use dilation.)

    \n
  4. The methods extract_patches_Xd and combine_patches_Xd are inverses of each other, and the combiner reverses the extractor's steps one by one.

    \n
  5. The lines of code are followed by a comment stating the dimensionality, such as (B, C, T, D, H, W). The following are used:

    \n
      \n
    1. B: Batch size
    2. C: Channels
    3. T: Time dimension
    4. D: Depth dimension
    5. H: Height dimension
    6. W: Width dimension
    7. x_dim_in: In the extraction method, this is the number of input pixels in dimension x. In the combining method, this is the number of sliding windows in dimension x.
    8. x_dim_out: In the extraction method, this is the number of sliding windows in dimension x. In the combining method, this is the number of output pixels in dimension x.
    \n
  6. I have a public notebook to try out the code.

    \n
  7. I have tried out basic 2D, 3D and 4D tensors, as shown below. However, my code is not infallible, and I appreciate feedback when it is tested on other inputs.

    \n
  8. The get_dim_blocks() method is the function given on the PyTorch docs website to compute the output shape of a convolutional layer.

    \n
  9. Note that if you have overlapping patches and you combine them, the overlapping elements will be summed. If you would like to get the initial input back, there is a way:

    \n
      \n
    1. Create a tensor of ones the same size as the patches with torch.ones_like(patches_tensor).
    2. Combine these ones-patches into a full image with the same output shape (this creates a counter for the overlapping elements).
    3. Divide the combined image by the combined ones; this reverses any double summation of elements.
    \n
\n
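The overlap-normalization steps above can be sketched in 2D with fold/unfold (a minimal example assuming unit stride so that patches overlap):

```python
import torch
import torch.nn.functional as F

x = torch.arange(16, dtype=torch.float).view(1, 1, 4, 4)
kernel, stride = 3, 1

patches = F.unfold(x, kernel_size=kernel, stride=stride)
# folding sums the overlapping elements
summed = F.fold(patches, output_size=(4, 4), kernel_size=kernel, stride=stride)

# count how many patches cover each pixel by folding patches of ones
ones = F.fold(F.unfold(torch.ones_like(x), kernel_size=kernel, stride=stride),
              output_size=(4, 4), kernel_size=kernel, stride=stride)

recovered = summed / ones  # divide away the double counting
print(torch.allclose(recovered, x))  # True
```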

First (2D):

\n

The torch.nn.functional.fold and torch.nn.functional.unfold methods can be used directly.

\n
import torch\n
\n
def extract_patches_2ds(x, kernel_size, padding=0, stride=1):\n    if isinstance(kernel_size, int):\n        kernel_size = (kernel_size, kernel_size)\n    if isinstance(padding, int):\n        padding = (padding, padding, padding, padding)\n    if isinstance(stride, int):\n        stride = (stride, stride)\n\n    channels = x.shape[1]\n\n    x = torch.nn.functional.pad(x, padding)\n    # (B, C, H, W)\n    x = x.unfold(2, kernel_size[0], stride[0]).unfold(3, kernel_size[1], stride[1])\n    # (B, C, h_dim_out, w_dim_out, kernel_size[0], kernel_size[1])\n    x = x.contiguous().view(-1, channels, kernel_size[0], kernel_size[1])\n    # (B * h_dim_out * w_dim_out, C, kernel_size[0], kernel_size[1])\n    return x\n\ndef extract_patches_2d(x, kernel_size, padding=0, stride=1, dilation=1):\n    if isinstance(kernel_size, int):\n        kernel_size = (kernel_size, kernel_size)\n    if isinstance(padding, int):\n        padding = (padding, padding)\n    if isinstance(stride, int):\n        stride = (stride, stride)\n    if isinstance(dilation, int):\n        dilation = (dilation, dilation)\n\n    def get_dim_blocks(dim_in, dim_kernel_size, dim_padding = 0, dim_stride = 1, dim_dilation = 1):\n        dim_out = (dim_in + 2 * dim_padding - dim_dilation * (dim_kernel_size - 1) - 1) // dim_stride + 1\n        return dim_out\n        \n    channels = x.shape[1]\n    h_dim_in = x.shape[2]\n    w_dim_in = x.shape[3]\n    h_dim_out = get_dim_blocks(h_dim_in, kernel_size[0], padding[0], stride[0], dilation[0])\n    w_dim_out = get_dim_blocks(w_dim_in, kernel_size[1], padding[1], stride[1], dilation[1])\n\n    # (B, C, H, W)\n    x = torch.nn.functional.unfold(x, kernel_size, padding=padding, stride=stride, dilation=dilation)\n    # (B, C * kernel_size[0] * kernel_size[1], h_dim_out * w_dim_out)\n    x = x.view(-1, channels, kernel_size[0], kernel_size[1], h_dim_out, w_dim_out)\n    # (B, C, kernel_size[0], kernel_size[1], h_dim_out, w_dim_out)\n    x = x.permute(0,1,4,5,2,3)\n    # (B, 
C, h_dim_out, w_dim_out, kernel_size[0], kernel_size[1])\n    x = x.contiguous().view(-1, channels, kernel_size[0], kernel_size[1])\n    # (B * h_dim_out * w_dim_out, C, kernel_size[0], kernel_size[1])\n    return x\n\n\ndef combine_patches_2d(x, kernel_size, output_shape, padding=0, stride=1, dilation=1):\n    if isinstance(kernel_size, int):\n        kernel_size = (kernel_size, kernel_size)\n    if isinstance(padding, int):\n        padding = (padding, padding)\n    if isinstance(stride, int):\n        stride = (stride, stride)\n    if isinstance(dilation, int):\n        dilation = (dilation, dilation)\n\n    def get_dim_blocks(dim_in, dim_kernel_size, dim_padding = 0, dim_stride = 1, dim_dilation = 1):\n        dim_out = (dim_in + 2 * dim_padding - dim_dilation * (dim_kernel_size - 1) - 1) // dim_stride + 1\n        return dim_out\n\n    channels = x.shape[1]\n    h_dim_out, w_dim_out = output_shape[2:]\n    h_dim_in = get_dim_blocks(h_dim_out, kernel_size[0], padding[0], stride[0], dilation[0])\n    w_dim_in = get_dim_blocks(w_dim_out, kernel_size[1], padding[1], stride[1], dilation[1])\n\n    # (B * h_dim_in * w_dim_in, C, kernel_size[0], kernel_size[1])\n    x = x.view(-1, channels, h_dim_in, w_dim_in, kernel_size[0], kernel_size[1])\n    # (B, C, h_dim_in, w_dim_in, kernel_size[0], kernel_size[1])\n    x = x.permute(0,1,4,5,2,3)\n    # (B, C, kernel_size[0], kernel_size[1], h_dim_in, w_dim_in)\n    x = x.contiguous().view(-1, channels * kernel_size[0] * kernel_size[1], h_dim_in * w_dim_in)\n    # (B, C * kernel_size[0] * kernel_size[1], h_dim_in * w_dim_in)\n    x = torch.nn.functional.fold(x, (h_dim_out, w_dim_out), kernel_size=(kernel_size[0], kernel_size[1]), padding=padding, stride=stride, dilation=dilation)\n    # (B, C, H, W)\n    return x\n\n\n\na = torch.arange(1, 65, dtype=torch.float).view(2,2,4,4)\nprint(a.shape)\nprint(a)\nb = extract_patches_2d(a, 2, padding=1, stride=2, dilation=1)\n# b = extract_patches_2ds(a, 2, padding=1, 
stride=2)\nprint(b.shape)\nprint(b)\nc = combine_patches_2d(b, 2, (2,2,4,4), padding=1, stride=2, dilation=1)\nprint(c.shape)\nprint(c)\nprint(torch.all(a==c))\n
\n

Output (2D)

\n
torch.Size([2, 2, 4, 4])\ntensor([[[[ 1.,  2.,  3.,  4.],\n          [ 5.,  6.,  7.,  8.],\n          [ 9., 10., 11., 12.],\n          [13., 14., 15., 16.]],\n\n         [[17., 18., 19., 20.],\n          [21., 22., 23., 24.],\n          [25., 26., 27., 28.],\n          [29., 30., 31., 32.]]],\n\n\n        [[[33., 34., 35., 36.],\n          [37., 38., 39., 40.],\n          [41., 42., 43., 44.],\n          [45., 46., 47., 48.]],\n\n         [[49., 50., 51., 52.],\n          [53., 54., 55., 56.],\n          [57., 58., 59., 60.],\n          [61., 62., 63., 64.]]]])\ntorch.Size([18, 2, 2, 2])\ntensor([[[[ 0.,  0.],\n          [ 0.,  1.]],\n\n         [[ 0.,  0.],\n          [ 2.,  3.]]],\n\n\n        [[[ 0.,  0.],\n          [ 4.,  0.]],\n\n         [[ 0.,  5.],\n          [ 0.,  9.]]],\n\n\n        [[[ 6.,  7.],\n          [10., 11.]],\n\n         [[ 8.,  0.],\n          [12.,  0.]]],\n\n\n        [[[ 0., 13.],\n          [ 0.,  0.]],\n\n         [[14., 15.],\n          [ 0.,  0.]]],\n\n\n        [[[16.,  0.],\n          [ 0.,  0.]],\n\n         [[ 0.,  0.],\n          [ 0., 17.]]],\n\n\n        [[[ 0.,  0.],\n          [18., 19.]],\n\n         [[ 0.,  0.],\n          [20.,  0.]]],\n\n\n        [[[ 0., 21.],\n          [ 0., 25.]],\n\n         [[22., 23.],\n          [26., 27.]]],\n\n\n        [[[24.,  0.],\n          [28.,  0.]],\n\n         [[ 0., 29.],\n          [ 0.,  0.]]],\n\n\n        [[[30., 31.],\n          [ 0.,  0.]],\n\n         [[32.,  0.],\n          [ 0.,  0.]]],\n\n\n        [[[ 0.,  0.],\n          [ 0., 33.]],\n\n         [[ 0.,  0.],\n          [34., 35.]]],\n\n\n        [[[ 0.,  0.],\n          [36.,  0.]],\n\n         [[ 0., 37.],\n          [ 0., 41.]]],\n\n\n        [[[38., 39.],\n          [42., 43.]],\n\n         [[40.,  0.],\n          [44.,  0.]]],\n\n\n        [[[ 0., 45.],\n          [ 0.,  0.]],\n\n         [[46., 47.],\n          [ 0.,  0.]]],\n\n\n        [[[48.,  0.],\n          [ 0.,  0.]],\n\n         [[ 0.,  0.],\n          [ 0., 
49.]]],\n\n\n        [[[ 0.,  0.],\n          [50., 51.]],\n\n         [[ 0.,  0.],\n          [52.,  0.]]],\n\n\n        [[[ 0., 53.],\n          [ 0., 57.]],\n\n         [[54., 55.],\n          [58., 59.]]],\n\n\n        [[[56.,  0.],\n          [60.,  0.]],\n\n         [[ 0., 61.],\n          [ 0.,  0.]]],\n\n\n        [[[62., 63.],\n          [ 0.,  0.]],\n\n         [[64.,  0.],\n          [ 0.,  0.]]]])\ntorch.Size([2, 2, 4, 4])\ntensor([[[[ 1.,  2.,  3.,  4.],\n          [ 5.,  6.,  7.,  8.],\n          [ 9., 10., 11., 12.],\n          [13., 14., 15., 16.]],\n\n         [[17., 18., 19., 20.],\n          [21., 22., 23., 24.],\n          [25., 26., 27., 28.],\n          [29., 30., 31., 32.]]],\n\n\n        [[[33., 34., 35., 36.],\n          [37., 38., 39., 40.],\n          [41., 42., 43., 44.],\n          [45., 46., 47., 48.]],\n\n         [[49., 50., 51., 52.],\n          [53., 54., 55., 56.],\n          [57., 58., 59., 60.],\n          [61., 62., 63., 64.]]]])\ntensor(True)\n\n
\n

Second (3D):

\n

Now it becomes interesting: we need to use fold and unfold twice each. For extraction, we first unfold over the D dimension while leaving W and H untouched by setting their kernel to 1, padding to 0, stride to 1 and dilation to 1. After that, we view the tensor and unfold over the H and W dimensions. The folding happens in reverse, starting with H and W, then D.

\n
def extract_patches_3ds(x, kernel_size, padding=0, stride=1):\n    if isinstance(kernel_size, int):\n        kernel_size = (kernel_size, kernel_size, kernel_size)\n    if isinstance(padding, int):\n        padding = (padding, padding, padding, padding, padding, padding)\n    if isinstance(stride, int):\n        stride = (stride, stride, stride)\n\n    channels = x.shape[1]\n\n    x = torch.nn.functional.pad(x, padding)\n    # (B, C, D, H, W)\n    x = x.unfold(2, kernel_size[0], stride[0]).unfold(3, kernel_size[1], stride[1]).unfold(4, kernel_size[2], stride[2])\n    # (B, C, d_dim_out, h_dim_out, w_dim_out, kernel_size[0], kernel_size[1], kernel_size[2])\n    x = x.contiguous().view(-1, channels, kernel_size[0], kernel_size[1], kernel_size[2])\n    # (B * d_dim_out * h_dim_out * w_dim_out, C, kernel_size[0], kernel_size[1], kernel_size[2])\n    return x\n\ndef extract_patches_3d(x, kernel_size, padding=0, stride=1, dilation=1):\n    if isinstance(kernel_size, int):\n        kernel_size = (kernel_size, kernel_size, kernel_size)\n    if isinstance(padding, int):\n        padding = (padding, padding, padding)\n    if isinstance(stride, int):\n        stride = (stride, stride, stride)\n    if isinstance(dilation, int):\n        dilation = (dilation, dilation, dilation)\n\n    def get_dim_blocks(dim_in, dim_kernel_size, dim_padding = 0, dim_stride = 1, dim_dilation = 1):\n        dim_out = (dim_in + 2 * dim_padding - dim_dilation * (dim_kernel_size - 1) - 1) // dim_stride + 1\n        return dim_out\n\n    channels = x.shape[1]\n\n    d_dim_in = x.shape[2]\n    h_dim_in = x.shape[3]\n    w_dim_in = x.shape[4]\n    d_dim_out = get_dim_blocks(d_dim_in, kernel_size[0], padding[0], stride[0], dilation[0])\n    h_dim_out = get_dim_blocks(h_dim_in, kernel_size[1], padding[1], stride[1], dilation[1])\n    w_dim_out = get_dim_blocks(w_dim_in, kernel_size[2], padding[2], stride[2], dilation[2])\n    # print(d_dim_in, h_dim_in, w_dim_in, d_dim_out, h_dim_out, w_dim_out)\n    \n   
 # (B, C, D, H, W)\n    x = x.view(-1, channels, d_dim_in, h_dim_in * w_dim_in)                                                     \n    # (B, C, D, H * W)\n\n    x = torch.nn.functional.unfold(x, kernel_size=(kernel_size[0], 1), padding=(padding[0], 0), stride=(stride[0], 1), dilation=(dilation[0], 1))                   \n    # (B, C * kernel_size[0], d_dim_out * H * W)\n\n    x = x.view(-1, channels * kernel_size[0] * d_dim_out, h_dim_in, w_dim_in)                                   \n    # (B, C * kernel_size[0] * d_dim_out, H, W)\n\n    x = torch.nn.functional.unfold(x, kernel_size=(kernel_size[1], kernel_size[2]), padding=(padding[1], padding[2]), stride=(stride[1], stride[2]), dilation=(dilation[1], dilation[2]))        \n    # (B, C * kernel_size[0] * d_dim_out * kernel_size[1] * kernel_size[2], h_dim_out, w_dim_out)\n\n    x = x.view(-1, channels, kernel_size[0], d_dim_out, kernel_size[1], kernel_size[2], h_dim_out, w_dim_out)  \n    # (B, C, kernel_size[0], d_dim_out, kernel_size[1], kernel_size[2], h_dim_out, w_dim_out)  \n\n    x = x.permute(0, 1, 3, 6, 7, 2, 4, 5)\n    # (B, C, d_dim_out, h_dim_out, w_dim_out, kernel_size[0], kernel_size[1], kernel_size[2])\n\n    x = x.contiguous().view(-1, channels, kernel_size[0], kernel_size[1], kernel_size[2])\n    # (B * d_dim_out * h_dim_out * w_dim_out, C, kernel_size[0], kernel_size[1], kernel_size[2])\n\n    return x\n\n\n\ndef combine_patches_3d(x, kernel_size, output_shape, padding=0, stride=1, dilation=1):\n    if isinstance(kernel_size, int):\n        kernel_size = (kernel_size, kernel_size, kernel_size)\n    if isinstance(padding, int):\n        padding = (padding, padding, padding)\n    if isinstance(stride, int):\n        stride = (stride, stride, stride)\n    if isinstance(dilation, int):\n        dilation = (dilation, dilation, dilation)\n\n    def get_dim_blocks(dim_in, dim_kernel_size, dim_padding = 0, dim_stride = 1, dim_dilation = 1):\n        dim_out = (dim_in + 2 * dim_padding - dim_dilation * 
(dim_kernel_size - 1) - 1) // dim_stride + 1\n        return dim_out\n\n    channels = x.shape[1]\n    d_dim_out, h_dim_out, w_dim_out = output_shape[2:]\n    d_dim_in = get_dim_blocks(d_dim_out, kernel_size[0], padding[0], stride[0], dilation[0])\n    h_dim_in = get_dim_blocks(h_dim_out, kernel_size[1], padding[1], stride[1], dilation[1])\n    w_dim_in = get_dim_blocks(w_dim_out, kernel_size[2], padding[2], stride[2], dilation[2])\n    # print(d_dim_in, h_dim_in, w_dim_in, d_dim_out, h_dim_out, w_dim_out)\n\n    x = x.view(-1, channels, d_dim_in, h_dim_in, w_dim_in, kernel_size[0], kernel_size[1], kernel_size[2])\n    # (B, C, d_dim_in, h_dim_in, w_dim_in, kernel_size[0], kernel_size[1], kernel_size[2])\n\n    x = x.permute(0, 1, 5, 2, 6, 7, 3, 4)\n    # (B, C, kernel_size[0], d_dim_in, kernel_size[1], kernel_size[2], h_dim_in, w_dim_in)\n\n    x = x.contiguous().view(-1, channels * kernel_size[0] * d_dim_in * kernel_size[1] * kernel_size[2], h_dim_in * w_dim_in)\n    # (B, C * kernel_size[0] * d_dim_in * kernel_size[1] * kernel_size[2], h_dim_in * w_dim_in)\n\n    x = torch.nn.functional.fold(x, output_size=(h_dim_out, w_dim_out), kernel_size=(kernel_size[1], kernel_size[2]), padding=(padding[1], padding[2]), stride=(stride[1], stride[2]), dilation=(dilation[1], dilation[2]))\n    # (B, C * kernel_size[0] * d_dim_in, H, W)\n\n    x = x.view(-1, channels * kernel_size[0], d_dim_in * h_dim_out * w_dim_out)\n    # (B, C * kernel_size[0], d_dim_in * H * W)\n\n    x = torch.nn.functional.fold(x, output_size=(d_dim_out, h_dim_out * w_dim_out), kernel_size=(kernel_size[0], 1), padding=(padding[0], 0), stride=(stride[0], 1), dilation=(dilation[0], 1))\n    # (B, C, D, H * W)\n    \n    x = x.view(-1, channels, d_dim_out, h_dim_out, w_dim_out)\n    # (B, C, D, H, W)\n\n    return x\n\na = torch.arange(1, 129, dtype=torch.float).view(2,2,2,4,4)\nprint(a.shape)\nprint(a)\n# b = extract_patches_3d(a, 2, padding=1, stride=2)\nb = extract_patches_3ds(a, 2, padding=1, 
stride=2)\nprint(b.shape)\nprint(b)\nc = combine_patches_3d(b, 2, (2,2,2,4,4), padding=1, stride=2)\nprint(c.shape)\nprint(c)\nprint(torch.all(a==c))\n
\n

Output (3D)

\n

(I had to limit the characters please look at the notebook)

\n

Third (4D)

\n

We add a time dimension to the 3D volume. We start the folding with just the T dimension, leaving D, H and W alone, similarly to the 3D version. Then we fold over D, leaving H and W. Finally we do H and W. The unfolding happens in reverse again. Hopefully by now you notice the pattern, and you can add arbitrarily many dimensions by folding them one by one.

\n
def extract_patches_4ds(x, kernel_size, padding=0, stride=1):\n    if isinstance(kernel_size, int):\n        kernel_size = (kernel_size, kernel_size, kernel_size, kernel_size)\n    if isinstance(padding, int):\n        padding = (padding, padding, padding, padding, padding, padding, padding, padding)\n    if isinstance(stride, int):\n        stride = (stride, stride, stride, stride)\n\n    channels = x.shape[1]\n\n    x = torch.nn.functional.pad(x, padding)\n    # (B, C, T, D, H, W)\n    x = x.unfold(2, kernel_size[0], stride[0]).unfold(3, kernel_size[1], stride[1]).unfold(4, kernel_size[2], stride[2]).unfold(5, kernel_size[3], stride[3])\n    # (B, C, t_dim_out, d_dim_out, h_dim_out, w_dim_out, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3])\n    x = x.contiguous().view(-1, channels, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3])\n    # (B * t_dim_out, d_dim_out * h_dim_out * w_dim_out, C, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3])\n    return x\n\ndef extract_patches_4d(x, kernel_size, padding=0, stride=1, dilation=1):\n    if isinstance(kernel_size, int):\n        kernel_size = (kernel_size, kernel_size, kernel_size, kernel_size)\n    if isinstance(padding, int):\n        padding = (padding, padding, padding, padding)\n    if isinstance(stride, int):\n        stride = (stride, stride, stride, stride)\n    if isinstance(dilation, int):\n        dilation = (dilation, dilation, dilation, dilation)\n\n    def get_dim_blocks(dim_in, dim_kernel_size, dim_padding = 0, dim_stride = 1, dim_dilation = 1):\n        dim_out = (dim_in + 2 * dim_padding - dim_dilation * (dim_kernel_size - 1) - 1) // dim_stride + 1\n        return dim_out\n\n    channels = x.shape[1]\n\n    t_dim_in = x.shape[2]\n    d_dim_in = x.shape[3]\n    h_dim_in = x.shape[4]\n    w_dim_in = x.shape[5]\n    t_dim_out = get_dim_blocks(t_dim_in, kernel_size[0], padding[0], stride[0], dilation[0])\n    d_dim_out = get_dim_blocks(d_dim_in, kernel_size[1], 
padding[1], stride[1], dilation[1])\n    h_dim_out = get_dim_blocks(h_dim_in, kernel_size[2], padding[2], stride[2], dilation[2])\n    w_dim_out = get_dim_blocks(w_dim_in, kernel_size[3], padding[3], stride[3], dilation[3])\n    # print(t_dim_in, d_dim_in, h_dim_in, w_dim_in, t_dim_out, d_dim_out, h_dim_out, w_dim_out)\n    \n    # (B, C, T, D, H, W)\n    x = x.view(-1, channels, t_dim_in, d_dim_in * h_dim_in * w_dim_in)                                                     \n    # (B, C, T, D * H * W)\n\n    x = torch.nn.functional.unfold(x, kernel_size=(kernel_size[0], 1), padding=(padding[0], 0), stride=(stride[0], 1), dilation=(dilation[0], 1))\n    # (B, C * kernel_size[0], t_dim_out * D * H * W)\n\n    x = x.view(-1, channels * kernel_size[0] * t_dim_out, d_dim_in, h_dim_in * w_dim_in)                                   \n    # (B, C * kernel_size[0] * t_dim_out, D, H * W)\n\n    x = torch.nn.functional.unfold(x, kernel_size=(kernel_size[1], 1), padding=(padding[1], 0), stride=(stride[1], 1), dilation=(dilation[1], 1))\n    # (B, C * kernel_size[0] * t_dim_out * kernel_size[1], d_dim_out * H * W)\n\n    x = x.view(-1, channels * kernel_size[0] * t_dim_out * kernel_size[1] * d_dim_out, h_dim_in, w_dim_in)\n    # (B, C * kernel_size[0] * t_dim_out * kernel_size[1] * d_dim_out, H, W)\n\n    x = torch.nn.functional.unfold(x, kernel_size=(kernel_size[2], kernel_size[3]), padding=(padding[2], padding[3]), stride=(stride[2], stride[3]), dilation=(dilation[2], dilation[3]))        \n    # (B, C * kernel_size[0] * t_dim_out * kernel_size[1] * d_dim_out * kernel_size[2] * kernel_size[3], h_dim_out * w_dim_out)\n\n    x = x.view(-1, channels, kernel_size[0], t_dim_out, kernel_size[1], d_dim_out, kernel_size[2], kernel_size[3], h_dim_out, w_dim_out)\n    # (B, C, kernel_size[0], t_dim_out, kernel_size[1], d_dim_out, kernel_size[2], kernel_size[3], h_dim_out, w_dim_out)\n\n    x = x.permute(0, 1, 3, 5, 8, 9, 2, 4, 6, 7)\n    # (B, C, t_dim_out, d_dim_out, h_dim_out, 
w_dim_out, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3])\n\n    x = x.contiguous().view(-1, channels, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3])\n    # (B * t_dim_out * d_dim_out * h_dim_out * w_dim_out, C, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3])\n\n    return x\n\n\n\ndef combine_patches_4d(x, kernel_size, output_shape, padding=0, stride=1, dilation=1):\n    if isinstance(kernel_size, int):\n        kernel_size = (kernel_size, kernel_size, kernel_size, kernel_size)\n    if isinstance(padding, int):\n        padding = (padding, padding, padding, padding)\n    if isinstance(stride, int):\n        stride = (stride, stride, stride, stride)\n    if isinstance(dilation, int):\n        dilation = (dilation, dilation, dilation, dilation)\n\n    def get_dim_blocks(dim_in, dim_kernel_size, dim_padding = 0, dim_stride = 1, dim_dilation = 1):\n        dim_out = (dim_in + 2 * dim_padding - dim_dilation * (dim_kernel_size - 1) - 1) // dim_stride + 1\n        return dim_out\n\n    channels = x.shape[1]\n    t_dim_out, d_dim_out, h_dim_out, w_dim_out = output_shape[2:]\n    t_dim_in = get_dim_blocks(t_dim_out, kernel_size[0], padding[0], stride[0], dilation[0])\n    d_dim_in = get_dim_blocks(d_dim_out, kernel_size[1], padding[1], stride[1], dilation[1])\n    h_dim_in = get_dim_blocks(h_dim_out, kernel_size[2], padding[2], stride[2], dilation[2])\n    w_dim_in = get_dim_blocks(w_dim_out, kernel_size[3], padding[3], stride[3], dilation[3])\n    # print(t_dim_in, d_dim_in, h_dim_in, w_dim_in, t_dim_out, d_dim_out, h_dim_out, w_dim_out)\n\n    x = x.view(-1, channels, t_dim_in, d_dim_in, h_dim_in, w_dim_in, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3])\n    # (B, C, t_dim_in, d_dim_in, h_dim_in, w_dim_in, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3])\n\n    x = x.permute(0, 1, 6, 2, 7, 3, 8, 9, 4, 5)\n    # (B, C, kernel_size[0], t_dim_in, kernel_size[1], d_dim_in, kernel_size[2], 
kernel_size[3], h_dim_in, w_dim_in)\n\n    x = x.contiguous().view(-1, channels * kernel_size[0] * t_dim_in * kernel_size[1] * d_dim_in * kernel_size[2] * kernel_size[3], h_dim_in * w_dim_in)\n    # (B, C * kernel_size[0] * t_dim_in * kernel_size[1] * d_dim_in * kernel_size[2] * kernel_size[3], h_dim_in, w_dim_in)\n\n    x = torch.nn.functional.fold(x, output_size=(h_dim_out, w_dim_out), kernel_size=(kernel_size[2], kernel_size[3]), padding=(padding[2], padding[3]), stride=(stride[2], stride[3]), dilation=(dilation[2], dilation[3]))\n    # (B, C * kernel_size[0] * t_dim_in * kernel_size[1] * d_dim_in, H, W)\n\n    x = x.view(-1, channels * kernel_size[0] * t_dim_in * kernel_size[1], d_dim_in * h_dim_out * w_dim_out)\n    # (B, C * kernel_size[0] * t_dim_in * kernel_size[1], d_dim_in * H * W)\n\n    x = torch.nn.functional.fold(x, output_size=(d_dim_out, h_dim_out * w_dim_out), kernel_size=(kernel_size[1], 1), padding=(padding[1], 0), stride=(stride[1], 1), dilation=(dilation[1], 1))\n    # (B, C * kernel_size[0] * t_dim_in, D, H * W)\n\n    x = x.view(-1, channels * kernel_size[0], t_dim_in * d_dim_out * h_dim_out * w_dim_out)\n    # (B, C * kernel_size[0], t_dim_in * D * H * W)\n\n    x = torch.nn.functional.fold(x, output_size=(t_dim_out, d_dim_out * h_dim_out * w_dim_out), kernel_size=(kernel_size[0], 1), padding=(padding[0], 0), stride=(stride[0], 1), dilation=(dilation[0], 1))\n    # (B, C, T, D * H * W)\n    \n    x = x.view(-1, channels, t_dim_out, d_dim_out, h_dim_out, w_dim_out)\n    # (B, C, T, D, H, W)\n\n    return x\n\na = torch.arange(1, 129, dtype=torch.float).view(2,2,2,2,4,2)\nprint(a.shape)\nprint(a)\n# b = extract_patches_4d(a, 2, padding=1, stride=2)\nb = extract_patches_4ds(a, 2, padding=1, stride=2)\nprint(b.shape)\nprint(b)\nc = combine_patches_4d(b, 2, (2,2,2,2,4,2), padding=1, stride=2)\nprint(c.shape)\nprint(c)\nprint(torch.all(a==c))\n
\n

Output (4D)

\n

(I had to limit the characters please look at the notebook)

\n", + "system": "" + }, + { + "instruction": "How to get the size of a Hugging Face pretrained model?", + "input": "", + "output": "

Size is a little ambiguous here, so here are the answers for both:

\n

1. Size (No. of parameters) of a model.

\n

Use the .num_parameters() function, e.g.

\n
from transformers import MarianMTModel\nmodel = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")\n\nmodel.num_parameters()\n
\n

[out]:

\n
74410496\n
\n
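As a rough sanity check (my own back-of-the-envelope estimate, not part of the original answer), the two notions of size are related: multiply the parameter count by the bytes per parameter (4 for fp32 weights).

```python
# Rough bridge between "number of parameters" and "memory": fp32 = 4 bytes/param.
num_params = 74_410_496
approx_mb = num_params * 4 / 1024**2
print(round(approx_mb))  # 284
```

That is roughly 284 MB just for the weights; the 884 MB measured below additionally includes the CUDA context and allocator overhead.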

2. Size (in GPU RAM) of a model.

\n

First install this:

\n
pip install -U nvidia-ml-py3\n
\n

Then in code:

\n
from pynvml import *\nfrom transformers import MarianMTModel\n\ndef print_gpu_utilization():\n    nvmlInit()\n    handle = nvmlDeviceGetHandleByIndex(0)\n    info = nvmlDeviceGetMemoryInfo(handle)\n    print(f"GPU memory occupied: {info.used//1024**2} MB.")\n\nmodel = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")\nmodel.to('cuda')\n\nprint_gpu_utilization()\n
\n

[out]:

\n
GPU memory occupied: 884 MB.\n
\n", + "system": "" + }, + { + "instruction": "Is a .pth file a security risk, and how can we sanitise it?", + "input": "", + "output": "

As pointed out by @MobeusZoom, this answer is about Pickle and not the PyTorch format. Anyway, as the PyTorch load mechanism relies on Pickle behind the scenes, the observations drawn in this answer still apply.

\n

TL;DR;

\n
\n

Don't try to sanitize pickle. Trust or reject.

\n
\n

Quoted from Marco Slaviero in his presentation Sour Pickle at the Black Hat USA 2011.

\n

Real solution is:

\n\n
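One minimal way to implement the "trust or reject" policy is to verify a checksum published by the trusted source before unpickling anything. Below is only a sketch: the hashlib usage is standard library, but the expected digest is a placeholder you would obtain out-of-band from the publisher, never from the same channel as the file itself.

```python
import hashlib

def is_trusted(path: str, expected_sha256: str) -> bool:
    """Accept a pickle/model file only if its digest matches the published one."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # hash in chunks so large model files do not need to fit in memory
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

If `is_trusted` returns False, reject the file outright; do not try to "clean" it.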

Also be aware that there are new kinds of AI-based attacks; even if the pickle is shellcode-free, you may still have other issues to address when loading pre-trained networks from untrusted sources.

\n

Important notes

\n

From the presentation linked above we can draw several important notes:

\n\n

MCVE

\n

Find below a very naive MCVE to evaluate your suggestion of encapsulating the cleaning of suspect pickled files in a Docker container. We will use it to assess the main associated risks. Be aware that a real exploit will be more advanced and more complex.

\n

Consider the two classes below; Normal is what you expect to unpickle:

\n
# normal.py\nclass Normal:\n\n    def __init__(self, config):\n        self.__dict__.update(config)\n\n    def __str__(self):\n        return "<Normal %s>" % self.__dict__\n
\n

And Exploit is the attacker's vessel for the shellcode:

\n
# exploit.py\nclass Exploit(object):\n\n    def __reduce__(self):\n        return (eval, ("print('P@wn%d!')",))\n
\n

Then, the attacker can use pickle as a helper to produce intermediate payloads in order to forge the final exploit payload:

\n
import pickle\nfrom normal import Normal\nfrom exploit import Exploit\n\nhost = Normal({"hello": "world"})\nevil = Exploit()\n\nhost_payload = pickle.dumps(host, protocol=0) # b'(i__main__\\nNormal\\np0\\n(dp1\\nS"hello"\\np2\\nS"world"\\np3\\nsb.'\nevil_payload = pickle.dumps(evil, protocol=0) # b'c__builtin__\\neval\\np0\\n(S"print(\\'P@wn%d!\\')"\\np1\\ntp2\\nRp3\\n.'\n
\n

At this point the attacker can craft a specific payload to both inject the shellcode and return the expected data.

\n
with open("inject.pickle", "wb") as handler:\n    handler.write(b'c__builtin__\\neval\\np0\\n(S"print(\\'P@wn%d!\\')"\\np1\\ntp2\\nRp3\\n(i__main__\\nNormal\\np0\\n(dp1\\nS"hello"\\np2\\nS"world"\\np3\\nsb.')\n
\n

Now, when the victim deserializes the malicious pickle file, the exploit is executed and a valid object is returned as expected:

\n
from normal import Normal\nwith open("inject.pickle", "rb") as handler:\n     data = pickle.load(handler)\nprint(data)\n
\n

Execution returns:

\n
P@wn%d!\n<Normal {'hello': 'world'}>\n
\n

Of course, shellcode is not intended to be so obvious; you may not notice it has been executed.

\n
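Before even attempting a cleaning step, you can statically inspect a suspect pickle with the standard-library pickletools module, without executing anything. The GLOBAL/REDUCE opcode pair is the tell-tale sign that unpickling will call something (here __builtin__.eval). A minimal sketch using the eval payload from above:

```python
import pickletools

payload = b'c__builtin__\neval\np0\n(S"print(\'P@wn%d!\')"\np1\ntp2\nRp3\n.'

# genops walks the opcode stream without running any of it.
ops = [op.name for op, arg, pos in pickletools.genops(payload)]
print('GLOBAL' in ops and 'REDUCE' in ops)  # True: unpickling would call eval(...)
```

This only detects that code would run; deciding whether a given GLOBAL target is legitimate (many harmless pickles also use GLOBAL) is still a trust decision, which is exactly the point of the quote above.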

Containerized cleaner

\n

Now, let's try to clean this pickle as you suggested. We will encapsulate the following cleaning code:

\n
# cleaner.py\nimport pickle\nfrom normal import Normal\n\nwith open("inject.pickle", "rb") as handler:\n    data = pickle.load(handler)\nprint(data)\n\ncleaned = Normal(data.__dict__)\nwith open("cleaned.pickle", "wb") as handler:\n    pickle.dump(cleaned, handler)\n\nwith open("cleaned.pickle", "rb") as handler:\n    recovered = pickle.load(handler)\nprint(recovered)\n
\n

We pack this cleaner into a Docker image to try to contain its execution. As a baseline, we could do something like this:

\n
FROM python:3.9\n\nADD ./exploit ./\nRUN chown 1001:1001 inject.pickle\n\nUSER 1001:1001\n\nCMD ["python3", "./cleaner.py"]\n
\n

Then we build the image and execute it:

\n
docker build -t jlandercy/doclean:1.0 .\ndocker run -v /home/jlandercy/exploit:/exploit jlandercy/doclean:1.0\n
\n

Also ensure the mounted folder containing the exploit has restrictive ad hoc permissions.

\n
P@wn%d!\n<Normal {'hello': 'world'}> # <-- Shellcode has been executed\n<Normal {'hello': 'world'}> # <-- Shellcode has been removed\n
\n

Now cleaned.pickle is shellcode-free. Of course, you need to carefully check this assumption before releasing the cleaned pickle.

\n

Observations

\n

As you can see, the Docker image does not prevent the exploit from being executed when unpickling, but it may help to contain the exploit to some extent.

\n

Points of attention are (not exhaustive):

\n\n", + "system": "" + }, + { + "instruction": "only first gpu is allocated (eventhough I make other gpus visible, in pytorch cuda framework)", + "input": "", + "output": "

Have you tried something like this?

\n
device = torch.device("cuda:0,1" if torch.cuda.is_available() else "cpu") ## specify the GPU id's, GPU id's start from 0.\n\nmodel = CreateModel()\n\nmodel= nn.DataParallel(model,device_ids = [0, 1])\nmodel.to(device)  \n
\n

let me know about this

\n", + "system": "" + }, + { + "instruction": "NER tagging schema for non-contiguous tokens", + "input": "", + "output": "

First, about doing this without a new data format:

\n

There are a paper and a repo about doing this using TextAE:

\n

paper

\n

repo

\n

However, looking at their examples and yours, it seems like you could improve on what they did by using dependency parsing. If you look at the dependency parse of "Jane and James Smith are walking in the park", you can see that spaCy understands that Jane is conjoined with Smith. So after running entity extraction, you could do a dependency parse step, then edit your entities based on that.

\n

Now, to answer the real question. I have seen multi-dimensional labels that work in the following way (assume you have a maximum of ten entities per sentence):

\n
empty = [0,0,0,0,0,0,0,0,0]\n\ntokens = ["Jane", "and", "James", "Smith", "are", "walking", "in", "the", "park"]\nlabels = [\n    [1, 0, 0, 1, 0, 0, 0, 0, 0],\n    [0, 0, 1, 1, 0, 0, 0, 0, 0],\n]\nlabels = labels + [empty] * (10-len(labels))\n
\n

If you have more than one entity type, you can use those instead of just 1.

\n
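For instance, the row-per-entity encoding above extends naturally to typed entities; here is a small sketch (the type ids and the max_entities value are my own assumptions, not from a specific library):

```python
# Each row still marks one entity's tokens, but with an integer type id
# instead of a bare 1 (here 1 = PERSON; a LOCATION entity would use 2, etc.).
PERSON = 1
tokens = ["Jane", "and", "James", "Smith", "are", "walking", "in", "the", "park"]
labels = [
    [PERSON, 0, 0, PERSON, 0, 0, 0, 0, 0],  # "Jane ... Smith" (non-contiguous)
    [0, 0, PERSON, PERSON, 0, 0, 0, 0, 0],  # "James Smith"
]
max_entities = 10
empty = [0] * len(tokens)
labels = labels + [empty] * (max_entities - len(labels))
print(len(labels), len(labels[0]))  # 10 9
```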

This format also works better with BERT, since the BIO format is a pain when you have to split up tokens into BPE.

\n", + "system": "" + }, + { + "instruction": "torchvision MNIST HTTPError: HTTP Error 403: Forbidden", + "input": "", + "output": "

Yes it's a known bug: https://github.com/pytorch/vision/issues/3500

\n

A possible solution is to patch the MNIST download method.

\n

But it requires wget to be installed.

\n

For Linux:

\n
sudo apt install wget\n
\n

For Windows:

\n
choco install wget\n
\n
import os\nimport subprocess as sp\nfrom torchvision.datasets.mnist import MNIST, read_image_file, read_label_file\nfrom torchvision.datasets.utils import extract_archive\n\n\ndef patched_download(self):\n    """wget patched download method.\n    """\n    if self._check_exists():\n        return\n\n    os.makedirs(self.raw_folder, exist_ok=True)\n    os.makedirs(self.processed_folder, exist_ok=True)\n\n    # download files\n    for url, md5 in self.resources:\n        filename = url.rpartition('/')[2]\n        download_root = os.path.expanduser(self.raw_folder)\n        extract_root = None\n        remove_finished = False\n\n        if extract_root is None:\n            extract_root = download_root\n        if not filename:\n            filename = os.path.basename(url)\n        \n        # Use wget to download archives\n        sp.run(["wget", url, "-P", download_root])\n\n        archive = os.path.join(download_root, filename)\n        print("Extracting {} to {}".format(archive, extract_root))\n        extract_archive(archive, extract_root, remove_finished)\n\n    # process and save as torch files\n    print('Processing...')\n\n    training_set = (\n        read_image_file(os.path.join(self.raw_folder, 'train-images-idx3-ubyte')),\n        read_label_file(os.path.join(self.raw_folder, 'train-labels-idx1-ubyte'))\n    )\n    test_set = (\n        read_image_file(os.path.join(self.raw_folder, 't10k-images-idx3-ubyte')),\n        read_label_file(os.path.join(self.raw_folder, 't10k-labels-idx1-ubyte'))\n    )\n    with open(os.path.join(self.processed_folder, self.training_file), 'wb') as f:\n        torch.save(training_set, f)\n    with open(os.path.join(self.processed_folder, self.test_file), 'wb') as f:\n        torch.save(test_set, f)\n\n    print('Done!')\n\n\nMNIST.download = patched_download\n\nmnist_train = MNIST("../data", train=True, download=True, transform=transforms.ToTensor())\nmnist_test = MNIST("../data", train=False, download=True, 
transform=transforms.ToTensor())\ntrain_loader = DataLoader(mnist_train, batch_size=1, shuffle=True)\ntest_loader = DataLoader(mnist_test, batch_size=1, shuffle=False)\n
\n", + "system": "" + }, + { + "instruction": "Get a 10x10 patch from a 100x100 pytorch tensor with torus style wrap around the boundries", + "input": "", + "output": "

You are looking for torch.roll

\n
import numpy as np\nimport torch\nfrom torch import Tensor\n\ndef random_patch(a: Tensor, size) -> Tensor:\n  # pick one random wrap-around shift per dimension, then crop the corner\n  shifts = tuple(int(np.random.randint(low=0, high=s)) for s in a.size())\n  return torch.roll(a, shifts=shifts, dims=(0, 1))[:size[0], :size[1]]\n
\n", + "system": "" + }, + { + "instruction": "How to add a L1 or L2 regularization to weights in pytorch", + "input": "", + "output": "

Generally L2 regularization is handled through the weight_decay argument for the optimizer in PyTorch (you can assign different arguments for different layers too). This mechanism, however, doesn't allow for L1 regularization without extending the existing optimizers or writing a custom optimizer.

\n

According to the tensorflow docs they use a reduce_sum(abs(x)) penalty for L1 regularization and a reduce_sum(square(x)) penalty for L2 regularization. Probably the easiest way to achieve this is to just directly add these penalty terms to the loss function used for gradient computation during training.

\n
# set l1_weight and l2_weight to non-zero values to enable penalties\n\n# inside the training loop (given input x and target y)\n...\npred = net(x)\nloss = loss_func(pred, y)\n\n# compute penalty only for net.hidden parameters\nl1_penalty = l1_weight * sum([p.abs().sum() for p in net.hidden.parameters()])\nl2_penalty = l2_weight * sum([(p**2).sum() for p in net.hidden.parameters()])\nloss_with_penalty = loss + l1_penalty + l2_penalty\n\noptimizer.zero_grad()\nloss_with_penalty.backward()\noptimizer.step()\n\n# The pre-penalty loss is the one we ultimately care about\nprint('loss:', loss.item())\n
\n", + "system": "" + }, + { + "instruction": "Estimating mixture of Gaussian models in Pytorch", + "input": "", + "output": "

There is an ordering problem in your code: you create the Gaussian mixture model outside of the training loop, so when the loss is calculated the Gaussian mixture model tries to use the initial parameter values you set when you defined the model, but optimizer1.step() has already modified those values. So even if you set loss2.backward(retain_graph=True), there will still be the error: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

\n

The solution to this problem is simply to create a new Gaussian mixture model whenever you update the parameters. Example code running as expected:

\n
import numpy as np\nimport matplotlib.pyplot as plt\nimport sklearn.datasets as datasets\n\nimport torch\nfrom torch import nn\nfrom torch import optim\nimport torch.distributions as D\n\nnum_layers = 8\nweights = torch.ones(8,requires_grad=True)\nmeans = torch.tensor(np.random.randn(8,2),requires_grad=True)\nstdevs = torch.tensor(np.abs(np.random.randn(8,2)),requires_grad=True)\n\nparameters = [weights, means, stdevs]\noptimizer1 = optim.SGD(parameters, lr=0.001, momentum=0.9)\n\nnum_iter = 10001\nfor i in range(num_iter):\n    mix = D.Categorical(weights)\n    comp = D.Independent(D.Normal(means,stdevs), 1)\n    gmm = D.MixtureSameFamily(mix, comp)\n\n    optimizer1.zero_grad()\n    x = torch.randn(5000,2)#this can be an arbitrary x samples\n    loss2 = -gmm.log_prob(x).mean()#-densityflow.log_prob(inputs=x).mean()\n    loss2.backward()\n    optimizer1.step()\n\n    print(i, loss2)\n
\n", + "system": "" + }, + { + "instruction": "How to run Python script on a Discrete Graphics AMD GPU?", + "input": "", + "output": "

After extensive research and several tries, I reached this conclusion:

\n
    \n
1. PyOpenGL: Mainly works with NVIDIA. If you have an AMD GPU you need to install ROCm
  2. \n
3. PyOpenCL: Mainly works with NVIDIA. If you have an AMD GPU you need to install ROCm
  4. \n
5. TensorFlow: Mainly works with NVIDIA. If you have an AMD GPU you need to install ROCm
  6. \n
7. PyTorch: Mainly works with NVIDIA. If you have an AMD GPU you need to install ROCm
  8. \n
\n

I installed ROCm but if I run rocminfo it returns:

\n
ROCk module is NOT loaded, possibly no GPU devices\nUnable to open /dev/kfd read-write: No such file or directory\nFailed to get user name to check for video group membership\nhsa api call failure at: /src/rocminfo/rocminfo.cc:1142\nCall returned HSA_STATUS_ERROR_OUT_OF_RESOURCES: The runtime failed to allocate the necessary resources. This error may also occur when the core runtime library needs to spawn threads or create internal OS-specific events.\n
\n

clinfo returns:

\n
Number of platforms                               1\n  Platform Name                                   AMD Accelerated Parallel Processing\n  Platform Vendor                                 Advanced Micro Devices, Inc.\n  Platform Version                                OpenCL 2.0 AMD-APP (3212.0)\n  Platform Profile                                FULL_PROFILE\n  Platform Extensions                             cl_khr_icd cl_amd_event_callback\n  Platform Extensions function suffix             AMD\n\n  Platform Name                                   AMD Accelerated Parallel Processing\nNumber of devices                                 0\n\nNULL platform behavior\n  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)  No platform\n  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)   No platform\n  clCreateContext(NULL, ...) [default]            No platform\n  clCreateContext(NULL, ...) [other]              No platform\n  clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT)  No devices found in platform\n  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)  No devices found in platform\n  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)  No devices found in platform\n  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No devices found in platform\n  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)  No devices found in platform\n  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)  No devices found in platform\n
\n

rocm-smi returns:

\n
Segmentation fault\n
\n

This is because the official guide says that "The integrated GPUs of Ryzen are not officially supported targets for ROCm." Since mine is an integrated GPU, I'm out of scope.

\n

I will stop wasting my time and probably buy an NVIDIA or AMD eGPU (external GPU)

\n", + "system": "" + }, + { + "instruction": "Pytorch torch.load ModuleNotFoundError: No module named 'utils'", + "input": "", + "output": "

EDIT: this answer doesn't answer the question asked, but addresses another issue in the given code.

\n

The .pth file just stores the parameters of a model, not the model itself. When you want to load a model you need the .pt/.pth file and the Python code of your model class. Then you can load it like this:

\n
\n# your model\nclass YourModel(nn.Module):\n    def __init__(self):\n        super(YourModel, self).__init__()\n        . . .\n\n    def forward(self, x):\n        . . .\n\n# the pytorch save-file in which you stored your trained model's parameters\nmodel_file = "<your path>"\n\nmodel = YourModel()\nmodel.load_state_dict(torch.load(model_file))\n\nmodel.eval()\n
\n", + "system": "" + }, + { + "instruction": "Incompletable PyTorch with any CUDA version (module 'torch' has no attribute 'cuda')", + "input": "", + "output": "

It turned out I had a file named torch.py in my home directory. After renaming it, the problem was solved.
\nThanks. Maybe my answer will be helpful to someone.

\n", + "system": "" + }, + { + "instruction": "Optuna Pytorch: returned value from the objective function cannot be cast to float", + "input": "", + "output": "

This exception is raised because the objective function of your study must return a float.

\n

In your case, the problem is in this line:

\n
study.optimize(autotune, n_trials=1)\n
\n

The autotune function you defined before does not return a value and cannot be used for optimization.

\n

How to fix?

\n

For hyperparameter search, the autotune function must return some metric you can obtain after training - like the loss or cross-entropy.

\n

A quick fix on your code could be something like this:

\n
def autotune():\n  cfg= { 'device' : "cuda" if torch.cuda.is_available() else "cpu"\n        ...etc...\n       }\n\n  best_loss = 1e100  # or larger\n\n  # Train the model\n  for _ in range(epochs):\n     for i, (data, labels) in enumerate(trainloader):\n        ... (train the model) ...\n        # compute loss/error by comparing predicted out vs actual labels\n        loss = criterion(y_pred, labels)\n        best_loss = min(loss.item(), best_loss)  # .item() yields a plain float\n\n  return best_loss\n
\n

There is a good example with PyTorch in the Optuna repo that uses a PyTorch callback to retrieve the accuracy (it can easily be changed to use the RMSE if needed). It also runs more than one trial and takes the median over trials for the hyperparameters.

\n", + "system": "" + }, + { + "instruction": "AttributeError: module 'torch' has no attribute 'hstack'", + "input": "", + "output": "

Apparently you are calling a function that does not exist yet in your PyTorch version -- this is what the error message is about.

\n

Your link points to the help page for the developer preview: note the 1.8.0a0+342069f version number in the top left corner. When you click the Click here to view docs for latest stable release link, an error message comes up.

\n

This function becomes available in torch version 1.8.0 -- until then, consider using torch.cat with dim=1:

\n
torch.cat([a,b], dim=1)  # a, b - 2d torch.Tensors\n
\n", + "system": "" + }, + { + "instruction": "How to upgrade to pytorch-nightly in google colab?", + "input": "", + "output": "

You're using the wrong package name; as mentioned on the pytorch website, use this:

\n
!pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu102/torch_nightly.html -U\n
\n

Here, the -U option is for upgrade (as a stable PyTorch version is already installed on Colab).

\n", + "system": "" + }, + { + "instruction": "Why the CUDA memory is not release with torch.cuda.empty_cache()", + "input": "", + "output": "

At least in Ubuntu, your script does not release memory when it is run in the interactive shell, but works as expected when run as a script. I think there are some reference issues with the in-place call. The following will work in both the interactive shell and as a script.

\n
import torch\na = torch.zeros(300000000, dtype=torch.int8)\na = a.cuda()\ndel a\ntorch.cuda.empty_cache()\n
\n", + "system": "" + }, + { + "instruction": "Does my for loop run parallely, if all the tensors involved in the loop are on the GPU?", + "input": "", + "output": "

PyTorch runs GPU operations asynchronously (see docs).

\n
\n

When you call a function that uses the GPU, the operations are enqueued to the particular device

\n
\n

This means, your sum operations may run in parallel.

\n

I have made a simple experiment to test this. If I am right, it proves that you don't need to worry about parallelism here.

\n
import torch\n\nA = torch.rand(100000, 32, device='cuda')\nsplits = torch.split(A, 4)\n
\n

Your code:

\n
%%timeit -r1 -n5\nsums = [s.sum() for s in splits]\ntorch.cuda.synchronize()\n\n# Output: 5 loops, best of 1: 374 ms per loop\n
\n

Added synchronization after every sum operation:

\n
%%timeit -r1 -n5\nsums = [torch.cuda.synchronize() or s.sum() for s in splits]\n\n# Output: 5 loops, best of 1: 897 ms per loop\n
\n", + "system": "" + }, + { + "instruction": "Is maxpooling on odd number possible?", + "input": "", + "output": "

Regarding your first question:
\nYou can read in the documentation how the output shape of max-pooling is computed. You can max-pool odd-shaped tensors with even strides with or without padding. You need to be careful about the boundaries where some pixels may be lost.

\n\n
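The shape arithmetic from the documentation can be checked directly. Below is a small helper (my own restatement of the documented floor-mode formula, not PyTorch code) showing what happens to an odd 5x5 input pooled with kernel 2 and stride 2:

```python
def pool_out_dim(dim_in, kernel, stride=1, padding=0, dilation=1):
    # floor-mode output size, the same formula nn.MaxPool2d documents
    return (dim_in + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

print(pool_out_dim(5, 2, stride=2))             # 2 -> one boundary pixel is lost
print(pool_out_dim(5, 2, stride=2, padding=1))  # 3 -> padding keeps the boundary
```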
\n\n

Regarding your second question:
\nYour model expects a 4D input: batch-channel-height-width.
\nBy selecting only one image from the batch (img=images[0]) you eliminate the batch dimension ending up with only a 3D tensor.
\nTo fix this:

\n\n
img=images[0:1, ...]  # select first image, but leave batch dimension as a singleton\n
\n", + "system": "" + }, + { + "instruction": "Expected input batch_size (1) to match target batch_size (11)", + "input": "", + "output": "

The labels for DistilBertForSequenceClassification need to have the size torch.Size([batch_size]) as mentioned in the documentation:

\n\n
\n \n
\n\n

In your case, your labels should have size torch.Size([1]).

\n\n
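To make the shape requirement concrete, here is a small sketch in plain PyTorch (not the actual HuggingFace classes; the sizes are illustrative): single-label classification wants one class index per sequence, while a multi-label setup wants a 0/1 indicator per label.

```python
import torch

batch_size, num_labels = 1, 11
logits = torch.randn(batch_size, num_labels)

# Single-label sequence classification: target shape [batch_size] -> torch.Size([1])
single_target = torch.tensor([3])
single_loss = torch.nn.functional.cross_entropy(logits, single_target)

# Multi-label classification: one 0/1 indicator per label, shape [batch_size, num_labels]
multi_target = torch.randint(0, 2, (batch_size, num_labels)).float()
multi_loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, multi_target)
print(single_loss.item(), multi_loss.item())
```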

That is not possible for your data, because sequence classification assigns one label to each sequence, whereas you want to do multi-label classification.

\n\n

As far as I'm aware there is no multi-label model in HuggingFace's transformer library that you could use out of the box. You would need to create your own model, which is not particularly difficult, because these extra models all use the same base model and add an appropriate classifier at the end, depending on the task to be solved. HuggingFace - Multi-label Text Classification using BERT \u2013 The Mighty Transformer explains how this can be done.

\n", + "system": "" + }, + { + "instruction": "How calculate the dice coefficient for multi-class segmentation task using Python?", + "input": "", + "output": "

You can compute the dice score for binary classes and then apply it to the binary map of each class in turn to get a multiclass dice score.

\n\n

I'm assuming your images/segmentation maps are in the format (batch/index of image, height, width, class_map).

\n\n
import numpy as np\nimport matplotlib.pyplot as plt\n\ndef dice_coef(y_true, y_pred):\n    y_true_f = y_true.flatten()\n    y_pred_f = y_pred.flatten()\n    intersection = np.sum(y_true_f * y_pred_f)\n    smooth = 0.0001\n    return (2. * intersection + smooth) / (np.sum(y_true_f) + np.sum(y_pred_f) + smooth)\n\ndef dice_coef_multilabel(y_true, y_pred, numLabels):\n    dice=0\n    for index in range(numLabels):\n        dice += dice_coef(y_true[:,:,:,index], y_pred[:,:,:,index])\n    return dice/numLabels # taking average\n\nnum_class = 5\n\nimgA = np.random.randint(low=0, high= 2, size=(5, 64, 64, num_class) ) # 5 images in batch, 64 by 64, num_classes map\nimgB = np.random.randint(low=0, high= 2, size=(5, 64, 64, num_class) )\n\n\nplt.imshow(imgA[0,:,:,0]) # for 0th image, class 0 map\nplt.show()\n\nplt.imshow(imgB[0,:,:,0]) # for 0th image, class 0 map\nplt.show()\n\ndice_score = dice_coef_multilabel(imgA, imgB, num_class)\nprint(f'For A and B {dice_score}')\n\ndice_score = dice_coef_multilabel(imgA, imgA, num_class)\nprint(f'For A and A {dice_score}')\n
\n\n

\"enter

\n", + "system": "" + }, + { + "instruction": "PyTorch LSTM for multiclass classification: TypeError: '<' not supported between instances of 'Example' and 'Example'", + "input": "", + "output": "

The BucketIterator sorts the data to make batches with examples of similar length to avoid having too much padding. For that it needs to know what the sorting criterion is, which should be the text length. Since it is not fixed to a specific data layout, you can freely choose which field it should use, but that also means you must provide that information to sort_key.

\n\n

In your case, there are two possible fields, text and wage_label, and you want to sort it based on the length of the text.

\n\n
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(\n    (train_data, valid_data, test_torch), \n    batch_size = BATCH_SIZE,\n    sort_within_batch = True,\n    sort_key = lambda x: len(x.text),\n    device = device)\n
\n\n

You might be wondering why it worked in the tutorial but doesn't in your example. The reason is that if sort_key is not specified, it defers it to the underlying dataset. In the tutorial they used the IMDB dataset, which defines the sort_key to be x.text. Your custom dataset did not define that, so you need to specify it manually.

\n", + "system": "" + }, + { + "instruction": "kth-value per row in pytorch?", + "input": "", + "output": "

Assuming you don't need indices into the original matrix (if you do, just use fancy indexing for the second return value as well), you could simply sort the values (along the last dimension by default) and return the appropriate ones like so:

\n\n
def kth_smallest(tensor, indices):\n    tensor_sorted, _ = torch.sort(tensor)\n    return tensor_sorted[torch.arange(len(indices)), indices]\n
\n\n

And this test case gives you your desired values:

\n\n
tensor = torch.tensor(\n    [[0.7455, 0.7736, 0.1772], [0.6646, 0.4191, 0.6602], [0.0818, 0.8079, 0.6424]]\n)\n\nprint(kth_smallest(tensor, [0, 1, 0])) # -> [0.1772,0.6602,0.0818]\n
\n", + "system": "" + }, + { + "instruction": "Object Detection Model (PyTorch) to ONNX:empty output by ONNX inference", + "input": "", + "output": "

In the 2nd setup, you have a random tensor, so no bounding box with a high enough detection score was selected. Check that you have an image with detectable objects for the input.

\n\n

I assume that the phrase about random inputs is correct in most cases (classification, segmentation, etc.), but detection models use NonMaxSuppression, which suppresses detections with low scores.

\n", + "system": "" + }, + { + "instruction": "Torchscripting a module with _ConvNd in forward", + "input": "", + "output": "

As an alternative to [https://stackoverflow.com/users/6210807/kharshit]'s suggestion, you can define the network in a functional way:

\n\n
class MyCell(torch.nn.Module):\n    def __init__(self):\n        super(MyCell, self).__init__()\n        # register the weights as parameters so they are tracked (and trainable);\n        # a plain Python list of tensors would not be registered or scripted\n        self.w = torch.nn.ParameterList(\n            torch.nn.Parameter( torch.randn( 1, 1, 2*i+3 ) ) for i in range(5)\n        )\n\n    def forward(self, x):\n        for w in self.w:\n            x = torch.nn.functional.conv1d( x, w )\n            x = torch.nn.functional.relu( x )\n        return x\n
\n", + "system": "" + }, + { + "instruction": "Pytorch: why print(model) does not show the activation functions?", + "input": "", + "output": "

There are two ways of adding operations to the network graph: the low-level functional way and the more advanced object way. You need the latter to make your structure observable; the former just calls (not exactly, but roughly) a function without storing any info about it. So, instead of

\n\n
def forward(self, x):\n    ...\n    x = F.relu(self.hidden(x))\n
\n\n

it must be something like

\n\n
def __init__(...):\n    ...\n    self.myFirstRelu= torch.nn.ReLU()\n\ndef forward(self, x):\n    ...\n    x1 = self.hidden(x)\n    x2 = self.myFirstRelu(x1)\n
\n\n

Anyway, mixing these two ways is generally a bad idea, although even torchvision models have such inconsistencies: models.inception_v3 does not register the poolings, for example >:-( (EDIT: it was fixed in June 2020, thanks, mitmul!).

\n\n
\n\n

UPD:\n- Thanks, that works, now if I print I see ReLU(). But this seems to only print the functions in the same order they are defined in the init. Is there a way to get the associations between layers and activation functions? For example I want to know which activation was applied to layer 1, which to layer 2, and so on...

\n\n

There is no uniform way, but here are some tricks.\nThe object way:

\n\n

-just init them in order

\n\n

-use torch.nn.Sequential

\n\n

-hook callbacks on nodes, like this -

\n\n
def hook(m, i, o):\n    print( m._get_name() )\n\nfor mo in model.modules():\n    mo.register_forward_hook(hook)\n
\n\n

functional and object way:

\n\n

-make use of the internal model graph, built during the forward pass, as torchviz does (https://github.com/szagoruyko/pytorchviz/blob/master/torchviz/dot.py), or just use the plot generated by said torchviz.

\n", + "system": "" + }, + { + "instruction": "How to make early stopping in image classification pytorch", + "input": "", + "output": "

This is what I did in each epoch

\n\n
val_loss += loss.item()  # use .item() so no graph references are kept\nval_loss = val_loss / len(trainloader)\nif val_loss < min_val_loss:\n  # Save the best model so far\n  best_model = copy.deepcopy(loaded_model.state_dict())\n  print('Min validation loss %0.2f' % val_loss)\n  epochs_no_improve = 0\n  min_val_loss = val_loss\n\nelse:\n  epochs_no_improve += 1\n  # Check early stopping condition\n  if epochs_no_improve == n_epochs_stop:\n    print('Early stopping!')\n    loaded_model.load_state_dict(best_model)\n
\n\n

I don't know how correct it is (I took most of this code from a post on another website but forgot where, so I can't give a reference link; I have just modified it a bit). Hope you find it useful; in case I'm wrong, kindly point out the mistake. Thank you

\n", + "system": "" + }, + { + "instruction": "HuggingFace Transformers For Text Generation with CTRL with Google Colab's free GPU", + "input": "", + "output": "

The solution was to increase the RAM. Since I was using the Google Colab's free GPU, I was going through this: GitHub issue\nand found this useful: Solution

\n\n

The following piece of code will crash the session in Colab; then select 'Get more RAM', which will increase the RAM up to 25.51 GB

\n\n
d=[]\nwhile(1):\n  d.append('1')\n
\n", + "system": "" + }, + { + "instruction": "With pytorch/python, is it better to overwrite variables or define new ones?", + "input": "", + "output": "

tensor and tensor1 in your example are just different views of the same data in memory, so the memory difference of potentially maintaining two slightly different references to it should be negligible. The relevant part would only be tensor1 vs tensor2.

\n\n
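A quick way to see the point about views sharing data (a small sketch; the tensor names are made up):

```python
import torch

t = torch.arange(6)
view = t.view(2, 3)  # a reshaped view, not a copy

# Both references point at the same underlying storage,
# so keeping both around costs no extra data memory.
print(view.data_ptr() == t.data_ptr())  # True
```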

You might want to see this similar question:\nPython: is the "old" memory free'd when a variable is assigned new content?

\n\n

Since the reassignment to tensor that actually allocates new memory is also the final call in bar, I suspect that in this particular example the total memory wouldn't be impacted (tensor1 would be unreferenced once the function returns anyway).

\n\n

With a longer chain of operations, I don't think the GC is guaranteed to be called on any of these reassignments, though it might give python some more flexibility. I'd probably prefer the style in foo just because it's easier to later change the order of operations in the chain. Keeping track of different names adds overhead for the programmer, not just the interpreter.

\n", + "system": "" + }, + { + "instruction": "Why do we need pack_padded_sequence() when we have pack_sequence()?", + "input": "", + "output": "

Mostly for historical reasons; torch.nn.pack_padded_sequence() was created before torch.nn.pack_sequence() (the latter first appeared in 0.4.0, if I see correctly), and I suppose there was no reason to remove this functionality and break backward compatibility.

\n\n

Furthermore, it's not always clear what's the best/fastest way to pad your input and it highly varies on data you are using. When data was somehow padded beforehand (e.g. your data was pre-padded and provided to you like that) it is faster to use pack_padded_sequence() (see source code of pack_sequence, it's calculating length of each data point for you and calls pad_sequence followed by pack_padded_sequence internally). Arguably pad_packed_sequence is rarely of use right now though.

\n\n
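To illustrate that relationship, a small sketch with toy sequences showing that pack_sequence produces the same packed data as the manual pad_sequence + pack_padded_sequence route:

```python
import torch
from torch.nn.utils.rnn import pack_sequence, pad_sequence, pack_padded_sequence

# Toy variable-length sequences, already sorted longest-first
seqs = [torch.tensor([1., 2., 3.]), torch.tensor([4., 5.]), torch.tensor([6.])]

# One call: lengths are computed and padding is done internally
packed_a = pack_sequence(seqs)

# Manual route: pad first, then pack with explicit lengths
padded = pad_sequence(seqs)
packed_b = pack_padded_sequence(padded, lengths=torch.tensor([3, 2, 1]))

print(torch.equal(packed_a.data, packed_b.data))  # True
```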

Lastly, please notice the enforce_sorted argument, provided since version 1.2.0 for both of those functions. Not so long ago users had to sort their data (or batch) with the longest sequence first and the shortest last; now this can be done internally when the parameter is set to False.

\n", + "system": "" + }, + { + "instruction": "How to specify pytorch as a package requirement on windows?", + "input": "", + "output": "
\n

What are the best practices for going about this ?

\n
\n\n

If your project depends on other projects that are not distributed through PyPI then you have to inform the users of your project one way or another. I recommend the following combination:

\n\n\n\n

The reason why (or main reason, there are others), is that anyone using pip assumes that (by default) everything will be downloaded from PyPI and nowhere else. In other words anyone using pip puts some trust into pypi.org as a source for Python project distributions. If pip were suddenly to download artifacts from other sources, it would breach this trust. It should be the user's decision to download from other sources.

\n\n

So you could provide in your project's documentation an example of requirements.txt file like the following:

\n\n
# ...\n--find-links https://download.pytorch.org/whl/torch_stable.html\ntorch===1.4.0\ntorchvision===0.5.0\n# ...\n
\n\n
\n\n

Update

\n\n

The best solution would be to help the maintainers of the projects in question to publish Windows wheels on PyPI directly:

\n\n\n", + "system": "" + }, + { + "instruction": "How can I load CSV data for a PyTorch neural network?", + "input": "", + "output": "
import pandas as pd\n\nsample = pd.read_csv('myfile.csv').sample(n=10)\n
\n\n

You should load the file only once and then sample as you go:

\n\n
df = pd.read_csv('myfile.csv')\nsample1 = df.sample(n=10)\nsample2 = df.sample(n=10)\n
\n", + "system": "" + }, + { + "instruction": "How to install torch in python", + "input": "", + "output": "

For pip environment use this

\n\n
pip3 install torchvision\n
\n\n

For conda environment use this (run this command on anaconda prompt)

\n\n
conda install pytorch -c pytorch\n
\n\n
\n\n

Update

\n\n

Use this code to turn off your cache

\n\n
pip3 --no-cache-dir install torchvision\n
\n\n

or

\n\n
pip3 install torchvision --no-cache-dir\n
\n\n

or

\n\n
pip install --no-cache-dir torchvision\n
\n\n

Try one by one

\n", + "system": "" + }, + { + "instruction": "Understanding batch_size in CNNs", + "input": "", + "output": "

CNN is a general term for convolutional neural networks. Depending on the particular architecture it may do different things. The main building blocks of CNNs are convolutions which do not cause any \"crosstalk\" between items in batch and pointwise nonlinearities like ReLU which do not either. However, most architectures also involve other operations, such as normalization layers - arguably the most popular is batch norm which does introduce crosstalk. Many models will also use dropout which behaves stochastically outside of eval mode (by default models are in train mode). Both above effects could lead to the observed outcome above, as well as other custom operations which could cause cross-talk across the batch.

\n\n
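One way to check batch independence for a given model is to compare a full-batch pass against a sub-batch pass. A small sketch with a toy conv layer in eval mode (the sizes are made up):

```python
import torch

# Toy check for batch independence: run a conv in eval mode on a full
# batch and on a slice of it, then compare with a tolerance.
torch.manual_seed(0)
conv = torch.nn.Conv2d(3, 8, 3).eval()
x = torch.rand(4, 3, 16, 16)
with torch.no_grad():
    full = conv(x)
    part = conv(x[:2])
# Exact == can fail due to floating-point noise; allclose is the robust check.
print(torch.allclose(full[:2], part, atol=1e-6))
```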

Aside from that, because of numeric precision issues, your code may not give exactly the same results, even if it doesn't feature any cross-batch operations. This error is very minor but sufficient to manifest itself when checking with CNN(input_1) == CNN(input_2)[:2]. It is better to use allclose instead, with a suitable epsilon.

\n", + "system": "" + }, + { + "instruction": "How do I install Pytorch 1.3.1 with CUDA enabled", + "input": "", + "output": "

Given that your system is running Ubuntu 16.04, it comes with glibc installed. You can check your version by typing ldd --version.

\n

Keep in mind that PyTorch is compiled on CentOS which runs glibc version 2.17.

\n

Then check the CUDA version installed on your system nvcc --version

\n

Then install PyTorch as follows e.g. if your cuda version is 9.2:\nconda install pytorch torchvision cudatoolkit=9.2 -c pytorch

\n

If you get the glibc version error, try installing an earlier version of PyTorch.

\n

If neither of the above options work, then try installing PyTorch from sources.

\n

If you would like to set a specific PyTorch version to install, please set it as <version_nr> in the below command:\nconda install pytorch=<version_nr> torchvision cudatoolkit=9.2 -c pytorch

\n", + "system": "" + }, + { + "instruction": "GPU memory doesn't get freed up after evaluating data on PyTorch model in parallel process", + "input": "", + "output": "

I found this similar thread where the memory leakage occurs due to the instantiation of the Pool() in the loop, rather than outside.

\n\n

The above problem also instantiates the Pool() inside the function without using the with notation, which would ensure that all started processes return.

\n\n

E.g. bad way:

\n\n
def evaluation(workload):\n    jobslist = [job for job in workload]\n    with Pool() as mypool:\n        mypool.starmap(_workerfunction, jobslist)\nif __name__ == '__main__':\n    # pseudo data\n    workload = [[(100,200) for i in range(1000)] for i in range(50)]\n    for i in range(100):\n        evaluation(workload)\n
\n\n

The proper way of doing this would be to instantiate the pool outside the loop, and pass a reference to the pool into the function for processing i.e.:

\n\n
def evaluation(workload, mypool):\n    jobslist = [job for job in workload]\n    mypool.starmap(_workerfunction, jobslist)\nif __name__ == '__main__':\n    # pseudo data\n    with Pool() as mypool:\n        workload = [[(100,200) for i in range(1000)] for i in range(50)]\n        for i in range(100):\n            evaluation(workload, mypool)\n
\n\n

I suspect that the GPU memory gets leaked due to left over references in parallel processes that have not been cleaned up yet.

\n", + "system": "" + }, + { + "instruction": "CTC: blank must be in label range", + "input": "", + "output": "

This error will occur when the index of blank is larger than the total number of classes, which equals number of chars + blank. What's more, the index starts from 0, instead of 1, so if you have 62 characters in total, their index should be 0-61 and the index of blank should be 62 instead of 63. (Or you can set blank as 0, other characters from 1-62)

\n
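As a concrete (hypothetical) example of a consistent setup - 62 characters at indices 1-62 with blank at index 0, giving C = 63 classes in total:

```python
import torch

# 62 characters (indices 1-62) + blank at index 0 -> C = 63 classes
T, B, C = 50, 4, 63           # time steps, batch size, classes incl. blank
log_probs = torch.randn(T, B, C).log_softmax(2)
targets = torch.randint(1, C, (B, 20), dtype=torch.long)  # never emit the blank index
input_lengths = torch.full((B,), T, dtype=torch.long)
target_lengths = torch.full((B,), 20, dtype=torch.long)

ctc_loss = torch.nn.CTCLoss(blank=0)  # blank must lie within [0, C-1]
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
print(loss.item())
```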

You should also check the shape of the output tensor; it should have shape [T, B, C], where T is the time-step length, B is the batch size, and C is the number of classes. Remember to include the blank in the class count, or you will run into this problem again.

\n", + "system": "" } ] \ No newline at end of file