| title | category | posts | answered |
|---|---|---|---|
About the Site Feedback category | Site Feedback | [
{
"contents": "Discussion about this site, its organization, how it works, and how we can improve it.",
"isAccepted": false,
"likes": null,
"poster": "system"
},
{
"contents": "Is it possible to have more categories other than those specified? Similar to tags on stackoverflow where people ca... | false |
Pytorch is awesome! | null | [
{
"contents": "Can’t wait for the open source release!",
"isAccepted": false,
"likes": 7,
"poster": "ebetica"
}
] | false |
About the vision category | vision | [
{
"contents": "Topics related to either pytorch/vision or vision research related topics",
"isAccepted": false,
"likes": 1,
"poster": "smth"
},
{
"contents": "I created my model from scratch . I tried with batch_size 32 , it threw me this error. I tried with batch_size 1 , I got 320% accurac... | false |
About the reinforcement-learning category | reinforcement-learning | [
{
"contents": "A section to discuss RL implementations, research, problems",
"isAccepted": false,
"likes": 2,
"poster": "smth"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "lakehanne"
},
{
"contents": "I had the same problem, it only reached at 199 a... | false |
Request: Tutorial for deploying on cloud based virtual machine | null | [
{
"contents": "I don’t have a GPU computer. Is there a tutorial or best practices for using PyTorch on a cloud-based virtual machine with GPU capabilities?",
"isAccepted": false,
"likes": 1,
"poster": "rcmckee"
},
{
"contents": "PyTorch does not NEED GPUs to function. It works great on CPUs ... | false |
Convert/import Torch model to PyTorch | null | [
{
"contents": "Hi, Great library! I’d like to ask if it is possible to import a trained Torch model to PyTorch… Thanks",
"isAccepted": false,
"likes": 4,
"poster": "miliadis"
},
{
"contents": "As of today, you can deserialize Lua’s .t7 files into PyTorch containing Tensors, numbers, tables, ... | false |
Roadmap for torch and pytorch | null | [
{
"contents": "why did you choose to create this interface for python now, it did not seem a priority for the community (however I understand this will probably increase the adoption rate) ? as a torch user, I invested some time to learn how lua and torch operate, I guess it will operate the same way in the nea... | false |
But what about Hugh Perkins? | null | [
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "Jeremy_Godenz"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "smth"
}
] | false |
CFFI for interfacing C functions | null | [
{
"contents": "There are many benefits of CFFI, just to mention few: Abstract the Python version (CPython2, CPython3, PyPy). Better control over when and why the C compilation occurs, and more standard ways to write setuptools-based setup.py 3 files. Keep all the Python-related logic in Python so that you don’t... | false |
Import/Export models | null | [
{
"contents": "Meanwhile, do you have already any internal data structure to read the models definition?",
"isAccepted": false,
"likes": 2,
"poster": "edgarriba"
},
{
"contents": "We’re open to discussion about formats for sharing parameters and model definitions between libraries. If there’... | false |
Is new serialization format documented? | null | [
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "jsenellart-systran"
},
{
"contents": "I can write up the specs, but they’re much more complicated now. Serialized files are actually tars containing 4 other files - one listing tensors, one listing storages, one with storage da... | false |
Does it support Multi-GPU card on a single node? | null | [
{
"contents": "Hi, Now I am using keras with a simplified interface of data parallisim sync version of parallelism. Now I wish to know if pytorch has similar function, seems like PyTorch is trying to advocate in “Speed”, so it had better support Multi-GPU just for a single node.",
"isAccepted": false,
"... | false |
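The thread above asks about single-node data parallelism. A minimal sketch of the usual answer, `nn.DataParallel`, with a toy `nn.Linear` model (the module and batch sizes are illustrative, and the code falls back to CPU when no GPU is present):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
if torch.cuda.device_count() > 1:
    # Replicates the module on each GPU and scatters each batch across them.
    model = nn.DataParallel(model)
if torch.cuda.is_available():
    model = model.cuda()

x = torch.randn(4, 10)
if torch.cuda.is_available():
    x = x.cuda()
out = model(x)  # shape: (4, 2)
```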
Import nn module written in Lua? | null | [
{
"contents": "Thanks",
"isAccepted": false,
"likes": null,
"poster": "zzz"
},
{
"contents": "",
"isAccepted": false,
"likes": 2,
"poster": "smth"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "Eugenio_Culurciello"
},
{
"contents":... | false |
2017 and we still can’t support AMD hardware, why? | null | [
{
"contents": "Not all of us own NV hardware. Some of us want to use these tools at home without having to make a new purchase or purchase some AWS time. Not all of us are research scientists with fat grants allowing us to buy the latest NV hardware. When will the vendor lock-in stop? Surely “free” tools should... | false |
Package manager choice: conda vs. pip | null | [
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "Atcold"
},
{
"contents": "This is because conda manages dependencies that are not just python dependencies. For example with conda, we get tighter control over which BLAS is installed.",
"isAccepted": false,
"likes": 3,... | false |
How to do backward() for a net with multiple outputs? | null | [
{
"contents": "e.g., in Torch 7 I have a net with the last module an nn.ConcatTable, then I make the gradOutputs a table of tensors and do the net:backward(inputs, gradOutputs) How to do similar things with pytorch? I tried to backward() for each output, but it complained that backward should not be called mult... | false |
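The answer here is `torch.autograd.backward`, which accepts one gradient per output, mirroring Torch 7's `net:backward(inputs, gradOutputs)`. A sketch with toy tensors, using today's tensor API rather than the Variable-era one from the thread:

```python
import torch

x = torch.ones(3, requires_grad=True)
out1 = x * 2        # first "head"
out2 = x.pow(2)     # second "head"

# One gradOutput per output, accumulated into x.grad in a single pass.
torch.autograd.backward([out1, out2], [torch.ones(3), torch.ones(3)])
# d(out1)/dx = 2 and d(out2)/dx = 2x = 2, so each entry of x.grad is 4.
```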
Why cant I see .grad of an intermediate variable? | null | [
{
"contents": "Hi! I am loving this framework… Since Im a noob, I am probably not getting something, but I am wondering why I cant get the gradient of an intermediate variable with .grad? Here is an example of what I mean: <SCODE>xx = Variable(torch.randn(1,1), requires_grad = True)\nyy = 3*xx\nzz = yy**2\nzz.b... | false |
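Intermediate tensors do not keep `.grad` by default; only leaves do. `retain_grad()` postdates this thread (the era's answer was a hook), so here is a sketch of the thread's toy graph using today's API, with the final node summed so `backward()` gets a scalar:

```python
import torch

xx = torch.randn(1, 1, requires_grad=True)
yy = 3 * xx
yy.retain_grad()        # ask autograd to keep the intermediate's gradient
zz = (yy ** 2).sum()
zz.backward()
# d(zz)/d(yy) = 2 * yy, now visible as yy.grad
```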
Input preparation | null | [
{
"contents": "Hello all, For instance, the fallowing code results in error: <SCODE>(class model above)\n self.affine1(10, 100)\n(...)\n\nx = np.random.random(10)\nipt = torch.from_numpy(x)\nprobs = model(Variable(ipt))\n<ECODE> Then <SCODE>TypeError: addmm_ received an invalid combination of arguments - got... | false |
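The `addmm_` type error in the thread comes from a dtype mismatch: NumPy float64 arrays become DoubleTensors, while `nn` layers default to float32 weights. A sketch of the fix, with the layer sizes assumed for illustration:

```python
import numpy as np
import torch
import torch.nn as nn

model = nn.Linear(10, 100)
x = np.random.random(10)              # float64 on the NumPy side
ipt = torch.from_numpy(x).float()     # cast float64 -> float32
probs = model(ipt.unsqueeze(0))       # add a batch dimension: (1, 10)
```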
Help core dumped problem! | null | [
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "Chun_Li"
},
{
"contents": "Thank you. It seems to be specific to AMD Athlon CPU. I will take a look.",
"isAccepted": false,
"likes": null,
"poster": "smth"
},
{
"contents": "Thanks for the response. BTW, I forg... | false |
Model.cuda() takes long time | null | [
{
"contents": "Even for a small model, calling its cuda() method takes minutes to finish. Is it normal? If yes what it does behind the scene that takes so long? Thanks!",
"isAccepted": false,
"likes": 4,
"poster": "ming"
},
{
"contents": "We’re aware that there are problems with binary packa... | false |
How to transform Variable into numpy? | null | [
{
"contents": "When I used pytorch, I met a problem that I can not transform Variable into numpy. When I try to use torch.Tensor or transform it into torch.FloatTensor, but it’s fail. So how can I to solve this problem?",
"isAccepted": false,
"likes": 6,
"poster": "cumttang"
},
{
"contents":... | false |
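A sketch of the conversion: detach from the autograd graph (the modern spelling of the era's `var.data`), move to CPU if necessary, then call `.numpy()`:

```python
import torch

v = torch.randn(2, 3, requires_grad=True)
arr = v.detach().cpu().numpy()   # shares memory with the CPU tensor
```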
What’s the easiest way to clean gradients before saving a net? | null | [
{
"contents": "<SCODE>torch.save(net.state_dict(), 'net_name.pth')\n<ECODE> right?",
"isAccepted": false,
"likes": null,
"poster": "pengsun"
},
{
"contents": "<SCODE>torch.save(net.state_dict(), 'net_name.pth')\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "fmassa"
}... | false |
Load a saved model | null | [
{
"contents": "and how do you forward with this model file?",
"isAccepted": false,
"likes": 1,
"poster": "Eugenio_Culurciello"
},
{
"contents": "<SCODE>x = torch.rand(1, 3, 224, 224)\nxVar = torch.autograd.Variable(x)\nres18(xVar)\n\n--> \n\nVariable containing:\n-0.4374 -0.3994 -0.5249 ...... | false |
Copying nn.Modules without shared memory | null | [
{
"contents": "",
"isAccepted": false,
"likes": 4,
"poster": "lenny"
},
{
"contents": "no, what you need to do is to send them model to the new process, and then do: <SCODE>import copy\nmodel = copy.deepcopy(model)\n<ECODE> share_memory() only shares the memory ahead of time (in case you wan... | false |
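A sketch of the `copy.deepcopy` answer above: the clone gets independent storage, so mutating it leaves the original untouched:

```python
import copy
import torch.nn as nn

model = nn.Linear(4, 4)
clone = copy.deepcopy(model)     # nothing shared with the original
clone.weight.data.fill_(0.0)     # original weights are unaffected
```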
Why passing in tensor instead of Variable when backward? | null | [
{
"contents": "Hi, I just realized Variable.grad field is also a Variable, it thus seems more natural to pass gradients as a Variable when doing backward (currently it must be a tensor). Any reason for such an interface?",
"isAccepted": false,
"likes": 1,
"poster": "pengsun"
},
{
"contents":... | false |
Why cant I change the padding, and strides in Conv2d? | null | [
{
"contents": "Hi, perhaps I am missing something, but I cannot seem to figure out how to change the padding and stride amounts in the Conv2d method. Thanks",
"isAccepted": false,
"likes": null,
"poster": "Kalamaya"
},
{
"contents": "You can pass them as arguments to the Module constructor e... | false |
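As the answer says, stride and padding are constructor arguments. A sketch with assumed channel counts:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1)
out = conv(torch.randn(1, 3, 32, 32))
# spatial size: floor((32 + 2*1 - 3) / 2) + 1 = 16
```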
Asynchronous parameters updating? | reinforcement-learning | [
{
"contents": "<SCODE>-- in main thread: shared parameters\nparams, _ = sharedNet:getParameters()\n\n-- in worker thread: its own gradParameters\ntNet = sharedNet:clone()\n_, gradParams = tNet:getParameters()\n\n-- in worker thread: stuff\n\n-- in worker thread: updating shared parameters with its own gradParam... | false |
How to extract features of an image from a trained model | null | [
{
"contents": "Hi all,",
"isAccepted": false,
"likes": 19,
"poster": "big_tree"
},
{
"contents": "<SCODE>torchvision.models.resnet18(pretrained=True)\n<ECODE>",
"isAccepted": false,
"likes": 7,
"poster": "apaszke"
},
{
"contents": "You can either reconstruct the classifie... | false |
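The usual recipe is to chop the classifier head off a trained model. A sketch with a stand-in `nn.Sequential` (a pretrained torchvision resnet would be sliced the same way via `list(model.children())[:-1]`):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(),
    nn.Linear(16, 4),            # stand-in "classifier" head
)
# Drop the last child to expose the penultimate features.
feature_extractor = nn.Sequential(*list(model.children())[:-1])
feats = feature_extractor(torch.randn(2, 8))
```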
Broadcasting? Or alternative solutions | null | [
{
"contents": "Hi, I really like the library so far. However, I was wondering if broadcasting is on the roadmap, and what would be your current suggestion for work-arounds? <SCODE>x = Variable(torch.Tensor([[1.0, 1.0], \n [2.0, 2.0], \n [3.0, 3.0], \n ... | false |
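Broadcasting did land later: modern torch broadcasts like NumPy, and the era's workaround was an explicit `expand_as`. A sketch of both spellings on toy data:

```python
import torch

x = torch.tensor([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
w = torch.tensor([2.0, 0.5])

y = x * w                                  # implicit NumPy-style broadcast
y_old = x * w.unsqueeze(0).expand_as(x)    # the era's explicit workaround
```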
Understanding how torch.nn.Module works | null | [
{
"contents": "Essentially, I want to reproduce the results I get when I do it “manually:” <SCODE>from torch.autograd import Variable\nimport torch\n\n\nx = Variable(torch.Tensor([[1.0, 1.0], \n [1.0, 2.1], \n [1.0, 3.6], \n [1.0, 4.2... | false |
Concatenate two tensor for each timestep | null | [
{
"contents": "<SCODE>def forward(...):\n ...\n z = Variable(torch.zeros(y.size(0), y.size(1), 1024))\n for i in range(z.size(0)):\n z[i] = torch.cat((y[i], x), 1)\n\n z = self.decoderGRU(z, init_decoder_GRU)\n decoded = self.classLinear(z.view(z.size(0)*z.size(1), ... | false |
Model zoo categories | null | [
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "Eugenio_Culurciello"
},
{
"contents": "i’m not sure i understand your question",
"isAccepted": false,
"likes": null,
"poster": "smth"
},
{
"contents": "the list of category names that correspond to each of t... | false |
Are tables like concattable or paralleltable present in torch | null | [
{
"contents": "Are there utilities like concattable, paralleltable in pytorch??",
"isAccepted": false,
"likes": 1,
"poster": "gsp-27"
},
{
"contents": "No, but you can easily achieve that using autograd. e.g. for concat table: <SCODE>class MyModel(nn.Module):\n def __init__(self):\n ... | false |
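A sketch of the autograd-based ConcatTable replacement the answer describes: apply two branches to one input and return both results (branch sizes are illustrative):

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch_a = nn.Linear(4, 3)
        self.branch_b = nn.Linear(4, 5)

    def forward(self, x):
        # ConcatTable-style: one input, a tuple of outputs.
        return self.branch_a(x), self.branch_b(x)

a, b = MyModel()(torch.randn(2, 4))
```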
Non-legacy View module? | null | [
{
"contents": "There doesn’t currently appear to be any non-legacy View module in pytorch’s torch.nn module. Any reason for this? While obviously not essential, it’s convenient when porting over existing Torch networks. Would be happy to submit a PR.",
"isAccepted": false,
"likes": 1,
"poster": "leb... | false |
“wheel not supported” for pip install | null | [
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "eulerreich"
},
{
"contents": "Not sure if that’s a problem in your case, but I had a similar issue (or a similar message) when I accidentally tried to install a package into a wrong environment (a py 3.5 wheel into a 3.6 env). ... | false |
JoinTable like operation when combining CNNs | null | [
{
"contents": "<SCODE>class Net(nn.Module):\ndef __init__(self):\n super(Net, self).__init__()\n self.embed = nn.Embedding(vocab.size(), 300)\n #self.embed.weight = Parameter( torch.from_numpy(vocab.get_weights().astype(np.float32))) \n self.conv_3 = nn.Conv2d(1, 50, kernel_size=(3, 300),stri... | false |
Inferring shape via flatten operator | null | [
{
"contents": "Is there a flatten-like operator to calculate the shape of a layer output. An example would be transitioning from a conv layer to linear layer. In all the examples I’ve seen thus far this seems to be manually calculated, ex: <SCODE>class Net(nn.Module):\n def __init__(self):\n super(Net... | true |
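The idiomatic answer is to flatten everything but the batch dimension with `view(batch, -1)`, so the following `Linear` layer's input size follows from the data rather than hand arithmetic. A sketch with an assumed conv-output shape:

```python
import torch

x = torch.randn(8, 16, 5, 5)       # e.g. a conv layer's output
flat = x.view(x.size(0), -1)       # (8, 16*5*5) = (8, 400)
```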
Simple L2 regularization? | null | [
{
"contents": "Hi, does simple L2 / L1 regularization exist in pyTorch? I did not see anything like that in the losses. I guess the way we could do it is simply have the data_loss + reg_loss computed, (I guess using nn.MSEloss for the L2), but is an explicit way we can use it without doing it this way? Thanks",... | false |
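L2 regularization is built into the optimizers as `weight_decay`; L1 has to be added to the loss by hand, as the thread suspects. A sketch of both, with an illustrative model and coefficient:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)  # L2

# Manual L1 term, to be added to the data loss before backward().
l1_penalty = sum(p.abs().sum() for p in model.parameters())
```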
Problems with target arrays of int (int32) types in loss functions | null | [
{
"contents": "(On a side note, would you recommend using doubles and longs over floats and ints performance-wise?) <SCODE>TypeError: FloatClassNLLCriterion_updateOutput \nreceived an invalid combination of arguments - \ngot (int, torch.FloatTensor, torch.IntTensor, torch.FloatTensor, bool, NoneType, torch.Floa... | false |
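The `FloatClassNLLCriterion` error comes from the target dtype: classification criteria expect LongTensor targets, so an int32 array needs an explicit cast. A sketch:

```python
import torch
import torch.nn as nn

logits = torch.randn(4, 3)
target = torch.tensor([0, 2, 1, 0], dtype=torch.int32)  # int32, as in the thread
loss = nn.CrossEntropyLoss()(logits, target.long())     # cast to int64
```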
Why use nn.Conv2D over nn.functional.conv2d? | null | [
{
"contents": "Hi, I stumbled upon this question because I am trying to have control over how my convolutional weights are initialized. The API for both of those however seems different. For the former: torch.nn.functional.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) For the latte... | false |
Are GPU and CPU random seeds independent? | null | [
{
"contents": "<SCODE>torch.manual_seed(args.seed)\nif args.cuda:\n torch.cuda.manual_seed(args.seed)\n<ECODE> torch.manual_seed(args.seed) sufficient?",
"isAccepted": false,
"likes": 3,
"poster": "rasbt"
},
{
"contents": "It is sufficient for CPU determinism, but it won’t affect the GPU ... | false |
Equivalent of np.reshape() in pyTorch? | null | [
{
"contents": "",
"isAccepted": false,
"likes": 4,
"poster": "Kalamaya"
},
{
"contents": "<SCODE>>>> import torch\n>>> t = torch.ones((2, 3, 4))\n>>> t.size()\ntorch.Size([2, 3, 4])\n>>> t.view(-1, 12).size()\ntorch.Size([2, 12])<ECODE>",
"isAccepted": false,
"likes": 12,
"poster... | false |
Visual watcher when training/evaluating or tensorboard equivalence? | null | [
{
"contents": "Hi, As leveraging python ecosystem seems one of the main reasons for switching to pytorch from Torch 7, would you provide guidance to visually overseeing training/evaluating with 3rd party Python libraries that is hard to do in pure Torch 7 ? For example,",
"isAccepted": false,
"likes": n... | false |
Variable Can not compare with other parameter? | null | [
{
"contents": "When I try to find out some Variable result who is the biggest. However, I found that “Variable” can not compare with other parameters, such as ndarray or FloatTensor. Thus, is there any thing wrong with Variable, or what should I do?",
"isAccepted": false,
"likes": null,
"poster": "c... | false |
Parallelism via CPU | null | [
{
"contents": "<SCODE>sess = tf.Session(config=tf.ConfigProto(\n intra_op_parallelism_threads=NUM_THREADS))<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "rasbt"
},
{
"contents": "Hi,",
"isAccepted": false,
"likes": 2,
"poster": "fmassa"
},
{
"contents": "Sounds... | false |
Cannot install via pip on LINUX? | null | [
{
"contents": "I am getting the following error when I try to install on my 64-bit Ubuntu 14.04 LTS system. The command I run is: And the error I get is: After that it says: For additional context, I have made a virtual environment (called pytorch) first, and then within that environment, I tried to run the ins... | false |
Weight initialization | null | [
{
"contents": "How I can initialize variables, say Xavier initialization?",
"isAccepted": false,
"likes": 23,
"poster": "Hamid"
},
{
"contents": "",
"isAccepted": false,
"likes": 6,
"poster": "Atcold"
},
{
"contents": "",
"isAccepted": false,
"likes": 12,
"pos... | false |
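A sketch of per-module Xavier initialization via `torch.nn.init` and `Module.apply` (the layer sizes are illustrative):

```python
import torch.nn as nn

def init_weights(m):
    # Applied to every submodule; only touch the Linear layers.
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
net.apply(init_weights)
```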
Cuda() failed with “The NVIDIA driver on your system is too old” | null | [
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "rohjunha"
},
{
"contents": "That actually sounds like a docker issue. When PyTorch runs, we check with the CUDA driver API if the driver is sufficient, and we report back what it says… Are you using NVIDIA-docker? because you n... | false |
High memory usage while training | null | [
{
"contents": "<SCODE>class testNet(nn.Module):\n def __init__(self):\n super(testNet, self).__init__()\n self.rnn = nn.RNN(input_size=200, hidden_size=1000, num_layers=1)\n self.linear = nn.Linear(1000, 100)\n\n def forward(self, x, init):\n x = self.rnn(x, init)[0]\n y... | false |
Train Neural network problem in reinforcement learning | null | [
{
"contents": "<SCODE>self.value_optimizer.zero_grad()\npreValue = self.value_net(state).data\nnextValue = self.value_net(next_state).data\nexpectedValue = (self.gamma * nextValue) + reward\n\npreValue = Variable(preValue)\nexpectedValue = Variable(expectedValue)\nloss = F.smooth_l1_loss(preValue, expectedValue... | false |
How to install PyTorch such that can use GPUs | null | [
{
"contents": "EDIT: I use net.cuda() and this seems to work. However when I try to run my script I get this error: (If I do not try to run net.cuda() my script works find on the CPU).",
"isAccepted": false,
"likes": null,
"poster": "Kalamaya"
},
{
"contents": "",
"isAccepted": false,
... | false |
Serving Model Trained with PyTorch | null | [
{
"contents": "Tensorflow has Tensorflow Serving. I know pytorch is a framework in its early stages, but how do people serve models trained with pytorch. Must it be from Python? I’m specifically looking to serve from C++.",
"isAccepted": false,
"likes": 4,
"poster": "abc"
},
{
"contents": "W... | false |
Check `gradInput`s | null | [
{
"contents": "<SCODE>net = Net()\nh = net(x)\nJ = loss(h, y)\nJ.backward()\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "Atcold"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "apaszke"
},
{
"contents": "",
"isAccepted": false,
"likes... | false |
Sharing optimizer between processes | null | [
{
"contents": "I am wondering if it is possible to share optimizer between different threads. To be specific, when optimizer.step() is applied, modified state of the optimizer should be available for all processes.",
"isAccepted": false,
"likes": null,
"poster": "delgado"
},
{
"contents": "i... | false |
RuntimeError: expected a Variable argument, but got function | null | [
{
"contents": "Hello, I get this weird error while running my code. <SCODE>class Residual(nn.Module):\n\tdef __init__(self,dropout, shape, negative_slope, BNflag = False):\n\t\tsuper(Residual, self).__init__()\n\t\tself.dropout = dropout\n\t\tself.linear1 = nn.Linear(shape[0],shape[1])\n\t\tself.linear2 = nn.Li... | false |
How to check if Model is on cuda | null | [
{
"contents": "",
"isAccepted": true,
"likes": 11,
"poster": "napsternxg"
},
{
"contents": "As replied on the github issues, an easy way is: <SCODE>next(model.parameters()).is_cuda # returns a boolean\n<ECODE>",
"isAccepted": true,
"likes": 56,
"poster": "smth"
},
{
"cont... | true |
Dropout functional API, advantages/disadvantages? | null | [
{
"contents": "I saw in one of the examples that the functional API was used to implement dropout for a conv layer but not for the fully connected layer. Was wondering if this has a specific reason? I pasted the code below <SCODE>class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__(... | false |
RNN for sequence prediction | null | [
{
"contents": "Hello, Previously I used keras for CNN and so I am a newbie on both PyTorch and RNN. In keras you can write a script for an RNN for sequence prediction like, <SCODE>in_out_neurons = 1\nhidden_neurons = 300\n\nmodel = Sequential() \nmodel.add(LSTM(hidden_neurons, batch_input_shape=(None, length_o... | false |
How to get the graph of your DNN drawn? | null | [
{
"contents": "Hi all, I am wondering if there is an easy plug-and-play way in order for me to have a nice visualization of my DNN that I have designed? Im after something simple to be honest, just something where I can see the various layers connected to each other to get a sense of what I have coded. Thanks",... | false |
Why is input used as var name in examples and pytorch convention! | null | [
{
"contents": "A discussion is needed for adopting common python specific convention to pytorch.",
"isAccepted": false,
"likes": null,
"poster": "napsternxg"
},
{
"contents": "I know that many people will consider that a “bad practice”, but I seriously don’t think that giving up readability ... | false |
Data set loader questions | null | [
{
"contents": "I have downloaded and split the CIFAR 10 data using the given torch.datasets api, now I want to separate out training and validate data, say I want out 50000 sample 49000 as training example and 1000 as validation examples, how do I achieve this?? Also say I keep the batch size as 64, then in the... | false |
Using previous output values as part of a loss function | reinforcement-learning | [
{
"contents": "I am trying to implement TRPO, and I need the gradient of the network parameters w.r.t. the KL divergence between the current action distribution and the action distribution after the parameters are changed. What is the best way to implement this? How do I make sure action_distribution0 doesn’t g... | false |
Proper way to do gradient clipping? | null | [
{
"contents": "Is there a proper way to do gradient clipping, for example, with Adam? It seems like that the value of Variable.data.grad should be manipulated (clipped) before calling optimizer.step() method. I think the value of Variable.data.grad can be modified in-place to do gradient clipping. Is it safe to... | false |
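The now-standard pattern is to clip between `backward()` and `optimizer.step()`; `clip_grad_norm_` rescales all gradients in place so their total norm is at most `max_norm`. A sketch with a toy model and a deliberately huge loss so clipping actually triggers:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
opt = torch.optim.Adam(model.parameters())

loss = model(torch.randn(4, 10)).pow(2).mean() * 1e6  # huge on purpose
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
opt.step()

# Total gradient norm after clipping.
total = torch.sqrt(sum(p.grad.pow(2).sum() for p in model.parameters()))
```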
Text classification unexpectedly slow | null | [
{
"contents": "I wrote a simple demo for short text classification but the speed is unexpectedly slow. When I tried to find out where the bottleneck is, it turns out to be intractable. At first, the bottleneck is this line: However, after commenting out the above line, it slows down at these lines in get_batch(... | false |
Usage of Numpy and gradients | null | [
{
"contents": "Hi, I was wondering if gradients are automatically computed if Numpy is in the Forward() function of nn.module, i.e, torch tensors are converted to numpy arrays, a numpy op is applied and then we convert it back to torch tensors. Is there any implication of doing so? Thanks!",
"isAccepted": f... | false |
Matrix-vector multiply (handling batched data) | null | [
{
"contents": "<SCODE># (batch x inp)\nv = torch.randn(5, 15)\n# (inp x output)\nM = torch.randn(15, 20)\n<ECODE> Compute: <SCODE># (batch x output)\nout = torch.Tensor(5, 20)\nfor i, batch_v in enumerate(v):\n out[i] = (batch_v * M).t()\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "e... | false |
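The per-row loop in the thread can be replaced by a single matrix multiply, since the batch dimension is just the row dimension of `v`. A sketch:

```python
import torch

v = torch.randn(5, 15)    # (batch, inp)
M = torch.randn(15, 20)   # (inp, out)
out = v @ M               # (batch, out); equivalently torch.mm(v, M)
```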
Second order derivatives? | null | [
{
"contents": "and I need the gradients of the network parameters w.r.t (flat_grad * x) In the process of flattening the gradients, I had to convert everything into a numpy array, which broke the backprop chain. How can I solve this problem?",
"isAccepted": false,
"likes": null,
"poster": "yoonholee... | false |
Implementing Squeezenet | null | [
{
"contents": "Hi, I am trying to implement Squeezenet and train it on Cifar10 data, I have got the code ready but there seems to be some problem, my training set accuracy never increases though the loss function graph makes sense. <SCODE>class fire(nn.Module):\ndef __init__(self, inplanes, squeeze_planes, expa... | false |
Different results with same input | null | [
{
"contents": "Hi, I have this output : EDIT : I have the same problem if I do two forward pass with a batch of 1",
"isAccepted": false,
"likes": null,
"poster": "maxgreat"
},
{
"contents": "",
"isAccepted": false,
"likes": 3,
"poster": "apaszke"
}
] | false |
Titan X Maxwell (old) issues | null | [
{
"contents": "I can confirm we can run Torch7 on those machines with no issues. Also on the new Pascal Titan X, we do not have this problem. Maybe an issues with driver? Anyone else has the same issue?",
"isAccepted": false,
"likes": null,
"poster": "Eugenio_Culurciello"
},
{
"contents": "W... | false |
Example on how to use batch-norm? | null | [
{
"contents": "TLDR: What exact size should I give the batch_norm layer here if I want to apply it to a CNN? output? In what format? I have a two-fold question: Is this the correct intended usage? Maybe an example of the syntax for it’s usage with a CNN? Thanks.",
"isAccepted": false,
"likes": 4,
"p... | false |
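For a conv output of shape `(N, C, H, W)`, `BatchNorm2d` takes the channel count `C` and normalizes each channel over the batch and spatial dimensions. A sketch with assumed sizes:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(16)            # C = 16 channels
x = torch.randn(8, 16, 10, 10)     # (N, C, H, W)
y = bn(x)                          # same shape, per-channel mean near 0
```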
Adding a scalar? | null | [
{
"contents": "Dumb question, but how do I make a scalar Variable? I’d like to add a trainable parameter to a vector, but I keep getting size-mismatch problems. <SCODE># Works, but I can't make the 2 a \n# parameter, so I can't do gradient descent on it\nVariable(torch.from_numpy(np.ones(5))) + 2 \n<ECODE> Than... | true |
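The trainable-scalar answer is to wrap it in `nn.Parameter`; a 0-dim tensor then broadcasts over the vector without any size gymnastics. A sketch using today's API:

```python
import torch
import torch.nn as nn

offset = nn.Parameter(torch.tensor(2.0))  # trainable scalar
v = torch.ones(5)
out = v + offset                          # scalar broadcasts over the vector
out.sum().backward()                      # d(sum)/d(offset) = 5
```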
List of nn.Module in a nn.Module | null | [
{
"contents": "<SCODE>class testNet(nn.Module):\n def __init__(self, input_dim, hidden_dim, step=1):\n super(testNet, self).__init__()\n self.linear = nn.Linear(100, 100) #dummy module\n self.linear_combines1 = []\n self.linear_combines2 = []\n for i in range(step):\n ... | false |
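The fix for the thread's plain Python lists is `nn.ModuleList`, which registers the submodules so they appear in `parameters()`, `cuda()`, and `state_dict()`. A condensed sketch of the pattern:

```python
import torch.nn as nn

class TestNet(nn.Module):
    def __init__(self, step=3):
        super().__init__()
        # A plain list here would hide these layers from parameters().
        self.combines = nn.ModuleList(nn.Linear(4, 4) for _ in range(step))

net = TestNet()
# 3 Linear layers x (weight, bias) = 6 registered parameters
```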
When will the conda package be updated? | null | [
{
"contents": "I found some features like upsampling_nearest are in the github but not in the conda package. Is there a timeline when the conda package will be updated?",
"isAccepted": false,
"likes": null,
"poster": "junonia"
},
{
"contents": "the conda package was updated yesterday eveni... | false |
Read data from the GPU? | null | [
{
"contents": "Thanks!",
"isAccepted": false,
"likes": null,
"poster": "Kalamaya"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "tomsercu"
}
] | false |
Understanding graphs and state | null | [
{
"contents": "There are two questions remaining - the second question is more important. 1) No guarantee that second backward will fail? <SCODE>x = Variable(torch.ones(2,3), requires_grad=True)\ny = x.mean(dim=1).squeeze() + 3 # size (2,)\nz = y.pow(2).mean() # size 1\ny.backward(torch.ones(2))\nz.backward() #... | false |
Help clarifying repackage_hidden in word_language_model | null | [
{
"contents": "<SCODE>def repackage_hidden(h):\n \"\"\"Wraps hidden states in new Variables, to detach them from their history.\"\"\"\n if type(h) == Variable:\n return Variable(h.data)\n else:\n return tuple(repackage_hidden(v) for v in h)\n<ECODE> I dont think I fully understand what th... | false |
CUDA out of memory error | null | [
{
"contents": "I ran my net with a large minibatch on the GPU with problem, and then ctrl-c out of it. However when I try to re-run my script I get: RuntimeError: cuda runtime error (2) : out of memory at /home/soumith/local/builder/wheel/pytorch-src/torch/lib/THC/generic/THCStorage.cu:66 Nothing changed and it... | false |